Module: tf.compat.v1.config.threading Public API for tf.config.threading namespace. Functions get_inter_op_parallelism_threads(...): Get number of threads used for parallelism between independent operations. get_intra_op_parallelism_threads(...): Get number of threads used within an individual op for parallelism. set_inter_op_parallelism_threads(...): Set number of threads used for parallelism between independent operations. set_intra_op_parallelism_threads(...): Set number of threads used within an individual op for parallelism. | tensorflow.compat.v1.config.threading |
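For example, a minimal sketch of exercising these functions (the thread counts here are illustrative; the setters must run at program startup, before TensorFlow initializes its thread pools):
import tensorflow.compat.v1 as tf

# Configure parallelism before any ops execute.
tf.config.threading.set_inter_op_parallelism_threads(2)
tf.config.threading.set_intra_op_parallelism_threads(4)
print(tf.config.threading.get_inter_op_parallelism_threads())  # 2
print(tf.config.threading.get_intra_op_parallelism_threads())  # 4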
tf.compat.v1.ConfigProto A ProtocolMessage
Attributes
allow_soft_placement bool allow_soft_placement
cluster_def ClusterDef cluster_def
device_count repeated DeviceCountEntry device_count
device_filters repeated string device_filters
experimental Experimental experimental
gpu_options GPUOptions gpu_options
graph_options GraphOptions graph_options
inter_op_parallelism_threads int32 inter_op_parallelism_threads
intra_op_parallelism_threads int32 intra_op_parallelism_threads
isolate_session_state bool isolate_session_state
log_device_placement bool log_device_placement
operation_timeout_in_ms int64 operation_timeout_in_ms
placement_period int32 placement_period
rpc_options RPCOptions rpc_options
session_inter_op_thread_pool repeated ThreadPoolOptionProto session_inter_op_thread_pool
share_cluster_devices_in_session bool share_cluster_devices_in_session
use_per_session_threads bool use_per_session_threads Child Classes class DeviceCountEntry class Experimental | tensorflow.compat.v1.configproto |
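As a sketch, a ConfigProto is typically constructed with keyword arguments for the fields above and passed to a TF1 Session (the field values here are illustrative):
import tensorflow.compat.v1 as tf

config = tf.ConfigProto(
    allow_soft_placement=True,       # fall back to a supported device if needed
    log_device_placement=False,
    inter_op_parallelism_threads=2,
    intra_op_parallelism_threads=4)
config.gpu_options.allow_growth = True  # grow GPU memory allocation on demand

with tf.Session(config=config) as sess:
  pass  # run graph ops under this configuration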
tf.compat.v1.ConfigProto.DeviceCountEntry A ProtocolMessage
Attributes
key string key
value int32 value | tensorflow.compat.v1.configproto.devicecountentry |
tf.compat.v1.ConfigProto.Experimental A ProtocolMessage
Attributes
collective_deterministic_sequential_execution bool collective_deterministic_sequential_execution
collective_group_leader string collective_group_leader
collective_nccl bool collective_nccl
disable_output_partition_graphs bool disable_output_partition_graphs
disable_thread_spinning bool disable_thread_spinning
enable_mlir_bridge bool enable_mlir_bridge
enable_mlir_graph_optimization bool enable_mlir_graph_optimization
executor_type string executor_type
mlir_bridge_rollout MlirBridgeRollout mlir_bridge_rollout
optimize_for_static_graph bool optimize_for_static_graph
recv_buf_max_chunk int32 recv_buf_max_chunk
session_metadata SessionMetadata session_metadata
share_cluster_devices_in_session bool share_cluster_devices_in_session
share_session_state_in_clusterspec_propagation bool share_session_state_in_clusterspec_propagation
use_numa_affinity bool use_numa_affinity
xla_fusion_autotuner_thresh int64 xla_fusion_autotuner_thresh
Class Variables
MLIR_BRIDGE_ROLLOUT_DISABLED 2
MLIR_BRIDGE_ROLLOUT_ENABLED 1
MLIR_BRIDGE_ROLLOUT_UNSPECIFIED 0
MlirBridgeRollout | tensorflow.compat.v1.configproto.experimental |
tf.compat.v1.confusion_matrix Computes the confusion matrix from predictions and labels. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.confusion_matrix
tf.compat.v1.confusion_matrix(
labels, predictions, num_classes=None, dtype=tf.dtypes.int32, name=None,
weights=None
)
The matrix columns represent the prediction labels and the rows represent the real labels. The confusion matrix is always a 2-D array of shape [n, n], where n is the number of valid labels for a given classification task. Both predictions and labels must be 1-D arrays of the same shape for this function to work. If num_classes is None, then num_classes will be set to one plus the maximum value in either predictions or labels. Class labels are expected to start at 0. For example, if num_classes is 3, then the possible labels would be [0, 1, 2]. If weights is not None, then each prediction contributes its corresponding weight to the total value of the confusion matrix cell. For example:
tf.math.confusion_matrix([1, 2, 4], [2, 2, 4]) ==>
[[0 0 0 0 0]
[0 0 1 0 0]
[0 0 1 0 0]
[0 0 0 0 0]
[0 0 0 0 1]]
Note that the possible labels are assumed to be [0, 1, 2, 3, 4], resulting in a 5x5 confusion matrix.
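Extending the example above with weights (the weight values are illustrative), each prediction adds its weight, rather than 1, to its cell:
tf.math.confusion_matrix([1, 2, 4], [2, 2, 4], weights=[1, 2, 3]) ==>
[[0 0 0 0 0]
 [0 0 1 0 0]
 [0 0 2 0 0]
 [0 0 0 0 0]
 [0 0 0 0 3]]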
Args
labels 1-D Tensor of real labels for the classification task.
predictions 1-D Tensor of predictions for a given classification.
num_classes The possible number of labels the classification task can have. If this value is not provided, it will be calculated using both predictions and labels array.
dtype Data type of the confusion matrix.
name Scope name.
weights An optional Tensor whose shape matches predictions.
Returns A Tensor of type dtype with shape [n, n] representing the confusion matrix, where n is the number of possible labels in the classification task.
Raises
ValueError If predictions and labels are not 1-D vectors of the same shape, or if weights is not None and its shape doesn't match predictions. | tensorflow.compat.v1.confusion_matrix |
tf.compat.v1.constant Creates a constant tensor.
tf.compat.v1.constant(
value, dtype=None, shape=None, name='Const', verify_shape=False
)
The resulting tensor is populated with values of type dtype, as specified by arguments value and (optionally) shape (see examples below). The argument value can be a constant value, or a list of values of type dtype. If value is a list, then the length of the list must be less than or equal to the number of elements implied by the shape argument (if specified). In the case where the list length is less than the number of elements specified by shape, the last element in the list will be used to fill the remaining entries. The argument shape is optional. If present, it specifies the dimensions of the resulting tensor. If not present, the shape of value is used. If the argument dtype is not specified, then the type is inferred from the type of value. For example:
# Constant 1-D Tensor populated with value list.
tensor = tf.constant([1, 2, 3, 4, 5, 6, 7]) => [1 2 3 4 5 6 7]
# Constant 2-D tensor populated with scalar value -1.
tensor = tf.constant(-1.0, shape=[2, 3]) => [[-1. -1. -1.]
[-1. -1. -1.]]
tf.constant differs from tf.fill in a few ways:
tf.constant supports arbitrary constants, not just uniform scalar Tensors like tf.fill.
tf.constant creates a Const node in the computation graph with the exact value at graph construction time. On the other hand, tf.fill creates an Op in the graph that is expanded at runtime. Because tf.constant only embeds constant values in the graph, it does not support dynamic shapes based on other runtime Tensors, whereas tf.fill does.
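A small sketch of this difference, assuming TF1 graph mode (the placeholder is illustrative):
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

a = tf.constant(-1.0, shape=[2, 3])  # value embedded as a Const node at graph construction
dims = tf.placeholder(tf.int32, shape=[2])
b = tf.fill(dims, -1.0)  # OK: tf.fill accepts a shape computed at runtime
# tf.constant(-1.0, shape=dims) would fail: its shape must be static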
Args
value A constant value (or list) of output type dtype.
dtype The type of the elements of the resulting tensor.
shape Optional dimensions of resulting tensor.
name Optional name for the tensor.
verify_shape Boolean that enables verification of a shape of values.
Returns A Constant Tensor.
Raises
TypeError if shape is incorrectly specified or unsupported. | tensorflow.compat.v1.constant |
tf.compat.v1.container Wrapper for Graph.container() using the default graph.
tf.compat.v1.container(
container_name
)
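For example, a minimal TF1 graph-mode sketch (the container name is illustrative):
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

with tf.container("experiment0"):
  # Stateful ops created here are placed in container "experiment0".
  v = tf.get_variable("v", shape=[], initializer=tf.zeros_initializer())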
Args
container_name The container string to use in the context.
Returns A context manager that specifies the default container to use for newly created stateful ops. | tensorflow.compat.v1.container |
tf.compat.v1.control_flow_v2_enabled Returns True if v2 control flow is enabled.
tf.compat.v1.control_flow_v2_enabled()
Note: v2 control flow is always enabled inside of tf.function. | tensorflow.compat.v1.control_flow_v2_enabled |
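A short sketch (in TF2, v2 control flow is enabled by default):
import tensorflow.compat.v1 as tf

tf.enable_control_flow_v2()  # explicit opt-in; a no-op where already enabled
print(tf.control_flow_v2_enabled())  # True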
tf.compat.v1.convert_to_tensor Converts the given value to a Tensor.
tf.compat.v1.convert_to_tensor(
value, dtype=None, name=None, preferred_dtype=None, dtype_hint=None
)
This function converts Python objects of various types to Tensor objects. It accepts Tensor objects, numpy arrays, Python lists, and Python scalars. For example:
import numpy as np

def my_func(arg):
  arg = tf.convert_to_tensor(arg, dtype=tf.float32)
  return tf.matmul(arg, arg) + arg
# The following calls are equivalent.
value_1 = my_func(tf.constant([[1.0, 2.0], [3.0, 4.0]]))
value_2 = my_func([[1.0, 2.0], [3.0, 4.0]])
value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32))
This function can be useful when composing a new operation in Python (such as my_func in the example above). All standard Python op constructors apply this function to each of their Tensor-valued inputs, which allows those ops to accept numpy arrays, Python lists, and scalars in addition to Tensor objects.
Note: This function diverges from default Numpy behavior for float and string types when None is present in a Python list or scalar. Rather than silently converting None values, an error will be thrown.
Args
value An object whose type has a registered Tensor conversion function.
dtype Optional element type for the returned tensor. If missing, the type is inferred from the type of value.
name Optional name to use if a new Tensor is created.
preferred_dtype Optional element type for the returned tensor, used when dtype is None. In some cases, a caller may not have a dtype in mind when converting to a tensor, so preferred_dtype can be used as a soft preference. If the conversion to preferred_dtype is not possible, this argument has no effect.
dtype_hint same meaning as preferred_dtype, and overrides it.
Returns A Tensor based on value.
Raises
TypeError If no conversion function is registered for value to dtype.
RuntimeError If a registered conversion function returns an invalid value.
ValueError If the value is a tensor not of given dtype in graph mode. | tensorflow.compat.v1.convert_to_tensor |
tf.compat.v1.convert_to_tensor_or_indexed_slices Converts the given object to a Tensor or an IndexedSlices.
tf.compat.v1.convert_to_tensor_or_indexed_slices(
value, dtype=None, name=None
)
If value is an IndexedSlices or SparseTensor it is returned unmodified. Otherwise, it is converted to a Tensor using convert_to_tensor().
Args
value An IndexedSlices, SparseTensor, or an object that can be consumed by convert_to_tensor().
dtype (Optional.) The required DType of the returned Tensor or IndexedSlices.
name (Optional.) A name to use if a new Tensor is created.
Returns A Tensor, IndexedSlices, or SparseTensor based on value.
Raises
ValueError If dtype does not match the element type of value. | tensorflow.compat.v1.convert_to_tensor_or_indexed_slices |
tf.compat.v1.convert_to_tensor_or_sparse_tensor Converts value to a SparseTensor or Tensor.
tf.compat.v1.convert_to_tensor_or_sparse_tensor(
value, dtype=None, name=None
)
Args
value A SparseTensor, SparseTensorValue, or an object whose type has a registered Tensor conversion function.
dtype Optional element type for the returned tensor. If missing, the type is inferred from the type of value.
name Optional name to use if a new Tensor is created.
Returns A SparseTensor or Tensor based on value.
Raises
RuntimeError If result type is incompatible with dtype. | tensorflow.compat.v1.convert_to_tensor_or_sparse_tensor |
tf.compat.v1.count_nonzero Computes number of nonzero elements across dimensions of a tensor. (deprecated arguments) (deprecated arguments) View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.count_nonzero
tf.compat.v1.count_nonzero(
input_tensor=None, axis=None, keepdims=None, dtype=tf.dtypes.int64, name=None,
reduction_indices=None, keep_dims=None, input=None
)
Warning: SOME ARGUMENTS ARE DEPRECATED: (keep_dims). They will be removed in a future version. Instructions for updating: keep_dims is deprecated, use keepdims instead. Warning: SOME ARGUMENTS ARE DEPRECATED: (reduction_indices). They will be removed in a future version. Instructions for updating: reduction_indices is deprecated, use axis instead. Reduces input_tensor along the dimensions given in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keepdims is true, the reduced dimensions are retained with length 1. If axis has no entries, all dimensions are reduced, and a tensor with a single element is returned.
Note: Floating point comparison to zero is done by exact floating point equality check. Small values are not rounded to zero for purposes of the nonzero check.
For example: x = tf.constant([[0, 1, 0], [1, 1, 0]])
tf.math.count_nonzero(x) # 3
tf.math.count_nonzero(x, 0) # [1, 2, 0]
tf.math.count_nonzero(x, 1) # [1, 2]
tf.math.count_nonzero(x, 1, keepdims=True) # [[1], [2]]
tf.math.count_nonzero(x, [0, 1]) # 3
Note: Strings are compared against zero-length empty string "". Any string with a size greater than zero is already considered as nonzero.
For example: x = tf.constant(["", "a", " ", "b", ""])
tf.math.count_nonzero(x) # 3, with "a", " ", and "b" as nonzero strings.
Args
input_tensor The tensor to reduce. Should be of numeric type, bool, or string.
axis The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)).
keepdims If true, retains reduced dimensions with length 1.
dtype The output dtype; defaults to tf.int64.
name A name for the operation (optional).
reduction_indices The old (deprecated) name for axis.
keep_dims Deprecated alias for keepdims.
input Overrides input_tensor. For compatibility.
Returns The reduced tensor (number of nonzero values). | tensorflow.compat.v1.count_nonzero |
tf.compat.v1.count_up_to Increments 'ref' until it reaches 'limit'. (deprecated)
tf.compat.v1.count_up_to(
ref, limit, name=None
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Prefer Dataset.range instead.
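A minimal TF1 graph-mode sketch (the limit is illustrative):
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

ref = tf.Variable(0, dtype=tf.int32)
counter = tf.count_up_to(ref, limit=2)
with tf.Session() as sess:
  sess.run(ref.initializer)
  print(sess.run(counter))  # 0
  print(sess.run(counter))  # 1
  # A third run would raise an OutOfRange error, since ref has reached limit.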
Args
ref A Variable. Must be one of the following types: int32, int64. Should be from a scalar Variable node.
limit An int. If incrementing ref would bring it above limit, instead generates an 'OutOfRange' error.
name A name for the operation (optional).
Returns A Tensor. Has the same type as ref. A copy of the input before increment. If nothing else modifies the input, the values produced will all be distinct. | tensorflow.compat.v1.count_up_to |
tf.compat.v1.create_partitioned_variables Create a list of partitioned variables according to the given slicing. (deprecated)
tf.compat.v1.create_partitioned_variables(
shape, slicing, initializer, dtype=tf.dtypes.float32, trainable=True,
collections=None, name=None, reuse=None
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.get_variable with a partitioner set. Currently only one dimension of the full variable can be sliced, and the full variable can be reconstructed by the concatenation of the returned list along that dimension.
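A minimal graph-mode sketch of the deprecated behavior (the shape and slicing are illustrative):
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Split a [4, 10] variable into two slices along dimension 0.
parts = tf.create_partitioned_variables(
    shape=[4, 10], slicing=[2, 1],
    initializer=tf.zeros_initializer())
# parts is a list of two [2, 10] variables; concatenating them along
# dimension 0 reconstructs the full variable.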
Args
shape List of integers. The shape of the full variable.
slicing List of integers. How to partition the variable. Must be of the same length as shape. Each value indicates how many slices to create in the corresponding dimension. Presently only one of the values can be more than 1; that is, the variable can only be sliced along one dimension. For convenience, the requested number of partitions does not have to divide the corresponding dimension evenly. If it does not, the shapes of the partitions are incremented by 1 starting from partition 0 until all slack is absorbed. The adjustment rules may change in the future, but as you can save/restore these variables with different slicing specifications this should not be a problem.
initializer A Tensor of shape shape or a variable initializer function. If a function, it will be called once for each slice, passing the shape and data type of the slice as parameters. The function must return a tensor with the same shape as the slice.
dtype Type of the variables. Ignored if initializer is a Tensor.
trainable If True also add all the variables to the graph collection GraphKeys.TRAINABLE_VARIABLES.
collections List of graph collections keys to add the variables to. Defaults to [GraphKeys.GLOBAL_VARIABLES].
name Optional name for the full variable. Defaults to "PartitionedVariable" and gets uniquified automatically.
reuse Boolean or None; if True and name is set, it would reuse previously created variables. If False, it will create new variables. If None, it would inherit the parent scope reuse.
Returns A list of Variables corresponding to the slicing.
Raises
ValueError If any of the arguments is malformed. | tensorflow.compat.v1.create_partitioned_variables |
Module: tf.compat.v1.data tf.data.Dataset API for input pipelines. See Importing Data for an overview. Modules experimental module: Experimental API for building input pipelines. Classes class Dataset: Represents a potentially large set of elements. class DatasetSpec: Type specification for tf.data.Dataset. class FixedLengthRecordDataset: A Dataset of fixed-length records from one or more binary files. class Iterator: Represents the state of iterating through a Dataset. class Options: Represents options for tf.data.Dataset. class TFRecordDataset: A Dataset comprising records from one or more TFRecord files. class TextLineDataset: A Dataset comprising lines from one or more text files. Functions get_output_classes(...): Returns the output classes for elements of the input dataset / iterator. get_output_shapes(...): Returns the output shapes for elements of the input dataset / iterator. get_output_types(...): Returns the output types for elements of the input dataset / iterator. make_initializable_iterator(...): Creates an iterator for elements of dataset. make_one_shot_iterator(...): Creates an iterator for elements of dataset.
Other Members
AUTOTUNE -1
INFINITE_CARDINALITY -1
UNKNOWN_CARDINALITY -2 | tensorflow.compat.v1.data |
tf.compat.v1.data.Dataset Represents a potentially large set of elements. Inherits From: Dataset
tf.compat.v1.data.Dataset()
A Dataset can be used to represent an input pipeline as a collection of elements and a "logical plan" of transformations that act on those elements.
Args
variant_tensor A DT_VARIANT tensor that represents the dataset.
Attributes
element_spec The type specification of an element of this dataset.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
output_classes Returns the class of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_classes(dataset).
output_shapes Returns the shape of each component of an element of this dataset. (deprecated)Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_shapes(dataset).
output_types Returns the type of each component of an element of this dataset. (deprecated)Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_types(dataset).
Methods apply View source
apply(
transformation_func
)
Applies a transformation function to this dataset. apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset.
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
  return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
Args
transformation_func A function that takes one Dataset argument and returns a Dataset.
Returns
Dataset The Dataset returned by applying transformation_func to this dataset. as_numpy_iterator View source
as_numpy_iterator()
Returns an iterator which converts all elements of the dataset to numpy. Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
  print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
  print(element)
1
2
3
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
as_numpy_iterator() will preserve the nested structure of dataset elements.
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
Returns An iterable over the elements of the dataset, with their tensors converted to numpy arrays.
Raises
TypeError if an element contains a non-Tensor value.
RuntimeError if eager execution is not enabled. batch View source
batch(
batch_size, drop_remainder=False
)
Combines consecutive elements of this dataset into batches.
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset. cache View source
cache(
filename=''
)
Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to .cache() will have no effect until the cache file is removed or the filename is changed.
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache("/path/to/file") # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache.
Args
filename A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory.
Returns
Dataset A Dataset. cardinality View source
cardinality()
Returns the cardinality of the dataset, if known. cardinality may return tf.data.INFINITE_CARDINALITY if the dataset contains an infinite number of elements or tf.data.UNKNOWN_CARDINALITY if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file).
dataset = tf.data.Dataset.range(42)
print(dataset.cardinality().numpy())
42
dataset = dataset.repeat()
cardinality = dataset.cardinality()
print((cardinality == tf.data.INFINITE_CARDINALITY).numpy())
True
dataset = dataset.filter(lambda x: True)
cardinality = dataset.cardinality()
print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy())
True
Returns A scalar tf.int64 Tensor representing the cardinality of the dataset. If the cardinality is infinite or unknown, cardinality returns the named constants tf.data.INFINITE_CARDINALITY and tf.data.UNKNOWN_CARDINALITY respectively.
concatenate View source
concatenate(
dataset
)
Creates a Dataset by concatenating the given dataset with this dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
# The input dataset and dataset to be concatenated should have the same
# nested structures and output types.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
Args
dataset Dataset to be concatenated.
Returns
Dataset A Dataset. enumerate View source
enumerate(
start=0
)
Enumerates the elements of this dataset. It is similar to python's enumerate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
  print(element)
(5, 1)
(6, 2)
(7, 3)
# The nested structure of the input dataset determines the structure of
# elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
dataset = dataset.enumerate()
for element in dataset.as_numpy_iterator():
  print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
Args
start A tf.int64 scalar tf.Tensor, representing the start value for enumeration.
Returns
Dataset A Dataset. filter View source
filter(
predicate
)
Filters this dataset according to predicate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
[1, 2]
# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
  return tf.math.equal(x, 1)
dataset = dataset.filter(filter_fn)
list(dataset.as_numpy_iterator())
[1]
Args
predicate A function mapping a dataset element to a boolean.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. filter_with_legacy_function View source
filter_with_legacy_function(
predicate
)
Filters this dataset according to predicate. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.filter().
Note: This is an escape hatch for existing uses of filter that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to filter as this method will be removed in V2.
Args
predicate A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to a scalar tf.bool tensor.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. flat_map View source
flat_map(
map_func
)
Maps map_func across this dataset and flattens the result. Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:
dataset = tf.data.Dataset.from_tensor_slices(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7, 8, 9]
tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1).
Args
map_func A function mapping a dataset element to a dataset.
Returns
Dataset A Dataset. from_generator View source
@staticmethod
from_generator(
generator, output_types=None, output_shapes=None, args=None,
output_signature=None
)
Creates a Dataset whose elements are generated by generator. (deprecated arguments) Warning: SOME ARGUMENTS ARE DEPRECATED: (output_shapes, output_types). They will be removed in a future version. Instructions for updating: Use output_signature instead. The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with either the given output_signature argument or with the given output_types and (optionally) output_shapes arguments, whichever was specified. The recommended way to call from_generator is to use the output_signature argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by the tf.TypeSpec objects from the output_signature argument:
def gen():
  ragged_tensor = tf.ragged.constant([[1, 2], [3]])
  yield 42, ragged_tensor
dataset = tf.data.Dataset.from_generator(
gen,
output_signature=(
tf.TensorSpec(shape=(), dtype=tf.int32),
tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))
list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
There is also a deprecated way to call from_generator, either with the output_types argument alone or together with the output_shapes argument. In this case the output of the function will be assumed to consist of tf.Tensor objects with the types defined by output_types and with shapes which are either unknown or defined by output_shapes.
Note: The current implementation of Dataset.from_generator() uses tf.numpy_function and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment.
Note: If generator depends on mutable global variables or other external state, be aware that the runtime may invoke generator multiple times (in order to support repeating the Dataset) and at any time between the call to Dataset.from_generator() and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in generator before calling Dataset.from_generator().
Args
generator A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args.
output_types (Optional.) A nested structure of tf.DType objects corresponding to each component of an element yielded by generator.
output_shapes (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator.
args (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments.
output_signature (Optional.) A nested structure of tf.TypeSpec objects corresponding to each component of an element yielded by generator.
Returns
Dataset A Dataset. from_sparse_tensor_slices View source
@staticmethod
from_sparse_tensor_slices(
sparse_tensor
)
Splits each rank-N tf.sparse.SparseTensor in this dataset row-wise. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.from_tensor_slices().
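A minimal sketch of the deprecated API (the sparse tensor values are illustrative):
st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
                            values=[1, 2],
                            dense_shape=[2, 4])
# Each dataset element is one rank-1 sparse row of `st`.
ds = tf.compat.v1.data.Dataset.from_sparse_tensor_slices(st)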
Args
sparse_tensor A tf.sparse.SparseTensor.
Returns
Dataset A Dataset of rank-(N-1) sparse tensors. from_tensor_slices View source
@staticmethod
from_tensor_slices(
tensors
)
Creates a Dataset whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.
# Slicing a 1D tensor produces scalar tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
list(dataset.as_numpy_iterator())
[1, 2, 3]
# Slicing a 2D tensor produces 1D tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
# Slicing a tuple of 1D tensors produces tuple elements containing
# scalar tensors.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
# Dictionary structure is also preserved.
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
{'a': 2, 'b': 4}]
True
# Two tensors can be combined into one Dataset object.
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor
dataset = Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted
# to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features)
labels_dataset = Dataset.from_tensor_slices(labels)
dataset = Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset
# in similar fashion.
batched_features = tf.constant([[[1, 3], [2, 3]],
[[2, 1], [1, 2]],
[[3, 3], [3, 2]]], shape=(3, 2, 2))
batched_labels = tf.constant([['A', 'A'],
['B', 'B'],
['A', 'B']], shape=(3, 2, 1))
dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator():
  print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element, with each component having the same size in the first dimension.
Returns
Dataset A Dataset. from_tensors View source
@staticmethod
from_tensors(
tensors
)
Creates a Dataset with a single element, comprising the given tensors. from_tensors produces a dataset containing only a single element. To slice the input tensor into multiple elements, use from_tensor_slices instead.
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element.
Returns
Dataset A Dataset. interleave View source
interleave(
map_func, cycle_length=None, block_length=None, num_parallel_calls=None,
deterministic=None
)
Maps map_func across this dataset, and interleaves the results. For example, you can use Dataset.interleave() to process many input files concurrently:
# Preprocess 4 files concurrently, and interleave blocks of 16 records
# from each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
def parse_fn(filename):
  return tf.data.Dataset.range(10)
dataset = dataset.interleave(lambda x:
tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
cycle_length=4, block_length=16)
The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. If you set cycle_length to 1, this transformation will handle one input element at a time, and will produce identical results to tf.data.Dataset.flat_map. In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. For example:
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
lambda x: Dataset.from_tensors(x).repeat(6),
cycle_length=2, block_length=4)
list(dataset.as_numpy_iterator())
[1, 1, 1, 1,
2, 2, 2, 2,
1, 1,
2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
3, 3,
4, 4,
5, 5, 5, 5,
5, 5]
Note: The order of elements yielded by this transformation is deterministic, as long as map_func is a pure function and deterministic=True. If map_func contains any stateful operations, the order in which that state is accessed is undefined.
Performance can often be improved by setting num_parallel_calls so that interleave will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set deterministic=False.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to a dataset.
cycle_length (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If num_parallel_calls is set to tf.data.AUTOTUNE, the cycle_length argument identifies the maximum degree of parallelism.
block_length (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1.
num_parallel_calls (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. list_files View source
@staticmethod
list_files(
file_pattern, shuffle=None, seed=None
)
A dataset of all files matching one or more glob patterns. The file_pattern argument should be a small number of glob patterns. If your filenames have already been globbed, use Dataset.from_tensor_slices(filenames) instead, as re-globbing every filename with list_files may result in poor performance with remote storage systems.
Note: The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a seed or shuffle=False to get results in a deterministic order.
Example: If we had the following files on our filesystem: /path/to/dir/a.txt /path/to/dir/b.py /path/to/dir/c.py If we pass "/path/to/dir/*.py" as the pattern, the dataset would produce: /path/to/dir/b.py /path/to/dir/c.py
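As a code sketch of that example (the paths are hypothetical; shuffle=False keeps the order deterministic):
dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
for f in dataset:
  print(f.numpy())
# b'/path/to/dir/b.py'
# b'/path/to/dir/c.py'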
Args
file_pattern A string, a list of strings, or a tf.Tensor of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched.
shuffle (Optional.) If True, the file names will be shuffled randomly. Defaults to True.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
Returns
Dataset A Dataset of strings corresponding to file names. make_initializable_iterator View source
make_initializable_iterator(
shared_name=None
)
Creates an iterator for elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_initializable_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
Note: The returned iterator will be in an uninitialized state, and you must run the iterator.initializer operation before using it:
# Building graph ...
dataset = ...
iterator = dataset.make_initializable_iterator()
next_value = iterator.get_next() # This is a Tensor.
# ... from within a session ...
sess.run(iterator.initializer)
try:
  while True:
    value = sess.run(next_value)
    ...
except tf.errors.OutOfRangeError:
  pass
Args
shared_name (Optional.) If non-empty, the returned iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server).
Returns A tf.data.Iterator for elements of this dataset.
Raises
RuntimeError If eager execution is enabled. make_one_shot_iterator View source
make_one_shot_iterator()
Creates an iterator for elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_one_shot_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
Note: The returned iterator will be initialized automatically. A "one-shot" iterator does not currently support re-initialization. For that see make_initializable_iterator.
Example:
# Building graph ...
dataset = ...
next_value = dataset.make_one_shot_iterator().get_next()
# ... from within a session ...
try:
  while True:
    value = sess.run(next_value)
    ...
except tf.errors.OutOfRangeError:
  pass
Returns A tf.data.Iterator for elements of this dataset.
map View source
map(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. map_func can be used to change both the values and the structure of a dataset's elements. For example, adding 1 to each element, or projecting a subset of element components.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
The input signature of map_func is determined by the structure of each element in this dataset.
dataset = Dataset.range(5)
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = dataset.map(lambda x: x + 1)
# Each element is a tuple containing two `tf.Tensor` objects.
elements = [(1, "foo"), (2, "bar"), (3, "baz")]
dataset = tf.data.Dataset.from_generator(
lambda: elements, (tf.int32, tf.string))
# `map_func` takes two arguments of type `tf.Tensor`. This function
# projects out just the first component.
result = dataset.map(lambda x_int, y_str: x_int)
list(result.as_numpy_iterator())
[1, 2, 3]
# Each element is a dictionary mapping strings to `tf.Tensor` objects.
elements = ([{"a": 1, "b": "foo"},
{"a": 2, "b": "bar"},
{"a": 3, "b": "baz"}])
dataset = tf.data.Dataset.from_generator(
lambda: elements, {"a": tf.int32, "b": tf.string})
# `map_func` takes a single argument of type `dict` with the same keys
# as the elements.
result = dataset.map(lambda d: str(d["a"]) + d["b"])
The value or values returned by map_func determine the structure of each element in the returned dataset.
dataset = tf.data.Dataset.range(3)
# `map_func` returns two `tf.Tensor` objects.
def g(x):
  return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(x):
  return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
# `map_func` can return nested structures.
def i(x):
  return (37.0, [42, 16]), "foo"
result = dataset.map(i)
result.element_spec
((TensorSpec(shape=(), dtype=tf.float32, name=None),
TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
TensorSpec(shape=(), dtype=tf.string, name=None))
map_func can accept as arguments and return any type of dataset element. Note that irrespective of the context in which map_func is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use tf.py_function, which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
# transform a string tensor to upper case string using a Python function
def upper_case_fn(t: tf.Tensor):
  return t.numpy().decode('utf-8').upper()
d = d.map(lambda x: tf.py_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
3) Use tf.numpy_function, which also allows you to write arbitrary Python code. Note that tf.py_function accepts tf.Tensor whereas tf.numpy_function accepts numpy arrays and returns only numpy arrays. For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
def upper_case_fn(t: np.ndarray):
  return t.decode('utf-8').upper()
d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
Note that the use of tf.numpy_function and tf.py_function in general precludes the possibility of executing user-defined transformations in parallel (because of Python GIL). Performance can often be improved by setting num_parallel_calls so that map will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set deterministic=False.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to another dataset element.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. map_with_legacy_function View source
map_with_legacy_function(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.map().
Note: This is an escape hatch for existing uses of map that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to map as this method will be removed in V2.
Args
map_func A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to another nested structure of tensors.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. options View source
options()
Returns the options for this dataset and its inputs.
Returns A tf.data.Options object representing the dataset options.
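A small sketch of reading options back after setting one via with_options, a Dataset method in the same API (the option chosen is illustrative):
options = tf.data.Options()
options.experimental_deterministic = False
ds = tf.data.Dataset.range(5).with_options(options)
print(ds.options().experimental_deterministic)  # False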
padded_batch View source
padded_batch(
batch_size, padded_shapes=None, padding_values=None, drop_remainder=False
)
Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like tf.data.Dataset.batch, the components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Unlike tf.data.Dataset.batch, the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in padded_shapes. The padded_shapes argument determines the resulting shape for each dimension of each component in an output element: If the dimension is a constant, the component will be padded out to that length in that dimension. If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension.
A = (tf.data.Dataset
.range(1, 5, output_type=tf.int32)
.map(lambda x: tf.fill([x], x)))
# Pad to the smallest per-batch size that fits all elements.
B = A.padded_batch(2)
for element in B.as_numpy_iterator():
  print(element)
[[1 0]
[2 2]]
[[3 3 3 0]
[4 4 4 4]]
# Pad to a fixed size.
C = A.padded_batch(2, padded_shapes=5)
for element in C.as_numpy_iterator():
  print(element)
[[1 0 0 0 0]
[2 2 0 0 0]]
[[3 3 3 0 0]
[4 4 4 4 0]]
# Pad with a custom value.
D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
for element in D.as_numpy_iterator():
  print(element)
[[ 1 -1 -1 -1 -1]
[ 2 2 -1 -1 -1]]
[[ 3 3 3 -1 -1]
[ 4 4 4 4 -1]]
# Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]),
([4, 5], [11, 12])]
dataset = tf.data.Dataset.from_generator(
lambda: iter(elements), (tf.int32, tf.int32))
# Pad the first component of the tuple to length 4, and the second
# component to the smallest size that fits.
dataset = dataset.padded_batch(2,
padded_shapes=([4], [None]),
padding_values=(-1, 100))
list(dataset.as_numpy_iterator())
[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
array([[ 10, 100], [ 11, 12]], dtype=int32))]
# Pad with a single value and multiple components.
E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
for element in E.as_numpy_iterator():
  print(element)
(array([[ 1, -1],
[ 2, 2]], dtype=int32), array([[ 1, -1],
[ 2, 2]], dtype=int32))
(array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32))
See also tf.data.experimental.dense_to_sparse_batch, which combines elements that may have different shapes into a tf.sparse.SparseTensor.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
padded_shapes (Optional.) A nested structure of tf.TensorShape or tf.int64 vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. padded_shapes must be set if any component has an unknown rank.
padding_values (Optional.) A nested structure of scalar-shaped tf.Tensor, representing the padding values to use for the respective components. None represents that the nested structure should be padded with default values. Defaults are 0 for numeric types and the empty string for string types. The padding_values should have the same structure as the input dataset. If padding_values is a single element and the input dataset has multiple components, then the same padding_values will be used to pad every component of the dataset. If padding_values is a scalar, then its value will be broadcasted to match the shape of each component.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset.
Raises
ValueError If a component has an unknown rank, and the padded_shapes argument is not set. prefetch View source
prefetch(
buffer_size
)
Creates a Dataset that prefetches elements from this dataset. Most dataset input pipelines should end with a call to prefetch. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
Note: Like other Dataset methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. examples.prefetch(2) will prefetch two elements (2 examples), while examples.batch(20).prefetch(2) will prefetch 2 elements (2 batches, of 20 examples each).
dataset = tf.data.Dataset.range(3)
dataset = dataset.prefetch(2)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching.
Returns
Dataset A Dataset. range View source
@staticmethod
range(
*args, **kwargs
)
Creates a Dataset of a step-separated range of values.
list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
list(Dataset.range(5, 1).as_numpy_iterator())
[]
list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
Args
*args follows the same semantics as Python's range. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2].
**kwargs output_type: The dtype of the elements of the resulting dataset. (Optional, default: tf.int64).
Returns
Dataset A RangeDataset.
Raises
ValueError if len(args) == 0. reduce View source
reduce(
initial_state, reduce_func
)
Reduces the input dataset to a single element. The transformation calls reduce_func successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The initial_state argument is used for the initial state and the final state is returned as the result.
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
10
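The state may also be a nested structure; here is a minimal sketch (not from the original docs) that keeps a (sum, count) pair to derive a mean, illustrating that new_state must keep the structure of initial_state.
import numpy as np
import tensorflow as tf
ds = tf.data.Dataset.range(1, 6)  # [1, 2, 3, 4, 5]
# The state is a (sum, count) tuple; reduce_func returns the same structure.
total, count = ds.reduce(
    (np.int64(0), np.int64(0)),
    lambda state, x: (state[0] + x, state[1] + 1))
print(total.numpy() / count.numpy())  # 3.0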
Args
initial_state An element representing the initial state of the transformation.
reduce_func A function that maps (old_state, input_element) to new_state. It must take two arguments and return a new element. The structure of new_state must match the structure of initial_state.
Returns A dataset element corresponding to the final state of the transformation.
repeat View source
repeat(
count=None
)
Repeats this dataset so each original value is seen count times.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
Note: If this dataset is a function of global state (e.g. a random number generator), then different repetitions may produce different elements.
Args
count (Optional.) A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely.
Returns
Dataset A Dataset. shard View source
shard(
num_shards, index
)
Creates a Dataset that includes only 1/num_shards of this dataset. shard is deterministic. The Dataset produced by A.shard(n, i) will contain all elements of A whose index mod n = i.
A = tf.data.Dataset.range(10)
B = A.shard(num_shards=3, index=0)
list(B.as_numpy_iterator())
[0, 3, 6, 9]
C = A.shard(num_shards=3, index=1)
list(C.as_numpy_iterator())
[1, 4, 7]
D = A.shard(num_shards=3, index=2)
list(D.as_numpy_iterator())
[2, 5, 8]
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows:
d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Important caveats: Be sure to shard before you use any randomizing operator (such as shuffle). Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline:
d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Args
num_shards A tf.int64 scalar tf.Tensor, representing the number of shards operating in parallel.
index A tf.int64 scalar tf.Tensor, representing the worker index.
Returns
Dataset A Dataset.
Raises
InvalidArgumentError if num_shards or index are illegal values.
Note: error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.)
shuffle View source
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None
)
Randomly shuffles the elements of this dataset. This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. reshuffle_each_iteration controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the repeat transformation:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 0, 2]
In TF 2.0, tf.data.Dataset objects are Python iterables which makes it possible to also create epochs through Python iteration:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the number of elements from this dataset from which the new dataset will sample.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
reshuffle_each_iteration (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to True.)
Returns
Dataset A Dataset. skip View source
skip(
count
)
Creates a Dataset that skips count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.skip(7)
list(dataset.as_numpy_iterator())
[7, 8, 9]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be skipped to form the new dataset. If count is greater than the size of this dataset, the new dataset will contain no elements. If count is -1, skips the entire dataset.
Returns
Dataset A Dataset. take View source
take(
count
)
Creates a Dataset with at most count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.take(3)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be taken to form the new dataset. If count is -1, or if count is greater than the size of this dataset, the new dataset will contain all elements of this dataset.
Returns
Dataset A Dataset. unbatch View source
unbatch()
Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped [B, a0, a1, ...], where B may vary for each input element, then for each element in the dataset, the unbatched dataset will contain B consecutive elements of shape [a0, a1, ...].
elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
dataset = dataset.unbatch()
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 1, 2, 3, 4]
Note: unbatch requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of unbatch.
Returns A Dataset.
window View source
window(
size, shift=None, stride=1, drop_remainder=False
)
Combines (nests of) input elements into a dataset of (nests of) windows. A "window" is a finite dataset of flat elements of size size (or possibly fewer if there are not enough input elements to fill the window and drop_remainder evaluates to False). The shift argument determines the number of input elements by which the window moves on each iteration. If windows and elements are both numbered starting at 0, the first element in window k will be element k * shift of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. The stride argument determines the stride of the input elements, and the shift argument determines the shift of the window. For example:
dataset = tf.data.Dataset.range(7).window(2)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1]
[2, 3]
[4, 5]
[6]
dataset = tf.data.Dataset.range(7).window(3, 2, 1, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1, 2]
[2, 3, 4]
[4, 5, 6]
dataset = tf.data.Dataset.range(7).window(3, 1, 2, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
Note that when the window transformation is applied to a dataset of nested elements, it produces a dataset of nested windows.
nested = ([1, 2, 3, 4], [5, 6, 7, 8])
dataset = tf.data.Dataset.from_tensor_slices(nested).window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print(tuple(to_numpy(component) for component in window))
([1, 2], [5, 6])
([3, 4], [7, 8])
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]})
dataset = dataset.window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print({'a': to_numpy(window['a'])})
{'a': [1, 2]}
{'a': [3, 4]}
Args
size A tf.int64 scalar tf.Tensor, representing the number of elements of the input dataset to combine into a window. Must be positive.
shift (Optional.) A tf.int64 scalar tf.Tensor, representing the number of input elements by which the window moves in each iteration. Defaults to size. Must be positive.
stride (Optional.) A tf.int64 scalar tf.Tensor, representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element".
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last windows should be dropped if their size is smaller than size.
Returns
Dataset A Dataset of (nests of) windows -- finite datasets of flat elements created from the (nests of) input elements. with_options View source
with_options(
options
)
Returns a new tf.data.Dataset with the given options set. The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
cycle_length=3,
num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.experimental_deterministic = False
ds = ds.with_options(options)
Args
options A tf.data.Options that identifies the options to use.
Returns
Dataset A Dataset with the given options.
Raises
ValueError when an option is set more than once to a non-default value. zip View source
@staticmethod
zip(
datasets
)
Creates a Dataset by zipping together the given datasets. This method has similar semantics to the built-in zip() function in Python, with the main difference being that the datasets argument can be an arbitrary nested structure of Dataset objects.
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
# [9, 10],
# [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
Args
datasets A nested structure of datasets.
Returns
Dataset A Dataset. __bool__ View source
__bool__()
__iter__ View source
__iter__()
Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol.
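A minimal sketch (eager execution assumed) of consuming the iterator via the standard protocol:
import tensorflow as tf
it = iter(tf.data.Dataset.range(3))
print(next(it).numpy())  # 0
print(next(it).numpy())  # 1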
Returns A tf.data.Iterator for the elements of this dataset.
Raises
RuntimeError If not inside of tf.function and not executing eagerly. __len__ View source
__len__()
Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use tf.data.Dataset.cardinality instead.
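A minimal sketch, assuming eager execution:
import tensorflow as tf
print(len(tf.data.Dataset.range(4)))  # 4
# len() of an infinite dataset raises; use cardinality() instead:
infinite = tf.data.Dataset.range(4).repeat()
print((infinite.cardinality() == tf.data.INFINITE_CARDINALITY).numpy())  # True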
Returns An integer representing the length of the dataset.
Raises
RuntimeError If the dataset length is unknown or infinite, or if eager execution is not enabled. __nonzero__ View source
__nonzero__()
Module: tf.compat.v1.data.experimental Experimental API for building input pipelines. This module contains experimental Dataset sources and transformations that can be used in conjunction with the tf.data.Dataset API. Note that the tf.data.experimental API is not subject to the same backwards compatibility guarantees as tf.data, but we will provide deprecation advice in advance of removing existing functionality. See Importing Data for an overview. Modules service module: API for using the tf.data service. Classes class AutoShardPolicy: Represents the type of auto-sharding we enable. class CheckpointInputPipelineHook: Checkpoints input pipeline state every N steps or seconds. class CsvDataset: A Dataset comprising lines from one or more CSV files. class DatasetStructure: Type specification for tf.data.Dataset. class DistributeOptions: Represents options for distributed data processing. class MapVectorizationOptions: Represents options for the MapVectorization optimization. class OptimizationOptions: Represents options for dataset optimizations. class Optional: Represents a value that may or may not be present. class OptionalStructure: Type specification for tf.experimental.Optional. class RandomDataset: A Dataset of pseudorandom values. class Reducer: A reducer is used for reducing a set of elements. class SqlDataset: A Dataset consisting of the results from a SQL query. class StatsAggregator: A stateful resource that aggregates statistics from one or more iterators. class StatsOptions: Represents options for collecting dataset stats using StatsAggregator. class Structure: Specifies a TensorFlow value type. class TFRecordWriter: Writes a dataset to a TFRecord file. class ThreadingOptions: Represents options for dataset threading. Functions Counter(...): Creates a Dataset that counts from start in steps of size step. RaggedTensorStructure(...): DEPRECATED FUNCTION SparseTensorStructure(...): DEPRECATED FUNCTION TensorArrayStructure(...): DEPRECATED FUNCTION TensorStructure(...): DEPRECATED FUNCTION assert_cardinality(...): Asserts the cardinality of the input dataset. bucket_by_sequence_length(...): A transformation that buckets elements in a Dataset by length. bytes_produced_stats(...): Records the number of bytes produced by each element of the input dataset. cardinality(...): Returns the cardinality of dataset, if known. choose_from_datasets(...): Creates a dataset that deterministically chooses elements from datasets. copy_to_device(...): A transformation that copies dataset elements to the given target_device. dense_to_ragged_batch(...): A transformation that batches ragged elements into tf.RaggedTensors. dense_to_sparse_batch(...): A transformation that batches ragged elements into tf.sparse.SparseTensors. enumerate_dataset(...): A transformation that enumerates the elements of a dataset. (deprecated) from_variant(...): Constructs a dataset from the given variant and structure. get_next_as_optional(...): Returns a tf.experimental.Optional with the next element of the iterator. (deprecated) get_single_element(...): Returns the single element in dataset as a nested structure of tensors. get_structure(...): Returns the type signature for elements of the input dataset / iterator. group_by_reducer(...): A transformation that groups elements and performs a reduction. group_by_window(...): A transformation that groups windows of elements by key and reduces them. ignore_errors(...): Creates a Dataset from another Dataset and silently ignores any errors. 
latency_stats(...): Records the latency of producing each element of the input dataset. make_batched_features_dataset(...): Returns a Dataset of feature dictionaries from Example protos. make_csv_dataset(...): Reads CSV files into a dataset. make_saveable_from_iterator(...): Returns a SaveableObject for saving/restoring iterator state using Saver. (deprecated) map_and_batch(...): Fused implementation of map and batch. (deprecated) map_and_batch_with_legacy_function(...): Fused implementation of map and batch. (deprecated) parallel_interleave(...): A parallel version of the Dataset.interleave() transformation. (deprecated) parse_example_dataset(...): A transformation that parses Example protos into a dict of tensors. prefetch_to_device(...): A transformation that prefetches dataset values to the given device. rejection_resample(...): A transformation that resamples a dataset to achieve a target distribution. sample_from_datasets(...): Samples elements at random from the datasets in datasets. scan(...): A transformation that scans a function across an input dataset. shuffle_and_repeat(...): Shuffles and repeats a Dataset, reshuffling with each repetition. (deprecated) snapshot(...): API to persist the output of the input dataset. take_while(...): A transformation that stops dataset iteration based on a predicate. to_variant(...): Returns a variant representing the given dataset. unbatch(...): Splits elements of a dataset into multiple elements on the batch dimension. (deprecated) unique(...): Creates a Dataset from another Dataset, discarding duplicates.
Other Members
AUTOTUNE -1
INFINITE_CARDINALITY -1
UNKNOWN_CARDINALITY -2
tf.compat.v1.data.experimental.choose_from_datasets Creates a dataset that deterministically chooses elements from datasets.
tf.compat.v1.data.experimental.choose_from_datasets(
datasets, choice_dataset
)
For example, given the following datasets:
datasets = [tf.data.Dataset.from_tensors("foo").repeat(),
tf.data.Dataset.from_tensors("bar").repeat(),
tf.data.Dataset.from_tensors("baz").repeat()]
# Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`.
choice_dataset = tf.data.Dataset.range(3).repeat(3)
result = tf.data.experimental.choose_from_datasets(datasets, choice_dataset)
The elements of result will be: "foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz"
Args
datasets A list of tf.data.Dataset objects with compatible structure.
choice_dataset A tf.data.Dataset of scalar tf.int64 tensors between 0 and len(datasets) - 1.
Returns A dataset that interleaves elements from datasets according to the values of choice_dataset.
Raises
TypeError If the datasets or choice_dataset arguments have the wrong type.
tf.compat.v1.data.experimental.Counter Creates a Dataset that counts from start in steps of size step.
tf.compat.v1.data.experimental.Counter(
start=0, step=1, dtype=tf.dtypes.int64
)
For example: Counter() == [0, 1, 2, ...)
Counter(2) == [2, 3, ...)
Counter(2, 5) == [2, 7, 12, ...)
Counter(0, -1) == [0, -1, -2, ...)
Counter(10, -1) == [10, 9, ...)
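Counter produces an unbounded dataset, so a runnable sketch (assuming TF 2.x eager execution via the compat.v1 endpoint) must bound it with take:
import tensorflow as tf
counter = tf.compat.v1.data.experimental.Counter(start=2, step=5)
print([e.numpy() for e in counter.take(3)])  # [2, 7, 12]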
Args
start (Optional.) The starting value for the counter. Defaults to 0.
step (Optional.) The step size for the counter. Defaults to 1.
dtype (Optional.) The data type for counter elements. Defaults to tf.int64.
Returns A Dataset of scalar dtype elements.
tf.compat.v1.data.experimental.CsvDataset A Dataset comprising lines from one or more CSV files. Inherits From: Dataset
tf.compat.v1.data.experimental.CsvDataset(
filenames, record_defaults, compression_type=None, buffer_size=None,
header=False, field_delim=',', use_quote_delim=True,
na_value='', select_cols=None, exclude_cols=None
)
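For example, a hypothetical sketch (the file path and column layout are illustrative, not from the original docs) that writes a small CSV and parses it, with one entry in record_defaults per column:
import tensorflow as tf
path = "/tmp/example.csv"  # hypothetical file
with open(path, "w") as f:
    f.write("id,score\n1,0.5\n2,0.75\n")
dataset = tf.compat.v1.data.experimental.CsvDataset(
    path,
    record_defaults=[tf.int32, tf.float32],  # one DType per required column
    header=True)
for row in dataset:
    print([t.numpy() for t in row])  # [1, 0.5] then [2, 0.75]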
Args
filenames A tf.string tensor containing one or more filenames.
record_defaults A list of default values for the CSV fields. Each item in the list is either a valid CSV DType (float32, float64, int32, int64, string), or a Tensor object with one of the above types. One per column of CSV data, with either a scalar Tensor default value for the column if it is optional, or DType or empty Tensor if required. If both this and select_columns are specified, these must have the same lengths, and column_defaults is assumed to be sorted in order of increasing column index. If both this and 'exclude_cols' are specified, the sum of lengths of record_defaults and exclude_cols should equal the total number of columns in the CSV file.
compression_type (Optional.) A tf.string scalar evaluating to one of "" (no compression), "ZLIB", or "GZIP". Defaults to no compression.
buffer_size (Optional.) A tf.int64 scalar denoting the number of bytes to buffer while reading files. Defaults to 4MB.
header (Optional.) A tf.bool scalar indicating whether the CSV file(s) have header line(s) that should be skipped when parsing. Defaults to False.
field_delim (Optional.) A tf.string scalar containing the delimiter character that separates fields in a record. Defaults to ",".
use_quote_delim (Optional.) A tf.bool scalar. If False, treats double quotation marks as regular characters inside of string fields (ignoring RFC 4180, Section 2, Bullet 5). Defaults to True.
na_value (Optional.) A tf.string scalar indicating a value that will be treated as NA/NaN.
select_cols (Optional.) A sorted list of column indices to select from the input data. If specified, only this subset of columns will be parsed. Defaults to parsing all columns. At most one of select_cols and exclude_cols can be specified.
Attributes
element_spec The type specification of an element of this dataset.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
output_classes Returns the class of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_classes(dataset).
output_shapes Returns the shape of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_shapes(dataset).
output_types Returns the type of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_types(dataset).
Methods apply View source
apply(
transformation_func
)
Applies a transformation function to this dataset. apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset.
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
Args
transformation_func A function that takes one Dataset argument and returns a Dataset.
Returns
Dataset The Dataset returned by applying transformation_func to this dataset. as_numpy_iterator View source
as_numpy_iterator()
Returns an iterator which converts all elements of the dataset to numpy. Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
print(element)
1
2
3
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
as_numpy_iterator() will preserve the nested structure of dataset elements.
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
Returns An iterable over the elements of the dataset, with their tensors converted to numpy arrays.
Raises
TypeError if an element contains a non-Tensor value.
RuntimeError if eager execution is not enabled. batch View source
batch(
batch_size, drop_remainder=False
)
Combines consecutive elements of this dataset into batches.
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset. cache View source
cache(
filename=''
)
Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to .cache() will have no effect until the cache file is removed or the filename is changed.
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache("/path/to/file") # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache.
Args
filename A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory.
Returns
Dataset A Dataset. cardinality View source
cardinality()
Returns the cardinality of the dataset, if known. cardinality may return tf.data.INFINITE_CARDINALITY if the dataset contains an infinite number of elements or tf.data.UNKNOWN_CARDINALITY if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file).
dataset = tf.data.Dataset.range(42)
print(dataset.cardinality().numpy())
42
dataset = dataset.repeat()
cardinality = dataset.cardinality()
print((cardinality == tf.data.INFINITE_CARDINALITY).numpy())
True
dataset = dataset.filter(lambda x: True)
cardinality = dataset.cardinality()
print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy())
True
Returns A scalar tf.int64 Tensor representing the cardinality of the dataset. If the cardinality is infinite or unknown, cardinality returns the named constants tf.data.INFINITE_CARDINALITY and tf.data.UNKNOWN_CARDINALITY respectively.
concatenate View source
concatenate(
dataset
)
Creates a Dataset by concatenating the given dataset with this dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
# The input dataset and dataset to be concatenated should have the same
# nested structures and output types.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
Args
dataset Dataset to be concatenated.
Returns
Dataset A Dataset. enumerate View source
enumerate(
start=0
)
Enumerates the elements of this dataset. It is similar to Python's enumerate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
print(element)
(5, 1)
(6, 2)
(7, 3)
# The nested structure of the input dataset determines the structure of
# elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
dataset = dataset.enumerate()
for element in dataset.as_numpy_iterator():
print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
Args
start A tf.int64 scalar tf.Tensor, representing the start value for enumeration.
Returns
Dataset A Dataset. filter View source
filter(
predicate
)
Filters this dataset according to predicate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
[1, 2]
# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
return tf.math.equal(x, 1)
dataset = dataset.filter(filter_fn)
list(dataset.as_numpy_iterator())
[1]
Args
predicate A function mapping a dataset element to a boolean.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. filter_with_legacy_function View source
filter_with_legacy_function(
predicate
)
Filters this dataset according to predicate. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.filter().
Note: This is an escape hatch for existing uses of filter that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to filter as this method will be removed in V2.
Args
predicate A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to a scalar tf.bool tensor.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. flat_map View source
flat_map(
map_func
)
Maps map_func across this dataset and flattens the result. Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:
dataset = tf.data.Dataset.from_tensor_slices(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7, 8, 9]
tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1).
Args
map_func A function mapping a dataset element to a dataset.
Returns
Dataset A Dataset. from_generator View source
@staticmethod
from_generator(
generator, output_types=None, output_shapes=None, args=None,
output_signature=None
)
Creates a Dataset whose elements are generated by generator. (deprecated arguments) Warning: SOME ARGUMENTS ARE DEPRECATED: (output_shapes, output_types). They will be removed in a future version. Instructions for updating: Use output_signature instead. The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with either the given output_signature argument or with the given output_types and (optionally) output_shapes arguments, whichever was specified. The recommended way to call from_generator is to use the output_signature argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by tf.TypeSpec objects from the output_signature argument:
def gen():
ragged_tensor = tf.ragged.constant([[1, 2], [3]])
yield 42, ragged_tensor
dataset = tf.data.Dataset.from_generator(
gen,
output_signature=(
tf.TensorSpec(shape=(), dtype=tf.int32),
tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))
list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
There is also a deprecated way to call from_generator, either with the output_types argument alone or together with the output_shapes argument. In this case the output of the function will be assumed to consist of tf.Tensor objects with the types defined by output_types and with the shapes which are either unknown or defined by output_shapes.
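A minimal sketch of that deprecated calling convention (the generator and types are illustrative):
import tensorflow as tf
def gen():
    yield 1, "a"
    yield 2, "b"
# output_types (and optionally output_shapes) instead of output_signature.
dataset = tf.data.Dataset.from_generator(
    gen, output_types=(tf.int32, tf.string), output_shapes=((), ()))
list(dataset.as_numpy_iterator())  # [(1, b'a'), (2, b'b')]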
Note: The current implementation of Dataset.from_generator() uses tf.numpy_function and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment.
Note: If generator depends on mutable global variables or other external state, be aware that the runtime may invoke generator multiple times (in order to support repeating the Dataset) and at any time between the call to Dataset.from_generator() and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in generator before calling Dataset.from_generator().
Args
generator A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args.
output_types (Optional.) A nested structure of tf.DType objects corresponding to each component of an element yielded by generator.
output_shapes (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator.
args (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments.
output_signature (Optional.) A nested structure of tf.TypeSpec objects corresponding to each component of an element yielded by generator.
Returns
Dataset A Dataset. from_sparse_tensor_slices View source
@staticmethod
from_sparse_tensor_slices(
sparse_tensor
)
Splits each rank-N tf.sparse.SparseTensor in this dataset row-wise. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.from_tensor_slices().
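As the deprecation note suggests, tf.data.Dataset.from_tensor_slices also slices a tf.sparse.SparseTensor row-wise; a minimal sketch of the replacement (eager execution assumed):
import tensorflow as tf
st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2],
                            dense_shape=[2, 4])
ds = tf.data.Dataset.from_tensor_slices(st)
for el in ds:  # each element is a rank-1 SparseTensor
    print(el.indices.numpy(), el.values.numpy())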
Args
sparse_tensor A tf.sparse.SparseTensor.
Returns
Dataset A Dataset of rank-(N-1) sparse tensors. from_tensor_slices View source
@staticmethod
from_tensor_slices(
tensors
)
Creates a Dataset whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.
# Slicing a 1D tensor produces scalar tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
list(dataset.as_numpy_iterator())
[1, 2, 3]
# Slicing a 2D tensor produces 1D tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
# Slicing a tuple of 1D tensors produces tuple elements containing
# scalar tensors.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
# Dictionary structure is also preserved.
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
{'a': 2, 'b': 4}]
True
# Two tensors can be combined into one Dataset object.
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor
dataset = Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted
# to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features)
labels_dataset = Dataset.from_tensor_slices(labels)
dataset = Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset
# in similar fashion.
batched_features = tf.constant([[[1, 3], [2, 3]],
[[2, 1], [1, 2]],
[[3, 3], [3, 2]]], shape=(3, 2, 2))
batched_labels = tf.constant([['A', 'A'],
['B', 'B'],
['A', 'B']], shape=(3, 2, 1))
dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator():
print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element, with each component having the same size in the first dimension.
Returns
Dataset A Dataset. from_tensors View source
@staticmethod
from_tensors(
tensors
)
Creates a Dataset with a single element, comprising the given tensors. from_tensors produces a dataset containing only a single element. To slice the input tensor into multiple elements, use from_tensor_slices instead.
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element.
Returns
Dataset A Dataset. interleave View source
interleave(
map_func, cycle_length=None, block_length=None, num_parallel_calls=None,
deterministic=None
)
Maps map_func across this dataset, and interleaves the results. For example, you can use Dataset.interleave() to process many input files concurrently:
# Preprocess 4 files concurrently, and interleave blocks of 16 records
# from each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
def parse_fn(filename):
return tf.data.Dataset.range(10)
dataset = dataset.interleave(lambda x:
tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
cycle_length=4, block_length=16)
The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. If you set cycle_length to 1, this transformation will handle one input element at a time, and will produce identical results to tf.data.Dataset.flat_map. In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. For example:
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
lambda x: Dataset.from_tensors(x).repeat(6),
cycle_length=2, block_length=4)
list(dataset.as_numpy_iterator())
[1, 1, 1, 1,
2, 2, 2, 2,
1, 1,
2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
3, 3,
4, 4,
5, 5, 5, 5,
5, 5]
Note: The order of elements yielded by this transformation is deterministic, as long as map_func is a pure function and deterministic=True. If map_func contains any stateful operations, the order in which that state is accessed is undefined.
Performance can often be improved by setting num_parallel_calls so that interleave will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set deterministic=False.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to a dataset.
cycle_length (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If num_parallel_calls is set to tf.data.AUTOTUNE, the cycle_length argument identifies the maximum degree of parallelism.
block_length (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1.
num_parallel_calls (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. list_files View source
@staticmethod
list_files(
file_pattern, shuffle=None, seed=None
)
A dataset of all files matching one or more glob patterns. The file_pattern argument should be a small number of glob patterns. If your filenames have already been globbed, use Dataset.from_tensor_slices(filenames) instead, as re-globbing every filename with list_files may result in poor performance with remote storage systems.
Note: The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a seed or shuffle=False to get results in a deterministic order.
Example: If we had the following files on our filesystem:
/path/to/dir/a.txt
/path/to/dir/b.py
/path/to/dir/c.py
If we pass "/path/to/dir/*.py" as the file_pattern, the dataset would produce:
/path/to/dir/b.py
/path/to/dir/c.py
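A sketch mirroring that example (the paths are illustrative and must exist, since list_files raises if nothing matches); shuffle=False keeps the order deterministic:
import tensorflow as tf
dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
# ==> "/path/to/dir/b.py", "/path/to/dir/c.py"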
Args
file_pattern A string, a list of strings, or a tf.Tensor of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched.
shuffle (Optional.) If True, the file names will be shuffled randomly. Defaults to True.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
Returns
Dataset A Dataset of strings corresponding to file names. make_initializable_iterator View source
make_initializable_iterator(
shared_name=None
)
Creates an iterator for elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_initializable_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
Note: The returned iterator will be in an uninitialized state, and you must run the iterator.initializer operation before using it:
# Building graph ...
dataset = ...
iterator = dataset.make_initializable_iterator()
next_value = iterator.get_next() # This is a Tensor.
# ... from within a session ...
sess.run(iterator.initializer)
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
Args
shared_name (Optional.) If non-empty, the returned iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server).
Returns A tf.data.Iterator for elements of this dataset.
Raises
RuntimeError If eager execution is enabled. make_one_shot_iterator View source
make_one_shot_iterator()
Creates an iterator for elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_one_shot_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
Note: The returned iterator will be initialized automatically. A "one-shot" iterator does not currently support re-initialization. For that see make_initializable_iterator.
Example: # Building graph ...
dataset = ...
next_value = dataset.make_one_shot_iterator().get_next()
# ... from within a session ...
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
Returns A tf.data.Iterator for elements of this dataset.
map View source
map(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. map_func can be used to change both the values and the structure of a dataset's elements. For example, adding 1 to each element, or projecting a subset of element components.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
The input signature of map_func is determined by the structure of each element in this dataset.
dataset = Dataset.range(5)
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = dataset.map(lambda x: x + 1)
# Each element is a tuple containing two `tf.Tensor` objects.
elements = [(1, "foo"), (2, "bar"), (3, "baz")]
dataset = tf.data.Dataset.from_generator(
lambda: elements, (tf.int32, tf.string))
# `map_func` takes two arguments of type `tf.Tensor`. This function
# projects out just the first component.
result = dataset.map(lambda x_int, y_str: x_int)
list(result.as_numpy_iterator())
[1, 2, 3]
# Each element is a dictionary mapping strings to `tf.Tensor` objects.
elements = ([{"a": 1, "b": "foo"},
{"a": 2, "b": "bar"},
{"a": 3, "b": "baz"}])
dataset = tf.data.Dataset.from_generator(
lambda: elements, {"a": tf.int32, "b": tf.string})
# `map_func` takes a single argument of type `dict` with the same keys
# as the elements.
result = dataset.map(lambda d: str(d["a"]) + d["b"])
The value or values returned by map_func determine the structure of each element in the returned dataset.
dataset = tf.data.Dataset.range(3)
# `map_func` returns two `tf.Tensor` objects.
def g(x):
return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(x):
return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
# `map_func` can return nested structures.
def i(x):
return (37.0, [42, 16]), "foo"
result = dataset.map(i)
result.element_spec
((TensorSpec(shape=(), dtype=tf.float32, name=None),
TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
TensorSpec(shape=(), dtype=tf.string, name=None))
map_func can accept as arguments and return any type of dataset element. Note that irrespective of the context in which map_func is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use tf.py_function, which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
# transform a string tensor to upper case string using a Python function
def upper_case_fn(t: tf.Tensor):
return t.numpy().decode('utf-8').upper()
d = d.map(lambda x: tf.py_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
3) Use tf.numpy_function, which also allows you to write arbitrary Python code. Note that tf.py_function accepts tf.Tensor whereas tf.numpy_function accepts numpy arrays and returns only numpy arrays. For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
def upper_case_fn(t: np.ndarray):
return t.decode('utf-8').upper()
d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
Note that the use of tf.numpy_function and tf.py_function in general precludes the possibility of executing user-defined transformations in parallel (because of the Python GIL). Performance can often be improved by setting num_parallel_calls so that map will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set deterministic=False.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to another dataset element.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. map_with_legacy_function View source
map_with_legacy_function(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.map().
Note: This is an escape hatch for existing uses of map that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to map as this method will be removed in V2.
Args
map_func A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to another nested structure of tensors.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. options View source
options()
Returns the options for this dataset and its inputs.
Returns A tf.data.Options object representing the dataset options.
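A minimal sketch showing that options() reflects options set upstream with with_options:
import tensorflow as tf
ds = tf.data.Dataset.range(3)
opts = tf.data.Options()
opts.experimental_deterministic = False
ds = ds.with_options(opts)
print(ds.options().experimental_deterministic)  # False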
padded_batch View source
padded_batch(
batch_size, padded_shapes=None, padding_values=None, drop_remainder=False
)
Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like tf.data.Dataset.batch, the components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Unlike tf.data.Dataset.batch, the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in padded_shapes. The padded_shapes argument determines the resulting shape for each dimension of each component in an output element: If the dimension is a constant, the component will be padded out to that length in that dimension. If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension.
A = (tf.data.Dataset
.range(1, 5, output_type=tf.int32)
.map(lambda x: tf.fill([x], x)))
# Pad to the smallest per-batch size that fits all elements.
B = A.padded_batch(2)
for element in B.as_numpy_iterator():
print(element)
[[1 0]
[2 2]]
[[3 3 3 0]
[4 4 4 4]]
# Pad to a fixed size.
C = A.padded_batch(2, padded_shapes=5)
for element in C.as_numpy_iterator():
print(element)
[[1 0 0 0 0]
[2 2 0 0 0]]
[[3 3 3 0 0]
[4 4 4 4 0]]
# Pad with a custom value.
D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
for element in D.as_numpy_iterator():
print(element)
[[ 1 -1 -1 -1 -1]
[ 2 2 -1 -1 -1]]
[[ 3 3 3 -1 -1]
[ 4 4 4 4 -1]]
# Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]),
([4, 5], [11, 12])]
dataset = tf.data.Dataset.from_generator(
lambda: iter(elements), (tf.int32, tf.int32))
# Pad the first component of the tuple to length 4, and the second
# component to the smallest size that fits.
dataset = dataset.padded_batch(2,
padded_shapes=([4], [None]),
padding_values=(-1, 100))
list(dataset.as_numpy_iterator())
[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
array([[ 10, 100], [ 11, 12]], dtype=int32))]
# Pad with a single value and multiple components.
E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
for element in E.as_numpy_iterator():
print(element)
(array([[ 1, -1],
[ 2, 2]], dtype=int32), array([[ 1, -1],
[ 2, 2]], dtype=int32))
(array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32))
See also tf.data.experimental.dense_to_sparse_batch, which combines elements that may have different shapes into a tf.sparse.SparseTensor.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
padded_shapes (Optional.) A nested structure of tf.TensorShape or tf.int64 vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. padded_shapes must be set if any component has an unknown rank.
padding_values (Optional.) A nested structure of scalar-shaped tf.Tensor, representing the padding values to use for the respective components. None represents that the nested structure should be padded with default values. Defaults are 0 for numeric types and the empty string for string types. The padding_values should have the same structure as the input dataset. If padding_values is a single element and the input dataset has multiple components, then the same padding_values will be used to pad every component of the dataset. If padding_values is a scalar, then its value will be broadcasted to match the shape of each component.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset.
Raises
ValueError If a component has an unknown rank, and the padded_shapes argument is not set. prefetch View source
prefetch(
buffer_size
)
Creates a Dataset that prefetches elements from this dataset. Most dataset input pipelines should end with a call to prefetch. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
Note: Like other Dataset methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. examples.prefetch(2) will prefetch two elements (2 examples), while examples.batch(20).prefetch(2) will prefetch 2 elements (2 batches, of 20 examples each).
dataset = tf.data.Dataset.range(3)
dataset = dataset.prefetch(2)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching.
Returns
Dataset A Dataset. range View source
@staticmethod
range(
*args, **kwargs
)
Creates a Dataset of a step-separated range of values.
list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
list(Dataset.range(5, 1).as_numpy_iterator())
[]
list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
Args
*args follows the same semantics as Python's range. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2].
**kwargs output_type: The dtype of the elements of the resulting dataset. (Optional, default: tf.int64).
Returns
Dataset A RangeDataset.
Raises
ValueError if len(args) == 0. reduce View source
reduce(
initial_state, reduce_func
)
Reduces the input dataset to a single element. The transformation calls reduce_func successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The initial_state argument is used for the initial state and the final state is returned as the result.
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
10
Args
initial_state An element representing the initial state of the transformation.
reduce_func A function that maps (old_state, input_element) to new_state. It must take two arguments and return a new element. The structure of new_state must match the structure of initial_state.
Returns A dataset element corresponding to the final state of the transformation.
repeat View source
repeat(
count=None
)
Repeats this dataset so each original value is seen count times.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
Note: If this dataset is a function of global state (e.g. a random number generator), then different repetitions may produce different elements.
Args
count (Optional.) A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely.
Returns
Dataset A Dataset. shard View source
shard(
num_shards, index
)
Creates a Dataset that includes only 1/num_shards of this dataset. shard is deterministic. The Dataset produced by A.shard(n, i) will contain all elements of A whose index mod n = i.
A = tf.data.Dataset.range(10)
B = A.shard(num_shards=3, index=0)
list(B.as_numpy_iterator())
[0, 3, 6, 9]
C = A.shard(num_shards=3, index=1)
list(C.as_numpy_iterator())
[1, 4, 7]
D = A.shard(num_shards=3, index=2)
list(D.as_numpy_iterator())
[2, 5, 8]
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows:
d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Important caveats: Be sure to shard before you use any randomizing operator (such as shuffle). Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline:
d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Args
num_shards A tf.int64 scalar tf.Tensor, representing the number of shards operating in parallel.
index A tf.int64 scalar tf.Tensor, representing the worker index.
Returns
Dataset A Dataset.
Raises
InvalidArgumentError if num_shards or index are illegal values.
Note: error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. passing in a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.)
shuffle View source
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None
)
Randomly shuffles the elements of this dataset. This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. reshuffle_each_iteration controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the repeat transformation:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 0, 2]
In TF 2.0, tf.data.Dataset objects are Python iterables which makes it possible to also create epochs through Python iteration:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the number of elements from this dataset from which the new dataset will sample.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
reshuffle_each_iteration (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to True.)
Returns
Dataset A Dataset. skip View source
skip(
count
)
Creates a Dataset that skips count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.skip(7)
list(dataset.as_numpy_iterator())
[7, 8, 9]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be skipped to form the new dataset. If count is greater than the size of this dataset, the new dataset will contain no elements. If count is -1, skips the entire dataset.
Returns
Dataset A Dataset. take View source
take(
count
)
Creates a Dataset with at most count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.take(3)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be taken to form the new dataset. If count is -1, or if count is greater than the size of this dataset, the new dataset will contain all elements of this dataset.
Returns
Dataset A Dataset. unbatch View source
unbatch()
Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped [B, a0, a1, ...], where B may vary for each input element, then for each element in the dataset, the unbatched dataset will contain B consecutive elements of shape [a0, a1, ...].
elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
dataset = dataset.unbatch()
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 1, 2, 3, 4]
Note: unbatch requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of unbatch.
Returns A Dataset.
window View source
window(
size, shift=None, stride=1, drop_remainder=False
)
Combines (nests of) input elements into a dataset of (nests of) windows. A "window" is a finite dataset of flat elements of size size (or possibly fewer if there are not enough input elements to fill the window and drop_remainder evaluates to False). The shift argument determines the number of input elements by which the window moves on each iteration. If windows and elements are both numbered starting at 0, the first element in window k will be element k * shift of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. The stride argument determines the stride between the input elements within a window, i.e. every stride-th input element is taken. For example:
dataset = tf.data.Dataset.range(7).window(2)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1]
[2, 3]
[4, 5]
[6]
dataset = tf.data.Dataset.range(7).window(3, 2, 1, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1, 2]
[2, 3, 4]
[4, 5, 6]
dataset = tf.data.Dataset.range(7).window(3, 1, 2, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
Note that when the window transformation is applied to a dataset of nested elements, it produces a dataset of nested windows.
nested = ([1, 2, 3, 4], [5, 6, 7, 8])
dataset = tf.data.Dataset.from_tensor_slices(nested).window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print(tuple(to_numpy(component) for component in window))
([1, 2], [5, 6])
([3, 4], [7, 8])
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]})
dataset = dataset.window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print({'a': to_numpy(window['a'])})
{'a': [1, 2]}
{'a': [3, 4]}
Args
size A tf.int64 scalar tf.Tensor, representing the number of elements of the input dataset to combine into a window. Must be positive.
shift (Optional.) A tf.int64 scalar tf.Tensor, representing the number of input elements by which the window moves in each iteration. Defaults to size. Must be positive.
stride (Optional.) A tf.int64 scalar tf.Tensor, representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element".
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last windows should be dropped if their size is smaller than size.
Returns
Dataset A Dataset of (nests of) windows -- finite datasets of flat elements created from the (nests of) input elements. with_options View source
with_options(
options
)
Returns a new tf.data.Dataset with the given options set. The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
cycle_length=3,
num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.experimental_deterministic = False
ds = ds.with_options(options)
Args
options A tf.data.Options that identifies the options to use.
Returns
Dataset A Dataset with the given options.
Raises
ValueError when an option is set more than once to a non-default value. zip View source
@staticmethod
zip(
datasets
)
Creates a Dataset by zipping together the given datasets. This method has similar semantics to the built-in zip() function in Python, with the main difference being that the datasets argument can be an arbitrary nested structure of Dataset objects.
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
# [9, 10],
# [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
Args
datasets A nested structure of datasets.
Returns
Dataset A Dataset. __bool__ View source
__bool__()
__iter__ View source
__iter__()
Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol.
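A minimal sketch, assuming eager execution:
it = iter(tf.data.Dataset.range(2))
print(next(it))  # tf.Tensor(0, shape=(), dtype=int64)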
Returns A tf.data.Iterator for the elements of this dataset.
Raises
RuntimeError If not inside of tf.function and not executing eagerly. __len__ View source
__len__()
Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use tf.data.Dataset.cardinality instead.
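For example, a minimal sketch assuming eager execution:
dataset = tf.data.Dataset.range(4)
len(dataset)
4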
Returns An integer representing the length of the dataset.
Raises
RuntimeError If the dataset length is unknown or infinite, or if eager execution is not enabled. __nonzero__ View source
__nonzero__() | tensorflow.compat.v1.data.experimental.csvdataset |
tf.compat.v1.data.experimental.make_batched_features_dataset Returns a Dataset of feature dictionaries from Example protos.
tf.compat.v1.data.experimental.make_batched_features_dataset(
file_pattern, batch_size, features, reader=None, label_key=None,
reader_args=None, num_epochs=None, shuffle=True, shuffle_buffer_size=10000,
shuffle_seed=None, prefetch_buffer_size=None, reader_num_threads=None,
parser_num_threads=None, sloppy_ordering=False, drop_final_batch=False
)
If the label_key argument is provided, returns a Dataset of (features, label) tuples, where features is a dictionary of feature tensors. Example:
serialized_examples = [
features {
feature { key: "age" value { int64_list { value: [ 0 ] } } }
feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
feature { key: "kws" value { bytes_list { value: [ "code", "art" ] } } }
},
features {
feature { key: "age" value { int64_list { value: [] } } }
feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
feature { key: "kws" value { bytes_list { value: [ "sports" ] } } }
}
]
We can use arguments: features: {
"age": FixedLenFeature([], dtype=tf.int64, default_value=-1),
"gender": FixedLenFeature([], dtype=tf.string),
"kws": VarLenFeature(dtype=tf.string),
}
And the expected output is: {
"age": [[0], [-1]],
"gender": [["f"], ["f"]],
"kws": SparseTensor(
indices=[[0, 0], [0, 1], [1, 0]],
values=["code", "art", "sports"]
dense_shape=[2, 2]),
}
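A minimal usage sketch; the file pattern here is hypothetical and the feature spec mirrors the example above:
features = {
    "age": tf.io.FixedLenFeature([], dtype=tf.int64, default_value=-1),
    "gender": tf.io.FixedLenFeature([], dtype=tf.string),
    "kws": tf.io.VarLenFeature(dtype=tf.string),
}
dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(
    file_pattern="/path/to/examples*.tfrecord",  # hypothetical pattern
    batch_size=2,
    features=features,
    num_epochs=1)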
Args
file_pattern List of files or patterns of file paths containing Example records. See tf.io.gfile.glob for pattern rules.
batch_size An int representing the number of records to combine in a single batch.
features A dict mapping feature keys to FixedLenFeature or VarLenFeature values. See tf.io.parse_example.
reader A function or class that can be called with a filenames tensor and (optional) reader_args and returns a Dataset of Example tensors. Defaults to tf.data.TFRecordDataset.
label_key (Optional) A string corresponding to the key under which labels are stored in the tf.Examples. If provided, it must be one of the features keys, otherwise a ValueError is raised.
reader_args Additional arguments to pass to the reader class.
num_epochs Integer specifying the number of times to read through the dataset. If None, cycles through the dataset forever. Defaults to None.
shuffle A boolean, indicates whether the input should be shuffled. Defaults to True.
shuffle_buffer_size Buffer size of the ShuffleDataset. A large capacity ensures better shuffling but would increase memory usage and startup time.
shuffle_seed Randomization seed to use for shuffling.
prefetch_buffer_size Number of feature batches to prefetch in order to improve performance. Recommended value is the number of batches consumed per training step. Defaults to auto-tune.
reader_num_threads Number of threads used to read Example records. If >1, the results will be interleaved. Defaults to 1.
parser_num_threads Number of threads to use for parsing Example tensors into a dictionary of Feature tensors. Defaults to 2.
sloppy_ordering If True, reading performance will be improved at the cost of non-deterministic ordering. If False, the order of elements produced is deterministic prior to shuffling (elements are still randomized if shuffle=True; note that if the seed is set, then the order of elements after shuffling is deterministic). Defaults to False.
drop_final_batch If True, and the batch size does not evenly divide the input dataset size, the final smaller batch will be dropped. Defaults to False.
Returns A dataset of dict elements (or a tuple of dict elements and label). Each dict maps feature keys to Tensor or SparseTensor objects.
Raises
TypeError If reader is of the wrong type.
ValueError If label_key is not one of the features keys. | tensorflow.compat.v1.data.experimental.make_batched_features_dataset |
tf.compat.v1.data.experimental.make_csv_dataset Reads CSV files into a dataset.
tf.compat.v1.data.experimental.make_csv_dataset(
file_pattern, batch_size, column_names=None, column_defaults=None,
label_name=None, select_columns=None, field_delim=',',
use_quote_delim=True, na_value='', header=True, num_epochs=None,
shuffle=True, shuffle_buffer_size=10000, shuffle_seed=None,
prefetch_buffer_size=None, num_parallel_reads=None, sloppy=False,
num_rows_for_inference=100, compression_type=None, ignore_errors=False
)
Reads CSV files into a dataset, where each element is a (features, labels) tuple that corresponds to a batch of CSV rows. The features dictionary maps feature column names to Tensors containing the corresponding feature data, and labels is a Tensor containing the batch's label data.
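A minimal usage sketch; the file pattern and label column name are hypothetical, and the batch inspection assumes eager execution:
dataset = tf.compat.v1.data.experimental.make_csv_dataset(
    file_pattern="/path/to/train*.csv",  # hypothetical pattern
    batch_size=32,
    label_name="label",                  # hypothetical label column
    num_epochs=1)
for features, labels in dataset.take(1):
    print(sorted(features.keys()), labels.shape)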
Args
file_pattern List of files or patterns of file paths containing CSV records. See tf.io.gfile.glob for pattern rules.
batch_size An int representing the number of records to combine in a single batch.
column_names An optional list of strings that corresponds to the CSV columns, in order. One per column of the input record. If this is not provided, infers the column names from the first row of the records. These names will be the keys of the features dict of each dataset element.
column_defaults An optional list of default values for the CSV fields. One item per selected column of the input record. Each item in the list is either a valid CSV dtype (float32, float64, int32, int64, or string), or a Tensor with one of the aforementioned types. The tensor can either be a scalar default value (if the column is optional), or an empty tensor (if the column is required). If a dtype is provided instead of a tensor, the column is also treated as required. If this list is not provided, tries to infer types based on reading the first num_rows_for_inference rows of files specified, and assumes all columns are optional, defaulting to 0 for numeric values and "" for string values. If both this and select_columns are specified, these must have the same lengths, and column_defaults is assumed to be sorted in order of increasing column index.
label_name An optional string corresponding to the label column. If provided, the data for this column is returned as a separate Tensor from the features dictionary, so that the dataset complies with the format expected by a tf.Estimator.train or tf.Estimator.evaluate input function.
select_columns An optional list of integer indices or string column names, that specifies a subset of columns of CSV data to select. If column names are provided, these must correspond to names provided in column_names or inferred from the file header lines. When this argument is specified, only a subset of CSV columns will be parsed and returned, corresponding to the columns specified. Using this results in faster parsing and lower memory usage. If both this and column_defaults are specified, these must have the same lengths, and column_defaults is assumed to be sorted in order of increasing column index.
field_delim An optional string. Defaults to ",". Char delimiter to separate fields in a record.
use_quote_delim An optional bool. Defaults to True. If false, treats double quotation marks as regular characters inside of the string fields.
na_value Additional string to recognize as NA/NaN.
header A bool that indicates whether the first rows of provided CSV files correspond to header lines with column names, and should not be included in the data.
num_epochs An int specifying the number of times this dataset is repeated. If None, cycles through the dataset forever.
shuffle A bool that indicates whether the input should be shuffled.
shuffle_buffer_size Buffer size to use for shuffling. A large buffer size ensures better shuffling, but increases memory usage and startup time.
shuffle_seed Randomization seed to use for shuffling.
prefetch_buffer_size An int specifying the number of feature batches to prefetch for performance improvement. Recommended value is the number of batches consumed per training step. Defaults to auto-tune.
num_parallel_reads Number of threads used to read CSV records from files. If >1, the results will be interleaved. Defaults to 1.
sloppy If True, reading performance will be improved at the cost of non-deterministic ordering. If False, the order of elements produced is deterministic prior to shuffling (elements are still randomized if shuffle=True; note that if the seed is set, then the order of elements after shuffling is deterministic). Defaults to False.
num_rows_for_inference Number of rows of a file to use for type inference if record_defaults is not provided. If None, reads all the rows of all the files. Defaults to 100.
compression_type (Optional.) A tf.string scalar evaluating to one of "" (no compression), "ZLIB", or "GZIP". Defaults to no compression.
ignore_errors (Optional.) If True, ignores errors with CSV file parsing, such as malformed data or empty lines, and moves on to the next valid CSV record. Otherwise, the dataset raises an error and stops processing when encountering any invalid records. Defaults to False.
Returns A dataset, where each element is a (features, labels) tuple that corresponds to a batch of batch_size CSV rows. The features dictionary maps feature column names to Tensors containing the corresponding column data, and labels is a Tensor containing the column data for the label column specified by label_name.
Raises
ValueError If any of the arguments is malformed. | tensorflow.compat.v1.data.experimental.make_csv_dataset |
tf.compat.v1.data.experimental.map_and_batch_with_legacy_function Fused implementation of map and batch. (deprecated)
tf.compat.v1.data.experimental.map_and_batch_with_legacy_function(
map_func, batch_size, num_parallel_batches=None, drop_remainder=False,
num_parallel_calls=None
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.experimental.map_and_batch().
Note: This is an escape hatch for existing uses of map_and_batch that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to map_and_batch as this method will be removed in V2.
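A minimal sketch of the calling convention via Dataset.apply (TF 1 graph mode; shown for illustration only, since this API is deprecated):
dataset = tf.compat.v1.data.Dataset.range(10)
dataset = dataset.apply(
    tf.compat.v1.data.experimental.map_and_batch_with_legacy_function(
        map_func=lambda x: x * 2,
        batch_size=4))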
Args
map_func A function mapping a nested structure of tensors to another nested structure of tensors.
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
num_parallel_batches (Optional.) A tf.int64 scalar tf.Tensor, representing the number of batches to create in parallel. On one hand, higher values can help mitigate the effect of stragglers. On the other hand, higher values can increase contention if CPU is scarce.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in case its size is smaller than desired; the default behavior is not to drop the smaller batch.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process in parallel. If not specified, batch_size * num_parallel_batches elements will be processed in parallel. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply.
Raises
ValueError If both num_parallel_batches and num_parallel_calls are specified. | tensorflow.compat.v1.data.experimental.map_and_batch_with_legacy_function |
tf.compat.v1.data.experimental.RaggedTensorStructure DEPRECATED FUNCTION
tf.compat.v1.data.experimental.RaggedTensorStructure(
dtype, shape, ragged_rank
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.RaggedTensorSpec instead. | tensorflow.compat.v1.data.experimental.raggedtensorstructure |
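For reference, a sketch of the deprecated call and its recommended replacement:
# Deprecated:
spec = tf.compat.v1.data.experimental.RaggedTensorStructure(
    tf.int32, [None, None], 1)
# Preferred:
spec = tf.RaggedTensorSpec(shape=[None, None], dtype=tf.int32, ragged_rank=1)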
tf.compat.v1.data.experimental.RandomDataset A Dataset of pseudorandom values. Inherits From: Dataset
tf.compat.v1.data.experimental.RandomDataset(
seed=None
)
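A minimal sketch, assuming eager execution; elements are pseudorandom tf.int64 scalars:
ds = tf.compat.v1.data.experimental.RandomDataset(seed=42).take(2)
for x in ds:
    print(x.dtype)  # tf.int64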
Attributes
element_spec The type specification of an element of this dataset.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
output_classes Returns the class of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_classes(dataset).
output_shapes Returns the shape of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_shapes(dataset).
output_types Returns the type of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_types(dataset).
Methods apply View source
apply(
transformation_func
)
Applies a transformation function to this dataset. apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset.
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
Args
transformation_func A function that takes one Dataset argument and returns a Dataset.
Returns
Dataset The Dataset returned by applying transformation_func to this dataset. as_numpy_iterator View source
as_numpy_iterator()
Returns an iterator which converts all elements of the dataset to numpy. Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
print(element)
1
2
3
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
as_numpy_iterator() will preserve the nested structure of dataset elements.
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
Returns An iterable over the elements of the dataset, with their tensors converted to numpy arrays.
Raises
TypeError if an element contains a non-Tensor value.
RuntimeError if eager execution is not enabled. batch View source
batch(
batch_size, drop_remainder=False
)
Combines consecutive elements of this dataset into batches.
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset. cache View source
cache(
filename=''
)
Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to .cache() will have no effect until the cache file is removed or the filename is changed.
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache("/path/to/file") # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache.
Args
filename A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory.
Returns
Dataset A Dataset. cardinality View source
cardinality()
Returns the cardinality of the dataset, if known. cardinality may return tf.data.INFINITE_CARDINALITY if the dataset contains an infinite number of elements or tf.data.UNKNOWN_CARDINALITY if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file).
dataset = tf.data.Dataset.range(42)
print(dataset.cardinality().numpy())
42
dataset = dataset.repeat()
cardinality = dataset.cardinality()
print((cardinality == tf.data.INFINITE_CARDINALITY).numpy())
True
dataset = dataset.filter(lambda x: True)
cardinality = dataset.cardinality()
print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy())
True
Returns A scalar tf.int64 Tensor representing the cardinality of the dataset. If the cardinality is infinite or unknown, cardinality returns the named constants tf.data.INFINITE_CARDINALITY and tf.data.UNKNOWN_CARDINALITY respectively.
concatenate View source
concatenate(
dataset
)
Creates a Dataset by concatenating the given dataset with this dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
# The input dataset and dataset to be concatenated should have the same
# nested structures and output types.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
Args
dataset Dataset to be concatenated.
Returns
Dataset A Dataset. enumerate View source
enumerate(
start=0
)
Enumerates the elements of this dataset. It is similar to Python's enumerate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
print(element)
(5, 1)
(6, 2)
(7, 3)
# The nested structure of the input dataset determines the structure of
# elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
dataset = dataset.enumerate()
for element in dataset.as_numpy_iterator():
print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
Args
start A tf.int64 scalar tf.Tensor, representing the start value for enumeration.
Returns
Dataset A Dataset. filter View source
filter(
predicate
)
Filters this dataset according to predicate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
[1, 2]
# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
return tf.math.equal(x, 1)
dataset = dataset.filter(filter_fn)
list(dataset.as_numpy_iterator())
[1]
Args
predicate A function mapping a dataset element to a boolean.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. filter_with_legacy_function View source
filter_with_legacy_function(
predicate
)
Filters this dataset according to predicate. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.filter().
Note: This is an escape hatch for existing uses of filter that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to filter as this method will be removed in V2.
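A minimal sketch (TF 1 graph mode; new code should use filter instead):
dataset = tf.compat.v1.data.Dataset.range(5)
dataset = dataset.filter_with_legacy_function(lambda x: x < 3)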
Args
predicate A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to a scalar tf.bool tensor.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. flat_map View source
flat_map(
map_func
)
Maps map_func across this dataset and flattens the result. Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:
dataset = tf.data.Dataset.from_tensor_slices(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7, 8, 9]
tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1).
Args
map_func A function mapping a dataset element to a dataset.
Returns
Dataset A Dataset. from_generator View source
@staticmethod
from_generator(
generator, output_types=None, output_shapes=None, args=None,
output_signature=None
)
Creates a Dataset whose elements are generated by generator. (deprecated arguments) Warning: SOME ARGUMENTS ARE DEPRECATED: (output_shapes, output_types). They will be removed in a future version. Instructions for updating: Use output_signature instead. The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with either the given output_signature argument or with the given output_types and (optionally) output_shapes arguments, whichever was specified. The recommended way to call from_generator is to use the output_signature argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by the tf.TypeSpec objects from the output_signature argument:
def gen():
ragged_tensor = tf.ragged.constant([[1, 2], [3]])
yield 42, ragged_tensor
dataset = tf.data.Dataset.from_generator(
gen,
output_signature=(
tf.TensorSpec(shape=(), dtype=tf.int32),
tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))
list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
There is also a deprecated way to call from_generator, by passing either the output_types argument alone or together with the output_shapes argument. In this case the output of the function will be assumed to consist of tf.Tensor objects with the types defined by output_types and with shapes which are either unknown or defined by output_shapes.
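A sketch of this deprecated calling convention:
def gen():
    for i in range(3):
        yield i
dataset = tf.data.Dataset.from_generator(
    gen, output_types=tf.int64, output_shapes=())
list(dataset.as_numpy_iterator())
[0, 1, 2]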
Note: The current implementation of Dataset.from_generator() uses tf.numpy_function and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment.
Note: If generator depends on mutable global variables or other external state, be aware that the runtime may invoke generator multiple times (in order to support repeating the Dataset) and at any time between the call to Dataset.from_generator() and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in generator before calling Dataset.from_generator().
Args
generator A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args.
output_types (Optional.) A nested structure of tf.DType objects corresponding to each component of an element yielded by generator.
output_shapes (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator.
args (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments.
output_signature (Optional.) A nested structure of tf.TypeSpec objects corresponding to each component of an element yielded by generator.
Returns
Dataset A Dataset. from_sparse_tensor_slices View source
@staticmethod
from_sparse_tensor_slices(
sparse_tensor
)
Splits each rank-N tf.sparse.SparseTensor in this dataset row-wise. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.from_tensor_slices().
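For migration, a minimal sketch using the recommended replacement:
st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 1]],
                            values=[1, 2],
                            dense_shape=[2, 2])
dataset = tf.data.Dataset.from_tensor_slices(st)  # rank-1 sparse elements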
Args
sparse_tensor A tf.sparse.SparseTensor.
Returns
Dataset A Dataset of rank-(N-1) sparse tensors. from_tensor_slices View source
@staticmethod
from_tensor_slices(
tensors
)
Creates a Dataset whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.
# Slicing a 1D tensor produces scalar tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
list(dataset.as_numpy_iterator())
[1, 2, 3]
# Slicing a 2D tensor produces 1D tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
# Slicing a tuple of 1D tensors produces tuple elements containing
# scalar tensors.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
# Dictionary structure is also preserved.
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
{'a': 2, 'b': 4}]
True
# Two tensors can be combined into one Dataset object.
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor
dataset = Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted
# to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features)
labels_dataset = Dataset.from_tensor_slices(labels)
dataset = Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset
# in similar fashion.
batched_features = tf.constant([[[1, 3], [2, 3]],
[[2, 1], [1, 2]],
[[3, 3], [3, 2]]], shape=(3, 2, 2))
batched_labels = tf.constant([['A', 'A'],
['B', 'B'],
['A', 'B']], shape=(3, 2, 1))
dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator():
print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element, with each component having the same size in the first dimension.
Returns
Dataset A Dataset. from_tensors View source
@staticmethod
from_tensors(
tensors
)
Creates a Dataset with a single element, comprising the given tensors. from_tensors produces a dataset containing only a single element. To slice the input tensor into multiple elements, use from_tensor_slices instead.
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element.
Returns
Dataset A Dataset. interleave View source
interleave(
map_func, cycle_length=None, block_length=None, num_parallel_calls=None,
deterministic=None
)
Maps map_func across this dataset, and interleaves the results. For example, you can use Dataset.interleave() to process many input files concurrently:
# Preprocess 4 files concurrently, and interleave blocks of 16 records
# from each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
def parse_fn(filename):
return tf.data.Dataset.range(10)
dataset = dataset.interleave(lambda x:
tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
cycle_length=4, block_length=16)
The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. If you set cycle_length to 1, this transformation will handle one input element at a time, and will produce identical results to tf.data.Dataset.flat_map. In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. For example:
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
lambda x: Dataset.from_tensors(x).repeat(6),
cycle_length=2, block_length=4)
list(dataset.as_numpy_iterator())
[1, 1, 1, 1,
2, 2, 2, 2,
1, 1,
2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
3, 3,
4, 4,
5, 5, 5, 5,
5, 5]
Note: The order of elements yielded by this transformation is deterministic, as long as map_func is a pure function and deterministic=True. If map_func contains any stateful operations, the order in which that state is accessed is undefined.
Performance can often be improved by setting num_parallel_calls so that interleave will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set deterministic=False.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to a dataset.
cycle_length (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If num_parallel_calls is set to tf.data.AUTOTUNE, the cycle_length argument identifies the maximum degree of parallelism.
block_length (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1.
num_parallel_calls (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. list_files View source
@staticmethod
list_files(
file_pattern, shuffle=None, seed=None
)
A dataset of all files matching one or more glob patterns. The file_pattern argument should be a small number of glob patterns. If your filenames have already been globbed, use Dataset.from_tensor_slices(filenames) instead, as re-globbing every filename with list_files may result in poor performance with remote storage systems.
Note: The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a seed or shuffle=False to get results in a deterministic order.
Example: If we had the following files on our filesystem:
/path/to/dir/a.txt
/path/to/dir/b.py
/path/to/dir/c.py
If we pass "/path/to/dir/*.py" as the file_pattern, the dataset would produce:
/path/to/dir/b.py
/path/to/dir/c.py
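A sketch of that example in code, assuming eager execution (paths are illustrative):
dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
for f in dataset:
    print(f.numpy())  # e.g. b'/path/to/dir/b.py', b'/path/to/dir/c.py'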
Args
file_pattern A string, a list of strings, or a tf.Tensor of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched.
shuffle (Optional.) If True, the file names will be shuffled randomly. Defaults to True.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
Returns
Dataset A Dataset of strings corresponding to file names. make_initializable_iterator View source
make_initializable_iterator(
shared_name=None
)
Creates an iterator for elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating an iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_initializable_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
Note: The returned iterator will be in an uninitialized state, and you must run the iterator.initializer operation before using it:
# Building graph ...
dataset = ...
iterator = dataset.make_initializable_iterator()
next_value = iterator.get_next() # This is a Tensor.
# ... from within a session ...
sess.run(iterator.initializer)
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
Args
shared_name (Optional.) If non-empty, the returned iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server).
Returns A tf.data.Iterator for elements of this dataset.
Raises
RuntimeError If eager execution is enabled. make_one_shot_iterator View source
make_one_shot_iterator()
Creates an iterator for elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating an iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_one_shot_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
Note: The returned iterator will be initialized automatically. A "one-shot" iterator does not currently support re-initialization. For that see make_initializable_iterator.
Example: # Building graph ...
dataset = ...
next_value = dataset.make_one_shot_iterator().get_next()
# ... from within a session ...
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
Returns A tf.data.Iterator for elements of this dataset.
map View source
map(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. map_func can be used to change both the values and the structure of a dataset's elements. For example, adding 1 to each element, or projecting a subset of element components.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
The input signature of map_func is determined by the structure of each element in this dataset.
dataset = Dataset.range(5)
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = dataset.map(lambda x: x + 1)
# Each element is a tuple containing two `tf.Tensor` objects.
elements = [(1, "foo"), (2, "bar"), (3, "baz")]
dataset = tf.data.Dataset.from_generator(
lambda: elements, (tf.int32, tf.string))
# `map_func` takes two arguments of type `tf.Tensor`. This function
# projects out just the first component.
result = dataset.map(lambda x_int, y_str: x_int)
list(result.as_numpy_iterator())
[1, 2, 3]
# Each element is a dictionary mapping strings to `tf.Tensor` objects.
elements = ([{"a": 1, "b": "foo"},
{"a": 2, "b": "bar"},
{"a": 3, "b": "baz"}])
dataset = tf.data.Dataset.from_generator(
lambda: elements, {"a": tf.int32, "b": tf.string})
# `map_func` takes a single argument of type `dict` with the same keys
# as the elements.
result = dataset.map(lambda d: str(d["a"]) + d["b"])
The value or values returned by map_func determine the structure of each element in the returned dataset.
dataset = tf.data.Dataset.range(3)
# `map_func` returns two `tf.Tensor` objects.
def g(x):
return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(x):
return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
# `map_func` can return nested structures.
def i(x):
return (37.0, [42, 16]), "foo"
result = dataset.map(i)
result.element_spec
((TensorSpec(shape=(), dtype=tf.float32, name=None),
TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
TensorSpec(shape=(), dtype=tf.string, name=None))
map_func can accept as arguments and return any type of dataset element. Note that irrespective of the context in which map_func is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use tf.py_function, which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
# transform a string tensor to upper case string using a Python function
def upper_case_fn(t: tf.Tensor):
return t.numpy().decode('utf-8').upper()
d = d.map(lambda x: tf.py_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
3) Use tf.numpy_function, which also allows you to write arbitrary Python code. Note that tf.py_function accepts tf.Tensor whereas tf.numpy_function accepts numpy arrays and returns only numpy arrays. For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
def upper_case_fn(t: np.ndarray):
return t.decode('utf-8').upper()
d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
Note that the use of tf.numpy_function and tf.py_function in general precludes the possibility of executing user-defined transformations in parallel (because of the Python GIL). Performance can often be improved by setting num_parallel_calls so that map will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set deterministic=False.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to another dataset element.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. map_with_legacy_function View source
map_with_legacy_function(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.map() instead.
Note: This is an escape hatch for existing uses of map that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to map as this method will be removed in V2.
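As a migration sketch (the transformation shown is illustrative), most call sites can switch by simply renaming the method:
dataset = tf.data.Dataset.range(5)
# Deprecated: dataset = dataset.map_with_legacy_function(lambda x: x + 1)
dataset = dataset.map(lambda x: x + 1)  # preferred replacement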
Args
map_func A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to another nested structure of tensors.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. options View source
options()
Returns the options for this dataset and its inputs.
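A minimal sketch showing that options applied via with_options are visible through options():
ds = tf.data.Dataset.range(3)
options = tf.data.Options()
options.experimental_deterministic = False
ds = ds.with_options(options)
print(ds.options().experimental_deterministic)  # False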
Returns A tf.data.Options object representing the dataset options.
padded_batch View source
padded_batch(
batch_size, padded_shapes=None, padding_values=None, drop_remainder=False
)
Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like tf.data.Dataset.batch, the components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Unlike tf.data.Dataset.batch, the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in padded_shapes. The padded_shapes argument determines the resulting shape for each dimension of each component in an output element: If the dimension is a constant, the component will be padded out to that length in that dimension. If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension.
A = (tf.data.Dataset
.range(1, 5, output_type=tf.int32)
.map(lambda x: tf.fill([x], x)))
# Pad to the smallest per-batch size that fits all elements.
B = A.padded_batch(2)
for element in B.as_numpy_iterator():
print(element)
[[1 0]
[2 2]]
[[3 3 3 0]
[4 4 4 4]]
# Pad to a fixed size.
C = A.padded_batch(2, padded_shapes=5)
for element in C.as_numpy_iterator():
print(element)
[[1 0 0 0 0]
[2 2 0 0 0]]
[[3 3 3 0 0]
[4 4 4 4 0]]
# Pad with a custom value.
D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
for element in D.as_numpy_iterator():
print(element)
[[ 1 -1 -1 -1 -1]
[ 2 2 -1 -1 -1]]
[[ 3 3 3 -1 -1]
[ 4 4 4 4 -1]]
# Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]),
([4, 5], [11, 12])]
dataset = tf.data.Dataset.from_generator(
lambda: iter(elements), (tf.int32, tf.int32))
# Pad the first component of the tuple to length 4, and the second
# component to the smallest size that fits.
dataset = dataset.padded_batch(2,
padded_shapes=([4], [None]),
padding_values=(-1, 100))
list(dataset.as_numpy_iterator())
[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
array([[ 10, 100], [ 11, 12]], dtype=int32))]
# Pad with a single value and multiple components.
E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
for element in E.as_numpy_iterator():
print(element)
(array([[ 1, -1],
[ 2, 2]], dtype=int32), array([[ 1, -1],
[ 2, 2]], dtype=int32))
(array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32))
See also tf.data.experimental.dense_to_sparse_batch, which combines elements that may have different shapes into a tf.sparse.SparseTensor.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
padded_shapes (Optional.) A nested structure of tf.TensorShape or tf.int64 vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. padded_shapes must be set if any component has an unknown rank.
padding_values (Optional.) A nested structure of scalar-shaped tf.Tensor, representing the padding values to use for the respective components. None represents that the nested structure should be padded with default values. Defaults are 0 for numeric types and the empty string for string types. The padding_values should have the same structure as the input dataset. If padding_values is a single element and the input dataset has multiple components, then the same padding_values will be used to pad every component of the dataset. If padding_values is a scalar, then its value will be broadcasted to match the shape of each component.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset.
Raises
ValueError If a component has an unknown rank, and the padded_shapes argument is not set. prefetch View source
prefetch(
buffer_size
)
Creates a Dataset that prefetches elements from this dataset. Most dataset input pipelines should end with a call to prefetch. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
Note: Like other Dataset methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. examples.prefetch(2) will prefetch two elements (2 examples), while examples.batch(20).prefetch(2) will prefetch 2 elements (2 batches, of 20 examples each).
dataset = tf.data.Dataset.range(3)
dataset = dataset.prefetch(2)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching.
Returns
Dataset A Dataset. range View source
@staticmethod
range(
*args, **kwargs
)
Creates a Dataset of a step-separated range of values.
list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
list(Dataset.range(5, 1).as_numpy_iterator())
[]
list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
Args
*args follows the same semantics as Python's built-in range. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2].
**kwargs output_type: The dtype of the elements of the resulting dataset. (Optional, default: tf.int64).
Returns
Dataset A RangeDataset.
Raises
ValueError if len(args) == 0. reduce View source
reduce(
initial_state, reduce_func
)
Reduces the input dataset to a single element. The transformation calls reduce_func successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The initial_state argument is used for the initial state and the final state is returned as the result.
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
10
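The state may also be a nested structure. A minimal sketch (assuming eager execution and numpy imported as np) tracking a count and a running sum together:
count, total = tf.data.Dataset.range(5).reduce(
    (np.int64(0), np.int64(0)),
    lambda state, x: (state[0] + 1, state[1] + x))
print(count.numpy(), total.numpy())  # 5 10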
Args
initial_state An element representing the initial state of the transformation.
reduce_func A function that maps (old_state, input_element) to new_state. It must take two arguments and return a new state. The structure of new_state must match the structure of initial_state.
Returns A dataset element corresponding to the final state of the transformation.
repeat View source
repeat(
count=None
)
Repeats this dataset so each original value is seen count times.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
Note: If this dataset is a function of global state (e.g. a random number generator), then different repetitions may produce different elements.
Args
count (Optional.) A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely.
Returns
Dataset A Dataset. shard View source
shard(
num_shards, index
)
Creates a Dataset that includes only 1/num_shards of this dataset. shard is deterministic. The Dataset produced by A.shard(n, i) will contain all elements of A whose index mod n = i.
A = tf.data.Dataset.range(10)
B = A.shard(num_shards=3, index=0)
list(B.as_numpy_iterator())
[0, 3, 6, 9]
C = A.shard(num_shards=3, index=1)
list(C.as_numpy_iterator())
[1, 4, 7]
D = A.shard(num_shards=3, index=2)
list(D.as_numpy_iterator())
[2, 5, 8]
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows: d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Important caveats: Be sure to shard before you use any randomizing operator (such as shuffle). Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline: d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Args
num_shards A tf.int64 scalar tf.Tensor, representing the number of shards operating in parallel.
index A tf.int64 scalar tf.Tensor, representing the worker index.
Returns
Dataset A Dataset.
Raises
InvalidArgumentError if num_shards or index are illegal values.
Note: error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.)
shuffle View source
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None
)
Randomly shuffles the elements of this dataset. This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. reshuffle_each_iteration controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the repeat transformation:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 0, 2]
In TF 2.0, tf.data.Dataset objects are Python iterables which makes it possible to also create epochs through Python iteration:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the number of elements from this dataset from which the new dataset will sample.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
reshuffle_each_iteration (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to True.)
Returns
Dataset A Dataset. skip View source
skip(
count
)
Creates a Dataset that skips count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.skip(7)
list(dataset.as_numpy_iterator())
[7, 8, 9]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be skipped to form the new dataset. If count is greater than the size of this dataset, the new dataset will contain no elements. If count is -1, skips the entire dataset.
Returns
Dataset A Dataset. take View source
take(
count
)
Creates a Dataset with at most count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.take(3)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be taken to form the new dataset. If count is -1, or if count is greater than the size of this dataset, the new dataset will contain all elements of this dataset.
Returns
Dataset A Dataset. unbatch View source
unbatch()
Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped [B, a0, a1, ...], where B may vary for each input element, then for each element in the dataset, the unbatched dataset will contain B consecutive elements of shape [a0, a1, ...].
elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
dataset = dataset.unbatch()
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 1, 2, 3, 4]
Note: unbatch requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of unbatch.
Returns A Dataset.
window View source
window(
size, shift=None, stride=1, drop_remainder=False
)
Combines (nests of) input elements into a dataset of (nests of) windows. A "window" is a finite dataset of flat elements of size size (or possibly fewer if there are not enough input elements to fill the window and drop_remainder evaluates to False). The shift argument determines the number of input elements by which the window moves on each iteration. If windows and elements are both numbered starting at 0, the first element in window k will be element k * shift of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. The stride argument determines the stride between the input elements within a window, while the shift argument determines how many input elements the window advances on each iteration. For example:
dataset = tf.data.Dataset.range(7).window(2)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1]
[2, 3]
[4, 5]
[6]
dataset = tf.data.Dataset.range(7).window(3, 2, 1, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1, 2]
[2, 3, 4]
[4, 5, 6]
dataset = tf.data.Dataset.range(7).window(3, 1, 2, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
Note that when the window transformation is applied to a dataset of nested elements, it produces a dataset of nested windows.
nested = ([1, 2, 3, 4], [5, 6, 7, 8])
dataset = tf.data.Dataset.from_tensor_slices(nested).window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print(tuple(to_numpy(component) for component in window))
([1, 2], [5, 6])
([3, 4], [7, 8])
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]})
dataset = dataset.window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print({'a': to_numpy(window['a'])})
{'a': [1, 2]}
{'a': [3, 4]}
Args
size A tf.int64 scalar tf.Tensor, representing the number of elements of the input dataset to combine into a window. Must be positive.
shift (Optional.) A tf.int64 scalar tf.Tensor, representing the number of input elements by which the window moves in each iteration. Defaults to size. Must be positive.
stride (Optional.) A tf.int64 scalar tf.Tensor, representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element".
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last windows should be dropped if their size is smaller than size.
Returns
Dataset A Dataset of (nests of) windows -- finite datasets of flat elements created from the (nests of) input elements. with_options View source
with_options(
options
)
Returns a new tf.data.Dataset with the given options set. The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
cycle_length=3,
num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.experimental_deterministic = False
ds = ds.with_options(options)
Args
options A tf.data.Options that identifies the options to use.
Returns
Dataset A Dataset with the given options.
Raises
ValueError when an option is set more than once to a non-default value. zip View source
@staticmethod
zip(
datasets
)
Creates a Dataset by zipping together the given datasets. This method has similar semantics to the built-in zip() function in Python, with the main difference being that the datasets argument can be an arbitrary nested structure of Dataset objects.
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
# [9, 10],
# [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
Args
datasets A nested structure of datasets.
Returns
Dataset A Dataset. __bool__ View source
__bool__()
__iter__ View source
__iter__()
Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol.
Returns A tf.data.Iterator for the elements of this dataset.
Raises
RuntimeError If not inside of tf.function and not executing eagerly. __len__ View source
__len__()
Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use tf.data.Dataset.cardinality instead.
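A minimal sketch (eager mode assumed):
dataset = tf.data.Dataset.range(42)
print(len(dataset))  # 42
# For unknown or infinite lengths, len() raises; use dataset.cardinality() instead.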
Returns An integer representing the length of the dataset.
Raises
RuntimeError If the dataset length is unknown or infinite, or if eager execution is not enabled. __nonzero__ View source
__nonzero__() | tensorflow.compat.v1.data.experimental.randomdataset |
tf.compat.v1.data.experimental.sample_from_datasets Samples elements at random from the datasets in datasets.
tf.compat.v1.data.experimental.sample_from_datasets(
datasets, weights=None, seed=None
)
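A minimal usage sketch (the datasets and weights are illustrative):
ds1 = tf.data.Dataset.from_tensor_slices([1, 2, 3])
ds2 = tf.data.Dataset.from_tensor_slices([10, 20, 30])
mixed = tf.compat.v1.data.experimental.sample_from_datasets(
    [ds1, ds2], weights=[0.75, 0.25], seed=42)
# Elements are drawn from ds1 with probability 0.75 and from ds2 with 0.25.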
Args
datasets A list of tf.data.Dataset objects with compatible structure.
weights (Optional.) A list of len(datasets) floating-point values where weights[i] represents the probability with which an element should be sampled from datasets[i], or a tf.data.Dataset object where each element is such a list. Defaults to a uniform distribution across datasets.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
Returns A dataset that interleaves elements from datasets at random, according to weights if provided, otherwise with uniform probability.
Raises
TypeError If the datasets or weights arguments have the wrong type.
ValueError If the weights argument is specified and does not match the length of the datasets argument.
Module: tf.compat.v1.data.experimental.service API for using the tf.data service. This module contains: tf.data server implementations for running the tf.data service. A distribute dataset transformation that moves a dataset's preprocessing to happen in the tf.data service. The tf.data service offers a way to improve training speed when the host attached to a training device can't keep up with the data consumption of the model. For example, suppose a host can generate 100 examples/second, but the model can process 200 examples/second. Training speed could be doubled by using the tf.data service to generate 200 examples/second. Before using the tf.data service There are a few things to do before using the tf.data service to speed up training. Understand processing_mode The tf.data service uses a cluster of workers to prepare data for training your model. The processing_mode argument to tf.data.experimental.service.distribute describes how to leverage multiple workers to process the input dataset. Currently, there are two processing modes to choose from: "distributed_epoch" and "parallel_epochs". "distributed_epoch" means that the dataset will be split across all tf.data service workers. The dispatcher produces "splits" for the dataset and sends them to workers for further processing. For example, if a dataset begins with a list of filenames, the dispatcher will iterate through the filenames and send the filenames to tf.data workers, which will perform the rest of the dataset transformations on those files. "distributed_epoch" is useful when your model needs to see each element of the dataset exactly once, or if it needs to see the data in a generally-sequential order. "distributed_epoch" only works for datasets with splittable sources, such as Dataset.from_tensor_slices, Dataset.list_files, or Dataset.range. "parallel_epochs" means that the entire input dataset will be processed independently by each of the tf.data service workers. For this reason, it is important to shuffle data (e.g. filenames) non-deterministically, so that each worker will process the elements of the dataset in a different order. "parallel_epochs" can be used to distribute datasets that aren't splittable. Measure potential impact Before using the tf.data service, it is useful to first measure the potential performance improvement. To do this, add dataset = dataset.take(1).cache().repeat()
at the end of your dataset, and see how it affects your model's step time. take(1).cache().repeat() will cache the first element of your dataset and produce it repeatedly. This should make the dataset very fast, so that the model becomes the bottleneck and you can identify the ideal model speed. With enough workers, the tf.data service should be able to achieve similar speed. Running the tf.data service tf.data servers should be brought up alongside your training jobs, and brought down when the jobs are finished. The tf.data service uses one DispatchServer and any number of WorkerServers. See https://github.com/tensorflow/ecosystem/tree/master/data_service for an example of using Google Kubernetes Engine (GKE) to manage the tf.data service. The server implementation in tf_std_data_server.py is not GKE-specific, and can be used to run the tf.data service in other contexts. Fault tolerance By default, the tf.data dispatch server stores its state in-memory, making it a single point of failure during training. To avoid this, pass fault_tolerant_mode=True when creating your DispatchServer. Dispatcher fault tolerance requires work_dir to be configured and accessible from the dispatcher both before and after restart (e.g. a GCS path). With fault tolerant mode enabled, the dispatcher will journal its state to the work directory so that no state is lost when the dispatcher is restarted. WorkerServers may be freely restarted, added, or removed during training. At startup, workers will register with the dispatcher and begin processing all outstanding jobs from the beginning. Using the tf.data service from your training job Once you have a tf.data service cluster running, take note of the dispatcher IP address and port. To connect to the service, you will use a string in the format "grpc://<dispatcher_address>:<dispatcher_port>". # Create the dataset however you were before using the tf.data service.
dataset = your_dataset_factory()
service = "grpc://{}:{}".format(dispatcher_address, dispatcher_port)
# This will register the dataset with the tf.data service cluster so that
# tf.data workers can run the dataset to produce elements. The dataset returned
# from applying `distribute` will fetch elements produced by tf.data workers.
dataset = dataset.apply(tf.data.experimental.service.distribute(
processing_mode="parallel_epochs", service=service))
Below is a toy example that you can run yourself.
dispatcher = tf.data.experimental.service.DispatchServer()
dispatcher_address = dispatcher.target.split("://")[1]
worker = tf.data.experimental.service.WorkerServer(
tf.data.experimental.service.WorkerConfig(
dispatcher_address=dispatcher_address))
dataset = tf.data.Dataset.range(10)
dataset = dataset.apply(tf.data.experimental.service.distribute(
processing_mode="parallel_epochs", service=dispatcher.target))
print(list(dataset.as_numpy_iterator()))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
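The fault tolerance setup described above can be sketched as follows; the work_dir path is hypothetical, and fault_tolerant_mode and work_dir are assumed fields of the DispatcherConfig class listed below:
dispatcher = tf.data.experimental.service.DispatchServer(
    tf.data.experimental.service.DispatcherConfig(
        work_dir="gs://my-bucket/dispatcher",  # hypothetical path; must persist across restarts
        fault_tolerant_mode=True))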
See the documentation of tf.data.experimental.service.distribute for more details about using the distribute transformation. Classes class DispatcherConfig: Configuration class for tf.data service dispatchers. class WorkerConfig: Configuration class for tf.data service dispatchers. Functions distribute(...): A transformation that moves dataset processing to the tf.data service. from_dataset_id(...): Creates a dataset which reads data from the tf.data service. register_dataset(...): Registers a dataset with the tf.data service. | tensorflow.compat.v1.data.experimental.service |
tf.compat.v1.data.experimental.SparseTensorStructure DEPRECATED FUNCTION
tf.compat.v1.data.experimental.SparseTensorStructure(
dtype, shape
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.SparseTensorSpec instead. | tensorflow.compat.v1.data.experimental.sparsetensorstructure |
tf.compat.v1.data.experimental.SqlDataset A Dataset consisting of the results from a SQL query. Inherits From: Dataset
tf.compat.v1.data.experimental.SqlDataset(
driver_name, data_source_name, query, output_types
)
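A minimal usage sketch (the database path, query, and column types are hypothetical):
dataset = tf.compat.v1.data.experimental.SqlDataset(
    "sqlite", "/tmp/example.db",
    "SELECT name, age FROM people",
    (tf.string, tf.int32))
# Each element is a (name, age) tuple, one per result row.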
Args
driver_name A 0-D tf.string tensor containing the database type. Currently, the only supported value is 'sqlite'.
data_source_name A 0-D tf.string tensor containing a connection string to connect to the database.
query A 0-D tf.string tensor containing the SQL query to execute.
output_types A tuple of tf.DType objects representing the types of the columns returned by query.
Attributes
element_spec The type specification of an element of this dataset.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
output_classes Returns the class of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_classes(dataset).
output_shapes Returns the shape of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_shapes(dataset).
output_types Returns the type of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_types(dataset).
Methods apply View source
apply(
transformation_func
)
Applies a transformation function to this dataset. apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset.
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
Args
transformation_func A function that takes one Dataset argument and returns a Dataset.
Returns
Dataset The Dataset returned by applying transformation_func to this dataset. as_numpy_iterator View source
as_numpy_iterator()
Returns an iterator which converts all elements of the dataset to numpy. Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
print(element)
1
2
3
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
as_numpy_iterator() will preserve the nested structure of dataset elements.
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
Returns An iterable over the elements of the dataset, with their tensors converted to numpy arrays.
Raises
TypeError if an element contains a non-Tensor value.
RuntimeError if eager execution is not enabled. batch View source
batch(
batch_size, drop_remainder=False
)
Combines consecutive elements of this dataset into batches.
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset. cache View source
cache(
filename=''
)
Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to .cache() will have no effect until the cache file is removed or the filename is changed.
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache("/path/to/file") # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache.
Args
filename A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory.
Returns
Dataset A Dataset. cardinality View source
cardinality()
Returns the cardinality of the dataset, if known. cardinality may return tf.data.INFINITE_CARDINALITY if the dataset contains an infinite number of elements or tf.data.UNKNOWN_CARDINALITY if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file).
dataset = tf.data.Dataset.range(42)
print(dataset.cardinality().numpy())
42
dataset = dataset.repeat()
cardinality = dataset.cardinality()
print((cardinality == tf.data.INFINITE_CARDINALITY).numpy())
True
dataset = dataset.filter(lambda x: True)
cardinality = dataset.cardinality()
print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy())
True
Returns A scalar tf.int64 Tensor representing the cardinality of the dataset. If the cardinality is infinite or unknown, cardinality returns the named constants tf.data.INFINITE_CARDINALITY and tf.data.UNKNOWN_CARDINALITY respectively.
concatenate View source
concatenate(
dataset
)
Creates a Dataset by concatenating the given dataset with this dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
# The input dataset and dataset to be concatenated should have the same
# nested structures and output types.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
Args
dataset Dataset to be concatenated.
Returns
Dataset A Dataset. enumerate View source
enumerate(
start=0
)
Enumerates the elements of this dataset. It is similar to Python's enumerate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
print(element)
(5, 1)
(6, 2)
(7, 3)
# The nested structure of the input dataset determines the structure of
# elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
dataset = dataset.enumerate()
for element in dataset.as_numpy_iterator():
print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
Args
start A tf.int64 scalar tf.Tensor, representing the start value for enumeration.
Returns
Dataset A Dataset. filter View source
filter(
predicate
)
Filters this dataset according to predicate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
[1, 2]
# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
return tf.math.equal(x, 1)
dataset = dataset.filter(filter_fn)
list(dataset.as_numpy_iterator())
[1]
Args
predicate A function mapping a dataset element to a boolean.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. filter_with_legacy_function View source
filter_with_legacy_function(
predicate
)
Filters this dataset according to predicate. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.filter() instead.
Note: This is an escape hatch for existing uses of filter that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to filter as this method will be removed in V2.
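As a migration sketch (the predicate shown is illustrative), most call sites can switch by simply renaming the method:
dataset = tf.data.Dataset.range(5)
# Deprecated: dataset = dataset.filter_with_legacy_function(lambda x: x < 3)
dataset = dataset.filter(lambda x: x < 3)  # preferred replacement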
Args
predicate A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to a scalar tf.bool tensor.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. flat_map View source
flat_map(
map_func
)
Maps map_func across this dataset and flattens the result. Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:
dataset = tf.data.Dataset.from_tensor_slices(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7, 8, 9]
tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1)
Args
map_func A function mapping a dataset element to a dataset.
Returns
Dataset A Dataset. from_generator View source
@staticmethod
from_generator(
generator, output_types=None, output_shapes=None, args=None,
output_signature=None
)
Creates a Dataset whose elements are generated by generator. (deprecated arguments) Warning: SOME ARGUMENTS ARE DEPRECATED: (output_shapes, output_types). They will be removed in a future version. Instructions for updating: Use output_signature instead The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with either the given output_signature argument or with the given output_types and (optionally) output_shapes arguments, whichever was specified. The recommended way to call from_generator is to use the output_signature argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by tf.TypeSpec objects from the output_signature argument:
def gen():
ragged_tensor = tf.ragged.constant([[1, 2], [3]])
yield 42, ragged_tensor
dataset = tf.data.Dataset.from_generator(
gen,
output_signature=(
tf.TensorSpec(shape=(), dtype=tf.int32),
tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))
list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
There is also a deprecated way to call from_generator, by passing the output_types argument alone or together with the output_shapes argument. In this case the output of the function will be assumed to consist of tf.Tensor objects with the types defined by output_types and with shapes which are either unknown or defined by output_shapes.
Note: The current implementation of Dataset.from_generator() uses tf.numpy_function and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment.
Note: If generator depends on mutable global variables or other external state, be aware that the runtime may invoke generator multiple times (in order to support repeating the Dataset) and at any time between the call to Dataset.from_generator() and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in generator before calling Dataset.from_generator().
Args
generator A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args.
output_types (Optional.) A nested structure of tf.DType objects corresponding to each component of an element yielded by generator.
output_shapes (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator.
args (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments.
output_signature (Optional.) A nested structure of tf.TypeSpec objects corresponding to each component of an element yielded by generator.
Returns
Dataset A Dataset. from_sparse_tensor_slices View source
@staticmethod
from_sparse_tensor_slices(
sparse_tensor
)
Splits each rank-N tf.sparse.SparseTensor in this dataset row-wise. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.from_tensor_slices().
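A minimal sketch (values illustrative) splitting a 2x3 sparse tensor into per-row sparse tensors:
st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
                            values=[1, 2],
                            dense_shape=[2, 3])
dataset = tf.compat.v1.data.Dataset.from_sparse_tensor_slices(st)  # two rank-1 sparse elements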
Args
sparse_tensor A tf.sparse.SparseTensor.
Returns
Dataset A Dataset of rank-(N-1) sparse tensors. from_tensor_slices View source
@staticmethod
from_tensor_slices(
tensors
)
Creates a Dataset whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.
# Slicing a 1D tensor produces scalar tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
list(dataset.as_numpy_iterator())
[1, 2, 3]
# Slicing a 2D tensor produces 1D tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
# Slicing a tuple of 1D tensors produces tuple elements containing
# scalar tensors.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
# Dictionary structure is also preserved.
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
{'a': 2, 'b': 4}]
True
# Two tensors can be combined into one Dataset object.
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor
dataset = Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted
# to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features)
labels_dataset = Dataset.from_tensor_slices(labels)
dataset = Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset
# in similar fashion.
batched_features = tf.constant([[[1, 3], [2, 3]],
[[2, 1], [1, 2]],
[[3, 3], [3, 2]]], shape=(3, 2, 2))
batched_labels = tf.constant([['A', 'A'],
['B', 'B'],
['A', 'B']], shape=(3, 2, 1))
dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator():
print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element, with each component having the same size in the first dimension.
Returns
Dataset A Dataset. from_tensors View source
@staticmethod
from_tensors(
tensors
)
Creates a Dataset with a single element, comprising the given tensors. from_tensors produces a dataset containing only a single element. To slice the input tensor into multiple elements, use from_tensor_slices instead.
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element.
Returns
Dataset A Dataset. interleave View source
interleave(
map_func, cycle_length=None, block_length=None, num_parallel_calls=None,
deterministic=None
)
Maps map_func across this dataset, and interleaves the results. For example, you can use Dataset.interleave() to process many input files concurrently:
# Preprocess 4 files concurrently, and interleave blocks of 16 records
# from each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
def parse_fn(filename):
return tf.data.Dataset.range(10)
dataset = dataset.interleave(lambda x:
tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
cycle_length=4, block_length=16)
The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. If you set cycle_length to 1, this transformation will handle one input element at a time, and will produce identical results to tf.data.Dataset.flat_map. In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. For example:
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
lambda x: Dataset.from_tensors(x).repeat(6),
cycle_length=2, block_length=4)
list(dataset.as_numpy_iterator())
[1, 1, 1, 1,
2, 2, 2, 2,
1, 1,
2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
3, 3,
4, 4,
5, 5, 5, 5,
5, 5]
Note: The order of elements yielded by this transformation is deterministic, as long as map_func is a pure function and deterministic=True. If map_func contains any stateful operations, the order in which that state is accessed is undefined.
Performance can often be improved by setting num_parallel_calls so that interleave will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set deterministic=False.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to a dataset.
cycle_length (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If num_parallel_calls is set to tf.data.AUTOTUNE, the cycle_length argument identifies the maximum degree of parallelism.
block_length (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1.
num_parallel_calls (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. list_files View source
@staticmethod
list_files(
file_pattern, shuffle=None, seed=None
)
A dataset of all files matching one or more glob patterns. The file_pattern argument should be a small number of glob patterns. If your filenames have already been globbed, use Dataset.from_tensor_slices(filenames) instead, as re-globbing every filename with list_files may result in poor performance with remote storage systems.
Note: The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a seed or shuffle=False to get results in a deterministic order.
Example: If we had the following files on our filesystem:
/path/to/dir/a.txt
/path/to/dir/b.py
/path/to/dir/c.py
If we pass "/path/to/dir/*.py" as the file_pattern, the dataset would produce:
/path/to/dir/b.py
/path/to/dir/c.py
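A minimal sketch of the example above (the paths are hypothetical; pass shuffle=False for a deterministic order):
dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
for f in dataset:
  print(f.numpy())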
Args
file_pattern A string, a list of strings, or a tf.Tensor of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched.
shuffle (Optional.) If True, the file names will be shuffled randomly. Defaults to True.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
Returns
Dataset A Dataset of strings corresponding to file names. make_initializable_iterator View source
make_initializable_iterator(
shared_name=None
)
Creates an iterator for elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating an iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_initializable_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
Note: The returned iterator will be in an uninitialized state, and you must run the iterator.initializer operation before using it:
# Building graph ...
dataset = ...
iterator = dataset.make_initializable_iterator()
next_value = iterator.get_next() # This is a Tensor.
# ... from within a session ...
sess.run(iterator.initializer)
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
Args
shared_name (Optional.) If non-empty, the returned iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server).
Returns A tf.data.Iterator for elements of this dataset.
Raises
RuntimeError If eager execution is enabled. make_one_shot_iterator View source
make_one_shot_iterator()
Creates an iterator for elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating an iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_one_shot_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
Note: The returned iterator will be initialized automatically. A "one-shot" iterator does not currently support re-initialization. For that see make_initializable_iterator.
Example:
# Building graph ...
dataset = ...
next_value = dataset.make_one_shot_iterator().get_next()
# ... from within a session ...
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
Returns A tf.data.Iterator for elements of this dataset.
map View source
map(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. map_func can be used to change both the values and the structure of a dataset's elements. For example, adding 1 to each element, or projecting a subset of element components.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
The input signature of map_func is determined by the structure of each element in this dataset.
dataset = Dataset.range(5)
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = dataset.map(lambda x: x + 1)
# Each element is a tuple containing two `tf.Tensor` objects.
elements = [(1, "foo"), (2, "bar"), (3, "baz")]
dataset = tf.data.Dataset.from_generator(
lambda: elements, (tf.int32, tf.string))
# `map_func` takes two arguments of type `tf.Tensor`. This function
# projects out just the first component.
result = dataset.map(lambda x_int, y_str: x_int)
list(result.as_numpy_iterator())
[1, 2, 3]
# Each element is a dictionary mapping strings to `tf.Tensor` objects.
elements = ([{"a": 1, "b": "foo"},
{"a": 2, "b": "bar"},
{"a": 3, "b": "baz"}])
dataset = tf.data.Dataset.from_generator(
lambda: elements, {"a": tf.int32, "b": tf.string})
# `map_func` takes a single argument of type `dict` with the same keys
# as the elements.
result = dataset.map(lambda d: str(d["a"]) + d["b"])
The value or values returned by map_func determine the structure of each element in the returned dataset.
dataset = tf.data.Dataset.range(3)
# `map_func` returns two `tf.Tensor` objects.
def g(x):
return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(x):
return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
# `map_func` can return nested structures.
def i(x):
return (37.0, [42, 16]), "foo"
result = dataset.map(i)
result.element_spec
((TensorSpec(shape=(), dtype=tf.float32, name=None),
TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
TensorSpec(shape=(), dtype=tf.string, name=None))
map_func can accept as arguments and return any type of dataset element. Note that irrespective of the context in which map_func is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use tf.py_function, which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
# transform a string tensor to upper case string using a Python function
def upper_case_fn(t: tf.Tensor):
return t.numpy().decode('utf-8').upper()
d = d.map(lambda x: tf.py_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
3) Use tf.numpy_function, which also allows you to write arbitrary Python code. Note that tf.py_function accepts tf.Tensor whereas tf.numpy_function accepts numpy arrays and returns only numpy arrays. For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
def upper_case_fn(t: np.ndarray):
return t.decode('utf-8').upper()
d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
Note that the use of tf.numpy_function and tf.py_function in general precludes the possibility of executing user-defined transformations in parallel (because of the Python GIL). Performance can often be improved by setting num_parallel_calls so that map will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set deterministic=False.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to another dataset element.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. map_with_legacy_function View source
map_with_legacy_function(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.map().
Note: This is an escape hatch for existing uses of map that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to map as this method will be removed in V2.
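A minimal sketch of the escape hatch (TF 1 graph mode; the dataset is illustrative, and new code should use map instead):
dataset = tf.compat.v1.data.Dataset.range(5)
dataset = dataset.map_with_legacy_function(lambda x: x * 2)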
Args
map_func A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to another nested structure of tensors.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. options View source
options()
Returns the options for this dataset and its inputs.
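For example, options set upstream via with_options are visible through options() (a minimal sketch):
options = tf.data.Options()
options.experimental_deterministic = False
dataset = tf.data.Dataset.range(3).with_options(options)
print(dataset.options().experimental_deterministic)  # False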
Returns A tf.data.Options object representing the dataset options.
padded_batch View source
padded_batch(
batch_size, padded_shapes=None, padding_values=None, drop_remainder=False
)
Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like tf.data.Dataset.batch, the components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Unlike tf.data.Dataset.batch, the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in padded_shapes. The padded_shapes argument determines the resulting shape for each dimension of each component in an output element: If the dimension is a constant, the component will be padded out to that length in that dimension. If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension.
A = (tf.data.Dataset
.range(1, 5, output_type=tf.int32)
.map(lambda x: tf.fill([x], x)))
# Pad to the smallest per-batch size that fits all elements.
B = A.padded_batch(2)
for element in B.as_numpy_iterator():
print(element)
[[1 0]
[2 2]]
[[3 3 3 0]
[4 4 4 4]]
# Pad to a fixed size.
C = A.padded_batch(2, padded_shapes=5)
for element in C.as_numpy_iterator():
print(element)
[[1 0 0 0 0]
[2 2 0 0 0]]
[[3 3 3 0 0]
[4 4 4 4 0]]
# Pad with a custom value.
D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
for element in D.as_numpy_iterator():
print(element)
[[ 1 -1 -1 -1 -1]
[ 2 2 -1 -1 -1]]
[[ 3 3 3 -1 -1]
[ 4 4 4 4 -1]]
# Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]),
([4, 5], [11, 12])]
dataset = tf.data.Dataset.from_generator(
lambda: iter(elements), (tf.int32, tf.int32))
# Pad the first component of the tuple to length 4, and the second
# component to the smallest size that fits.
dataset = dataset.padded_batch(2,
padded_shapes=([4], [None]),
padding_values=(-1, 100))
list(dataset.as_numpy_iterator())
[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
array([[ 10, 100], [ 11, 12]], dtype=int32))]
# Pad with a single value and multiple components.
E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
for element in E.as_numpy_iterator():
print(element)
(array([[ 1, -1],
[ 2, 2]], dtype=int32), array([[ 1, -1],
[ 2, 2]], dtype=int32))
(array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32))
See also tf.data.experimental.dense_to_sparse_batch, which combines elements that may have different shapes into a tf.sparse.SparseTensor.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
padded_shapes (Optional.) A nested structure of tf.TensorShape or tf.int64 vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. padded_shapes must be set if any component has an unknown rank.
padding_values (Optional.) A nested structure of scalar-shaped tf.Tensor, representing the padding values to use for the respective components. None represents that the nested structure should be padded with default values. Defaults are 0 for numeric types and the empty string for string types. The padding_values should have the same structure as the input dataset. If padding_values is a single element and the input dataset has multiple components, then the same padding_values will be used to pad every component of the dataset. If padding_values is a scalar, then its value will be broadcasted to match the shape of each component.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset.
Raises
ValueError If a component has an unknown rank, and the padded_shapes argument is not set. prefetch View source
prefetch(
buffer_size
)
Creates a Dataset that prefetches elements from this dataset. Most dataset input pipelines should end with a call to prefetch. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
Note: Like other Dataset methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. examples.prefetch(2) will prefetch two elements (2 examples), while examples.batch(20).prefetch(2) will prefetch 2 elements (2 batches, of 20 examples each).
dataset = tf.data.Dataset.range(3)
dataset = dataset.prefetch(2)
list(dataset.as_numpy_iterator())
[0, 1, 2]
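To illustrate the note above, a sketch in which each prefetched element is a whole batch rather than a single example:
examples = tf.data.Dataset.range(100)
batches = examples.batch(20).prefetch(2)  # buffers 2 batches of 20 examples each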
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching.
Returns
Dataset A Dataset. range View source
@staticmethod
range(
*args, **kwargs
)
Creates a Dataset of a step-separated range of values.
list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
list(Dataset.range(5, 1).as_numpy_iterator())
[]
list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
Args
*args follows the same semantics as Python's xrange. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2].
**kwargs output_type: The expected dtype of the elements. (Optional, default: tf.int64.)
Returns
Dataset A RangeDataset.
Raises
ValueError if len(args) == 0. reduce View source
reduce(
initial_state, reduce_func
)
Reduces the input dataset to a single element. The transformation calls reduce_func successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The initial_state argument is used for the initial state and the final state is returned as the result.
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
10
Args
initial_state An element representing the initial state of the transformation.
reduce_func A function that maps (old_state, input_element) to new_state. It must take two arguments and return a new state; the structure of new_state must match the structure of initial_state.
Returns A dataset element corresponding to the final state of the transformation.
repeat View source
repeat(
count=None
)
Repeats this dataset so each original value is seen count times.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
Note: If this dataset is a function of global state (e.g. a random number generator), then different repetitions may produce different elements.
Args
count (Optional.) A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely.
Returns
Dataset A Dataset. shard View source
shard(
num_shards, index
)
Creates a Dataset that includes only 1/num_shards of this dataset. shard is deterministic. The Dataset produced by A.shard(n, i) will contain all elements of A whose index mod n = i.
A = tf.data.Dataset.range(10)
B = A.shard(num_shards=3, index=0)
list(B.as_numpy_iterator())
[0, 3, 6, 9]
C = A.shard(num_shards=3, index=1)
list(C.as_numpy_iterator())
[1, 4, 7]
D = A.shard(num_shards=3, index=2)
list(D.as_numpy_iterator())
[2, 5, 8]
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows:
d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Important caveats: Be sure to shard before you use any randomizing operator (such as shuffle). Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline:
d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Args
num_shards A tf.int64 scalar tf.Tensor, representing the number of shards operating in parallel.
index A tf.int64 scalar tf.Tensor, representing the worker index.
Returns
Dataset A Dataset.
Raises
InvalidArgumentError if num_shards or index are illegal values.
Note: error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.)
shuffle View source
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None
)
Randomly shuffles the elements of this dataset. This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. reshuffle_each_iteration controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the repeat transformation:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 0, 2]
In TF 2.0, tf.data.Dataset objects are Python iterables which makes it possible to also create epochs through Python iteration:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the number of elements from this dataset from which the new dataset will sample.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
reshuffle_each_iteration (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to True.)
Returns
Dataset A Dataset. skip View source
skip(
count
)
Creates a Dataset that skips count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.skip(7)
list(dataset.as_numpy_iterator())
[7, 8, 9]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be skipped to form the new dataset. If count is greater than the size of this dataset, the new dataset will contain no elements. If count is -1, skips the entire dataset.
Returns
Dataset A Dataset. take View source
take(
count
)
Creates a Dataset with at most count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.take(3)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be taken to form the new dataset. If count is -1, or if count is greater than the size of this dataset, the new dataset will contain all elements of this dataset.
Returns
Dataset A Dataset. unbatch View source
unbatch()
Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped [B, a0, a1, ...], where B may vary for each input element, then for each element in the dataset, the unbatched dataset will contain B consecutive elements of shape [a0, a1, ...].
elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
dataset = dataset.unbatch()
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 1, 2, 3, 4]
Note: unbatch requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of unbatch.
Returns A Dataset.
window View source
window(
size, shift=None, stride=1, drop_remainder=False
)
Combines (nests of) input elements into a dataset of (nests of) windows. A "window" is a finite dataset of flat elements of size size (or possibly fewer if there are not enough input elements to fill the window and drop_remainder evaluates to False). The shift argument determines the number of input elements by which the window moves on each iteration. If windows and elements are both numbered starting at 0, the first element in window k will be element k * shift of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. The stride argument determines the stride between input elements within a window, while the shift argument determines the shift between consecutive windows. For example:
dataset = tf.data.Dataset.range(7).window(2)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1]
[2, 3]
[4, 5]
[6]
dataset = tf.data.Dataset.range(7).window(3, 2, 1, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1, 2]
[2, 3, 4]
[4, 5, 6]
dataset = tf.data.Dataset.range(7).window(3, 1, 2, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
Note that when the window transformation is applied to a dataset of nested elements, it produces a dataset of nested windows.
nested = ([1, 2, 3, 4], [5, 6, 7, 8])
dataset = tf.data.Dataset.from_tensor_slices(nested).window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print(tuple(to_numpy(component) for component in window))
([1, 2], [5, 6])
([3, 4], [7, 8])
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]})
dataset = dataset.window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print({'a': to_numpy(window['a'])})
{'a': [1, 2]}
{'a': [3, 4]}
Args
size A tf.int64 scalar tf.Tensor, representing the number of elements of the input dataset to combine into a window. Must be positive.
shift (Optional.) A tf.int64 scalar tf.Tensor, representing the number of input elements by which the window moves in each iteration. Defaults to size. Must be positive.
stride (Optional.) A tf.int64 scalar tf.Tensor, representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element".
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last windows should be dropped if their size is smaller than size.
Returns
Dataset A Dataset of (nests of) windows -- finite datasets of flat elements created from the (nests of) input elements. with_options View source
with_options(
options
)
Returns a new tf.data.Dataset with the given options set. The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
cycle_length=3,
num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.experimental_deterministic = False
ds = ds.with_options(options)
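A minimal sketch of the merging behavior (experimental_slack is used purely as a second, unrelated option):
opts1 = tf.data.Options()
opts1.experimental_deterministic = False
opts2 = tf.data.Options()
opts2.experimental_slack = True
# The two Options objects set different options, so they merge cleanly.
ds = tf.data.Dataset.range(3).with_options(opts1).with_options(opts2)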
Args
options A tf.data.Options that identifies the options to use.
Returns
Dataset A Dataset with the given options.
Raises
ValueError when an option is set more than once to a non-default value. zip View source
@staticmethod
zip(
datasets
)
Creates a Dataset by zipping together the given datasets. This method has similar semantics to the built-in zip() function in Python, with the main difference being that the datasets argument can be an arbitrary nested structure of Dataset objects.
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
# [9, 10],
# [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
Args
datasets A nested structure of datasets.
Returns
Dataset A Dataset. __bool__ View source
__bool__()
__iter__ View source
__iter__()
Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol.
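For example, in eager mode:
dataset = tf.data.Dataset.range(3)
iterator = iter(dataset)
print(next(iterator).numpy())  # 0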
Returns A tf.data.Iterator for the elements of this dataset.
Raises
RuntimeError If not inside of tf.function and not executing eagerly. __len__ View source
__len__()
Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use tf.data.Dataset.cardinality instead.
Returns An integer representing the length of the dataset.
Raises
RuntimeError If the dataset length is unknown or infinite, or if eager execution is not enabled. __nonzero__ View source
__nonzero__() | tensorflow.compat.v1.data.experimental.sqldataset |
tf.compat.v1.data.experimental.StatsAggregator A stateful resource that aggregates statistics from one or more iterators.
tf.compat.v1.data.experimental.StatsAggregator()
To record statistics, use one of the custom transformation functions defined in this module when defining your tf.data.Dataset. All statistics will be aggregated by the StatsAggregator that is associated with a particular iterator (see below). For example, to record the latency of producing each element by iterating over a dataset:
dataset = ...
dataset = dataset.apply(tf.data.experimental.latency_stats("total_bytes"))
To associate a StatsAggregator with a tf.data.Dataset object, use the following pattern:
aggregator = tf.data.experimental.StatsAggregator()
dataset = ...
# Apply `StatsOptions` to associate `dataset` with `aggregator`.
options = tf.data.Options()
options.experimental_stats.aggregator = aggregator
dataset = dataset.with_options(options)
To get a protocol buffer summary of the currently aggregated statistics, use the StatsAggregator.get_summary() tensor. The easiest way to do this is to add the returned tensor to the tf.GraphKeys.SUMMARIES collection, so that the summaries will be included with any existing summaries.
aggregator = tf.data.experimental.StatsAggregator()
# ...
stats_summary = aggregator.get_summary()
tf.compat.v1.add_to_collection(tf.GraphKeys.SUMMARIES, stats_summary)
Note: This interface is experimental and expected to change. In particular, we expect to add other implementations of StatsAggregator that provide different ways of exporting statistics, and add more types of statistics.
Methods get_summary View source
get_summary()
Returns a string tf.Tensor that summarizes the aggregated statistics. The returned tensor will contain a serialized tf.compat.v1.summary.Summary protocol buffer, which can be used with the standard TensorBoard logging facilities.
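A TF 1 sketch of logging the summary (assumes an active tf.compat.v1.Session named sess, the aggregator set up as shown above, and a hypothetical log directory):
writer = tf.compat.v1.summary.FileWriter("/tmp/logdir")
stats_summary = aggregator.get_summary()
writer.add_summary(sess.run(stats_summary), global_step=0)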
Returns A scalar string tf.Tensor that summarizes the aggregated statistics. | tensorflow.compat.v1.data.experimental.statsaggregator |
tf.compat.v1.data.experimental.TensorArrayStructure DEPRECATED FUNCTION
tf.compat.v1.data.experimental.TensorArrayStructure(
dtype, element_shape, dynamic_size, infer_shape
)
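A minimal sketch of the replacement suggested by the deprecation notice below (tf.TensorArraySpec carries the same four pieces of information):
spec = tf.TensorArraySpec(element_shape=tf.TensorShape([]), dtype=tf.float32,
                          dynamic_size=False, infer_shape=True)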
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.TensorArraySpec instead. | tensorflow.compat.v1.data.experimental.tensorarraystructure |
tf.compat.v1.data.experimental.TensorStructure DEPRECATED FUNCTION
tf.compat.v1.data.experimental.TensorStructure(
dtype, shape
)
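A minimal sketch of the replacement suggested by the deprecation notice below:
spec = tf.TensorSpec(shape=(2,), dtype=tf.int32)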
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.TensorSpec instead. | tensorflow.compat.v1.data.experimental.tensorstructure |
tf.compat.v1.data.FixedLengthRecordDataset A Dataset of fixed-length records from one or more binary files. Inherits From: Dataset
tf.compat.v1.data.FixedLengthRecordDataset(
filenames, record_bytes, header_bytes=None, footer_bytes=None, buffer_size=None,
compression_type=None, num_parallel_reads=None
)
Args
filenames A tf.string tensor or tf.data.Dataset containing one or more filenames.
record_bytes A tf.int64 scalar representing the number of bytes in each record.
header_bytes (Optional.) A tf.int64 scalar representing the number of bytes to skip at the start of a file.
footer_bytes (Optional.) A tf.int64 scalar representing the number of bytes to ignore at the end of a file.
buffer_size (Optional.) A tf.int64 scalar representing the number of bytes to buffer when reading.
compression_type (Optional.) A tf.string scalar evaluating to one of "" (no compression), "ZLIB", or "GZIP".
num_parallel_reads (Optional.) A tf.int64 scalar representing the number of files to read in parallel. If greater than one, the records of files read in parallel are outputted in an interleaved order. If your input pipeline is I/O bottlenecked, consider setting this parameter to a value greater than one to parallelize the I/O. If None, files will be read sequentially.
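A minimal usage sketch (the file name and record layout are hypothetical); each element is a tf.string scalar holding exactly record_bytes bytes, which can then be decoded:
dataset = tf.compat.v1.data.FixedLengthRecordDataset(
    ["/var/data/records.bin"], record_bytes=16, header_bytes=4)
# Decode each 16-byte record into four float32 values.
dataset = dataset.map(lambda s: tf.io.decode_raw(s, tf.float32))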
Attributes
element_spec The type specification of an element of this dataset.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
output_classes Returns the class of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_classes(dataset).
output_shapes Returns the shape of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_shapes(dataset).
output_types Returns the type of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_types(dataset).
Methods apply View source
apply(
transformation_func
)
Applies a transformation function to this dataset. apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset.
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
Args
transformation_func A function that takes one Dataset argument and returns a Dataset.
Returns
Dataset The Dataset returned by applying transformation_func to this dataset. as_numpy_iterator View source
as_numpy_iterator()
Returns an iterator which converts all elements of the dataset to numpy. Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
print(element)
1
2
3
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
as_numpy_iterator() will preserve the nested structure of dataset elements.
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
Returns An iterable over the elements of the dataset, with their tensors converted to numpy arrays.
Raises
TypeError if an element contains a non-Tensor value.
RuntimeError if eager execution is not enabled. batch View source
batch(
batch_size, drop_remainder=False
)
Combines consecutive elements of this dataset into batches.
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset. cache View source
cache(
filename=''
)
Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to .cache() will have no effect until the cache file is removed or the filename is changed.
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache("/path/to/file") # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache.
Args
filename A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory.
Returns
Dataset A Dataset. cardinality View source
cardinality()
Returns the cardinality of the dataset, if known. cardinality may return tf.data.INFINITE_CARDINALITY if the dataset contains an infinite number of elements or tf.data.UNKNOWN_CARDINALITY if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file).
dataset = tf.data.Dataset.range(42)
print(dataset.cardinality().numpy())
42
dataset = dataset.repeat()
cardinality = dataset.cardinality()
print((cardinality == tf.data.INFINITE_CARDINALITY).numpy())
True
dataset = dataset.filter(lambda x: True)
cardinality = dataset.cardinality()
print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy())
True
Returns A scalar tf.int64 Tensor representing the cardinality of the dataset. If the cardinality is infinite or unknown, cardinality returns the named constants tf.data.INFINITE_CARDINALITY and tf.data.UNKNOWN_CARDINALITY respectively.
concatenate View source
concatenate(
dataset
)
Creates a Dataset by concatenating the given dataset with this dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
# The input dataset and dataset to be concatenated should have the same
# nested structures and output types.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
Args
dataset Dataset to be concatenated.
Returns
Dataset A Dataset. enumerate View source
enumerate(
start=0
)
Enumerates the elements of this dataset. It is similar to Python's enumerate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
print(element)
(5, 1)
(6, 2)
(7, 3)
# The nested structure of the input dataset determines the structure of
# elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
dataset = dataset.enumerate()
for element in dataset.as_numpy_iterator():
print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
Args
start A tf.int64 scalar tf.Tensor, representing the start value for enumeration.
Returns
Dataset A Dataset. filter View source
filter(
predicate
)
Filters this dataset according to predicate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
[1, 2]
# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
return tf.math.equal(x, 1)
dataset = dataset.filter(filter_fn)
list(dataset.as_numpy_iterator())
[1]
Args
predicate A function mapping a dataset element to a boolean.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. filter_with_legacy_function View source
filter_with_legacy_function(
predicate
)
Filters this dataset according to predicate. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.filter().
Note: This is an escape hatch for existing uses of filter that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to filter as this method will be removed in V2.
Args
predicate A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to a scalar tf.bool tensor.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. flat_map View source
flat_map(
map_func
)
Maps map_func across this dataset and flattens the result. Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:
dataset = tf.data.Dataset.from_tensor_slices(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7, 8, 9]
tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1)
Args
map_func A function mapping a dataset element to a dataset.
Returns
Dataset A Dataset. from_generator View source
@staticmethod
from_generator(
generator, output_types=None, output_shapes=None, args=None,
output_signature=None
)
Creates a Dataset whose elements are generated by generator. (deprecated arguments) Warning: SOME ARGUMENTS ARE DEPRECATED: (output_shapes, output_types). They will be removed in a future version. Instructions for updating: Use output_signature instead. The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with either the given output_signature argument or with the given output_types and (optionally) output_shapes arguments, whichever was specified. The recommended way to call from_generator is to use the output_signature argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by the tf.TypeSpec objects from the output_signature argument:
def gen():
ragged_tensor = tf.ragged.constant([[1, 2], [3]])
yield 42, ragged_tensor
dataset = tf.data.Dataset.from_generator(
gen,
output_signature=(
tf.TensorSpec(shape=(), dtype=tf.int32),
tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))
list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
There is also a deprecated way to call from_generator, by passing the output_types argument alone or together with the output_shapes argument. In this case the output of the function will be assumed to consist of tf.Tensor objects with the types defined by output_types and with shapes which are either unknown or defined by output_shapes.
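A minimal sketch of the deprecated calling convention (types given explicitly, shapes left unknown):
def gen():
  yield 1
  yield 2
dataset = tf.data.Dataset.from_generator(gen, output_types=tf.int32)
list(dataset.as_numpy_iterator())  # [1, 2]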
Note: The current implementation of Dataset.from_generator() uses tf.numpy_function and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment.
Note: If generator depends on mutable global variables or other external state, be aware that the runtime may invoke generator multiple times (in order to support repeating the Dataset) and at any time between the call to Dataset.from_generator() and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in generator before calling Dataset.from_generator().
Args
generator A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args.
output_types (Optional.) A nested structure of tf.DType objects corresponding to each component of an element yielded by generator.
output_shapes (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator.
args (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments.
output_signature (Optional.) A nested structure of tf.TypeSpec objects corresponding to each component of an element yielded by generator.
Returns
Dataset A Dataset. from_sparse_tensor_slices View source
@staticmethod
from_sparse_tensor_slices(
sparse_tensor
)
Splits each rank-N tf.sparse.SparseTensor in this dataset row-wise. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.from_tensor_slices().
Args
sparse_tensor A tf.sparse.SparseTensor.
Returns
Dataset A Dataset of rank-(N-1) sparse tensors. from_tensor_slices View source
@staticmethod
from_tensor_slices(
tensors
)
Creates a Dataset whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.
# Slicing a 1D tensor produces scalar tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
list(dataset.as_numpy_iterator())
[1, 2, 3]
# Slicing a 2D tensor produces 1D tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
# Slicing a tuple of 1D tensors produces tuple elements containing
# scalar tensors.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
# Dictionary structure is also preserved.
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
{'a': 2, 'b': 4}]
True
# Two tensors can be combined into one Dataset object.
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor
dataset = Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted
# to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features)
labels_dataset = Dataset.from_tensor_slices(labels)
dataset = Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset
# in similar fashion.
batched_features = tf.constant([[[1, 3], [2, 3]],
[[2, 1], [1, 2]],
[[3, 3], [3, 2]]], shape=(3, 2, 2))
batched_labels = tf.constant([['A', 'A'],
['B', 'B'],
['A', 'B']], shape=(3, 2, 1))
dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator():
print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element, with each component having the same size in the first dimension.
Returns
Dataset A Dataset. from_tensors View source
@staticmethod
from_tensors(
tensors
)
Creates a Dataset with a single element, comprising the given tensors. from_tensors produces a dataset containing only a single element. To slice the input tensor into multiple elements, use from_tensor_slices instead.
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element.
Returns
Dataset A Dataset. interleave View source
interleave(
map_func, cycle_length=None, block_length=None, num_parallel_calls=None,
deterministic=None
)
Maps map_func across this dataset, and interleaves the results. For example, you can use Dataset.interleave() to process many input files concurrently:
# Preprocess 4 files concurrently, and interleave blocks of 16 records
# from each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
def parse_fn(filename):
return tf.data.Dataset.range(10)
dataset = dataset.interleave(lambda x:
tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
cycle_length=4, block_length=16)
The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. If you set cycle_length to 1, this transformation will handle one input element at a time, and will produce identical results to tf.data.Dataset.flat_map. In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. For example:
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
lambda x: Dataset.from_tensors(x).repeat(6),
cycle_length=2, block_length=4)
list(dataset.as_numpy_iterator())
[1, 1, 1, 1,
2, 2, 2, 2,
1, 1,
2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
3, 3,
4, 4,
5, 5, 5, 5,
5, 5]
Note: The order of elements yielded by this transformation is deterministic, as long as map_func is a pure function and deterministic=True. If map_func contains any stateful operations, the order in which that state is accessed is undefined.
Performance can often be improved by setting num_parallel_calls so that interleave will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set deterministic=False.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to a dataset.
cycle_length (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If num_parallel_calls is set to tf.data.AUTOTUNE, the cycle_length argument identifies the maximum degree of parallelism.
block_length (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1.
num_parallel_calls (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. list_files View source
@staticmethod
list_files(
file_pattern, shuffle=None, seed=None
)
A dataset of all files matching one or more glob patterns. The file_pattern argument should be a small number of glob patterns. If your filenames have already been globbed, use Dataset.from_tensor_slices(filenames) instead, as re-globbing every filename with list_files may result in poor performance with remote storage systems.
Note: The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a seed or shuffle=False to get results in a deterministic order.
Example: If we had the following files on our filesystem: /path/to/dir/a.txt /path/to/dir/b.py /path/to/dir/c.py If we pass "/path/to/dir/*.py" as the file_pattern, the dataset would produce: /path/to/dir/b.py /path/to/dir/c.py
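As a minimal sketch (the directory and files here are hypothetical), the matched filenames can then be consumed like any other dataset:
dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
# Each element is a scalar tf.string tensor holding one matched path,
# e.g. b'/path/to/dir/b.py'.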
Args
file_pattern A string, a list of strings, or a tf.Tensor of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched.
shuffle (Optional.) If True, the file names will be shuffled randomly. Defaults to True.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
Returns
Dataset A Dataset of strings corresponding to file names. make_initializable_iterator View source
make_initializable_iterator(
shared_name=None
)
Creates an iterator for elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating an iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_initializable_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
Note: The returned iterator will be in an uninitialized state, and you must run the iterator.initializer operation before using it:
# Building graph ...
dataset = ...
iterator = dataset.make_initializable_iterator()
next_value = iterator.get_next() # This is a Tensor.
# ... from within a session ...
sess.run(iterator.initializer)
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
Args
shared_name (Optional.) If non-empty, the returned iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server).
Returns A tf.data.Iterator for elements of this dataset.
Raises
RuntimeError If eager execution is enabled. make_one_shot_iterator View source
make_one_shot_iterator()
Creates an iterator for elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating an iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_one_shot_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
Note: The returned iterator will be initialized automatically. A "one-shot" iterator does not currently support re-initialization. For that see make_initializable_iterator.
Example: # Building graph ...
dataset = ...
next_value = dataset.make_one_shot_iterator().get_next()
# ... from within a session ...
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
Returns A tf.data.Iterator for elements of this dataset.
map View source
map(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. map_func can be used to change both the values and the structure of a dataset's elements. For example, adding 1 to each element, or projecting a subset of element components.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
The input signature of map_func is determined by the structure of each element in this dataset.
dataset = Dataset.range(5)
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = dataset.map(lambda x: x + 1)
# Each element is a tuple containing two `tf.Tensor` objects.
elements = [(1, "foo"), (2, "bar"), (3, "baz")]
dataset = tf.data.Dataset.from_generator(
lambda: elements, (tf.int32, tf.string))
# `map_func` takes two arguments of type `tf.Tensor`. This function
# projects out just the first component.
result = dataset.map(lambda x_int, y_str: x_int)
list(result.as_numpy_iterator())
[1, 2, 3]
# Each element is a dictionary mapping strings to `tf.Tensor` objects.
elements = ([{"a": 1, "b": "foo"},
{"a": 2, "b": "bar"},
{"a": 3, "b": "baz"}])
dataset = tf.data.Dataset.from_generator(
lambda: elements, {"a": tf.int32, "b": tf.string})
# `map_func` takes a single argument of type `dict` with the same keys
# as the elements.
result = dataset.map(lambda d: str(d["a"]) + d["b"])
The value or values returned by map_func determine the structure of each element in the returned dataset.
dataset = tf.data.Dataset.range(3)
# `map_func` returns two `tf.Tensor` objects.
def g(x):
return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(x):
return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
# `map_func` can return nested structures.
def i(x):
return (37.0, [42, 16]), "foo"
result = dataset.map(i)
result.element_spec
((TensorSpec(shape=(), dtype=tf.float32, name=None),
TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
TensorSpec(shape=(), dtype=tf.string, name=None))
map_func can accept as arguments and return any type of dataset element. Note that irrespective of the context in which map_func is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use tf.py_function, which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
# transform a string tensor to upper case string using a Python function
def upper_case_fn(t: tf.Tensor):
return t.numpy().decode('utf-8').upper()
d = d.map(lambda x: tf.py_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
3) Use tf.numpy_function, which also allows you to write arbitrary Python code. Note that tf.py_function accepts tf.Tensor whereas tf.numpy_function accepts numpy arrays and returns only numpy arrays. For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
def upper_case_fn(t: np.ndarray):
return t.decode('utf-8').upper()
d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
Note that the use of tf.numpy_function and tf.py_function in general precludes the possibility of executing user-defined transformations in parallel (because of the Python GIL). Performance can often be improved by setting num_parallel_calls so that map will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set deterministic=False.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to another dataset element.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. map_with_legacy_function View source
map_with_legacy_function(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.map() instead.
Note: This is an escape hatch for existing uses of map that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to map as this method will be removed in V2.
Args
map_func A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to another nested structure of tensors.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. options View source
options()
Returns the options for this dataset and its inputs.
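For example:
options = tf.data.Options()
options.experimental_deterministic = False
dataset = tf.data.Dataset.range(3).with_options(options)
dataset.options().experimental_deterministic
False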
Returns A tf.data.Options object representing the dataset options.
padded_batch View source
padded_batch(
batch_size, padded_shapes=None, padding_values=None, drop_remainder=False
)
Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like tf.data.Dataset.batch, the components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Unlike tf.data.Dataset.batch, the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in padded_shapes. The padded_shapes argument determines the resulting shape for each dimension of each component in an output element: If the dimension is a constant, the component will be padded out to that length in that dimension. If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension.
A = (tf.data.Dataset
.range(1, 5, output_type=tf.int32)
.map(lambda x: tf.fill([x], x)))
# Pad to the smallest per-batch size that fits all elements.
B = A.padded_batch(2)
for element in B.as_numpy_iterator():
print(element)
[[1 0]
[2 2]]
[[3 3 3 0]
[4 4 4 4]]
# Pad to a fixed size.
C = A.padded_batch(2, padded_shapes=5)
for element in C.as_numpy_iterator():
print(element)
[[1 0 0 0 0]
[2 2 0 0 0]]
[[3 3 3 0 0]
[4 4 4 4 0]]
# Pad with a custom value.
D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
for element in D.as_numpy_iterator():
print(element)
[[ 1 -1 -1 -1 -1]
[ 2 2 -1 -1 -1]]
[[ 3 3 3 -1 -1]
[ 4 4 4 4 -1]]
# Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]),
([4, 5], [11, 12])]
dataset = tf.data.Dataset.from_generator(
lambda: iter(elements), (tf.int32, tf.int32))
# Pad the first component of the tuple to length 4, and the second
# component to the smallest size that fits.
dataset = dataset.padded_batch(2,
padded_shapes=([4], [None]),
padding_values=(-1, 100))
list(dataset.as_numpy_iterator())
[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
array([[ 10, 100], [ 11, 12]], dtype=int32))]
# Pad with a single value and multiple components.
E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
for element in E.as_numpy_iterator():
print(element)
(array([[ 1, -1],
[ 2, 2]], dtype=int32), array([[ 1, -1],
[ 2, 2]], dtype=int32))
(array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32))
See also tf.data.experimental.dense_to_sparse_batch, which combines elements that may have different shapes into a tf.sparse.SparseTensor.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
padded_shapes (Optional.) A nested structure of tf.TensorShape or tf.int64 vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. padded_shapes must be set if any component has an unknown rank.
padding_values (Optional.) A nested structure of scalar-shaped tf.Tensor, representing the padding values to use for the respective components. None represents that the nested structure should be padded with default values. Defaults are 0 for numeric types and the empty string for string types. The padding_values should have the same structure as the input dataset. If padding_values is a single element and the input dataset has multiple components, then the same padding_values will be used to pad every component of the dataset. If padding_values is a scalar, then its value will be broadcasted to match the shape of each component.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset.
Raises
ValueError If a component has an unknown rank, and the padded_shapes argument is not set. prefetch View source
prefetch(
buffer_size
)
Creates a Dataset that prefetches elements from this dataset. Most dataset input pipelines should end with a call to prefetch. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
Note: Like other Dataset methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. examples.prefetch(2) will prefetch two elements (2 examples), while examples.batch(20).prefetch(2) will prefetch 2 elements (2 batches, of 20 examples each).
dataset = tf.data.Dataset.range(3)
dataset = dataset.prefetch(2)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching.
Returns
Dataset A Dataset. range View source
@staticmethod
range(
*args, **kwargs
)
Creates a Dataset of a step-separated range of values.
list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
list(Dataset.range(5, 1).as_numpy_iterator())
[]
list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
Args
*args follows the same semantics as Python's range. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2].
**kwargs output_type: Its expected dtype. (Optional, default: tf.int64).
Returns
Dataset A RangeDataset.
Raises
ValueError if len(args) == 0. reduce View source
reduce(
initial_state, reduce_func
)
Reduces the input dataset to a single element. The transformation calls reduce_func successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The initial_state argument is used for the initial state and the final state is returned as the result.
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
10
Args
initial_state An element representing the initial state of the transformation.
reduce_func A function that maps (old_state, input_element) to new_state. It must take two arguments and return a new element. The structure of new_state must match the structure of initial_state.
Returns A dataset element corresponding to the final state of the transformation.
repeat View source
repeat(
count=None
)
Repeats this dataset so each original value is seen count times.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
Note: If this dataset is a function of global state (e.g. a random number generator), then different repetitions may produce different elements.
Args
count (Optional.) A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely.
Returns
Dataset A Dataset. shard View source
shard(
num_shards, index
)
Creates a Dataset that includes only 1/num_shards of this dataset. shard is deterministic. The Dataset produced by A.shard(n, i) will contain all elements of A whose index mod n = i.
A = tf.data.Dataset.range(10)
B = A.shard(num_shards=3, index=0)
list(B.as_numpy_iterator())
[0, 3, 6, 9]
C = A.shard(num_shards=3, index=1)
list(C.as_numpy_iterator())
[1, 4, 7]
D = A.shard(num_shards=3, index=2)
list(D.as_numpy_iterator())
[2, 5, 8]
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows: d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Important caveats: Be sure to shard before you use any randomizing operator (such as shuffle). Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline: d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Args
num_shards A tf.int64 scalar tf.Tensor, representing the number of shards operating in parallel.
index A tf.int64 scalar tf.Tensor, representing the worker index.
Returns
Dataset A Dataset.
Raises
InvalidArgumentError if num_shards or index are illegal values.
Note: error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.)
shuffle View source
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None
)
Randomly shuffles the elements of this dataset. This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001st) element, maintaining the 1,000-element buffer. reshuffle_each_iteration controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the repeat transformation:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 0, 2]
In TF 2.0, tf.data.Dataset objects are Python iterables which makes it possible to also create epochs through Python iteration:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the number of elements from this dataset from which the new dataset will sample.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
reshuffle_each_iteration (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to True.)
Returns
Dataset A Dataset. skip View source
skip(
count
)
Creates a Dataset that skips count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.skip(7)
list(dataset.as_numpy_iterator())
[7, 8, 9]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be skipped to form the new dataset. If count is greater than the size of this dataset, the new dataset will contain no elements. If count is -1, skips the entire dataset.
Returns
Dataset A Dataset. take View source
take(
count
)
Creates a Dataset with at most count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.take(3)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be taken to form the new dataset. If count is -1, or if count is greater than the size of this dataset, the new dataset will contain all elements of this dataset.
Returns
Dataset A Dataset. unbatch View source
unbatch()
Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped [B, a0, a1, ...], where B may vary for each input element, then for each element in the dataset, the unbatched dataset will contain B consecutive elements of shape [a0, a1, ...].
elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
dataset = dataset.unbatch()
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 1, 2, 3, 4]
Note: unbatch requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of unbatch.
Returns A Dataset.
window View source
window(
size, shift=None, stride=1, drop_remainder=False
)
Combines (nests of) input elements into a dataset of (nests of) windows. A "window" is a finite dataset of flat elements of size size (or possibly fewer if there are not enough input elements to fill the window and drop_remainder evaluates to False). The shift argument determines the number of input elements by which the window moves on each iteration. If windows and elements are both numbered starting at 0, the first element in window k will be element k * shift of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. The stride argument determines the spacing between the input elements that are retained within each window: a stride of n keeps every n-th input element. For example:
dataset = tf.data.Dataset.range(7).window(2)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1]
[2, 3]
[4, 5]
[6]
dataset = tf.data.Dataset.range(7).window(3, 2, 1, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1, 2]
[2, 3, 4]
[4, 5, 6]
dataset = tf.data.Dataset.range(7).window(3, 1, 2, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
Note that when the window transformation is applied to a dataset of nested elements, it produces a dataset of nested windows.
nested = ([1, 2, 3, 4], [5, 6, 7, 8])
dataset = tf.data.Dataset.from_tensor_slices(nested).window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print(tuple(to_numpy(component) for component in window))
([1, 2], [5, 6])
([3, 4], [7, 8])
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]})
dataset = dataset.window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print({'a': to_numpy(window['a'])})
{'a': [1, 2]}
{'a': [3, 4]}
Args
size A tf.int64 scalar tf.Tensor, representing the number of elements of the input dataset to combine into a window. Must be positive.
shift (Optional.) A tf.int64 scalar tf.Tensor, representing the number of input elements by which the window moves in each iteration. Defaults to size. Must be positive.
stride (Optional.) A tf.int64 scalar tf.Tensor, representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element".
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last windows should be dropped if their size is smaller than size.
Returns
Dataset A Dataset of (nests of) windows -- finite datasets of flat elements created from the (nests of) input elements. with_options View source
with_options(
options
)
Returns a new tf.data.Dataset with the given options set. The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
cycle_length=3,
num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.experimental_deterministic = False
ds = ds.with_options(options)
Args
options A tf.data.Options that identifies the options to use.
Returns
Dataset A Dataset with the given options.
Raises
ValueError when an option is set more than once to a non-default value. zip View source
@staticmethod
zip(
datasets
)
Creates a Dataset by zipping together the given datasets. This method has similar semantics to the built-in zip() function in Python, with the main difference being that the datasets argument can be an arbitrary nested structure of Dataset objects.
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
# [9, 10],
# [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
Args
datasets A nested structure of datasets.
Returns
Dataset A Dataset. __bool__ View source
__bool__()
__iter__ View source
__iter__()
Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol.
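For example, in eager mode:
dataset = tf.data.Dataset.range(2)
iterator = iter(dataset)
next(iterator)
<tf.Tensor: shape=(), dtype=int64, numpy=0>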
Returns A tf.data.Iterator for the elements of this dataset.
Raises
RuntimeError If not inside of tf.function and not executing eagerly. __len__ View source
__len__()
Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use tf.data.Dataset.cardinality instead.
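For example:
dataset = tf.data.Dataset.range(4)
len(dataset)
4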
Returns An integer representing the length of the dataset.
Raises
RuntimeError If the dataset length is unknown or infinite, or if eager execution is not enabled. __nonzero__ View source
__nonzero__() | tensorflow.compat.v1.data.fixedlengthrecorddataset |
tf.compat.v1.data.get_output_classes Returns the output classes for elements of the input dataset / iterator.
tf.compat.v1.data.get_output_classes(
dataset_or_iterator
)
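For example, a minimal sketch:
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
# For a dataset of dense tensors, the class of each component is tf.Tensor.
tf.compat.v1.data.get_output_classes(dataset) # ==> tf.Tensor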
Args
dataset_or_iterator A tf.data.Dataset or tf.data.Iterator.
Returns A nested structure of Python type objects matching the structure of the dataset / iterator elements and specifying the class of the individual components. | tensorflow.compat.v1.data.get_output_classes |
tf.compat.v1.data.get_output_shapes Returns the output shapes for elements of the input dataset / iterator.
tf.compat.v1.data.get_output_shapes(
dataset_or_iterator
)
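For example, a minimal sketch:
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
# Each element is a 1-D tensor with two components.
tf.compat.v1.data.get_output_shapes(dataset) # ==> TensorShape([2])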
Args
dataset_or_iterator A tf.data.Dataset or tf.data.Iterator.
Returns A nested structure of tf.TensorShape objects matching the structure of the dataset / iterator elements and specifying the shape of the individual components. | tensorflow.compat.v1.data.get_output_shapes |
tf.compat.v1.data.get_output_types Returns the output types for elements of the input dataset / iterator.
tf.compat.v1.data.get_output_types(
dataset_or_iterator
)
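For example, a minimal sketch:
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3.0, 4.0]))
# The nested structure of the result mirrors the element structure.
tf.compat.v1.data.get_output_types(dataset) # ==> (tf.int32, tf.float32)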
Args
dataset_or_iterator A tf.data.Dataset or tf.data.Iterator.
Returns A nested structure of tf.DType objects matching the structure of the dataset / iterator elements and specifying the type of the individual components.
tf.compat.v1.data.Iterator Represents the state of iterating through a Dataset.
tf.compat.v1.data.Iterator(
iterator_resource, initializer, output_types, output_shapes, output_classes
)
Args
iterator_resource A tf.resource scalar tf.Tensor representing the iterator.
initializer A tf.Operation that should be run to initialize this iterator.
output_types A nested structure of tf.DType objects corresponding to each component of an element of this iterator.
output_shapes A nested structure of tf.TensorShape objects corresponding to each component of an element of this iterator.
output_classes A nested structure of Python type objects corresponding to each component of an element of this iterator.
Attributes
element_spec
initializer A tf.Operation that should be run to initialize this iterator.
output_classes Returns the class of each component of an element of this iterator. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_classes(iterator). The expected values are tf.Tensor and tf.sparse.SparseTensor.
output_shapes Returns the shape of each component of an element of this iterator. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_shapes(iterator).
output_types Returns the type of each component of an element of this iterator. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_types(iterator).
Methods from_string_handle View source
@staticmethod
from_string_handle(
string_handle, output_types, output_shapes=None, output_classes=None
)
Creates a new, uninitialized Iterator based on the given handle. This method allows you to define a "feedable" iterator where you can choose between concrete iterators by feeding a value in a tf.Session.run call. In that case, string_handle would be a tf.compat.v1.placeholder, and you would feed it with the value of tf.data.Iterator.string_handle in each step. For example, if you had two iterators that marked the current position in a training dataset and a test dataset, you could choose which to use in each step as follows: train_iterator = tf.data.Dataset(...).make_one_shot_iterator()
train_iterator_handle = sess.run(train_iterator.string_handle())
test_iterator = tf.data.Dataset(...).make_one_shot_iterator()
test_iterator_handle = sess.run(test_iterator.string_handle())
handle = tf.compat.v1.placeholder(tf.string, shape=[])
iterator = tf.data.Iterator.from_string_handle(
handle, train_iterator.output_types)
next_element = iterator.get_next()
loss = f(next_element)
train_loss = sess.run(loss, feed_dict={handle: train_iterator_handle})
test_loss = sess.run(loss, feed_dict={handle: test_iterator_handle})
Args
string_handle A scalar tf.Tensor of type tf.string that evaluates to a handle produced by the Iterator.string_handle() method.
output_types A nested structure of tf.DType objects corresponding to each component of an element of this dataset.
output_shapes (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element of this dataset. If omitted, each component will have an unconstrained shape.
output_classes (Optional.) A nested structure of Python type objects corresponding to each component of an element of this iterator. If omitted, each component is assumed to be of type tf.Tensor.
Returns An Iterator.
from_structure View source
@staticmethod
from_structure(
output_types, output_shapes=None, shared_name=None, output_classes=None
)
Creates a new, uninitialized Iterator with the given structure. This iterator-constructing method can be used to create an iterator that is reusable with many different datasets. The returned iterator is not bound to a particular dataset, and it has no initializer. To initialize the iterator, run the operation returned by Iterator.make_initializer(dataset). The following is an example: iterator = Iterator.from_structure(tf.int64, tf.TensorShape([]))
dataset_range = Dataset.range(10)
range_initializer = iterator.make_initializer(dataset_range)
dataset_evens = dataset_range.filter(lambda x: x % 2 == 0)
evens_initializer = iterator.make_initializer(dataset_evens)
# Define a model based on the iterator; in this example, the model_fn
# is expected to take scalar tf.int64 Tensors as input (see
# the definition of 'iterator' above).
prediction, loss = model_fn(iterator.get_next())
# Train for `num_epochs`, where for each epoch, we first iterate over
# dataset_range, and then iterate over dataset_evens.
for _ in range(num_epochs):
# Initialize the iterator to `dataset_range`
sess.run(range_initializer)
while True:
try:
pred, loss_val = sess.run([prediction, loss])
except tf.errors.OutOfRangeError:
break
# Initialize the iterator to `dataset_evens`
sess.run(evens_initializer)
while True:
try:
pred, loss_val = sess.run([prediction, loss])
except tf.errors.OutOfRangeError:
break
Args
output_types A nested structure of tf.DType objects corresponding to each component of an element of this dataset.
output_shapes (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element of this dataset. If omitted, each component will have an unconstrained shape.
shared_name (Optional.) If non-empty, this iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server).
output_classes (Optional.) A nested structure of Python type objects corresponding to each component of an element of this iterator. If omitted, each component is assumed to be of type tf.Tensor.
Returns An Iterator.
Raises
TypeError If the structures of output_shapes and output_types are not the same. get_next View source
get_next(
name=None
)
Returns a nested structure of tf.Tensors representing the next element. In graph mode, you should typically call this method once and use its result as the input to another computation. A typical loop will then call tf.Session.run on the result of that computation. The loop will terminate when the Iterator.get_next() operation raises tf.errors.OutOfRangeError. The following skeleton shows how to use this method when building a training loop: dataset = ... # A `tf.data.Dataset` object.
iterator = dataset.make_initializable_iterator()
next_element = iterator.get_next()
# Build a TensorFlow graph that does something with each element.
loss = model_function(next_element)
optimizer = ... # A `tf.compat.v1.train.Optimizer` object.
train_op = optimizer.minimize(loss)
with tf.compat.v1.Session() as sess:
try:
while True:
sess.run(train_op)
except tf.errors.OutOfRangeError:
pass
Note: It is legitimate to call Iterator.get_next() multiple times, e.g. when you are distributing different elements to multiple devices in a single step. However, a common pitfall arises when users call Iterator.get_next() in each iteration of their training loop. Iterator.get_next() adds ops to the graph, and executing each op allocates resources (including threads); as a consequence, invoking it in every iteration of a training loop causes slowdown and eventual resource exhaustion. To guard against this outcome, we log a warning when the number of uses crosses a fixed threshold of suspiciousness.
Args
name (Optional.) A name for the created operation.
Returns A nested structure of tf.Tensor objects.
get_next_as_optional View source
get_next_as_optional()
make_initializer View source
make_initializer(
dataset, name=None
)
Returns a tf.Operation that initializes this iterator on dataset.
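For example, a minimal graph-mode sketch:
dataset = tf.data.Dataset.range(5)
iterator = tf.compat.v1.data.Iterator.from_structure(
    tf.compat.v1.data.get_output_types(dataset),
    tf.compat.v1.data.get_output_shapes(dataset))
init_op = iterator.make_initializer(dataset)
# Within a session, sess.run(init_op) binds the iterator to `dataset`.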
Args
dataset A Dataset with compatible structure to this iterator.
name (Optional.) A name for the created operation.
Returns A tf.Operation that can be run to initialize this iterator on the given dataset.
Raises
TypeError If dataset and this iterator do not have a compatible element structure. string_handle View source
string_handle(
name=None
)
Returns a string-valued tf.Tensor that represents this iterator.
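For example, a minimal graph-mode sketch:
iterator = tf.compat.v1.data.make_one_shot_iterator(tf.data.Dataset.range(3))
handle_tensor = iterator.string_handle()
# Within a session, sess.run(handle_tensor) evaluates to a handle that can be
# fed to Iterator.from_string_handle.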
Args
name (Optional.) A name for the created operation.
Returns A scalar tf.Tensor of type tf.string. | tensorflow.compat.v1.data.iterator |
tf.compat.v1.data.make_initializable_iterator Creates an iterator for elements of dataset.
tf.compat.v1.data.make_initializable_iterator(
dataset, shared_name=None
)
Note: The returned iterator will be in an uninitialized state, and you must run the iterator.initializer operation before using it:
dataset = ...
iterator = tf.compat.v1.data.make_initializable_iterator(dataset)
# ...
sess.run(iterator.initializer)
Args
dataset A tf.data.Dataset.
shared_name (Optional.) If non-empty, the returned iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server).
Returns A tf.data.Iterator for elements of dataset.
Raises
RuntimeError If eager execution is enabled. | tensorflow.compat.v1.data.make_initializable_iterator |
tf.compat.v1.data.make_one_shot_iterator Creates an iterator for elements of dataset.
tf.compat.v1.data.make_one_shot_iterator(
dataset
)
Note: The returned iterator will be initialized automatically. A "one-shot" iterator does not support re-initialization.
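For example, a minimal graph-mode sketch:
# Building graph ...
dataset = tf.data.Dataset.range(3)
next_value = tf.compat.v1.data.make_one_shot_iterator(dataset).get_next()
# ... from within a session, sess.run(next_value) yields 0, 1, 2 in turn and
# then raises tf.errors.OutOfRangeError.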
Args
dataset A tf.data.Dataset.
Returns A tf.data.Iterator for elements of dataset. | tensorflow.compat.v1.data.make_one_shot_iterator |
tf.compat.v1.data.TextLineDataset A Dataset comprising lines from one or more text files. Inherits From: Dataset, Dataset
tf.compat.v1.data.TextLineDataset(
filenames, compression_type=None, buffer_size=None, num_parallel_reads=None
)
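For example, a minimal sketch (the filenames here are hypothetical):
dataset = tf.compat.v1.data.TextLineDataset(["/var/data/file1.txt",
                                             "/var/data/file2.txt"])
# Each element is a scalar tf.string tensor holding one line of text.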
Args
filenames A tf.string tensor or tf.data.Dataset containing one or more filenames.
compression_type (Optional.) A tf.string scalar evaluating to one of "" (no compression), "ZLIB", or "GZIP".
buffer_size (Optional.) A tf.int64 scalar denoting the number of bytes to buffer. A value of 0 results in the default buffering values chosen based on the compression type.
num_parallel_reads (Optional.) A tf.int64 scalar representing the number of files to read in parallel. If greater than one, the records of files read in parallel are outputted in an interleaved order. If your input pipeline is I/O bottlenecked, consider setting this parameter to a value greater than one to parallelize the I/O. If None, files will be read sequentially.
Attributes
element_spec The type specification of an element of this dataset.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
output_classes Returns the class of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_classes(dataset).
output_shapes Returns the shape of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_shapes(dataset).
output_types Returns the type of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_types(dataset).
Methods apply View source
apply(
transformation_func
)
Applies a transformation function to this dataset. apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset.
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
Args
transformation_func A function that takes one Dataset argument and returns a Dataset.
Returns
Dataset The Dataset returned by applying transformation_func to this dataset. as_numpy_iterator View source
as_numpy_iterator()
Returns an iterator which converts all elements of the dataset to numpy. Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
print(element)
1
2
3
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
as_numpy_iterator() will preserve the nested structure of dataset elements.
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
Returns An iterable over the elements of the dataset, with their tensors converted to numpy arrays.
Raises
TypeError if an element contains a non-Tensor value.
RuntimeError if eager execution is not enabled. batch View source
batch(
batch_size, drop_remainder=False
)
Combines consecutive elements of this dataset into batches.
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset. cache View source
cache(
filename=''
)
Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to .cache() will have no effect until the cache file is removed or the filename is changed.
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache("/path/to/file") # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache.
Args
filename A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory.
Returns
Dataset A Dataset. cardinality View source
cardinality()
Returns the cardinality of the dataset, if known. cardinality may return tf.data.INFINITE_CARDINALITY if the dataset contains an infinite number of elements or tf.data.UNKNOWN_CARDINALITY if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file).
dataset = tf.data.Dataset.range(42)
print(dataset.cardinality().numpy())
42
dataset = dataset.repeat()
cardinality = dataset.cardinality()
print((cardinality == tf.data.INFINITE_CARDINALITY).numpy())
True
dataset = dataset.filter(lambda x: True)
cardinality = dataset.cardinality()
print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy())
True
Returns A scalar tf.int64 Tensor representing the cardinality of the dataset. If the cardinality is infinite or unknown, cardinality returns the named constants tf.data.INFINITE_CARDINALITY and tf.data.UNKNOWN_CARDINALITY respectively.
concatenate View source
concatenate(
dataset
)
Creates a Dataset by concatenating the given dataset with this dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
# The input dataset and dataset to be concatenated should have the same
# nested structures and output types.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
Args
dataset Dataset to be concatenated.
Returns
Dataset A Dataset. enumerate View source
enumerate(
start=0
)
Enumerates the elements of this dataset. It is similar to Python's enumerate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
print(element)
(5, 1)
(6, 2)
(7, 3)
# The nested structure of the input dataset determines the structure of
# elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
dataset = dataset.enumerate()
for element in dataset.as_numpy_iterator():
print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
Args
start A tf.int64 scalar tf.Tensor, representing the start value for enumeration.
Returns
Dataset A Dataset. filter View source
filter(
predicate
)
Filters this dataset according to predicate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
[1, 2]
# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
return tf.math.equal(x, 1)
dataset = dataset.filter(filter_fn)
list(dataset.as_numpy_iterator())
[1]
Args
predicate A function mapping a dataset element to a boolean.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. filter_with_legacy_function View source
filter_with_legacy_function(
predicate
)
Filters this dataset according to predicate. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.filter() instead.
Note: This is an escape hatch for existing uses of filter that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to filter as this method will be removed in V2.
Args
predicate A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to a scalar tf.bool tensor.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. flat_map View source
flat_map(
map_func
)
Maps map_func across this dataset and flattens the result. Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:
dataset = tf.data.Dataset.from_tensor_slices(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7, 8, 9]
tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1).
Args
map_func A function mapping a dataset element to a dataset.
Returns
Dataset A Dataset. from_generator View source
@staticmethod
from_generator(
generator, output_types=None, output_shapes=None, args=None,
output_signature=None
)
Creates a Dataset whose elements are generated by generator. (deprecated arguments) Warning: SOME ARGUMENTS ARE DEPRECATED: (output_shapes, output_types). They will be removed in a future version. Instructions for updating: Use output_signature instead. The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with either the given output_signature argument or with the given output_types and (optionally) output_shapes arguments, whichever was specified. The recommended way to call from_generator is to use the output_signature argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by the tf.TypeSpec objects from the output_signature argument:
def gen():
ragged_tensor = tf.ragged.constant([[1, 2], [3]])
yield 42, ragged_tensor
dataset = tf.data.Dataset.from_generator(
gen,
output_signature=(
tf.TensorSpec(shape=(), dtype=tf.int32),
tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))
list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
There is also a deprecated way to call from_generator, either with the output_types argument alone or together with the output_shapes argument. In this case the output of the function will be assumed to consist of tf.Tensor objects with the types defined by output_types and with shapes which are either unknown or defined by output_shapes.
Note: The current implementation of Dataset.from_generator() uses tf.numpy_function and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment.
Note: If generator depends on mutable global variables or other external state, be aware that the runtime may invoke generator multiple times (in order to support repeating the Dataset) and at any time between the call to Dataset.from_generator() and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in generator before calling Dataset.from_generator().
Args
generator A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args.
output_types (Optional.) A nested structure of tf.DType objects corresponding to each component of an element yielded by generator.
output_shapes (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator.
args (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments.
output_signature (Optional.) A nested structure of tf.TypeSpec objects corresponding to each component of an element yielded by generator.
Returns
Dataset A Dataset. from_sparse_tensor_slices View source
@staticmethod
from_sparse_tensor_slices(
sparse_tensor
)
Splits each rank-N tf.sparse.SparseTensor in this dataset row-wise. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.from_tensor_slices().
Args
sparse_tensor A tf.sparse.SparseTensor.
Returns
Dataset A Dataset of rank-(N-1) sparse tensors. from_tensor_slices View source
@staticmethod
from_tensor_slices(
tensors
)
Creates a Dataset whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.
# Slicing a 1D tensor produces scalar tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
list(dataset.as_numpy_iterator())
[1, 2, 3]
# Slicing a 2D tensor produces 1D tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
# Slicing a tuple of 1D tensors produces tuple elements containing
# scalar tensors.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
# Dictionary structure is also preserved.
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
{'a': 2, 'b': 4}]
True
# Two tensors can be combined into one Dataset object.
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
labels = tf.constant(['A', 'B', 'A']) # ==> 3-element vector
dataset = Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted
# to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features)
labels_dataset = Dataset.from_tensor_slices(labels)
dataset = Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset
# in similar fashion.
batched_features = tf.constant([[[1, 3], [2, 3]],
[[2, 1], [1, 2]],
[[3, 3], [3, 2]]], shape=(3, 2, 2))
batched_labels = tf.constant([['A', 'A'],
['B', 'B'],
['A', 'B']], shape=(3, 2, 1))
dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator():
print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element, with each component having the same size in the first dimension.
Returns
Dataset A Dataset. from_tensors View source
@staticmethod
from_tensors(
tensors
)
Creates a Dataset with a single element, comprising the given tensors. from_tensors produces a dataset containing only a single element. To slice the input tensor into multiple elements, use from_tensor_slices instead.
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element.
Returns
Dataset A Dataset. interleave View source
interleave(
map_func, cycle_length=None, block_length=None, num_parallel_calls=None,
deterministic=None
)
Maps map_func across this dataset, and interleaves the results. For example, you can use Dataset.interleave() to process many input files concurrently:
# Preprocess 4 files concurrently, and interleave blocks of 16 records
# from each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
def parse_fn(filename):
return tf.data.Dataset.range(10)
dataset = dataset.interleave(lambda x:
tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
cycle_length=4, block_length=16)
The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. If you set cycle_length to 1, this transformation will handle one input element at a time, and will produce identical results to tf.data.Dataset.flat_map. In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. For example:
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
lambda x: Dataset.from_tensors(x).repeat(6),
cycle_length=2, block_length=4)
list(dataset.as_numpy_iterator())
[1, 1, 1, 1,
2, 2, 2, 2,
1, 1,
2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
3, 3,
4, 4,
5, 5, 5, 5,
5, 5]
Note: The order of elements yielded by this transformation is deterministic, as long as map_func is a pure function and deterministic=True. If map_func contains any stateful operations, the order in which that state is accessed is undefined.
Performance can often be improved by setting num_parallel_calls so that interleave will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set deterministic=False.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to a dataset.
cycle_length (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If num_parallel_calls is set to tf.data.AUTOTUNE, the cycle_length argument identifies the maximum degree of parallelism.
block_length (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1.
num_parallel_calls (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. list_files View source
@staticmethod
list_files(
file_pattern, shuffle=None, seed=None
)
A dataset of all files matching one or more glob patterns. The file_pattern argument should be a small number of glob patterns. If your filenames have already been globbed, use Dataset.from_tensor_slices(filenames) instead, as re-globbing every filename with list_files may result in poor performance with remote storage systems.
Note: The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a seed or shuffle=False to get results in a deterministic order.
Example: If we had the following files on our filesystem: /path/to/dir/a.txt /path/to/dir/b.py /path/to/dir/c.py If we pass "/path/to/dir/*.py" as the file_pattern, the dataset would produce: /path/to/dir/b.py /path/to/dir/c.py
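A minimal sketch of that example (the paths are illustrative; shuffle=False makes the order deterministic):
dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
for f in dataset:
  print(f.numpy())
# b'/path/to/dir/b.py'
# b'/path/to/dir/c.py'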
Args
file_pattern A string, a list of strings, or a tf.Tensor of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched.
shuffle (Optional.) If True, the file names will be shuffled randomly. Defaults to True.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
Returns
Dataset A Dataset of strings corresponding to file names. make_initializable_iterator View source
make_initializable_iterator(
shared_name=None
)
Creates an iterator for elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating an iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_initializable_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
Note: The returned iterator will be in an uninitialized state, and you must run the iterator.initializer operation before using it:
# Building graph ...
dataset = ...
iterator = dataset.make_initializable_iterator()
next_value = iterator.get_next() # This is a Tensor.
# ... from within a session ...
sess.run(iterator.initializer)
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
Args
shared_name (Optional.) If non-empty, the returned iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server).
Returns A tf.data.Iterator for elements of this dataset.
Raises
RuntimeError If eager execution is enabled. make_one_shot_iterator View source
make_one_shot_iterator()
Creates an iterator for elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating an iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_one_shot_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
Note: The returned iterator will be initialized automatically. A "one-shot" iterator does not currently support re-initialization. For that see make_initializable_iterator.
Example: # Building graph ...
dataset = ...
next_value = dataset.make_one_shot_iterator().get_next()
# ... from within a session ...
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
Returns A tf.data.Iterator for elements of this dataset.
map View source
map(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. map_func can be used to change both the values and the structure of a dataset's elements. For example, adding 1 to each element, or projecting a subset of element components.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
The input signature of map_func is determined by the structure of each element in this dataset.
dataset = Dataset.range(5)
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = dataset.map(lambda x: x + 1)
# Each element is a tuple containing two `tf.Tensor` objects.
elements = [(1, "foo"), (2, "bar"), (3, "baz")]
dataset = tf.data.Dataset.from_generator(
lambda: elements, (tf.int32, tf.string))
# `map_func` takes two arguments of type `tf.Tensor`. This function
# projects out just the first component.
result = dataset.map(lambda x_int, y_str: x_int)
list(result.as_numpy_iterator())
[1, 2, 3]
# Each element is a dictionary mapping strings to `tf.Tensor` objects.
elements = ([{"a": 1, "b": "foo"},
{"a": 2, "b": "bar"},
{"a": 3, "b": "baz"}])
dataset = tf.data.Dataset.from_generator(
lambda: elements, {"a": tf.int32, "b": tf.string})
# `map_func` takes a single argument of type `dict` with the same keys
# as the elements.
result = dataset.map(lambda d: str(d["a"]) + d["b"])
The value or values returned by map_func determine the structure of each element in the returned dataset.
dataset = tf.data.Dataset.range(3)
# `map_func` returns two `tf.Tensor` objects.
def g(x):
return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(x):
return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
# `map_func` can return nested structures.
def i(x):
return (37.0, [42, 16]), "foo"
result = dataset.map(i)
result.element_spec
((TensorSpec(shape=(), dtype=tf.float32, name=None),
TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
TensorSpec(shape=(), dtype=tf.string, name=None))
map_func can accept as arguments and return any type of dataset element. Note that irrespective of the context in which map_func is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use tf.py_function, which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
# transform a string tensor to upper case string using a Python function
def upper_case_fn(t: tf.Tensor):
return t.numpy().decode('utf-8').upper()
d = d.map(lambda x: tf.py_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
3) Use tf.numpy_function, which also allows you to write arbitrary Python code. Note that tf.py_function accepts tf.Tensor whereas tf.numpy_function accepts numpy arrays and returns only numpy arrays. For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
def upper_case_fn(t: np.ndarray):
return t.decode('utf-8').upper()
d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
Note that the use of tf.numpy_function and tf.py_function in general precludes the possibility of executing user-defined transformations in parallel (because of the Python GIL). Performance can often be improved by setting num_parallel_calls so that map will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set deterministic=False.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to another dataset element.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. map_with_legacy_function View source
map_with_legacy_function(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.map().
Note: This is an escape hatch for existing uses of map that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to map as this method will be removed in V2.
Args
map_func A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to another nested structure of tensors.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. options View source
options()
Returns the options for this dataset and its inputs.
Returns A tf.data.Options object representing the dataset options.
padded_batch View source
padded_batch(
batch_size, padded_shapes=None, padding_values=None, drop_remainder=False
)
Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like tf.data.Dataset.batch, the components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Unlike tf.data.Dataset.batch, the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in padded_shapes. The padded_shapes argument determines the resulting shape for each dimension of each component in an output element: If the dimension is a constant, the component will be padded out to that length in that dimension. If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension.
A = (tf.data.Dataset
.range(1, 5, output_type=tf.int32)
.map(lambda x: tf.fill([x], x)))
# Pad to the smallest per-batch size that fits all elements.
B = A.padded_batch(2)
for element in B.as_numpy_iterator():
print(element)
[[1 0]
[2 2]]
[[3 3 3 0]
[4 4 4 4]]
# Pad to a fixed size.
C = A.padded_batch(2, padded_shapes=5)
for element in C.as_numpy_iterator():
print(element)
[[1 0 0 0 0]
[2 2 0 0 0]]
[[3 3 3 0 0]
[4 4 4 4 0]]
# Pad with a custom value.
D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
for element in D.as_numpy_iterator():
print(element)
[[ 1 -1 -1 -1 -1]
[ 2 2 -1 -1 -1]]
[[ 3 3 3 -1 -1]
[ 4 4 4 4 -1]]
# Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]),
([4, 5], [11, 12])]
dataset = tf.data.Dataset.from_generator(
lambda: iter(elements), (tf.int32, tf.int32))
# Pad the first component of the tuple to length 4, and the second
# component to the smallest size that fits.
dataset = dataset.padded_batch(2,
padded_shapes=([4], [None]),
padding_values=(-1, 100))
list(dataset.as_numpy_iterator())
[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
array([[ 10, 100], [ 11, 12]], dtype=int32))]
# Pad with a single value and multiple components.
E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
for element in E.as_numpy_iterator():
print(element)
(array([[ 1, -1],
[ 2, 2]], dtype=int32), array([[ 1, -1],
[ 2, 2]], dtype=int32))
(array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32))
See also tf.data.experimental.dense_to_sparse_batch, which combines elements that may have different shapes into a tf.sparse.SparseTensor.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
padded_shapes (Optional.) A nested structure of tf.TensorShape or tf.int64 vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. padded_shapes must be set if any component has an unknown rank.
padding_values (Optional.) A nested structure of scalar-shaped tf.Tensor, representing the padding values to use for the respective components. None represents that the nested structure should be padded with default values. Defaults are 0 for numeric types and the empty string for string types. The padding_values should have the same structure as the input dataset. If padding_values is a single element and the input dataset has multiple components, then the same padding_values will be used to pad every component of the dataset. If padding_values is a scalar, then its value will be broadcasted to match the shape of each component.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset.
Raises
ValueError If a component has an unknown rank, and the padded_shapes argument is not set. prefetch View source
prefetch(
buffer_size
)
Creates a Dataset that prefetches elements from this dataset. Most dataset input pipelines should end with a call to prefetch. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
Note: Like other Dataset methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. examples.prefetch(2) will prefetch two elements (2 examples), while examples.batch(20).prefetch(2) will prefetch 2 elements (2 batches, of 20 examples each).
dataset = tf.data.Dataset.range(3)
dataset = dataset.prefetch(2)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching.
Returns
Dataset A Dataset. range View source
@staticmethod
range(
*args, **kwargs
)
Creates a Dataset of a step-separated range of values.
list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
list(Dataset.range(5, 1).as_numpy_iterator())
[]
list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
Args
*args follows the same semantics as Python's range. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2].
**kwargs output_type: The dtype of the elements. (Optional, default: tf.int64).
Returns
Dataset A RangeDataset.
Raises
ValueError if len(args) == 0. reduce View source
reduce(
initial_state, reduce_func
)
Reduces the input dataset to a single element. The transformation calls reduce_func successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The initial_state argument is used for the initial state and the final state is returned as the result.
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
10
Args
initial_state An element representing the initial state of the transformation.
reduce_func A function that maps (old_state, input_element) to new_state. It must take two arguments and return a new element. The structure of new_state must match the structure of initial_state.
Returns A dataset element corresponding to the final state of the transformation.
repeat View source
repeat(
count=None
)
Repeats this dataset so each original value is seen count times.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
Note: If this dataset is a function of global state (e.g. a random number generator), then different repetitions may produce different elements.
Args
count (Optional.) A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely.
Returns
Dataset A Dataset. shard View source
shard(
num_shards, index
)
Creates a Dataset that includes only 1/num_shards of this dataset. shard is deterministic. The Dataset produced by A.shard(n, i) will contain all elements of A whose index mod n = i.
A = tf.data.Dataset.range(10)
B = A.shard(num_shards=3, index=0)
list(B.as_numpy_iterator())
[0, 3, 6, 9]
C = A.shard(num_shards=3, index=1)
list(C.as_numpy_iterator())
[1, 4, 7]
D = A.shard(num_shards=3, index=2)
list(D.as_numpy_iterator())
[2, 5, 8]
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows: d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Important caveats: Be sure to shard before you use any randomizing operator (such as shuffle). Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline: d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Args
num_shards A tf.int64 scalar tf.Tensor, representing the number of shards operating in parallel.
index A tf.int64 scalar tf.Tensor, representing the worker index.
Returns
Dataset A Dataset.
Raises
InvalidArgumentError if num_shards or index are illegal values.
Note: error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.)
shuffle View source
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None
)
Randomly shuffles the elements of this dataset. This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. reshuffle_each_iteration controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the repeat transformation:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 0, 2]
In TF 2.0, tf.data.Dataset objects are Python iterables which makes it possible to also create epochs through Python iteration:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the number of elements from this dataset from which the new dataset will sample.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
reshuffle_each_iteration (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to True.)
Returns
Dataset A Dataset. skip View source
skip(
count
)
Creates a Dataset that skips count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.skip(7)
list(dataset.as_numpy_iterator())
[7, 8, 9]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be skipped to form the new dataset. If count is greater than the size of this dataset, the new dataset will contain no elements. If count is -1, skips the entire dataset.
Returns
Dataset A Dataset. take View source
take(
count
)
Creates a Dataset with at most count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.take(3)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be taken to form the new dataset. If count is -1, or if count is greater than the size of this dataset, the new dataset will contain all elements of this dataset.
Returns
Dataset A Dataset. unbatch View source
unbatch()
Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped [B, a0, a1, ...], where B may vary for each input element, then for each element in the dataset, the unbatched dataset will contain B consecutive elements of shape [a0, a1, ...].
elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
dataset = dataset.unbatch()
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 1, 2, 3, 4]
Note: unbatch requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of unbatch.
Returns A Dataset.
window View source
window(
size, shift=None, stride=1, drop_remainder=False
)
Combines (nests of) input elements into a dataset of (nests of) windows. A "window" is a finite dataset of flat elements of size size (or possibly fewer if there are not enough input elements to fill the window and drop_remainder evaluates to False). The shift argument determines the number of input elements by which the window moves on each iteration. If windows and elements are both numbered starting at 0, the first element in window k will be element k * shift of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. The stride argument determines the stride between input elements within a window, while the shift argument determines how far the window moves between iterations. For example:
dataset = tf.data.Dataset.range(7).window(2)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1]
[2, 3]
[4, 5]
[6]
dataset = tf.data.Dataset.range(7).window(3, 2, 1, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1, 2]
[2, 3, 4]
[4, 5, 6]
dataset = tf.data.Dataset.range(7).window(3, 1, 2, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
Note that when the window transformation is applied to a dataset of nested elements, it produces a dataset of nested windows.
nested = ([1, 2, 3, 4], [5, 6, 7, 8])
dataset = tf.data.Dataset.from_tensor_slices(nested).window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print(tuple(to_numpy(component) for component in window))
([1, 2], [5, 6])
([3, 4], [7, 8])
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]})
dataset = dataset.window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print({'a': to_numpy(window['a'])})
{'a': [1, 2]}
{'a': [3, 4]}
Args
size A tf.int64 scalar tf.Tensor, representing the number of elements of the input dataset to combine into a window. Must be positive.
shift (Optional.) A tf.int64 scalar tf.Tensor, representing the number of input elements by which the window moves in each iteration. Defaults to size. Must be positive.
stride (Optional.) A tf.int64 scalar tf.Tensor, representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element".
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last windows should be dropped if their size is smaller than size.
Returns
Dataset A Dataset of (nests of) windows -- finite datasets of flat elements created from the (nests of) input elements. with_options View source
with_options(
options
)
Returns a new tf.data.Dataset with the given options set. The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
cycle_length=3,
num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.experimental_deterministic = False
ds = ds.with_options(options)
Args
options A tf.data.Options that identifies the options to use.
Returns
Dataset A Dataset with the given options.
Raises
ValueError when an option is set more than once to a non-default value. zip View source
@staticmethod
zip(
datasets
)
Creates a Dataset by zipping together the given datasets. This method has similar semantics to the built-in zip() function in Python, with the main difference being that the datasets argument can be an arbitrary nested structure of Dataset objects.
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
# [9, 10],
# [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
Args
datasets A nested structure of datasets.
Returns
Dataset A Dataset. __bool__ View source
__bool__()
__iter__ View source
__iter__()
Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol.
Returns A tf.data.Iterator for the elements of this dataset.
Raises
RuntimeError If not inside of tf.function and not executing eagerly. __len__ View source
__len__()
Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use tf.data.Dataset.cardinality instead.
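For example (a minimal sketch; requires eager mode and a finite dataset):
dataset = tf.data.Dataset.range(42)
len(dataset)
# 42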
Returns An integer representing the length of the dataset.
Raises
RuntimeError If the dataset length is unknown or infinite, or if eager execution is not enabled. __nonzero__ View source
__nonzero__() | tensorflow.compat.v1.data.textlinedataset |
tf.compat.v1.data.TFRecordDataset A Dataset comprising records from one or more TFRecord files. Inherits From: Dataset
tf.compat.v1.data.TFRecordDataset(
filenames, compression_type=None, buffer_size=None, num_parallel_reads=None
)
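A brief usage sketch (the filenames are hypothetical and assumed to point at GZIP-compressed TFRecord files):
dataset = tf.compat.v1.data.TFRecordDataset(
    ["data/part-00000.tfrecord.gz", "data/part-00001.tfrecord.gz"],
    compression_type="GZIP",
    num_parallel_reads=2)
Each element of the resulting dataset is a scalar tf.string tensor containing one serialized record.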
Args
filenames A tf.string tensor or tf.data.Dataset containing one or more filenames.
compression_type (Optional.) A tf.string scalar evaluating to one of "" (no compression), "ZLIB", or "GZIP".
buffer_size (Optional.) A tf.int64 scalar representing the number of bytes in the read buffer. If your input pipeline is I/O bottlenecked, consider setting this parameter to a value of 1-100 MBs. If None, a sensible default for both local and remote file systems is used.
num_parallel_reads (Optional.) A tf.int64 scalar representing the number of files to read in parallel. If greater than one, the records of files read in parallel are outputted in an interleaved order. If your input pipeline is I/O bottlenecked, consider setting this parameter to a value greater than one to parallelize the I/O. If None, files will be read sequentially.
Raises
TypeError If any argument does not have the expected type.
ValueError If any argument does not have the expected shape.
Attributes
element_spec The type specification of an element of this dataset.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
output_classes Returns the class of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_classes(dataset).
output_shapes Returns the shape of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_shapes(dataset).
output_types Returns the type of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_types(dataset).
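The recommended replacements are free functions that work on any dataset; a minimal sketch (return values shown for illustration):
dataset = tf.compat.v1.data.Dataset.from_tensor_slices([1, 2, 3])
tf.compat.v1.data.get_output_types(dataset)
# tf.int32
tf.compat.v1.data.get_output_shapes(dataset)
# TensorShape([])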
Methods apply View source
apply(
transformation_func
)
Applies a transformation function to this dataset. apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset.
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
Args
transformation_func A function that takes one Dataset argument and returns a Dataset.
Returns
Dataset The Dataset returned by applying transformation_func to this dataset. as_numpy_iterator View source
as_numpy_iterator()
Returns an iterator which converts all elements of the dataset to numpy. Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
print(element)
1
2
3
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
as_numpy_iterator() will preserve the nested structure of dataset elements.
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
Returns An iterable over the elements of the dataset, with their tensors converted to numpy arrays.
Raises
TypeError if an element contains a non-Tensor value.
RuntimeError if eager execution is not enabled. batch View source
batch(
batch_size, drop_remainder=False
)
Combines consecutive elements of this dataset into batches.
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset. cache View source
cache(
filename=''
)
Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to .cache() will have no effect until the cache file is removed or the filename is changed.
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache("/path/to/file") # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache.
Args
filename A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory.
Returns
Dataset A Dataset. cardinality View source
cardinality()
Returns the cardinality of the dataset, if known. cardinality may return tf.data.INFINITE_CARDINALITY if the dataset contains an infinite number of elements or tf.data.UNKNOWN_CARDINALITY if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file).
dataset = tf.data.Dataset.range(42)
print(dataset.cardinality().numpy())
42
dataset = dataset.repeat()
cardinality = dataset.cardinality()
print((cardinality == tf.data.INFINITE_CARDINALITY).numpy())
True
dataset = dataset.filter(lambda x: True)
cardinality = dataset.cardinality()
print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy())
True
Returns A scalar tf.int64 Tensor representing the cardinality of the dataset. If the cardinality is infinite or unknown, cardinality returns the named constants tf.data.INFINITE_CARDINALITY and tf.data.UNKNOWN_CARDINALITY respectively.
concatenate View source
concatenate(
dataset
)
Creates a Dataset by concatenating the given dataset with this dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
# The input dataset and dataset to be concatenated should have the same
# nested structures and output types.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
Args
dataset Dataset to be concatenated.
Returns
Dataset A Dataset. enumerate View source
enumerate(
start=0
)
Enumerates the elements of this dataset. It is similar to Python's enumerate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
print(element)
(5, 1)
(6, 2)
(7, 3)
# The nested structure of the input dataset determines the structure of
# elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
dataset = dataset.enumerate()
for element in dataset.as_numpy_iterator():
print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
Args
start A tf.int64 scalar tf.Tensor, representing the start value for enumeration.
Returns
Dataset A Dataset. filter View source
filter(
predicate
)
Filters this dataset according to predicate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
[1, 2]
# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
return tf.math.equal(x, 1)
dataset = dataset.filter(filter_fn)
list(dataset.as_numpy_iterator())
[1]
Args
predicate A function mapping a dataset element to a boolean.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. filter_with_legacy_function View source
filter_with_legacy_function(
predicate
)
Filters this dataset according to predicate. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.filter().
Note: This is an escape hatch for existing uses of filter that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to filter as this method will be removed in V2.
Args
predicate A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to a scalar tf.bool tensor.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. flat_map View source
flat_map(
map_func
)
Maps map_func across this dataset and flattens the result. Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:
dataset = tf.data.Dataset.from_tensor_slices(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7, 8, 9]
tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1).
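For instance, these two pipelines produce identical sequences:
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
a = dataset.flat_map(tf.data.Dataset.from_tensor_slices)
b = dataset.interleave(tf.data.Dataset.from_tensor_slices, cycle_length=1)
list(a.as_numpy_iterator()) == list(b.as_numpy_iterator())
# True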
Args
map_func A function mapping a dataset element to a dataset.
Returns
Dataset A Dataset. from_generator View source
@staticmethod
from_generator(
generator, output_types=None, output_shapes=None, args=None,
output_signature=None
)
Creates a Dataset whose elements are generated by generator. (deprecated arguments) Warning: SOME ARGUMENTS ARE DEPRECATED: (output_shapes, output_types). They will be removed in a future version. Instructions for updating: Use output_signature instead. The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with either the given output_signature argument or with the given output_types and (optionally) output_shapes arguments, whichever was specified. The recommended way to call from_generator is to use the output_signature argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by the tf.TypeSpec objects from the output_signature argument:
def gen():
ragged_tensor = tf.ragged.constant([[1, 2], [3]])
yield 42, ragged_tensor
dataset = tf.data.Dataset.from_generator(
gen,
output_signature=(
tf.TensorSpec(shape=(), dtype=tf.int32),
tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))
list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
There is also a deprecated way to call from_generator, either with the output_types argument alone or together with the output_shapes argument. In this case the output of the function will be assumed to consist of tf.Tensor objects with the types defined by output_types and with shapes which are either unknown or defined by output_shapes.
Note: The current implementation of Dataset.from_generator() uses tf.numpy_function and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment.
Note: If generator depends on mutable global variables or other external state, be aware that the runtime may invoke generator multiple times (in order to support repeating the Dataset) and at any time between the call to Dataset.from_generator() and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in generator before calling Dataset.from_generator().
Args
generator A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args.
output_types (Optional.) A nested structure of tf.DType objects corresponding to each component of an element yielded by generator.
output_shapes (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator.
args (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments.
output_signature (Optional.) A nested structure of tf.TypeSpec objects corresponding to each component of an element yielded by generator.
Returns
Dataset A Dataset. from_sparse_tensor_slices View source
@staticmethod
from_sparse_tensor_slices(
sparse_tensor
)
Splits each rank-N tf.sparse.SparseTensor in this dataset row-wise. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.from_tensor_slices().
Args
sparse_tensor A tf.sparse.SparseTensor.
Returns
Dataset A Dataset of rank-(N-1) sparse tensors. from_tensor_slices View source
@staticmethod
from_tensor_slices(
tensors
)
Creates a Dataset whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.
# Slicing a 1D tensor produces scalar tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
list(dataset.as_numpy_iterator())
[1, 2, 3]
# Slicing a 2D tensor produces 1D tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
# Slicing a tuple of 1D tensors produces tuple elements containing
# scalar tensors.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
# Dictionary structure is also preserved.
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
{'a': 2, 'b': 4}]
True
# Two tensors can be combined into one Dataset object.
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
labels = tf.constant(['A', 'B', 'A']) # ==> 3-element vector
dataset = Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted
# to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features)
labels_dataset = Dataset.from_tensor_slices(labels)
dataset = Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset
# in similar fashion.
batched_features = tf.constant([[[1, 3], [2, 3]],
[[2, 1], [1, 2]],
[[3, 3], [3, 2]]], shape=(3, 2, 2))
batched_labels = tf.constant([['A', 'A'],
['B', 'B'],
['A', 'B']], shape=(3, 2, 1))
dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator():
print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element, with each component having the same size in the first dimension.
Returns
Dataset A Dataset. from_tensors View source
@staticmethod
from_tensors(
tensors
)
Creates a Dataset with a single element, comprising the given tensors. from_tensors produces a dataset containing only a single element. To slice the input tensor into multiple elements, use from_tensor_slices instead.
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element.
Returns
Dataset A Dataset. interleave View source
interleave(
map_func, cycle_length=None, block_length=None, num_parallel_calls=None,
deterministic=None
)
Maps map_func across this dataset, and interleaves the results. For example, you can use Dataset.interleave() to process many input files concurrently:
# Preprocess 4 files concurrently, and interleave blocks of 16 records
# from each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
def parse_fn(filename):
return tf.data.Dataset.range(10)
dataset = dataset.interleave(lambda x:
tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
cycle_length=4, block_length=16)
The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. If you set cycle_length to 1, this transformation will handle one input element at a time, and will produce identical results to tf.data.Dataset.flat_map. In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. For example:
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
lambda x: Dataset.from_tensors(x).repeat(6),
cycle_length=2, block_length=4)
list(dataset.as_numpy_iterator())
[1, 1, 1, 1,
2, 2, 2, 2,
1, 1,
2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
3, 3,
4, 4,
5, 5, 5, 5,
5, 5]
Note: The order of elements yielded by this transformation is deterministic, as long as map_func is a pure function and deterministic=True. If map_func contains any stateful operations, the order in which that state is accessed is undefined.
Performance can often be improved by setting num_parallel_calls so that interleave will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set deterministic=False.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to a dataset.
cycle_length (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If num_parallel_calls is set to tf.data.AUTOTUNE, the cycle_length argument identifies the maximum degree of parallelism.
block_length (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1.
num_parallel_calls (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. list_files View source
@staticmethod
list_files(
file_pattern, shuffle=None, seed=None
)
A dataset of all files matching one or more glob patterns. The file_pattern argument should be a small number of glob patterns. If your filenames have already been globbed, use Dataset.from_tensor_slices(filenames) instead, as re-globbing every filename with list_files may result in poor performance with remote storage systems.
Note: The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a seed or shuffle=False to get results in a deterministic order.
Example: If we had the following files on our filesystem: /path/to/dir/a.txt /path/to/dir/b.py /path/to/dir/c.py If we pass "/path/to/dir/*.py" as the directory, the dataset would produce: /path/to/dir/b.py /path/to/dir/c.py
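The same example as code (the paths are illustrative; shuffle=False makes the order deterministic):
dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
for filename in dataset:
  print(filename.numpy())
# b'/path/to/dir/b.py'
# b'/path/to/dir/c.py'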
Args
file_pattern A string, a list of strings, or a tf.Tensor of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched.
shuffle (Optional.) If True, the file names will be shuffled randomly. Defaults to True.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
Returns
Dataset A Dataset of strings corresponding to file names. make_initializable_iterator View source
make_initializable_iterator(
shared_name=None
)
Creates an iterator for elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating an iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_initializable_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
Note: The returned iterator will be in an uninitialized state, and you must run the iterator.initializer operation before using it:
# Building graph ...
dataset = ...
iterator = dataset.make_initializable_iterator()
next_value = iterator.get_next() # This is a Tensor.
# ... from within a session ...
sess.run(iterator.initializer)
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
Args
shared_name (Optional.) If non-empty, the returned iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server).
Returns A tf.data.Iterator for elements of this dataset.
Raises
RuntimeError If eager execution is enabled. make_one_shot_iterator View source
make_one_shot_iterator()
Creates an iterator for elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating an iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_one_shot_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
Note: The returned iterator will be initialized automatically. A "one-shot" iterator does not currently support re-initialization. For that see make_initializable_iterator.
Example: # Building graph ...
dataset = ...
next_value = dataset.make_one_shot_iterator().get_next()
# ... from within a session ...
try:
while True:
value = sess.run(next_value)
...
except tf.errors.OutOfRangeError:
pass
Returns A tf.data.Iterator for elements of this dataset.
map View source
map(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. map_func can be used to change both the values and the structure of a dataset's elements. For example, adding 1 to each element, or projecting a subset of element components.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
The input signature of map_func is determined by the structure of each element in this dataset.
dataset = Dataset.range(5)
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = dataset.map(lambda x: x + 1)
# Each element is a tuple containing two `tf.Tensor` objects.
elements = [(1, "foo"), (2, "bar"), (3, "baz")]
dataset = tf.data.Dataset.from_generator(
lambda: elements, (tf.int32, tf.string))
# `map_func` takes two arguments of type `tf.Tensor`. This function
# projects out just the first component.
result = dataset.map(lambda x_int, y_str: x_int)
list(result.as_numpy_iterator())
[1, 2, 3]
# Each element is a dictionary mapping strings to `tf.Tensor` objects.
elements = ([{"a": 1, "b": "foo"},
{"a": 2, "b": "bar"},
{"a": 3, "b": "baz"}])
dataset = tf.data.Dataset.from_generator(
lambda: elements, {"a": tf.int32, "b": tf.string})
# `map_func` takes a single argument of type `dict` with the same keys
# as the elements.
result = dataset.map(lambda d: str(d["a"]) + d["b"])
The value or values returned by map_func determine the structure of each element in the returned dataset.
dataset = tf.data.Dataset.range(3)
# `map_func` returns two `tf.Tensor` objects.
def g(x):
return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(x):
return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
# `map_func` can return nested structures.
def i(x):
return (37.0, [42, 16]), "foo"
result = dataset.map(i)
result.element_spec
((TensorSpec(shape=(), dtype=tf.float32, name=None),
TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
TensorSpec(shape=(), dtype=tf.string, name=None))
map_func can accept as arguments and return any type of dataset element. Note that irrespective of the context in which map_func is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use tf.py_function, which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
# transform a string tensor to upper case string using a Python function
def upper_case_fn(t: tf.Tensor):
return t.numpy().decode('utf-8').upper()
d = d.map(lambda x: tf.py_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
3) Use tf.numpy_function, which also allows you to write arbitrary Python code. Note that tf.py_function accepts tf.Tensor whereas tf.numpy_function accepts numpy arrays and returns only numpy arrays. For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
def upper_case_fn(t: np.ndarray):
return t.decode('utf-8').upper()
d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
Note that the use of tf.numpy_function and tf.py_function in general precludes the possibility of executing user-defined transformations in parallel (because of Python GIL). Performance can often be improved by setting num_parallel_calls so that map will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set deterministic=False.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to another dataset element.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. map_with_legacy_function View source
map_with_legacy_function(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.map().
Note: This is an escape hatch for existing uses of map that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to map as this method will be removed in V2.
Args
map_func A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to another nested structure of tensors.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. options View source
options()
Returns the options for this dataset and its inputs.
Returns A tf.data.Options object representing the dataset options.
padded_batch View source
padded_batch(
batch_size, padded_shapes=None, padding_values=None, drop_remainder=False
)
Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like tf.data.Dataset.batch, the components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Unlike tf.data.Dataset.batch, the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in padded_shapes. The padded_shapes argument determines the resulting shape for each dimension of each component in an output element: If the dimension is a constant, the component will be padded out to that length in that dimension. If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension.
A = (tf.data.Dataset
.range(1, 5, output_type=tf.int32)
.map(lambda x: tf.fill([x], x)))
# Pad to the smallest per-batch size that fits all elements.
B = A.padded_batch(2)
for element in B.as_numpy_iterator():
print(element)
[[1 0]
[2 2]]
[[3 3 3 0]
[4 4 4 4]]
# Pad to a fixed size.
C = A.padded_batch(2, padded_shapes=5)
for element in C.as_numpy_iterator():
print(element)
[[1 0 0 0 0]
[2 2 0 0 0]]
[[3 3 3 0 0]
[4 4 4 4 0]]
# Pad with a custom value.
D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
for element in D.as_numpy_iterator():
print(element)
[[ 1 -1 -1 -1 -1]
[ 2 2 -1 -1 -1]]
[[ 3 3 3 -1 -1]
[ 4 4 4 4 -1]]
# Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]),
([4, 5], [11, 12])]
dataset = tf.data.Dataset.from_generator(
lambda: iter(elements), (tf.int32, tf.int32))
# Pad the first component of the tuple to length 4, and the second
# component to the smallest size that fits.
dataset = dataset.padded_batch(2,
padded_shapes=([4], [None]),
padding_values=(-1, 100))
list(dataset.as_numpy_iterator())
[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
array([[ 10, 100], [ 11, 12]], dtype=int32))]
# Pad with a single value and multiple components.
E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
for element in E.as_numpy_iterator():
print(element)
(array([[ 1, -1],
[ 2, 2]], dtype=int32), array([[ 1, -1],
[ 2, 2]], dtype=int32))
(array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32))
See also tf.data.experimental.dense_to_sparse_batch, which combines elements that may have different shapes into a tf.sparse.SparseTensor.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
padded_shapes (Optional.) A nested structure of tf.TensorShape or tf.int64 vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. padded_shapes must be set if any component has an unknown rank.
padding_values (Optional.) A nested structure of scalar-shaped tf.Tensor, representing the padding values to use for the respective components. None represents that the nested structure should be padded with default values. Defaults are 0 for numeric types and the empty string for string types. The padding_values should have the same structure as the input dataset. If padding_values is a single element and the input dataset has multiple components, then the same padding_values will be used to pad every component of the dataset. If padding_values is a scalar, then its value will be broadcasted to match the shape of each component.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset.
Raises
ValueError If a component has an unknown rank, and the padded_shapes argument is not set. prefetch View source
prefetch(
buffer_size
)
Creates a Dataset that prefetches elements from this dataset. Most dataset input pipelines should end with a call to prefetch. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
Note: Like other Dataset methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. examples.prefetch(2) will prefetch two elements (2 examples), while examples.batch(20).prefetch(2) will prefetch 2 elements (2 batches, of 20 examples each).
dataset = tf.data.Dataset.range(3)
dataset = dataset.prefetch(2)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching.
Returns
Dataset A Dataset. range View source
@staticmethod
range(
*args, **kwargs
)
Creates a Dataset of a step-separated range of values.
list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
list(Dataset.range(5, 1).as_numpy_iterator())
[]
list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
Args
*args follows the same semantics as python's xrange. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2].
**kwargs output_type: Its expected dtype. (Optional, default: tf.int64).
Returns
Dataset A RangeDataset.
Raises
ValueError if len(args) == 0. reduce View source
reduce(
initial_state, reduce_func
)
Reduces the input dataset to a single element. The transformation calls reduce_func successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The initial_state argument is used for the initial state and the final state is returned as the result.
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
10
Args
initial_state An element representing the initial state of the transformation.
reduce_func A function that maps (old_state, input_element) to new_state. It must take two arguments and return a new element. The structure of new_state must match the structure of initial_state.
Returns A dataset element corresponding to the final state of the transformation.
repeat View source
repeat(
count=None
)
Repeats this dataset so each original value is seen count times.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
Note: If this dataset is a function of global state (e.g. a random number generator), then different repetitions may produce different elements.
Args
count (Optional.) A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely.
Returns
Dataset A Dataset. shard View source
shard(
num_shards, index
)
Creates a Dataset that includes only 1/num_shards of this dataset. shard is deterministic. The Dataset produced by A.shard(n, i) will contain all elements of A whose index mod n = i.
A = tf.data.Dataset.range(10)
B = A.shard(num_shards=3, index=0)
list(B.as_numpy_iterator())
[0, 3, 6, 9]
C = A.shard(num_shards=3, index=1)
list(C.as_numpy_iterator())
[1, 4, 7]
D = A.shard(num_shards=3, index=2)
list(D.as_numpy_iterator())
[2, 5, 8]
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows: d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Important caveats: Be sure to shard before you use any randomizing operator (such as shuffle). Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline: d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Args
num_shards A tf.int64 scalar tf.Tensor, representing the number of shards operating in parallel.
index A tf.int64 scalar tf.Tensor, representing the worker index.
Returns
Dataset A Dataset.
Raises
InvalidArgumentError if num_shards or index are illegal values.
Note: error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.)
shuffle View source
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None
)
Randomly shuffles the elements of this dataset. This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. reshuffle_each_iteration controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the repeat transformation:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 0, 2]
In TF 2.0, tf.data.Dataset objects are Python iterables which makes it possible to also create epochs through Python iteration:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the number of elements from this dataset from which the new dataset will sample.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
reshuffle_each_iteration (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to True.)
Returns
Dataset A Dataset. skip View source
skip(
count
)
Creates a Dataset that skips count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.skip(7)
list(dataset.as_numpy_iterator())
[7, 8, 9]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be skipped to form the new dataset. If count is greater than the size of this dataset, the new dataset will contain no elements. If count is -1, skips the entire dataset.
Returns
Dataset A Dataset. take View source
take(
count
)
Creates a Dataset with at most count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.take(3)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be taken to form the new dataset. If count is -1, or if count is greater than the size of this dataset, the new dataset will contain all elements of this dataset.
Returns
Dataset A Dataset. unbatch View source
unbatch()
Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped [B, a0, a1, ...], where B may vary for each input element, then for each element in the dataset, the unbatched dataset will contain B consecutive elements of shape [a0, a1, ...].
elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
dataset = dataset.unbatch()
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 1, 2, 3, 4]
Note: unbatch requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of unbatch.
Returns A Dataset.
window View source
window(
size, shift=None, stride=1, drop_remainder=False
)
Combines (nests of) input elements into a dataset of (nests of) windows. A "window" is a finite dataset of flat elements of size size (or possibly fewer if there are not enough input elements to fill the window and drop_remainder evaluates to False). The shift argument determines the number of input elements by which the window moves on each iteration. If windows and elements are both numbered starting at 0, the first element in window k will be element k * shift of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. The stride argument determines the stride of the input elements, and the shift argument determines the shift of the window. For example:
dataset = tf.data.Dataset.range(7).window(2)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1]
[2, 3]
[4, 5]
[6]
dataset = tf.data.Dataset.range(7).window(3, 2, 1, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1, 2]
[2, 3, 4]
[4, 5, 6]
dataset = tf.data.Dataset.range(7).window(3, 1, 2, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
Note that when the window transformation is applied to a dataset of nested elements, it produces a dataset of nested windows.
nested = ([1, 2, 3, 4], [5, 6, 7, 8])
dataset = tf.data.Dataset.from_tensor_slices(nested).window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print(tuple(to_numpy(component) for component in window))
([1, 2], [5, 6])
([3, 4], [7, 8])
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]})
dataset = dataset.window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print({'a': to_numpy(window['a'])})
{'a': [1, 2]}
{'a': [3, 4]}
Args
size A tf.int64 scalar tf.Tensor, representing the number of elements of the input dataset to combine into a window. Must be positive.
shift (Optional.) A tf.int64 scalar tf.Tensor, representing the number of input elements by which the window moves in each iteration. Defaults to size. Must be positive.
stride (Optional.) A tf.int64 scalar tf.Tensor, representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element".
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last windows should be dropped if their size is smaller than size.
Returns
Dataset A Dataset of (nests of) windows -- finite datasets of flat elements created from the (nests of) input elements. with_options View source
with_options(
options
)
Returns a new tf.data.Dataset with the given options set. The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
cycle_length=3,
num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.experimental_deterministic = False
ds = ds.with_options(options)
Args
options A tf.data.Options that identifies the options to use.
Returns
Dataset A Dataset with the given options.
Raises
ValueError when an option is set more than once to a non-default value. zip View source
@staticmethod
zip(
datasets
)
Creates a Dataset by zipping together the given datasets. This method has similar semantics to the built-in zip() function in Python, with the main difference being that the datasets argument can be an arbitrary nested structure of Dataset objects.
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
# [9, 10],
# [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
Args
datasets A nested structure of datasets.
Returns
Dataset A Dataset. __bool__ View source
__bool__()
__iter__ View source
__iter__()
Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol.
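For example, assuming eager execution:
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
iterator = iter(dataset)
print(next(iterator).numpy())  # 1
print(next(iterator).numpy())  # 2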
Returns A tf.data.Iterator for the elements of this dataset.
Raises
RuntimeError If not inside of tf.function and not executing eagerly. __len__ View source
__len__()
Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use tf.data.Dataset.cardinality instead.
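For example, in eager mode:
dataset = tf.data.Dataset.range(4)
len(dataset)  # 4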
Returns An integer representing the length of the dataset.
Raises
RuntimeError If the dataset length is unknown or infinite, or if eager execution is not enabled. __nonzero__ View source
__nonzero__() | tensorflow.compat.v1.data.tfrecorddataset |
Module: tf.compat.v1.debugging Public API for tf.debugging namespace. Modules experimental module: Public API for tf.debugging.experimental namespace. Functions Assert(...): Asserts that the given condition is true. assert_all_finite(...): Assert that the tensor does not contain any NaN's or Inf's. assert_equal(...): Assert the condition x == y holds element-wise. assert_greater(...): Assert the condition x > y holds element-wise. assert_greater_equal(...): Assert the condition x >= y holds element-wise. assert_integer(...): Assert that x is of integer dtype. assert_less(...): Assert the condition x < y holds element-wise. assert_less_equal(...): Assert the condition x <= y holds element-wise. assert_near(...): Assert the condition x and y are close element-wise. assert_negative(...): Assert the condition x < 0 holds element-wise. assert_non_negative(...): Assert the condition x >= 0 holds element-wise. assert_non_positive(...): Assert the condition x <= 0 holds element-wise. assert_none_equal(...): Assert the condition x != y holds element-wise. assert_positive(...): Assert the condition x > 0 holds element-wise. assert_proper_iterable(...): Static assert that values is a "proper" iterable. assert_rank(...): Assert x has rank equal to rank. assert_rank_at_least(...): Assert x has rank equal to rank or higher. assert_rank_in(...): Assert x has rank in ranks. assert_same_float_dtype(...): Validate and return float type based on tensors and dtype. assert_scalar(...): Asserts that the given tensor is a scalar (i.e. zero-dimensional). assert_shapes(...): Assert tensor shapes and dimension size relationships between tensors. assert_type(...): Statically asserts that the given Tensor is of the specified type. check_numerics(...): Checks a tensor for NaN and Inf values. disable_check_numerics(...): Disable the eager/graph unified numerics checking mechanism. enable_check_numerics(...): Enable tensor numerics checking in an eager/graph unified fashion. get_log_device_placement(...): Get if device placements are logged. is_finite(...): Returns which elements of x are finite. is_inf(...): Returns which elements of x are Inf. is_nan(...): Returns which elements of x are NaN. is_non_decreasing(...): Returns True if x is non-decreasing. is_numeric_tensor(...): Returns True if the elements of tensor are numbers. is_strictly_increasing(...): Returns True if x is strictly increasing. set_log_device_placement(...): Set if device placements should be logged. | tensorflow.compat.v1.debugging |
tf.compat.v1.debugging.assert_shapes Assert tensor shapes and dimension size relationships between tensors.
tf.compat.v1.debugging.assert_shapes(
shapes, data=None, summarize=None, message=None, name=None
)
This Op checks that a collection of tensors shape relationships satisfies given constraints. Example:
n = 10
q = 3
d = 7
x = tf.zeros([n,q])
y = tf.ones([n,d])
param = tf.Variable([1.0, 2.0, 3.0])
scalar = 1.0
tf.debugging.assert_shapes([
(x, ('N', 'Q')),
(y, ('N', 'D')),
(param, ('Q',)),
(scalar, ()),
])
tf.debugging.assert_shapes([
(x, ('N', 'D')),
(y, ('N', 'D'))
])
Traceback (most recent call last):
ValueError: ...
Example of adding a dependency to an operation: with tf.control_dependencies([tf.assert_shapes(shapes)]):
output = tf.matmul(x, y, transpose_a=True)
If x, y, param or scalar does not have a shape that satisfies all specified constraints, message, as well as the first summarize entries of the first encountered violating tensor are printed, and InvalidArgumentError is raised. Size entries in the specified shapes are checked against other entries by their hash, except: a size entry is interpreted as an explicit size if it can be parsed as an integer primitive. a size entry is interpreted as any size if it is None or '.'. If the first entry of a shape is ... (type Ellipsis) or '*' that indicates a variable number of outer dimensions of unspecified size, i.e. the constraint applies to the inner-most dimensions only. Scalar tensors and specified shapes of length zero (excluding the 'inner-most' prefix) are both treated as having a single dimension of size one.
Args
shapes A list of (Tensor, shape) tuples, wherein shape is the expected shape of Tensor. See the example code above. The shape must be an iterable. Each element of the iterable can be either a concrete integer value or a string that abstractly represents the dimension. For example,
('N', 'Q') specifies a 2D shape wherein the first and second dimensions of shape may or may not be equal.
('N', 'N', 'Q') specifies a 3D shape wherein the first and second dimensions are equal.
(1, 'N') specifies a 2D shape wherein the first dimension is exactly 1 and the second dimension can be any value. Note that the abstract dimension letters take effect across different tuple elements of the list. For example, tf.debugging.assert_shapes([(x, ('N', 'A')), (y, ('N', 'B'))]) asserts that both x and y are rank-2 tensors and their first dimensions are equal (N). shape can also be a tf.TensorShape.
data The tensors to print out if the condition is False. Defaults to error message and first few entries of the violating tensor.
summarize Print this many entries of the tensor.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_shapes".
Returns Op raising InvalidArgumentError unless all shape constraints are satisfied. If static checks determine all constraints are satisfied, a no_op is returned.
Raises
ValueError If static checks determine any shape constraint is violated. | tensorflow.compat.v1.debugging.assert_shapes |
Module: tf.compat.v1.debugging.experimental Public API for tf.debugging.experimental namespace. Functions disable_dump_debug_info(...): Disable the currently-enabled debugging dumping. enable_dump_debug_info(...): Enable dumping debugging information from a TensorFlow program. | tensorflow.compat.v1.debugging.experimental |
tf.compat.v1.decode_csv Convert CSV records to tensors. Each column maps to one tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.io.decode_csv
tf.compat.v1.decode_csv(
records, record_defaults, field_delim=',', use_quote_delim=True,
name=None, na_value='', select_cols=None
)
RFC 4180 format is expected for the CSV records. (https://tools.ietf.org/html/rfc4180) Note that we allow leading and trailing spaces with int or float fields.
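A minimal sketch (the records are illustrative); each entry in record_defaults fixes the dtype of the corresponding column:
records = tf.constant(["1,2.5,foo", "3,4.5,bar"])
record_defaults = [[0], [0.0], [""]]  # int32, float32, and string columns
col_int, col_float, col_str = tf.compat.v1.decode_csv(records, record_defaults)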
Args
records A Tensor of type string. Each string is a record/row in the csv and all records should have the same format.
record_defaults A list of Tensor objects with specific types. Acceptable types are float32, float64, int32, int64, string. One tensor per column of the input record, with either a scalar default value for that column or an empty vector if the column is required.
field_delim An optional string. Defaults to ",". char delimiter to separate fields in a record.
use_quote_delim An optional bool. Defaults to True. If false, treats double quotation marks as regular characters inside of the string fields (ignoring RFC 4180, Section 2, Bullet 5).
name A name for the operation (optional).
na_value Additional string to recognize as NA/NaN.
select_cols Optional sorted list of column indices to select. If specified, only this subset of columns will be parsed and returned.
Returns A list of Tensor objects. Has the same type as record_defaults. Each tensor will have the same shape as records.
Raises
ValueError If any of the arguments is malformed. | tensorflow.compat.v1.decode_csv |
tf.compat.v1.decode_raw Convert raw byte strings into tensors. (deprecated arguments) View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.io.decode_raw
tf.compat.v1.decode_raw(
input_bytes=None, out_type=None, little_endian=True, name=None, bytes=None
)
Warning: SOME ARGUMENTS ARE DEPRECATED: (bytes). They will be removed in a future version. Instructions for updating: bytes is deprecated, use input_bytes instead
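A minimal sketch: each input string is decoded into a vector of bytes.
raw = tf.constant(["ABCD"])
decoded = tf.compat.v1.decode_raw(raw, out_type=tf.uint8)
# decoded == [[65, 66, 67, 68]]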
Args
input_bytes Each element of the input Tensor is converted to an array of bytes.
out_type DType of the output. Acceptable types are half, float, double, int32, uint16, uint8, int16, int8, int64.
little_endian Whether the input_bytes data is in little-endian format. Data will be converted into host byte order if necessary.
name A name for the operation (optional).
bytes Deprecated parameter. Use input_bytes instead.
Returns A Tensor object storing the decoded bytes. | tensorflow.compat.v1.decode_raw |
tf.compat.v1.delete_session_tensor Delete the tensor for the given tensor handle.
tf.compat.v1.delete_session_tensor(
handle, name=None
)
This is EXPERIMENTAL and subject to change. Delete the tensor of a given tensor handle. The tensor is produced in a previous run() and stored in the state of the session.
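A hedged sketch of the handle round trip, assuming TF1 graph mode and tf.compat.v1.get_session_handle to produce the persistent handle:
with tf.compat.v1.Session() as sess:
  a = tf.constant(37.0)
  # Store `a` in the session state and fetch its persistent handle.
  h = sess.run(tf.compat.v1.get_session_handle(a))
  # Build the feed placeholder and the deletion op, then run the deleter.
  holder, deleter = tf.compat.v1.delete_session_tensor(h.handle)
  sess.run(deleter, feed_dict={holder: h.handle})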
Args
handle The string representation of a persistent tensor handle.
name Optional name prefix for the return tensor.
Returns A pair of graph elements. The first is a placeholder for feeding a tensor handle and the second is a deletion operation. | tensorflow.compat.v1.delete_session_tensor |
tf.compat.v1.depth_to_space DepthToSpace for tensors of type T. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.nn.depth_to_space
tf.compat.v1.depth_to_space(
input, block_size, name=None, data_format='NHWC'
)
Rearranges data from depth into blocks of spatial data. This is the reverse transformation of SpaceToDepth. More specifically, this op outputs a copy of the input tensor where values from the depth dimension are moved in spatial blocks to the height and width dimensions. The attr block_size indicates the input block size and how the data is moved. Chunks of data of size block_size * block_size from depth are rearranged into non-overlapping blocks of size block_size x block_size
The width of the output tensor is input_width * block_size, whereas the height is input_height * block_size. The Y, X coordinates within each block of the output image are determined by the high order component of the input channel index. The depth of the input tensor must be divisible by block_size * block_size. The data_format attr specifies the layout of the input and output tensors with the following options: "NHWC": [ batch, height, width, channels ] "NCHW": [ batch, channels, height, width ] "NCHW_VECT_C": qint8 [ batch, channels / 4, height, width, 4 ] It is useful to consider the operation as transforming a 6-D Tensor. e.g. for data_format = NHWC, Each element in the input tensor can be specified via 6 coordinates, ordered by decreasing memory layout significance as: n,iY,iX,bY,bX,oC (where n=batch index, iX, iY means X or Y coordinates within the input image, bX, bY means coordinates within the output block, oC means output channels). The output would be the input transposed to the following layout: n,iY,bY,iX,bX,oC This operation is useful for resizing the activations between convolutions (but keeping all data), e.g. instead of pooling. It is also useful for training purely convolutional models. For example, given an input of shape [1, 1, 1, 4], data_format = "NHWC" and block_size = 2: x = [[[[1, 2, 3, 4]]]]
This operation will output a tensor of shape [1, 2, 2, 1]: [[[[1], [2]],
[[3], [4]]]]
Here, the input has a batch of 1 and each batch element has shape [1, 1, 4], the corresponding output will have 2x2 elements and will have a depth of 1 channel (1 = 4 / (block_size * block_size)). The output element shape is [2, 2, 1]. For an input tensor with larger depth, here of shape [1, 1, 1, 12], e.g. x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
This operation, for block size of 2, will return the following tensor of shape [1, 2, 2, 3] [[[[1, 2, 3], [4, 5, 6]],
[[7, 8, 9], [10, 11, 12]]]]
Similarly, for the following input of shape [1 2 2 4], and a block size of 2: x = [[[[1, 2, 3, 4],
[5, 6, 7, 8]],
[[9, 10, 11, 12],
[13, 14, 15, 16]]]]
the operator will return the following tensor of shape [1 4 4 1]: x = [[[ [1], [2], [5], [6]],
[ [3], [4], [7], [8]],
[ [9], [10], [13], [14]],
[ [11], [12], [15], [16]]]]
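A runnable version of the first example above:
x = tf.constant([[[[1, 2, 3, 4]]]])  # shape [1, 1, 1, 4]
y = tf.compat.v1.depth_to_space(x, block_size=2)
# y has shape [1, 2, 2, 1]: [[[[1], [2]], [[3], [4]]]]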
Args
input A Tensor.
block_size An int that is >= 2. The size of the spatial block, same as in Space2Depth.
data_format An optional string from: "NHWC", "NCHW", "NCHW_VECT_C". Defaults to "NHWC".
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.compat.v1.depth_to_space |
tf.compat.v1.device Wrapper for Graph.device() using the default graph.
tf.compat.v1.device(
device_name_or_function
)
See tf.Graph.device for more details.
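For example (the device string is illustrative):
with tf.compat.v1.device("/device:CPU:0"):
  a = tf.constant([1.0, 2.0])  # pinned to the CPU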
Args
device_name_or_function The device name or function to use in the context.
Returns A context manager that specifies the default device to use for newly created ops.
Raises
RuntimeError If eager execution is enabled and a function is passed in. | tensorflow.compat.v1.device |
tf.compat.v1.DeviceSpec Represents a (possibly partial) specification for a TensorFlow device. Inherits From: DeviceSpec
tf.compat.v1.DeviceSpec(
job=None, replica=None, task=None, device_type=None, device_index=None
)
DeviceSpecs are used throughout TensorFlow to describe where state is stored and computations occur. Using DeviceSpec allows you to parse device spec strings to verify their validity, merge them or compose them programmatically. Example: # Place the operations on device "GPU:0" in the "ps" job.
device_spec = DeviceSpec(job="ps", device_type="GPU", device_index=0)
with tf.device(device_spec.to_string()):
# Both my_var and squared_var will be placed on /job:ps/device:GPU:0.
my_var = tf.Variable(..., name="my_variable")
squared_var = tf.square(my_var)
With eager execution disabled (by default in TensorFlow 1.x and by calling disable_eager_execution() in TensorFlow 2.x), the following syntax can be used: tf.compat.v1.disable_eager_execution()
# Same as previous
device_spec = DeviceSpec(job="ps", device_type="GPU", device_index=0)
# No need of .to_string() method.
with tf.device(device_spec):
my_var = tf.Variable(..., name="my_variable")
squared_var = tf.square(my_var)
If a DeviceSpec is partially specified, it will be merged with other DeviceSpecs according to the scope in which it is defined. DeviceSpec components defined in inner scopes take precedence over those defined in outer scopes.
gpu0_spec = DeviceSpec(job="ps", device_type="GPU", device_index=0)
with tf.device(DeviceSpec(job="train").to_string()):
with tf.device(gpu0_spec.to_string()):
# Nodes created here will be assigned to /job:ps/device:GPU:0.
with tf.device(DeviceSpec(device_type="GPU", device_index=1).to_string()):
# Nodes created here will be assigned to /job:train/device:GPU:1.
A DeviceSpec consists of 5 components -- each of which is optionally specified: Job: The job name. Replica: The replica index. Task: The task index. Device type: The device type string (e.g. "CPU" or "GPU"). Device index: The device index.
Args
job string. Optional job name.
replica int. Optional replica index.
task int. Optional task index.
device_type Optional device type string (e.g. "CPU" or "GPU")
device_index int. Optional device index. If left unspecified, device represents 'any' device_index.
Attributes
device_index
device_type
job
replica
task
Methods from_string View source
@classmethod
from_string(
spec
)
Construct a DeviceSpec from a string.
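For example:
spec = tf.compat.v1.DeviceSpec.from_string("/job:ps/replica:0/task:0/device:GPU:1")
spec.job           # 'ps'
spec.device_type   # 'GPU'
spec.device_index  # 1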
Args
spec a string of the form /job:<name>/replica:<id>/task:<id>/device:CPU:<id> or /job:<name>/replica:<id>/task:<id>/device:GPU:<id>, as CPU and GPU are mutually exclusive. All entries are optional.
Returns A DeviceSpec.
make_merged_spec View source
make_merged_spec(
dev
)
Returns a new DeviceSpec which incorporates dev. When combining specs, dev will take precedence over the current spec. So for instance: first_spec = tf.DeviceSpec(job="ps", device_type="CPU")
second_spec = tf.DeviceSpec(device_type="GPU")
combined_spec = first_spec.make_merged_spec(second_spec)
is equivalent to: combined_spec = tf.DeviceSpec(job="ps", device_type="GPU")
Args
dev a DeviceSpec
Returns A new DeviceSpec which combines self and dev
merge_from View source
merge_from(
dev
)
Merge the properties of "dev" into this DeviceSpec.
Note: Will be removed in TensorFlow 2.x since DeviceSpecs will become immutable.
Args
dev a DeviceSpec. parse_from_string View source
parse_from_string(
spec
)
Parse a DeviceSpec name into its components. 2.x behavior change: In TensorFlow 1.x, this function mutates its own state and returns itself. In 2.x, DeviceSpecs are immutable, and this function will return a DeviceSpec which contains the spec. Recommended:
# my_spec and my_updated_spec are unrelated.
my_spec = tf.DeviceSpec.from_string("/CPU:0")
my_updated_spec = tf.DeviceSpec.from_string("/GPU:0")
with tf.device(my_updated_spec):
...
Will work in 1.x and 2.x (though deprecated in 2.x):
my_spec = tf.DeviceSpec.from_string("/CPU:0")
my_updated_spec = my_spec.parse_from_string("/GPU:0")
with tf.device(my_updated_spec):
...
Will NOT work in 2.x:
my_spec = tf.DeviceSpec.from_string("/CPU:0")
my_spec.parse_from_string("/GPU:0") # <== Will not update my_spec
with tf.device(my_spec):
...
In general, DeviceSpec.from_string should completely replace DeviceSpec.parse_from_string, and DeviceSpec.replace should completely replace setting attributes directly.
Args
spec an optional string of the form /job:<name>/replica:<id>/task:<id>/device:CPU:<id> or /job:<name>/replica:<id>/task:<id>/device:GPU:<id>, as CPU and GPU are mutually exclusive. All entries are optional.
Returns The DeviceSpec.
Raises
ValueError if the spec was not valid. replace View source
replace(
**kwargs
)
Convenience method for making a new DeviceSpec by overriding fields. For instance: my_spec = DeviceSpec(job="my_job", device_type="CPU")
my_updated_spec = my_spec.replace(device_type="GPU")
my_other_spec = my_spec.replace(device_type=None)
Args
**kwargs This method takes the same args as the DeviceSpec constructor
Returns A DeviceSpec with the fields specified in kwargs overridden.
to_string View source
to_string()
Return a string representation of this DeviceSpec.
Returns a string of the form /job:<name>/replica:<id>/task:<id>/device:<device_type>:<id>.
__eq__ View source
__eq__(
other
)
Checks if the other DeviceSpec is same as the current instance, eg have same value for all the internal fields.
Args
other Another DeviceSpec
Returns Return True if other is also a DeviceSpec instance and has same value as the current instance. Return False otherwise. | tensorflow.compat.v1.devicespec |
tf.compat.v1.Dimension Represents the value of one dimension in a TensorShape.
tf.compat.v1.Dimension(
value
)
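For example:
dim = tf.compat.v1.Dimension(8)
dim.value  # 8
tf.compat.v1.Dimension(None).value  # None (unknown dimension)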
Attributes
value The value of this dimension, or None if it is unknown. Methods assert_is_compatible_with View source
assert_is_compatible_with(
other
)
Raises an exception if other is not compatible with this Dimension.
Args
other Another Dimension.
Raises
ValueError If self and other are not compatible (see is_compatible_with). is_compatible_with View source
is_compatible_with(
other
)
Returns true if other is compatible with this Dimension. Two known Dimensions are compatible if they have the same value. An unknown Dimension is compatible with all other Dimensions.
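For example:
d = tf.compat.v1.Dimension(3)
d.is_compatible_with(tf.compat.v1.Dimension(3))     # True
d.is_compatible_with(tf.compat.v1.Dimension(None))  # True
d.is_compatible_with(tf.compat.v1.Dimension(4))     # False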
Args
other Another Dimension.
Returns True if this Dimension and other are compatible.
merge_with View source
merge_with(
other
)
Returns a Dimension that combines the information in self and other. Dimensions are combined as follows: tf.compat.v1.Dimension(n) .merge_with(tf.compat.v1.Dimension(n)) ==
tf.compat.v1.Dimension(n)
tf.compat.v1.Dimension(n) .merge_with(tf.compat.v1.Dimension(None)) ==
tf.compat.v1.Dimension(n)
tf.compat.v1.Dimension(None).merge_with(tf.compat.v1.Dimension(n)) ==
tf.compat.v1.Dimension(n)
# equivalent to tf.compat.v1.Dimension(None)
tf.compat.v1.Dimension(None).merge_with(tf.compat.v1.Dimension(None))
# raises ValueError for n != m
tf.compat.v1.Dimension(n) .merge_with(tf.compat.v1.Dimension(m))
Args
other Another Dimension.
Returns A Dimension containing the combined information of self and other.
Raises
ValueError If self and other are not compatible (see is_compatible_with). __add__ View source
__add__(
other
)
Returns the sum of self and other. Dimensions are summed as follows: tf.compat.v1.Dimension(m) + tf.compat.v1.Dimension(n) ==
tf.compat.v1.Dimension(m + n)
tf.compat.v1.Dimension(m) + tf.compat.v1.Dimension(None) # equiv. to
tf.compat.v1.Dimension(None)
tf.compat.v1.Dimension(None) + tf.compat.v1.Dimension(n) # equiv. to
tf.compat.v1.Dimension(None)
tf.compat.v1.Dimension(None) + tf.compat.v1.Dimension(None) # equiv. to
tf.compat.v1.Dimension(None)
Args
other Another Dimension, or a value accepted by as_dimension.
Returns A Dimension whose value is the sum of self and other.
__div__ View source
__div__(
other
)
DEPRECATED: Use __floordiv__ via x // y instead. This function exists only for backwards compatibility purposes; new code should use __floordiv__ via the syntax x // y. Using x // y communicates clearly that the result rounds down, and is forward compatible to Python 3.
Args
other Another Dimension.
Returns A Dimension whose value is the integer quotient of self and other.
__eq__ View source
__eq__(
other
)
Returns true if other has the same known value as this Dimension. __floordiv__ View source
__floordiv__(
other
)
Returns the quotient of self and other rounded down. Dimensions are divided as follows: tf.compat.v1.Dimension(m) // tf.compat.v1.Dimension(n) ==
tf.compat.v1.Dimension(m // n)
tf.compat.v1.Dimension(m) // tf.compat.v1.Dimension(None) # equiv. to
tf.compat.v1.Dimension(None)
tf.compat.v1.Dimension(None) // tf.compat.v1.Dimension(n) # equiv. to
tf.compat.v1.Dimension(None)
tf.compat.v1.Dimension(None) // tf.compat.v1.Dimension(None) # equiv. to
tf.compat.v1.Dimension(None)
Args
other Another Dimension, or a value accepted by as_dimension.
Returns A Dimension whose value is the integer quotient of self and other.
__ge__ View source
__ge__(
other
)
Returns True if self is known to be greater than or equal to other. Dimensions are compared as follows: (tf.compat.v1.Dimension(m) >= tf.compat.v1.Dimension(n)) == (m >= n)
(tf.compat.v1.Dimension(m) >= tf.compat.v1.Dimension(None)) == None
(tf.compat.v1.Dimension(None) >= tf.compat.v1.Dimension(n)) == None
(tf.compat.v1.Dimension(None) >= tf.compat.v1.Dimension(None)) == None
Args
other Another Dimension.
Returns The value of self.value >= other.value if both are known, otherwise None.
__gt__ View source
__gt__(
other
)
Returns True if self is known to be greater than other. Dimensions are compared as follows: (tf.compat.v1.Dimension(m) > tf.compat.v1.Dimension(n)) == (m > n)
(tf.compat.v1.Dimension(m) > tf.compat.v1.Dimension(None)) == None
(tf.compat.v1.Dimension(None) > tf.compat.v1.Dimension(n)) == None
(tf.compat.v1.Dimension(None) > tf.compat.v1.Dimension(None)) == None
Args
other Another Dimension.
Returns The value of self.value > other.value if both are known, otherwise None.
__le__ View source
__le__(
other
)
Returns True if self is known to be less than or equal to other. Dimensions are compared as follows: (tf.compat.v1.Dimension(m) <= tf.compat.v1.Dimension(n)) == (m <= n)
(tf.compat.v1.Dimension(m) <= tf.compat.v1.Dimension(None)) == None
(tf.compat.v1.Dimension(None) <= tf.compat.v1.Dimension(n)) == None
(tf.compat.v1.Dimension(None) <= tf.compat.v1.Dimension(None)) == None
Args
other Another Dimension.
Returns The value of self.value <= other.value if both are known, otherwise None.
__lt__ View source
__lt__(
other
)
Returns True if self is known to be less than other. Dimensions are compared as follows: (tf.compat.v1.Dimension(m) < tf.compat.v1.Dimension(n)) == (m < n)
(tf.compat.v1.Dimension(m) < tf.compat.v1.Dimension(None)) == None
(tf.compat.v1.Dimension(None) < tf.compat.v1.Dimension(n)) == None
(tf.compat.v1.Dimension(None) < tf.compat.v1.Dimension(None)) == None
Args
other Another Dimension.
Returns The value of self.value < other.value if both are known, otherwise None.
__mod__ View source
__mod__(
other
)
Returns self modulo other. The modulo of two Dimensions is computed as follows: tf.compat.v1.Dimension(m) % tf.compat.v1.Dimension(n) ==
tf.compat.v1.Dimension(m % n)
tf.compat.v1.Dimension(m) % tf.compat.v1.Dimension(None) # equiv. to
tf.compat.v1.Dimension(None)
tf.compat.v1.Dimension(None) % tf.compat.v1.Dimension(n) # equiv. to
tf.compat.v1.Dimension(None)
tf.compat.v1.Dimension(None) % tf.compat.v1.Dimension(None) # equiv. to
tf.compat.v1.Dimension(None)
Args
other Another Dimension, or a value accepted by as_dimension.
Returns A Dimension whose value is self modulo other.
__mul__ View source
__mul__(
other
)
Returns the product of self and other. Dimensions are multiplied as follows: tf.compat.v1.Dimension(m) * tf.compat.v1.Dimension(n) ==
tf.compat.v1.Dimension(m * n)
tf.compat.v1.Dimension(m) * tf.compat.v1.Dimension(None) # equiv. to
tf.compat.v1.Dimension(None)
tf.compat.v1.Dimension(None) * tf.compat.v1.Dimension(n) # equiv. to
tf.compat.v1.Dimension(None)
tf.compat.v1.Dimension(None) * tf.compat.v1.Dimension(None) # equiv. to
tf.compat.v1.Dimension(None)
Args
other Another Dimension, or a value accepted by as_dimension.
Returns A Dimension whose value is the product of self and other.
__ne__ View source
__ne__(
other
)
Returns true if other has a different known value from self. __radd__ View source
__radd__(
other
)
Returns the sum of other and self.
Args
other Another Dimension, or a value accepted by as_dimension.
Returns A Dimension whose value is the sum of self and other.
__rdiv__ View source
__rdiv__(
other
)
Use __floordiv__ via x // y instead. This function exists only to provide a better error message: instead of raising TypeError: unsupported operand type(s) for /: 'int' and 'Dimension', it explicitly directs the user to use // instead.
Args
other Another Dimension.
Raises TypeError.
__rfloordiv__ View source
__rfloordiv__(
other
)
Returns the quotient of other and self rounded down.
Args
other Another Dimension, or a value accepted by as_dimension.
Returns A Dimension whose value is the integer quotient of other and self.
__rmod__ View source
__rmod__(
other
)
Returns other modulo self.
Args
other Another Dimension, or a value accepted by as_dimension.
Returns A Dimension whose value is other modulo self.
__rmul__ View source
__rmul__(
other
)
Returns the product of self and other.
Args
other Another Dimension, or a value accepted by as_dimension.
Returns A Dimension whose value is the product of self and other.
__rsub__ View source
__rsub__(
other
)
Returns the subtraction of self from other.
Args
other Another Dimension, or a value accepted by as_dimension.
Returns A Dimension whose value is the subtraction of self from other.
__rtruediv__ View source
__rtruediv__(
other
)
Use __floordiv__ via x // y instead. This function exists only to provide a better error message: instead of raising TypeError: unsupported operand type(s) for /: 'int' and 'Dimension', it explicitly directs the user to use // instead.
Args
other Another Dimension.
Raises TypeError.
__sub__ View source
__sub__(
other
)
Returns the subtraction of other from self. Dimensions are subtracted as follows: tf.compat.v1.Dimension(m) - tf.compat.v1.Dimension(n) ==
tf.compat.v1.Dimension(m - n)
tf.compat.v1.Dimension(m) - tf.compat.v1.Dimension(None) # equiv. to
tf.compat.v1.Dimension(None)
tf.compat.v1.Dimension(None) - tf.compat.v1.Dimension(n) # equiv. to
tf.compat.v1.Dimension(None)
tf.compat.v1.Dimension(None) - tf.compat.v1.Dimension(None) # equiv. to
tf.compat.v1.Dimension(None)
Args
other Another Dimension, or a value accepted by as_dimension.
Returns A Dimension whose value is the subtraction of other from self.
__truediv__ View source
__truediv__(
other
)
Use __floordiv__ via x // y instead. This function exists only to provide a better error message: instead of raising TypeError: unsupported operand type(s) for /: 'Dimension' and 'int', it explicitly directs the user to use // instead.
Args
other Another Dimension.
Raises TypeError. | tensorflow.compat.v1.dimension |
tf.compat.v1.disable_control_flow_v2 Opts out of control flow v2.
tf.compat.v1.disable_control_flow_v2()
Note: v2 control flow is always enabled inside of tf.function. Calling this function has no effect in that case.
If your code needs tf.disable_control_flow_v2() to be called to work properly please file a bug. | tensorflow.compat.v1.disable_control_flow_v2 |
tf.compat.v1.disable_eager_execution Disables eager execution.
tf.compat.v1.disable_eager_execution()
This function can only be called before any Graphs, Ops, or Tensors have been created. It can be used at the beginning of the program for complex migration projects from TensorFlow 1.x to 2.x. | tensorflow.compat.v1.disable_eager_execution |
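For instance, a 1.x-style program disables eager execution first and then builds and runs a graph; this is a minimal sketch using the standard tf.compat.v1 placeholder/Session APIs:
import tensorflow as tf
tf.compat.v1.disable_eager_execution()  # must run before any Graphs, Ops, or Tensors exist
x = tf.compat.v1.placeholder(tf.float32, shape=(None,))
y = x * 2.0
with tf.compat.v1.Session() as sess:
    print(sess.run(y, feed_dict={x: [1.0, 2.0]}))  # [2. 4.]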
tf.compat.v1.disable_resource_variables Opts out of resource variables. (deprecated)
tf.compat.v1.disable_resource_variables()
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: non-resource variables are not supported in the long term If your code needs tf.disable_resource_variables() to be called to work properly please file a bug. | tensorflow.compat.v1.disable_resource_variables |
tf.compat.v1.disable_tensor_equality Makes Tensors compare by their id and be hashable.
tf.compat.v1.disable_tensor_equality()
This is a legacy behaviour of TensorFlow and is highly discouraged. | tensorflow.compat.v1.disable_tensor_equality |
tf.compat.v1.disable_v2_behavior Disables TensorFlow 2.x behaviors.
tf.compat.v1.disable_v2_behavior()
This function can be called at the beginning of the program (before Tensors, Graphs or other structures have been created, and before devices have been initialized). It switches all global behaviors that are different between TensorFlow 1.x and 2.x to behave as intended for 1.x. Users can call this function to disable 2.x behavior during complex migrations.
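A common migration idiom (a minimal sketch) is to import the compat module and disable 2.x behavior before anything else in the program:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # call first, before creating Tensors/Graphs or initializing devices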
tf.compat.v1.disable_v2_tensorshape Disables the V2 TensorShape behavior and reverts to V1 behavior.
tf.compat.v1.disable_v2_tensorshape()
See docstring for enable_v2_tensorshape for details about the new behavior. | tensorflow.compat.v1.disable_v2_tensorshape |
Module: tf.compat.v1.distribute Library for running a computation across multiple devices. The intent of this library is that you can write an algorithm in a stylized way and it will be usable with a variety of different tf.distribute.Strategy implementations. Each descendant will implement a different strategy for distributing the algorithm across multiple devices/machines. Furthermore, these changes can be hidden inside the specific layers and other library classes that need special treatment to run in a distributed setting, so that most users' model definition code can run unchanged. The tf.distribute.Strategy API works the same way with eager and graph execution. Guides TensorFlow v2.x TensorFlow v1.x Tutorials
Distributed Training Tutorials The tutorials cover how to use tf.distribute.Strategy to do distributed training with native Keras APIs, custom training loops, and Estimator APIs. They also cover how to save/load models when using tf.distribute.Strategy.
Glossary
Data parallelism is where we run multiple copies of the model on different slices of the input data. This is in contrast to model parallelism where we divide up a single copy of a model across multiple devices. Note: we only support data parallelism for now, but hope to add support for model parallelism in the future. A device is a CPU or accelerator (e.g. GPUs, TPUs) on some machine that TensorFlow can run operations on (see e.g. tf.device). You may have multiple devices on a single machine, or be connected to devices on multiple machines. Devices used to run computations are called worker devices. Devices used to store variables are parameter devices. For some strategies, such as tf.distribute.MirroredStrategy, the worker and parameter devices will be the same (see mirrored variables below). For others they will be different. For example, tf.distribute.experimental.CentralStorageStrategy puts the variables on a single device (which may be a worker device or may be the CPU), and tf.distribute.experimental.ParameterServerStrategy puts the variables on separate machines called parameter servers (see below). A replica is one copy of the model, running on one slice of the input data. Right now each replica is executed on its own worker device, but once we add support for model parallelism a replica may span multiple worker devices. A host is the CPU device on a machine with worker devices, typically used for running input pipelines. A worker is defined to be the physical machine(s) containing the physical devices (e.g. GPUs, TPUs) on which the replicated computation is executed. A worker may contain one or more replicas, but contains at least one replica. Typically one worker will correspond to one machine, but in the case of very large models with model parallelism, one worker may span multiple machines. We typically run one input pipeline per worker, feeding all the replicas on that worker.
Synchronous, or more commonly sync, training is where the updates from each replica are aggregated together before updating the model variables. This is in contrast to asynchronous, or async training, where each replica updates the model variables independently. You may also have replicas partitioned into groups which are in sync within each group but async between groups. Parameter servers: These are machines that hold a single copy of parameters/variables, used by some strategies (right now just tf.distribute.experimental.ParameterServerStrategy). All replicas that want to operate on a variable retrieve it at the beginning of a step and send an update to be applied at the end of the step. These can in principle support either sync or async training, but right now we only have support for async training with parameter servers. Compare to tf.distribute.experimental.CentralStorageStrategy, which puts all variables on a single device on the same machine (and does sync training), and tf.distribute.MirroredStrategy, which mirrors variables to multiple devices (see below).
Replica context vs. Cross-replica context vs. Update context A replica context applies when you execute the computation function that was called with strategy.run. Conceptually, you're in replica context when executing the computation function that is being replicated. An update context is entered in a tf.distribute.StrategyExtended.update call. A cross-replica context is entered when you enter a strategy.scope. This is useful for calling tf.distribute.Strategy methods which operate across the replicas (like reduce_to()). By default you start in a replica context (the "default single replica context") and then some methods can switch you back and forth.
Distributed value: Distributed value is represented by the base class tf.distribute.DistributedValues. tf.distribute.DistributedValues is useful to represent values on multiple devices, and it contains a map from replica id to values. Two representative kinds of tf.distribute.DistributedValues are "PerReplica" and "Mirrored" values. "PerReplica" values exist on the worker devices, with a different value for each replica. They are produced by iterating through a distributed dataset returned by tf.distribute.Strategy.experimental_distribute_dataset and tf.distribute.Strategy.distribute_datasets_from_function. They are also the typical result returned by tf.distribute.Strategy.run. "Mirrored" values are like "PerReplica" values, except we know that the value on all replicas are the same. We can safely read a "Mirrored" value in a cross-replica context by using the value on any replica.
Unwrapping and merging: Consider calling a function fn on multiple replicas, like strategy.run(fn, args=[w]) with an argument w that is a tf.distribute.DistributedValues. This means w will have a map taking replica id 0 to w0, replica id 1 to w1, etc. strategy.run() unwraps w before calling fn, so it calls fn(w0) on device d0, fn(w1) on device d1, etc. It then merges the return values from fn(), which leads to one common object if the returned values are the same object from every replica, or a DistributedValues object otherwise. Reductions and all-reduce: A reduction is a method of aggregating multiple values into one value, like "sum" or "mean". If a strategy is doing sync training, we will perform a reduction on the gradients to a parameter from all replicas before applying the update. All-reduce is an algorithm for performing a reduction on values from multiple devices and making the result available on all of those devices. Mirrored variables: These are variables that are created on multiple devices, where we keep the variables in sync by applying the same updates to every copy. Mirrored variables are created with tf.Variable(...synchronization=tf.VariableSynchronization.ON_WRITE...). Normally they are only used in synchronous training.
SyncOnRead variables SyncOnRead variables are created by tf.Variable(...synchronization=tf.VariableSynchronization.ON_READ...), and they are created on multiple devices. In replica context, each component variable on the local replica can perform reads and writes without synchronization with each other. When the SyncOnRead variable is read in cross-replica context, the values from component variables are aggregated and returned. SyncOnRead variables bring a lot of custom configuration difficulty to the underlying logic, so we do not encourage users to instantiate and use SyncOnRead variables on their own. We have mainly used SyncOnRead variables for use cases such as batch norm and metrics. For performance reasons, we often don't need to keep these statistics in sync every step and they can be accumulated on each replica independently. The only time we want to sync them is reporting or checkpointing, which typically happens in cross-replica context. SyncOnRead variables are also often used by advanced users who want to control when variable values are aggregated. For example, users sometimes want to maintain gradients independently on each replica for a couple of steps without aggregation.
Distribute-aware layers Layers are generally called in a replica context, except when defining a Keras functional model. tf.distribute.in_cross_replica_context will let you determine which case you are in. If in a replica context, the tf.distribute.get_replica_context function will return the default replica context outside a strategy scope, None within a strategy scope, and a tf.distribute.ReplicaContext object inside a strategy scope and within a tf.distribute.Strategy.run function. The ReplicaContext object has an all_reduce method for aggregating across all replicas.
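A minimal sketch of this distinction, assuming a local tf.distribute.MirroredStrategy: code can check which context it is in, and fetch the ReplicaContext inside strategy.run:
import tensorflow as tf
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    print(tf.distribute.in_cross_replica_context())  # True: scope enters cross-replica context
def replica_fn():
    ctx = tf.distribute.get_replica_context()  # a tf.distribute.ReplicaContext here
    return ctx.all_reduce(tf.distribute.ReduceOp.SUM, tf.constant(1.0))
strategy.run(replica_fn)  # per-replica values, each holding the all-reduced sum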
Note that we provide a default version of tf.distribute.Strategy that is used when no other strategy is in scope, that provides the same API with reasonable default behavior. Modules cluster_resolver module: Library imports for ClusterResolvers. experimental module: Public API for tf.distribute.experimental namespace. Classes class CrossDeviceOps: Base class for cross-device reduction and broadcasting algorithms. class HierarchicalCopyAllReduce: Hierarchical copy all-reduce implementation of CrossDeviceOps. class InputContext: A class wrapping information needed by an input function. class InputReplicationMode: Replication mode for input function. class MirroredStrategy: Synchronous training across multiple replicas on one machine. class NcclAllReduce: NCCL all-reduce implementation of CrossDeviceOps. class OneDeviceStrategy: A distribution strategy for running on a single device. class ReduceOp: Indicates how a set of values should be reduced. class ReductionToOneDevice: A CrossDeviceOps implementation that copies values to one device to reduce. class ReplicaContext: A class with a collection of APIs that can be called in a replica context. class RunOptions: Run options for strategy.run. class Server: An in-process TensorFlow server, for use in distributed training. class Strategy: A list of devices with a state & compute distribution policy. class StrategyExtended: Additional APIs for algorithms that need to be distribution-aware. Functions experimental_set_strategy(...): Set a tf.distribute.Strategy as current without with strategy.scope(). get_loss_reduction(...): tf.distribute.ReduceOp corresponding to the last loss reduction. get_replica_context(...): Returns the current tf.distribute.ReplicaContext or None. get_strategy(...): Returns the current tf.distribute.Strategy object. has_strategy(...): Return if there is a current non-default tf.distribute.Strategy. in_cross_replica_context(...): Returns True if in a cross-replica context. | tensorflow.compat.v1.distribute |
Module: tf.compat.v1.distribute.cluster_resolver Library imports for ClusterResolvers. This library contains all implementations of ClusterResolvers. ClusterResolvers are a way of specifying cluster information for distributed execution. Built on top of existing ClusterSpec framework, ClusterResolvers are a way for TensorFlow to communicate with various cluster management systems (e.g. GCE, AWS, etc...). Classes class ClusterResolver: Abstract class for all implementations of ClusterResolvers. class GCEClusterResolver: ClusterResolver for Google Compute Engine. class KubernetesClusterResolver: ClusterResolver for Kubernetes. class SimpleClusterResolver: Simple implementation of ClusterResolver that accepts all attributes. class SlurmClusterResolver: ClusterResolver for system with Slurm workload manager. class TFConfigClusterResolver: Implementation of a ClusterResolver which reads the TF_CONFIG EnvVar. class TPUClusterResolver: Cluster Resolver for Google Cloud TPUs. class UnionResolver: Performs a union on underlying ClusterResolvers. | tensorflow.compat.v1.distribute.cluster_resolver |
Module: tf.compat.v1.distribute.experimental Public API for tf.distribute.experimental namespace. Classes class CentralStorageStrategy: A one-machine strategy that puts all variables on a single device. class CollectiveCommunication: Cross device communication implementation. class CollectiveHints: Hints for collective operations like AllReduce. class CommunicationImplementation: Cross device communication implementation. class CommunicationOptions: Options for cross device communications like All-reduce. class MultiWorkerMirroredStrategy: A distribution strategy for synchronous training on multiple workers. class ParameterServerStrategy: An asynchronous multi-worker parameter server tf.distribute strategy. class TPUStrategy: TPU distribution strategy implementation. | tensorflow.compat.v1.distribute.experimental |
tf.compat.v1.distribute.experimental.CentralStorageStrategy A one-machine strategy that puts all variables on a single device. Inherits From: Strategy
tf.compat.v1.distribute.experimental.CentralStorageStrategy(
compute_devices=None, parameter_device=None
)
Variables are assigned to local CPU or the only GPU. If there is more than one GPU, compute operations (other than variable update operations) will be replicated across all GPUs. For example: strategy = tf.distribute.experimental.CentralStorageStrategy()
# Create a dataset
ds = tf.data.Dataset.range(5).batch(2)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(ds)
with strategy.scope():
@tf.function
def train_step(val):
return val + 1
# Iterate over the distributed dataset
for x in dist_dataset:
# process dataset elements
strategy.run(train_step, args=(x,))
Attributes
cluster_resolver Returns the cluster resolver associated with this strategy. In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy(), there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set the relevant attribute, or override this property; otherwise, None is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None. The tf.distribute.cluster_resolver.ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For example,
os.environ['TF_CONFIG'] = json.dumps({
'cluster': {
'worker': ["localhost:12345", "localhost:23456"],
'ps': ["localhost:34567"]
},
'task': {'type': 'worker', 'index': 0}
})
# This implicitly uses TF_CONFIG for the cluster and current task info.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
...
if strategy.cluster_resolver.task_type == 'worker':
# Perform something that's only applicable on workers. Since we set this
# as a worker above, this block will run on this particular instance.
elif strategy.cluster_resolver.task_type == 'ps':
# Perform something that's only applicable on parameter servers. Since we
# set this as a worker above, this block will not run on this particular
# instance.
For more information, please see tf.distribute.cluster_resolver.ClusterResolver's API docstring.
extended tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync Returns number of replicas over which gradients are aggregated. Methods distribute_datasets_from_function View source
distribute_datasets_from_function(
dataset_fn, options=None
)
Distributes tf.data.Dataset instances created by calls to dataset_fn. The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The dataset_fn should take a tf.distribute.InputContext instance where information about batching and input replication can be accessed. You can use the element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. Follow tf.distribute.DistributedDataset.element_spec to see an example. Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. This may be computed using input_context.get_per_replica_batch_size.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input). If you are interested in last partial batch handling, read this section.
Args
dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
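For example (a minimal sketch; global_batch_size and strategy are assumed to be defined), a dataset_fn can shard by input pipeline and batch by the per-replica batch size:
global_batch_size = 16
def dataset_fn(input_context):
    batch_size = input_context.get_per_replica_batch_size(global_batch_size)
    ds = tf.data.Dataset.range(100)
    ds = ds.shard(input_context.num_input_pipelines, input_context.input_pipeline_id)
    return ds.batch(batch_size)
dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)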
experimental_distribute_dataset View source
experimental_distribute_dataset(
dataset, options=None
)
Creates tf.distribute.DistributedDataset from tf.data.Dataset. The returned tf.distribute.DistributedDataset can be iterated over similar to regular datasets. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset. You can only create an iterator or examine the tf.TypeSpec of the data generated by it. See API docs of tf.distribute.DistributedDataset to learn more. The following is an example:
global_batch_size = 2
# Passing the devices is optional.
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
# Create a dataset
dataset = tf.data.Dataset.range(4).batch(global_batch_size)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)
@tf.function
def replica_fn(input):
return input*2
result = []
# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
# process dataset elements
result.append(strategy.run(replica_fn, args=(x,)))
print(result)
[PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>
}, PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>
}]
Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, dataset is batched by global_batch_size, and calling experimental_distribute_dataset on it rebatches dataset to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. x is a tf.distribute.DistributedValues containing data for all replicas, and each replica gets data of the new batch size. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica. Sharding contains autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e. when you use tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right tf.data.experimental.AutoShardPolicy is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding.
Note: for autosharding across multiple workers, the default mode is tf.data.experimental.AutoShardPolicy.AUTO. This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have less than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the tf.data.experimental.DistributeOptions.auto_shard_policy to be tf.data.experimental.AutoShardPolicy.OFF.
By default, this method adds a prefetch transformation at the end of the user provided tf.data.Dataset instance. The argument to the prefetch transformation which is buffer_size is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.distribute_datasets_from_function instead, which does not do any automatic batching or sharding for you.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset tf.data.Dataset that will be sharded across all replicas using the rules stated above.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_local_results View source
experimental_local_results(
value
)
Returns the list of all local per-replica values contained in value.
Note: This only returns values on the worker initiated by this client. When using a tf.distribute.Strategy like tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker will be its own client, and this function will only return values computed on that worker.
Args
value A value returned by experimental_run(), run(), extended.call_for_each_replica(), or a variable created in scope.
Returns A tuple of values contained in value. If value represents a single value, this returns (value,).
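For instance (a minimal sketch, assuming strategy is already defined), the per-replica output of strategy.run can be unwrapped into a plain tuple of tensors on this worker:
per_replica = strategy.run(lambda: tf.constant(1.0))
local_values = strategy.experimental_local_results(per_replica)
# local_values is a tuple with one tensor per local replica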
experimental_make_numpy_dataset View source
experimental_make_numpy_dataset(
numpy_input, session=None
)
Makes a tf.data.Dataset for input provided via a numpy array. This avoids adding numpy_input as a large constant in the graph, and copies the data to the machine or machines that will be processing the input. Note that you will likely need to use tf.distribute.Strategy.experimental_distribute_dataset with the returned dataset to further distribute it with the strategy. Example: numpy_input = np.ones([10], dtype=np.float32)
dataset = strategy.experimental_make_numpy_dataset(numpy_input)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
Args
numpy_input A nest of NumPy input arrays that will be converted into a dataset. Note that lists of Numpy arrays are stacked, as that is normal tf.data.Dataset behavior.
session (TensorFlow v1.x graph execution only) A session used for initialization.
Returns A tf.data.Dataset representing numpy_input.
experimental_run View source
experimental_run(
fn, input_iterator=None
)
Runs ops in fn on each replica, with inputs from input_iterator. DEPRECATED: This method is not available in TF 2.x. Please switch to using run instead. When eager execution is enabled, executes ops specified by fn on each replica. Otherwise, builds a graph to execute the ops on each replica. Each replica will take a single, different input from the inputs provided by one get_next call on the input iterator. fn may call tf.distribute.get_replica_context() to access members such as replica_id_in_sync_group. Key Point: Depending on the tf.distribute.Strategy implementation being used, and whether eager execution is enabled, fn may be called one or more times (once for each replica).
Args
fn The function to run. The inputs to the function must match the outputs of input_iterator.get_next(). The output must be a tf.nest of Tensors.
input_iterator (Optional) input iterator from which the inputs are taken.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be PerReplica (if the values are unsynchronized), Mirrored (if the values are kept in sync), or Tensor (if running on a single replica).
make_dataset_iterator View source
make_dataset_iterator(
dataset
)
Makes an iterator for input provided via dataset. DEPRECATED: This method is not available in TF 2.x. Data from the given dataset will be distributed evenly across all the compute replicas. We will assume that the input dataset is batched by the global batch size. With this assumption, we will make a best effort to divide each batch across all the replicas (one or more workers). If this effort fails, an error will be thrown, and the user should instead use make_input_fn_iterator which provides more control to the user, and does not try to divide a batch across replicas. The user could also use make_input_fn_iterator if they want to customize which input is fed to which replica/worker etc.
Args
dataset tf.data.Dataset that will be distributed evenly across all replicas.
Returns A tf.distribute.InputIterator which returns inputs for each step of the computation. Users should call initialize on the returned iterator.
make_input_fn_iterator View source
make_input_fn_iterator(
input_fn, replication_mode=tf.distribute.InputReplicationMode.PER_WORKER
)
Returns an iterator split across replicas created from an input function. DEPRECATED: This method is not available in TF 2.x. The input_fn should take a tf.distribute.InputContext object where information about batching and input sharding can be accessed: def input_fn(input_context):
batch_size = input_context.get_per_replica_batch_size(global_batch_size)
d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size)
return d.shard(input_context.num_input_pipelines,
input_context.input_pipeline_id)
with strategy.scope():
iterator = strategy.make_input_fn_iterator(input_fn)
replica_results = strategy.experimental_run(replica_fn, iterator)
The tf.data.Dataset returned by input_fn should have a per-replica batch size, which may be computed using input_context.get_per_replica_batch_size.
Args
input_fn A function taking a tf.distribute.InputContext object and returning a tf.data.Dataset.
replication_mode an enum value of tf.distribute.InputReplicationMode. Only PER_WORKER is supported currently, which means there will be a single call to input_fn per worker. Replicas will dequeue from the local tf.data.Dataset on their worker.
Returns An iterator object that should first be .initialize()-ed. It may then either be passed to strategy.experimental_run(), or you can call iterator.get_next() to get the next value to pass to strategy.extended.call_for_each_replica().
reduce View source
reduce(
reduce_op, value, axis=None
)
Reduce value across replicas and return result on current device.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
total = strategy.reduce("SUM", per_replica_result, axis=None)
total
<tf.Tensor: shape=(), dtype=int32, numpy=1>
To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs: strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
# Check devices on which per replica result is:
strategy.experimental_local_results(per_replica_result)[0].device
# /job:localhost/replica:0/task:0/device:GPU:0
strategy.experimental_local_results(per_replica_result)[1].device
# /job:localhost/replica:0/task:0/device:GPU:1
total = strategy.reduce("SUM", per_replica_result, axis=None)
# Check device on which reduced result is:
total.device
# /job:localhost/replica:0/task:0/device:CPU:0
This API is typically used for aggregating the results returned from different replicas, for reporting etc. For example, loss computed from different replicas can be averaged using this API before printing.
Note: The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For TPUStrategy, it is the first TPU host. For multi client MultiWorkerMirroredStrategy, this is CPU of each worker.
There are a number of different tf.distribute APIs for reducing values across replicas:
tf.distribute.ReplicaContext.all_reduce: This differs from Strategy.reduce in that it is for replica context and does not copy the results to the host device. all_reduce should be typically used for reductions inside the training step such as gradients.
tf.distribute.StrategyExtended.reduce_to and tf.distribute.StrategyExtended.batch_reduce_to: These APIs are more advanced versions of Strategy.reduce as they allow customizing the destination of the result. They are also called in cross replica context. What should axis be? Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. With axis=None, reduce will aggregate only across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss). strategy.reduce("sum", per_replica_result, axis=None)
Sometimes, you will want to aggregate across both the global batch and all replicas. You can get this behavior by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7. strategy.reduce("sum", per_replica_result, axis=0)
If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. Contrast this with computing reduce_mean to get a scalar value on each replica and this function to average those means, which will weigh some values 1/8 and others 1/4.
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with OneDeviceStrategy or default strategy.
axis specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
Returns A Tensor.
run View source
run(
fn, args=(), kwargs=None, options=None
)
Invokes fn on each replica, with the given arguments. This method is the primary way to distribute your computation with a tf.distribute object. It invokes fn on each replica. If args or kwargs have tf.distribute.DistributedValues, such as those produced by a tf.distribute.DistributedDataset from tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function, when fn is executed on a particular replica, it will be executed with the component of tf.distribute.DistributedValues that correspond to that replica. fn is invoked under a replica context. fn may call tf.distribute.get_replica_context() to access members such as all_reduce. Please see the module-level docstring of tf.distribute for the concept of replica context. All arguments in args or kwargs should either be Python values of a nested structure of tensors, e.g. a list of tensors, in which case args and kwargs will be passed to the fn invoked on each replica. Or args or kwargs can be tf.distribute.DistributedValues containing tensors or composite tensors, i.e. tf.compat.v1.TensorInfo.CompositeTensor, in which case each fn call will get the component of a tf.distribute.DistributedValues corresponding to its replica. Key Point: Depending on the implementation of tf.distribute.Strategy and whether eager execution is enabled, fn may be called one or more times. If fn is annotated with tf.function or tf.distribute.Strategy.run is called inside a tf.function (eager execution is disabled inside a tf.function by default), fn is called once per replica to generate a Tensorflow graph, which will then be reused for execution with new inputs. Otherwise, if eager execution is enabled, fn will be called once per replica every step just like regular python code. Example usage: Constant tensor input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
tensor_input = tf.constant(3.0)
@tf.function
def replica_fn(input):
return input*2.0
result = strategy.run(replica_fn, args=(tensor_input,))
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>,
1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>
}
DistributedValues input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def run():
def value_fn(value_context):
return value_context.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn2(input):
return input*2
return strategy.run(replica_fn2, args=(distributed_values,))
result = run()
result
<tf.Tensor: shape=(), dtype=int32, numpy=4>
Use tf.distribute.ReplicaContext to allreduce values.
strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"])
@tf.function
def run():
def value_fn(value_context):
return tf.constant(value_context.replica_id_in_sync_group)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn(input):
return tf.distribute.get_replica_context().all_reduce("sum", input)
return strategy.run(replica_fn, args=(distributed_values,))
result = run()
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=int32, numpy=1>,
1: <tf.Tensor: shape=(), dtype=int32, numpy=1>
}
Args
fn The function to run on each replica.
args Optional positional arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
kwargs Optional keyword arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
options An optional instance of tf.distribute.RunOptions specifying the options to run fn.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be tf.distribute.DistributedValues, Tensor objects, or Tensors (for example, if running on a single replica).
scope View source
scope()
Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
mirrored_variable = tf.Variable(1.)
mirrored_variable
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
}
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
What happens when Strategy.scope is entered?
strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy. Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts. Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope. In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker.
Note: Entering a scope does not automatically distribute a computation, except in the case of high-level training frameworks like Keras model.fit. If you're not using model.fit, you need to use the strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial.
What should be in scope and what should be outside? There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK). Anything that creates variables that should be distributed variables must be in strategy.scope. This can be either by directly putting it in scope, or relying on another API like strategy.run or model.fit to enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Common things that create variables in TF: models, optimizers, metrics. These should always be created inside the scope. Another source of variable creation can be a checkpoint restore - when variables are created lazily. Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the strategy.scope can also work seamlessly, without the user having to enter the scope. Some strategy APIs (such as strategy.run and strategy.reduce) which need to be called within a strategy's scope, enter the scope for you automatically, which means when using those APIs you don't need to enter the scope yourself. When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high-level training framework methods such as model.compile, model.fit etc. are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training etc. See the detailed example in the distributed Keras tutorial. Note that simply calling the model(..) is not impacted - only high-level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope. The following can be either inside or outside the scope: Creating the input datasets Defining tf.functions that represent your training step Saving APIs such as tf.saved_model.save. Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way. Checkpoint saving. As mentioned above - checkpoint.restore may sometimes need to be inside scope if it creates variables.
Returns A context manager.
update_config_proto View source
update_config_proto(
config_proto
)
Returns a copy of config_proto modified for use with this strategy. DEPRECATED: This method is not available in TF 2.x. The updated config has something needed to run a strategy, e.g. configuration to run collective ops, or device filters to improve distributed training performance.
Args
config_proto a tf.ConfigProto object.
Returns The updated copy of the config_proto. | tensorflow.compat.v1.distribute.experimental.centralstoragestrategy |
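A minimal v1 graph-mode sketch: pass the updated proto when constructing the session so the strategy's settings take effect:
config = tf.compat.v1.ConfigProto()
config = strategy.update_config_proto(config)
with tf.compat.v1.Session(config=config) as sess:
    ...  # run the graph with the strategy-appropriate session configuration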
tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy A distribution strategy for synchronous training on multiple workers. Inherits From: Strategy
tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy(
communication=tf.distribute.experimental.CollectiveCommunication.AUTO,
cluster_resolver=None
)
This strategy implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to tf.distribute.MirroredStrategy, it replicates all variables and computations to each local device. The difference is that it uses a distributed collective implementation (e.g. all-reduce), so that multiple workers can work together. You need to launch your program on each worker and configure cluster_resolver correctly. For example, if you are using tf.distribute.cluster_resolver.TFConfigClusterResolver, each worker needs to have its corresponding task_type and task_id set in the TF_CONFIG environment variable. An example TF_CONFIG on worker-0 of a two worker cluster is: TF_CONFIG = '{"cluster": {"worker": ["localhost:12345", "localhost:23456"]}, "task": {"type": "worker", "index": 0} }'
Your program runs on each worker as-is. Note that collectives require each worker to participate. All tf.distribute and non-tf.distribute APIs may use collectives internally, e.g. checkpointing and saving, since reading a tf.Variable with tf.VariableSynchronization.ON_READ all-reduces the value. Therefore it's recommended to run exactly the same program on each worker. Dispatching based on the task_type or task_id of the worker is error-prone. cluster_resolver.num_accelerators() determines the number of GPUs the strategy uses. If it's zero, the strategy uses the CPU. All workers need to use the same number of devices, otherwise the behavior is undefined. This strategy is not intended for TPU. Use tf.distribute.TPUStrategy instead. After setting up TF_CONFIG, using this strategy is similar to using tf.distribute.MirroredStrategy and tf.distribute.TPUStrategy. strategy = tf.distribute.MultiWorkerMirroredStrategy()
with strategy.scope():
model = tf.keras.Sequential([
tf.keras.layers.Dense(2, input_shape=(5,)),
])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
def dataset_fn(ctx):
x = np.random.random((2, 5)).astype(np.float32)
y = np.random.randint(2, size=(2, 1))
dataset = tf.data.Dataset.from_tensor_slices((x, y))
return dataset.repeat().batch(1, drop_remainder=True)
dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)
model.compile()
model.fit(dist_dataset)
You can also write your own training loop: @tf.function
def train_step(iterator):
def step_fn(inputs):
features, labels = inputs
with tf.GradientTape() as tape:
logits = model(features, training=True)
loss = tf.keras.losses.sparse_categorical_crossentropy(
labels, logits)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
strategy.run(step_fn, args=(next(iterator),))
for _ in range(NUM_STEP):
train_step(iterator)
See Multi-worker training with Keras for a detailed tutorial. Saving You need to save and checkpoint on all workers instead of just one. This is because variables with synchronization=ON_READ trigger aggregation during saving. It's recommended to save to a different path on each worker to avoid race conditions. Each worker saves the same thing. See the Multi-worker training with Keras tutorial for examples.
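For instance (a hedged sketch; the path scheme here is just an example), each worker can derive a distinct save directory from its task identity via the strategy's cluster resolver:
resolver = strategy.cluster_resolver
save_path = '/tmp/model/worker_%s_%s' % (resolver.task_type, resolver.task_id)
model.save(save_path)  # every worker participates in saving, each to its own path
Known Issues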
tf.distribute.cluster_resolver.TFConfigClusterResolver does not return the correct number of accelerators. The strategy uses all available GPUs if cluster_resolver is tf.distribute.cluster_resolver.TFConfigClusterResolver or None. In eager mode, the strategy needs to be created before calling any other TensorFlow API.
Attributes
cluster_resolver Returns the cluster resolver associated with this strategy. In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy(), there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set the relevant attribute, or override this property; otherwise, None is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None. The tf.distribute.cluster_resolver.ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For example,
os.environ['TF_CONFIG'] = json.dumps({
'cluster': {
'worker': ["localhost:12345", "localhost:23456"],
'ps': ["localhost:34567"]
},
'task': {'type': 'worker', 'index': 0}
})
# This implicitly uses TF_CONFIG for the cluster and current task info.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
...
if strategy.cluster_resolver.task_type == 'worker':
  # Perform something that's only applicable on workers. Since we set this
  # as a worker above, this block will run on this particular instance.
  pass
elif strategy.cluster_resolver.task_type == 'ps':
  # Perform something that's only applicable on parameter servers. Since we
  # set this as a worker above, this block will not run on this particular
  # instance.
  pass
For more information, please see tf.distribute.cluster_resolver.ClusterResolver's API docstring.
extended tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync Returns number of replicas over which gradients are aggregated. Methods distribute_datasets_from_function View source
distribute_datasets_from_function(
dataset_fn, options=None
)
Distributes tf.data.Dataset instances created by calls to dataset_fn. The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by the per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The dataset_fn should take a tf.distribute.InputContext instance where information about batching and input replication can be accessed. You can use the element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. See tf.distribute.DistributedDataset.element_spec for an example. Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. This may be computed using input_context.get_per_replica_batch_size.
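As a concrete illustration of the points above, here is a minimal dataset_fn sketch that shards by input pipeline and batches by the per-replica batch size; global_batch_size and the numpy data are assumptions for illustration.
import numpy as np
global_batch_size = 8  # assumed for illustration
def dataset_fn(input_context):
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  d = tf.data.Dataset.from_tensor_slices(np.arange(64, dtype=np.float32))
  # Shard across input pipelines, then batch by the per-replica size.
  d = d.shard(input_context.num_input_pipelines,
              input_context.input_pipeline_id)
  return d.batch(batch_size, drop_remainder=True)
dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)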
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_distribute_dataset View source
experimental_distribute_dataset(
dataset, options=None
)
Creates tf.distribute.DistributedDataset from tf.data.Dataset. The returned tf.distribute.DistributedDataset can be iterated over similar to regular datasets. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset. You can only create an iterator or examine the tf.TypeSpec of the data generated by it. See API docs of tf.distribute.DistributedDataset to learn more. The following is an example:
global_batch_size = 2
# Passing the devices is optional.
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
# Create a dataset
dataset = tf.data.Dataset.range(4).batch(global_batch_size)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)
@tf.function
def replica_fn(input):
return input*2
result = []
# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
# process dataset elements
result.append(strategy.run(replica_fn, args=(x,)))
print(result)
[PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>
}, PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>
}]
Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, dataset is batched by global_batch_size, and calling experimental_distribute_dataset on it rebatches dataset to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. x is a tf.distribute.DistributedValues containing data for all replicas, and each replica gets data of the new batch size. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica. Sharding covers autosharding across multiple workers and sharding within every worker. First, in multi-worker distributed training (i.e. when you use tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right tf.data.experimental.AutoShardPolicy is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding.
Note: for autosharding across multiple workers, the default mode is tf.data.experimental.AutoShardPolicy.AUTO. This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have less than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the tf.data.experimental.DistributeOptions.auto_shard_policy to be tf.data.experimental.AutoShardPolicy.OFF.
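As a sketch of the workaround in the note above (assuming an existing dataset object), autosharding can be disabled via tf.data options:
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.OFF)
dataset = dataset.with_options(options)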
By default, this method adds a prefetch transformation at the end of the user-provided tf.data.Dataset instance. The buffer_size argument to the prefetch transformation is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.distribute_datasets_from_function instead, which does not do any automatic batching or sharding for you.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset tf.data.Dataset that will be sharded across all replicas using the rules stated above.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_local_results View source
experimental_local_results(
value
)
Returns the list of all local per-replica values contained in value.
Note: This only returns values on the worker initiated by this client. When using a tf.distribute.Strategy like tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker will be its own client, and this function will only return values computed on that worker.
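A minimal sketch of unpacking per-replica results on a single worker; the two-GPU device list is an assumption for illustration.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
per_replica = strategy.run(tf.function(lambda: tf.constant(1.0)))
local = strategy.experimental_local_results(per_replica)
# local is a tuple with one tensor per local replica, e.g. (1.0, 1.0).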
Args
value A value returned by experimental_run(), run(), extended.call_for_each_replica(), or a variable created in scope.
Returns A tuple of values contained in value. If value represents a single value, this returns (value,).
experimental_make_numpy_dataset View source
experimental_make_numpy_dataset(
numpy_input, session=None
)
Makes a tf.data.Dataset for input provided via a numpy array. This avoids adding numpy_input as a large constant in the graph, and copies the data to the machine or machines that will be processing the input. Note that you will likely need to use tf.distribute.Strategy.experimental_distribute_dataset with the returned dataset to further distribute it with the strategy. Example: numpy_input = np.ones([10], dtype=np.float32)
dataset = strategy.experimental_make_numpy_dataset(numpy_input)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
Args
numpy_input A nest of NumPy input arrays that will be converted into a dataset. Note that lists of Numpy arrays are stacked, as that is normal tf.data.Dataset behavior.
session (TensorFlow v1.x graph execution only) A session used for initialization.
Returns A tf.data.Dataset representing numpy_input.
experimental_run View source
experimental_run(
fn, input_iterator=None
)
Runs ops in fn on each replica, with inputs from input_iterator. DEPRECATED: This method is not available in TF 2.x. Please switch to using run instead. When eager execution is enabled, executes ops specified by fn on each replica. Otherwise, builds a graph to execute the ops on each replica. Each replica will take a single, different input from the inputs provided by one get_next call on the input iterator. fn may call tf.distribute.get_replica_context() to access members such as replica_id_in_sync_group. Key Point: Depending on the tf.distribute.Strategy implementation being used, and whether eager execution is enabled, fn may be called one or more times (once for each replica).
Args
fn The function to run. The inputs to the function must match the outputs of input_iterator.get_next(). The output must be a tf.nest of Tensors.
input_iterator (Optional) input iterator from which the inputs are taken.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be PerReplica (if the values are unsynchronized), Mirrored (if the values are kept in sync), or Tensor (if running on a single replica).
make_dataset_iterator View source
make_dataset_iterator(
dataset
)
Makes an iterator for input provided via dataset. DEPRECATED: This method is not available in TF 2.x. Data from the given dataset will be distributed evenly across all the compute replicas. We will assume that the input dataset is batched by the global batch size. With this assumption, we will make a best effort to divide each batch across all the replicas (one or more workers). If this effort fails, an error will be thrown, and the user should instead use make_input_fn_iterator which provides more control to the user, and does not try to divide a batch across replicas. The user could also use make_input_fn_iterator if they want to customize which input is fed to which replica/worker etc.
Args
dataset tf.data.Dataset that will be distributed evenly across all replicas.
Returns A tf.distribute.InputIterator which returns inputs for each step of the computation. Users should call initialize on the returned iterator.
make_input_fn_iterator View source
make_input_fn_iterator(
input_fn, replication_mode=tf.distribute.InputReplicationMode.PER_WORKER
)
Returns an iterator split across replicas created from an input function. DEPRECATED: This method is not available in TF 2.x. The input_fn should take a tf.distribute.InputContext object where information about batching and input sharding can be accessed: def input_fn(input_context):
batch_size = input_context.get_per_replica_batch_size(global_batch_size)
d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size)
return d.shard(input_context.num_input_pipelines,
input_context.input_pipeline_id)
with strategy.scope():
iterator = strategy.make_input_fn_iterator(input_fn)
replica_results = strategy.experimental_run(replica_fn, iterator)
The tf.data.Dataset returned by input_fn should have a per-replica batch size, which may be computed using input_context.get_per_replica_batch_size.
Args
input_fn A function taking a tf.distribute.InputContext object and returning a tf.data.Dataset.
replication_mode an enum value of tf.distribute.InputReplicationMode. Only PER_WORKER is supported currently, which means there will be a single call to input_fn per worker. Replicas will dequeue from the local tf.data.Dataset on their worker.
Returns An iterator object that should first be .initialize()-ed. It may then either be passed to strategy.experimental_run() or you can iterator.get_next() to get the next value to pass to strategy.extended.call_for_each_replica().
reduce View source
reduce(
reduce_op, value, axis=None
)
Reduce value across replicas and return result on current device.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
total = strategy.reduce("SUM", per_replica_result, axis=None)
total
<tf.Tensor: shape=(), dtype=int32, numpy=1>
To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs: strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
# Check devices on which per replica result is:
strategy.experimental_local_results(per_replica_result)[0].device
# /job:localhost/replica:0/task:0/device:GPU:0
strategy.experimental_local_results(per_replica_result)[1].device
# /job:localhost/replica:0/task:0/device:GPU:1
total = strategy.reduce("SUM", per_replica_result, axis=None)
# Check device on which reduced result is:
total.device
# /job:localhost/replica:0/task:0/device:CPU:0
This API is typically used for aggregating the results returned from different replicas, for reporting etc. For example, loss computed from different replicas can be averaged using this API before printing.
Note: The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is the CPU of each worker.
There are a number of different tf.distribute APIs for reducing values across replicas:
tf.distribute.ReplicaContext.all_reduce: This differs from Strategy.reduce in that it is for replica context and does not copy the results to the host device. all_reduce should be typically used for reductions inside the training step such as gradients.
tf.distribute.StrategyExtended.reduce_to and tf.distribute.StrategyExtended.batch_reduce_to: These APIs are more advanced versions of Strategy.reduce as they allow customizing the destination of the result. They are also called in cross replica context. What should axis be? Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. With axis=None, reduce will aggregate only across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss). strategy.reduce("sum", per_replica_result, axis=None)
Sometimes, you will want to aggregate across both the global batch and all replicas. You can get this behavior by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7. strategy.reduce("sum", per_replica_result, axis=0)
If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. Contrast this with computing reduce_mean to get a scalar value on each replica and this function to average those means, which will weigh some values 1/8 and others 1/4.
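The axis semantics above can be checked with a small sketch; the two mirrored replicas are an assumption for illustration.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
  ctx = tf.distribute.get_replica_context()
  # Each replica returns a length-2 "batch" filled with its replica id.
  return tf.fill([2], ctx.replica_id_in_sync_group)
per_replica = strategy.run(step_fn)
strategy.reduce("SUM", per_replica, axis=None)  # [0+1, 0+1] -> [1, 1]
strategy.reduce("SUM", per_replica, axis=0)     # 0+0+1+1 -> 2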
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with OneDeviceStrategy or default strategy.
axis specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
Returns A Tensor.
run View source
run(
fn, args=(), kwargs=None, options=None
)
Invokes fn on each replica, with the given arguments. This method is the primary way to distribute your computation with a tf.distribute object. It invokes fn on each replica. If args or kwargs have tf.distribute.DistributedValues, such as those produced by a tf.distribute.DistributedDataset from tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function, when fn is executed on a particular replica, it will be executed with the component of tf.distribute.DistributedValues that corresponds to that replica. fn is invoked under a replica context. fn may call tf.distribute.get_replica_context() to access members such as all_reduce. Please see the module-level docstring of tf.distribute for the concept of replica context. All arguments in args or kwargs can either be Python values or a nested structure of tensors, e.g. a list of tensors, in which case args and kwargs will be passed as-is to the fn invoked on each replica; or they can be tf.distribute.DistributedValues containing tensors or composite tensors, i.e. tf.compat.v1.TensorInfo.CompositeTensor, in which case each fn call will get the component of a tf.distribute.DistributedValues corresponding to its replica. Key Point: Depending on the implementation of tf.distribute.Strategy and whether eager execution is enabled, fn may be called one or more times. If fn is annotated with tf.function or tf.distribute.Strategy.run is called inside a tf.function (eager execution is disabled inside a tf.function by default), fn is called once per replica to generate a TensorFlow graph, which will then be reused for execution with new inputs. Otherwise, if eager execution is enabled, fn will be called once per replica every step just like regular Python code. Example usage: Constant tensor input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
tensor_input = tf.constant(3.0)
@tf.function
def replica_fn(input):
return input*2.0
result = strategy.run(replica_fn, args=(tensor_input,))
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>,
1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>
}
DistributedValues input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def run():
def value_fn(value_context):
return value_context.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn2(input):
return input*2
return strategy.run(replica_fn2, args=(distributed_values,))
result = run()
result
<tf.Tensor: shape=(), dtype=int32, numpy=4>
Use tf.distribute.ReplicaContext to allreduce values.
strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"])
@tf.function
def run():
def value_fn(value_context):
return tf.constant(value_context.replica_id_in_sync_group)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn(input):
return tf.distribute.get_replica_context().all_reduce("sum", input)
return strategy.run(replica_fn, args=(distributed_values,))
result = run()
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=int32, numpy=1>,
1: <tf.Tensor: shape=(), dtype=int32, numpy=1>
}
Args
fn The function to run on each replica.
args Optional positional arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
kwargs Optional keyword arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
options An optional instance of tf.distribute.RunOptions specifying the options to run fn.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be tf.distribute.DistributedValues, Tensor objects, or Tensors (for example, if running on a single replica).
scope View source
scope()
Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
mirrored_variable = tf.Variable(1.)
mirrored_variable
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
}
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
What happens when Strategy.scope is entered?
strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy. Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts. Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope. In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker.
Note: Entering a scope does not automatically distribute a computation, except in the case of high level training framework like keras model.fit. If you're not using model.fit, you need to use strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial.
What should be in scope and what should be outside? There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK). Anything that creates variables that should be distributed variables must be in strategy.scope. This can be either by directly putting it in scope, or relying on another API like strategy.run or model.fit to enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Common things that create variables in TF: models, optimizers, metrics. These should always be created inside the scope. Another source of variable creation can be a checkpoint restore - when variables are created lazily. Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the strategy.scope can also work seamlessly, without the user having to enter the scope. Some strategy APIs (such as strategy.run and strategy.reduce) which require being in a strategy's scope enter the scope for you automatically, which means when using those APIs you don't need to enter the scope yourself. When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high-level training framework methods such as model.compile, model.fit, etc. are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training etc. See the detailed example in the distributed Keras tutorial. Note that simply calling model(..) is not impacted - only high-level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope. The following can be either inside or outside the scope: Creating the input datasets Defining tf.functions that represent your training step Saving APIs such as tf.saved_model.save. Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way. Checkpoint saving. As mentioned above - checkpoint.restore may sometimes need to be inside scope if it creates variables.
Returns A context manager.
update_config_proto View source
update_config_proto(
config_proto
)
Returns a copy of config_proto modified for use with this strategy. DEPRECATED: This method is not available in TF 2.x. The updated config has the settings needed to run the strategy, e.g. configuration to run collective ops, or device filters to improve distributed training performance.
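A usage sketch under TF 1.x graph execution; the allow_soft_placement setting is an arbitrary example, not a requirement of this method.
config = tf.compat.v1.ConfigProto(allow_soft_placement=True)
updated = strategy.update_config_proto(config)
sess = tf.compat.v1.Session(config=updated)  # run training with this session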
Args
config_proto a tf.ConfigProto object.
Returns The updated copy of the config_proto. | tensorflow.compat.v1.distribute.experimental.multiworkermirroredstrategy |
tf.compat.v1.distribute.experimental.ParameterServerStrategy An asynchronous multi-worker parameter server tf.distribute strategy. Inherits From: Strategy
tf.compat.v1.distribute.experimental.ParameterServerStrategy(
cluster_resolver=None
)
This strategy requires two roles: workers and parameter servers. Variables and updates to those variables are assigned to parameter servers, and other operations are assigned to workers. When each worker has more than one GPU, operations will be replicated on all GPUs. Even though operations may be replicated, variables are not, and each worker shares a common view of which parameter server a variable is assigned to. By default it uses TFConfigClusterResolver to detect configurations for multi-worker training. This requires a 'TF_CONFIG' environment variable, and the 'TF_CONFIG' must have a cluster spec (a sketch follows below). This class assumes each worker is running the same code independently, but parameter servers are running a standard server. This means that while each worker will synchronously compute a single gradient update across all GPUs, updates between workers proceed asynchronously. Operations that occur only on the first replica (such as incrementing the global step) will occur on the first replica of every worker. It is expected to call call_for_each_replica(fn, ...) for any operations which potentially can be replicated across replicas (i.e. multiple GPUs), even if there is only a CPU or one GPU. When defining the fn, extra caution needs to be taken: 1) It is generally not recommended to open a device scope under the strategy's scope. A device scope (i.e. calling tf.device) will be merged with or override the device for operations but will not change the device for variables. 2) It is also not recommended to open a colocation scope (i.e. calling tf.compat.v1.colocate_with) under the strategy's scope. For colocating variables, use strategy.extended.colocate_vars_with instead. Colocation of ops will possibly create device assignment conflicts.
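A sketch of the 'TF_CONFIG' each task would set before constructing this strategy; hostnames and ports are placeholders.
import json
import os
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ['host1:2222', 'host2:2222'],
        'ps': ['host3:2222'],
    },
    # Each task sets its own type/index; this one is the first worker.
    'task': {'type': 'worker', 'index': 0},
})
strategy = tf.compat.v1.distribute.experimental.ParameterServerStrategy()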
Note: This strategy only works with the Estimator API. Pass an instance of this strategy to the experimental_distribute argument when you create the RunConfig. This instance of RunConfig should then be passed to the Estimator instance on which train_and_evaluate is called.
For example: strategy = tf.distribute.experimental.ParameterServerStrategy()
run_config = tf.estimator.RunConfig(
    # DistributeConfig is the TF 1.x contrib wrapper expected by the
    # experimental_distribute argument.
    experimental_distribute=tf.contrib.distribute.DistributeConfig(
        train_distribute=strategy))
estimator = tf.estimator.Estimator(config=run_config)
tf.estimator.train_and_evaluate(estimator,...)
Args
cluster_resolver Optional tf.distribute.cluster_resolver.ClusterResolver object. Defaults to a tf.distribute.cluster_resolver.TFConfigClusterResolver.
Attributes
cluster_resolver Returns the cluster resolver associated with this strategy. In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy(), there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set the relevant attribute, or override this property; otherwise, None is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None. The tf.distribute.cluster_resolver.ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For example,
os.environ['TF_CONFIG'] = json.dumps({
'cluster': {
'worker': ["localhost:12345", "localhost:23456"],
'ps': ["localhost:34567"]
},
'task': {'type': 'worker', 'index': 0}
})
# This implicitly uses TF_CONFIG for the cluster and current task info.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
...
if strategy.cluster_resolver.task_type == 'worker':
  # Perform something that's only applicable on workers. Since we set this
  # as a worker above, this block will run on this particular instance.
  pass
elif strategy.cluster_resolver.task_type == 'ps':
  # Perform something that's only applicable on parameter servers. Since we
  # set this as a worker above, this block will not run on this particular
  # instance.
  pass
For more information, please see tf.distribute.cluster_resolver.ClusterResolver's API docstring.
extended tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync Returns number of replicas over which gradients are aggregated. Methods distribute_datasets_from_function View source
distribute_datasets_from_function(
dataset_fn, options=None
)
Distributes tf.data.Dataset instances created by calls to dataset_fn. The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by the per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The dataset_fn should take a tf.distribute.InputContext instance where information about batching and input replication can be accessed. You can use the element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. See tf.distribute.DistributedDataset.element_spec for an example. Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. This may be computed using input_context.get_per_replica_batch_size.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_distribute_dataset View source
experimental_distribute_dataset(
dataset, options=None
)
Creates tf.distribute.DistributedDataset from tf.data.Dataset. The returned tf.distribute.DistributedDataset can be iterated over similar to regular datasets. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset. You can only create an iterator or examine the tf.TypeSpec of the data generated by it. See API docs of tf.distribute.DistributedDataset to learn more. The following is an example:
global_batch_size = 2
# Passing the devices is optional.
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
# Create a dataset
dataset = tf.data.Dataset.range(4).batch(global_batch_size)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)
@tf.function
def replica_fn(input):
return input*2
result = []
# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
# process dataset elements
result.append(strategy.run(replica_fn, args=(x,)))
print(result)
[PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>
}, PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>
}]
Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, dataset is batched by global_batch_size, and calling experimental_distribute_dataset on it rebatches dataset to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. x is a tf.distribute.DistributedValues containing data for all replicas, and each replica gets data of the new batch size. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica. Sharding covers autosharding across multiple workers and sharding within every worker. First, in multi-worker distributed training (i.e. when you use tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right tf.data.experimental.AutoShardPolicy is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding.
Note: for autosharding across multiple workers, the default mode is tf.data.experimental.AutoShardPolicy.AUTO. This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have less than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the tf.data.experimental.DistributeOptions.auto_shard_policy to be tf.data.experimental.AutoShardPolicy.OFF.
By default, this method adds a prefetch transformation at the end of the user-provided tf.data.Dataset instance. The buffer_size argument to the prefetch transformation is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.distribute_datasets_from_function instead, which does not do any automatic batching or sharding for you.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset tf.data.Dataset that will be sharded across all replicas using the rules stated above.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_local_results View source
experimental_local_results(
value
)
Returns the list of all local per-replica values contained in value.
Note: This only returns values on the worker initiated by this client. When using a tf.distribute.Strategy like tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker will be its own client, and this function will only return values computed on that worker.
Args
value A value returned by experimental_run(), run(), extended.call_for_each_replica(), or a variable created in scope.
Returns A tuple of values contained in value. If value represents a single value, this returns (value,).
experimental_make_numpy_dataset View source
experimental_make_numpy_dataset(
numpy_input, session=None
)
Makes a tf.data.Dataset for input provided via a numpy array. This avoids adding numpy_input as a large constant in the graph, and copies the data to the machine or machines that will be processing the input. Note that you will likely need to use tf.distribute.Strategy.experimental_distribute_dataset with the returned dataset to further distribute it with the strategy. Example: numpy_input = np.ones([10], dtype=np.float32)
dataset = strategy.experimental_make_numpy_dataset(numpy_input)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
Args
numpy_input A nest of NumPy input arrays that will be converted into a dataset. Note that lists of Numpy arrays are stacked, as that is normal tf.data.Dataset behavior.
session (TensorFlow v1.x graph execution only) A session used for initialization.
Returns A tf.data.Dataset representing numpy_input.
experimental_run View source
experimental_run(
fn, input_iterator=None
)
Runs ops in fn on each replica, with inputs from input_iterator. DEPRECATED: This method is not available in TF 2.x. Please switch to using run instead. When eager execution is enabled, executes ops specified by fn on each replica. Otherwise, builds a graph to execute the ops on each replica. Each replica will take a single, different input from the inputs provided by one get_next call on the input iterator. fn may call tf.distribute.get_replica_context() to access members such as replica_id_in_sync_group. Key Point: Depending on the tf.distribute.Strategy implementation being used, and whether eager execution is enabled, fn may be called one or more times (once for each replica).
Args
fn The function to run. The inputs to the function must match the outputs of input_iterator.get_next(). The output must be a tf.nest of Tensors.
input_iterator (Optional) input iterator from which the inputs are taken.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be PerReplica (if the values are unsynchronized), Mirrored (if the values are kept in sync), or Tensor (if running on a single replica).
make_dataset_iterator View source
make_dataset_iterator(
dataset
)
Makes an iterator for input provided via dataset. DEPRECATED: This method is not available in TF 2.x. Data from the given dataset will be distributed evenly across all the compute replicas. We will assume that the input dataset is batched by the global batch size. With this assumption, we will make a best effort to divide each batch across all the replicas (one or more workers). If this effort fails, an error will be thrown, and the user should instead use make_input_fn_iterator which provides more control to the user, and does not try to divide a batch across replicas. The user could also use make_input_fn_iterator if they want to customize which input is fed to which replica/worker etc.
Args
dataset tf.data.Dataset that will be distributed evenly across all replicas.
Returns A tf.distribute.InputIterator which returns inputs for each step of the computation. Users should call initialize on the returned iterator.
make_input_fn_iterator View source
make_input_fn_iterator(
input_fn, replication_mode=tf.distribute.InputReplicationMode.PER_WORKER
)
Returns an iterator split across replicas created from an input function. DEPRECATED: This method is not available in TF 2.x. The input_fn should take a tf.distribute.InputContext object where information about batching and input sharding can be accessed: def input_fn(input_context):
batch_size = input_context.get_per_replica_batch_size(global_batch_size)
d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size)
return d.shard(input_context.num_input_pipelines,
input_context.input_pipeline_id)
with strategy.scope():
iterator = strategy.make_input_fn_iterator(input_fn)
replica_results = strategy.experimental_run(replica_fn, iterator)
The tf.data.Dataset returned by input_fn should have a per-replica batch size, which may be computed using input_context.get_per_replica_batch_size.
Args
input_fn A function taking a tf.distribute.InputContext object and returning a tf.data.Dataset.
replication_mode an enum value of tf.distribute.InputReplicationMode. Only PER_WORKER is supported currently, which means there will be a single call to input_fn per worker. Replicas will dequeue from the local tf.data.Dataset on their worker.
Returns An iterator object that should first be .initialize()-ed. It may then either be passed to strategy.experimental_run() or you can iterator.get_next() to get the next value to pass to strategy.extended.call_for_each_replica().
reduce View source
reduce(
reduce_op, value, axis=None
)
Reduce value across replicas and return result on current device.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
total = strategy.reduce("SUM", per_replica_result, axis=None)
total
<tf.Tensor: shape=(), dtype=int32, numpy=1>
To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs: strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
# Check devices on which per replica result is:
strategy.experimental_local_results(per_replica_result)[0].device
# /job:localhost/replica:0/task:0/device:GPU:0
strategy.experimental_local_results(per_replica_result)[1].device
# /job:localhost/replica:0/task:0/device:GPU:1
total = strategy.reduce("SUM", per_replica_result, axis=None)
# Check device on which reduced result is:
total.device
# /job:localhost/replica:0/task:0/device:CPU:0
This API is typically used for aggregating the results returned from different replicas, for reporting etc. For example, loss computed from different replicas can be averaged using this API before printing.
Note: The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is the CPU of each worker.
There are a number of different tf.distribute APIs for reducing values across replicas:
tf.distribute.ReplicaContext.all_reduce: This differs from Strategy.reduce in that it is for replica context and does not copy the results to the host device. all_reduce should be typically used for reductions inside the training step such as gradients.
tf.distribute.StrategyExtended.reduce_to and tf.distribute.StrategyExtended.batch_reduce_to: These APIs are more advanced versions of Strategy.reduce as they allow customizing the destination of the result. They are also called in cross replica context. What should axis be? Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. With axis=None, reduce will aggregate only across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss). strategy.reduce("sum", per_replica_result, axis=None)
Sometimes, you will want to aggregate across both the global batch and all replicas. You can get this behavior by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7. strategy.reduce("sum", per_replica_result, axis=0)
If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. Contrast this with computing reduce_mean to get a scalar value on each replica and this function to average those means, which will weigh some values 1/8 and others 1/4.
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with OneDeviceStrategy or default strategy.
axis specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
Returns A Tensor.
run View source
run(
fn, args=(), kwargs=None, options=None
)
Invokes fn on each replica, with the given arguments. This method is the primary way to distribute your computation with a tf.distribute object. It invokes fn on each replica. If args or kwargs have tf.distribute.DistributedValues, such as those produced by a tf.distribute.DistributedDataset from tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function, when fn is executed on a particular replica, it will be executed with the component of tf.distribute.DistributedValues that corresponds to that replica. fn is invoked under a replica context. fn may call tf.distribute.get_replica_context() to access members such as all_reduce. Please see the module-level docstring of tf.distribute for the concept of replica context. All arguments in args or kwargs can either be Python values or a nested structure of tensors, e.g. a list of tensors, in which case args and kwargs will be passed as-is to the fn invoked on each replica; or they can be tf.distribute.DistributedValues containing tensors or composite tensors, i.e. tf.compat.v1.TensorInfo.CompositeTensor, in which case each fn call will get the component of a tf.distribute.DistributedValues corresponding to its replica. Key Point: Depending on the implementation of tf.distribute.Strategy and whether eager execution is enabled, fn may be called one or more times. If fn is annotated with tf.function or tf.distribute.Strategy.run is called inside a tf.function (eager execution is disabled inside a tf.function by default), fn is called once per replica to generate a TensorFlow graph, which will then be reused for execution with new inputs. Otherwise, if eager execution is enabled, fn will be called once per replica every step just like regular Python code. Example usage: Constant tensor input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
tensor_input = tf.constant(3.0)
@tf.function
def replica_fn(input):
return input*2.0
result = strategy.run(replica_fn, args=(tensor_input,))
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>,
1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>
}
DistributedValues input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def run():
def value_fn(value_context):
return value_context.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn2(input):
return input*2
return strategy.run(replica_fn2, args=(distributed_values,))
result = run()
result
<tf.Tensor: shape=(), dtype=int32, numpy=4>
Use tf.distribute.ReplicaContext to allreduce values.
strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"])
@tf.function
def run():
def value_fn(value_context):
return tf.constant(value_context.replica_id_in_sync_group)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn(input):
return tf.distribute.get_replica_context().all_reduce("sum", input)
return strategy.run(replica_fn, args=(distributed_values,))
result = run()
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=int32, numpy=1>,
1: <tf.Tensor: shape=(), dtype=int32, numpy=1>
}
Args
fn The function to run on each replica.
args Optional positional arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
kwargs Optional keyword arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
options An optional instance of tf.distribute.RunOptions specifying the options to run fn.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be tf.distribute.DistributedValues, Tensor objects, or Tensors (for example, if running on a single replica).
scope View source
scope()
Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
mirrored_variable = tf.Variable(1.)
mirrored_variable
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
}
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
What happens when Strategy.scope is entered?
strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy. Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts. Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope. In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker.
Note: Entering a scope does not automatically distribute a computation, except in the case of high level training framework like keras model.fit. If you're not using model.fit, you need to use strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial.
What should be in scope and what should be outside? There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK).
Anything that creates variables that should be distributed variables must be called in strategy.scope. This can be done either by directly putting it in scope, or by relying on another API like strategy.run or model.fit to enter it for you. Any variable created outside scope will not be distributed and may have performance implications. Common things that create variables in TF are models, optimizers, and metrics; these should always be created inside the scope. Another source of variable creation can be a checkpoint restore, when variables are created lazily. Note that any variable created inside a strategy captures the strategy information, so reading and writing to these variables outside strategy.scope can also work seamlessly, without the user having to enter the scope.
Some strategy APIs (such as strategy.run and strategy.reduce) which require being in a strategy's scope enter the scope for you automatically, which means when using those APIs you don't need to enter the scope yourself.
When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high level training framework methods such as model.compile, model.fit, etc. are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training. See the detailed example in the distributed keras tutorial. Note that simply calling model(..) is not impacted - only high level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope.
The following can be either inside or outside the scope: creating the input datasets; defining tf.functions that represent your training step; saving APIs such as tf.saved_model.save (loading creates variables, so that should go inside the scope if you want to train the model in a distributed way); checkpoint saving (as mentioned above, checkpoint.restore may sometimes need to be inside scope if it creates variables).
Returns A context manager.
update_config_proto View source
update_config_proto(
config_proto
)
Returns a copy of config_proto modified for use with this strategy. DEPRECATED: This method is not available in TF 2.x. The updated config contains settings needed to run the strategy, e.g. configuration to run collective ops, or device filters to improve distributed training performance.
Args
config_proto a tf.ConfigProto object.
Returns The updated copy of the config_proto. | tensorflow.compat.v1.distribute.experimental.parameterserverstrategy |
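As a minimal usage sketch (assuming TF 1.x graph execution and a strategy constructed as above; the ConfigProto fields shown are illustrative):
import tensorflow.compat.v1 as tf

# `strategy` is assumed to be a strategy instance created earlier.
base_config = tf.ConfigProto(allow_soft_placement=True)
updated_config = strategy.update_config_proto(base_config)
# Use the strategy-aware config when creating the session.
with tf.Session(config=updated_config) as sess:
  ...  # build and run the training graph here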
tf.compat.v1.distribute.experimental.TPUStrategy TPU distribution strategy implementation. Inherits From: Strategy
tf.compat.v1.distribute.experimental.TPUStrategy(
tpu_cluster_resolver=None, steps_per_run=None, device_assignment=None
)
Args
tpu_cluster_resolver A tf.distribute.cluster_resolver.TPUClusterResolver, which provides information about the TPU cluster.
steps_per_run Number of steps to run on device before returning to the host. Note that this can have side-effects on performance, hooks, metrics, summaries etc. This parameter is only used when Distribution Strategy is used with estimator or keras.
device_assignment Optional tf.tpu.experimental.DeviceAssignment to specify the placement of replicas on the TPU cluster. Currently only supports the use case of using a single core within a TPU cluster.
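For reference, a minimal construction sketch (assuming a reachable TPU worker; tpu='' and steps_per_run=100 are illustrative values):
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.compat.v1.distribute.experimental.TPUStrategy(
    resolver, steps_per_run=100)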
Attributes
cluster_resolver Returns the cluster resolver associated with this strategy. In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy(), there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set the relevant attribute, or override this property; otherwise, None is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None. The tf.distribute.cluster_resolver.ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For example,
os.environ['TF_CONFIG'] = json.dumps({
'cluster': {
'worker': ["localhost:12345", "localhost:23456"],
'ps': ["localhost:34567"]
},
'task': {'type': 'worker', 'index': 0}
})
# This implicitly uses TF_CONFIG for the cluster and current task info.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
...
if strategy.cluster_resolver.task_type == 'worker':
  # Perform something that's only applicable on workers. Since we set this
  # as a worker above, this block will run on this particular instance.
  ...
elif strategy.cluster_resolver.task_type == 'ps':
  # Perform something that's only applicable on parameter servers. Since we
  # set this as a worker above, this block will not run on this particular
  # instance.
  ...
For more information, please see tf.distribute.cluster_resolver.ClusterResolver's API docstring.
extended tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync Returns number of replicas over which gradients are aggregated.
steps_per_run DEPRECATED: use .extended.steps_per_run instead. Methods distribute_datasets_from_function View source
distribute_datasets_from_function(
dataset_fn, options=None
)
Distributes tf.data.Dataset instances created by calls to dataset_fn. The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by the per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The dataset_fn should take a tf.distribute.InputContext instance where information about batching and input replication can be accessed. You can use the element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. See tf.distribute.DistributedDataset.element_spec for an example. Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. This may be computed using input_context.get_per_replica_batch_size.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. Ordered outputs are typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
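A small sketch of such a dataset_fn (assuming a strategy constructed earlier; the dataset contents and global_batch_size are illustrative):
global_batch_size = 8

def dataset_fn(input_context):
  # Per-replica batch size: global batch size / replicas in sync.
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  d = tf.data.Dataset.range(64).batch(batch_size)
  # Shard manually across input pipelines (one pipeline per worker).
  return d.shard(input_context.num_input_pipelines,
                 input_context.input_pipeline_id)

dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)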
experimental_distribute_dataset View source
experimental_distribute_dataset(
dataset, options=None
)
Creates tf.distribute.DistributedDataset from tf.data.Dataset. The returned tf.distribute.DistributedDataset can be iterated over similarly to regular datasets. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset. You can only create an iterator or examine the tf.TypeSpec of the data generated by it. See API docs of tf.distribute.DistributedDataset to learn more. The following is an example:
global_batch_size = 2
# Passing the devices is optional.
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
# Create a dataset
dataset = tf.data.Dataset.range(4).batch(global_batch_size)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)
@tf.function
def replica_fn(input):
return input*2
result = []
# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
# process dataset elements
result.append(strategy.run(replica_fn, args=(x,)))
print(result)
[PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>
}, PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>
}]
Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, dataset is batched by global_batch_size, and calling experimental_distribute_dataset on it rebatches dataset to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. x is a tf.distribute.DistributedValues containing data for all replicas, and each replica gets data of the new batch size. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica. Sharding covers autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e. when you use tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right tf.data.experimental.AutoShardPolicy is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding.
Note: for autosharding across multiple workers, the default mode is tf.data.experimental.AutoShardPolicy.AUTO. This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if there are fewer input files than workers, we suggest that you disable dataset autosharding across workers by setting the tf.data.experimental.DistributeOptions.auto_shard_policy to be tf.data.experimental.AutoShardPolicy.OFF.
By default, this method adds a prefetch transformation at the end of the user provided tf.data.Dataset instance. The buffer_size argument of the prefetch transformation is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.distribute_datasets_from_function instead, which does not do any automatic batching or sharding for you.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. Ordered outputs are typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset tf.data.Dataset that will be sharded across all replicas using the rules stated above.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_local_results View source
experimental_local_results(
value
)
Returns the list of all local per-replica values contained in value.
Note: This only returns values on the worker initiated by this client. When using a tf.distribute.Strategy like tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker will be its own client, and this function will only return values computed on that worker.
Args
value A value returned by experimental_run(), run(), extended.call_for_each_replica(), or a variable created in scope.
Returns A tuple of values contained in value. If value represents a single value, this returns (value,).
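For example, a small sketch with two local replicas:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
  return tf.distribute.get_replica_context().replica_id_in_sync_group
per_replica = strategy.run(step_fn)
# Unpack the per-replica container into a plain tuple of tensors,
# one entry per local replica.
local_values = strategy.experimental_local_results(per_replica)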
experimental_make_numpy_dataset View source
experimental_make_numpy_dataset(
numpy_input, session=None
)
Makes a tf.data.Dataset for input provided via a numpy array. This avoids adding numpy_input as a large constant in the graph, and copies the data to the machine or machines that will be processing the input. Note that you will likely need to use tf.distribute.Strategy.experimental_distribute_dataset with the returned dataset to further distribute it with the strategy. Example:
numpy_input = np.ones([10], dtype=np.float32)
dataset = strategy.experimental_make_numpy_dataset(numpy_input)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
Args
numpy_input A nest of NumPy input arrays that will be converted into a dataset. Note that lists of Numpy arrays are stacked, as that is normal tf.data.Dataset behavior.
session (TensorFlow v1.x graph execution only) A session used for initialization.
Returns A tf.data.Dataset representing numpy_input.
experimental_run View source
experimental_run(
fn, input_iterator=None
)
Runs ops in fn on each replica, with inputs from input_iterator. DEPRECATED: This method is not available in TF 2.x. Please switch to using run instead. When eager execution is enabled, executes ops specified by fn on each replica. Otherwise, builds a graph to execute the ops on each replica. Each replica will take a single, different input from the inputs provided by one get_next call on the input iterator. fn may call tf.distribute.get_replica_context() to access members such as replica_id_in_sync_group. Key Point: Depending on the tf.distribute.Strategy implementation being used, and whether eager execution is enabled, fn may be called one or more times (once for each replica).
Args
fn The function to run. The inputs to the function must match the outputs of input_iterator.get_next(). The output must be a tf.nest of Tensors.
input_iterator (Optional) input iterator from which the inputs are taken.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be PerReplica (if the values are unsynchronized), Mirrored (if the values are kept in sync), or Tensor (if running on a single replica).
make_dataset_iterator View source
make_dataset_iterator(
dataset
)
Makes an iterator for input provided via dataset. DEPRECATED: This method is not available in TF 2.x. Data from the given dataset will be distributed evenly across all the compute replicas. We will assume that the input dataset is batched by the global batch size. With this assumption, we will make a best effort to divide each batch across all the replicas (one or more workers). If this effort fails, an error will be thrown, and the user should instead use make_input_fn_iterator, which provides more control and does not try to divide a batch across replicas. make_input_fn_iterator can also be used to customize which input is fed to which replica/worker, etc.
Args
dataset tf.data.Dataset that will be distributed evenly across all replicas.
Returns A tf.distribute.InputIterator which returns inputs for each step of the computation. The user should call initialize on the returned iterator.
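A rough TF 1.x-style sketch (assuming graph mode and a strategy already constructed; the dataset and batch size are illustrative, and the initialization call follows the contract described above):
global_batch_size = 4
dataset = tf.data.Dataset.range(8).batch(global_batch_size)
iterator = strategy.make_dataset_iterator(dataset)
# The iterator must be initialized before its first use.
init_ops = iterator.initialize()
with tf.compat.v1.Session() as sess:
  sess.run(init_ops)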
make_input_fn_iterator View source
make_input_fn_iterator(
input_fn, replication_mode=tf.distribute.InputReplicationMode.PER_WORKER
)
Returns an iterator split across replicas created from an input function. DEPRECATED: This method is not available in TF 2.x. The input_fn should take a tf.distribute.InputContext object where information about batching and input sharding can be accessed:
def input_fn(input_context):
batch_size = input_context.get_per_replica_batch_size(global_batch_size)
d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size)
return d.shard(input_context.num_input_pipelines,
input_context.input_pipeline_id)
with strategy.scope():
iterator = strategy.make_input_fn_iterator(input_fn)
replica_results = strategy.experimental_run(replica_fn, iterator)
The tf.data.Dataset returned by input_fn should have a per-replica batch size, which may be computed using input_context.get_per_replica_batch_size.
Args
input_fn A function taking a tf.distribute.InputContext object and returning a tf.data.Dataset.
replication_mode an enum value of tf.distribute.InputReplicationMode. Only PER_WORKER is supported currently, which means there will be a single call to input_fn per worker. Replicas will dequeue from the local tf.data.Dataset on their worker.
Returns An iterator object that should first be .initialize()-ed. It may then either be passed to strategy.experimental_run(), or you can call iterator.get_next() to get the next value to pass to strategy.extended.call_for_each_replica().
reduce View source
reduce(
reduce_op, value, axis=None
)
Reduce value across replicas and return result on current device.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
total = strategy.reduce("SUM", per_replica_result, axis=None)
total
<tf.Tensor: shape=(), dtype=int32, numpy=1>
To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs:
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
# Check devices on which per replica result is:
strategy.experimental_local_results(per_replica_result)[0].device
# /job:localhost/replica:0/task:0/device:GPU:0
strategy.experimental_local_results(per_replica_result)[1].device
# /job:localhost/replica:0/task:0/device:GPU:1
total = strategy.reduce("SUM", per_replica_result, axis=None)
# Check device on which reduced result is:
total.device
# /job:localhost/replica:0/task:0/device:CPU:0
This API is typically used for aggregating the results returned from different replicas, for reporting etc. For example, loss computed from different replicas can be averaged using this API before printing.
Note: The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is the CPU of each worker.
There are a number of different tf.distribute APIs for reducing values across replicas:
tf.distribute.ReplicaContext.all_reduce: This differs from Strategy.reduce in that it is for replica context and does not copy the results to the host device. all_reduce should be typically used for reductions inside the training step such as gradients.
tf.distribute.StrategyExtended.reduce_to and tf.distribute.StrategyExtended.batch_reduce_to: These APIs are more advanced versions of Strategy.reduce as they allow customizing the destination of the result. They are also called in cross replica context. What should axis be? Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. With axis=None, reduce will aggregate only across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss).
strategy.reduce("sum", per_replica_result, axis=None)
Sometimes, you will want to aggregate across both the global batch and all replicas. You can get this behavior by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7.
strategy.reduce("sum", per_replica_result, axis=0)
If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. Contrast this with computing reduce_mean to get a scalar value on each replica and then using this function to average those means, which will weigh some values 1/8 and others 1/4.
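To make the axis distinction concrete, a small sketch (assuming two replicas and one global batch of 8; values are illustrative):
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.range(8).batch(8)  # one global batch of 8
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def step_fn(x):
  return tf.cast(x, tf.float32)  # stand-in for a per-example loss

per_replica = strategy.run(step_fn, args=(next(iter(dist_dataset)),))
# axis=None aggregates across replicas only: shape [4] -> [4. 6. 8. 10.]
across_replicas = strategy.reduce("SUM", per_replica, axis=None)
# axis=0 also aggregates over the batch dimension: scalar 28.0
grand_total = strategy.reduce("SUM", per_replica, axis=0)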
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with OneDeviceStrategy or default strategy.
axis specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
Returns A Tensor.
run View source
run(
fn, args=(), kwargs=None, options=None
)
Run fn on each replica, with the given arguments. Executes ops specified by fn on each replica. If args or kwargs have "per-replica" values, such as those produced by a "distributed Dataset", when fn is executed on a particular replica, it will be executed with the component of those "per-replica" values that correspond to that replica. fn may call tf.distribute.get_replica_context() to access members such as all_reduce. All arguments in args or kwargs should either be a nest of tensors or per-replica objects containing tensors or composite tensors. Users can pass strategy-specific options to the options argument. An example that enables bucketizing dynamic shapes in TPUStrategy.run:
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
options = tf.distribute.RunOptions(
experimental_bucketizing_dynamic_shape=True)
dataset = tf.data.Dataset.range(
strategy.num_replicas_in_sync, output_type=tf.float32).batch(
strategy.num_replicas_in_sync, drop_remainder=True)
input_iterator = iter(strategy.experimental_distribute_dataset(dataset))
@tf.function()
def step_fn(inputs):
output = tf.reduce_sum(inputs)
return output
strategy.run(step_fn, args=(next(input_iterator),), options=options)
Args
fn The function to run. The output must be a tf.nest of Tensors.
args (Optional) Positional arguments to fn.
kwargs (Optional) Keyword arguments to fn.
options (Optional) An instance of tf.distribute.RunOptions specifying the options to run fn.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be "per-replica" Tensor objects or Tensors (for example, if running on a single replica).
scope View source
scope()
Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
mirrored_variable = tf.Variable(1.)
mirrored_variable
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
}
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
What happens when Strategy.scope is entered?
strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy. Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts. Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope. In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker.
Note: Entering a scope does not automatically distribute a computation, except in the case of high level training framework like keras model.fit. If you're not using model.fit, you need to use strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial.
What should be in scope and what should be outside? There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK).
Anything that creates variables that should be distributed variables must be called in strategy.scope. This can be done either by directly putting it in scope, or by relying on another API like strategy.run or model.fit to enter it for you. Any variable created outside scope will not be distributed and may have performance implications. Common things that create variables in TF are models, optimizers, and metrics; these should always be created inside the scope. Another source of variable creation can be a checkpoint restore, when variables are created lazily. Note that any variable created inside a strategy captures the strategy information, so reading and writing to these variables outside strategy.scope can also work seamlessly, without the user having to enter the scope.
Some strategy APIs (such as strategy.run and strategy.reduce) which require being in a strategy's scope enter the scope for you automatically, which means when using those APIs you don't need to enter the scope yourself.
When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high level training framework methods such as model.compile, model.fit, etc. are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training. See the detailed example in the distributed keras tutorial. Note that simply calling model(..) is not impacted - only high level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope.
The following can be either inside or outside the scope: creating the input datasets; defining tf.functions that represent your training step; saving APIs such as tf.saved_model.save (loading creates variables, so that should go inside the scope if you want to train the model in a distributed way); checkpoint saving (as mentioned above, checkpoint.restore may sometimes need to be inside scope if it creates variables).
Returns A context manager.
update_config_proto View source
update_config_proto(
config_proto
)
Returns a copy of config_proto modified for use with this strategy. DEPRECATED: This method is not available in TF 2.x. The updated config contains settings needed to run the strategy, e.g. configuration to run collective ops, or device filters to improve distributed training performance.
Args
config_proto a tf.ConfigProto object.
Returns The updated copy of the config_proto. | tensorflow.compat.v1.distribute.experimental.tpustrategy |
tf.compat.v1.distribute.get_loss_reduction tf.distribute.ReduceOp corresponding to the last loss reduction.
tf.compat.v1.distribute.get_loss_reduction()
This is used to decide whether the loss should be scaled in the optimizer (used only for the estimator + v1 optimizer use case).
Returns tf.distribute.ReduceOp corresponding to the last loss reduction for the estimator and v1 optimizer use case, and tf.distribute.ReduceOp.SUM otherwise.
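An illustrative sketch (this helper is mainly meaningful inside the estimator + v1 optimizer code path):
reduce_op = tf.compat.v1.distribute.get_loss_reduction()
if reduce_op == tf.distribute.ReduceOp.MEAN:
  ...  # the last loss was averaged across replicas
else:  # tf.distribute.ReduceOp.SUM
  ...  # the last loss was summed across replicas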
tf.compat.v1.distribute.MirroredStrategy Synchronous training across multiple replicas on one machine. Inherits From: Strategy
tf.compat.v1.distribute.MirroredStrategy(
devices=None, cross_device_ops=None
)
This strategy is typically used for training on one machine with multiple GPUs. For TPUs, use tf.distribute.TPUStrategy. To use MirroredStrategy with multiple workers, please refer to tf.distribute.experimental.MultiWorkerMirroredStrategy. For example, a variable created under a MirroredStrategy is a MirroredVariable. If no devices are specified in the constructor argument of the strategy, then it will use all the available GPUs. If no GPUs are found, it will use the available CPUs. Note that TensorFlow treats all CPUs on a machine as a single device, and uses threads internally for parallelism.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
with strategy.scope():
x = tf.Variable(1.)
x
MirroredVariable:{
0: <tf.Variable ... shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable ... shape=() dtype=float32, numpy=1.0>
}
While using distribution strategies, all the variable creation should be done within the strategy's scope. This will replicate the variables across all the replicas and keep them in sync using an all-reduce algorithm. Variables created inside a MirroredStrategy which is wrapped with a tf.function are still MirroredVariables.
x = []
@tf.function # Wrap the function with tf.function.
def create_variable():
if not x:
x.append(tf.Variable(1.))
return x[0]
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
with strategy.scope():
_ = create_variable()
print(x[0])
MirroredVariable:{
0: <tf.Variable ... shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable ... shape=() dtype=float32, numpy=1.0>
}
experimental_distribute_dataset can be used to distribute the dataset across the replicas when writing your own training loop. If you are using .fit and .compile methods available in tf.keras, then tf.keras will handle the distribution for you. For example:
my_strategy = tf.distribute.MirroredStrategy()
with my_strategy.scope():
  @tf.function
  def distribute_train_epoch(dataset):
    def replica_fn(input):
      # Process the input and return a per-replica result.
      return tf.reduce_sum(input)
    total_result = 0
    for x in dataset:
      per_replica_result = my_strategy.run(replica_fn, args=(x,))
      total_result += my_strategy.reduce(tf.distribute.ReduceOp.SUM,
                                         per_replica_result, axis=None)
    return total_result

  # `dataset` and `EPOCHS` are assumed to be defined elsewhere.
  dist_dataset = my_strategy.experimental_distribute_dataset(dataset)
  for _ in range(EPOCHS):
    train_result = distribute_train_epoch(dist_dataset)
Args
devices a list of device strings such as ['/gpu:0', '/gpu:1']. If None, all available GPUs are used. If no GPUs are found, CPU is used.
cross_device_ops optional, a descendant of CrossDeviceOps. If this is not set, NcclAllReduce() will be used by default. One would customize this if NCCL isn't available or if a special implementation that exploits the particular hardware is available.
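For example, a sketch that swaps in a non-NCCL reduction implementation (tf.distribute.HierarchicalCopyAllReduce is one built-in alternative; the device list is illustrative):
strategy = tf.compat.v1.distribute.MirroredStrategy(
    devices=["/gpu:0", "/gpu:1"],
    cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())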
Attributes
cluster_resolver Returns the cluster resolver associated with this strategy. In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy(), there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set the relevant attribute, or override this property; otherwise, None is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None. The tf.distribute.cluster_resolver.ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For example,
os.environ['TF_CONFIG'] = json.dumps({
'cluster': {
'worker': ["localhost:12345", "localhost:23456"],
'ps': ["localhost:34567"]
},
'task': {'type': 'worker', 'index': 0}
})
# This implicitly uses TF_CONFIG for the cluster and current task info.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
...
if strategy.cluster_resolver.task_type == 'worker':
  # Perform something that's only applicable on workers. Since we set this
  # as a worker above, this block will run on this particular instance.
  ...
elif strategy.cluster_resolver.task_type == 'ps':
  # Perform something that's only applicable on parameter servers. Since we
  # set this as a worker above, this block will not run on this particular
  # instance.
  ...
For more information, please see tf.distribute.cluster_resolver.ClusterResolver's API docstring.
extended tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync Returns number of replicas over which gradients are aggregated. Methods distribute_datasets_from_function View source
distribute_datasets_from_function(
dataset_fn, options=None
)
Distributes tf.data.Dataset instances created by calls to dataset_fn. The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by the per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The dataset_fn should take a tf.distribute.InputContext instance where information about batching and input replication can be accessed. You can use the element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. See tf.distribute.DistributedDataset.element_spec for an example. Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. This may be computed using input_context.get_per_replica_batch_size.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. Ordered outputs are typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_distribute_dataset View source
experimental_distribute_dataset(
dataset, options=None
)
Creates tf.distribute.DistributedDataset from tf.data.Dataset. The returned tf.distribute.DistributedDataset can be iterated over similarly to regular datasets. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset. You can only create an iterator or examine the tf.TypeSpec of the data generated by it. See API docs of tf.distribute.DistributedDataset to learn more. The following is an example:
global_batch_size = 2
# Passing the devices is optional.
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
# Create a dataset
dataset = tf.data.Dataset.range(4).batch(global_batch_size)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)
@tf.function
def replica_fn(input):
return input*2
result = []
# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
# process dataset elements
result.append(strategy.run(replica_fn, args=(x,)))
print(result)
[PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>
}, PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>
}]
Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, dataset is batched by global_batch_size, and calling experimental_distribute_dataset on it rebatches dataset to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. x is a tf.distribute.DistributedValues containing data for all replicas, and each replica gets data of the new batch size. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica. Sharding covers autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e. when you use tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right tf.data.experimental.AutoShardPolicy is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding.
Note: for autosharding across multiple workers, the default mode is tf.data.experimental.AutoShardPolicy.AUTO. This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if there are fewer input files than workers, we suggest that you disable dataset autosharding across workers by setting the tf.data.experimental.DistributeOptions.auto_shard_policy to be tf.data.experimental.AutoShardPolicy.OFF.
By default, this method adds a prefetch transformation at the end of the user provided tf.data.Dataset instance. The buffer_size argument of the prefetch transformation is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.distribute_datasets_from_function instead, which does not do any automatic batching or sharding for you.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. Ordered outputs are typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset tf.data.Dataset that will be sharded across all replicas using the rules stated above.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_local_results View source
experimental_local_results(
value
)
Returns the list of all local per-replica values contained in value.
Note: This only returns values on the worker initiated by this client. When using a tf.distribute.Strategy like tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker will be its own client, and this function will only return values computed on that worker.
Args
value A value returned by experimental_run(), run(), extended.call_for_each_replica(), or a variable created in scope.
Returns A tuple of values contained in value. If value represents a single value, this returns (value,).
experimental_make_numpy_dataset View source
experimental_make_numpy_dataset(
numpy_input, session=None
)
Makes a tf.data.Dataset for input provided via a numpy array. This avoids adding numpy_input as a large constant in the graph, and copies the data to the machine or machines that will be processing the input. Note that you will likely need to use tf.distribute.Strategy.experimental_distribute_dataset with the returned dataset to further distribute it with the strategy. Example:
numpy_input = np.ones([10], dtype=np.float32)
dataset = strategy.experimental_make_numpy_dataset(numpy_input)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
Args
numpy_input A nest of NumPy input arrays that will be converted into a dataset. Note that lists of Numpy arrays are stacked, as that is normal tf.data.Dataset behavior.
session (TensorFlow v1.x graph execution only) A session used for initialization.
Returns A tf.data.Dataset representing numpy_input.
experimental_run View source
experimental_run(
fn, input_iterator=None
)
Runs ops in fn on each replica, with inputs from input_iterator. DEPRECATED: This method is not available in TF 2.x. Please switch to using run instead. When eager execution is enabled, executes ops specified by fn on each replica. Otherwise, builds a graph to execute the ops on each replica. Each replica will take a single, different input from the inputs provided by one get_next call on the input iterator. fn may call tf.distribute.get_replica_context() to access members such as replica_id_in_sync_group. Key Point: Depending on the tf.distribute.Strategy implementation being used, and whether eager execution is enabled, fn may be called one or more times (once for each replica).
Args
fn The function to run. The inputs to the function must match the outputs of input_iterator.get_next(). The output must be a tf.nest of Tensors.
input_iterator (Optional) input iterator from which the inputs are taken.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be PerReplica (if the values are unsynchronized), Mirrored (if the values are kept in sync), or Tensor (if running on a single replica).
make_dataset_iterator View source
make_dataset_iterator(
dataset
)
Makes an iterator for input provided via dataset. DEPRECATED: This method is not available in TF 2.x. Data from the given dataset will be distributed evenly across all the compute replicas. We will assume that the input dataset is batched by the global batch size. With this assumption, we will make a best effort to divide each batch across all the replicas (one or more workers). If this effort fails, an error will be thrown, and the user should instead use make_input_fn_iterator, which provides more control and does not try to divide a batch across replicas. make_input_fn_iterator can also be used to customize which input is fed to which replica/worker, etc.
Args
dataset tf.data.Dataset that will be distributed evenly across all replicas.
Returns A tf.distribute.InputIterator which returns inputs for each step of the computation. The user should call initialize on the returned iterator.
make_input_fn_iterator View source
make_input_fn_iterator(
input_fn, replication_mode=tf.distribute.InputReplicationMode.PER_WORKER
)
Returns an iterator split across replicas created from an input function. DEPRECATED: This method is not available in TF 2.x. The input_fn should take a tf.distribute.InputContext object where information about batching and input sharding can be accessed:
def input_fn(input_context):
batch_size = input_context.get_per_replica_batch_size(global_batch_size)
d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size)
return d.shard(input_context.num_input_pipelines,
input_context.input_pipeline_id)
with strategy.scope():
iterator = strategy.make_input_fn_iterator(input_fn)
replica_results = strategy.experimental_run(replica_fn, iterator)
The tf.data.Dataset returned by input_fn should have a per-replica batch size, which may be computed using input_context.get_per_replica_batch_size.
Args
input_fn A function taking a tf.distribute.InputContext object and returning a tf.data.Dataset.
replication_mode an enum value of tf.distribute.InputReplicationMode. Only PER_WORKER is supported currently, which means there will be a single call to input_fn per worker. Replicas will dequeue from the local tf.data.Dataset on their worker.
Returns An iterator object that should first be .initialize()-ed. It may then either be passed to strategy.experimental_run(), or you can call iterator.get_next() to get the next value to pass to strategy.extended.call_for_each_replica().
reduce View source
reduce(
reduce_op, value, axis=None
)
Reduce value across replicas and return result on current device.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
total = strategy.reduce("SUM", per_replica_result, axis=None)
total
<tf.Tensor: shape=(), dtype=int32, numpy=1>
To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs:
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
# Check devices on which per replica result is:
strategy.experimental_local_results(per_replica_result)[0].device
# /job:localhost/replica:0/task:0/device:GPU:0
strategy.experimental_local_results(per_replica_result)[1].device
# /job:localhost/replica:0/task:0/device:GPU:1
total = strategy.reduce("SUM", per_replica_result, axis=None)
# Check device on which reduced result is:
total.device
# /job:localhost/replica:0/task:0/device:CPU:0
This API is typically used for aggregating the results returned from different replicas, for reporting etc. For example, loss computed from different replicas can be averaged using this API before printing.
Note: The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is the CPU of each worker.
There are a number of different tf.distribute APIs for reducing values across replicas:
tf.distribute.ReplicaContext.all_reduce: This differs from Strategy.reduce in that it is for replica context and does not copy the results to the host device. all_reduce should be typically used for reductions inside the training step such as gradients.
tf.distribute.StrategyExtended.reduce_to and tf.distribute.StrategyExtended.batch_reduce_to: These APIs are more advanced versions of Strategy.reduce as they allow customizing the destination of the result. They are also called in cross replica context. What should axis be? Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. With axis=None, reduce will aggregate only across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss).
strategy.reduce("sum", per_replica_result, axis=None)
Sometimes, you will want to aggregate across both the global batch and all replicas. You can get this behavior by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7.
strategy.reduce("sum", per_replica_result, axis=0)
If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. Contrast this with computing reduce_mean to get a scalar value on each replica and then using this function to average those means, which will weigh some values 1/8 and others 1/4.
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with OneDeviceStrategy or default strategy.
axis specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
Returns A Tensor.
run View source
run(
fn, args=(), kwargs=None, options=None
)
Invokes fn on each replica, with the given arguments. This method is the primary way to distribute your computation with a tf.distribute object. It invokes fn on each replica. If args or kwargs have tf.distribute.DistributedValues, such as those produced by a tf.distribute.DistributedDataset from tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function, when fn is executed on a particular replica, it will be executed with the component of tf.distribute.DistributedValues that corresponds to that replica. fn is invoked under a replica context. fn may call tf.distribute.get_replica_context() to access members such as all_reduce. Please see the module-level docstring of tf.distribute for the concept of replica context. All arguments in args or kwargs can either be Python values or a nested structure of tensors, e.g. a list of tensors, in which case args and kwargs will be passed to the fn invoked on each replica; or they can be tf.distribute.DistributedValues containing tensors or composite tensors, i.e. tf.compat.v1.TensorInfo.CompositeTensor, in which case each fn call will get the component of a tf.distribute.DistributedValues corresponding to its replica. Key Point: Depending on the implementation of tf.distribute.Strategy and whether eager execution is enabled, fn may be called one or more times. If fn is annotated with tf.function or tf.distribute.Strategy.run is called inside a tf.function (eager execution is disabled inside a tf.function by default), fn is called once per replica to generate a TensorFlow graph, which will then be reused for execution with new inputs. Otherwise, if eager execution is enabled, fn will be called once per replica every step just like regular Python code. Example usage: Constant tensor input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
tensor_input = tf.constant(3.0)
@tf.function
def replica_fn(input):
return input*2.0
result = strategy.run(replica_fn, args=(tensor_input,))
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>,
1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>
}
DistributedValues input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def run():
def value_fn(value_context):
return value_context.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn2(input):
return input*2
return strategy.run(replica_fn2, args=(distributed_values,))
result = run()
result
<tf.Tensor: shape=(), dtype=int32, numpy=4>
Use tf.distribute.ReplicaContext to allreduce values.
strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"])
@tf.function
def run():
def value_fn(value_context):
return tf.constant(value_context.replica_id_in_sync_group)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn(input):
return tf.distribute.get_replica_context().all_reduce("sum", input)
return strategy.run(replica_fn, args=(distributed_values,))
result = run()
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=int32, numpy=1>,
1: <tf.Tensor: shape=(), dtype=int32, numpy=1>
}
Args
fn The function to run on each replica.
args Optional positional arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
kwargs Optional keyword arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
options An optional instance of tf.distribute.RunOptions specifying the options to run fn.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be tf.distribute.DistributedValues, Tensor objects, or Tensors (for example, if running on a single replica).
scope View source
scope()
Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
mirrored_variable = tf.Variable(1.)
mirrored_variable
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
}
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
What happens when Strategy.scope is entered?
strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy. Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts. Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope. In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker.
Note: Entering a scope does not automatically distribute a computation, except in the case of high level training framework like keras model.fit. If you're not using model.fit, you need to use strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial.
What should be in scope and what should be outside? There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those APIs either inside or outside the scope is OK). Anything that creates variables that should be distributed variables must be created in strategy.scope. This can be done either by directly putting it in scope, or by relying on another API like strategy.run or model.fit to enter it for you. Any variable that is created outside the scope will not be distributed and may have performance implications. Common things that create variables in TF are models, optimizers, and metrics; these should always be created inside the scope. Another source of variable creation can be a checkpoint restore, when variables are created lazily. Note that any variable created inside a strategy captures the strategy information, so reading and writing to these variables outside strategy.scope also works seamlessly, without the user having to enter the scope. Some strategy APIs (such as strategy.run and strategy.reduce) that require being in a strategy's scope enter the scope for you automatically, which means that when using those APIs you don't need to enter the scope yourself. When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high-level training framework methods such as model.compile, model.fit, etc. are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training. See the detailed example in the distributed keras tutorial. Note that simply calling model(..) is not affected - only high-level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope. The following can be either inside or outside the scope: creating the input datasets; defining tf.functions that represent your training step; saving APIs such as tf.saved_model.save (loading creates variables, so that should go inside the scope if you want to train the model in a distributed way); and checkpoint saving (as mentioned above, checkpoint.restore may sometimes need to be inside the scope if it creates variables).
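As a rough illustration of these rules (a hedged sketch; the layer sizes and dataset below are arbitrary examples, not part of the API):
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
with strategy.scope():
  # Variable-creating objects - model, optimizer, metrics - go inside.
  model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
  optimizer = tf.keras.optimizers.SGD()
  metric = tf.keras.metrics.Mean()
# Input datasets and step tf.functions can be created outside the scope.
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(8).batch(2)
dist_dataset = strategy.experimental_distribute_dataset(dataset)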
Returns A context manager.
update_config_proto View source
update_config_proto(
config_proto
)
Returns a copy of config_proto modified for use with this strategy. DEPRECATED: This method is not available in TF 2.x. The updated config contains settings needed to run this strategy, e.g. configuration to run collective ops, or device filters to improve distributed training performance.
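A minimal TF 1.x sketch of how this method is typically used (assuming strategy is an instance of this class and graph-mode session execution):
config = tf.compat.v1.ConfigProto(allow_soft_placement=True)
new_config = strategy.update_config_proto(config)
sess = tf.compat.v1.Session(config=new_config)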
Args
config_proto a tf.ConfigProto object.
Returns The updated copy of the config_proto. | tensorflow.compat.v1.distribute.mirroredstrategy |
tf.compat.v1.distribute.OneDeviceStrategy A distribution strategy for running on a single device. Inherits From: Strategy
tf.compat.v1.distribute.OneDeviceStrategy(
device
)
Using this strategy will place any variables created in its scope on the specified device. Input distributed through this strategy will be prefetched to the specified device. Moreover, any functions called via strategy.run will also be placed on the specified device. Typical usage of this strategy could be testing your code with the tf.distribute.Strategy API before switching to other strategies which actually distribute to multiple devices/machines. For example: tf.enable_eager_execution()
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
with strategy.scope():
v = tf.Variable(1.0)
print(v.device) # /job:localhost/replica:0/task:0/device:GPU:0
def step_fn(x):
return x * 2
result = 0
for i in range(10):
result += strategy.run(step_fn, args=(i,))
print(result) # 90
Args
device Device string identifier for the device on which the variables should be placed. See class docs for more details on how the device is used. Examples: "/cpu:0", "/gpu:0", "/device:CPU:0", "/device:GPU:0"
Attributes
cluster_resolver Returns the cluster resolver associated with this strategy. In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy(), there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set the relevant attribute, or override this property; otherwise, None is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None. The tf.distribute.cluster_resolver.ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For example,
os.environ['TF_CONFIG'] = json.dumps({
'cluster': {
'worker': ["localhost:12345", "localhost:23456"],
'ps': ["localhost:34567"]
},
'task': {'type': 'worker', 'index': 0}
})
# This implicitly uses TF_CONFIG for the cluster and current task info.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
...
if strategy.cluster_resolver.task_type == 'worker':
# Perform something that's only applicable on workers. Since we set this
# as a worker above, this block will run on this particular instance.
elif strategy.cluster_resolver.task_type == 'ps':
# Perform something that's only applicable on parameter servers. Since we
# set this as a worker above, this block will not run on this particular
# instance.
For more information, please see tf.distribute.cluster_resolver.ClusterResolver's API docstring.
extended tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync Returns number of replicas over which gradients are aggregated. Methods distribute_datasets_from_function View source
distribute_datasets_from_function(
dataset_fn, options=None
)
Distributes tf.data.Dataset instances created by calls to dataset_fn. The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The dataset_fn should take a tf.distribute.InputContext instance where information about batching and input replication can be accessed. You can use the element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. See tf.distribute.DistributedDataset.element_spec for an example. Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. This may be computed using input_context.get_per_replica_batch_size.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
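As a concrete sketch of such a dataset_fn (the global batch size and the range dataset are illustrative assumptions):
global_batch_size = 8
def dataset_fn(input_context):
  # Batch by the per-replica batch size, not the global batch size.
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  dataset = tf.data.Dataset.range(64)
  # Shard manually across input pipelines, then batch.
  dataset = dataset.shard(input_context.num_input_pipelines,
                          input_context.input_pipeline_id)
  return dataset.batch(batch_size)
dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)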
Args
dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_distribute_dataset View source
experimental_distribute_dataset(
dataset, options=None
)
Creates tf.distribute.DistributedDataset from tf.data.Dataset. The returned tf.distribute.DistributedDataset can be iterated over similarly to regular datasets. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset. You can only create an iterator or examine the tf.TypeSpec of the data generated by it. See API docs of tf.distribute.DistributedDataset to learn more. The following is an example:
global_batch_size = 2
# Passing the devices is optional.
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
# Create a dataset
dataset = tf.data.Dataset.range(4).batch(global_batch_size)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)
@tf.function
def replica_fn(input):
return input*2
result = []
# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
# process dataset elements
result.append(strategy.run(replica_fn, args=(x,)))
print(result)
[PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>
}, PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>
}]
Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, dataset is batched by global_batch_size, and calling experimental_distribute_dataset on it rebatches dataset to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. x is a tf.distribute.DistributedValues containing data for all replicas, and each replica gets data of the new batch size. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica. Sharding covers autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e. when you use tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right tf.data.experimental.AutoShardPolicy is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding.
Note: for autosharding across multiple workers, the default mode is tf.data.experimental.AutoShardPolicy.AUTO. This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have less than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the tf.data.experimental.DistributeOptions.auto_shard_policy to be tf.data.experimental.AutoShardPolicy.OFF.
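For example, cross-worker autosharding can be disabled with dataset options before distributing (a sketch; the dataset variable is a placeholder for your own dataset):
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.OFF)
dataset = dataset.with_options(options)
dist_dataset = strategy.experimental_distribute_dataset(dataset)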
By default, this method adds a prefetch transformation at the end of the user provided tf.data.Dataset instance. The buffer_size argument of the prefetch transformation is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.distribute_datasets_from_function instead, which does not do any automatic batching or sharding for you.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset tf.data.Dataset that will be sharded across all replicas using the rules stated above.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_local_results View source
experimental_local_results(
value
)
Returns the list of all local per-replica values contained in value.
Note: This only returns values on the worker initiated by this client. When using a tf.distribute.Strategy like tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker will be its own client, and this function will only return values computed on that worker.
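A minimal sketch of unpacking a per-replica value into a plain tuple (assumes 2 local replicas):
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
  return tf.distribute.get_replica_context().replica_id_in_sync_group
per_replica = strategy.run(step_fn)
local_values = strategy.experimental_local_results(per_replica)
# local_values is a tuple with one tensor per local replica, e.g. (0, 1).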
Args
value A value returned by experimental_run(), run(), extended.call_for_each_replica(), or a variable created in scope.
Returns A tuple of values contained in value. If value represents a single value, this returns (value,).
experimental_make_numpy_dataset View source
experimental_make_numpy_dataset(
numpy_input, session=None
)
Makes a tf.data.Dataset for input provided via a numpy array. This avoids adding numpy_input as a large constant in the graph, and copies the data to the machine or machines that will be processing the input. Note that you will likely need to use tf.distribute.Strategy.experimental_distribute_dataset with the returned dataset to further distribute it with the strategy. Example: numpy_input = np.ones([10], dtype=np.float32)
dataset = strategy.experimental_make_numpy_dataset(numpy_input)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
Args
numpy_input A nest of NumPy input arrays that will be converted into a dataset. Note that lists of Numpy arrays are stacked, as that is normal tf.data.Dataset behavior.
session (TensorFlow v1.x graph execution only) A session used for initialization.
Returns A tf.data.Dataset representing numpy_input.
experimental_run View source
experimental_run(
fn, input_iterator=None
)
Runs ops in fn on each replica, with inputs from input_iterator. DEPRECATED: This method is not available in TF 2.x. Please switch to using run instead. When eager execution is enabled, executes ops specified by fn on each replica. Otherwise, builds a graph to execute the ops on each replica. Each replica will take a single, different input from the inputs provided by one get_next call on the input iterator. fn may call tf.distribute.get_replica_context() to access members such as replica_id_in_sync_group. Key Point: Depending on the tf.distribute.Strategy implementation being used, and whether eager execution is enabled, fn may be called one or more times (once for each replica).
Args
fn The function to run. The inputs to the function must match the outputs of input_iterator.get_next(). The output must be a tf.nest of Tensors.
input_iterator (Optional) input iterator from which the inputs are taken.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be PerReplica (if the values are unsynchronized), Mirrored (if the values are kept in sync), or Tensor (if running on a single replica).
make_dataset_iterator View source
make_dataset_iterator(
dataset
)
Makes an iterator for input provided via dataset. DEPRECATED: This method is not available in TF 2.x. Data from the given dataset will be distributed evenly across all the compute replicas. We will assume that the input dataset is batched by the global batch size. With this assumption, we will make a best effort to divide each batch across all the replicas (one or more workers). If this effort fails, an error will be thrown, and the user should instead use make_input_fn_iterator which provides more control to the user, and does not try to divide a batch across replicas. The user could also use make_input_fn_iterator if they want to customize which input is fed to which replica/worker etc.
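A rough TF 1.x sketch (hedged; step_fn is a hypothetical replica function, and the dataset must be batched by the global batch size):
dataset = tf.compat.v1.data.Dataset.range(8).batch(4)
iterator = strategy.make_dataset_iterator(dataset)
with tf.compat.v1.Session() as sess:
  sess.run(iterator.initialize())
  # The iterator can then feed strategy.experimental_run(step_fn, iterator).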
Args
dataset tf.data.Dataset that will be distributed evenly across all replicas.
Returns A tf.distribute.InputIterator which returns inputs for each step of the computation. The user should call initialize on the returned iterator.
make_input_fn_iterator View source
make_input_fn_iterator(
input_fn, replication_mode=tf.distribute.InputReplicationMode.PER_WORKER
)
Returns an iterator split across replicas created from an input function. DEPRECATED: This method is not available in TF 2.x. The input_fn should take a tf.distribute.InputContext object where information about batching and input sharding can be accessed: def input_fn(input_context):
batch_size = input_context.get_per_replica_batch_size(global_batch_size)
d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size)
return d.shard(input_context.num_input_pipelines,
input_context.input_pipeline_id)
with strategy.scope():
iterator = strategy.make_input_fn_iterator(input_fn)
replica_results = strategy.experimental_run(replica_fn, iterator)
The tf.data.Dataset returned by input_fn should have a per-replica batch size, which may be computed using input_context.get_per_replica_batch_size.
Args
input_fn A function taking a tf.distribute.InputContext object and returning a tf.data.Dataset.
replication_mode an enum value of tf.distribute.InputReplicationMode. Only PER_WORKER is supported currently, which means there will be a single call to input_fn per worker. Replicas will dequeue from the local tf.data.Dataset on their worker.
Returns An iterator object that should first be .initialize()-ed. It may then either be passed to strategy.experimental_run(), or you can call iterator.get_next() to get the next value to pass to strategy.extended.call_for_each_replica().
reduce View source
reduce(
reduce_op, value, axis=None
)
Reduce value across replicas and return result on current device.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
total = strategy.reduce("SUM", per_replica_result, axis=None)
total
<tf.Tensor: shape=(), dtype=int32, numpy=1>
To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs: strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
# Check devices on which per replica result is:
strategy.experimental_local_results(per_replica_result)[0].device
# /job:localhost/replica:0/task:0/device:GPU:0
strategy.experimental_local_results(per_replica_result)[1].device
# /job:localhost/replica:0/task:0/device:GPU:1
total = strategy.reduce("SUM", per_replica_result, axis=None)
# Check device on which reduced result is:
total.device
# /job:localhost/replica:0/task:0/device:CPU:0
This API is typically used for aggregating the results returned from different replicas, for reporting etc. For example, loss computed from different replicas can be averaged using this API before printing.
Note: The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is the CPU of each worker.
There are a number of different tf.distribute APIs for reducing values across replicas:
tf.distribute.ReplicaContext.all_reduce: This differs from Strategy.reduce in that it is for replica context and does not copy the results to the host device. all_reduce should be typically used for reductions inside the training step such as gradients.
tf.distribute.StrategyExtended.reduce_to and tf.distribute.StrategyExtended.batch_reduce_to: These APIs are more advanced versions of Strategy.reduce as they allow customizing the destination of the result. They are also called in the cross-replica context. What should axis be? Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. With axis=None, reduce will aggregate only across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss). strategy.reduce("sum", per_replica_result, axis=None)
Sometimes, you will want to aggregate across both the global batch and all replicas. You can get this behavior by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7. strategy.reduce("sum", per_replica_result, axis=0)
If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. Contrast this with first computing reduce_mean to get a scalar value on each replica, and then using this function to average those means: that would weigh some values 1/8 and others 1/4.
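To make the partial-batch arithmetic concrete, here is a hedged sketch (the uneven per-replica shapes below are illustrative assumptions, assuming 2 replicas):
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
  # Replica 0 holds 4 examples, replica 1 holds 2: a partial global batch of 6.
  return tf.ones([4 - 2 * ctx.replica_id_in_sync_group])
per_replica = strategy.experimental_distribute_values_from_function(value_fn)
# axis=0 sums all 6 elements and divides by 6, the true number of examples.
mean = strategy.reduce(tf.distribute.ReduceOp.MEAN, per_replica, axis=0)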
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with OneDeviceStrategy or default strategy.
axis specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
Returns A Tensor.
run View source
run(
fn, args=(), kwargs=None, options=None
)
Invokes fn on each replica, with the given arguments. This method is the primary way to distribute your computation with a tf.distribute object. It invokes fn on each replica. If args or kwargs have tf.distribute.DistributedValues, such as those produced by a tf.distribute.DistributedDataset from tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function, when fn is executed on a particular replica, it will be executed with the component of tf.distribute.DistributedValues that corresponds to that replica. fn is invoked under a replica context. fn may call tf.distribute.get_replica_context() to access members such as all_reduce. Please see the module-level docstring of tf.distribute for the concept of replica context. All arguments in args or kwargs should either be Python values or a nested structure of tensors (e.g. a list of tensors), in which case args and kwargs will be passed to the fn invoked on each replica; or they can be tf.distribute.DistributedValues containing tensors or composite tensors, i.e. tf.compat.v1.TensorInfo.CompositeTensor, in which case each fn call will get the component of a tf.distribute.DistributedValues corresponding to its replica. Key Point: Depending on the implementation of tf.distribute.Strategy and whether eager execution is enabled, fn may be called one or more times. If fn is annotated with tf.function or tf.distribute.Strategy.run is called inside a tf.function (eager execution is disabled inside a tf.function by default), fn is called once per replica to generate a TensorFlow graph, which will then be reused for execution with new inputs. Otherwise, if eager execution is enabled, fn will be called once per replica every step, just like regular Python code. Example usage: Constant tensor input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
tensor_input = tf.constant(3.0)
@tf.function
def replica_fn(input):
return input*2.0
result = strategy.run(replica_fn, args=(tensor_input,))
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>,
1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>
}
DistributedValues input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def run():
def value_fn(value_context):
return value_context.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn2(input):
return input*2
return strategy.run(replica_fn2, args=(distributed_values,))
result = run()
result
<tf.Tensor: shape=(), dtype=int32, numpy=4>
Use tf.distribute.ReplicaContext to allreduce values.
strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"])
@tf.function
def run():
def value_fn(value_context):
return tf.constant(value_context.replica_id_in_sync_group)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn(input):
return tf.distribute.get_replica_context().all_reduce("sum", input)
return strategy.run(replica_fn, args=(distributed_values,))
result = run()
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=int32, numpy=1>,
1: <tf.Tensor: shape=(), dtype=int32, numpy=1>
}
Args
fn The function to run on each replica.
args Optional positional arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
kwargs Optional keyword arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
options An optional instance of tf.distribute.RunOptions specifying the options to run fn.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be tf.distribute.DistributedValues, Tensor objects, or Tensors (for example, if running on a single replica).
scope View source
scope()
Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
mirrored_variable = tf.Variable(1.)
mirrored_variable
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
}
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
What happens when Strategy.scope is entered?
strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy. Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts. Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope. In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker.
Note: Entering a scope does not automatically distribute a computation, except in the case of high level training framework like keras model.fit. If you're not using model.fit, you need to use strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial.
What should be in scope and what should be outside? There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those APIs either inside or outside the scope is OK). Anything that creates variables that should be distributed variables must be created in strategy.scope. This can be done either by directly putting it in scope, or by relying on another API like strategy.run or model.fit to enter it for you. Any variable that is created outside the scope will not be distributed and may have performance implications. Common things that create variables in TF are models, optimizers, and metrics; these should always be created inside the scope. Another source of variable creation can be a checkpoint restore, when variables are created lazily. Note that any variable created inside a strategy captures the strategy information, so reading and writing to these variables outside strategy.scope also works seamlessly, without the user having to enter the scope. Some strategy APIs (such as strategy.run and strategy.reduce) that require being in a strategy's scope enter the scope for you automatically, which means that when using those APIs you don't need to enter the scope yourself. When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high-level training framework methods such as model.compile, model.fit, etc. are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training. See the detailed example in the distributed keras tutorial. Note that simply calling model(..) is not affected - only high-level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope. The following can be either inside or outside the scope: creating the input datasets; defining tf.functions that represent your training step; saving APIs such as tf.saved_model.save (loading creates variables, so that should go inside the scope if you want to train the model in a distributed way); and checkpoint saving (as mentioned above, checkpoint.restore may sometimes need to be inside the scope if it creates variables).
Returns A context manager.
update_config_proto View source
update_config_proto(
config_proto
)
Returns a copy of config_proto modified for use with this strategy. DEPRECATED: This method is not available in TF 2.x. The updated config contains settings needed to run this strategy, e.g. configuration to run collective ops, or device filters to improve distributed training performance.
Args
config_proto a tf.ConfigProto object.
Returns The updated copy of the config_proto. | tensorflow.compat.v1.distribute.onedevicestrategy |
tf.compat.v1.distribute.ReplicaContext A class with a collection of APIs that can be called in a replica context.
tf.compat.v1.distribute.ReplicaContext(
strategy, replica_id_in_sync_group
)
You can use tf.distribute.get_replica_context to get an instance of ReplicaContext; that function can only be called inside the function passed to tf.distribute.Strategy.run.
strategy = tf.distribute.MirroredStrategy(['GPU:0', 'GPU:1'])
def func():
replica_context = tf.distribute.get_replica_context()
return replica_context.replica_id_in_sync_group
strategy.run(func)
PerReplica:{
0: <tf.Tensor: shape=(), dtype=int32, numpy=0>,
1: <tf.Tensor: shape=(), dtype=int32, numpy=1>
}
Args
strategy A tf.distribute.Strategy.
replica_id_in_sync_group An integer, a Tensor or None. Prefer an integer whenever possible to avoid issues with nested tf.function. It accepts a Tensor only to be compatible with tpu.replicate.
Attributes
devices Returns the devices this replica is to be executed on, as a tuple of strings. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please avoid relying on devices property.
Note: For tf.distribute.MirroredStrategy and tf.distribute.experimental.MultiWorkerMirroredStrategy, this returns a nested list of device strings, e.g., [["GPU:0"]].
num_replicas_in_sync Returns number of replicas that are kept in sync.
replica_id_in_sync_group Returns the id of the replica. This identifies the replica among all replicas that are kept in sync. The value of the replica id can range from 0 to tf.distribute.ReplicaContext.num_replicas_in_sync - 1.
Note: This is not guaranteed to be the same ID as the XLA replica ID used for low-level operations such as collective_permute.
strategy The current tf.distribute.Strategy object. Methods all_reduce View source
all_reduce(
reduce_op, value, options=None
)
All-reduces value across all replicas.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
ctx = tf.distribute.get_replica_context()
value = tf.identity(1.)
return ctx.all_reduce(tf.distribute.ReduceOp.SUM, value)
strategy.experimental_local_results(strategy.run(step_fn))
(<tf.Tensor: shape=(), dtype=float32, numpy=2.0>,
<tf.Tensor: shape=(), dtype=float32, numpy=2.0>)
It supports batched operations. You can pass a list of values and it attempts to batch them when possible. You can also specify options to indicate the desired batching behavior, e.g. batch the values into multiple packs so that they can better overlap with computations.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
ctx = tf.distribute.get_replica_context()
value1 = tf.identity(1.)
value2 = tf.identity(2.)
return ctx.all_reduce(tf.distribute.ReduceOp.SUM, [value1, value2])
strategy.experimental_local_results(strategy.run(step_fn))
([PerReplica:{
0: <tf.Tensor: shape=(), dtype=float32, numpy=2.0>,
1: <tf.Tensor: shape=(), dtype=float32, numpy=2.0>
}, PerReplica:{
0: <tf.Tensor: shape=(), dtype=float32, numpy=4.0>,
1: <tf.Tensor: shape=(), dtype=float32, numpy=4.0>
}],)
Note that all replicas need to participate in the all-reduce, otherwise this operation hangs. Note that if there are multiple all-reduces, they need to execute in the same order on all replicas. Dispatching all-reduce based on conditions is usually error-prone. This API currently can only be called in the replica context. Other variants to reduce values across replicas are:
tf.distribute.StrategyExtended.reduce_to: the reduce and all-reduce API in the cross-replica context.
tf.distribute.StrategyExtended.batch_reduce_to: the batched reduce and all-reduce API in the cross-replica context.
tf.distribute.Strategy.reduce: a more convenient method to reduce to the host in cross-replica context.
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value a nested structure of tf.Tensor which tf.nest.flatten accepts. The structure and the shapes of the tf.Tensor need to be same on all replicas.
options a tf.distribute.experimental.CommunicationOptions. Options to perform collective operations. This overrides the default options if the tf.distribute.Strategy takes one in the constructor. See tf.distribute.experimental.CommunicationOptions for details of the options.
Returns A nested structure of tf.Tensor with the reduced values. The structure is the same as value.
merge_call View source
merge_call(
merge_fn, args=(), kwargs=None
)
Merge args across replicas and run merge_fn in a cross-replica context. This allows communication and coordination when there are multiple calls to the step_fn triggered by a call to strategy.run(step_fn, ...). See tf.distribute.Strategy.run for an explanation. If not inside a distributed scope, this is equivalent to: strategy = tf.distribute.get_strategy()
with cross-replica-context(strategy):
return merge_fn(strategy, *args, **kwargs)
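A minimal sketch of merge_call (hedged; the merge_fn here simply sums the local per-replica inputs using experimental_local_results, one possible cross-replica computation):
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def merge_fn(strategy, per_replica_value):
  # Runs once in the cross-replica context; inputs arrive as PerReplica.
  return tf.add_n(strategy.experimental_local_results(per_replica_value))
def step_fn():
  ctx = tf.distribute.get_replica_context()
  return ctx.merge_call(merge_fn, args=(tf.identity(1.),))
strategy.run(step_fn)  # each replica receives the merged sum, 2.0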
Args
merge_fn Function that joins arguments from threads that are given as PerReplica. It accepts a tf.distribute.Strategy object as the first argument.
args List or tuple with positional per-thread arguments for merge_fn.
kwargs Dict with keyword per-thread arguments for merge_fn.
Returns The return value of merge_fn, except for PerReplica values which are unpacked. | tensorflow.compat.v1.distribute.replicacontext |
tf.compat.v1.distribute.Strategy A list of devices with a state & compute distribution policy.
tf.compat.v1.distribute.Strategy(
extended
)
See the guide for overview and examples.
Note: Not all tf.distribute.Strategy implementations currently support TensorFlow's partitioned variables (where a single variable is split across multiple devices).
Attributes
cluster_resolver Returns the cluster resolver associated with this strategy. In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy(), there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set the relevant attribute, or override this property; otherwise, None is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None. The tf.distribute.cluster_resolver.ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For example,
os.environ['TF_CONFIG'] = json.dumps({
'cluster': {
'worker': ["localhost:12345", "localhost:23456"],
'ps': ["localhost:34567"]
},
'task': {'type': 'worker', 'index': 0}
})
# This implicitly uses TF_CONFIG for the cluster and current task info.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
...
if strategy.cluster_resolver.task_type == 'worker':
# Perform something that's only applicable on workers. Since we set this
# as a worker above, this block will run on this particular instance.
elif strategy.cluster_resolver.task_type == 'ps':
# Perform something that's only applicable on parameter servers. Since we
# set this as a worker above, this block will not run on this particular
# instance.
For more information, please see tf.distribute.cluster_resolver.ClusterResolver's API docstring.
extended tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync Returns number of replicas over which gradients are aggregated. Methods distribute_datasets_from_function View source
distribute_datasets_from_function(
dataset_fn, options=None
)
Distributes tf.data.Dataset instances created by calls to dataset_fn. The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The dataset_fn should take a tf.distribute.InputContext instance where information about batching and input replication can be accessed. You can use the element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. See tf.distribute.DistributedDataset.element_spec for an example. Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. This may be computed using input_context.get_per_replica_batch_size.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_distribute_dataset View source
experimental_distribute_dataset(
dataset, options=None
)
Creates tf.distribute.DistributedDataset from tf.data.Dataset. The returned tf.distribute.DistributedDataset can be iterated over similarly to regular datasets. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset. You can only create an iterator or examine the tf.TypeSpec of the data generated by it. See API docs of tf.distribute.DistributedDataset to learn more. The following is an example:
global_batch_size = 2
# Passing the devices is optional.
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
# Create a dataset
dataset = tf.data.Dataset.range(4).batch(global_batch_size)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)
@tf.function
def replica_fn(input):
return input*2
result = []
# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
# process dataset elements
result.append(strategy.run(replica_fn, args=(x,)))
print(result)
[PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>
}, PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>
}]
Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, dataset is batched by global_batch_size, and calling experimental_distribute_dataset on it rebatches dataset to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. x is a tf.distribute.DistributedValues containing data for all replicas, and each replica gets data of the new batch size. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica. Sharding covers autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e. when you use tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right tf.data.experimental.AutoShardPolicy is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding.
Note: for autosharding across multiple workers, the default mode is tf.data.experimental.AutoShardPolicy.AUTO. This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have less than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the tf.data.experimental.DistributeOptions.auto_shard_policy to be tf.data.experimental.AutoShardPolicy.OFF.
By default, this method adds a prefetch transformation at the end of the user provided tf.data.Dataset instance. The buffer_size argument of the prefetch transformation is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.distribute_datasets_from_function instead, which does not do any automatic batching or sharding for you.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset tf.data.Dataset that will be sharded across all replicas using the rules stated above.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_local_results View source
experimental_local_results(
value
)
Returns the list of all local per-replica values contained in value.
Note: This only returns values on the worker initiated by this client. When using a tf.distribute.Strategy like tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker will be its own client, and this function will only return values computed on that worker.
Args
value A value returned by experimental_run(), run(), extended.call_for_each_replica(), or a variable created in scope.
Returns A tuple of values contained in value. If value represents a single value, this returns (value,).
experimental_make_numpy_dataset View source
experimental_make_numpy_dataset(
numpy_input, session=None
)
Makes a tf.data.Dataset for input provided via a numpy array. This avoids adding numpy_input as a large constant in the graph, and copies the data to the machine or machines that will be processing the input. Note that you will likely need to use tf.distribute.Strategy.experimental_distribute_dataset with the returned dataset to further distribute it with the strategy. Example: numpy_input = np.ones([10], dtype=np.float32)
dataset = strategy.experimental_make_numpy_dataset(numpy_input)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
Args
numpy_input A nest of NumPy input arrays that will be converted into a dataset. Note that lists of Numpy arrays are stacked, as that is normal tf.data.Dataset behavior.
session (TensorFlow v1.x graph execution only) A session used for initialization.
Returns A tf.data.Dataset representing numpy_input.
experimental_run View source
experimental_run(
fn, input_iterator=None
)
Runs ops in fn on each replica, with inputs from input_iterator. DEPRECATED: This method is not available in TF 2.x. Please switch to using run instead. When eager execution is enabled, executes ops specified by fn on each replica. Otherwise, builds a graph to execute the ops on each replica. Each replica will take a single, different input from the inputs provided by one get_next call on the input iterator. fn may call tf.distribute.get_replica_context() to access members such as replica_id_in_sync_group. Key Point: Depending on the tf.distribute.Strategy implementation being used, and whether eager execution is enabled, fn may be called one or more times (once for each replica).
Args
fn The function to run. The inputs to the function must match the outputs of input_iterator.get_next(). The output must be a tf.nest of Tensors.
input_iterator (Optional) input iterator from which the inputs are taken.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be PerReplica (if the values are unsynchronized), Mirrored (if the values are kept in sync), or Tensor (if running on a single replica).
make_dataset_iterator View source
make_dataset_iterator(
dataset
)
Makes an iterator for input provided via dataset. DEPRECATED: This method is not available in TF 2.x. Data from the given dataset will be distributed evenly across all the compute replicas. We will assume that the input dataset is batched by the global batch size. With this assumption, we will make a best effort to divide each batch across all the replicas (one or more workers). If this effort fails, an error will be thrown, and the user should instead use make_input_fn_iterator which provides more control to the user, and does not try to divide a batch across replicas. The user could also use make_input_fn_iterator if they want to customize which input is fed to which replica/worker etc.
Args
dataset tf.data.Dataset that will be distributed evenly across all replicas.
Returns A tf.distribute.InputIterator which returns inputs for each step of the computation. Users should call initialize on the returned iterator.
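A minimal sketch of the intended TF 1.x usage, given a strategy built as in the examples above (the dataset contents and replica_fn are illustrative assumptions):
# The dataset must be batched by the global batch size.
dataset = tf.data.Dataset.from_tensor_slices(tf.ones([64, 2])).batch(16)
iterator = strategy.make_dataset_iterator(dataset)
merged = strategy.experimental_run(replica_fn, iterator)  # replica_fn is user-defined
with tf.compat.v1.Session() as sess:
  sess.run(iterator.initialize())
  sess.run(merged)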
make_input_fn_iterator View source
make_input_fn_iterator(
input_fn, replication_mode=tf.distribute.InputReplicationMode.PER_WORKER
)
Returns an iterator split across replicas created from an input function. DEPRECATED: This method is not available in TF 2.x. The input_fn should take a tf.distribute.InputContext object where information about batching and input sharding can be accessed:
def input_fn(input_context):
batch_size = input_context.get_per_replica_batch_size(global_batch_size)
d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size)
return d.shard(input_context.num_input_pipelines,
input_context.input_pipeline_id)
with strategy.scope():
iterator = strategy.make_input_fn_iterator(input_fn)
replica_results = strategy.experimental_run(replica_fn, iterator)
The tf.data.Dataset returned by input_fn should have a per-replica batch size, which may be computed using input_context.get_per_replica_batch_size.
Args
input_fn A function taking a tf.distribute.InputContext object and returning a tf.data.Dataset.
replication_mode an enum value of tf.distribute.InputReplicationMode. Only PER_WORKER is supported currently, which means there will be a single call to input_fn per worker. Replicas will dequeue from the local tf.data.Dataset on their worker.
Returns An iterator object that should first be .initialize()-ed. It may then either be passed to strategy.experimental_run(), or you can call iterator.get_next() to get the next value to pass to strategy.extended.call_for_each_replica().
reduce View source
reduce(
reduce_op, value, axis=None
)
Reduce value across replicas and return result on current device.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
total = strategy.reduce("SUM", per_replica_result, axis=None)
total
<tf.Tensor: shape=(), dtype=int32, numpy=1>
To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs:
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
# Check devices on which per replica result is:
strategy.experimental_local_results(per_replica_result)[0].device
# /job:localhost/replica:0/task:0/device:GPU:0
strategy.experimental_local_results(per_replica_result)[1].device
# /job:localhost/replica:0/task:0/device:GPU:1
total = strategy.reduce("SUM", per_replica_result, axis=None)
# Check device on which reduced result is:
total.device
# /job:localhost/replica:0/task:0/device:CPU:0
This API is typically used for aggregating the results returned from different replicas, for reporting etc. For example, loss computed from different replicas can be averaged using this API before printing.
Note: The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For TPUStrategy, it is the first TPU host. For multi client MultiWorkerMirroredStrategy, this is CPU of each worker.
There are a number of different tf.distribute APIs for reducing values across replicas:
tf.distribute.ReplicaContext.all_reduce: This differs from Strategy.reduce in that it is for replica context and does not copy the results to the host device. all_reduce should be typically used for reductions inside the training step such as gradients.
tf.distribute.StrategyExtended.reduce_to and tf.distribute.StrategyExtended.batch_reduce_to: These APIs are more advanced versions of Strategy.reduce as they allow customizing the destination of the result. They are also called in cross replica context. What should axis be? Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. With axis=None, reduce will aggregate only across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss).
strategy.reduce("sum", per_replica_result, axis=None)
Sometimes, you will want to aggregate across both the global batch and all replicas. You can get this behavior by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7.
strategy.reduce("sum", per_replica_result, axis=0)
If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. Contrast this with computing reduce_mean on each replica to get a per-replica scalar and then using this function to average those means: that would weight some values by 1/8 and others by 1/4, rather than weighting every example equally.
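A short sketch making the axis behavior concrete (the per-replica values below are illustrative assumptions):
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
  # Replica 0 computes [0., 1., 2., 3.]; replica 1 computes [4., 5., 6., 7.].
  offset = tf.cast(
      tf.distribute.get_replica_context().replica_id_in_sync_group, tf.float32)
  return tf.constant([0., 1., 2., 3.]) + offset * 4.
per_replica_result = strategy.run(step_fn)
strategy.reduce("SUM", per_replica_result, axis=None)  # => [4., 6., 8., 10.]
strategy.reduce("SUM", per_replica_result, axis=0)     # => 28.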
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with OneDeviceStrategy or default strategy.
axis specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
Returns A Tensor.
run View source
run(
fn, args=(), kwargs=None, options=None
)
Invokes fn on each replica, with the given arguments. This method is the primary way to distribute your computation with a tf.distribute object. It invokes fn on each replica. If args or kwargs have tf.distribute.DistributedValues, such as those produced by a tf.distribute.DistributedDataset from tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function, then when fn is executed on a particular replica, it will be executed with the components of those tf.distribute.DistributedValues that correspond to that replica. fn is invoked under a replica context. fn may call tf.distribute.get_replica_context() to access members such as all_reduce. Please see the module-level docstring of tf.distribute for the concept of replica context. All arguments in args or kwargs should either be Python values or a nested structure of tensors, e.g. a list of tensors, in which case args and kwargs will be passed to the fn invoked on each replica. Alternatively, args or kwargs can be tf.distribute.DistributedValues containing tensors or composite tensors, i.e. tf.compat.v1.TensorInfo.CompositeTensor, in which case each fn call will get the component of the tf.distribute.DistributedValues corresponding to its replica. Key Point: Depending on the implementation of tf.distribute.Strategy and whether eager execution is enabled, fn may be called one or more times. If fn is annotated with tf.function, or if tf.distribute.Strategy.run is called inside a tf.function (eager execution is disabled inside a tf.function by default), fn is called once per replica to generate a TensorFlow graph, which is then reused for execution with new inputs. Otherwise, if eager execution is enabled, fn will be called once per replica every step, just like regular Python code. Example usage: Constant tensor input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
tensor_input = tf.constant(3.0)
@tf.function
def replica_fn(input):
return input*2.0
result = strategy.run(replica_fn, args=(tensor_input,))
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>,
1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>
}
DistributedValues input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def run():
def value_fn(value_context):
return value_context.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn2(input):
return input*2
return strategy.run(replica_fn2, args=(distributed_values,))
result = run()
result
<tf.Tensor: shape=(), dtype=int32, numpy=4>
Use tf.distribute.ReplicaContext to allreduce values.
strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"])
@tf.function
def run():
def value_fn(value_context):
return tf.constant(value_context.replica_id_in_sync_group)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn(input):
return tf.distribute.get_replica_context().all_reduce("sum", input)
return strategy.run(replica_fn, args=(distributed_values,))
result = run()
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=int32, numpy=1>,
1: <tf.Tensor: shape=(), dtype=int32, numpy=1>
}
Args
fn The function to run on each replica.
args Optional positional arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
kwargs Optional keyword arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
options An optional instance of tf.distribute.RunOptions specifying the options to run fn.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be a tf.distribute.DistributedValues or a Tensor object (for example, if running on a single replica).
scope View source
scope()
Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
mirrored_variable = tf.Variable(1.)
mirrored_variable
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
}
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
What happens when Strategy.scope is entered?
strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy.
Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation of cross-replica and replica contexts.
Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope.
In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker.
Note: Entering a scope does not automatically distribute a computation, except in the case of high-level training frameworks like Keras model.fit. If you're not using model.fit, you need to use the strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial.
What should be in scope and what should be outside? There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK).
Anything that creates variables that should be distributed variables must be called in strategy.scope. This can happen either by directly putting it in scope, or by relying on another API like strategy.run or model.fit to enter it for you. Any variable created outside scope will not be distributed and may have performance implications. Common things that create variables in TF are models, optimizers, and metrics; these should always be created inside the scope. Another source of variable creation can be a checkpoint restore, when variables are created lazily. Note that any variable created inside a strategy captures the strategy information, so reading and writing to these variables outside the strategy.scope can also work seamlessly, without the user having to enter the scope.
Some strategy APIs (such as strategy.run and strategy.reduce) which require being in a strategy's scope enter the scope for you automatically, which means that when using those APIs you don't need to enter the scope yourself.
When a tf.keras.Model is created inside a strategy.scope, the model captures this information. When high-level training framework methods such as model.compile or model.fit are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training. See the detailed example in the distributed Keras tutorial. Note that simply calling model(..) is not impacted; only high-level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope.
The following can be either inside or outside the scope: creating the input datasets; defining tf.functions that represent your training step; saving APIs such as tf.saved_model.save (loading creates variables, so that should go inside the scope if you want to train the model in a distributed way); and checkpoint saving (as mentioned above, checkpoint.restore may sometimes need to be inside scope if it creates variables).
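Putting these rules together, a minimal sketch (the model, optimizer and dataset choices are illustrative assumptions):
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
with strategy.scope():
  # Models, optimizers and metrics create variables, so build them in scope.
  model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
  model.compile(optimizer="sgd", loss="mse")
# Input datasets may be created outside the scope.
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(16).batch(4)
model.fit(dataset)  # model.fit enters the scope on your behalf.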
Returns A context manager.
update_config_proto View source
update_config_proto(
config_proto
)
Returns a copy of config_proto modified for use with this strategy. DEPRECATED: This method is not available in TF 2.x. The updated config contains settings needed to run the strategy, e.g. configuration to run collective ops, or device filters to improve distributed training performance.
Args
config_proto a tf.ConfigProto object.
Returns The updated copy of the config_proto. | tensorflow.compat.v1.distribute.strategy |
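A minimal sketch of the intended TF 1.x usage:
config = tf.compat.v1.ConfigProto()
# A copy with strategy-specific settings applied; `config` itself is unchanged.
updated_config = strategy.update_config_proto(config)
sess = tf.compat.v1.Session(config=updated_config)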
tf.compat.v1.distribute.StrategyExtended Additional APIs for algorithms that need to be distribution-aware. Inherits From: StrategyExtended
tf.compat.v1.distribute.StrategyExtended(
container_strategy
)
Note: For most usage of tf.distribute.Strategy, there should be no need to call these methods, since TensorFlow libraries (such as optimizers) already call these methods when needed on your behalf.
Some common use cases of functions on this page:
Locality
tf.distribute.DistributedValues can have the same locality as a distributed variable, which leads to a mirrored value residing on the same devices as the variable (as opposed to the compute devices). Such values may be passed to a call to tf.distribute.StrategyExtended.update to update the value of a variable. You may use tf.distribute.StrategyExtended.colocate_vars_with to give a variable the same locality as another variable. You may convert a "PerReplica" value to a variable's locality by using tf.distribute.StrategyExtended.reduce_to or tf.distribute.StrategyExtended.batch_reduce_to.
How to update a distributed variable
A distributed variable is a variable created on multiple devices. As discussed in the glossary, mirrored variables and SyncOnRead variables are two examples. The standard pattern for updating distributed variables is to:
1. In your function passed to tf.distribute.Strategy.run, compute a list of (update, variable) pairs. For example, the update might be a gradient of the loss with respect to the variable.
2. Switch to cross-replica mode by calling tf.distribute.get_replica_context().merge_call() with the updates and variables as arguments.
3. Call tf.distribute.StrategyExtended.reduce_to(VariableAggregation.SUM, t, v) (for one variable) or tf.distribute.StrategyExtended.batch_reduce_to (for a list of variables) to sum the updates.
4. Call tf.distribute.StrategyExtended.update(v) for each variable to update its value.
Steps 2 through 4 are done automatically by class tf.keras.optimizers.Optimizer if you call its tf.keras.optimizers.Optimizer.apply_gradients method in a replica context. In fact, a higher-level solution to update a distributed variable is to call assign on the variable, as you would with a regular tf.Variable. You can call the method in both replica context and cross-replica context. For a mirrored variable, calling assign in replica context requires you to specify the aggregation type in the variable constructor. In that case, the context switching and sync described in steps 2 through 4 are handled for you. If you call assign on a mirrored variable in cross-replica context, you can only assign a single value, or assign values from another mirrored variable or a mirrored tf.distribute.DistributedValues. For a SyncOnRead variable, in replica context, you can simply call assign on it and no aggregation happens under the hood. In cross-replica context, you can only assign a single value to a SyncOnRead variable. One example case is restoring from a checkpoint: if the aggregation type of the variable is tf.VariableAggregation.SUM, it is assumed that replica values were added before checkpointing, so at the time of restoring, the value is divided by the number of replicas and then assigned to each replica; if the aggregation type is tf.VariableAggregation.MEAN, the value is assigned to each replica directly.
Attributes
experimental_between_graph Whether the strategy uses between-graph replication or not. This is expected to return a constant value that will not be changed throughout its life cycle.
experimental_require_static_shapes Returns True if static shape is required; False otherwise.
experimental_should_init Whether initialization is needed.
parameter_devices Returns the tuple of all devices used to place variables.
should_checkpoint Whether checkpointing is needed.
should_save_summary Whether saving summaries is needed.
worker_devices Returns the tuple of all devices used for compute replica execution. Methods batch_reduce_to View source
batch_reduce_to(
reduce_op, value_destination_pairs, options=None
)
Combine multiple reduce_to calls into one for faster execution. Similar to reduce_to, but accepts a list of (value, destinations) pairs. It's more efficient than reducing each value separately. This API currently can only be called in cross-replica context. Other variants to reduce values across replicas are:
tf.distribute.StrategyExtended.reduce_to: the non-batch version of this API.
tf.distribute.ReplicaContext.all_reduce: the counterpart of this API in replica context. It supports both batched and non-batched all-reduce.
tf.distribute.Strategy.reduce: a more convenient method to reduce to the host in cross-replica context. See reduce_to for more information.
@tf.function
def step_fn(var):
def merge_fn(strategy, value, var):
# All-reduce the value. Note that `value` here is a
# `tf.distribute.DistributedValues`.
reduced = strategy.extended.batch_reduce_to(
tf.distribute.ReduceOp.SUM, [(value, var)])[0]
strategy.extended.update(var, lambda var, value: var.assign(value),
args=(reduced,))
value = tf.identity(1.)
tf.distribute.get_replica_context().merge_call(merge_fn,
args=(value, var))
def run(strategy):
with strategy.scope():
v = tf.Variable(0.)
strategy.run(step_fn, args=(v,))
return v
run(tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]))
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=2.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=2.0>
}
run(tf.distribute.experimental.CentralStorageStrategy(
compute_devices=["GPU:0", "GPU:1"], parameter_device="CPU:0"))
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=2.0>
run(tf.distribute.OneDeviceStrategy("GPU:0"))
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value_destination_pairs a sequence of (value, destinations) pairs. See tf.distribute.StrategyExtended.reduce_to for descriptions.
options a tf.distribute.experimental.CommunicationOptions. Options to perform collective operations. This overrides the default options if the tf.distribute.Strategy takes one in the constructor. See tf.distribute.experimental.CommunicationOptions for details of the options.
Returns A list of reduced values, one per pair in value_destination_pairs.
broadcast_to View source
broadcast_to(
tensor, destinations
)
Mirror a tensor on one device to all worker devices.
Args
tensor A Tensor value to broadcast.
destinations A mirrored variable or device string specifying the destination devices to copy tensor to.
Returns A value mirrored to destinations devices.
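A minimal sketch (the MirroredStrategy setup and the mirrored variable used as the destination are illustrative assumptions):
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
with strategy.scope():
  v = tf.Variable(tf.zeros([2]))  # A mirrored variable used as the destination.
# Copy the tensor onto every device `v` resides on:
mirrored = strategy.extended.broadcast_to(tf.constant([1., 2.]), destinations=v)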
call_for_each_replica View source
call_for_each_replica(
fn, args=(), kwargs=None
)
Run fn once per replica. fn may call tf.get_replica_context() to access methods such as replica_id_in_sync_group and merge_call(). merge_call() is used to communicate between the replicas and re-enter the cross-replica context. All replicas pause their execution when they encounter a merge_call() call. After that, the merge_fn function is executed. Its results are then unwrapped and given back to each replica call. After that, execution resumes until fn is complete or encounters another merge_call(). Example:
# Called once in "cross-replica" context.
def merge_fn(distribution, three_plus_replica_id):
# sum the values across replicas
return sum(distribution.experimental_local_results(three_plus_replica_id))
# Called once per replica in `distribution`, in a "replica" context.
def fn(three):
replica_ctx = tf.get_replica_context()
v = three + replica_ctx.replica_id_in_sync_group
# Computes the sum of the `v` values across all replicas.
s = replica_ctx.merge_call(merge_fn, args=(v,))
return s + v
with distribution.scope():
# in "cross-replica" context
...
merged_results = distribution.run(fn, args=[3])
# merged_results has the values from every replica execution of `fn`.
# This statement prints a list:
print(distribution.experimental_local_results(merged_results))
Args
fn function to run (will be run once per replica).
args Tuple or list with positional arguments for fn.
kwargs Dict with keyword arguments for fn.
Returns Merged return value of fn across all replicas.
colocate_vars_with View source
colocate_vars_with(
colocate_with_variable
)
Scope that controls which devices variables will be created on. No operations should be added to the graph inside this scope; it should only be used when creating variables (some implementations work by changing variable creation, others work by using a tf.compat.v1.colocate_with() scope). This may only be used inside self.scope(). Example usage:
with strategy.scope():
var1 = tf.Variable(...)
with strategy.extended.colocate_vars_with(var1):
# var2 and var3 will be created on the same device(s) as var1
var2 = tf.Variable(...)
var3 = tf.Variable(...)
def fn(v1, v2, v3):
# operates on v1 from var1, v2 from var2, and v3 from var3
# `fn` runs on every device `var1` is on, `var2` and `var3` will be there
# too.
strategy.extended.update(var1, fn, args=(var2, var3))
Args
colocate_with_variable A variable created in this strategy's scope(). Variables created while in the returned context manager will be on the same set of devices as colocate_with_variable.
Returns A context manager.
experimental_make_numpy_dataset View source
experimental_make_numpy_dataset(
numpy_input, session=None
)
Makes a dataset for input provided via a numpy array. This avoids adding numpy_input as a large constant in the graph, and copies the data to the machine or machines that will be processing the input.
Args
numpy_input A nest of NumPy input arrays that will be distributed evenly across all replicas. Note that lists of Numpy arrays are stacked, as that is normal tf.data.Dataset behavior.
session (TensorFlow v1.x graph execution only) A session used for initialization.
Returns A tf.data.Dataset representing numpy_input.
experimental_run_steps_on_iterator View source
experimental_run_steps_on_iterator(
fn, iterator, iterations=1, initial_loop_values=None
)
DEPRECATED: please use run instead. Run fn with input from iterator for iterations times. This method can be used to run a step function for training a number of times using input from a dataset.
Args
fn function to run using this distribution strategy. The function must have the following signature: def fn(context, inputs). context is an instance of MultiStepContext that will be passed when fn is run. context can be used to specify the outputs to be returned from fn by calling context.set_last_step_output. It can also be used to capture non tensor outputs by context.set_non_tensor_output. See MultiStepContext documentation for more information. inputs will have same type/structure as iterator.get_next(). Typically, fn will use call_for_each_replica method of the strategy to distribute the computation over multiple replicas.
iterator Iterator of a dataset that represents the input for fn. The caller is responsible for initializing the iterator as needed.
iterations (Optional) Number of iterations that fn should be run. Defaults to 1.
initial_loop_values (Optional) Initial values to be passed into the loop that runs fn. Defaults to None. (This argument is expected to be removed once there is a mechanism to infer the outputs of fn.)
Returns Returns the MultiStepContext object which has the following properties, among other things: run_op: An op that runs fn iterations times. last_step_outputs: A dictionary containing tensors set using context.set_last_step_output. Evaluating this returns the value of the tensors after the last iteration. non_tensor_outputs: A dictionary containing anything that was set by fn by calling context.set_non_tensor_output.
non_slot_devices View source
non_slot_devices(
var_list
)
Device(s) for non-slot variables. DEPRECATED: TF 1.x ONLY. This method returns non-slot devices where non-slot variables are placed. Users can create non-slot variables on these devices by using a block:
with tf.distribute.StrategyExtended.colocate_vars_with(tf.distribute.StrategyExtended.non_slot_devices(...)):
...
Args
var_list The list of variables being optimized, needed with the default tf.distribute.Strategy.
Returns A sequence of devices for non-slot variables.
read_var View source
read_var(
v
)
Reads the value of a variable. Returns the aggregate value of a replica-local variable, or the (read-only) value of any other variable.
Args
v A variable allocated within the scope of this tf.distribute.Strategy.
Returns A tensor representing the value of v, aggregated across replicas if necessary.
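A minimal sketch (the SyncOnRead variable setup is an illustrative assumption):
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
with strategy.scope():
  # A SyncOnRead ("replica-local") variable that is summed when read.
  v = tf.Variable(1., synchronization=tf.VariableSynchronization.ON_READ,
                  aggregation=tf.VariableAggregation.SUM)
strategy.extended.read_var(v)  # => 2., the value aggregated across 2 replicas.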
reduce_to View source
reduce_to(
reduce_op, value, destinations, options=None
)
Combine (via e.g. sum or mean) values across replicas. reduce_to aggregates tf.distribute.DistributedValues and distributed variables. It supports both dense values and tf.IndexedSlices. This API currently can only be called in cross-replica context. Other variants to reduce values across replicas are:
tf.distribute.StrategyExtended.batch_reduce_to: the batch version of this API.
tf.distribute.ReplicaContext.all_reduce: the counterpart of this API in replica context. It supports both batched and non-batched all-reduce.
tf.distribute.Strategy.reduce: a more convenient method to reduce to the host in cross-replica context. destinations specifies where to reduce the value to, e.g. "GPU:0". You can also pass in a Tensor, and the destinations will be the device of that tensor. For all-reduce, pass the same to value and destinations. It can be used in tf.distribute.ReplicaContext.merge_call to write code that works for all tf.distribute.Strategy.
@tf.function
def step_fn(var):
def merge_fn(strategy, value, var):
# All-reduce the value. Note that `value` here is a
# `tf.distribute.DistributedValues`.
reduced = strategy.extended.reduce_to(tf.distribute.ReduceOp.SUM,
value, destinations=var)
strategy.extended.update(var, lambda var, value: var.assign(value),
args=(reduced,))
value = tf.identity(1.)
tf.distribute.get_replica_context().merge_call(merge_fn,
args=(value, var))
def run(strategy):
with strategy.scope():
v = tf.Variable(0.)
strategy.run(step_fn, args=(v,))
return v
run(tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]))
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=2.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=2.0>
}
run(tf.distribute.experimental.CentralStorageStrategy(
compute_devices=["GPU:0", "GPU:1"], parameter_device="CPU:0"))
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=2.0>
run(tf.distribute.OneDeviceStrategy("GPU:0"))
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value a tf.distribute.DistributedValues, or a tf.Tensor like object.
destinations a tf.distribute.DistributedValues, a tf.Variable, a tf.Tensor alike object, or a device string. It specifies the devices to reduce to. To perform an all-reduce, pass the same to value and destinations. Note that if it's a tf.Variable, the value is reduced to the devices of that variable, and this method doesn't update the variable.
options a tf.distribute.experimental.CommunicationOptions. Options to perform collective operations. This overrides the default options if the tf.distribute.Strategy takes one in the constructor. See tf.distribute.experimental.CommunicationOptions for details of the options.
Returns A tensor or value reduced to destinations.
update View source
update(
var, fn, args=(), kwargs=None, group=True
)
Run fn to update var using inputs mirrored to the same devices. tf.distribute.StrategyExtended.update takes a distributed variable var to be updated, an update function fn, and args and kwargs for fn. It applies fn to each component variable of var and passes corresponding values from args and kwargs. Neither args nor kwargs may contain per-replica values. If they contain mirrored values, they will be unwrapped before calling fn. For example, fn can be assign_add and args can be a mirrored DistributedValues where each component contains the value to be added to this mirrored variable var. Calling update will call assign_add on each component variable of var with the corresponding tensor value on that device. Example usage:
strategy = tf.distribute.MirroredStrategy(['GPU:0', 'GPU:1'])  # With 2 devices
with strategy.scope():
v = tf.Variable(5.0, aggregation=tf.VariableAggregation.SUM)
def update_fn(v):
return v.assign(1.0)
result = strategy.extended.update(v, update_fn)
# result is
# Mirrored:{
# 0: tf.Tensor(1.0, shape=(), dtype=float32),
# 1: tf.Tensor(1.0, shape=(), dtype=float32)
# }
If var is mirrored across multiple devices, then this method implements logic as follows:
results = {}
for device, v in var:
with tf.device(device):
# args and kwargs will be unwrapped if they are mirrored.
results[device] = fn(v, *args, **kwargs)
return merged(results)
Otherwise, this method returns fn(var, *args, **kwargs) colocated with var.
Args
var Variable, possibly mirrored to multiple devices, to operate on.
fn Function to call. Should take the variable as the first argument.
args Tuple or list. Additional positional arguments to pass to fn().
kwargs Dict with keyword arguments to pass to fn().
group Boolean. Defaults to True. If False, the return value will be unwrapped.
Returns By default, the merged return value of fn across all replicas. The merged result has dependencies to make sure that if it is evaluated at all, the side effects (updates) will happen on every replica. If instead "group=False" is specified, this function will return a nest of lists where each list has an element per replica, and the caller is responsible for ensuring all elements are executed.
update_non_slot View source
update_non_slot(
colocate_with, fn, args=(), kwargs=None, group=True
)
Runs fn(*args, **kwargs) on colocate_with devices. Used to update non-slot variables. DEPRECATED: TF 1.x ONLY.
Args
colocate_with Devices returned by non_slot_devices().
fn Function to execute.
args Tuple or list. Positional arguments to pass to fn().
kwargs Dict with keyword arguments to pass to fn().
group Boolean. Defaults to True. If False, the return value will be unwrapped.
Returns Return value of fn, possibly merged across devices.
value_container View source
value_container(
value
)
Returns the container that this per-replica value belongs to.
Args
value A value returned by run() or a variable created in scope().
Returns A container that value belongs to. If value does not belong to any container (including the case of container having been destroyed), returns the value itself. value in experimental_local_results(value_container(value)) will always be true.
variable_created_in_scope View source
variable_created_in_scope(
v
)
Tests whether v was created while this strategy scope was active. Variables created inside the strategy scope are "owned" by it:
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
v = tf.Variable(1.)
strategy.extended.variable_created_in_scope(v)
True
Variables created outside the strategy are not owned by it:
strategy = tf.distribute.MirroredStrategy()
v = tf.Variable(1.)
strategy.extended.variable_created_in_scope(v)
False
Args
v A tf.Variable instance.
Returns True if v was created inside the scope, False if not. | tensorflow.compat.v1.distribute.strategyextended |
Module: tf.compat.v1.distributions Core module for TensorFlow distribution objects and helpers. Classes class Bernoulli: Bernoulli distribution. class Beta: Beta distribution. class Categorical: Categorical distribution. class Dirichlet: Dirichlet distribution. class DirichletMultinomial: Dirichlet-Multinomial compound distribution. class Distribution: A generic probability distribution base class. class Exponential: Exponential distribution. class Gamma: Gamma distribution. class Laplace: The Laplace distribution with location loc and scale parameters. class Multinomial: Multinomial distribution. class Normal: The Normal distribution with location loc and scale parameters. class RegisterKL: Decorator to register a KL divergence implementation function. class ReparameterizationType: Instances of this class represent how sampling is reparameterized. class StudentT: Student's t-distribution. class Uniform: Uniform distribution with low and high parameters. Functions kl_divergence(...): Get the KL-divergence KL(distribution_a || distribution_b). (deprecated)
Other Members
FULLY_REPARAMETERIZED tf.compat.v1.distributions.ReparameterizationType
NOT_REPARAMETERIZED tf.compat.v1.distributions.ReparameterizationType | tensorflow.compat.v1.distributions |
tf.compat.v1.distributions.Bernoulli Bernoulli distribution. Inherits From: Distribution
tf.compat.v1.distributions.Bernoulli(
logits=None, probs=None, dtype=tf.dtypes.int32, validate_args=False,
allow_nan_stats=True, name='Bernoulli'
)
The Bernoulli distribution with probs parameter, i.e., the probability of a 1 outcome (vs a 0 outcome).
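For example, a minimal sketch of constructing and querying the distribution:
tfd = tf.compat.v1.distributions
# A batch of two independent Bernoulli distributions.
dist = tfd.Bernoulli(probs=[0.2, 0.7])
dist.mean()        # => [0.2, 0.7]
dist.sample(5)     # => int32 samples of shape [5, 2]
dist.prob([1, 0])  # => [P(X=1)=0.2, P(X=0)=0.3]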
Args
logits An N-D Tensor representing the log-odds of a 1 event. Each entry in the Tensor parametrizes an independent Bernoulli distribution where the probability of an event is sigmoid(logits). Only one of logits or probs should be passed in.
probs An N-D Tensor representing the probability of a 1 event. Each entry in the Tensor parameterizes an independent Bernoulli distribution. Only one of logits or probs should be passed in.
dtype The type of the event samples. Default: int32.
validate_args Python bool, default False. When True distribution parameters are checked for validity despite possibly degrading runtime performance. When False invalid inputs may silently render incorrect outputs.
allow_nan_stats Python bool, default True. When True, statistics (e.g., mean, mode, variance) use the value "NaN" to indicate the result is undefined. When False, an exception is raised if one or more of the statistic's batch members are undefined.
name Python str name prefixed to Ops created by this class.
Raises
ValueError If both probs and logits are passed, or if neither is passed.
Attributes
allow_nan_stats Python bool describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
batch_shape Shape of a single sample from a single event index as a TensorShape. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
dtype The DType of Tensors handled by this Distribution.
event_shape Shape of a single sample from a single batch as a TensorShape. May be partially defined or unknown.
logits Log-odds of a 1 outcome (vs 0).
name Name prepended to all ops created by this Distribution.
parameters Dictionary of parameters used to instantiate this Distribution.
probs Probability of a 1 outcome (vs 0).
reparameterization_type Describes how samples from the distribution are reparameterized. Currently this is one of the static instances distributions.FULLY_REPARAMETERIZED or distributions.NOT_REPARAMETERIZED.
validate_args Python bool indicating possibly expensive checks are enabled. Methods batch_shape_tensor View source
batch_shape_tensor(
name='batch_shape_tensor'
)
Shape of a single sample from a single event index as a 1-D Tensor. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
Args
name name to give to the op
Returns
batch_shape Tensor. cdf View source
cdf(
value, name='cdf'
)
Cumulative distribution function. Given random variable X, the cumulative distribution function cdf is:
cdf(x) := P[X <= x]
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
cdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. copy View source
copy(
**override_parameters_kwargs
)
Creates a deep copy of the distribution.
Note: the copy distribution may continue to depend on the original initialization arguments.
Args
**override_parameters_kwargs String/value dictionary of initialization arguments to override with new values.
Returns
distribution A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs). covariance View source
covariance(
name='covariance'
)
Covariance. Covariance is (possibly) defined only for non-scalar-event distributions. For example, for a length-k, vector-valued distribution, it is calculated as,
Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation. Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e.,
Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.
Args
name Python str prepended to names of ops created by this function.
Returns
covariance Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape). cross_entropy View source
cross_entropy(
other, name='cross_entropy'
)
Computes the (Shannon) cross entropy. Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as:
H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
where F denotes the support of the random variable X ~ P.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
cross_entropy self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy. entropy View source
entropy(
name='entropy'
)
Shannon entropy in nats. event_shape_tensor View source
event_shape_tensor(
name='event_shape_tensor'
)
Shape of a single sample from a single batch as a 1-D int32 Tensor.
Args
name name to give to the op
Returns
event_shape Tensor. is_scalar_batch View source
is_scalar_batch(
name='is_scalar_batch'
)
Indicates that batch_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_batch bool scalar Tensor. is_scalar_event View source
is_scalar_event(
name='is_scalar_event'
)
Indicates that event_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_event bool scalar Tensor. kl_divergence View source
kl_divergence(
other, name='kl_divergence'
)
Computes the Kullback--Leibler divergence. Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as:
KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
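A short sketch of computing the divergence between two Bernoulli distributions (the parameter values are illustrative assumptions):
tfd = tf.compat.v1.distributions
p = tfd.Bernoulli(probs=0.5)
q = tfd.Bernoulli(probs=0.8)
p.kl_divergence(q)  # A scalar Tensor, since the batch shape is scalar.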
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
kl_divergence self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence. log_cdf View source
log_cdf(
value, name='log_cdf'
)
Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is:
log_cdf(x) := Log[ P[X <= x] ]
Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
logcdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_prob View source
log_prob(
value, name='log_prob'
)
Log probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_survival_function View source
log_survival_function(
value, name='log_survival_function'
)
Log survival function. Given random variable X, the survival function is defined:
log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
mean View source
mean(
name='mean'
)
Mean. mode View source
mode(
name='mode'
)
Mode. Additional documentation from Bernoulli: Returns 1 if prob > 0.5 and 0 otherwise. param_shapes View source
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
Shapes of parameters given the desired shape of a call to sample(). This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Subclasses should override class method _param_shapes.
Args
sample_shape Tensor or python list/tuple. Desired shape of a call to sample().
name name to prepend ops with.
Returns dict of parameter name to Tensor shapes.
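For instance, a sketch of what this might look like for Bernoulli (assuming the distribution reports its logits parameter, as the TF 1.x implementation does):
# Parameter shapes needed so that sample() returns samples of shape [2, 3]:
tf.compat.v1.distributions.Bernoulli.param_shapes([2, 3])
# => {'logits': <tf.Tensor: shape=(2,), dtype=int32, numpy=array([2, 3], dtype=int32)>}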
param_static_shapes View source
@classmethod
param_static_shapes(
sample_shape
)
param_shapes with static (i.e. TensorShape) shapes. This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically. Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.
Args
sample_shape TensorShape or python list/tuple. Desired shape of a call to sample().
Returns dict of parameter name to TensorShape.
Raises
ValueError if sample_shape is a TensorShape and is not fully defined. prob View source
prob(
value, name='prob'
)
Probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. quantile View source
quantile(
value, name='quantile'
)
Quantile function. Aka "inverse cdf" or "percent point function". Given random variable X and p in [0, 1], the quantile is:
quantile(p) := x such that P[X <= x] == p
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
quantile a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. sample View source
sample(
sample_shape=(), seed=None, name='sample'
)
Generate samples of the specified shape. Note that a call to sample() without arguments will generate a single sample.
Args
sample_shape 0D or 1D int32 Tensor. Shape of the generated samples.
seed Python integer seed for RNG
name name to give to the op.
Returns
samples a Tensor with prepended dimensions sample_shape. stddev View source
stddev(
name='stddev'
)
Standard deviation. Standard deviation is defined as,
stddev = E[(X - E[X])**2]**0.5
where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
stddev Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). survival_function View source
survival_function(
value, name='survival_function'
)
Survival function. Given random variable X, the survival function is defined:
survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
variance View source
variance(
name='variance'
)
Variance. Variance is defined as,
Var = E[(X - E[X])**2]
where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
variance Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). | tensorflow.compat.v1.distributions.bernoulli |
tf.compat.v1.distributions.Beta Beta distribution. Inherits From: Distribution
tf.compat.v1.distributions.Beta(
concentration1=None, concentration0=None, validate_args=False,
allow_nan_stats=True, name='Beta'
)
The Beta distribution is defined over the (0, 1) interval using parameters concentration1 (aka "alpha") and concentration0 (aka "beta"). Mathematical Details The probability density function (pdf) is,
pdf(x; alpha, beta) = x**(alpha - 1) (1 - x)**(beta - 1) / Z
Z = Gamma(alpha) Gamma(beta) / Gamma(alpha + beta)
where:
concentration1 = alpha,
concentration0 = beta,
Z is the normalization constant, and,
Gamma is the gamma function. The concentration parameters represent mean total counts of a 1 or a 0, i.e.,
concentration1 = alpha = mean * total_concentration
concentration0 = beta = (1. - mean) * total_concentration
where mean in (0, 1) and total_concentration is a positive real number representing a mean total_count = concentration1 + concentration0. Distribution parameters are automatically broadcast in all functions; see examples for details. Warning: The samples can be zero due to finite precision. This happens more often when some of the concentrations are very small. Make sure to round the samples to np.finfo(dtype).tiny before computing the density. Samples of this distribution are reparameterized (pathwise differentiable). The derivatives are computed using the approach described in (Figurnov et al., 2018). Examples
import tensorflow_probability as tfp
tfd = tfp.distributions
# Create a batch of three Beta distributions.
alpha = [1, 2, 3]
beta = [1, 2, 3]
dist = tfd.Beta(alpha, beta)
dist.sample([4, 5]) # Shape [4, 5, 3]
# `x` has three batch entries, each with two samples.
x = [[.1, .4, .5],
[.2, .3, .5]]
# Calculate the probability of each pair of samples under the corresponding
# distribution in `dist`.
dist.prob(x) # Shape [2, 3]
# Create batch_shape=[2, 3] via parameter broadcast:
alpha = [[1.], [2]] # Shape [2, 1]
beta = [3., 4, 5] # Shape [3]
dist = tfd.Beta(alpha, beta)
# alpha broadcast as: [[1., 1, 1,],
# [2, 2, 2]]
# beta broadcast as: [[3., 4, 5],
# [3, 4, 5]]
# batch_Shape [2, 3]
dist.sample([4, 5]) # Shape [4, 5, 2, 3]
x = [.2, .3, .5]
# x will be broadcast as [[.2, .3, .5],
# [.2, .3, .5]],
# thus matching batch_shape [2, 3].
dist.prob(x) # Shape [2, 3]
Compute the gradients of samples w.r.t. the parameters:
alpha = tf.constant(1.0)
beta = tf.constant(2.0)
dist = tfd.Beta(alpha, beta)
samples = dist.sample(5) # Shape [5]
loss = tf.reduce_mean(tf.square(samples)) # Arbitrary loss function
# Unbiased stochastic gradients of the loss function
grads = tf.gradients(loss, [alpha, beta])
References: Implicit Reparameterization Gradients: Figurnov et al., 2018.
Args
concentration1 Positive floating-point Tensor indicating mean number of successes; aka "alpha". Implies self.dtype and self.batch_shape, i.e., concentration1.shape = [N1, N2, ..., Nm] = self.batch_shape.
concentration0 Positive floating-point Tensor indicating mean number of failures; aka "beta". Otherwise has same semantics as concentration1.
validate_args Python bool, default False. When True distribution parameters are checked for validity despite possibly degrading runtime performance. When False invalid inputs may silently render incorrect outputs.
allow_nan_stats Python bool, default True. When True, statistics (e.g., mean, mode, variance) use the value "NaN" to indicate the result is undefined. When False, an exception is raised if one or more of the statistic's batch members are undefined.
name Python str name prefixed to Ops created by this class.
Attributes
allow_nan_stats Python bool describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
batch_shape Shape of a single sample from a single event index as a TensorShape. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
concentration0 Concentration parameter associated with a 0 outcome.
concentration1 Concentration parameter associated with a 1 outcome.
dtype The DType of Tensors handled by this Distribution.
event_shape Shape of a single sample from a single batch as a TensorShape. May be partially defined or unknown.
name Name prepended to all ops created by this Distribution.
parameters Dictionary of parameters used to instantiate this Distribution.
reparameterization_type Describes how samples from the distribution are reparameterized. Currently this is one of the static instances distributions.FULLY_REPARAMETERIZED or distributions.NOT_REPARAMETERIZED.
total_concentration Sum of concentration parameters.
validate_args Python bool indicating possibly expensive checks are enabled. Methods batch_shape_tensor View source
batch_shape_tensor(
name='batch_shape_tensor'
)
Shape of a single sample from a single event index as a 1-D Tensor. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
Args
name name to give to the op
Returns
batch_shape Tensor. cdf View source
cdf(
value, name='cdf'
)
Cumulative distribution function. Given random variable X, the cumulative distribution function cdf is:
cdf(x) := P[X <= x]
Additional documentation from Beta:
Note: x must have dtype self.dtype and be in [0, 1]. It must have a shape compatible with self.batch_shape().
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
cdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. copy View source
copy(
**override_parameters_kwargs
)
Creates a deep copy of the distribution.
Note: the copy distribution may continue to depend on the original initialization arguments.
Args
**override_parameters_kwargs String/value dictionary of initialization arguments to override with new values.
Returns
distribution A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs). covariance View source
covariance(
name='covariance'
)
Covariance. Covariance is (possibly) defined only for non-scalar-event distributions. For example, for a length-k, vector-valued distribution, it is calculated as,
Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation. Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e.,
Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.
Args
name Python str prepended to names of ops created by this function.
Returns
covariance Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape). cross_entropy View source
cross_entropy(
other, name='cross_entropy'
)
Computes the (Shannon) cross entropy. Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as:
H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
where F denotes the support of the random variable X ~ P.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
cross_entropy self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy. entropy View source
entropy(
name='entropy'
)
Shannon entropy in nats. event_shape_tensor View source
event_shape_tensor(
name='event_shape_tensor'
)
Shape of a single sample from a single batch as a 1-D int32 Tensor.
Args
name name to give to the op
Returns
event_shape Tensor. is_scalar_batch View source
is_scalar_batch(
name='is_scalar_batch'
)
Indicates that batch_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_batch bool scalar Tensor. is_scalar_event View source
is_scalar_event(
name='is_scalar_event'
)
Indicates that event_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_event bool scalar Tensor. kl_divergence View source
kl_divergence(
other, name='kl_divergence'
)
Computes the Kullback--Leibler divergence. Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as: KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
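As an illustrative sketch (parameter values are assumptions; the Beta-Beta pair has a registered analytic KL), cross entropy then decomposes as H[p, q] = H[p] + KL[p, q]:
import tensorflow.compat.v1 as tf

p = tf.distributions.Beta(concentration1=2., concentration0=2.)
q = tf.distributions.Beta(concentration1=3., concentration0=1.)
kl = p.kl_divergence(q)  # scalar Tensor, since batch_shape == []
ce = p.cross_entropy(q)  # equals p.entropy() + kl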
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
kl_divergence self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence. log_cdf View source
log_cdf(
value, name='log_cdf'
)
Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: log_cdf(x) := Log[ P[X <= x] ]
Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1. Additional documentation from Beta:
Note: x must have dtype self.dtype and be in [0, 1]. It must have a shape compatible with self.batch_shape().
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
logcdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_prob View source
log_prob(
value, name='log_prob'
)
Log probability density/mass function. Additional documentation from Beta:
Note: x must have dtype self.dtype and be in [0, 1]. It must have a shape compatible with self.batch_shape().
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_survival_function View source
log_survival_function(
value, name='log_survival_function'
)
Log survival function. Given random variable X, the survival function is defined: log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
mean View source
mean(
name='mean'
)
Mean. mode View source
mode(
name='mode'
)
Mode. Additional documentation from Beta:
Note: The mode is undefined when concentration1 <= 1 or concentration0 <= 1. If self.allow_nan_stats is True, NaN is used for undefined modes. If self.allow_nan_stats is False an exception is raised when one or more modes are undefined.
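A brief sketch of both behaviors (parameter values are illustrative):
import tensorflow.compat.v1 as tf

# Mode is undefined for Beta(0.5, 0.5); NaN under the default setting.
d = tf.distributions.Beta(0.5, 0.5, allow_nan_stats=True)
m = d.mode()  # ==> nan
# With allow_nan_stats=False, evaluating the mode raises an error instead.
d_strict = tf.distributions.Beta(0.5, 0.5, allow_nan_stats=False)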
param_shapes View source
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
Shapes of parameters given the desired shape of a call to sample(). This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Subclasses should override class method _param_shapes.
Args
sample_shape Tensor or python list/tuple. Desired shape of a call to sample().
name name to prepend ops with.
Returns dict of parameter name to Tensor shapes.
param_static_shapes View source
@classmethod
param_static_shapes(
sample_shape
)
param_shapes with static (i.e. TensorShape) shapes. This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically. Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.
Args
sample_shape TensorShape or python list/tuple. Desired shape of a call to sample().
Returns dict of parameter name to TensorShape.
Raises
ValueError if sample_shape is a TensorShape and is not fully defined. prob View source
prob(
value, name='prob'
)
Probability density/mass function. Additional documentation from Beta:
Note: x must have dtype self.dtype and be in [0, 1]. It must have a shape compatible with self.batch_shape().
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. quantile View source
quantile(
value, name='quantile'
)
Quantile function. Aka "inverse cdf" or "percent point function". Given random variable X and p in [0, 1], the quantile is: quantile(p) := x such that P[X <= x] == p
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
quantile a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. sample View source
sample(
sample_shape=(), seed=None, name='sample'
)
Generate samples of the specified shape. Note that a call to sample() without arguments will generate a single sample.
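For example (shapes only; sampled values are random), sample_shape is prepended to batch_shape + event_shape:
import tensorflow.compat.v1 as tf

d = tf.distributions.Beta([1., 2.], [2., 3.])  # batch_shape=[2], event_shape=[]
s = d.sample([4, 5], seed=42)  # shape [4, 5, 2]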
Args
sample_shape 0D or 1D int32 Tensor. Shape of the generated samples.
seed Python integer seed for RNG
name name to give to the op.
Returns
samples a Tensor with prepended dimensions sample_shape. stddev View source
stddev(
name='stddev'
)
Standard deviation. Standard deviation is defined as, stddev = E[(X - E[X])**2]**0.5
where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
stddev Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). survival_function View source
survival_function(
value, name='survival_function'
)
Survival function. Given random variable X, the survival function is defined: survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
variance View source
variance(
name='variance'
)
Variance. Variance is defined as, Var = E[(X - E[X])**2]
where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
variance Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). | tensorflow.compat.v1.distributions.beta |
tf.compat.v1.distributions.Categorical Categorical distribution. Inherits From: Distribution
tf.compat.v1.distributions.Categorical(
logits=None, probs=None, dtype=tf.dtypes.int32, validate_args=False,
allow_nan_stats=True, name='Categorical'
)
The Categorical distribution is parameterized by either probabilities or log-probabilities of a set of K classes. It is defined over the integers {0, 1, ..., K-1}. The Categorical distribution is closely related to the OneHotCategorical and Multinomial distributions. The Categorical distribution can be intuited as generating samples according to argmax{ OneHotCategorical(probs) }, itself identical to argmax{ Multinomial(probs, total_count=1) }. Mathematical Details The probability mass function (pmf) is, pmf(k; pi) = prod_j pi_j**[k == j]
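Concretely, a small sketch with illustrative probabilities; the pmf simply selects the probability of the observed class:
import tensorflow.compat.v1 as tf

dist = tf.distributions.Categorical(probs=[0.1, 0.5, 0.4])
dist.prob(1)      # ==> 0.5
dist.log_prob(2)  # ==> log(0.4)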
Pitfalls The number of classes, K, must not exceed: (1) the largest integer representable by self.dtype, i.e., 2**(mantissa_bits+1) (IEEE 754), and (2) the maximum Tensor index, i.e., 2**31-1. In other words, K <= min(2**31-1, {
tf.float16: 2**11,
tf.float32: 2**24,
tf.float64: 2**53 }[param.dtype])
Note: This condition is validated only when self.validate_args = True.
Examples Creates a 3-class distribution with the 2nd class being most likely. dist = Categorical(probs=[0.1, 0.5, 0.4])
n = 1e4
empirical_prob = tf.cast(
tf.histogram_fixed_width(
tf.cast(dist.sample(int(n)), tf.float32),  # samples are int32; cast to match value_range dtype
[0., 2.],
nbins=3),
dtype=tf.float32) / n
# ==> array([ 0.1005, 0.5037, 0.3958], dtype=float32)
Creates a 3-class distribution with the 2nd class being most likely. Parameterized by logits rather than probabilities. dist = Categorical(logits=np.log([0.1, 0.5, 0.4]))
n = 1e4
empirical_prob = tf.cast(
tf.histogram_fixed_width(
tf.cast(dist.sample(int(n)), tf.float32),  # samples are int32; cast to match value_range dtype
[0., 2.],
nbins=3),
dtype=tf.float32) / n
# ==> array([0.1045, 0.5047, 0.3908], dtype=float32)
Creates a 3-class distribution with the 3rd class being most likely. The distribution functions can be evaluated on counts. # counts is a scalar.
p = [0.1, 0.4, 0.5]
dist = Categorical(probs=p)
dist.prob(0) # Shape []
# p will be broadcast to [[0.1, 0.4, 0.5], [0.1, 0.4, 0.5]] to match counts.
counts = [1, 0]
dist.prob(counts) # Shape [2]
# p will be broadcast to shape [5, 7, 3, 3] to match counts.
counts = [[...]] # Shape [5, 7, 3]
dist.prob(counts) # Shape [5, 7, 3]
Args
logits An N-D Tensor, N >= 1, representing the log probabilities of a set of Categorical distributions. The first N - 1 dimensions index into a batch of independent distributions and the last dimension represents a vector of logits for each class. Only one of logits or probs should be passed in.
probs An N-D Tensor, N >= 1, representing the probabilities of a set of Categorical distributions. The first N - 1 dimensions index into a batch of independent distributions and the last dimension represents a vector of probabilities for each class. Only one of logits or probs should be passed in.
dtype The type of the event samples (default: int32).
validate_args Python bool, default False. When True distribution parameters are checked for validity despite possibly degrading runtime performance. When False invalid inputs may silently render incorrect outputs.
allow_nan_stats Python bool, default True. When True, statistics (e.g., mean, mode, variance) use the value "NaN" to indicate the result is undefined. When False, an exception is raised if one or more of the statistic's batch members are undefined.
name Python str name prefixed to Ops created by this class.
Attributes
allow_nan_stats Python bool describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
batch_shape Shape of a single sample from a single event index as a TensorShape. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
dtype The DType of Tensors handled by this Distribution.
event_shape Shape of a single sample from a single batch as a TensorShape. May be partially defined or unknown.
event_size Scalar int32 tensor: the number of classes.
logits Vector of coordinatewise logits.
name Name prepended to all ops created by this Distribution.
parameters Dictionary of parameters used to instantiate this Distribution.
probs Vector of coordinatewise probabilities.
reparameterization_type Describes how samples from the distribution are reparameterized. Currently this is one of the static instances distributions.FULLY_REPARAMETERIZED or distributions.NOT_REPARAMETERIZED.
validate_args Python bool indicating possibly expensive checks are enabled. Methods batch_shape_tensor View source
batch_shape_tensor(
name='batch_shape_tensor'
)
Shape of a single sample from a single event index as a 1-D Tensor. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
Args
name name to give to the op
Returns
batch_shape Tensor. cdf View source
cdf(
value, name='cdf'
)
Cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: cdf(x) := P[X <= x]
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
cdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. copy View source
copy(
**override_parameters_kwargs
)
Creates a deep copy of the distribution.
Note: the copy distribution may continue to depend on the original initialization arguments.
Args
**override_parameters_kwargs String/value dictionary of initialization arguments to override with new values.
Returns
distribution A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs). covariance View source
covariance(
name='covariance'
)
Covariance. Covariance is (possibly) defined only for non-scalar-event distributions. For example, for a length-k, vector-valued distribution, it is calculated as, Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation. Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e., Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.
Args
name Python str prepended to names of ops created by this function.
Returns
covariance Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape). cross_entropy View source
cross_entropy(
other, name='cross_entropy'
)
Computes the (Shannon) cross entropy. Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as: H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
where F denotes the support of the random variable X ~ P.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
cross_entropy self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy. entropy View source
entropy(
name='entropy'
)
Shannon entropy in nats. event_shape_tensor View source
event_shape_tensor(
name='event_shape_tensor'
)
Shape of a single sample from a single batch as a 1-D int32 Tensor.
Args
name name to give to the op
Returns
event_shape Tensor. is_scalar_batch View source
is_scalar_batch(
name='is_scalar_batch'
)
Indicates that batch_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_batch bool scalar Tensor. is_scalar_event View source
is_scalar_event(
name='is_scalar_event'
)
Indicates that event_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_event bool scalar Tensor. kl_divergence View source
kl_divergence(
other, name='kl_divergence'
)
Computes the Kullback--Leibler divergence. Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as: KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
kl_divergence self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence. log_cdf View source
log_cdf(
value, name='log_cdf'
)
Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: log_cdf(x) := Log[ P[X <= x] ]
Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
logcdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_prob View source
log_prob(
value, name='log_prob'
)
Log probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_survival_function View source
log_survival_function(
value, name='log_survival_function'
)
Log survival function. Given random variable X, the survival function is defined: log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
mean View source
mean(
name='mean'
)
Mean. mode View source
mode(
name='mode'
)
Mode. param_shapes View source
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
Shapes of parameters given the desired shape of a call to sample(). This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Subclasses should override class method _param_shapes.
Args
sample_shape Tensor or python list/tuple. Desired shape of a call to sample().
name name to prepend ops with.
Returns dict of parameter name to Tensor shapes.
param_static_shapes View source
@classmethod
param_static_shapes(
sample_shape
)
param_shapes with static (i.e. TensorShape) shapes. This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically. Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.
Args
sample_shape TensorShape or python list/tuple. Desired shape of a call to sample().
Returns dict of parameter name to TensorShape.
Raises
ValueError if sample_shape is a TensorShape and is not fully defined. prob View source
prob(
value, name='prob'
)
Probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. quantile View source
quantile(
value, name='quantile'
)
Quantile function. Aka "inverse cdf" or "percent point function". Given random variable X and p in [0, 1], the quantile is: quantile(p) := x such that P[X <= x] == p
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
quantile a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. sample View source
sample(
sample_shape=(), seed=None, name='sample'
)
Generate samples of the specified shape. Note that a call to sample() without arguments will generate a single sample.
Args
sample_shape 0D or 1D int32 Tensor. Shape of the generated samples.
seed Python integer seed for RNG
name name to give to the op.
Returns
samples a Tensor with prepended dimensions sample_shape. stddev View source
stddev(
name='stddev'
)
Standard deviation. Standard deviation is defined as, stddev = E[(X - E[X])**2]**0.5
where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
stddev Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). survival_function View source
survival_function(
value, name='survival_function'
)
Survival function. Given random variable X, the survival function is defined: survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
variance View source
variance(
name='variance'
)
Variance. Variance is defined as, Var = E[(X - E[X])**2]
where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
variance Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). | tensorflow.compat.v1.distributions.categorical |
tf.compat.v1.distributions.Dirichlet Dirichlet distribution. Inherits From: Distribution
tf.compat.v1.distributions.Dirichlet(
concentration, validate_args=False, allow_nan_stats=True,
name='Dirichlet'
)
The Dirichlet distribution is defined over the (k-1)-simplex using a positive, length-k vector concentration (k > 1). The Dirichlet is identically the Beta distribution when k = 2. Mathematical Details The Dirichlet is a distribution over the open (k-1)-simplex, i.e., S^{k-1} = { (x_0, ..., x_{k-1}) in R^k : sum_j x_j = 1 and all_j x_j > 0 }.
The probability density function (pdf) is, pdf(x; alpha) = prod_j x_j**(alpha_j - 1) / Z
Z = prod_j Gamma(alpha_j) / Gamma(sum_j alpha_j)
where:
x in S^{k-1}, i.e., the (k-1)-simplex,
concentration = alpha = [alpha_0, ..., alpha_{k-1}], alpha_j > 0,
Z is the normalization constant aka the multivariate beta function, and,
Gamma is the gamma function. The concentration represents mean total counts of class occurrence, i.e., concentration = alpha = mean * total_concentration
where mean in S^{k-1} and total_concentration is a positive real number representing a mean total count. Distribution parameters are automatically broadcast in all functions; see examples for details. Warning: Some components of the samples can be zero due to finite precision. This happens more often when some of the concentrations are very small. Make sure to round the samples to np.finfo(dtype).tiny before computing the density. Samples of this distribution are reparameterized (pathwise differentiable). The derivatives are computed using the approach described in (Figurnov et al., 2018). Examples import tensorflow_probability as tfp
tfd = tfp.distributions
# Create a single trivariate Dirichlet, with the 3rd class being three times
# more frequent than the first. I.e., batch_shape=[], event_shape=[3].
alpha = [1., 2, 3]
dist = tfd.Dirichlet(alpha)
dist.sample([4, 5]) # shape: [4, 5, 3]
# x has one sample, one batch, three classes:
x = [.2, .3, .5] # shape: [3]
dist.prob(x) # shape: []
# x has two samples from one batch:
x = [[.1, .4, .5],
[.2, .3, .5]]
dist.prob(x) # shape: [2]
# alpha will be broadcast to shape [5, 7, 3] to match x.
x = [[...]] # shape: [5, 7, 3]
dist.prob(x) # shape: [5, 7]
# Create batch_shape=[2], event_shape=[3]:
alpha = [[1., 2, 3],
[4, 5, 6]] # shape: [2, 3]
dist = tfd.Dirichlet(alpha)
dist.sample([4, 5]) # shape: [4, 5, 2, 3]
x = [.2, .3, .5]
# x will be broadcast as [[.2, .3, .5],
# [.2, .3, .5]],
# thus matching batch_shape [2, 3].
dist.prob(x) # shape: [2]
Compute the gradients of samples w.r.t. the parameters: alpha = tf.constant([1.0, 2.0, 3.0])
dist = tfd.Dirichlet(alpha)
samples = dist.sample(5) # Shape [5, 3]
loss = tf.reduce_mean(tf.square(samples)) # Arbitrary loss function
# Unbiased stochastic gradients of the loss function
grads = tf.gradients(loss, alpha)
References: Implicit Reparameterization Gradients: Figurnov et al., 2018 (pdf)
Args
concentration Positive floating-point Tensor indicating mean number of class occurrences; aka "alpha". Implies self.dtype, and self.batch_shape, self.event_shape, i.e., if concentration.shape = [N1, N2, ..., Nm, k] then batch_shape = [N1, N2, ..., Nm] and event_shape = [k].
validate_args Python bool, default False. When True distribution parameters are checked for validity despite possibly degrading runtime performance. When False invalid inputs may silently render incorrect outputs.
allow_nan_stats Python bool, default True. When True, statistics (e.g., mean, mode, variance) use the value "NaN" to indicate the result is undefined. When False, an exception is raised if one or more of the statistic's batch members are undefined.
name Python str name prefixed to Ops created by this class.
Attributes
allow_nan_stats Python bool describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
batch_shape Shape of a single sample from a single event index as a TensorShape. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
concentration Concentration parameter; expected counts for that coordinate.
dtype The DType of Tensors handled by this Distribution.
event_shape Shape of a single sample from a single batch as a TensorShape. May be partially defined or unknown.
name Name prepended to all ops created by this Distribution.
parameters Dictionary of parameters used to instantiate this Distribution.
reparameterization_type Describes how samples from the distribution are reparameterized. Currently this is one of the static instances distributions.FULLY_REPARAMETERIZED or distributions.NOT_REPARAMETERIZED.
total_concentration Sum of last dim of concentration parameter.
validate_args Python bool indicating possibly expensive checks are enabled. Methods batch_shape_tensor View source
batch_shape_tensor(
name='batch_shape_tensor'
)
Shape of a single sample from a single event index as a 1-D Tensor. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
Args
name name to give to the op
Returns
batch_shape Tensor. cdf View source
cdf(
value, name='cdf'
)
Cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: cdf(x) := P[X <= x]
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
cdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. copy View source
copy(
**override_parameters_kwargs
)
Creates a deep copy of the distribution.
Note: the copy distribution may continue to depend on the original initialization arguments.
Args
**override_parameters_kwargs String/value dictionary of initialization arguments to override with new values.
Returns
distribution A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs). covariance View source
covariance(
name='covariance'
)
Covariance. Covariance is (possibly) defined only for non-scalar-event distributions. For example, for a length-k, vector-valued distribution, it is calculated as, Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation. Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e., Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.
Args
name Python str prepended to names of ops created by this function.
Returns
covariance Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape). cross_entropy View source
cross_entropy(
other, name='cross_entropy'
)
Computes the (Shannon) cross entropy. Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as: H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
where F denotes the support of the random variable X ~ P.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
cross_entropy self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy. entropy View source
entropy(
name='entropy'
)
Shannon entropy in nats. event_shape_tensor View source
event_shape_tensor(
name='event_shape_tensor'
)
Shape of a single sample from a single batch as a 1-D int32 Tensor.
Args
name name to give to the op
Returns
event_shape Tensor. is_scalar_batch View source
is_scalar_batch(
name='is_scalar_batch'
)
Indicates that batch_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_batch bool scalar Tensor. is_scalar_event View source
is_scalar_event(
name='is_scalar_event'
)
Indicates that event_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_event bool scalar Tensor. kl_divergence View source
kl_divergence(
other, name='kl_divergence'
)
Computes the Kullback--Leibler divergence. Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as: KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
kl_divergence self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence. log_cdf View source
log_cdf(
value, name='log_cdf'
)
Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: log_cdf(x) := Log[ P[X <= x] ]
Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
logcdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_prob View source
log_prob(
value, name='log_prob'
)
Log probability density/mass function. Additional documentation from Dirichlet:
Note: value must be a non-negative tensor with dtype self.dtype and be in the (self.event_shape() - 1)-simplex, i.e., tf.reduce_sum(value, -1) = 1. It must have a shape compatible with self.batch_shape() + self.event_shape().
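A short sketch (concentration values are illustrative); the input must lie on the simplex, i.e., non-negative components summing to 1 along the last axis:
import tensorflow.compat.v1 as tf

d = tf.distributions.Dirichlet([1., 2., 3.])
lp = d.log_prob([0.2, 0.3, 0.5])  # scalar; components sum to 1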
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_survival_function View source
log_survival_function(
value, name='log_survival_function'
)
Log survival function. Given random variable X, the survival function is defined: log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
mean View source
mean(
name='mean'
)
Mean. mode View source
mode(
name='mode'
)
Mode. Additional documentation from Dirichlet:
Note: The mode is undefined when any concentration <= 1. If self.allow_nan_stats is True, NaN is used for undefined modes. If self.allow_nan_stats is False an exception is raised when one or more modes are undefined.
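Sketch of the undefined-mode case (illustrative concentrations; any component <= 1 leaves the mode undefined):
import tensorflow.compat.v1 as tf

d = tf.distributions.Dirichlet([0.5, 2., 3.])
d.mode()  # ==> NaNs under the default allow_nan_stats=True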
param_shapes View source
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
Shapes of parameters given the desired shape of a call to sample(). This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Subclasses should override class method _param_shapes.
Args
sample_shape Tensor or python list/tuple. Desired shape of a call to sample().
name name to prepend ops with.
Returns dict of parameter name to Tensor shapes.
param_static_shapes View source
@classmethod
param_static_shapes(
sample_shape
)
param_shapes with static (i.e. TensorShape) shapes. This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically. Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.
Args
sample_shape TensorShape or python list/tuple. Desired shape of a call to sample().
Returns dict of parameter name to TensorShape.
Raises
ValueError if sample_shape is a TensorShape and is not fully defined. prob View source
prob(
value, name='prob'
)
Probability density/mass function. Additional documentation from Dirichlet:
Note: value must be a non-negative tensor with dtype self.dtype and be in the (self.event_shape() - 1)-simplex, i.e., tf.reduce_sum(value, -1) = 1. It must have a shape compatible with self.batch_shape() + self.event_shape().
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. quantile View source
quantile(
value, name='quantile'
)
Quantile function. Aka "inverse cdf" or "percent point function". Given random variable X and p in [0, 1], the quantile is: quantile(p) := x such that P[X <= x] == p
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
quantile a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. sample View source
sample(
sample_shape=(), seed=None, name='sample'
)
Generate samples of the specified shape. Note that a call to sample() without arguments will generate a single sample.
Args
sample_shape 0D or 1D int32 Tensor. Shape of the generated samples.
seed Python integer seed for RNG
name name to give to the op.
Returns
samples a Tensor with prepended dimensions sample_shape. stddev View source
stddev(
name='stddev'
)
Standard deviation. Standard deviation is defined as, stddev = E[(X - E[X])**2]**0.5
where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
stddev Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). survival_function View source
survival_function(
value, name='survival_function'
)
Survival function. Given random variable X, the survival function is defined: survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
variance View source
variance(
name='variance'
)
Variance. Variance is defined as, Var = E[(X - E[X])**2]
where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
variance Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). | tensorflow.compat.v1.distributions.dirichlet |
tf.compat.v1.distributions.DirichletMultinomial Dirichlet-Multinomial compound distribution. Inherits From: Distribution
tf.compat.v1.distributions.DirichletMultinomial(
total_count, concentration, validate_args=False, allow_nan_stats=True,
name='DirichletMultinomial'
)
The Dirichlet-Multinomial distribution is parameterized by a (batch of) length-K concentration vectors (K > 1) and a total_count number of trials, i.e., the number of trials per draw from the DirichletMultinomial. It is defined over a (batch of) length-K vector counts such that tf.reduce_sum(counts, -1) = total_count. The Dirichlet-Multinomial is identically the Beta-Binomial distribution when K = 2. Mathematical Details The Dirichlet-Multinomial is a distribution over K-class counts, i.e., a length-K vector of non-negative integer counts = n = [n_0, ..., n_{K-1}]. The probability mass function (pmf) is, pmf(n; alpha, N) = Beta(alpha + n) / (prod_j n_j!) / Z
Z = Beta(alpha) / N!
where:
concentration = alpha = [alpha_0, ..., alpha_{K-1}], alpha_j > 0,
total_count = N, N a positive integer,
N! is N factorial, and,
Beta(x) = prod_j Gamma(x_j) / Gamma(sum_j x_j) is the multivariate beta function, and,
Gamma is the gamma function. Dirichlet-Multinomial is a compound distribution, i.e., its samples are generated as follows. Choose class probabilities: probs = [p_0,...,p_{K-1}] ~ Dir(concentration)
Draw integers: counts = [n_0,...,n_{K-1}] ~ Multinomial(total_count, probs)
The last concentration dimension parametrizes a single Dirichlet-Multinomial distribution. When calling distribution functions (e.g., dist.prob(counts)), concentration, total_count and counts are broadcast to the same shape. The last dimension of counts corresponds to a single Dirichlet-Multinomial distribution. Distribution parameters are automatically broadcast in all functions; see examples for details. Pitfalls The number of classes, K, must not exceed: (1) the largest integer representable by self.dtype, i.e., 2**(mantissa_bits+1) (IEEE 754), and (2) the maximum Tensor index, i.e., 2**31-1. In other words, K <= min(2**31-1, {
tf.float16: 2**11,
tf.float32: 2**24,
tf.float64: 2**53 }[param.dtype])
Note: This condition is validated only when self.validate_args = True.
Examples alpha = [1., 2., 3.]
n = 2.
dist = DirichletMultinomial(n, alpha)
Creates a 3-class distribution in which the 3rd class is the most likely to be drawn. The distribution functions can be evaluated on counts. # counts same shape as alpha.
counts = [0., 0., 2.]
dist.prob(counts) # Shape []
# alpha will be broadcast to [[1., 2., 3.], [1., 2., 3.]] to match counts.
counts = [[1., 1., 0.], [1., 0., 1.]]
dist.prob(counts) # Shape [2]
# alpha will be broadcast to shape [5, 7, 3] to match counts.
counts = [[...]] # Shape [5, 7, 3]
dist.prob(counts) # Shape [5, 7]
Creates a 2-batch of 3-class distributions. alpha = [[1., 2., 3.], [4., 5., 6.]] # Shape [2, 3]
n = [3., 3.]
dist = DirichletMultinomial(n, alpha)
# counts will be broadcast to [[2., 1., 0.], [2., 1., 0.]] to match alpha.
counts = [2., 1., 0.]
dist.prob(counts) # Shape [2]
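Sampling is also supported; a brief sketch reusing the bare DirichletMultinomial alias from the examples above (parameter values are illustrative); each draw is a length-K count vector summing to total_count.
dist = DirichletMultinomial(3., [1., 2., 3.])
counts = dist.sample(5, seed=0)  # Shape [5, 3]; each row sums to 3.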
Args
total_count Non-negative floating point tensor, whose dtype is the same as concentration. The shape is broadcastable to [N1,..., Nm] with m >= 0. Defines this as a batch of N1 x ... x Nm different Dirichlet multinomial distributions. Its components should be equal to integer values.
concentration Positive floating point tensor, whose dtype is the same as n with shape broadcastable to [N1,..., Nm, K] m >= 0. Defines this as a batch of N1 x ... x Nm different K class Dirichlet multinomial distributions.
validate_args Python bool, default False. When True distribution parameters are checked for validity despite possibly degrading runtime performance. When False invalid inputs may silently render incorrect outputs.
allow_nan_stats Python bool, default True. When True, statistics (e.g., mean, mode, variance) use the value "NaN" to indicate the result is undefined. When False, an exception is raised if one or more of the statistic's batch members are undefined.
name Python str name prefixed to Ops created by this class.
Attributes
allow_nan_stats Python bool describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
batch_shape Shape of a single sample from a single event index as a TensorShape. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
concentration Concentration parameter; expected prior counts for that coordinate.
dtype The DType of Tensors handled by this Distribution.
event_shape Shape of a single sample from a single batch as a TensorShape. May be partially defined or unknown.
name Name prepended to all ops created by this Distribution.
parameters Dictionary of parameters used to instantiate this Distribution.
reparameterization_type Describes how samples from the distribution are reparameterized. Currently this is one of the static instances distributions.FULLY_REPARAMETERIZED or distributions.NOT_REPARAMETERIZED.
total_concentration Sum of last dim of concentration parameter.
total_count Number of trials used to construct a sample.
validate_args Python bool indicating possibly expensive checks are enabled. Methods batch_shape_tensor View source
batch_shape_tensor(
name='batch_shape_tensor'
)
Shape of a single sample from a single event index as a 1-D Tensor. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
Args
name name to give to the op
Returns
batch_shape Tensor. cdf View source
cdf(
value, name='cdf'
)
Cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: cdf(x) := P[X <= x]
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
cdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. copy View source
copy(
**override_parameters_kwargs
)
Creates a deep copy of the distribution.
Note: the copy distribution may continue to depend on the original initialization arguments.
Args
**override_parameters_kwargs String/value dictionary of initialization arguments to override with new values.
Returns
distribution A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs). covariance View source
covariance(
name='covariance'
)
Covariance. Covariance is (possibly) defined only for non-scalar-event distributions. For example, for a length-k, vector-valued distribution, it is calculated as, Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation. Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e., Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector. Additional documentation from DirichletMultinomial: The covariance for each batch member is defined as the following: Var(X_j) = n * alpha_j / alpha_0 * (1 - alpha_j / alpha_0) *
(n + alpha_0) / (1 + alpha_0)
where concentration = alpha and total_concentration = alpha_0 = sum_j alpha_j. The covariance between elements in a batch is defined as: Cov(X_i, X_j) = -n * alpha_i * alpha_j / alpha_0 ** 2 *
(n + alpha_0) / (1 + alpha_0)
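For instance (illustrative parameters, reusing the bare DirichletMultinomial alias from the class examples), a single 3-class distribution yields a 3 x 3 covariance matrix:
dist = DirichletMultinomial(5., [1., 2., 3.])
dist.covariance()  # Shape [3, 3]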
Args
name Python str prepended to names of ops created by this function.
Returns
covariance Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape). cross_entropy View source
cross_entropy(
other, name='cross_entropy'
)
Computes the (Shannon) cross entropy. Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as: H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
where F denotes the support of the random variable X ~ P.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
cross_entropy self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy. entropy View source
entropy(
name='entropy'
)
Shannon entropy in nats. event_shape_tensor View source
event_shape_tensor(
name='event_shape_tensor'
)
Shape of a single sample from a single batch as a 1-D int32 Tensor.
Args
name name to give to the op
Returns
event_shape Tensor. is_scalar_batch View source
is_scalar_batch(
name='is_scalar_batch'
)
Indicates that batch_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_batch bool scalar Tensor. is_scalar_event View source
is_scalar_event(
name='is_scalar_event'
)
Indicates that event_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_event bool scalar Tensor. kl_divergence View source
kl_divergence(
other, name='kl_divergence'
)
Computes the Kullback--Leibler divergence. Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as: KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
kl_divergence self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence. log_cdf View source
log_cdf(
value, name='log_cdf'
)
Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: log_cdf(x) := Log[ P[X <= x] ]
Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
logcdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_prob View source
log_prob(
value, name='log_prob'
)
Log probability density/mass function. Additional documentation from DirichletMultinomial: For each batch of counts, value = [n_0, ..., n_{K-1}], P[value] is the probability that after sampling self.total_count draws from this Dirichlet-Multinomial distribution, the number of draws falling in class j is n_j. This definition is exchangeable: different sequences yield the same counts, so the probability includes a combinatorial coefficient.
Note: value must be a non-negative tensor with dtype self.dtype, have no fractional components, and such that tf.reduce_sum(value, -1) = self.total_count. Its shape must be broadcastable with self.concentration and self.total_count.
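A minimal sketch (illustrative parameters; reusing the bare DirichletMultinomial alias from the class examples):
dist = DirichletMultinomial(2., [1., 2., 3.])
dist.log_prob([0., 0., 2.])  # scalar log-probability of drawing class 2 twice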
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_survival_function View source
log_survival_function(
value, name='log_survival_function'
)
Log survival function. Given random variable X, the survival function is defined: log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
mean View source
mean(
name='mean'
)
Mean. mode View source
mode(
name='mode'
)
Mode. param_shapes View source
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
Shapes of parameters given the desired shape of a call to sample(). This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Subclasses should override class method _param_shapes.
Args
sample_shape Tensor or python list/tuple. Desired shape of a call to sample().
name name to prepend ops with.
Returns dict of parameter name to Tensor shapes.
param_static_shapes View source
@classmethod
param_static_shapes(
sample_shape
)
param_shapes with static (i.e. TensorShape) shapes. This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically. Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.
Args
sample_shape TensorShape or python list/tuple. Desired shape of a call to sample().
Returns dict of parameter name to TensorShape.
Raises
ValueError if sample_shape is a TensorShape and is not fully defined. prob View source
prob(
value, name='prob'
)
Probability density/mass function. Additional documentation from DirichletMultinomial: For each batch of counts, value = [n_0, ..., n_{K-1}], P[value] is the probability that after sampling self.total_count draws from this Dirichlet-Multinomial distribution, the number of draws falling in class j is n_j. This definition is exchangeable: different sequences yield the same counts, so the probability includes a combinatorial coefficient.
Note: value must be a non-negative tensor with dtype self.dtype, have no fractional components, and such that tf.reduce_sum(value, -1) = self.total_count. Its shape must be broadcastable with self.concentration and self.total_count.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. quantile View source
quantile(
value, name='quantile'
)
Quantile function. Aka "inverse cdf" or "percent point function". Given random variable X and p in [0, 1], the quantile is: quantile(p) := x such that P[X <= x] == p
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
quantile a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. sample View source
sample(
sample_shape=(), seed=None, name='sample'
)
Generate samples of the specified shape. Note that a call to sample() without arguments will generate a single sample.
Args
sample_shape 0D or 1D int32 Tensor. Shape of the generated samples.
seed Python integer seed for RNG
name name to give to the op.
Returns
samples a Tensor with prepended dimensions sample_shape. stddev View source
stddev(
name='stddev'
)
Standard deviation. Standard deviation is defined as, stddev = E[(X - E[X])**2]**0.5
where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
stddev Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). survival_function View source
survival_function(
value, name='survival_function'
)
Survival function. Given random variable X, the survival function is defined: survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
variance View source
variance(
name='variance'
)
Variance. Variance is defined as, Var = E[(X - E[X])**2]
where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
variance Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). | tensorflow.compat.v1.distributions.dirichletmultinomial |
tf.compat.v1.distributions.Distribution A generic probability distribution base class.
tf.compat.v1.distributions.Distribution(
dtype, reparameterization_type, validate_args, allow_nan_stats, parameters=None,
graph_parents=None, name=None
)
Distribution is a base class for constructing and organizing properties (e.g., mean, variance) of random variables (e.g., Bernoulli, Gaussian). Subclassing Subclasses are expected to implement a leading-underscore version of the same-named function. The argument signature should be identical except for the omission of name="...". For example, to enable log_prob(value, name="log_prob") a subclass should implement _log_prob(value). Subclasses can append to public-level docstrings by providing docstrings for their method specializations. For example: @util.AppendDocstring("Some other details.")
def _log_prob(self, value):
...
would add the string "Some other details." to the log_prob function docstring. This is implemented as a simple decorator to avoid python linter complaining about missing Args/Returns/Raises sections in the partial docstrings. Broadcasting, batching, and shapes All distributions support batches of independent distributions of that type. The batch shape is determined by broadcasting together the parameters. The shape of arguments to __init__, cdf, log_cdf, prob, and log_prob reflect this broadcasting, as does the return value of sample and sample_n. sample_n_shape = [n] + batch_shape + event_shape, where sample_n_shape is the shape of the Tensor returned from sample_n, n is the number of samples, batch_shape defines how many independent distributions there are, and event_shape defines the shape of samples from each of those independent distributions. Samples are independent along the batch_shape dimensions, but not necessarily so along the event_shape dimensions (depending on the particulars of the underlying distribution). Using the Uniform distribution as an example: minval = 3.0
maxval = [[4.0, 6.0],
[10.0, 12.0]]
# Broadcasting:
# This instance represents 4 Uniform distributions. Each has a lower bound at
# 3.0 as the `minval` parameter was broadcasted to match `maxval`'s shape.
u = Uniform(minval, maxval)
# `event_shape` is `TensorShape([])`.
event_shape = u.event_shape
# `event_shape_t` is a `Tensor` which will evaluate to [].
event_shape_t = u.event_shape_tensor()
# Sampling returns a sample per distribution. `samples` has shape
# [5, 2, 2], which is [n] + batch_shape + event_shape, where n=5,
# batch_shape=[2, 2], and event_shape=[].
samples = u.sample_n(5)
# The broadcasting holds across methods. Here we use `cdf` as an example. The
# same holds for `log_cdf` and the likelihood functions.
# `cum_prob` has shape [2, 2] as the `value` argument was broadcasted to the
# shape of the `Uniform` instance.
cum_prob_broadcast = u.cdf(4.0)
# `cum_prob`'s shape is [2, 2], one per distribution. No broadcasting
# occurred.
cum_prob_per_dist = u.cdf([[4.0, 5.0],
[6.0, 7.0]])
# INVALID as the `value` argument is not broadcastable to the distribution's
# shape.
cum_prob_invalid = u.cdf([4.0, 5.0, 6.0])
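The snippet above is schematic; a runnable version of the same broadcasting behavior might look as follows (a sketch, assuming the low/high parameter names of tf.compat.v1.distributions.Uniform and TF 1.x-style execution):
import tensorflow.compat.v1 as tf
tfd = tf.distributions
# A scalar `low` broadcast against a [2, 2] `high`: a [2, 2] batch of Uniforms.
u = tfd.Uniform(low=3.0, high=[[4.0, 6.0], [10.0, 12.0]])
print(u.batch_shape)   # ==> (2, 2)
print(u.event_shape)   # ==> ()
samples = u.sample(5)  # Tensor with shape [5, 2, 2]
cum_prob = u.cdf(4.0)  # Tensor with shape [2, 2]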
Shapes There are three important concepts associated with TensorFlow Distributions shapes: Event shape describes the shape of a single draw from the distribution; it may be dependent across dimensions. For scalar distributions, the event shape is []. For a 5-dimensional MultivariateNormal, the event shape is [5]. Batch shape describes independent, not identically distributed draws, aka a "collection" or "bunch" of distributions. Sample shape describes independent, identically distributed draws of batches from the distribution family. The event shape and the batch shape are properties of a Distribution object, whereas the sample shape is associated with a specific call to sample or log_prob. For detailed usage examples of TensorFlow Distributions shapes, see this tutorial. Parameter values leading to undefined statistics or distributions. Some distributions do not have well-defined statistics for all initialization parameter values. For example, the beta distribution is parameterized by positive real numbers concentration1 and concentration0, and does not have well-defined mode if concentration1 < 1 or concentration0 < 1. The user is given the option of raising an exception or returning NaN. a = tf.exp(tf.matmul(logits, weights_a))
b = tf.exp(tf.matmul(logits, weights_b))
# Will raise exception if ANY batch member has a < 1 or b < 1.
dist = distributions.Beta(a, b, allow_nan_stats=False)
mode = dist.mode().eval()
# Will return NaN for batch members with either a < 1 or b < 1.
dist = distributions.Beta(a, b, allow_nan_stats=True) # Default behavior
mode = dist.mode().eval()
In all cases, an exception is raised if invalid parameters are passed, e.g. # Will raise an exception if any Op is run.
negative_a = -1.0 * a # beta distribution by definition has a > 0.
dist = distributions.Beta(negative_a, b, allow_nan_stats=True)
dist.mean().eval()
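As a self-contained sketch of the same allow_nan_stats switch, using the Gamma distribution documented below, whose mode is undefined when concentration < 1 (the names and failure behavior here are assumptions based on this page, not a verbatim excerpt):
import tensorflow.compat.v1 as tf
tfd = tf.distributions
# Default allow_nan_stats=True: the undefined mode evaluates to NaN.
lenient = tfd.Gamma(concentration=0.5, rate=1.0)
# allow_nan_stats=False: evaluating mode() raises an exception instead.
strict = tfd.Gamma(concentration=0.5, rate=1.0, allow_nan_stats=False)
with tf.Session() as sess:
  print(sess.run(lenient.mode()))  # ==> nan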
Args
dtype The type of the event samples. None implies no type-enforcement.
reparameterization_type Instance of ReparameterizationType. If distributions.FULLY_REPARAMETERIZED, this Distribution can be reparameterized in terms of some standard distribution with a function whose Jacobian is constant for the support of the standard distribution. If distributions.NOT_REPARAMETERIZED, then no such reparameterization is available.
validate_args Python bool, default False. When True distribution parameters are checked for validity despite possibly degrading runtime performance. When False invalid inputs may silently render incorrect outputs.
allow_nan_stats Python bool, default True. When True, statistics (e.g., mean, mode, variance) use the value "NaN" to indicate the result is undefined. When False, an exception is raised if one or more of the statistic's batch members are undefined.
parameters Python dict of parameters used to instantiate this Distribution.
graph_parents Python list of graph prerequisites of this Distribution.
name Python str name prefixed to Ops created by this class. Default: subclass name.
Raises
ValueError if any member of graph_parents is None or not a Tensor.
Attributes
allow_nan_stats Python bool describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
batch_shape Shape of a single sample from a single event index as a TensorShape. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
dtype The DType of Tensors handled by this Distribution.
event_shape Shape of a single sample from a single batch as a TensorShape. May be partially defined or unknown.
name Name prepended to all ops created by this Distribution.
parameters Dictionary of parameters used to instantiate this Distribution.
reparameterization_type Describes how samples from the distribution are reparameterized. Currently this is one of the static instances distributions.FULLY_REPARAMETERIZED or distributions.NOT_REPARAMETERIZED.
validate_args Python bool indicating possibly expensive checks are enabled. Methods batch_shape_tensor View source
batch_shape_tensor(
name='batch_shape_tensor'
)
Shape of a single sample from a single event index as a 1-D Tensor. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
Args
name name to give to the op
Returns
batch_shape Tensor. cdf View source
cdf(
value, name='cdf'
)
Cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: cdf(x) := P[X <= x]
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
cdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. copy View source
copy(
**override_parameters_kwargs
)
Creates a deep copy of the distribution.
Note: the copy distribution may continue to depend on the original initialization arguments.
Args
**override_parameters_kwargs String/value dictionary of initialization arguments to override with new values.
Returns
distribution A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs). covariance View source
covariance(
name='covariance'
)
Covariance. Covariance is (possibly) defined only for non-scalar-event distributions. For example, for a length-k, vector-valued distribution, it is calculated as, Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation. Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e., Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.
Args
name Python str prepended to names of ops created by this function.
Returns
covariance Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape). cross_entropy View source
cross_entropy(
other, name='cross_entropy'
)
Computes the (Shannon) cross entropy. Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as: H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
where F denotes the support of the random variable X ~ P.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
cross_entropy self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy. entropy View source
entropy(
name='entropy'
)
Shannon entropy in nats. event_shape_tensor View source
event_shape_tensor(
name='event_shape_tensor'
)
Shape of a single sample from a single batch as a 1-D int32 Tensor.
Args
name name to give to the op
Returns
event_shape Tensor. is_scalar_batch View source
is_scalar_batch(
name='is_scalar_batch'
)
Indicates that batch_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_batch bool scalar Tensor. is_scalar_event View source
is_scalar_event(
name='is_scalar_event'
)
Indicates that event_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_event bool scalar Tensor. kl_divergence View source
kl_divergence(
other, name='kl_divergence'
)
Computes the Kullback--Leibler divergence. Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as: KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
kl_divergence self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence. log_cdf View source
log_cdf(
value, name='log_cdf'
)
Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: log_cdf(x) := Log[ P[X <= x] ]
Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
logcdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_prob View source
log_prob(
value, name='log_prob'
)
Log probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_survival_function View source
log_survival_function(
value, name='log_survival_function'
)
Log survival function. Given random variable X, the survival function is defined: log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
mean View source
mean(
name='mean'
)
Mean. mode View source
mode(
name='mode'
)
Mode. param_shapes View source
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
Shapes of parameters given the desired shape of a call to sample(). This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Subclasses should override class method _param_shapes.
Args
sample_shape Tensor or python list/tuple. Desired shape of a call to sample().
name name to prepend ops with.
Returns dict of parameter name to Tensor shapes.
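For example (a sketch, assuming the Normal distribution from this module with loc and scale parameters):
shapes = tf.compat.v1.distributions.Normal.param_shapes(sample_shape=[100])
# ==> {'loc': <Tensor with value [100]>, 'scale': <Tensor with value [100]>}
# i.e., Normal(loc=tf.zeros([100]), scale=tf.ones([100])).sample() has shape [100].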
param_static_shapes View source
@classmethod
param_static_shapes(
sample_shape
)
param_shapes with static (i.e. TensorShape) shapes. This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically. Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.
Args
sample_shape TensorShape or python list/tuple. Desired shape of a call to sample().
Returns dict of parameter name to TensorShape.
Raises
ValueError if sample_shape is a TensorShape and is not fully defined. prob View source
prob(
value, name='prob'
)
Probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. quantile View source
quantile(
value, name='quantile'
)
Quantile function. Aka "inverse cdf" or "percent point function". Given random variable X and p in [0, 1], the quantile is: quantile(p) := x such that P[X <= x] == p
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
quantile a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. sample View source
sample(
sample_shape=(), seed=None, name='sample'
)
Generate samples of the specified shape. Note that a call to sample() without arguments will generate a single sample.
Args
sample_shape 0D or 1D int32 Tensor. Shape of the generated samples.
seed Python integer seed for RNG
name name to give to the op.
Returns
samples a Tensor with prepended dimensions sample_shape. stddev View source
stddev(
name='stddev'
)
Standard deviation. Standard deviation is defined as, stddev = E[(X - E[X])**2]**0.5
where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
stddev Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). survival_function View source
survival_function(
value, name='survival_function'
)
Survival function. Given random variable X, the survival function is defined: survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
variance View source
variance(
name='variance'
)
Variance. Variance is defined as, Var = E[(X - E[X])**2]
where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
variance Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). | tensorflow.compat.v1.distributions.distribution |
tf.compat.v1.distributions.Exponential Exponential distribution. Inherits From: Gamma, Distribution
tf.compat.v1.distributions.Exponential(
rate, validate_args=False, allow_nan_stats=True, name='Exponential'
)
The Exponential distribution is parameterized by an event rate parameter. Mathematical Details The probability density function (pdf) is, pdf(x; lambda, x > 0) = exp(-lambda x) / Z
Z = 1 / lambda
where rate = lambda and Z is the normalizing constant. The Exponential distribution is a special case of the Gamma distribution, i.e., Exponential(rate) = Gamma(concentration=1., rate)
The Exponential distribution uses a rate parameter, or "inverse scale", which can be intuited as, X ~ Exponential(rate=1)
Y = X / rate
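A short usage sketch (parameter broadcasting plus the analytic moments mean = stddev = 1 / rate; assuming eager or session execution):
import tensorflow.compat.v1 as tf
tfd = tf.distributions
dist = tfd.Exponential(rate=[1.0, 2.0])  # batch of two distributions
dist.mean()    # ==> [1.0, 0.5]
dist.stddev()  # ==> [1.0, 0.5]
samples = dist.sample(10)  # shape [10, 2]; all samples are non-negative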
Args
rate Floating point tensor, equivalent to 1 / mean. Must contain only positive values.
validate_args Python bool, default False. When True distribution parameters are checked for validity despite possibly degrading runtime performance. When False invalid inputs may silently render incorrect outputs.
allow_nan_stats Python bool, default True. When True, statistics (e.g., mean, mode, variance) use the value "NaN" to indicate the result is undefined. When False, an exception is raised if one or more of the statistic's batch members are undefined.
name Python str name prefixed to Ops created by this class.
Attributes
allow_nan_stats Python bool describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
batch_shape Shape of a single sample from a single event index as a TensorShape. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
concentration Concentration parameter.
dtype The DType of Tensors handled by this Distribution.
event_shape Shape of a single sample from a single batch as a TensorShape. May be partially defined or unknown.
name Name prepended to all ops created by this Distribution.
parameters Dictionary of parameters used to instantiate this Distribution.
rate Rate parameter.
reparameterization_type Describes how samples from the distribution are reparameterized. Currently this is one of the static instances distributions.FULLY_REPARAMETERIZED or distributions.NOT_REPARAMETERIZED.
validate_args Python bool indicating possibly expensive checks are enabled. Methods batch_shape_tensor View source
batch_shape_tensor(
name='batch_shape_tensor'
)
Shape of a single sample from a single event index as a 1-D Tensor. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
Args
name name to give to the op
Returns
batch_shape Tensor. cdf View source
cdf(
value, name='cdf'
)
Cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: cdf(x) := P[X <= x]
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
cdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. copy View source
copy(
**override_parameters_kwargs
)
Creates a deep copy of the distribution.
Note: the copy distribution may continue to depend on the original initialization arguments.
Args
**override_parameters_kwargs String/value dictionary of initialization arguments to override with new values.
Returns
distribution A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs). covariance View source
covariance(
name='covariance'
)
Covariance. Covariance is (possibly) defined only for non-scalar-event distributions. For example, for a length-k, vector-valued distribution, it is calculated as, Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation. Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e., Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.
Args
name Python str prepended to names of ops created by this function.
Returns
covariance Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape). cross_entropy View source
cross_entropy(
other, name='cross_entropy'
)
Computes the (Shannon) cross entropy. Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as: H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
where F denotes the support of the random variable X ~ P.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
cross_entropy self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy. entropy View source
entropy(
name='entropy'
)
Shannon entropy in nats. event_shape_tensor View source
event_shape_tensor(
name='event_shape_tensor'
)
Shape of a single sample from a single batch as a 1-D int32 Tensor.
Args
name name to give to the op
Returns
event_shape Tensor. is_scalar_batch View source
is_scalar_batch(
name='is_scalar_batch'
)
Indicates that batch_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_batch bool scalar Tensor. is_scalar_event View source
is_scalar_event(
name='is_scalar_event'
)
Indicates that event_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_event bool scalar Tensor. kl_divergence View source
kl_divergence(
other, name='kl_divergence'
)
Computes the Kullback--Leibler divergence. Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as: KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
kl_divergence self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence. log_cdf View source
log_cdf(
value, name='log_cdf'
)
Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: log_cdf(x) := Log[ P[X <= x] ]
Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
logcdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_prob View source
log_prob(
value, name='log_prob'
)
Log probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_survival_function View source
log_survival_function(
value, name='log_survival_function'
)
Log survival function. Given random variable X, the survival function is defined: log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
mean View source
mean(
name='mean'
)
Mean. mode View source
mode(
name='mode'
)
Mode. Additional documentation from Gamma: The mode of a gamma distribution is (shape - 1) / rate when shape > 1, and NaN otherwise. If self.allow_nan_stats is False, an exception will be raised rather than returning NaN. param_shapes View source
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
Shapes of parameters given the desired shape of a call to sample(). This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Subclasses should override class method _param_shapes.
Args
sample_shape Tensor or python list/tuple. Desired shape of a call to sample().
name name to prepend ops with.
Returns dict of parameter name to Tensor shapes.
param_static_shapes View source
@classmethod
param_static_shapes(
sample_shape
)
param_shapes with static (i.e. TensorShape) shapes. This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically. Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.
Args
sample_shape TensorShape or python list/tuple. Desired shape of a call to sample().
Returns dict of parameter name to TensorShape.
Raises
ValueError if sample_shape is a TensorShape and is not fully defined. prob View source
prob(
value, name='prob'
)
Probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. quantile View source
quantile(
value, name='quantile'
)
Quantile function. Aka "inverse cdf" or "percent point function". Given random variable X and p in [0, 1], the quantile is: quantile(p) := x such that P[X <= x] == p
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
quantile a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. sample View source
sample(
sample_shape=(), seed=None, name='sample'
)
Generate samples of the specified shape. Note that a call to sample() without arguments will generate a single sample.
Args
sample_shape 0D or 1D int32 Tensor. Shape of the generated samples.
seed Python integer seed for RNG
name name to give to the op.
Returns
samples a Tensor with prepended dimensions sample_shape. stddev View source
stddev(
name='stddev'
)
Standard deviation. Standard deviation is defined as, stddev = E[(X - E[X])**2]**0.5
where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
stddev Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). survival_function View source
survival_function(
value, name='survival_function'
)
Survival function. Given random variable X, the survival function is defined: survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
variance View source
variance(
name='variance'
)
Variance. Variance is defined as, Var = E[(X - E[X])**2]
where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
variance Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). | tensorflow.compat.v1.distributions.exponential |
tf.compat.v1.distributions.Gamma Gamma distribution. Inherits From: Distribution
tf.compat.v1.distributions.Gamma(
concentration, rate, validate_args=False, allow_nan_stats=True,
name='Gamma'
)
The Gamma distribution is defined over positive real numbers using parameters concentration (aka "alpha") and rate (aka "beta"). Mathematical Details The probability density function (pdf) is, pdf(x; alpha, beta, x > 0) = x**(alpha - 1) exp(-x beta) / Z
Z = Gamma(alpha) beta**(-alpha)
where:
concentration = alpha, alpha > 0,
rate = beta, beta > 0,
Z is the normalizing constant, and,
Gamma is the gamma function. The cumulative density function (cdf) is, cdf(x; alpha, beta, x > 0) = GammaInc(alpha, beta x) / Gamma(alpha)
where GammaInc is the lower incomplete Gamma function. The parameters can be intuited via their relationship to mean and stddev, concentration = alpha = (mean / stddev)**2
rate = beta = mean / stddev**2 = concentration / mean
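A quick numeric check of this parameterization (a sketch; with mean 3.0 and stddev 1.5 it gives concentration 4.0 and rate 4/3):
mean, stddev = 3.0, 1.5
concentration = (mean / stddev)**2  # ==> 4.0
rate = concentration / mean         # ==> 1.333...
dist = tf.compat.v1.distributions.Gamma(concentration, rate)
# dist.mean() ==> 3.0 and dist.stddev() ==> 1.5, recovering the targets.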
Distribution parameters are automatically broadcast in all functions; see examples for details. Warning: The samples of this distribution are always non-negative. However, the samples that are smaller than np.finfo(dtype).tiny are rounded to this value, so it appears more often than it should. This should only be noticeable when the concentration is very small, or the rate is very large. See note in tf.random.gamma docstring. Samples of this distribution are reparameterized (pathwise differentiable). The derivatives are computed using the approach described in (Figurnov et al., 2018). Examples import tensorflow_probability as tfp
tfd = tfp.distributions
dist = tfd.Gamma(concentration=3.0, rate=2.0)
dist2 = tfd.Gamma(concentration=[3.0, 4.0], rate=[2.0, 3.0])
Compute the gradients of samples w.r.t. the parameters: concentration = tf.constant(3.0)
rate = tf.constant(2.0)
dist = tfd.Gamma(concentration, rate)
samples = dist.sample(5) # Shape [5]
loss = tf.reduce_mean(tf.square(samples)) # Arbitrary loss function
# Unbiased stochastic gradients of the loss function
grads = tf.gradients(loss, [concentration, rate])
References: Implicit Reparameterization Gradients: Figurnov et al., 2018 (pdf)
Args
concentration Floating point tensor, the concentration params of the distribution(s). Must contain only positive values.
rate Floating point tensor, the inverse scale params of the distribution(s). Must contain only positive values.
validate_args Python bool, default False. When True distribution parameters are checked for validity despite possibly degrading runtime performance. When False invalid inputs may silently render incorrect outputs.
allow_nan_stats Python bool, default True. When True, statistics (e.g., mean, mode, variance) use the value "NaN" to indicate the result is undefined. When False, an exception is raised if one or more of the statistic's batch members are undefined.
name Python str name prefixed to Ops created by this class.
Raises
TypeError if concentration and rate are different dtypes.
Attributes
allow_nan_stats Python bool describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
batch_shape Shape of a single sample from a single event index as a TensorShape. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
concentration Concentration parameter.
dtype The DType of Tensors handled by this Distribution.
event_shape Shape of a single sample from a single batch as a TensorShape. May be partially defined or unknown.
name Name prepended to all ops created by this Distribution.
parameters Dictionary of parameters used to instantiate this Distribution.
rate Rate parameter.
reparameterization_type Describes how samples from the distribution are reparameterized. Currently this is one of the static instances distributions.FULLY_REPARAMETERIZED or distributions.NOT_REPARAMETERIZED.
validate_args Python bool indicating possibly expensive checks are enabled. Methods batch_shape_tensor View source
batch_shape_tensor(
name='batch_shape_tensor'
)
Shape of a single sample from a single event index as a 1-D Tensor. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
Args
name name to give to the op
Returns
batch_shape Tensor. cdf View source
cdf(
value, name='cdf'
)
Cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: cdf(x) := P[X <= x]
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
cdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. copy View source
copy(
**override_parameters_kwargs
)
Creates a deep copy of the distribution.
Note: the copy distribution may continue to depend on the original initialization arguments.
Args
**override_parameters_kwargs String/value dictionary of initialization arguments to override with new values.
Returns
distribution A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs). covariance View source
covariance(
name='covariance'
)
Covariance. Covariance is (possibly) defined only for non-scalar-event distributions. For example, for a length-k, vector-valued distribution, it is calculated as, Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation. Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e., Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.
Args
name Python str prepended to names of ops created by this function.
Returns
covariance Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape). cross_entropy View source
cross_entropy(
other, name='cross_entropy'
)
Computes the (Shannon) cross entropy. Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as: H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
where F denotes the support of the random variable X ~ P.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
cross_entropy self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy. entropy View source
entropy(
name='entropy'
)
Shannon entropy in nats. event_shape_tensor View source
event_shape_tensor(
name='event_shape_tensor'
)
Shape of a single sample from a single batch as a 1-D int32 Tensor.
Args
name name to give to the op
Returns
event_shape Tensor. is_scalar_batch View source
is_scalar_batch(
name='is_scalar_batch'
)
Indicates that batch_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_batch bool scalar Tensor. is_scalar_event View source
is_scalar_event(
name='is_scalar_event'
)
Indicates that event_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_event bool scalar Tensor. kl_divergence View source
kl_divergence(
other, name='kl_divergence'
)
Computes the Kullback--Leibler divergence. Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as: KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
kl_divergence self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence. log_cdf View source
log_cdf(
value, name='log_cdf'
)
Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: log_cdf(x) := Log[ P[X <= x] ]
Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
logcdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_prob View source
log_prob(
value, name='log_prob'
)
Log probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_survival_function View source
log_survival_function(
value, name='log_survival_function'
)
Log survival function. Given random variable X, the survival function is defined: log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
mean View source
mean(
name='mean'
)
Mean. mode View source
mode(
name='mode'
)
Mode. Additional documentation from Gamma: The mode of a gamma distribution is (shape - 1) / rate when shape > 1, and NaN otherwise. If self.allow_nan_stats is False, an exception will be raised rather than returning NaN. param_shapes View source
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
Shapes of parameters given the desired shape of a call to sample(). This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Subclasses should override class method _param_shapes.
Args
sample_shape Tensor or python list/tuple. Desired shape of a call to sample().
name name to prepend ops with.
Returns dict of parameter name to Tensor shapes.
param_static_shapes View source
@classmethod
param_static_shapes(
sample_shape
)
param_shapes with static (i.e. TensorShape) shapes. This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically. Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.
Args
sample_shape TensorShape or python list/tuple. Desired shape of a call to sample().
Returns dict of parameter name to TensorShape.
Raises
ValueError if sample_shape is a TensorShape and is not fully defined. prob View source
prob(
value, name='prob'
)
Probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. quantile View source
quantile(
value, name='quantile'
)
Quantile function. Aka "inverse cdf" or "percent point function". Given random variable X and p in [0, 1], the quantile is: quantile(p) := x such that P[X <= x] == p
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
quantile a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. sample View source
sample(
sample_shape=(), seed=None, name='sample'
)
Generate samples of the specified shape. Note that a call to sample() without arguments will generate a single sample.
Args
sample_shape 0D or 1D int32 Tensor. Shape of the generated samples.
seed Python integer seed for RNG
name name to give to the op.
Returns
samples a Tensor with prepended dimensions sample_shape. stddev View source
stddev(
name='stddev'
)
Standard deviation. Standard deviation is defined as, stddev = E[(X - E[X])**2]**0.5
where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
stddev Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). survival_function View source
survival_function(
value, name='survival_function'
)
Survival function. Given random variable X, the survival function is defined: survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
variance View source
variance(
name='variance'
)
Variance. Variance is defined as, Var = E[(X - E[X])**2]
where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
variance Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). | tensorflow.compat.v1.distributions.gamma |
tf.compat.v1.distributions.kl_divergence Get the KL-divergence KL(distribution_a || distribution_b). (deprecated)
tf.compat.v1.distributions.kl_divergence(
distribution_a, distribution_b, allow_nan_stats=True, name=None
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2019-01-01. Instructions for updating: The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use tfp.distributions instead of tf.distributions. If there is no KL method registered specifically for type(distribution_a) and type(distribution_b), then the class hierarchies of these types are searched. If one KL method is registered between any pairs of classes in these two parent hierarchies, it is used. If more than one such registered method exists, the method whose registered classes have the shortest sum MRO paths to the input types is used. If more than one such shortest path exists, the first method identified in the search is used (favoring a shorter MRO distance to type(distribution_a)).
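A minimal usage sketch (assuming the Normal distribution from this module; for two unit-scale Normals the analytic result is (loc_a - loc_b)**2 / 2):
import tensorflow.compat.v1 as tf
tfd = tf.distributions
a = tfd.Normal(loc=0.0, scale=1.0)
b = tfd.Normal(loc=1.0, scale=1.0)
kl = tfd.kl_divergence(a, b)  # scalar Tensor; evaluates to 0.5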
Args
distribution_a The first distribution.
distribution_b The second distribution.
allow_nan_stats Python bool, default True. When True, statistics (e.g., mean, mode, variance) use the value "NaN" to indicate the result is undefined. When False, an exception is raised if one or more of the statistic's batch members are undefined.
name Python str name prefixed to Ops created by this class.
Returns A Tensor with the batchwise KL-divergence between distribution_a and distribution_b.
Raises
NotImplementedError If no KL method is defined for distribution types of distribution_a and distribution_b. | tensorflow.compat.v1.distributions.kl_divergence |
tf.compat.v1.distributions.Laplace The Laplace distribution with location loc and scale parameters. Inherits From: Distribution
tf.compat.v1.distributions.Laplace(
loc, scale, validate_args=False, allow_nan_stats=True, name='Laplace'
)
Mathematical details The probability density function (pdf) of this distribution is, pdf(x; mu, sigma) = exp(-|x - mu| / sigma) / Z
Z = 2 sigma
where loc = mu, scale = sigma, and Z is the normalization constant. Note that the Laplace distribution can be thought of as two exponential distributions spliced together "back-to-back." The Laplace distribution is a member of the location-scale family, i.e., it can be constructed as, X ~ Laplace(loc=0, scale=1)
Y = loc + scale * X
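A short usage sketch (the density at x = loc is 1 / (2 scale); assuming eager or session execution):
import tensorflow.compat.v1 as tf
tfd = tf.distributions
dist = tfd.Laplace(loc=0.0, scale=[1.0, 2.0])  # batch of two distributions
dist.prob(0.0)  # ==> [0.5, 0.25]
samples = dist.sample(3)  # shape [3, 2]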
Args
loc Floating point tensor which characterizes the location (center) of the distribution.
scale Positive floating point tensor which characterizes the spread of the distribution.
validate_args Python bool, default False. When True distribution parameters are checked for validity despite possibly degrading runtime performance. When False invalid inputs may silently render incorrect outputs.
allow_nan_stats Python bool, default True. When True, statistics (e.g., mean, mode, variance) use the value "NaN" to indicate the result is undefined. When False, an exception is raised if one or more of the statistic's batch members are undefined.
name Python str name prefixed to Ops created by this class.
Raises
TypeError if loc and scale are of different dtype.
Attributes
allow_nan_stats Python bool describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
batch_shape Shape of a single sample from a single event index as a TensorShape. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
dtype The DType of Tensors handled by this Distribution.
event_shape Shape of a single sample from a single batch as a TensorShape. May be partially defined or unknown.
loc Distribution parameter for the location.
name Name prepended to all ops created by this Distribution.
parameters Dictionary of parameters used to instantiate this Distribution.
reparameterization_type Describes how samples from the distribution are reparameterized. Currently this is one of the static instances distributions.FULLY_REPARAMETERIZED or distributions.NOT_REPARAMETERIZED.
scale Distribution parameter for scale.
validate_args Python bool indicating possibly expensive checks are enabled. Methods batch_shape_tensor View source
batch_shape_tensor(
name='batch_shape_tensor'
)
Shape of a single sample from a single event index as a 1-D Tensor. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
Args
name name to give to the op
Returns
batch_shape Tensor. cdf View source
cdf(
value, name='cdf'
)
Cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: cdf(x) := P[X <= x]
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
cdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. copy View source
copy(
**override_parameters_kwargs
)
Creates a deep copy of the distribution.
Note: the copy distribution may continue to depend on the original initialization arguments.
Args
**override_parameters_kwargs String/value dictionary of initialization arguments to override with new values.
Returns
distribution A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs). covariance View source
covariance(
name='covariance'
)
Covariance. Covariance is (possibly) defined only for non-scalar-event distributions. For example, for a length-k, vector-valued distribution, it is calculated as, Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation. Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e., Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.
Args
name Python str prepended to names of ops created by this function.
Returns
covariance Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape). cross_entropy View source
cross_entropy(
other, name='cross_entropy'
)
Computes the (Shannon) cross entropy. Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as: H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
where F denotes the support of the random variable X ~ P.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
cross_entropy self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy. entropy View source
entropy(
name='entropy'
)
Shannon entropy in nats. event_shape_tensor View source
event_shape_tensor(
name='event_shape_tensor'
)
Shape of a single sample from a single batch as a 1-D int32 Tensor.
Args
name name to give to the op
Returns
event_shape Tensor. is_scalar_batch View source
is_scalar_batch(
name='is_scalar_batch'
)
Indicates that batch_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_batch bool scalar Tensor. is_scalar_event View source
is_scalar_event(
name='is_scalar_event'
)
Indicates that event_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_event bool scalar Tensor. kl_divergence View source
kl_divergence(
other, name='kl_divergence'
)
Computes the Kullback--Leibler divergence. Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as: KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
kl_divergence self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence. log_cdf View source
log_cdf(
value, name='log_cdf'
)
Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: log_cdf(x) := Log[ P[X <= x] ]
Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
logcdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_prob View source
log_prob(
value, name='log_prob'
)
Log probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_survival_function View source
log_survival_function(
value, name='log_survival_function'
)
Log survival function. Given random variable X, the survival function is defined: log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
mean View source
mean(
name='mean'
)
Mean. mode View source
mode(
name='mode'
)
Mode. param_shapes View source
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
Shapes of parameters given the desired shape of a call to sample(). This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Subclasses should override class method _param_shapes.
Args
sample_shape Tensor or python list/tuple. Desired shape of a call to sample().
name name to prepend ops with.
Returns dict of parameter name to Tensor shapes.
param_static_shapes View source
@classmethod
param_static_shapes(
sample_shape
)
param_shapes with static (i.e. TensorShape) shapes. This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically. Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.
Args
sample_shape TensorShape or python list/tuple. Desired shape of a call to sample().
Returns dict of parameter name to TensorShape.
Raises
ValueError if sample_shape is a TensorShape and is not fully defined. prob View source
prob(
value, name='prob'
)
Probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. quantile View source
quantile(
value, name='quantile'
)
Quantile function. Aka "inverse cdf" or "percent point function". Given random variable X and p in [0, 1], the quantile is: quantile(p) := x such that P[X <= x] == p
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
quantile a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. sample View source
sample(
sample_shape=(), seed=None, name='sample'
)
Generate samples of the specified shape. Note that a call to sample() without arguments will generate a single sample.
Args
sample_shape 0D or 1D int32 Tensor. Shape of the generated samples.
seed Python integer seed for RNG
name name to give to the op.
Returns
samples a Tensor with prepended dimensions sample_shape. stddev View source
stddev(
name='stddev'
)
Standard deviation. Standard deviation is defined as, stddev = E[(X - E[X])**2]**0.5
where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
stddev Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). survival_function View source
survival_function(
value, name='survival_function'
)
Survival function. Given random variable X, the survival function is defined: survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
variance View source
variance(
name='variance'
)
Variance. Variance is defined as, Var = E[(X - E[X])**2]
where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
variance Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). | tensorflow.compat.v1.distributions.laplace |
tf.compat.v1.distributions.Multinomial Multinomial distribution. Inherits From: Distribution
tf.compat.v1.distributions.Multinomial(
total_count, logits=None, probs=None, validate_args=False, allow_nan_stats=True,
name='Multinomial'
)
This Multinomial distribution is parameterized by probs, a (batch of) length-K prob (probability) vectors (K > 1) such that tf.reduce_sum(probs, -1) = 1, and a total_count number of trials, i.e., the number of trials per draw from the Multinomial. It is defined over a (batch of) length-K vector counts such that tf.reduce_sum(counts, -1) = total_count. The Multinomial is identically the Binomial distribution when K = 2. Mathematical Details The Multinomial is a distribution over K-class counts, i.e., a length-K vector of non-negative integer counts = n = [n_0, ..., n_{K-1}]. The probability mass function (pmf) is, pmf(n; pi, N) = prod_j (pi_j)**n_j / Z
Z = (prod_j n_j!) / N!
where:
probs = pi = [pi_0, ..., pi_{K-1}], pi_j > 0, sum_j pi_j = 1,
total_count = N, N a positive integer,
Z is the normalization constant, and,
N! denotes N factorial. Distribution parameters are automatically broadcast in all functions; see examples for details. Pitfalls The number of classes, K, must not exceed either: the largest integer representable by self.dtype, i.e., 2**(mantissa_bits+1) (IEEE 754), or the maximum Tensor index, i.e., 2**31-1. In other words, K <= min(2**31-1, {
tf.float16: 2**11,
tf.float32: 2**24,
tf.float64: 2**53 }[param.dtype])
Note: This condition is validated only when self.validate_args = True.
Examples Create a 3-class distribution, with the 3rd class most likely to be drawn, using logits. logits = [-50., -43, 0]
dist = Multinomial(total_count=4., logits=logits)
Create a 3-class distribution, with the 3rd class most likely to be drawn. p = [.2, .3, .5]
dist = Multinomial(total_count=4., probs=p)
The distribution functions can be evaluated on counts. # counts same shape as p.
counts = [1., 0, 3]
dist.prob(counts) # Shape []
# p will be broadcast to [[.2, .3, .5], [.2, .3, .5]] to match counts.
counts = [[1., 2, 1], [2, 2, 0]]
dist.prob(counts) # Shape [2]
# p will be broadcast to shape [5, 7, 3] to match counts.
counts = [[...]] # Shape [5, 7, 3]
dist.prob(counts) # Shape [5, 7]
Create a 2-batch of 3-class distributions. p = [[.1, .2, .7], [.3, .3, .4]] # Shape [2, 3]
dist = Multinomial(total_count=[4., 5], probs=p)
counts = [[2., 1, 1], [3, 1, 1]]
dist.prob(counts) # Shape [2]
dist.sample(5) # Shape [5, 2, 3]
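A hedged sketch of the first two moments, continuing the 2-batch distribution above (shapes follow the mean and covariance method docs below; the formulas are the standard Multinomial moments):
dist.mean() # Shape [2, 3]; equals total_count[..., newaxis] * probs
dist.covariance() # Shape [2, 3, 3]; N * (diag(p) - outer(p, p)) per batch member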
Args
total_count Non-negative floating point tensor with shape broadcastable to [N1,..., Nm] with m >= 0. Defines this as a batch of N1 x ... x Nm different Multinomial distributions. Its components should be equal to integer values.
logits Floating point tensor representing unnormalized log-probabilities of a positive event with shape broadcastable to [N1,..., Nm, K] m >= 0, and the same dtype as total_count. Defines this as a batch of N1 x ... x Nm different K class Multinomial distributions. Only one of logits or probs should be passed in.
probs Positive floating point tensor with shape broadcastable to [N1,..., Nm, K] m >= 0 and same dtype as total_count. Defines this as a batch of N1 x ... x Nm different K class Multinomial distributions. probs's components in the last portion of its shape should sum to 1. Only one of logits or probs should be passed in.
validate_args Python bool, default False. When True distribution parameters are checked for validity despite possibly degrading runtime performance. When False invalid inputs may silently render incorrect outputs.
allow_nan_stats Python bool, default True. When True, statistics (e.g., mean, mode, variance) use the value "NaN" to indicate the result is undefined. When False, an exception is raised if one or more of the statistic's batch members are undefined.
name Python str name prefixed to Ops created by this class.
Attributes
allow_nan_stats Python bool describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
batch_shape Shape of a single sample from a single event index as a TensorShape. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
dtype The DType of Tensors handled by this Distribution.
event_shape Shape of a single sample from a single batch as a TensorShape. May be partially defined or unknown.
logits Vector of coordinatewise logits.
name Name prepended to all ops created by this Distribution.
parameters Dictionary of parameters used to instantiate this Distribution.
probs Probability of drawing a 1 in that coordinate.
reparameterization_type Describes how samples from the distribution are reparameterized. Currently this is one of the static instances distributions.FULLY_REPARAMETERIZED or distributions.NOT_REPARAMETERIZED.
total_count Number of trials used to construct a sample.
validate_args Python bool indicating possibly expensive checks are enabled. Methods batch_shape_tensor View source
batch_shape_tensor(
name='batch_shape_tensor'
)
Shape of a single sample from a single event index as a 1-D Tensor. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
Args
name name to give to the op
Returns
batch_shape Tensor. cdf View source
cdf(
value, name='cdf'
)
Cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: cdf(x) := P[X <= x]
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
cdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. copy View source
copy(
**override_parameters_kwargs
)
Creates a deep copy of the distribution.
Note: the copy distribution may continue to depend on the original initialization arguments.
Args
**override_parameters_kwargs String/value dictionary of initialization arguments to override with new values.
Returns
distribution A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs). covariance View source
covariance(
name='covariance'
)
Covariance. Covariance is (possibly) defined only for non-scalar-event distributions. For example, for a length-k, vector-valued distribution, it is calculated as, Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation. Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e., Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.
Args
name Python str prepended to names of ops created by this function.
Returns
covariance Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape). cross_entropy View source
cross_entropy(
other, name='cross_entropy'
)
Computes the (Shannon) cross entropy. Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as: H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
where F denotes the support of the random variable X ~ P.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
cross_entropy self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy. entropy View source
entropy(
name='entropy'
)
Shannon entropy in nats. event_shape_tensor View source
event_shape_tensor(
name='event_shape_tensor'
)
Shape of a single sample from a single batch as a 1-D int32 Tensor.
Args
name name to give to the op
Returns
event_shape Tensor. is_scalar_batch View source
is_scalar_batch(
name='is_scalar_batch'
)
Indicates that batch_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_batch bool scalar Tensor. is_scalar_event View source
is_scalar_event(
name='is_scalar_event'
)
Indicates that event_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_event bool scalar Tensor. kl_divergence View source
kl_divergence(
other, name='kl_divergence'
)
Computes the Kullback--Leibler divergence. Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as: KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
kl_divergence self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence. log_cdf View source
log_cdf(
value, name='log_cdf'
)
Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: log_cdf(x) := Log[ P[X <= x] ]
Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
logcdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_prob View source
log_prob(
value, name='log_prob'
)
Log probability density/mass function. Additional documentation from Multinomial: For each batch of counts, value = [n_0, ... ,n_{k-1}], P[value] is the probability that after sampling self.total_count draws from this Multinomial distribution, the number of draws falling in class j is n_j. This definition is exchangeable: different sequences have the same counts, so the probability includes a combinatorial coefficient.
Note: value must be a non-negative tensor with dtype self.dtype, must have no fractional components, and must satisfy tf.reduce_sum(value, -1) = self.total_count. Its shape must be broadcastable with self.probs and self.total_count.
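For illustration, a minimal hedged sketch of a valid evaluation (the counts are whole numbers summing to total_count = 4 along the last axis):
dist = Multinomial(total_count=4., probs=[.2, .3, .5])
dist.log_prob([1., 0., 3.]) # log P[1 draw of class 0 and 3 draws of class 2]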
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_survival_function View source
log_survival_function(
value, name='log_survival_function'
)
Log survival function. Given random variable X, the survival function is defined: log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
mean View source
mean(
name='mean'
)
Mean. mode View source
mode(
name='mode'
)
Mode. param_shapes View source
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
Shapes of parameters given the desired shape of a call to sample(). This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Subclasses should override class method _param_shapes.
Args
sample_shape Tensor or python list/tuple. Desired shape of a call to sample().
name name to prepend ops with.
Returns dict of parameter name to Tensor shapes.
param_static_shapes View source
@classmethod
param_static_shapes(
sample_shape
)
param_shapes with static (i.e. TensorShape) shapes. This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically. Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.
Args
sample_shape TensorShape or python list/tuple. Desired shape of a call to sample().
Returns dict of parameter name to TensorShape.
Raises
ValueError if sample_shape is a TensorShape and is not fully defined. prob View source
prob(
value, name='prob'
)
Probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. quantile View source
quantile(
value, name='quantile'
)
Quantile function. Aka "inverse cdf" or "percent point function". Given random variable X and p in [0, 1], the quantile is: quantile(p) := x such that P[X <= x] == p
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
quantile a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. sample View source
sample(
sample_shape=(), seed=None, name='sample'
)
Generate samples of the specified shape. Note that a call to sample() without arguments will generate a single sample.
Args
sample_shape 0D or 1D int32 Tensor. Shape of the generated samples.
seed Python integer seed for RNG
name name to give to the op.
Returns
samples a Tensor with prepended dimensions sample_shape. stddev View source
stddev(
name='stddev'
)
Standard deviation. Standard deviation is defined as, stddev = E[(X - E[X])**2]**0.5
where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
stddev Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). survival_function View source
survival_function(
value, name='survival_function'
)
Survival function. Given random variable X, the survival function is defined: survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
variance View source
variance(
name='variance'
)
Variance. Variance is defined as, Var = E[(X - E[X])**2]
where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
variance Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). | tensorflow.compat.v1.distributions.multinomial |
tf.compat.v1.distributions.Normal The Normal distribution with location loc and scale parameters. Inherits From: Distribution
tf.compat.v1.distributions.Normal(
loc, scale, validate_args=False, allow_nan_stats=True, name='Normal'
)
Mathematical details The probability density function (pdf) is, pdf(x; mu, sigma) = exp(-0.5 (x - mu)**2 / sigma**2) / Z
Z = (2 pi sigma**2)**0.5
where loc = mu is the mean, scale = sigma is the std. deviation, and, Z is the normalization constant. The Normal distribution is a member of the location-scale family, i.e., it can be constructed as, X ~ Normal(loc=0, scale=1)
Y = loc + scale * X
Examples Examples of initialization of one or a batch of distributions. import tensorflow_probability as tfp
tfd = tfp.distributions
# Define a single scalar Normal distribution.
dist = tfd.Normal(loc=0., scale=3.)
# Evaluate the cdf at 1, returning a scalar.
dist.cdf(1.)
# Define a batch of two scalar valued Normals.
# The first has mean 1 and standard deviation 11, the second 2 and 22.
dist = tfd.Normal(loc=[1, 2.], scale=[11, 22.])
# Evaluate the pdf of the first distribution on 0, and the second on 1.5,
# returning a length two tensor.
dist.prob([0, 1.5])
# Get 3 samples, returning a 3 x 2 tensor.
dist.sample([3])
Arguments are broadcast when possible. # Define a batch of two scalar valued Normals.
# Both have mean 1, but different standard deviations.
dist = tfd.Normal(loc=1., scale=[11, 22.])
# Evaluate the pdf of both distributions on the same point, 3.0,
# returning a length 2 tensor.
dist.prob(3.0)
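A hedged sketch of the divergence methods documented below; the Normal-to-Normal KL is registered in the library, so kl_divergence dispatches to a closed form:
p = tfd.Normal(loc=0., scale=1.)
q = tfd.Normal(loc=1., scale=2.)
p.kl_divergence(q) # scalar Tensor, KL[p || q]
p.cross_entropy(q) # equals p.entropy() + p.kl_divergence(q)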
Args
loc Floating point tensor; the means of the distribution(s).
scale Floating point tensor; the stddevs of the distribution(s). Must contain only positive values.
validate_args Python bool, default False. When True distribution parameters are checked for validity despite possibly degrading runtime performance. When False invalid inputs may silently render incorrect outputs.
allow_nan_stats Python bool, default True. When True, statistics (e.g., mean, mode, variance) use the value "NaN" to indicate the result is undefined. When False, an exception is raised if one or more of the statistic's batch members are undefined.
name Python str name prefixed to Ops created by this class.
Raises
TypeError if loc and scale have different dtype.
Attributes
allow_nan_stats Python bool describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
batch_shape Shape of a single sample from a single event index as a TensorShape. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
dtype The DType of Tensors handled by this Distribution.
event_shape Shape of a single sample from a single batch as a TensorShape. May be partially defined or unknown.
loc Distribution parameter for the mean.
name Name prepended to all ops created by this Distribution.
parameters Dictionary of parameters used to instantiate this Distribution.
reparameterization_type Describes how samples from the distribution are reparameterized. Currently this is one of the static instances distributions.FULLY_REPARAMETERIZED or distributions.NOT_REPARAMETERIZED.
scale Distribution parameter for standard deviation.
validate_args Python bool indicating possibly expensive checks are enabled. Methods batch_shape_tensor View source
batch_shape_tensor(
name='batch_shape_tensor'
)
Shape of a single sample from a single event index as a 1-D Tensor. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
Args
name name to give to the op
Returns
batch_shape Tensor. cdf View source
cdf(
value, name='cdf'
)
Cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: cdf(x) := P[X <= x]
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
cdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. copy View source
copy(
**override_parameters_kwargs
)
Creates a deep copy of the distribution.
Note: the copy distribution may continue to depend on the original initialization arguments.
Args
**override_parameters_kwargs String/value dictionary of initialization arguments to override with new values.
Returns
distribution A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs). covariance View source
covariance(
name='covariance'
)
Covariance. Covariance is (possibly) defined only for non-scalar-event distributions. For example, for a length-k, vector-valued distribution, it is calculated as, Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation. Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e., Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.
Args
name Python str prepended to names of ops created by this function.
Returns
covariance Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape). cross_entropy View source
cross_entropy(
other, name='cross_entropy'
)
Computes the (Shannon) cross entropy. Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as: H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
where F denotes the support of the random variable X ~ P.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
cross_entropy self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy. entropy View source
entropy(
name='entropy'
)
Shannon entropy in nats. event_shape_tensor View source
event_shape_tensor(
name='event_shape_tensor'
)
Shape of a single sample from a single batch as a 1-D int32 Tensor.
Args
name name to give to the op
Returns
event_shape Tensor. is_scalar_batch View source
is_scalar_batch(
name='is_scalar_batch'
)
Indicates that batch_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_batch bool scalar Tensor. is_scalar_event View source
is_scalar_event(
name='is_scalar_event'
)
Indicates that event_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_event bool scalar Tensor. kl_divergence View source
kl_divergence(
other, name='kl_divergence'
)
Computes the Kullback--Leibler divergence. Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as: KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
kl_divergence self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence. log_cdf View source
log_cdf(
value, name='log_cdf'
)
Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: log_cdf(x) := Log[ P[X <= x] ]
Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.
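A brief hedged sketch of why this matters numerically (assuming tf is TensorFlow; values approximate): far in the left tail, cdf underflows to zero in float32, so taking its logarithm loses all information while log_cdf stays finite.
dist = tfd.Normal(loc=0., scale=1.)
tf.math.log(dist.cdf(-20.)) # -inf: cdf(-20.) underflows to 0
dist.log_cdf(-20.) # finite, approximately -203.9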
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
logcdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_prob View source
log_prob(
value, name='log_prob'
)
Log probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_survival_function View source
log_survival_function(
value, name='log_survival_function'
)
Log survival function. Given random variable X, the survival function is defined: log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
mean View source
mean(
name='mean'
)
Mean. mode View source
mode(
name='mode'
)
Mode. param_shapes View source
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
Shapes of parameters given the desired shape of a call to sample(). This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Subclasses should override class method _param_shapes.
Args
sample_shape Tensor or python list/tuple. Desired shape of a call to sample().
name name to prepend ops with.
Returns dict of parameter name to Tensor shapes.
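A hedged sketch using the static variant documented next (the dict keys follow the constructor argument names): to make dist.sample(), called with no arguments, return a [100]-shaped Tensor, each parameter must be batched to shape [100].
tfd.Normal.param_static_shapes([100])
# ==> {'loc': TensorShape([100]), 'scale': TensorShape([100])}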
param_static_shapes View source
@classmethod
param_static_shapes(
sample_shape
)
param_shapes with static (i.e. TensorShape) shapes. This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically. Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.
Args
sample_shape TensorShape or python list/tuple. Desired shape of a call to sample().
Returns dict of parameter name to TensorShape.
Raises
ValueError if sample_shape is a TensorShape and is not fully defined. prob View source
prob(
value, name='prob'
)
Probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. quantile View source
quantile(
value, name='quantile'
)
Quantile function. Aka "inverse cdf" or "percent point function". Given random variable X and p in [0, 1], the quantile is: quantile(p) := x such that P[X <= x] == p
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
quantile a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. sample View source
sample(
sample_shape=(), seed=None, name='sample'
)
Generate samples of the specified shape. Note that a call to sample() without arguments will generate a single sample.
Args
sample_shape 0D or 1D int32 Tensor. Shape of the generated samples.
seed Python integer seed for RNG
name name to give to the op.
Returns
samples a Tensor with prepended dimensions sample_shape. stddev View source
stddev(
name='stddev'
)
Standard deviation. Standard deviation is defined as, stddev = E[(X - E[X])**2]**0.5
where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
stddev Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). survival_function View source
survival_function(
value, name='survival_function'
)
Survival function. Given random variable X, the survival function is defined: survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
variance View source
variance(
name='variance'
)
Variance. Variance is defined as, Var = E[(X - E[X])**2]
where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
variance Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). | tensorflow.compat.v1.distributions.normal |
tf.compat.v1.distributions.RegisterKL Decorator to register a KL divergence implementation function.
tf.compat.v1.distributions.RegisterKL(
dist_cls_a, dist_cls_b
)
Usage:
@distributions.RegisterKL(distributions.Normal, distributions.Normal)
def _kl_normal_mvn(norm_a, norm_b):
  # Return KL(norm_a || norm_b)
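A fuller hedged sketch of the mechanics. The Laplace-to-Normal pair is chosen only on the assumption that the library does not already register it; re-registering an existing pair raises ValueError, per Raises below. The returned value is a placeholder to show dispatch, not the true KL:
ds = tf.compat.v1.distributions

@ds.RegisterKL(ds.Laplace, ds.Normal)
def _kl_laplace_normal(a, b, name=None):
  # Placeholder value only; illustrates dispatch, not the math.
  return tf.zeros_like(a.loc + b.loc)

kl = ds.Laplace(0., 1.).kl_divergence(ds.Normal(0., 1.)) # calls the function above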
Args
dist_cls_a the class of the first argument of the KL divergence.
dist_cls_b the class of the second argument of the KL divergence. Methods __call__ View source
__call__(
kl_fn
)
Perform the KL registration.
Args
kl_fn The function to use for the KL divergence.
Returns kl_fn
Raises
TypeError if kl_fn is not a callable.
ValueError if a KL divergence function has already been registered for the given argument classes. | tensorflow.compat.v1.distributions.registerkl |
tf.compat.v1.distributions.ReparameterizationType Instances of this class represent how sampling is reparameterized.
tf.compat.v1.distributions.ReparameterizationType(
rep_type
)
Two static instances exist in the distributions library, signifying one of two possible properties for samples from a distribution: FULLY_REPARAMETERIZED: Samples from the distribution are fully reparameterized, and straight-through gradients are supported. NOT_REPARAMETERIZED: Samples from the distribution are not fully reparameterized, and straight-through gradients are either partially unsupported or are not supported at all. In this case, for purposes of e.g. RL or variational inference, it is generally safest to wrap the sample results in a stop_gradient call and use policy gradients / surrogate loss instead.
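A hedged sketch of acting on this property (the identity comparison is per __eq__ below; the loss wiring is illustrative):
ds = tf.compat.v1.distributions
dist = ds.Normal(loc=0., scale=1.)
samples = dist.sample(10)
if dist.reparameterization_type == ds.FULLY_REPARAMETERIZED:
  loss_input = samples # pathwise gradients can flow through the samples
else:
  loss_input = tf.stop_gradient(samples) # fall back to surrogate losses
Methods __eq__ View source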
__eq__(
other
)
Determine if this ReparameterizationType is equal to another. Since ReparameterizationType instances are constant static global instances, equality checks if two instances' id() values are equal.
Args
other Object to compare against.
Returns self is other. | tensorflow.compat.v1.distributions.reparameterizationtype |
tf.compat.v1.distributions.StudentT Student's t-distribution. Inherits From: Distribution
tf.compat.v1.distributions.StudentT(
df, loc, scale, validate_args=False, allow_nan_stats=True,
name='StudentT'
)
This distribution has parameters: degree of freedom df, location loc, and scale. Mathematical details The probability density function (pdf) is, pdf(x; df, mu, sigma) = (1 + y**2 / df)**(-0.5 (df + 1)) / Z
where,
y = (x - mu) / sigma
Z = abs(sigma) sqrt(df pi) Gamma(0.5 df) / Gamma(0.5 (df + 1))
where:
loc = mu,
scale = sigma, and,
Z is the normalization constant, and,
Gamma is the gamma function. The StudentT distribution is a member of the location-scale family, i.e., it can be constructed as, X ~ StudentT(df, loc=0, scale=1)
Y = loc + scale * X
Notice that scale has semantics more similar to standard deviation than variance. However, it is not actually the std. deviation; the Student's t-distribution std. dev. is scale sqrt(df / (df - 2)) when df > 2. Samples of this distribution are reparameterized (pathwise differentiable). The derivatives are computed using the approach described in (Figurnov et al., 2018). Examples Examples of initialization of one or a batch of distributions. import tensorflow_probability as tfp
tfd = tfp.distributions
# Define a single scalar Student t distribution.
single_dist = tfd.StudentT(df=3)
# Evaluate the pdf at 1, returning a scalar Tensor.
single_dist.prob(1.)
# Define a batch of two scalar valued Student t's.
# The first has degrees of freedom 2, mean 1, and scale 11.
# The second 3, 2 and 22.
multi_dist = tfd.StudentT(df=[2, 3], loc=[1, 2.], scale=[11, 22.])
# Evaluate the pdf of the first distribution on 0, and the second on 1.5,
# returning a length two tensor.
multi_dist.prob([0, 1.5])
# Get 3 samples, returning a 3 x 2 tensor.
multi_dist.sample(3)
Arguments are broadcast when possible. # Define a batch of two Student's t distributions.
# Both have df 2 and mean 1, but different scales.
dist = tfd.StudentT(df=2, loc=1, scale=[11, 22.])
# Evaluate the pdf of both distributions on the same point, 3.0,
# returning a length 2 tensor.
dist.prob(3.0)
Compute the gradients of samples w.r.t. the parameters: df = tf.constant(2.0)
loc = tf.constant(2.0)
scale = tf.constant(11.0)
dist = tfd.StudentT(df=df, loc=loc, scale=scale)
samples = dist.sample(5) # Shape [5]
loss = tf.reduce_mean(tf.square(samples)) # Arbitrary loss function
# Unbiased stochastic gradients of the loss function
grads = tf.gradients(loss, [df, loc, scale])
References: Implicit Reparameterization Gradients: Figurnov et al., 2018 (pdf)
Args
df Floating-point Tensor. The degrees of freedom of the distribution(s). df must contain only positive values.
loc Floating-point Tensor. The mean(s) of the distribution(s).
scale Floating-point Tensor. The scaling factor(s) for the distribution(s). Note that scale is not technically the standard deviation of this distribution but has semantics more similar to standard deviation than variance.
validate_args Python bool, default False. When True distribution parameters are checked for validity despite possibly degrading runtime performance. When False invalid inputs may silently render incorrect outputs.
allow_nan_stats Python bool, default True. When True, statistics (e.g., mean, mode, variance) use the value "NaN" to indicate the result is undefined. When False, an exception is raised if one or more of the statistic's batch members are undefined.
name Python str name prefixed to Ops created by this class.
Raises
TypeError if loc and scale are different dtypes.
Attributes
allow_nan_stats Python bool describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
batch_shape Shape of a single sample from a single event index as a TensorShape. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
df Degrees of freedom in these Student's t distribution(s).
dtype The DType of Tensors handled by this Distribution.
event_shape Shape of a single sample from a single batch as a TensorShape. May be partially defined or unknown.
loc Locations of these Student's t distribution(s).
name Name prepended to all ops created by this Distribution.
parameters Dictionary of parameters used to instantiate this Distribution.
reparameterization_type Describes how samples from the distribution are reparameterized. Currently this is one of the static instances distributions.FULLY_REPARAMETERIZED or distributions.NOT_REPARAMETERIZED.
scale Scaling factors of these Student's t distribution(s).
validate_args Python bool indicating possibly expensive checks are enabled. Methods batch_shape_tensor View source
batch_shape_tensor(
name='batch_shape_tensor'
)
Shape of a single sample from a single event index as a 1-D Tensor. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
Args
name name to give to the op
Returns
batch_shape Tensor. cdf View source
cdf(
value, name='cdf'
)
Cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: cdf(x) := P[X <= x]
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
cdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. copy View source
copy(
**override_parameters_kwargs
)
Creates a deep copy of the distribution.
Note: the copy distribution may continue to depend on the original initialization arguments.
Args
**override_parameters_kwargs String/value dictionary of initialization arguments to override with new values.
Returns
distribution A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs). covariance View source
covariance(
name='covariance'
)
Covariance. Covariance is (possibly) defined only for non-scalar-event distributions. For example, for a length-k, vector-valued distribution, it is calculated as, Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation. Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e., Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.
Args
name Python str prepended to names of ops created by this function.
Returns
covariance Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape). cross_entropy View source
cross_entropy(
other, name='cross_entropy'
)
Computes the (Shannon) cross entropy. Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as: H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
where F denotes the support of the random variable X ~ P.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
cross_entropy self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy. entropy View source
entropy(
name='entropy'
)
Shannon entropy in nats. event_shape_tensor View source
event_shape_tensor(
name='event_shape_tensor'
)
Shape of a single sample from a single batch as a 1-D int32 Tensor.
Args
name name to give to the op
Returns
event_shape Tensor. is_scalar_batch View source
is_scalar_batch(
name='is_scalar_batch'
)
Indicates that batch_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_batch bool scalar Tensor. is_scalar_event View source
is_scalar_event(
name='is_scalar_event'
)
Indicates that event_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_event bool scalar Tensor. kl_divergence View source
kl_divergence(
other, name='kl_divergence'
)
Computes the Kullback--Leibler divergence. Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as: KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
kl_divergence self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence. log_cdf View source
log_cdf(
value, name='log_cdf'
)
Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: log_cdf(x) := Log[ P[X <= x] ]
Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
logcdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_prob View source
log_prob(
value, name='log_prob'
)
Log probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_survival_function View source
log_survival_function(
value, name='log_survival_function'
)
Log survival function. Given random variable X, the survival function is defined: log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
mean View source
mean(
name='mean'
)
Mean. Additional documentation from StudentT: The mean of Student's T equals loc if df > 1, otherwise it is NaN. If self.allow_nan_stats=False, then an exception will be raised rather than returning NaN. mode View source
mode(
name='mode'
)
Mode. param_shapes View source
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
Shapes of parameters given the desired shape of a call to sample(). This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Subclasses should override class method _param_shapes.
Args
sample_shape Tensor or python list/tuple. Desired shape of a call to sample().
name name to prepend ops with.
Returns dict of parameter name to Tensor shapes.
param_static_shapes View source
@classmethod
param_static_shapes(
sample_shape
)
param_shapes with static (i.e. TensorShape) shapes. This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically. Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.
Args
sample_shape TensorShape or python list/tuple. Desired shape of a call to sample().
Returns dict of parameter name to TensorShape.
Raises
ValueError if sample_shape is a TensorShape and is not fully defined. prob View source
prob(
value, name='prob'
)
Probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. quantile View source
quantile(
value, name='quantile'
)
Quantile function. Aka "inverse cdf" or "percent point function". Given random variable X and p in [0, 1], the quantile is: quantile(p) := x such that P[X <= x] == p
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
quantile a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. sample View source
sample(
sample_shape=(), seed=None, name='sample'
)
Generate samples of the specified shape. Note that a call to sample() without arguments will generate a single sample.
Args
sample_shape 0D or 1D int32 Tensor. Shape of the generated samples.
seed Python integer seed for RNG
name name to give to the op.
Returns
samples a Tensor with prepended dimensions sample_shape. stddev View source
stddev(
name='stddev'
)
Standard deviation. Standard deviation is defined as, stddev = E[(X - E[X])**2]**0.5
where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
stddev Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). survival_function View source
survival_function(
value, name='survival_function'
)
Survival function. Given random variable X, the survival function is defined: survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
variance View source
variance(
name='variance'
)
Variance. Variance is defined as, Var = E[(X - E[X])**2]
where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape. Additional documentation from StudentT: The variance for Student's T equals df / (df - 2), when df > 2
infinity, when 1 < df <= 2
NaN, when df <= 1
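A hedged sketch of the three regimes (with scale = 1 the first case is just df / (df - 2); values illustrative and assume the default allow_nan_stats=True):
tfd.StudentT(df=3., loc=0., scale=1.).variance() # ==> 3.0, i.e., 3 / (3 - 2)
tfd.StudentT(df=1.5, loc=0., scale=1.).variance() # ==> +inf
tfd.StudentT(df=0.5, loc=0., scale=1.).variance() # ==> NaN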
Args
name Python str prepended to names of ops created by this function.
Returns
variance Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). | tensorflow.compat.v1.distributions.studentt |
tf.compat.v1.distributions.Uniform Uniform distribution with low and high parameters. Inherits From: Distribution
tf.compat.v1.distributions.Uniform(
low=0.0, high=1.0, validate_args=False, allow_nan_stats=True,
name='Uniform'
)
Mathematical Details The probability density function (pdf) is, pdf(x; a, b) = I[a <= x < b] / Z
Z = b - a
where
low = a,
high = b,
Z is the normalizing constant, and
I[predicate] is the indicator function for predicate. The parameters low and high must be shaped in a way that supports broadcasting (e.g., high - low is a valid operation). Examples # Without broadcasting:
u1 = Uniform(low=3.0, high=4.0) # a single uniform distribution [3, 4]
u2 = Uniform(low=[1.0, 2.0],
high=[3.0, 4.0]) # 2 distributions [1, 3], [2, 4]
u3 = Uniform(low=[[1.0, 2.0],
[3.0, 4.0]],
high=[[1.5, 2.5],
[3.5, 4.5]]) # 4 distributions
# With broadcasting:
u1 = Uniform(low=3.0, high=[5.0, 6.0, 7.0]) # 3 distributions
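A hedged sketch of evaluating the broadcast batch u1 above (the density inside each interval is 1 / (high - low); range() below returns that interval length):
u1.sample(4) # Shape [4, 3]: 4 draws from each of the 3 distributions
u1.prob([4., 5., 6.]) # ==> approximately [0.5, 0.333, 0.25]
u1.range() # ==> [2., 3., 4.], i.e., high - low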
Args
low Floating point tensor, lower boundary of the output interval. Must have low < high.
high Floating point tensor, upper boundary of the output interval. Must have low < high.
validate_args Python bool, default False. When True distribution parameters are checked for validity despite possibly degrading runtime performance. When False invalid inputs may silently render incorrect outputs.
allow_nan_stats Python bool, default True. When True, statistics (e.g., mean, mode, variance) use the value "NaN" to indicate the result is undefined. When False, an exception is raised if one or more of the statistic's batch members are undefined.
name Python str name prefixed to Ops created by this class.
Raises
InvalidArgumentError if low >= high and validate_args=False.
Attributes
allow_nan_stats Python bool describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
batch_shape Shape of a single sample from a single event index as a TensorShape. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
dtype The DType of Tensors handled by this Distribution.
event_shape Shape of a single sample from a single batch as a TensorShape. May be partially defined or unknown.
high Upper boundary of the output interval.
low Lower boundary of the output interval.
name Name prepended to all ops created by this Distribution.
parameters Dictionary of parameters used to instantiate this Distribution.
reparameterization_type Describes how samples from the distribution are reparameterized. Currently this is one of the static instances distributions.FULLY_REPARAMETERIZED or distributions.NOT_REPARAMETERIZED.
validate_args Python bool indicating possibly expensive checks are enabled. Methods batch_shape_tensor View source
batch_shape_tensor(
name='batch_shape_tensor'
)
Shape of a single sample from a single event index as a 1-D Tensor. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
Args
name name to give to the op
Returns
batch_shape Tensor. cdf View source
cdf(
value, name='cdf'
)
Cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: cdf(x) := P[X <= x]
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
cdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. copy View source
copy(
**override_parameters_kwargs
)
Creates a deep copy of the distribution.
Note: the copy distribution may continue to depend on the original initialization arguments.
Args
**override_parameters_kwargs String/value dictionary of initialization arguments to override with new values.
Returns
distribution A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs). covariance View source
covariance(
name='covariance'
)
Covariance. Covariance is (possibly) defined only for non-scalar-event distributions. For example, for a length-k, vector-valued distribution, it is calculated as, Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation. Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e., Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.
Args
name Python str prepended to names of ops created by this function.
Returns
covariance Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape). cross_entropy View source
cross_entropy(
other, name='cross_entropy'
)
Computes the (Shannon) cross entropy. Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as: H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
where F denotes the support of the random variable X ~ P.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
cross_entropy self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy. entropy View source
entropy(
name='entropy'
)
Shannon entropy in nats. event_shape_tensor View source
event_shape_tensor(
name='event_shape_tensor'
)
Shape of a single sample from a single batch as a 1-D int32 Tensor.
Args
name name to give to the op
Returns
event_shape Tensor. is_scalar_batch View source
is_scalar_batch(
name='is_scalar_batch'
)
Indicates that batch_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_batch bool scalar Tensor. is_scalar_event View source
is_scalar_event(
name='is_scalar_event'
)
Indicates that event_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_event bool scalar Tensor. kl_divergence View source
kl_divergence(
other, name='kl_divergence'
)
Computes the Kullback--Leibler divergence. Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as: KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
kl_divergence self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence. log_cdf View source
log_cdf(
value, name='log_cdf'
)
Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: log_cdf(x) := Log[ P[X <= x] ]
Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
logcdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_prob View source
log_prob(
value, name='log_prob'
)
Log probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_survival_function View source
log_survival_function(
value, name='log_survival_function'
)
Log survival function. Given random variable X, the survival function is defined: log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
mean View source
mean(
name='mean'
)
Mean. mode View source
mode(
name='mode'
)
Mode. param_shapes View source
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
Shapes of parameters given the desired shape of a call to sample(). This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Subclasses should override class method _param_shapes.
Args
sample_shape Tensor or python list/tuple. Desired shape of a call to sample().
name name to prepend ops with.
Returns dict of parameter name to Tensor shapes.
param_static_shapes View source
@classmethod
param_static_shapes(
sample_shape
)
param_shapes with static (i.e. TensorShape) shapes. This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically. Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.
Args
sample_shape TensorShape or python list/tuple. Desired shape of a call to sample().
Returns dict of parameter name to TensorShape.
Raises
ValueError if sample_shape is a TensorShape and is not fully defined. prob View source
prob(
value, name='prob'
)
Probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. quantile View source
quantile(
value, name='quantile'
)
Quantile function. Aka "inverse cdf" or "percent point function". Given random variable X and p in [0, 1], the quantile is: quantile(p) := x such that P[X <= x] == p
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
quantile a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. range View source
range(
name='range'
)
high - low. sample View source
sample(
sample_shape=(), seed=None, name='sample'
)
Generate samples of the specified shape. Note that a call to sample() without arguments will generate a single sample.
Args
sample_shape 0D or 1D int32 Tensor. Shape of the generated samples.
seed Python integer seed for RNG
name name to give to the op.
Returns
samples a Tensor with prepended dimensions sample_shape. stddev View source
stddev(
name='stddev'
)
Standard deviation. Standard deviation is defined as, stddev = E[(X - E[X])**2]**0.5
where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.
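For the Uniform distribution on this page, for example, the closed forms are stddev = (high - low) / sqrt(12) and variance = (high - low)**2 / 12 (a minimal sketch):
import tensorflow.compat.v1 as tf
dist = tf.distributions.Uniform(low=0.0, high=2.0)
dist.stddev()    # ~0.577 == (2.0 - 0.0) / 12 ** 0.5
dist.variance()  # ~0.333 == (2.0 - 0.0) ** 2 / 12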
Args
name Python str prepended to names of ops created by this function.
Returns
stddev Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). survival_function View source
survival_function(
value, name='survival_function'
)
Survival function. Given random variable X, the survival function is defined: survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
variance View source
variance(
name='variance'
)
Variance. Variance is defined as, Var = E[(X - E[X])**2]
where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
variance Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). | tensorflow.compat.v1.distributions.uniform |
Module: tf.compat.v1.dtypes Public API for tf.dtypes namespace. Classes class DType: Represents the type of the elements in a Tensor. Functions as_dtype(...): Converts the given type_value to a DType. as_string(...): Converts each entry in the given tensor to strings. cast(...): Casts a tensor to a new type. complex(...): Converts two real numbers to a complex number. saturate_cast(...): Performs a safe saturating cast of value to dtype.
Other Members
QUANTIZED_DTYPES
bfloat16 tf.dtypes.DType
bool tf.dtypes.DType
complex128 tf.dtypes.DType
complex64 tf.dtypes.DType
double tf.dtypes.DType
float16 tf.dtypes.DType
float32 tf.dtypes.DType
float64 tf.dtypes.DType
half tf.dtypes.DType
int16 tf.dtypes.DType
int32 tf.dtypes.DType
int64 tf.dtypes.DType
int8 tf.dtypes.DType
qint16 tf.dtypes.DType
qint32 tf.dtypes.DType
qint8 tf.dtypes.DType
quint16 tf.dtypes.DType
quint8 tf.dtypes.DType
resource tf.dtypes.DType
string tf.dtypes.DType
uint16 tf.dtypes.DType
uint32 tf.dtypes.DType
uint64 tf.dtypes.DType
uint8 tf.dtypes.DType
variant tf.dtypes.DType | tensorflow.compat.v1.dtypes |
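For example, a minimal sketch of the helper functions above (assuming eager execution):
import tensorflow.compat.v1 as tf
tf.enable_eager_execution()
dt = tf.dtypes.as_dtype('float32')                              # tf.float32
x = tf.constant([1.7, -2.9])
print(tf.dtypes.cast(x, tf.int32))                              # [1, -2]; truncates toward zero
print(tf.dtypes.saturate_cast(tf.constant([300.0]), tf.uint8))  # [255]; clamps instead of wrapping
print(tf.dtypes.complex(tf.constant(1.0), tf.constant(2.0)))    # (1+2j)
print(tf.dtypes.as_string(x))                                   # string representation of each entry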
tf.compat.v1.enable_control_flow_v2 Use control flow v2.
tf.compat.v1.enable_control_flow_v2()
Control flow v2 (cfv2) is an improved version of control flow in TensorFlow with support for higher-order derivatives. Enabling cfv2 will change the graph/function representation of control flow, e.g., tf.while_loop and tf.cond will generate functional While and If ops instead of low-level Switch, Merge etc. ops. Note: Importing and running graphs exported with old control flow will still be supported. Calling tf.enable_control_flow_v2() lets you opt in to this TensorFlow 2.0 feature.
Note: v2 control flow is always enabled inside of tf.function. Calling this function is not required. | tensorflow.compat.v1.enable_control_flow_v2 |
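For example, a minimal sketch in graph mode (eager execution disabled so the graph representation is what matters):
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
tf.enable_control_flow_v2()
i = tf.constant(0)
r = tf.while_loop(lambda i: i < 10, lambda i: i + 1, [i])  # emits a functional While op
with tf.Session() as sess:
  print(sess.run(r))  # 10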
tf.compat.v1.enable_eager_execution Enables eager execution for the lifetime of this program.
tf.compat.v1.enable_eager_execution(
config=None, device_policy=None, execution_mode=None
)
Eager execution provides an imperative interface to TensorFlow. With eager execution enabled, TensorFlow functions execute operations immediately (as opposed to adding to a graph to be executed later in a tf.compat.v1.Session) and return concrete values (as opposed to symbolic references to a node in a computational graph). For example: tf.compat.v1.enable_eager_execution()
# After eager execution is enabled, operations are executed as they are
# defined and Tensor objects hold concrete values, which can be accessed as
# numpy.ndarray`s through the numpy() method.
assert tf.multiply(6, 7).numpy() == 42
Eager execution cannot be enabled after TensorFlow APIs have been used to create or execute graphs. It is typically recommended to invoke this function at program startup and not in a library (as most libraries should be usable both with and without eager execution).
Args
config (Optional.) A tf.compat.v1.ConfigProto to use to configure the environment in which operations are executed. Note that tf.compat.v1.ConfigProto is also used to configure graph execution (via tf.compat.v1.Session) and many options within tf.compat.v1.ConfigProto are not implemented (or are irrelevant) when eager execution is enabled.
device_policy (Optional.) Policy controlling how operations requiring inputs on a specific device (e.g., a GPU 0) handle inputs on a different device (e.g. GPU 1 or CPU). When set to None, an appropriate value will be picked automatically. The value picked may change between TensorFlow releases. Valid values: tf.contrib.eager.DEVICE_PLACEMENT_EXPLICIT: raises an error if the placement is not correct. tf.contrib.eager.DEVICE_PLACEMENT_WARN: copies the tensors which are not on the right device but logs a warning. tf.contrib.eager.DEVICE_PLACEMENT_SILENT: silently copies the tensors. Note that this may hide performance problems as there is no notification provided when operations are blocked on the tensor being copied between devices. tf.contrib.eager.DEVICE_PLACEMENT_SILENT_FOR_INT32: silently copies int32 tensors, raising errors on the other ones.
execution_mode (Optional.) Policy controlling how operations dispatched are actually executed. When set to None, an appropriate value will be picked automatically. The value picked may change between TensorFlow releases. Valid values: tf.contrib.eager.SYNC: executes each operation synchronously. tf.contrib.eager.ASYNC: executes each operation asynchronously. These operations may return "non-ready" handles.
Raises
ValueError If eager execution is enabled after creating/executing a TensorFlow graph, or if options provided conflict with a previous call to this function. | tensorflow.compat.v1.enable_eager_execution |
tf.compat.v1.enable_resource_variables Creates resource variables by default.
tf.compat.v1.enable_resource_variables()
Resource variables are improved versions of TensorFlow variables with a well-defined memory model. Accessing a resource variable reads its value, and all ops which access a specific read value of the variable are guaranteed to see the same value for that tensor. Writes which happen after a read (by having a control or data dependency on the read) are guaranteed not to affect the value of the read tensor, and similarly writes which happen before a read are guaranteed to affect the value. No guarantees are made about unordered read/write pairs. Calling tf.enable_resource_variables() lets you opt-in to this TensorFlow 2.0 feature. | tensorflow.compat.v1.enable_resource_variables |
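For example (a minimal sketch in graph mode):
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
tf.enable_resource_variables()
v = tf.get_variable('v', shape=[], initializer=tf.zeros_initializer())
print(v)  # created as a ResourceVariable rather than a reference variable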
tf.compat.v1.enable_tensor_equality Makes Tensors compare with element-wise comparison and thus become unhashable.
tf.compat.v1.enable_tensor_equality()
Comparing tensors element-wise allows comparisons such as tf.Variable(1.0) == 1.0. Element-wise equality implies that tensors are unhashable. Thus tensors can no longer be used directly in sets or as keys in a dictionary.
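For example (a minimal sketch, eager mode):
import tensorflow.compat.v1 as tf
tf.enable_eager_execution()
tf.enable_tensor_equality()
v = tf.Variable(1.0)
print(v == 1.0)  # element-wise comparison; evaluates to a True tensor
# {v: 'name'}    # would raise TypeError, since v is no longer hashable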
tf.compat.v1.enable_v2_behavior Enables TensorFlow 2.x behaviors.
tf.compat.v1.enable_v2_behavior()
This function can be called at the beginning of the program (before Tensors, Graphs or other structures have been created, and before devices have been initialized). It switches all global behaviors that are different between TensorFlow 1.x and 2.x to behave as intended for 2.x. This function is called in the main TensorFlow __init__.py file, so users should not need to call it, except during complex migrations.
tf.compat.v1.enable_v2_tensorshape In TensorFlow 2.0, iterating over a TensorShape instance returns values.
tf.compat.v1.enable_v2_tensorshape()
This enables the new behavior. Concretely, tensor_shape[i] returned a Dimension instance in V1, but in V2 it returns either an integer or None. Examples: #######################
# If you had this in V1:
value = tensor_shape[i].value
# Do this in V2 instead:
value = tensor_shape[i]
#######################
# If you had this in V1:
for dim in tensor_shape:
  value = dim.value
  print(value)
# Do this in V2 instead:
for value in tensor_shape:
  print(value)
#######################
# If you had this in V1:
dim = tensor_shape[i]
dim.assert_is_compatible_with(other_shape) # or using any other shape method
# Do this in V2 instead:
if tensor_shape.rank is None:
  dim = tf.compat.v1.Dimension(None)
else:
  dim = tensor_shape.dims[i]
dim.assert_is_compatible_with(other_shape) # or using any other shape method
# The V2 suggestion above is more explicit, which will save you from
# the following trap (present in V1):
# you might do in-place modifications to `dim` and expect them to be reflected
# in `tensor_shape[i]`, but they would not be. | tensorflow.compat.v1.enable_v2_tensorshape |
Module: tf.compat.v1.errors Exception types for TensorFlow errors. Classes class AbortedError: The operation was aborted, typically due to a concurrent action. class AlreadyExistsError: Raised when an entity that we attempted to create already exists. class CancelledError: Raised when an operation or step is cancelled. class DataLossError: Raised when unrecoverable data loss or corruption is encountered. class DeadlineExceededError: Raised when a deadline expires before an operation could complete. class FailedPreconditionError: Operation was rejected because the system is not in a state to execute it. class InternalError: Raised when the system experiences an internal error. class InvalidArgumentError: Raised when an operation receives an invalid argument. class NotFoundError: Raised when a requested entity (e.g., a file or directory) was not found. class OpError: A generic error that is raised when TensorFlow execution fails. class OutOfRangeError: Raised when an operation iterates past the valid input range. class PermissionDeniedError: Raised when the caller does not have permission to run an operation. class ResourceExhaustedError: Some resource has been exhausted. class UnauthenticatedError: The request does not have valid authentication credentials. class UnavailableError: Raised when the runtime is currently unavailable. class UnimplementedError: Raised when an operation has not been implemented. class UnknownError: Unknown error. class raise_exception_on_not_ok_status: Context manager to check for C API status. Functions error_code_from_exception_type(...) exception_type_from_error_code(...)
Other Members
ABORTED 10
ALREADY_EXISTS 6
CANCELLED 1
DATA_LOSS 15
DEADLINE_EXCEEDED 4
FAILED_PRECONDITION 9
INTERNAL 13
INVALID_ARGUMENT 3
NOT_FOUND 5
OK 0
OUT_OF_RANGE 11
PERMISSION_DENIED 7
RESOURCE_EXHAUSTED 8
UNAUTHENTICATED 16
UNAVAILABLE 14
UNIMPLEMENTED 12
UNKNOWN 2 | tensorflow.compat.v1.errors |
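For example, a minimal sketch (eager mode) of catching one of these exception classes:
import tensorflow.compat.v1 as tf
tf.enable_eager_execution()
try:
  tf.reshape(tf.range(6), [7, -1])  # 6 elements cannot fill rows of 7
except tf.errors.InvalidArgumentError as e:
  print(e.message)  # OpError subclasses carry a .message attribute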
tf.compat.v1.errors.error_code_from_exception_type
tf.compat.v1.errors.error_code_from_exception_type(
cls
)
tf.compat.v1.errors.exception_type_from_error_code
tf.compat.v1.errors.exception_type_from_error_code(
error_code
) | tensorflow.compat.v1.errors.exception_type_from_error_code |
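These two functions map between the integer error codes listed under tf.compat.v1.errors and the corresponding exception classes; a minimal sketch (assuming they act as inverses of each other):
import tensorflow.compat.v1 as tf
cls = tf.errors.exception_type_from_error_code(tf.errors.NOT_FOUND)
# cls is tf.errors.NotFoundError
code = tf.errors.error_code_from_exception_type(cls)
# code == tf.errors.NOT_FOUND == 5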
tf.compat.v1.errors.raise_exception_on_not_ok_status Context manager to check for C API status. Methods __enter__ View source
__enter__()
__exit__ View source
__exit__(
type_arg, value_arg, traceback_arg
) | tensorflow.compat.v1.errors.raise_exception_on_not_ok_status |
Module: tf.compat.v1.estimator Estimator: High level tools for working with models. Modules experimental module: Public API for tf.estimator.experimental namespace. export module: All public utility methods for exporting Estimator to SavedModel. inputs module: Utility methods to create simple input_fns. tpu module: Public API for tf.estimator.tpu namespace. Classes class BaselineClassifier: A classifier that can establish a simple baseline. class BaselineEstimator: An estimator that can establish a simple baseline. class BaselineRegressor: A regressor that can establish a simple baseline. class BestExporter: This class exports the serving graph and checkpoints of the best models. class BinaryClassHead: Creates a Head for single label binary classification. class BoostedTreesClassifier: A Classifier for Tensorflow Boosted Trees models. class BoostedTreesEstimator: An Estimator for Tensorflow Boosted Trees models. class BoostedTreesRegressor: A Regressor for Tensorflow Boosted Trees models. class CheckpointSaverHook: Saves checkpoints every N steps or seconds. class CheckpointSaverListener: Interface for listeners that take action before or after checkpoint save. class DNNClassifier: A classifier for TensorFlow DNN models. class DNNEstimator: An estimator for TensorFlow DNN models with user-specified head. class DNNLinearCombinedClassifier: An estimator for TensorFlow Linear and DNN joined classification models. class DNNLinearCombinedEstimator: An estimator for TensorFlow Linear and DNN joined models with custom head. class DNNLinearCombinedRegressor: An estimator for TensorFlow Linear and DNN joined models for regression. class DNNRegressor: A regressor for TensorFlow DNN models. class Estimator: Estimator class to train and evaluate TensorFlow models. class EstimatorSpec: Ops and objects returned from a model_fn and passed to an Estimator. class EvalSpec: Configuration for the "eval" part for the train_and_evaluate call. class Exporter: A class representing a type of model export. class FeedFnHook: Runs feed_fn and sets the feed_dict accordingly. class FinalExporter: This class exports the serving graph and checkpoints at the end. class FinalOpsHook: A hook which evaluates Tensors at the end of a session. class GlobalStepWaiterHook: Delays execution until global step reaches wait_until_step. class Head: Interface for the head/top of a model. class LatestExporter: This class regularly exports the serving graph and checkpoints. class LinearClassifier: Linear classifier model. class LinearEstimator: An estimator for TensorFlow linear models with user-specified head. class LinearRegressor: An estimator for TensorFlow Linear regression problems. class LoggingTensorHook: Prints the given tensors every N local steps, every N seconds, or at end. class LogisticRegressionHead: Creates a Head for logistic regression. class ModeKeys: Standard names for Estimator model modes. class MultiClassHead: Creates a Head for multi class classification. class MultiHead: Creates a Head for multi-objective learning. class MultiLabelHead: Creates a Head for multi-label classification. class NanLossDuringTrainingError: Unspecified run-time error. class NanTensorHook: Monitors the loss tensor and stops training if loss is NaN. class PoissonRegressionHead: Creates a Head for poisson regression using tf.nn.log_poisson_loss. class ProfilerHook: Captures CPU/GPU profiling information every N steps or seconds. class RegressionHead: Creates a Head for regression using the mean_squared_error loss. 
class RunConfig: This class specifies the configurations for an Estimator run. class SecondOrStepTimer: Timer that triggers at most once every N seconds or once every N steps. class SessionRunArgs: Represents arguments to be added to a Session.run() call. class SessionRunContext: Provides information about the session.run() call being made. class SessionRunHook: Hook to extend calls to MonitoredSession.run(). class SessionRunValues: Contains the results of Session.run(). class StepCounterHook: Hook that counts steps per second. class StopAtStepHook: Hook that requests stop at a specified step. class SummarySaverHook: Saves summaries every N steps. class TrainSpec: Configuration for the "train" part for the train_and_evaluate call. class VocabInfo: Vocabulary information for warm-starting. class WarmStartSettings: Settings for warm-starting in tf.estimator.Estimators. Functions add_metrics(...): Creates a new tf.estimator.Estimator which has given metrics. classifier_parse_example_spec(...): Generates parsing spec for tf.parse_example to be used with classifiers. regressor_parse_example_spec(...): Generates parsing spec for tf.parse_example to be used with regressors. train_and_evaluate(...): Train and evaluate the estimator. | tensorflow.compat.v1.estimator |
tf.compat.v1.estimator.BaselineClassifier A classifier that can establish a simple baseline. Inherits From: Estimator
tf.compat.v1.estimator.BaselineClassifier(
model_dir=None, n_classes=2, weight_column=None, label_vocabulary=None,
optimizer='Ftrl', config=None,
loss_reduction=tf.compat.v1.losses.Reduction.SUM
)
This classifier ignores feature values and will learn to predict the average value of each label. For single-label problems, this will predict the probability distribution of the classes as seen in the labels. For multi-label problems, this will predict the fraction of examples that are positive for each class. Example:
# Build BaselineClassifier
classifier = tf.estimator.BaselineClassifier(n_classes=3)
# Input builders
def input_fn_train():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_eval():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
# Fit model.
classifier.train(input_fn=input_fn_train)
# Evaluate cross entropy between the test and train labels.
loss = classifier.evaluate(input_fn=input_fn_eval)["loss"]
# predict outputs the probability distribution of the classes as seen in
# training. predict also takes an input_fn; if it returns a (features, labels)
# tuple, only the features are used.
predictions = classifier.predict(input_fn=input_fn_eval)
Input of train and evaluate should have the following features, otherwise there will be a KeyError: if weight_column is not None, a feature with key=weight_column whose value is a Tensor.
Args
model_fn Model function. Follows the signature:
features -- This is the first item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same.
labels -- This is the second item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same (for multi-head models). If mode is tf.estimator.ModeKeys.PREDICT, labels=None will be passed. If the model_fn's signature does not accept mode, the model_fn must still be able to handle labels=None.
mode -- Optional. Specifies if this is training, evaluation or prediction. See tf.estimator.ModeKeys. params -- Optional dict of hyperparameters. Will receive what is passed to Estimator in the params parameter. This allows one to configure Estimators for hyperparameter tuning.
config -- Optional estimator.RunConfig object. Will receive what is passed to Estimator as its config parameter, or a default value. Allows setting up things in your model_fn based on configuration such as num_ps_replicas, or model_dir. Returns -- tf.estimator.EstimatorSpec
model_dir Directory to save model parameters, graph and etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. If PathLike object, the path will be resolved. If None, the model_dir in config will be used if set. If both are set, they must be same. If both are None, a temporary directory will be used.
config estimator.RunConfig configuration object.
params dict of hyper parameters that will be passed into model_fn. Keys are names of parameters, values are basic python types.
warm_start_from Optional string filepath to a checkpoint or SavedModel to warm-start from, or a tf.estimator.WarmStartSettings object to fully configure warm-starting. If None, only TRAINABLE variables are warm-started. If the string filepath is provided instead of a tf.estimator.WarmStartSettings, then all variables are warm-started, and it is assumed that vocabularies and tf.Tensor names are unchanged.
Raises
ValueError parameters of model_fn don't match params.
ValueError if this is called via a subclass and if that class overrides a member of Estimator. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.
Attributes
config
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of directory contains evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
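For example, a sketch of a serving input receiver (the single feature name 'f' and the export path are hypothetical):
def serving_input_receiver_fn():
  inputs = {'f': tf.compat.v1.placeholder(tf.float32, shape=[None])}
  return tf.estimator.export.ServingInputReceiver(inputs, inputs)

# export_path = classifier.export_saved_model('/tmp/export', serving_input_receiver_fn)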
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. export_savedmodel View source
export_savedmodel(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, strip_default_attrs=False
)
Exports inference graph as a SavedModel into the given dir. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name string or a list of string, name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If batch length of predictions is not the same and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for a total of 20 steps. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since the first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0. | tensorflow.compat.v1.estimator.baselineclassifier |
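Putting the methods above together, a minimal end-to-end sketch (the random data and the feature key 'f' are hypothetical, purely for illustration):
import numpy as np
import tensorflow.compat.v1 as tf

x = {'f': np.random.rand(100).astype(np.float32)}  # one float feature
y = np.random.randint(0, 3, size=100)              # three classes
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x, y, batch_size=10, num_epochs=None, shuffle=True)
eval_input_fn = tf.estimator.inputs.numpy_input_fn(
    x, y, batch_size=10, num_epochs=1, shuffle=False)

classifier = tf.estimator.BaselineClassifier(n_classes=3)
classifier.train(input_fn=train_input_fn, steps=50)
metrics = classifier.evaluate(input_fn=eval_input_fn)
print(metrics['accuracy'], metrics['loss'])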