Module: tf.compat.v1.keras.applications.inception_resnet_v2 Inception-ResNet V2 model for Keras. Reference:
Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning (AAAI 2017) Functions InceptionResNetV2(...): Instantiates the Inception-ResNet v2 architecture. decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images. | tensorflow.compat.v1.keras.applications.inception_resnet_v2 |
Module: tf.compat.v1.keras.applications.inception_v3 Inception V3 model for Keras. Reference:
Rethinking the Inception Architecture for Computer Vision (CVPR 2016) Functions InceptionV3(...): Instantiates the Inception v3 architecture. decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images. | tensorflow.compat.v1.keras.applications.inception_v3 |
Module: tf.compat.v1.keras.applications.mobilenet MobileNet v1 models for Keras. MobileNet is a general architecture and can be used for multiple use cases. Depending on the use case, it can use different input layer sizes and different width factors. This allows different width models to reduce the number of multiply-adds and thereby reduce inference cost on mobile devices. MobileNets support any input size greater than 32 x 32, with larger image sizes offering better performance. The number of parameters and number of multiply-adds can be modified by using the alpha parameter, which increases/decreases the number of filters in each layer. By altering the image size and alpha parameter, all 16 models from the paper can be built, with ImageNet weights provided. The paper demonstrates the performance of MobileNets using alpha values of 1.0 (also called 100% MobileNet), 0.75, 0.5 and 0.25. For each of these alpha values, weights for 4 different input image sizes are provided (224, 192, 160, 128); a usage sketch follows this entry. The following table describes the size and accuracy of the 100% MobileNet on size 224 x 224:

Width Multiplier (alpha) | ImageNet Acc | Multiply-Adds (M) | Params (M)
---|---|---|---
1.0 MobileNet-224 | 70.6% | 529 | 4.2
0.75 MobileNet-224 | 68.4% | 325 | 2.6
0.50 MobileNet-224 | 63.7% | 149 | 1.3
0.25 MobileNet-224 | 50.6% | 41 | 0.5

The following table describes the performance of the 100% MobileNet on various input sizes:

Resolution | ImageNet Acc | Multiply-Adds (M) | Params (M)
---|---|---|---
1.0 MobileNet-224 | 70.6% | 529 | 4.2
1.0 MobileNet-192 | 69.1% | 529 | 4.2
1.0 MobileNet-160 | 67.2% | 529 | 4.2
1.0 MobileNet-128 | 64.4% | 529 | 4.2

Reference: MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications Functions MobileNet(...): Instantiates the MobileNet architecture. decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images. | tensorflow.compat.v1.keras.applications.mobilenet |
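As a usage sketch (hedged; it assumes the standard Keras applications signature for MobileNet), the alpha and input-size variants above can be instantiated directly:

```python
import tensorflow.compat.v1 as tf

# Build a 0.75-width MobileNet at 160x160 input resolution; alpha scales
# the number of filters in each layer, trading accuracy for fewer
# multiply-adds and parameters.
model = tf.compat.v1.keras.applications.mobilenet.MobileNet(
    input_shape=(160, 160, 3),
    alpha=0.75,
    weights='imagenet',
)
model.summary()
```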
Module: tf.compat.v1.keras.applications.mobilenet_v2 MobileNet v2 models for Keras. MobileNetV2 is a general architecture and can be used for multiple use cases. Depending on the use case, it can use different input layer sizes and different width factors. This allows different width models to reduce the number of multiply-adds and thereby reduce inference cost on mobile devices. MobileNetV2 is very similar to the original MobileNet, except that it uses inverted residual blocks with bottlenecking features. It has a drastically lower parameter count than the original MobileNet. MobileNets support any input size greater than 32 x 32, with larger image sizes offering better performance. The number of parameters and number of multiply-adds can be modified by using the alpha parameter, which increases/decreases the number of filters in each layer. By altering the image size and alpha parameter, all 22 models from the paper can be built, with ImageNet weights provided. The paper demonstrates the performance of MobileNets using alpha values of 0.35, 0.5, 0.75, 1.0 (also called 100% MobileNet), 1.3, and 1.4. For each of these alpha values, weights for 5 different input image sizes are provided (224, 192, 160, 128, and 96). The following table describes the performance of MobileNet on various input sizes (MACs stands for multiply-adds):

Classification Checkpoint | MACs (M) | Parameters (M) | Top 1 Accuracy | Top 5 Accuracy
---|---|---|---|---
mobilenet_v2_1.4_224 | 582 | 6.06 | 75.0 | 92.5
mobilenet_v2_1.3_224 | 509 | 5.34 | 74.4 | 92.1
mobilenet_v2_1.0_224 | 300 | 3.47 | 71.8 | 91.0
mobilenet_v2_1.0_192 | 221 | 3.47 | 70.7 | 90.1
mobilenet_v2_1.0_160 | 154 | 3.47 | 68.8 | 89.0
mobilenet_v2_1.0_128 | 99 | 3.47 | 65.3 | 86.9
mobilenet_v2_1.0_96 | 56 | 3.47 | 60.3 | 83.2
mobilenet_v2_0.75_224 | 209 | 2.61 | 69.8 | 89.6
mobilenet_v2_0.75_192 | 153 | 2.61 | 68.7 | 88.9
mobilenet_v2_0.75_160 | 107 | 2.61 | 66.4 | 87.3
mobilenet_v2_0.75_128 | 69 | 2.61 | 63.2 | 85.3
mobilenet_v2_0.75_96 | 39 | 2.61 | 58.8 | 81.6
mobilenet_v2_0.5_224 | 97 | 1.95 | 65.4 | 86.4
mobilenet_v2_0.5_192 | 71 | 1.95 | 63.9 | 85.4
mobilenet_v2_0.5_160 | 50 | 1.95 | 61.0 | 83.2
mobilenet_v2_0.5_128 | 32 | 1.95 | 57.7 | 80.8
mobilenet_v2_0.5_96 | 18 | 1.95 | 51.2 | 75.8
mobilenet_v2_0.35_224 | 59 | 1.66 | 60.3 | 82.9
mobilenet_v2_0.35_192 | 43 | 1.66 | 58.2 | 81.2
mobilenet_v2_0.35_160 | 30 | 1.66 | 55.7 | 79.1
mobilenet_v2_0.35_128 | 20 | 1.66 | 50.8 | 75.0
mobilenet_v2_0.35_96 | 11 | 1.66 | 45.5 | 70.4

Reference:
MobileNetV2: Inverted Residuals and Linear Bottlenecks (CVPR 2018) Functions MobileNetV2(...): Instantiates the MobileNetV2 architecture. decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images. | tensorflow.compat.v1.keras.applications.mobilenet_v2 |
Module: tf.compat.v1.keras.applications.mobilenet_v3 MobileNet v3 models for Keras. Functions decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images. | tensorflow.compat.v1.keras.applications.mobilenet_v3 |
Module: tf.compat.v1.keras.applications.nasnet NASNet-A models for Keras. NASNet refers to Neural Architecture Search Network, a family of models that were designed automatically by learning the model architectures directly on the dataset of interest. Here we consider NASNet-A, the highest-performance model that was found for the CIFAR-10 dataset and then extended to the ImageNet 2012 dataset, obtaining state-of-the-art performance on CIFAR-10 and ImageNet 2012. Only the NASNet-A models, and their respective weights, which are suited for ImageNet 2012, are provided. The table below describes the performance on ImageNet 2012:

Architecture | Top-1 Acc | Top-5 Acc | Multiply-Adds | Params (M)
---|---|---|---|---
NASNet-A (4 @ 1056) | 74.0% | 91.6% | 564 M | 5.3
NASNet-A (6 @ 4032) | 82.7% | 96.2% | 23.8 B | 88.9

Reference:
Learning Transferable Architectures for Scalable Image Recognition (CVPR 2018) Functions NASNetLarge(...): Instantiates a NASNet model in ImageNet mode. NASNetMobile(...): Instantiates a Mobile NASNet model in ImageNet mode. decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images. | tensorflow.compat.v1.keras.applications.nasnet |
Module: tf.compat.v1.keras.applications.resnet ResNet models for Keras. Reference:
Deep Residual Learning for Image Recognition (CVPR 2015) Functions ResNet101(...): Instantiates the ResNet101 architecture. ResNet152(...): Instantiates the ResNet152 architecture. ResNet50(...): Instantiates the ResNet50 architecture. decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images. | tensorflow.compat.v1.keras.applications.resnet |
Module: tf.compat.v1.keras.applications.resnet50 Public API for tf.keras.applications.resnet50 namespace. Functions ResNet50(...): Instantiates the ResNet50 architecture. decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images. | tensorflow.compat.v1.keras.applications.resnet50 |
Module: tf.compat.v1.keras.applications.resnet_v2 ResNet v2 models for Keras. Reference:
Identity Mappings in Deep Residual Networks (CVPR 2016) Functions ResNet101V2(...): Instantiates the ResNet101V2 architecture. ResNet152V2(...): Instantiates the ResNet152V2 architecture. ResNet50V2(...): Instantiates the ResNet50V2 architecture. decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images. | tensorflow.compat.v1.keras.applications.resnet_v2 |
Module: tf.compat.v1.keras.applications.vgg16 VGG16 model for Keras. Reference:
Very Deep Convolutional Networks for Large-Scale Image Recognition (ICLR 2015) Functions VGG16(...): Instantiates the VGG16 model. decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images. | tensorflow.compat.v1.keras.applications.vgg16 |
Module: tf.compat.v1.keras.applications.vgg19 VGG19 model for Keras. Reference:
Very Deep Convolutional Networks for Large-Scale Image Recognition (ICLR 2015) Functions VGG19(...): Instantiates the VGG19 architecture. decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images. | tensorflow.compat.v1.keras.applications.vgg19 |
Module: tf.compat.v1.keras.applications.xception Xception V1 model for Keras. On ImageNet, this model gets to a top-1 validation accuracy of 0.790 and a top-5 validation accuracy of 0.945. Reference:
Xception: Deep Learning with Depthwise Separable Convolutions (CVPR 2017) Functions Xception(...): Instantiates the Xception architecture. decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images. | tensorflow.compat.v1.keras.applications.xception |
Module: tf.compat.v1.keras.backend Keras backend API. Classes class name_scope: A context manager for use when defining a Python op. Functions clear_session(...): Resets all state generated by Keras. epsilon(...): Returns the value of the fuzz factor used in numeric expressions. floatx(...): Returns the default float type, as a string. get_session(...): Returns the TF session to be used by the backend. get_uid(...): Associates a string prefix with an integer counter in a TensorFlow graph. image_data_format(...): Returns the default image data format convention. is_keras_tensor(...): Returns whether x is a Keras tensor. reset_uids(...): Resets graph identifiers. rnn(...): Iterates over the time dimension of a tensor. set_epsilon(...): Sets the value of the fuzz factor used in numeric expressions. set_floatx(...): Sets the default float type. set_image_data_format(...): Sets the value of the image data format convention. set_session(...): Sets the global TensorFlow session. | tensorflow.compat.v1.keras.backend |
tf.compat.v1.keras.backend.get_session Returns the TF session to be used by the backend.
tf.compat.v1.keras.backend.get_session(
op_input_list=()
)
If a default TensorFlow session is available, we will return it. Else, we will return the global Keras session, assuming it matches the current graph. If no global Keras session exists at this point, we will create a new global session. Note that you can manually set the global session via K.set_session(sess).
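A minimal sketch of this behavior, assuming a TF1-style graph-mode program:

```python
import tensorflow.compat.v1 as tf
from tensorflow.compat.v1.keras import backend as K

tf.disable_v2_behavior()  # TF1-style graph mode

# No default session is active here, so the backend creates (and caches)
# a global Keras session on first use.
sess = K.get_session()

# Later calls in the same graph return that same global session.
print(K.get_session() is sess)  # True
```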
Arguments
op_input_list An optional sequence of tensors or ops, which will be used to determine the current graph. Otherwise the default graph will be used.
Returns A TensorFlow session. | tensorflow.compat.v1.keras.backend.get_session |
tf.compat.v1.keras.backend.name_scope A context manager for use when defining a Python op. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.name_scope
tf.compat.v1.keras.backend.name_scope(
name, default_name=None, values=None
)
This context manager validates that the given values are from the same graph, makes that graph the default graph, and pushes a name scope in that graph (see tf.Graph.name_scope for more details on that). For example, to define a new Python op called my_op:
def my_op(a, b, c, name=None):
with tf.name_scope(name, "MyOp", [a, b, c]) as scope:
a = tf.convert_to_tensor(a, name="a")
b = tf.convert_to_tensor(b, name="b")
c = tf.convert_to_tensor(c, name="c")
# Define some computation that uses `a`, `b`, and `c`.
return foo_op(..., name=scope)
Args
name The name argument that is passed to the op function.
default_name The default name to use if the name argument is None.
values The list of Tensor arguments that are passed to the op function.
Raises
TypeError if default_name is passed in but not a string.
Attributes
name
Methods __enter__ View source
__enter__()
__exit__ View source
__exit__(
*exc_info
) | tensorflow.compat.v1.keras.backend.name_scope |
tf.compat.v1.keras.backend.set_session Sets the global TensorFlow session.
tf.compat.v1.keras.backend.set_session(
session
)
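A common pattern, shown here as a hedged sketch, is to build a session with custom options and register it before constructing any Keras objects (the allow_growth setting is just an illustrative choice, not a requirement):

```python
import tensorflow.compat.v1 as tf
from tensorflow.compat.v1.keras import backend as K

tf.disable_v2_behavior()

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # illustrative GPU-memory option
K.set_session(tf.Session(config=config))

# Keras models built from here on will run in the registered session.
```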
Arguments
session A TF Session. | tensorflow.compat.v1.keras.backend.set_session |
Module: tf.compat.v1.keras.callbacks Callbacks: utilities called at certain points during model training. Classes class BaseLogger: Callback that accumulates epoch averages of metrics. class CSVLogger: Callback that streams epoch results to a CSV file. class Callback: Abstract base class used to build new callbacks. class CallbackList: Container abstracting a list of callbacks. class EarlyStopping: Stop training when a monitored metric has stopped improving. class History: Callback that records events into a History object. class LambdaCallback: Callback for creating simple, custom callbacks on-the-fly. class LearningRateScheduler: Learning rate scheduler. class ModelCheckpoint: Callback to save the Keras model or model weights at some frequency. class ProgbarLogger: Callback that prints metrics to stdout. class ReduceLROnPlateau: Reduce learning rate when a metric has stopped improving. class RemoteMonitor: Callback used to stream events to a server. class TensorBoard: Enable visualizations for TensorBoard. class TerminateOnNaN: Callback that terminates training when a NaN loss is encountered. | tensorflow.compat.v1.keras.callbacks |
tf.compat.v1.keras.callbacks.TensorBoard Enable visualizations for TensorBoard. Inherits From: TensorBoard, Callback
tf.compat.v1.keras.callbacks.TensorBoard(
log_dir='./logs', histogram_freq=0, batch_size=32, write_graph=True,
write_grads=False, write_images=False, embeddings_freq=0,
embeddings_layer_names=None, embeddings_metadata=None, embeddings_data=None,
update_freq='epoch', profile_batch=2
)
TensorBoard is a visualization tool provided with TensorFlow. This callback logs events for TensorBoard, including: metrics summary plots, training graph visualization, activation histograms, and sampled profiling. If you have installed TensorFlow with pip, you should be able to launch TensorBoard from the command line:
tensorboard --logdir=path_to_your_logs
You can find more information about TensorBoard here.
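A minimal sketch of attaching the callback during training; the data arrays are hypothetical, and validation data is supplied because histogram_freq > 0 requires it:

```python
import numpy as np
import tensorflow.compat.v1 as tf

x_train = np.random.rand(256, 10).astype('float32')  # hypothetical data
y_train = np.random.rand(256, 1).astype('float32')

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
model.compile(optimizer='sgd', loss='mse')

tb = tf.compat.v1.keras.callbacks.TensorBoard(
    log_dir='./logs', histogram_freq=1, write_graph=True)

# histogram_freq > 0 requires validation data (or a validation split).
model.fit(x_train, y_train, epochs=2, validation_split=0.2, callbacks=[tb])
```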
Arguments
log_dir the path of the directory where to save the log files to be parsed by TensorBoard.
histogram_freq frequency (in epochs) at which to compute activation and weight histograms for the layers of the model. If set to 0, histograms won't be computed. Validation data (or split) must be specified for histogram visualizations.
write_graph whether to visualize the graph in TensorBoard. The log file can become quite large when write_graph is set to True.
write_grads whether to visualize gradient histograms in TensorBoard. histogram_freq must be greater than 0.
batch_size size of the batch of inputs to feed to the network for histogram computation.
write_images whether to write model weights to visualize as image in TensorBoard.
embeddings_freq frequency (in epochs) at which selected embedding layers will be saved. If set to 0, embeddings won't be computed. Data to be visualized in TensorBoard's Embedding tab must be passed as embeddings_data.
embeddings_layer_names a list of names of layers to keep an eye on. If None or an empty list, all the embedding layers will be watched.
embeddings_metadata a dictionary which maps layer names to a file name in which metadata for this embedding layer is saved. Here are details about the metadata file format. If the same metadata file is used for all embedding layers, a single string can be passed.
embeddings_data data to be embedded at layers specified in embeddings_layer_names. Numpy array (if the model has a single input) or list of Numpy arrays (if the model has multiple inputs). Learn more about embeddings in this guide.
update_freq 'batch' or 'epoch' or integer. When using 'batch', writes the losses and metrics to TensorBoard after each batch. The same applies for 'epoch'. If using an integer, let's say 1000, the callback will write the metrics and losses to TensorBoard every 1000 samples. Note that writing too frequently to TensorBoard can slow down your training.
profile_batch Profile the batch to sample compute characteristics. By default, it will profile the second batch. Set profile_batch=0 to disable profiling.
Raises
ValueError If histogram_freq is set and no validation data is provided. Eager Compatibility Using the TensorBoard callback will work when eager execution is enabled, with the restriction that outputting histogram summaries of weights and gradients is not supported. Consequently, histogram_freq will be ignored. Methods set_model View source
set_model(
model
)
Sets Keras model and creates summary ops. set_params View source
set_params(
params
) | tensorflow.compat.v1.keras.callbacks.tensorboard |
Module: tf.compat.v1.keras.constraints Constraints: functions that impose constraints on weight values. Classes class Constraint class MaxNorm: MaxNorm weight constraint. class MinMaxNorm: MinMaxNorm weight constraint. class NonNeg: Constrains the weights to be non-negative. class RadialConstraint: Constrains Conv2D kernel weights to be the same for each radius. class UnitNorm: Constrains the weights incident to each hidden unit to have unit norm. class max_norm: MaxNorm weight constraint. class min_max_norm: MinMaxNorm weight constraint. class non_neg: Constrains the weights to be non-negative. class radial_constraint: Constrains Conv2D kernel weights to be the same for each radius. class unit_norm: Constrains the weights incident to each hidden unit to have unit norm. Functions deserialize(...) get(...) serialize(...) | tensorflow.compat.v1.keras.constraints |
Module: tf.compat.v1.keras.datasets Public API for tf.keras.datasets namespace. Modules boston_housing module: Boston housing price regression dataset. cifar10 module: CIFAR10 small images classification dataset. cifar100 module: CIFAR100 small images classification dataset. fashion_mnist module: Fashion-MNIST dataset. imdb module: IMDB sentiment classification dataset. mnist module: MNIST handwritten digits dataset. reuters module: Reuters topic classification dataset. | tensorflow.compat.v1.keras.datasets |
Module: tf.compat.v1.keras.datasets.boston_housing Boston housing price regression dataset. Functions load_data(...): Loads the Boston Housing dataset. | tensorflow.compat.v1.keras.datasets.boston_housing |
Module: tf.compat.v1.keras.datasets.cifar10 CIFAR10 small images classification dataset. Functions load_data(...): Loads CIFAR10 dataset. | tensorflow.compat.v1.keras.datasets.cifar10 |
Module: tf.compat.v1.keras.datasets.cifar100 CIFAR100 small images classification dataset. Functions load_data(...): Loads CIFAR100 dataset. | tensorflow.compat.v1.keras.datasets.cifar100 |
Module: tf.compat.v1.keras.datasets.fashion_mnist Fashion-MNIST dataset. Functions load_data(...): Loads the Fashion-MNIST dataset. | tensorflow.compat.v1.keras.datasets.fashion_mnist |
Module: tf.compat.v1.keras.datasets.imdb IMDB sentiment classification dataset. Functions get_word_index(...): Retrieves a dict mapping words to their index in the IMDB dataset. load_data(...): Loads the IMDB dataset. | tensorflow.compat.v1.keras.datasets.imdb |
Module: tf.compat.v1.keras.datasets.mnist MNIST handwritten digits dataset. Functions load_data(...): Loads the MNIST dataset. | tensorflow.compat.v1.keras.datasets.mnist |
Module: tf.compat.v1.keras.datasets.reuters Reuters topic classification dataset. Functions get_word_index(...): Retrieves a dict mapping words to their index in the Reuters dataset. load_data(...): Loads the Reuters newswire classification dataset. | tensorflow.compat.v1.keras.datasets.reuters |
Module: tf.compat.v1.keras.estimator Keras estimator API. Functions model_to_estimator(...): Constructs an Estimator instance from given keras model. | tensorflow.compat.v1.keras.estimator |
tf.compat.v1.keras.estimator.model_to_estimator Constructs an Estimator instance from given keras model.
tf.compat.v1.keras.estimator.model_to_estimator(
keras_model=None, keras_model_path=None, custom_objects=None, model_dir=None,
config=None, checkpoint_format='saver', metric_names_map=None,
export_outputs=None
)
If you use infrastructure or other tooling that relies on Estimators, you can still build a Keras model and use model_to_estimator to convert the Keras model to an Estimator for use with downstream systems. For a usage example, please see: Creating estimators from Keras Models. Sample Weights: Estimators returned by model_to_estimator are configured so that they can handle sample weights (similar to keras_model.fit(x, y, sample_weights)). To pass sample weights when training or evaluating the Estimator, the first item returned by the input function should be a dictionary with keys features and sample_weights. Example below:
keras_model = tf.keras.Model(...)
keras_model.compile(...)
estimator = tf.keras.estimator.model_to_estimator(keras_model)
def input_fn():
return dataset_ops.Dataset.from_tensors(
({'features': features, 'sample_weights': sample_weights},
targets))
estimator.train(input_fn, steps=1)
Example with customized export signature:
inputs = {'a': tf.keras.Input(..., name='a'),
'b': tf.keras.Input(..., name='b')}
outputs = {'c': tf.keras.layers.Dense(..., name='c')(inputs['a']),
'd': tf.keras.layers.Dense(..., name='d')(inputs['b'])}
keras_model = tf.keras.Model(inputs, outputs)
keras_model.compile(...)
export_outputs = {'c': tf.estimator.export.RegressionOutput,
'd': tf.estimator.export.ClassificationOutput}
estimator = tf.keras.estimator.model_to_estimator(
keras_model, export_outputs=export_outputs)
def input_fn():
return dataset_ops.Dataset.from_tensors(
({'features': features, 'sample_weights': sample_weights},
targets))
estimator.train(input_fn, steps=1)
Args
keras_model A compiled Keras model object. This argument is mutually exclusive with keras_model_path. Estimator's model_fn uses the structure of the model to clone the model. Defaults to None.
keras_model_path Path to a compiled Keras model saved on disk, in HDF5 format, which can be generated with the save() method of a Keras model. This argument is mutually exclusive with keras_model. Defaults to None.
custom_objects Dictionary for cloning customized objects. This is used with classes that are not part of this pip package. For example, if the user maintains a relu6 class that inherits from tf.keras.layers.Layer, then pass custom_objects={'relu6': relu6}. Defaults to None.
model_dir Directory to save Estimator model parameters, graph, summary files for TensorBoard, etc. If unset, a directory will be created with tempfile.mkdtemp.
config RunConfig to configure the Estimator. Allows setting up things in model_fn based on configuration such as num_ps_replicas, or model_dir. Defaults to None. If both config.model_dir and the model_dir argument (above) are specified, the model_dir argument takes precedence.
checkpoint_format Sets the format of the checkpoint saved by the estimator when training. May be saver or checkpoint, depending on whether to save checkpoints from tf.train.Saver or tf.train.Checkpoint. This argument currently defaults to saver. When 2.0 is released, the default will be checkpoint. Estimators use name-based tf.train.Saver checkpoints, while Keras models use object-based checkpoints from tf.train.Checkpoint. Currently, saving object-based checkpoints from model_to_estimator is only supported by Functional and Sequential models. Defaults to 'saver'.
metric_names_map Optional dictionary mapping Keras model output metric names to custom names. This can be used to override the default Keras model output metrics names in a multi IO model use case and provide custom names for the eval_metric_ops in Estimator. The Keras model metric names can be obtained using model.metrics_names excluding any loss metrics such as total loss and output losses. For example, if your Keras model has two outputs out_1 and out_2, with mse loss and acc metric, then model.metrics_names will be ['loss', 'out_1_loss', 'out_2_loss', 'out_1_acc', 'out_2_acc']. The model metric names excluding the loss metrics will be ['out_1_acc', 'out_2_acc'].
export_outputs Optional dictionary. This can be used to override the default Keras model output exports in a multi IO model use case and provide custom names for the export_outputs in tf.estimator.EstimatorSpec. Default is None, which is equivalent to {'serving_default': tf.estimator.export.PredictOutput}. If not None, the keys must match the keys of model.output_names. A dict {name: output} where: name: An arbitrary name for this output. output: an ExportOutput class such as ClassificationOutput, RegressionOutput, or PredictOutput. Single-headed models only need to specify one entry in this dictionary. Multi-headed models should specify one entry for each head, one of which must be named using tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY. If no entry is provided, a default PredictOutput mapping to predictions will be created.
Returns An Estimator from given keras model.
Raises
ValueError If neither keras_model nor keras_model_path was given.
ValueError If both keras_model and keras_model_path were given.
ValueError If the keras_model_path is a GCS URI.
ValueError If keras_model has not been compiled.
ValueError If an invalid checkpoint_format was given. | tensorflow.compat.v1.keras.estimator.model_to_estimator |
Module: tf.compat.v1.keras.experimental Public API for tf.keras.experimental namespace. Classes class CosineDecay: A LearningRateSchedule that uses a cosine decay schedule. class CosineDecayRestarts: A LearningRateSchedule that uses a cosine decay schedule with restarts. class LinearCosineDecay: A LearningRateSchedule that uses a linear cosine decay schedule. class LinearModel: Linear Model for regression and classification problems. class NoisyLinearCosineDecay: A LearningRateSchedule that uses a noisy linear cosine decay schedule. class PeepholeLSTMCell: Equivalent to LSTMCell class but adds peephole connections. class SequenceFeatures: A layer for sequence input. class WideDeepModel: Wide & Deep Model for regression and classification problems. Functions export_saved_model(...): Exports a tf.keras.Model as a Tensorflow SavedModel. load_from_saved_model(...): Loads a keras Model from a SavedModel created by export_saved_model(). | tensorflow.compat.v1.keras.experimental |
tf.compat.v1.keras.experimental.export_saved_model Exports a tf.keras.Model as a Tensorflow SavedModel.
tf.compat.v1.keras.experimental.export_saved_model(
model, saved_model_path, custom_objects=None, as_text=False,
input_signature=None, serving_only=False
)
Note that at this time, subclassed models can only be saved using serving_only=True. The exported SavedModel is a standalone serialization of Tensorflow objects, and is supported by TF language APIs and the Tensorflow Serving system. To load the model, use the function tf.keras.experimental.load_from_saved_model. The SavedModel contains: a checkpoint containing the model weights; a SavedModel proto containing the Tensorflow backend graph (separate graphs are saved for prediction (serving), train, and evaluation; if the model has not been compiled, then only the graph computing predictions will be exported); and the model's json config (if the model is subclassed, this will only be included if the model's get_config() method is overwritten). Example:
import tensorflow as tf
# Create a tf.keras model.
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(1, input_shape=[10]))
model.summary()
# Save the tf.keras model in the SavedModel format.
path = '/tmp/simple_keras_model'
tf.keras.experimental.export_saved_model(model, path)
# Load the saved keras model back.
new_model = tf.keras.experimental.load_from_saved_model(path)
new_model.summary()
Args
model A tf.keras.Model to be saved. If the model is subclassed, the flag serving_only must be set to True.
saved_model_path a string specifying the path to the SavedModel directory.
custom_objects Optional dictionary mapping string names to custom classes or functions (e.g. custom loss functions).
as_text bool, False by default. Whether to write the SavedModel proto in text format. Currently unavailable in serving-only mode.
input_signature A possibly nested sequence of tf.TensorSpec objects, used to specify the expected model inputs. See tf.function for more details.
serving_only bool, False by default. When this is true, only the prediction graph is saved.
Raises
NotImplementedError If the model is a subclassed model, and serving_only is False.
ValueError If the input signature cannot be inferred from the model.
AssertionError If the SavedModel directory already exists and isn't empty. | tensorflow.compat.v1.keras.experimental.export_saved_model |
tf.compat.v1.keras.experimental.load_from_saved_model Loads a keras Model from a SavedModel created by export_saved_model().
tf.compat.v1.keras.experimental.load_from_saved_model(
saved_model_path, custom_objects=None
)
This function reinstantiates model state by: 1) loading model topology from json (this will eventually come from metagraph). 2) loading model weights from checkpoint. Example:
import tensorflow as tf
# Create a tf.keras model.
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(1, input_shape=[10]))
model.summary()
# Save the tf.keras model in the SavedModel format.
path = '/tmp/simple_keras_model'
tf.keras.experimental.export_saved_model(model, path)
# Load the saved keras model back.
new_model = tf.keras.experimental.load_from_saved_model(path)
new_model.summary()
Args
saved_model_path a string specifying the path to an existing SavedModel.
custom_objects Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization.
Returns a keras.Model instance. | tensorflow.compat.v1.keras.experimental.load_from_saved_model |
Module: tf.compat.v1.keras.initializers Keras initializer serialization / deserialization. Classes class Constant: Initializer that generates tensors with constant values. class Identity: Initializer that generates the identity matrix. class Initializer: Initializer base class: all Keras initializers inherit from this class. class Ones: Initializer that generates tensors initialized to 1. class Orthogonal: Initializer that generates an orthogonal matrix. class RandomNormal: Initializer that generates tensors with a normal distribution. class RandomUniform: Initializer that generates tensors with a uniform distribution. class TruncatedNormal: Initializer that generates a truncated normal distribution. class VarianceScaling: Initializer capable of adapting its scale to the shape of weights tensors. class Zeros: Initializer that generates tensors initialized to 0. class constant: Initializer that generates tensors with constant values. class glorot_normal: The Glorot normal initializer, also called Xavier normal initializer. class glorot_uniform: The Glorot uniform initializer, also called Xavier uniform initializer. class he_normal: Initializer capable of adapting its scale to the shape of weights tensors. class he_uniform: Initializer capable of adapting its scale to the shape of weights tensors. class identity: Initializer that generates the identity matrix. class lecun_normal: Initializer capable of adapting its scale to the shape of weights tensors. class lecun_uniform: Initializer capable of adapting its scale to the shape of weights tensors. class normal: Initializer that generates tensors with a normal distribution. class ones: Initializer that generates tensors initialized to 1. class orthogonal: Initializer that generates an orthogonal matrix. class random_normal: Initializer that generates tensors with a normal distribution. class random_uniform: Initializer that generates tensors with a uniform distribution. class truncated_normal: Initializer that generates a truncated normal distribution. class uniform: Initializer that generates tensors with a uniform distribution. class zeros: Initializer that generates tensors initialized to 0. Functions deserialize(...): Return an Initializer object from its config. get(...) serialize(...) | tensorflow.compat.v1.keras.initializers |
tf.compat.v1.keras.initializers.Constant Initializer that generates tensors with constant values. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.constant_initializer, tf.compat.v1.initializers.constant, tf.compat.v1.keras.initializers.constant
tf.compat.v1.keras.initializers.Constant(
value=0, dtype=tf.dtypes.float32, verify_shape=False
)
The resulting tensor is populated with values of type dtype, as specified by the argument value, following the desired shape of the new tensor (see examples below). The argument value can be a constant value, or a list of values of type dtype. If value is a list, then the length of the list must be less than or equal to the number of elements implied by the desired shape of the tensor. In the case where the total number of elements in value is less than the number of elements required by the tensor shape, the last element in value will be used to fill the remaining entries. If the total number of elements in value is greater than the number of elements required by the tensor shape, the initializer will raise a ValueError.
Args
value A Python scalar, list or tuple of values, or a N-dimensional numpy array. All elements of the initialized variable will be set to the corresponding value in the value argument.
dtype Default data type, used if no dtype argument is provided when calling the initializer.
verify_shape Boolean that enables verification of the shape of value. If True, the initializer will throw an error if the shape of value is not compatible with the shape of the initialized tensor.
Raises
TypeError If the input value is not one of the expected types. Examples: The following example can be rewritten using a numpy.ndarray instead of the value list, even reshaped, as shown in the two commented lines below the value list initialization.
value = [0, 1, 2, 3, 4, 5, 6, 7]
# value = np.array(value)
# value = value.reshape([2, 4])
init = tf.compat.v1.constant_initializer(value)
# fitting shape
with tf.compat.v1.Session():
x = tf.compat.v1.get_variable('x', shape=[2, 4], initializer=init)
x.initializer.run()
print(x.eval())
[[0. 1. 2. 3.]
[4. 5. 6. 7.]]
# Larger shape
with tf.compat.v1.Session():
y = tf.compat.v1.get_variable('y', shape=[3, 4], initializer=init)
y.initializer.run()
print(y.eval())
[[0. 1. 2. 3.]
[4. 5. 6. 7.]
[7. 7. 7. 7.]]
# Smaller shape
with tf.compat.v1.Session():
z = tf.compat.v1.get_variable('z', shape=[2, 3], initializer=init)
Traceback (most recent call last):
ValueError: Too many elements provided. Needed at most 6, but received 8
# Shape verification
init_verify = tf.compat.v1.constant_initializer(value, verify_shape=True)
with tf.compat.v1.Session():
u = tf.compat.v1.get_variable('u', shape=[3, 4],
initializer=init_verify)
Traceback (most recent call last):
TypeError: Expected Tensor's shape: (3, 4), got (8,).
Methods from_config View source
@classmethod
from_config(
config
)
Instantiates an initializer from a configuration dictionary. Example:
initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
Args
config A Python dictionary. It will typically be the output of get_config.
Returns An Initializer instance.
get_config View source
get_config()
Returns the configuration of the initializer as a JSON-serializable dict.
Returns A JSON-serializable Python dict.
__call__ View source
__call__(
shape, dtype=None, partition_info=None, verify_shape=None
)
Returns a tensor object initialized as specified by the initializer.
Args
shape Shape of the tensor.
dtype Optional dtype of the tensor. If not provided use the initializer dtype.
partition_info Optional information about the possible partitioning of a tensor. | tensorflow.compat.v1.keras.initializers.constant |
tf.compat.v1.keras.initializers.glorot_normal The Glorot normal initializer, also called Xavier normal initializer. Inherits From: VarianceScaling View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.glorot_normal_initializer, tf.compat.v1.initializers.glorot_normal
tf.compat.v1.keras.initializers.glorot_normal(
seed=None, dtype=tf.dtypes.float32
)
It draws samples from a truncated normal distribution centered on 0 with standard deviation (after truncation) given by stddev = sqrt(2 / (fan_in + fan_out)) where fan_in is the number of input units in the weight tensor and fan_out is the number of output units in the weight tensor.
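As a worked check of the formula, a hedged sketch (the sample standard deviation should land near the nominal value, modulo truncation effects):

```python
import tensorflow.compat.v1 as tf

init = tf.compat.v1.keras.initializers.glorot_normal(seed=42)

# For a [64, 32] dense kernel: fan_in = 64, fan_out = 32,
# so stddev = sqrt(2 / (64 + 32)) ~= 0.144.
w = init(shape=[64, 32])
with tf.compat.v1.Session() as sess:
    print(sess.run(w).std())  # close to 0.14
```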
Args
seed A Python integer. Used to create random seeds. See tf.compat.v1.set_random_seed for behavior.
dtype Default data type, used if no dtype argument is provided when calling the initializer. Only floating point types are supported. References: Glorot et al., 2010 (pdf) Methods from_config View source
@classmethod
from_config(
config
)
Instantiates an initializer from a configuration dictionary. Example:
initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
Args
config A Python dictionary. It will typically be the output of get_config.
Returns An Initializer instance.
get_config View source
get_config()
Returns the configuration of the initializer as a JSON-serializable dict.
Returns A JSON-serializable Python dict.
__call__ View source
__call__(
shape, dtype=None, partition_info=None
)
Returns a tensor object initialized as specified by the initializer.
Args
shape Shape of the tensor.
dtype Optional dtype of the tensor. If not provided use the initializer dtype.
partition_info Optional information about the possible partitioning of a tensor. | tensorflow.compat.v1.keras.initializers.glorot_normal |
tf.compat.v1.keras.initializers.glorot_uniform The Glorot uniform initializer, also called Xavier uniform initializer. Inherits From: VarianceScaling View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.glorot_uniform_initializer, tf.compat.v1.initializers.glorot_uniform
tf.compat.v1.keras.initializers.glorot_uniform(
seed=None, dtype=tf.dtypes.float32
)
It draws samples from a uniform distribution within [-limit, limit] where limit is sqrt(6 / (fan_in + fan_out)) where fan_in is the number of input units in the weight tensor and fan_out is the number of output units in the weight tensor.
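A small numeric check of the limit formula, as a hedged sketch:

```python
import tensorflow.compat.v1 as tf

init = tf.compat.v1.keras.initializers.glorot_uniform(seed=0)

# For a [64, 32] kernel: limit = sqrt(6 / (64 + 32)) = 0.25,
# so every sampled value should lie in [-0.25, 0.25].
w = init(shape=[64, 32])
with tf.compat.v1.Session() as sess:
    vals = sess.run(w)
print(vals.min() >= -0.25, vals.max() <= 0.25)  # True True
```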
Args
seed A Python integer. Used to create random seeds. See tf.compat.v1.set_random_seed for behavior.
dtype Default data type, used if no dtype argument is provided when calling the initializer. Only floating point types are supported. References: Glorot et al., 2010 (pdf) Methods from_config View source
@classmethod
from_config(
config
)
Instantiates an initializer from a configuration dictionary. Example:
initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
Args
config A Python dictionary. It will typically be the output of get_config.
Returns An Initializer instance.
get_config View source
get_config()
Returns the configuration of the initializer as a JSON-serializable dict.
Returns A JSON-serializable Python dict.
__call__ View source
__call__(
shape, dtype=None, partition_info=None
)
Returns a tensor object initialized as specified by the initializer.
Args
shape Shape of the tensor.
dtype Optional dtype of the tensor. If not provided use the initializer dtype.
partition_info Optional information about the possible partitioning of a tensor. | tensorflow.compat.v1.keras.initializers.glorot_uniform |
tf.compat.v1.keras.initializers.he_normal Initializer capable of adapting its scale to the shape of weights tensors. Inherits From: VarianceScaling
tf.compat.v1.keras.initializers.he_normal(
seed=None
)
With distribution="truncated_normal" or "untruncated_normal", samples are drawn from a truncated/untruncated normal distribution with a mean of zero and a standard deviation (after truncation, if used) of stddev = sqrt(scale / n), where n is: the number of input units in the weight tensor, if mode = "fan_in"; the number of output units, if mode = "fan_out"; the average of the numbers of input and output units, if mode = "fan_avg". With distribution="uniform", samples are drawn from a uniform distribution within [-limit, limit], with limit = sqrt(3 * scale / n).
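The he_/lecun_ shorthands in this module are commonly described as fixed VarianceScaling configurations; the correspondences below are assumptions based on that convention, not statements from this page:

```python
import tensorflow.compat.v1 as tf

VS = tf.compat.v1.keras.initializers.VarianceScaling

# Assumed equivalences (conventional, not taken from this page):
he_normal = VS(scale=2.0, mode='fan_in', distribution='truncated_normal')
he_uniform = VS(scale=2.0, mode='fan_in', distribution='uniform')
lecun_normal = VS(scale=1.0, mode='fan_in', distribution='truncated_normal')
lecun_uniform = VS(scale=1.0, mode='fan_in', distribution='uniform')
```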
Args
scale Scaling factor (positive float).
mode One of "fan_in", "fan_out", "fan_avg".
distribution Random distribution to use. One of "normal", "uniform".
seed A Python integer. Used to create random seeds. See tf.compat.v1.set_random_seed for behavior.
dtype Default data type, used if no dtype argument is provided when calling the initializer. Only floating point types are supported.
Raises
ValueError In case of an invalid value for the "scale", "mode" or "distribution" arguments. Methods from_config View source
@classmethod
from_config(
config
)
Instantiates an initializer from a configuration dictionary. Example:
initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
Args
config A Python dictionary. It will typically be the output of get_config.
Returns An Initializer instance.
get_config View source
get_config()
Returns the configuration of the initializer as a JSON-serializable dict.
Returns A JSON-serializable Python dict.
__call__ View source
__call__(
shape, dtype=None, partition_info=None
)
Returns a tensor object initialized as specified by the initializer.
Args
shape Shape of the tensor.
dtype Optional dtype of the tensor. If not provided use the initializer dtype.
partition_info Optional information about the possible partitioning of a tensor. | tensorflow.compat.v1.keras.initializers.he_normal |
tf.compat.v1.keras.initializers.he_uniform Initializer capable of adapting its scale to the shape of weights tensors. Inherits From: VarianceScaling
tf.compat.v1.keras.initializers.he_uniform(
seed=None
)
With distribution="truncated_normal" or "untruncated_normal", samples are drawn from a truncated/untruncated normal distribution with a mean of zero and a standard deviation (after truncation, if used) of stddev = sqrt(scale / n), where n is: the number of input units in the weight tensor, if mode = "fan_in"; the number of output units, if mode = "fan_out"; the average of the numbers of input and output units, if mode = "fan_avg". With distribution="uniform", samples are drawn from a uniform distribution within [-limit, limit], with limit = sqrt(3 * scale / n).
Args
scale Scaling factor (positive float).
mode One of "fan_in", "fan_out", "fan_avg".
distribution Random distribution to use. One of "normal", "uniform".
seed A Python integer. Used to create random seeds. See tf.compat.v1.set_random_seed for behavior.
dtype Default data type, used if no dtype argument is provided when calling the initializer. Only floating point types are supported.
Raises
ValueError In case of an invalid value for the "scale", "mode" or "distribution" arguments. Methods from_config View source
@classmethod
from_config(
config
)
Instantiates an initializer from a configuration dictionary. Example:
initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
Args
config A Python dictionary. It will typically be the output of get_config.
Returns An Initializer instance.
get_config View source
get_config()
Returns the configuration of the initializer as a JSON-serializable dict.
Returns A JSON-serializable Python dict.
__call__ View source
__call__(
shape, dtype=None, partition_info=None
)
Returns a tensor object initialized as specified by the initializer.
Args
shape Shape of the tensor.
dtype Optional dtype of the tensor. If not provided use the initializer dtype.
partition_info Optional information about the possible partitioning of a tensor. | tensorflow.compat.v1.keras.initializers.he_uniform |
tf.compat.v1.keras.initializers.Identity Initializer that generates the identity matrix. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.initializers.identity, tf.compat.v1.keras.initializers.identity
tf.compat.v1.keras.initializers.Identity(
gain=1.0, dtype=tf.dtypes.float32
)
Only use for 2D matrices.
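A brief sketch of the gain parameter (hedged; run in a v1 session):

```python
import tensorflow.compat.v1 as tf

init = tf.compat.v1.keras.initializers.Identity(gain=0.5)

# For a square 2D shape this yields 0.5 * I.
w = init(shape=[3, 3])
with tf.compat.v1.Session() as sess:
    print(sess.run(w))  # 0.5 on the diagonal, 0 elsewhere
```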
Args
gain Multiplicative factor to apply to the identity matrix.
dtype Default data type, used if no dtype argument is provided when calling the initializer. Only floating point types are supported. Methods from_config View source
@classmethod
from_config(
config
)
Instantiates an initializer from a configuration dictionary. Example:
initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
Args
config A Python dictionary. It will typically be the output of get_config.
Returns An Initializer instance.
get_config View source
get_config()
Returns the configuration of the initializer as a JSON-serializable dict.
Returns A JSON-serializable Python dict.
__call__ View source
__call__(
shape, dtype=None, partition_info=None
)
Returns a tensor object initialized as specified by the initializer.
Args
shape Shape of the tensor.
dtype Optional dtype of the tensor. If not provided use the initializer dtype.
partition_info Optional information about the possible partitioning of a tensor. | tensorflow.compat.v1.keras.initializers.identity |
tf.compat.v1.keras.initializers.lecun_normal Initializer capable of adapting its scale to the shape of weights tensors. Inherits From: VarianceScaling
tf.compat.v1.keras.initializers.lecun_normal(
seed=None
)
With distribution="truncated_normal" or "untruncated_normal", samples are drawn from a truncated/untruncated normal distribution with a mean of zero and a standard deviation (after truncation, if used) of stddev = sqrt(scale / n), where n is: the number of input units in the weight tensor, if mode = "fan_in"; the number of output units, if mode = "fan_out"; the average of the numbers of input and output units, if mode = "fan_avg". With distribution="uniform", samples are drawn from a uniform distribution within [-limit, limit], with limit = sqrt(3 * scale / n).
Args
scale Scaling factor (positive float).
mode One of "fan_in", "fan_out", "fan_avg".
distribution Random distribution to use. One of "normal", "uniform".
seed A Python integer. Used to create random seeds. See tf.compat.v1.set_random_seed for behavior.
dtype Default data type, used if no dtype argument is provided when calling the initializer. Only floating point types are supported.
Raises
ValueError In case of an invalid value for the "scale", "mode" or "distribution" arguments. Methods from_config View source
@classmethod
from_config(
config
)
Instantiates an initializer from a configuration dictionary. Example:
initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
Args
config A Python dictionary. It will typically be the output of get_config.
Returns An Initializer instance.
get_config View source
get_config()
Returns the configuration of the initializer as a JSON-serializable dict.
Returns A JSON-serializable Python dict.
__call__ View source
__call__(
shape, dtype=None, partition_info=None
)
Returns a tensor object initialized as specified by the initializer.
Args
shape Shape of the tensor.
dtype Optional dtype of the tensor. If not provided use the initializer dtype.
partition_info Optional information about the possible partitioning of a tensor. | tensorflow.compat.v1.keras.initializers.lecun_normal |
tf.compat.v1.keras.initializers.lecun_uniform Initializer capable of adapting its scale to the shape of weights tensors. Inherits From: VarianceScaling
tf.compat.v1.keras.initializers.lecun_uniform(
seed=None
)
With distribution="truncated_normal" or "untruncated_normal", samples are drawn from a truncated/untruncated normal distribution with a mean of zero and a standard deviation (after truncation, if used) of stddev = sqrt(scale / n), where n is: the number of input units in the weight tensor, if mode = "fan_in"; the number of output units, if mode = "fan_out"; the average of the numbers of input and output units, if mode = "fan_avg". With distribution="uniform", samples are drawn from a uniform distribution within [-limit, limit], with limit = sqrt(3 * scale / n).
Args
scale Scaling factor (positive float).
mode One of "fan_in", "fan_out", "fan_avg".
distribution Random distribution to use. One of "normal", "uniform".
seed A Python integer. Used to create random seeds. See tf.compat.v1.set_random_seed for behavior.
dtype Default data type, used if no dtype argument is provided when calling the initializer. Only floating point types are supported.
Raises
ValueError In case of an invalid value for the "scale", "mode" or "distribution" arguments. Methods from_config View source
@classmethod
from_config(
config
)
Instantiates an initializer from a configuration dictionary. Example:
initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
Args
config A Python dictionary. It will typically be the output of get_config.
Returns An Initializer instance.
get_config View source
get_config()
Returns the configuration of the initializer as a JSON-serializable dict.
Returns A JSON-serializable Python dict.
__call__ View source
__call__(
shape, dtype=None, partition_info=None
)
Returns a tensor object initialized as specified by the initializer.
Args
shape Shape of the tensor.
dtype Optional dtype of the tensor. If not provided use the initializer dtype.
partition_info Optional information about the possible partitioning of a tensor. | tensorflow.compat.v1.keras.initializers.lecun_uniform |
tf.compat.v1.keras.initializers.Ones Initializer that generates tensors initialized to 1. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.initializers.ones, tf.compat.v1.keras.initializers.ones, tf.compat.v1.ones_initializer
tf.compat.v1.keras.initializers.Ones(
dtype=tf.dtypes.float32
)
Methods from_config View source
@classmethod
from_config(
config
)
Instantiates an initializer from a configuration dictionary. Example:
initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
Args
config A Python dictionary. It will typically be the output of get_config.
Returns An Initializer instance.
get_config View source
get_config()
Returns the configuration of the initializer as a JSON-serializable dict.
Returns A JSON-serializable Python dict.
__call__ View source
__call__(
shape, dtype=None, partition_info=None
)
Returns a tensor object initialized as specified by the initializer.
Args
shape Shape of the tensor.
dtype Optional dtype of the tensor. If not provided use the initializer dtype.
partition_info Optional information about the possible partitioning of a tensor. | tensorflow.compat.v1.keras.initializers.ones |
tf.compat.v1.keras.initializers.Orthogonal Initializer that generates an orthogonal matrix. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.initializers.orthogonal, tf.compat.v1.keras.initializers.orthogonal, tf.compat.v1.orthogonal_initializer
tf.compat.v1.keras.initializers.Orthogonal(
gain=1.0, seed=None, dtype=tf.dtypes.float32
)
If the shape of the tensor to initialize is two-dimensional, it is initialized with an orthogonal matrix obtained from the QR decomposition of a matrix of random numbers drawn from a normal distribution. If the matrix has fewer rows than columns then the output will have orthogonal rows. Otherwise, the output will have orthogonal columns. If the shape of the tensor to initialize is more than two-dimensional, a matrix of shape (shape[0] * ... * shape[n - 2], shape[n - 1]) is initialized, where n is the length of the shape vector. The matrix is subsequently reshaped to give a tensor of the desired shape.
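A quick orthogonality check, as a hedged sketch:

```python
import numpy as np
import tensorflow.compat.v1 as tf

init = tf.compat.v1.keras.initializers.Orthogonal(seed=0)
w = init(shape=[4, 4])
with tf.compat.v1.Session() as sess:
    m = sess.run(w)

# Columns are orthonormal, so m.T @ m should be close to the identity.
print(np.allclose(m.T @ m, np.eye(4), atol=1e-5))  # True
```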
Args
gain Multiplicative factor to apply to the orthogonal matrix.
seed A Python integer. Used to create random seeds. See tf.compat.v1.set_random_seed for behavior.
dtype Default data type, used if no dtype argument is provided when calling the initializer. Only floating point types are supported. References: Saxe et al., 2014 (pdf) Methods from_config View source
@classmethod
from_config(
config
)
Instantiates an initializer from a configuration dictionary. Example:
initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
Args
config A Python dictionary. It will typically be the output of get_config.
Returns An Initializer instance.
get_config View source
get_config()
Returns the configuration of the initializer as a JSON-serializable dict.
Returns A JSON-serializable Python dict.
__call__ View source
__call__(
shape, dtype=None, partition_info=None
)
Returns a tensor object initialized as specified by the initializer.
Args
shape Shape of the tensor.
dtype Optional dtype of the tensor. If not provided use the initializer dtype.
partition_info Optional information about the possible partitioning of a tensor. | tensorflow.compat.v1.keras.initializers.orthogonal |
tf.compat.v1.keras.initializers.RandomNormal Initializer that generates tensors with a normal distribution. Inherits From: random_normal_initializer View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.initializers.normal, tf.compat.v1.keras.initializers.random_normal
tf.compat.v1.keras.initializers.RandomNormal(
mean=0.0, stddev=0.05, seed=None, dtype=tf.dtypes.float32
)
Args
mean a python scalar or a scalar tensor. Mean of the random values to generate.
stddev a python scalar or a scalar tensor. Standard deviation of the random values to generate.
seed A Python integer. Used to create random seeds. See tf.compat.v1.set_random_seed for behavior.
dtype Default data type, used if no dtype argument is provided when calling the initializer. Only floating point types are supported. Methods from_config View source
@classmethod
from_config(
config
)
Instantiates an initializer from a configuration dictionary. Example:
initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
Args
config A Python dictionary. It will typically be the output of get_config.
Returns An Initializer instance.
get_config View source
get_config()
Returns the configuration of the initializer as a JSON-serializable dict.
Returns A JSON-serializable Python dict.
__call__ View source
__call__(
shape, dtype=None, partition_info=None
)
Returns a tensor object initialized as specified by the initializer.
Args
shape Shape of the tensor.
dtype Optional dtype of the tensor. If not provided use the initializer dtype.
partition_info Optional information about the possible partitioning of a tensor. | tensorflow.compat.v1.keras.initializers.randomnormal |
tf.compat.v1.keras.initializers.RandomUniform Initializer that generates tensors with a uniform distribution. Inherits From: random_uniform_initializer View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.initializers.random_uniform, tf.compat.v1.keras.initializers.uniform
tf.compat.v1.keras.initializers.RandomUniform(
minval=-0.05, maxval=0.05, seed=None, dtype=tf.dtypes.float32
)
Args
minval A python scalar or a scalar tensor. Lower bound of the range of random values to generate.
maxval A python scalar or a scalar tensor. Upper bound of the range of random values to generate. Defaults to 0.05.
seed A Python integer. Used to create random seeds. See tf.compat.v1.set_random_seed for behavior.
dtype Default data type, used if no dtype argument is provided when calling the initializer. Methods from_config View source
@classmethod
from_config(
config
)
Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
Args
config A Python dictionary. It will typically be the output of get_config.
Returns An Initializer instance.
get_config View source
get_config()
Returns the configuration of the initializer as a JSON-serializable dict.
Returns A JSON-serializable Python dict.
__call__ View source
__call__(
shape, dtype=None, partition_info=None
)
Returns a tensor object initialized as specified by the initializer.
Args
shape Shape of the tensor.
dtype Optional dtype of the tensor. If not provided, the initializer dtype is used.
partition_info Optional information about the possible partitioning of a tensor. | tensorflow.compat.v1.keras.initializers.randomuniform |
tf.compat.v1.keras.initializers.TruncatedNormal Initializer that generates a truncated normal distribution. Inherits From: truncated_normal_initializer View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.keras.initializers.truncated_normal
tf.compat.v1.keras.initializers.TruncatedNormal(
mean=0.0, stddev=0.05, seed=None, dtype=tf.dtypes.float32
)
These values are similar to values from a random_normal_initializer except that values more than two standard deviations from the mean are discarded and re-drawn. This is the recommended initializer for neural network weights and filters.
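A short sketch of the truncation described above (assuming eager execution; the sample count and seed are arbitrary):
import tensorflow as tf
init = tf.compat.v1.keras.initializers.TruncatedNormal(mean=0.0, stddev=0.05, seed=1)
v = init(shape=(10000,))
# Every draw lies within two standard deviations of the mean.
print(bool(tf.reduce_all(tf.abs(v) <= 2 * 0.05)))  # True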
Args
mean a python scalar or a scalar tensor. Mean of the random values to generate.
stddev a python scalar or a scalar tensor. Standard deviation of the random values to generate.
seed A Python integer. Used to create random seeds. See tf.compat.v1.set_random_seed for behavior.
dtype Default data type, used if no dtype argument is provided when calling the initializer. Only floating point types are supported. Methods from_config View source
@classmethod
from_config(
config
)
Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
Args
config A Python dictionary. It will typically be the output of get_config.
Returns An Initializer instance.
get_config View source
get_config()
Returns the configuration of the initializer as a JSON-serializable dict.
Returns A JSON-serializable Python dict.
__call__ View source
__call__(
shape, dtype=None, partition_info=None
)
Returns a tensor object initialized as specified by the initializer.
Args
shape Shape of the tensor.
dtype Optional dtype of the tensor. If not provided, the initializer dtype is used.
partition_info Optional information about the possible partitioning of a tensor. | tensorflow.compat.v1.keras.initializers.truncatednormal |
tf.compat.v1.keras.initializers.VarianceScaling Initializer capable of adapting its scale to the shape of weights tensors. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.initializers.variance_scaling, tf.compat.v1.variance_scaling_initializer
tf.compat.v1.keras.initializers.VarianceScaling(
scale=1.0, mode='fan_in', distribution='truncated_normal',
seed=None, dtype=tf.dtypes.float32
)
With distribution="truncated_normal" or "untruncated_normal", samples are drawn from a truncated/untruncated normal distribution with a mean of zero and a standard deviation (after truncation, if used) of stddev = sqrt(scale / n), where n is:
the number of input units in the weight tensor, if mode = "fan_in"
the number of output units, if mode = "fan_out"
the average of the numbers of input and output units, if mode = "fan_avg"
With distribution="uniform", samples are drawn from a uniform distribution within [-limit, limit], with limit = sqrt(3 * scale / n).
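As a worked instance of the fan_in case above (the kernel shape is an arbitrary choice): a (64, 32) kernel has n = 64 input units, so scale=2.0 with mode="fan_in" gives stddev = sqrt(2.0 / 64) ≈ 0.177, which is He-style initialization:
import tensorflow as tf
init = tf.compat.v1.keras.initializers.VarianceScaling(
    scale=2.0, mode='fan_in', distribution='truncated_normal')
w = init(shape=(64, 32))  # drawn with stddev sqrt(2.0 / 64) before truncation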
Args
scale Scaling factor (positive float).
mode One of "fan_in", "fan_out", "fan_avg".
distribution Random distribution to use. One of "truncated_normal", "untruncated_normal", or "uniform".
seed A Python integer. Used to create random seeds. See tf.compat.v1.set_random_seed for behavior.
dtype Default data type, used if no dtype argument is provided when calling the initializer. Only floating point types are supported.
Raises
ValueError In case of an invalid value for the "scale", "mode", or "distribution" arguments. Methods from_config View source
@classmethod
from_config(
config
)
Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
Args
config A Python dictionary. It will typically be the output of get_config.
Returns An Initializer instance.
get_config View source
get_config()
Returns the configuration of the initializer as a JSON-serializable dict.
Returns A JSON-serializable Python dict.
__call__ View source
__call__(
shape, dtype=None, partition_info=None
)
Returns a tensor object initialized as specified by the initializer.
Args
shape Shape of the tensor.
dtype Optional dtype of the tensor. If not provided, the initializer dtype is used.
partition_info Optional information about the possible partitioning of a tensor. | tensorflow.compat.v1.keras.initializers.variancescaling |
tf.compat.v1.keras.initializers.Zeros Initializer that generates tensors initialized to 0. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.initializers.zeros, tf.compat.v1.keras.initializers.zeros, tf.compat.v1.zeros_initializer
tf.compat.v1.keras.initializers.Zeros(
dtype=tf.dtypes.float32
)
Methods from_config View source
@classmethod
from_config(
config
)
Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
Args
config A Python dictionary. It will typically be the output of get_config.
Returns An Initializer instance.
get_config View source
get_config()
Returns the configuration of the initializer as a JSON-serializable dict.
Returns A JSON-serializable Python dict.
__call__ View source
__call__(
shape, dtype=None, partition_info=None
)
Returns a tensor object initialized as specified by the initializer.
Args
shape Shape of the tensor.
dtype Optional dtype of the tensor. If not provided, the initializer dtype is used.
partition_info Optional information about the possible partitioning of a tensor. | tensorflow.compat.v1.keras.initializers.zeros |
Module: tf.compat.v1.keras.layers Keras layers API. Modules experimental module: Public API for tf.keras.layers.experimental namespace. Classes class AbstractRNNCell: Abstract object representing an RNN cell. class Activation: Applies an activation function to an output. class ActivityRegularization: Layer that applies an update to the cost function based on input activity. class Add: Layer that adds a list of inputs. class AdditiveAttention: Additive attention layer, a.k.a. Bahdanau-style attention. class AlphaDropout: Applies Alpha Dropout to the input. class Attention: Dot-product attention layer, a.k.a. Luong-style attention. class Average: Layer that averages a list of inputs element-wise. class AveragePooling1D: Average pooling for temporal data. class AveragePooling2D: Average pooling operation for spatial data. class AveragePooling3D: Average pooling operation for 3D data (spatial or spatio-temporal). class AvgPool1D: Average pooling for temporal data. class AvgPool2D: Average pooling operation for spatial data. class AvgPool3D: Average pooling operation for 3D data (spatial or spatio-temporal). class BatchNormalization: Layer that normalizes its inputs. class Bidirectional: Bidirectional wrapper for RNNs. class Concatenate: Layer that concatenates a list of inputs. class Conv1D: 1D convolution layer (e.g. temporal convolution). class Conv1DTranspose: Transposed convolution layer (sometimes called Deconvolution). class Conv2D: 2D convolution layer (e.g. spatial convolution over images). class Conv2DTranspose: Transposed convolution layer (sometimes called Deconvolution). class Conv3D: 3D convolution layer (e.g. spatial convolution over volumes). class Conv3DTranspose: Transposed convolution layer (sometimes called Deconvolution). class ConvLSTM2D: Convolutional LSTM. class Convolution1D: 1D convolution layer (e.g. temporal convolution). class Convolution1DTranspose: Transposed convolution layer (sometimes called Deconvolution). class Convolution2D: 2D convolution layer (e.g. spatial convolution over images). class Convolution2DTranspose: Transposed convolution layer (sometimes called Deconvolution). class Convolution3D: 3D convolution layer (e.g. spatial convolution over volumes). class Convolution3DTranspose: Transposed convolution layer (sometimes called Deconvolution). class Cropping1D: Cropping layer for 1D input (e.g. temporal sequence). class Cropping2D: Cropping layer for 2D input (e.g. picture). class Cropping3D: Cropping layer for 3D data (e.g. spatial or spatio-temporal). class CuDNNGRU: Fast GRU implementation backed by cuDNN. class CuDNNLSTM: Fast LSTM implementation backed by cuDNN. class Dense: Just your regular densely-connected NN layer. class DenseFeatures: A layer that produces a dense Tensor based on given feature_columns. class DepthwiseConv2D: Depthwise separable 2D convolution. class Dot: Layer that computes a dot product between samples in two tensors. class Dropout: Applies Dropout to the input. class ELU: Exponential Linear Unit. class Embedding: Turns positive integers (indexes) into dense vectors of fixed size. class Flatten: Flattens the input. Does not affect the batch size. class GRU: Gated Recurrent Unit - Cho et al. 2014. class GRUCell: Cell class for the GRU layer. class GaussianDropout: Apply multiplicative 1-centered Gaussian noise. class GaussianNoise: Apply additive zero-centered Gaussian noise. class GlobalAveragePooling1D: Global average pooling operation for temporal data.
class GlobalAveragePooling2D: Global average pooling operation for spatial data. class GlobalAveragePooling3D: Global Average pooling operation for 3D data. class GlobalAvgPool1D: Global average pooling operation for temporal data. class GlobalAvgPool2D: Global average pooling operation for spatial data. class GlobalAvgPool3D: Global Average pooling operation for 3D data. class GlobalMaxPool1D: Global max pooling operation for 1D temporal data. class GlobalMaxPool2D: Global max pooling operation for spatial data. class GlobalMaxPool3D: Global Max pooling operation for 3D data. class GlobalMaxPooling1D: Global max pooling operation for 1D temporal data. class GlobalMaxPooling2D: Global max pooling operation for spatial data. class GlobalMaxPooling3D: Global Max pooling operation for 3D data. class InputLayer: Layer to be used as an entry point into a Network (a graph of layers). class InputSpec: Specifies the rank, dtype and shape of every input to a layer. class LSTM: Long Short-Term Memory layer - Hochreiter 1997. class LSTMCell: Cell class for the LSTM layer. class Lambda: Wraps arbitrary expressions as a Layer object. class Layer: This is the class from which all layers inherit. class LayerNormalization: Layer normalization layer (Ba et al., 2016). class LeakyReLU: Leaky version of a Rectified Linear Unit. class LocallyConnected1D: Locally-connected layer for 1D inputs. class LocallyConnected2D: Locally-connected layer for 2D inputs. class Masking: Masks a sequence by using a mask value to skip timesteps. class MaxPool1D: Max pooling operation for 1D temporal data. class MaxPool2D: Max pooling operation for 2D spatial data. class MaxPool3D: Max pooling operation for 3D data (spatial or spatio-temporal). class MaxPooling1D: Max pooling operation for 1D temporal data. class MaxPooling2D: Max pooling operation for 2D spatial data. class MaxPooling3D: Max pooling operation for 3D data (spatial or spatio-temporal). class Maximum: Layer that computes the maximum (element-wise) of a list of inputs. class Minimum: Layer that computes the minimum (element-wise) of a list of inputs. class MultiHeadAttention: MultiHeadAttention layer. class Multiply: Layer that multiplies (element-wise) a list of inputs. class PReLU: Parametric Rectified Linear Unit. class Permute: Permutes the dimensions of the input according to a given pattern. class RNN: Base class for recurrent layers. class ReLU: Rectified Linear Unit activation function. class RepeatVector: Repeats the input n times. class Reshape: Layer that reshapes inputs into the given shape. class SeparableConv1D: Depthwise separable 1D convolution. class SeparableConv2D: Depthwise separable 2D convolution. class SeparableConvolution1D: Depthwise separable 1D convolution. class SeparableConvolution2D: Depthwise separable 2D convolution. class SimpleRNN: Fully-connected RNN where the output is to be fed back to input. class SimpleRNNCell: Cell class for SimpleRNN. class Softmax: Softmax activation function. class SpatialDropout1D: Spatial 1D version of Dropout. class SpatialDropout2D: Spatial 2D version of Dropout. class SpatialDropout3D: Spatial 3D version of Dropout. class StackedRNNCells: Wrapper allowing a stack of RNN cells to behave as a single cell. class Subtract: Layer that subtracts two inputs. class ThresholdedReLU: Thresholded Rectified Linear Unit. class TimeDistributed: This wrapper allows applying a layer to every temporal slice of an input. class UpSampling1D: Upsampling layer for 1D inputs. class UpSampling2D: Upsampling layer for 2D inputs.
class UpSampling3D: Upsampling layer for 3D inputs. class Wrapper: Abstract wrapper base class. class ZeroPadding1D: Zero-padding layer for 1D input (e.g. temporal sequence). class ZeroPadding2D: Zero-padding layer for 2D input (e.g. picture). class ZeroPadding3D: Zero-padding layer for 3D data (spatial or spatio-temporal). Functions Input(...): Input() is used to instantiate a Keras tensor. add(...): Functional interface to the tf.keras.layers.Add layer. average(...): Functional interface to the tf.keras.layers.Average layer. concatenate(...): Functional interface to the Concatenate layer. deserialize(...): Instantiates a layer from a config dictionary. disable_v2_dtype_behavior(...): Disables the V2 dtype behavior for Keras layers. dot(...): Functional interface to the Dot layer. enable_v2_dtype_behavior(...): Enable the V2 dtype behavior for Keras layers. maximum(...): Functional interface to compute the element-wise maximum of a list of inputs. minimum(...): Functional interface to the Minimum layer. multiply(...): Functional interface to the Multiply layer. serialize(...) subtract(...): Functional interface to the Subtract layer. | tensorflow.compat.v1.keras.layers
tf.compat.v1.keras.layers.BatchNormalization Layer that normalizes its inputs. Inherits From: Layer, Module
tf.compat.v1.keras.layers.BatchNormalization(
axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True,
beta_initializer='zeros', gamma_initializer='ones',
moving_mean_initializer='zeros',
moving_variance_initializer='ones', beta_regularizer=None,
gamma_regularizer=None, beta_constraint=None, gamma_constraint=None,
renorm=False, renorm_clipping=None, renorm_momentum=0.99, fused=None,
trainable=True, virtual_batch_size=None, adjustment=None, name=None, **kwargs
)
Batch normalization applies a transformation that maintains the mean output close to 0 and the output standard deviation close to 1. Importantly, batch normalization works differently during training and during inference. During training (i.e. when using fit() or when calling the layer/model with the argument training=True), the layer normalizes its output using the mean and standard deviation of the current batch of inputs. That is to say, for each channel being normalized, the layer returns (batch - mean(batch)) / sqrt(var(batch) + epsilon) * gamma + beta, where:
epsilon is a small constant (configurable as part of the constructor arguments)
gamma is a learned scaling factor (initialized as 1), which can be disabled by passing scale=False to the constructor.
beta is a learned offset factor (initialized as 0), which can be disabled by passing center=False to the constructor. During inference (i.e. when using evaluate() or predict(), or when calling the layer/model with the argument training=False, which is the default), the layer normalizes its output using a moving average of the mean and standard deviation of the batches it has seen during training. That is to say, it returns (batch - self.moving_mean) / sqrt(self.moving_var + epsilon) * gamma + beta. self.moving_mean and self.moving_var are non-trainable variables that are updated each time the layer is called in training mode, as such: moving_mean = moving_mean * momentum + mean(batch) * (1 - momentum) moving_var = moving_var * momentum + var(batch) * (1 - momentum) As such, the layer will only normalize its inputs during inference after having been trained on data that has similar statistics to the inference data.
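As a small numeric sketch of the moving-average update above (values are arbitrary): with momentum=0.9, moving_mean=0.0, and a batch mean of 1.0, the update gives moving_mean = 0.0 * 0.9 + 1.0 * 0.1 = 0.1. A minimal usage sketch of the two modes (assuming eager execution):
import numpy as np
import tensorflow as tf
bn = tf.compat.v1.keras.layers.BatchNormalization(momentum=0.9)
x = np.random.normal(loc=5.0, scale=2.0, size=(32, 4)).astype('float32')
y_train = bn(x, training=True)   # uses batch statistics and updates the moving stats
y_infer = bn(x, training=False)  # uses the accumulated moving statistics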
Arguments
axis Integer or a list of integers, the axis that should be normalized (typically the features axis). For instance, after a Conv2D layer with data_format="channels_first", set axis=1 in BatchNormalization.
momentum Momentum for the moving average.
epsilon Small float added to variance to avoid dividing by zero.
center If True, add offset of beta to normalized tensor. If False, beta is ignored.
scale If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling will be done by the next layer.
beta_initializer Initializer for the beta weight.
gamma_initializer Initializer for the gamma weight.
moving_mean_initializer Initializer for the moving mean.
moving_variance_initializer Initializer for the moving variance.
beta_regularizer Optional regularizer for the beta weight.
gamma_regularizer Optional regularizer for the gamma weight.
beta_constraint Optional constraint for the beta weight.
gamma_constraint Optional constraint for the gamma weight.
renorm Whether to use Batch Renormalization. This adds extra variables during training. The inference is the same for either value of this parameter.
renorm_clipping A dictionary that may map keys 'rmax', 'rmin', 'dmax' to scalar Tensors used to clip the renorm correction. The correction (r, d) is used as corrected_value = normalized_value * r + d, with r clipped to [rmin, rmax], and d to [-dmax, dmax]. Missing rmax, rmin, dmax are set to inf, 0, inf, respectively.
renorm_momentum Momentum used to update the moving means and standard deviations with renorm. Unlike momentum, this affects training and should be neither too small (which would add noise) nor too large (which would give stale estimates). Note that momentum is still applied to get the means and variances for inference.
fused if None or True, use a faster, fused implementation if possible. If False, use the system recommended implementation.
trainable Boolean, if True the variables will be marked as trainable.
virtual_batch_size An int. By default, virtual_batch_size is None, which means batch normalization is performed across the whole batch. When virtual_batch_size is not None, instead perform "Ghost Batch Normalization", which creates virtual sub-batches which are each normalized separately (with shared gamma, beta, and moving statistics). Must divide the actual batch size during execution.
adjustment A function taking the Tensor containing the (dynamic) shape of the input tensor and returning a pair (scale, bias) to apply to the normalized values (before gamma and beta), only during training. For example, if axis==-1, adjustment = lambda shape: ( tf.random.uniform(shape[-1:], 0.93, 1.07), tf.random.uniform(shape[-1:], -0.1, 0.1)) will scale the normalized value by up to 7% up or down, then shift the result by up to 0.1 (with independent scaling and bias for each feature but shared across all examples), and finally apply gamma and/or beta. If None, no adjustment is applied. Cannot be specified if virtual_batch_size is specified. Call arguments:
inputs: Input tensor (of any rank).
training: Python boolean indicating whether the layer should behave in training mode or in inference mode.
training=True: The layer will normalize its inputs using the mean and variance of the current batch of inputs.
training=False: The layer will normalize its inputs using the mean and variance of its moving statistics, learned during training.
Input shape: Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model. Output shape: Same shape as input. Reference:
Ioffe and Szegedy, 2015. | tensorflow.compat.v1.keras.layers.batchnormalization |
tf.compat.v1.keras.layers.CuDNNGRU Fast GRU implementation backed by cuDNN. Inherits From: RNN, Layer, Module
tf.compat.v1.keras.layers.CuDNNGRU(
units, kernel_initializer='glorot_uniform',
recurrent_initializer='orthogonal',
bias_initializer='zeros', kernel_regularizer=None,
recurrent_regularizer=None, bias_regularizer=None, activity_regularizer=None,
kernel_constraint=None, recurrent_constraint=None, bias_constraint=None,
return_sequences=False, return_state=False, go_backwards=False, stateful=False,
**kwargs
)
More information about cuDNN can be found on the NVIDIA developer website. Can only be run on GPU.
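A minimal usage sketch (assuming a CUDA-enabled GPU and eager execution; the shapes are arbitrary):
import tensorflow as tf
layer = tf.compat.v1.keras.layers.CuDNNGRU(32, return_sequences=True)
outputs = layer(tf.zeros([4, 10, 8]))  # (batch, timesteps, features) -> (4, 10, 32)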
Arguments
units Positive integer, dimensionality of the output space.
kernel_initializer Initializer for the kernel weights matrix, used for the linear transformation of the inputs.
recurrent_initializer Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state.
bias_initializer Initializer for the bias vector.
kernel_regularizer Regularizer function applied to the kernel weights matrix.
recurrent_regularizer Regularizer function applied to the recurrent_kernel weights matrix.
bias_regularizer Regularizer function applied to the bias vector.
activity_regularizer Regularizer function applied to the output of the layer (its "activation").
kernel_constraint Constraint function applied to the kernel weights matrix.
recurrent_constraint Constraint function applied to the recurrent_kernel weights matrix.
bias_constraint Constraint function applied to the bias vector.
return_sequences Boolean. Whether to return the last output in the output sequence, or the full sequence.
return_state Boolean. Whether to return the last state in addition to the output.
go_backwards Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.
stateful Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.
Attributes
cell
states
Methods reset_states View source
reset_states(
states=None
)
Reset the recorded states for the stateful RNN layer. Can only be used when the RNN layer is constructed with stateful = True. Args: states: Numpy arrays that contain the values for the initial states, which will be fed to the cell at the first time step. When the value is None, a zero-filled numpy array will be created based on the cell state size.
Raises
AttributeError When the RNN layer is not stateful.
ValueError When the batch size of the RNN layer is unknown.
ValueError When the input numpy array is not compatible with the RNN layer state, either size-wise or dtype-wise. | tensorflow.compat.v1.keras.layers.cudnngru
tf.compat.v1.keras.layers.CuDNNLSTM Fast LSTM implementation backed by cuDNN. Inherits From: RNN, Layer, Module
tf.compat.v1.keras.layers.CuDNNLSTM(
units, kernel_initializer='glorot_uniform',
recurrent_initializer='orthogonal',
bias_initializer='zeros', unit_forget_bias=True,
kernel_regularizer=None, recurrent_regularizer=None, bias_regularizer=None,
activity_regularizer=None, kernel_constraint=None, recurrent_constraint=None,
bias_constraint=None, return_sequences=False, return_state=False,
go_backwards=False, stateful=False, **kwargs
)
More information about cuDNN can be found on the NVIDIA developer website. Can only be run on GPU.
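A minimal usage sketch (assuming a CUDA-enabled GPU and eager execution; the shapes are arbitrary):
import tensorflow as tf
layer = tf.compat.v1.keras.layers.CuDNNLSTM(32, return_state=True)
output, h, c = layer(tf.zeros([4, 10, 8]))  # final output plus final hidden/cell states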
Arguments
units Positive integer, dimensionality of the output space.
kernel_initializer Initializer for the kernel weights matrix, used for the linear transformation of the inputs.
unit_forget_bias Boolean. If True, add 1 to the bias of the forget gate at initialization. Setting it to true will also force bias_initializer="zeros". This is recommended in Jozefowicz et al.
recurrent_initializer Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state.
bias_initializer Initializer for the bias vector.
kernel_regularizer Regularizer function applied to the kernel weights matrix.
recurrent_regularizer Regularizer function applied to the recurrent_kernel weights matrix.
bias_regularizer Regularizer function applied to the bias vector.
activity_regularizer Regularizer function applied to the output of the layer (its "activation").
kernel_constraint Constraint function applied to the kernel weights matrix.
recurrent_constraint Constraint function applied to the recurrent_kernel weights matrix.
bias_constraint Constraint function applied to the bias vector.
return_sequences Boolean. Whether to return the last output in the output sequence, or the full sequence.
return_state Boolean. Whether to return the last state in addition to the output.
go_backwards Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.
stateful Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.
Attributes
cell
states
Methods reset_states View source
reset_states(
states=None
)
Reset the recorded states for the stateful RNN layer. Can only be used when the RNN layer is constructed with stateful = True. Args: states: Numpy arrays that contain the values for the initial states, which will be fed to the cell at the first time step. When the value is None, a zero-filled numpy array will be created based on the cell state size.
Raises
AttributeError When the RNN layer is not stateful.
ValueError When the batch size of the RNN layer is unknown.
ValueError When the input numpy array is not compatible with the RNN layer state, either size-wise or dtype-wise. | tensorflow.compat.v1.keras.layers.cudnnlstm
tf.compat.v1.keras.layers.DenseFeatures A layer that produces a dense Tensor based on given feature_columns. Inherits From: Layer, Module
tf.compat.v1.keras.layers.DenseFeatures(
feature_columns, trainable=True, name=None, partitioner=None, **kwargs
)
Generally a single example in training data is described with FeatureColumns. At the first layer of the model, this column-oriented data should be converted to a single Tensor. This layer can be called multiple times with different features. This is the V1 version of this layer that uses variable_scopes or a partitioner to create variables, which works well with PartitionedVariables. Variable scopes are deprecated in V2, so the V2 version uses name_scopes instead, but name_scopes currently lack support for partitioned variables. Use this layer if you need partitioned variables, and use the partitioner argument if you have a Keras model and use tf.compat.v1.keras.estimator.model_to_estimator for training. Example: price = tf.feature_column.numeric_column('price')
keywords_embedded = tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_hash_bucket("keywords", 10000),
dimension=16)
columns = [price, keywords_embedded, ...]
partitioner = tf.compat.v1.fixed_size_partitioner(num_shards=4)
feature_layer = tf.compat.v1.keras.layers.DenseFeatures(
feature_columns=columns, partitioner=partitioner)
features = tf.io.parse_example(
..., features=tf.feature_column.make_parse_example_spec(columns))
dense_tensor = feature_layer(features)
for units in [128, 64, 32]:
dense_tensor = tf.compat.v1.keras.layers.Dense(
units, activation='relu')(dense_tensor)
prediction = tf.compat.v1.keras.layers.Dense(1)(dense_tensor)
Args
feature_columns An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from DenseColumn such as numeric_column, embedding_column, bucketized_column, indicator_column. If you have categorical features, you can wrap them with an embedding_column or indicator_column.
trainable Boolean, whether the layer's variables will be updated via gradient descent during training.
name Name to give to the DenseFeatures.
partitioner Partitioner for input layer. Defaults to None.
**kwargs Keyword arguments to construct a layer.
Raises
ValueError if an item in feature_columns is not a DenseColumn. | tensorflow.compat.v1.keras.layers.densefeatures |
tf.compat.v1.keras.layers.disable_v2_dtype_behavior Disables the V2 dtype behavior for Keras layers.
tf.compat.v1.keras.layers.disable_v2_dtype_behavior()
See tf.compat.v1.keras.layers.enable_v2_dtype_behavior. | tensorflow.compat.v1.keras.layers.disable_v2_dtype_behavior |
tf.compat.v1.keras.layers.enable_v2_dtype_behavior Enable the V2 dtype behavior for Keras layers.
tf.compat.v1.keras.layers.enable_v2_dtype_behavior()
By default, the V2 dtype behavior is enabled in TensorFlow 2, so this function is only useful if tf.compat.v1.disable_v2_behavior has been called. Since mixed precision requires V2 dtype behavior to be enabled, this function allows you to use mixed precision in Keras layers if disable_v2_behavior has been called. When enabled, the dtype of Keras layers defaults to floatx (which is typically float32) instead of None. In addition, layers will automatically cast floating-point inputs to the layer's dtype.
x = tf.ones((4, 4, 4, 4), dtype='float64')
layer = tf.keras.layers.Conv2D(filters=4, kernel_size=2)
print(layer.dtype) # float32 since V2 dtype behavior is enabled
float32
y = layer(x) # Layer casts inputs since V2 dtype behavior is enabled
print(y.dtype.name)
float32
A layer author can opt their layer out of the automatic input casting by passing autocast=False to the base Layer's constructor. This disables the autocasting part of the V2 behavior for that layer, but not the defaulting-to-floatx part of the V2 behavior. When a global tf.keras.mixed_precision.Policy is set, a Keras layer's dtype will default to the global policy instead of floatx, and layers will automatically cast inputs to the policy's compute_dtype. | tensorflow.compat.v1.keras.layers.enable_v2_dtype_behavior
Module: tf.compat.v1.keras.layers.experimental Public API for tf.keras.layers.experimental namespace. Modules preprocessing module: Public API for tf.keras.layers.experimental.preprocessing namespace. Classes class EinsumDense: A layer that uses tf.einsum as the backing computation. class RandomFourierFeatures: Layer that projects its inputs into a random feature space. | tensorflow.compat.v1.keras.layers.experimental |
Module: tf.compat.v1.keras.layers.experimental.preprocessing Public API for tf.keras.layers.experimental.preprocessing namespace. Classes class CategoryCrossing: Category crossing layer. class CategoryEncoding: CategoryEncoding layer. class CenterCrop: Crop the central portion of the images to target height and width. class Discretization: Buckets data into discrete ranges. class Hashing: Implements categorical feature hashing, also known as "hashing trick". class IntegerLookup: Maps integers from a vocabulary to integer indices. class Normalization: Feature-wise normalization of the data. class PreprocessingLayer: Base class for PreprocessingLayers. class RandomContrast: Adjust the contrast of an image or images by a random factor. class RandomCrop: Randomly crop the images to target height and width. class RandomFlip: Randomly flip each image horizontally and vertically. class RandomHeight: Randomly vary the height of a batch of images during training. class RandomRotation: Randomly rotate each image. class RandomTranslation: Randomly translate each image during training. class RandomWidth: Randomly vary the width of a batch of images during training. class RandomZoom: Randomly zoom each image during training. class Rescaling: Multiply inputs by scale and adds offset. class Resizing: Image resizing layer. class StringLookup: Maps strings from a vocabulary to integer indices. class TextVectorization: Text vectorization layer. | tensorflow.compat.v1.keras.layers.experimental.preprocessing |
tf.compat.v1.keras.layers.experimental.preprocessing.CategoryEncoding CategoryEncoding layer. Inherits From: CategoryEncoding, PreprocessingLayer, Layer, Module
tf.compat.v1.keras.layers.experimental.preprocessing.CategoryEncoding(
max_tokens=None, output_mode=BINARY, sparse=False, **kwargs
)
This layer provides options for condensing input data into denser representations. It accepts either integer values or strings as inputs, allows users to map those inputs into a contiguous integer space, and outputs either those integer values (one sample = 1D tensor of integer token indices) or a dense representation (one sample = 1D tensor of float values representing data about the sample's tokens). If desired, the user can call this layer's adapt() method on a dataset. When this layer is adapted, it will analyze the dataset, determine the frequency of individual integer or string values, and create a 'vocabulary' from them. This vocabulary can have unlimited size or be capped, depending on the configuration options for this layer; if there are more unique values in the input than the maximum vocabulary size, the most frequent terms will be used to create the vocabulary.
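A hedged usage sketch in count mode (assuming eager execution; the token values are arbitrary, and each output row counts how often each integer token occurs in the corresponding sample):
import tensorflow as tf
layer = tf.compat.v1.keras.layers.experimental.preprocessing.CategoryEncoding(
    max_tokens=4, output_mode='count')
layer([[0, 1], [0, 0], [1, 2], [3, 1]])
# -> [[1., 1., 0., 0.],
#     [2., 0., 0., 0.],
#     [0., 1., 1., 0.],
#     [0., 1., 0., 1.]]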
Attributes
max_elements The maximum size of the vocabulary for this layer. If None, there is no cap on the size of the vocabulary.
output_mode Optional specification for the output of the layer. Values can be "int", "binary", "count" or "tf-idf", configuring the layer as follows: "int": Outputs integer indices, one integer index per split string token. "binary": Outputs a single int array per batch, of either vocab_size or max_elements size, containing 1s in all elements where the token mapped to that index exists at least once in the batch item. "count": As "binary", but the int array contains a count of the number of times the token at that index appeared in the batch item. "tf-idf": As "binary", but the TF-IDF algorithm is applied to find the value in each token slot.
output_sequence_length Only valid in INT mode. If set, the output will have its time dimension padded or truncated to exactly output_sequence_length values, resulting in a tensor of shape [batch_size, output_sequence_length] regardless of the input shape.
pad_to_max_elements Only valid in "binary", "count", and "tf-idf" modes. If True, the output will have its feature axis padded to max_elements even if the number of unique values in the vocabulary is less than max_elements, resulting in a tensor of shape [batch_size, max_elements] regardless of vocabulary size. Defaults to False. Methods adapt View source
adapt(
data, reset_state=True
)
Fits the state of the preprocessing layer to the dataset. Overrides the default adapt method to apply relevant preprocessing to the inputs before passing to the combiner.
Arguments
data The data to train on. It can be passed either as a tf.data Dataset, or as a numpy array.
reset_state Optional argument specifying whether to clear the state of the layer at the start of the call to adapt. This must be True for this layer, which does not support repeated calls to adapt.
Raises
RuntimeError if the layer cannot be adapted at this time. set_num_elements View source
set_num_elements(
num_elements
)
set_tfidf_data View source
set_tfidf_data(
tfidf_data
) | tensorflow.compat.v1.keras.layers.experimental.preprocessing.categoryencoding |
tf.compat.v1.keras.layers.experimental.preprocessing.IntegerLookup Maps integers from a vocabulary to integer indices. Inherits From: IntegerLookup, PreprocessingLayer, Layer, Module
tf.compat.v1.keras.layers.experimental.preprocessing.IntegerLookup(
max_values=None, num_oov_indices=1, mask_value=0, oov_value=-1, vocabulary=None,
invert=False, **kwargs
)
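A hedged usage sketch with a preset vocabulary (the data is arbitrary): with the defaults above, index 0 is reserved for the mask value and index 1 for out-of-vocabulary values, so vocabulary terms start at index 2.
import tensorflow as tf
vocab = [12, 36, 1138, 42]
data = tf.constant([[12, 1138, 42], [42, 1000, 36]])
layer = tf.compat.v1.keras.layers.experimental.preprocessing.IntegerLookup(
    vocabulary=vocab)
layer(data)
# -> [[2, 4, 5],
#     [5, 1, 3]]  (1000 is out-of-vocabulary and maps to the OOV index 1)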
Methods adapt View source
adapt(
data, reset_state=True
)
Fits the state of the preprocessing layer to the dataset. Overrides the default adapt method to apply relevant preprocessing to the inputs before passing to the combiner.
Arguments
data The data to train on. It can be passed either as a tf.data Dataset, or as a numpy array.
reset_state Optional argument specifying whether to clear the state of the layer at the start of the call to adapt. This must be True for this layer, which does not support repeated calls to adapt. get_vocabulary View source
get_vocabulary()
set_vocabulary View source
set_vocabulary(
vocab
)
Sets vocabulary data for this layer with invert=False. This method sets the vocabulary for this layer directly, instead of analyzing a dataset through 'adapt'. It should be used whenever the vocab information is already known. If vocabulary data is already present in the layer, this method will replace it.
Arguments
vocab An array of string tokens.
Raises
ValueError If there are too many inputs, the inputs do not match, or input data is missing. vocab_size View source
vocab_size() | tensorflow.compat.v1.keras.layers.experimental.preprocessing.integerlookup |
tf.compat.v1.keras.layers.experimental.preprocessing.Normalization Feature-wise normalization of the data. Inherits From: Normalization, PreprocessingLayer, Layer, Module
tf.compat.v1.keras.layers.experimental.preprocessing.Normalization(
axis=-1, dtype=None, **kwargs
)
This layer will coerce its inputs into a distribution centered around 0 with standard deviation 1. It accomplishes this by precomputing the mean and variance of the data, and calling (input-mean)/sqrt(var) at runtime. What happens in adapt: Compute mean and variance of the data and store them as the layer's weights. adapt should be called before fit, evaluate, or predict. Examples: Calculate the mean and variance by analyzing the dataset in adapt.
adapt_data = np.array([[1.], [2.], [3.], [4.], [5.]], dtype=np.float32)
input_data = np.array([[1.], [2.], [3.]], np.float32)
layer = Normalization()
layer.adapt(adapt_data)
layer(input_data)
<tf.Tensor: shape=(3, 1), dtype=float32, numpy=
array([[-1.4142135 ],
[-0.70710677],
[ 0. ]], dtype=float32)>
Pass the mean and variance directly.
input_data = np.array([[1.], [2.], [3.]], np.float32)
layer = Normalization(mean=3., variance=2.)
layer(input_data)
<tf.Tensor: shape=(3, 1), dtype=float32, numpy=
array([[-1.4142135 ],
[-0.70710677],
[ 0. ]], dtype=float32)>
Attributes
axis Integer or tuple of integers, the axis or axes that should be "kept". These axes are not summed over when calculating the normalization statistics. By default the last axis, the features axis, is kept and any space or time axes are summed. Each element in the axes that are kept is normalized independently. If axis is set to 'None', the layer will perform scalar normalization (dividing the input by a single scalar value). The batch axis, 0, is always summed over (axis=0 is not allowed).
mean The mean value(s) to use during normalization. The passed value(s) will be broadcast to the shape of the kept axes above; if the value(s) cannot be broadcast, an error will be raised when this layer's build() method is called.
variance The variance value(s) to use during normalization. The passed value(s) will be broadcast to the shape of the kept axes above; if the value(s) cannot be broadcast, an error will be raised when this layer's build() method is called. Methods adapt View source
adapt(
data, reset_state=True
)
Fits the state of the preprocessing layer to the data being passed.
Arguments
data The data to train on. It can be passed either as a tf.data Dataset, or as a numpy array.
reset_state Optional argument specifying whether to clear the state of the layer at the start of the call to adapt, or whether to start from the existing state. Subclasses may choose to throw if reset_state is set to 'False'. | tensorflow.compat.v1.keras.layers.experimental.preprocessing.normalization |
tf.compat.v1.keras.layers.experimental.preprocessing.StringLookup Maps strings from a vocabulary to integer indices. Inherits From: StringLookup, PreprocessingLayer, Layer, Module
tf.compat.v1.keras.layers.experimental.preprocessing.StringLookup(
max_tokens=None, num_oov_indices=1, mask_token='',
oov_token='[UNK]', vocabulary=None, encoding=None, invert=False,
**kwargs
)
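A hedged usage sketch with a preset vocabulary (the data is arbitrary): with the defaults above, index 0 is reserved for the mask token '' and index 1 for the OOV token '[UNK]'.
import tensorflow as tf
vocab = ["a", "b", "c", "d"]
data = tf.constant([["a", "c", "d"], ["d", "z", "b"]])
layer = tf.compat.v1.keras.layers.experimental.preprocessing.StringLookup(
    vocabulary=vocab)
layer(data)
# -> [[2, 4, 5],
#     [5, 1, 3]]  ("z" is out-of-vocabulary and maps to the OOV index 1)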
Methods adapt View source
adapt(
data, reset_state=True
)
Fits the state of the preprocessing layer to the dataset. Overrides the default adapt method to apply relevant preprocessing to the inputs before passing to the combiner.
Arguments
data The data to train on. It can be passed either as a tf.data Dataset, or as a numpy array.
reset_state Optional argument specifying whether to clear the state of the layer at the start of the call to adapt. This must be True for this layer, which does not support repeated calls to adapt. get_vocabulary View source
get_vocabulary()
set_vocabulary View source
set_vocabulary(
vocab
)
Sets vocabulary data for this layer with invert=False. This method sets the vocabulary for this layer directly, instead of analyzing a dataset through 'adapt'. It should be used whenever the vocab information is already known. If vocabulary data is already present in the layer, this method will replace it.
Arguments
vocab An array of string tokens.
Raises
ValueError If there are too many inputs, the inputs do not match, or input data is missing. vocab_size View source
vocab_size() | tensorflow.compat.v1.keras.layers.experimental.preprocessing.stringlookup |
tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization Text vectorization layer. Inherits From: TextVectorization, PreprocessingLayer, Layer, Module
tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization(
max_tokens=None, standardize=text_vectorization.LOWER_AND_STRIP_PUNCTUATION,
split=text_vectorization.SPLIT_ON_WHITESPACE, ngrams=None,
output_mode=text_vectorization.INT, output_sequence_length=None,
pad_to_max_tokens=True, **kwargs
)
This layer has basic options for managing text in a Keras model. It transforms a batch of strings (one sample = one string) into either a list of token indices (one sample = 1D tensor of integer token indices) or a dense representation (one sample = 1D tensor of float values representing data about the sample's tokens). The processing of each sample contains the following steps:
1) standardize each sample (usually lowercasing + punctuation stripping)
2) split each sample into substrings (usually words)
3) recombine substrings into tokens (usually ngrams)
4) index tokens (associate a unique int value with each token)
5) transform each sample using this index, either into a vector of ints or a dense float vector.
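A hedged sketch of those steps in int mode (assuming eager execution; the corpus is arbitrary, and the exact indices depend on token frequencies in the adapted data):
import tensorflow as tf
layer = tf.compat.v1.keras.layers.experimental.preprocessing.TextVectorization(
    max_tokens=10, output_mode='int', output_sequence_length=4)
layer.adapt(["the quick brown fox", "the lazy dog"])
layer(["the fox"])
# -> a (1, 4) int tensor of token indices, zero-padded to length 4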
Attributes
max_tokens The maximum size of the vocabulary for this layer. If None, there is no cap on the size of the vocabulary.
standardize Optional specification for standardization to apply to the input text. Values can be None (no standardization), LOWER_AND_STRIP_PUNCTUATION (lowercase and remove punctuation) or a Callable.
split Optional specification for splitting the input text. Values can be None (no splitting), SPLIT_ON_WHITESPACE (split on ASCII whitespace), or a Callable.
ngrams Optional specification for ngrams to create from the possibly-split input text. Values can be None, an integer or tuple of integers; passing an integer will create ngrams up to that integer, and passing a tuple of integers will create ngrams for the specified values in the tuple. Passing None means that no ngrams will be created.
output_mode Optional specification for the output of the layer. Values can be INT, BINARY, COUNT or TFIDF, which control the outputs as follows: INT: Outputs integer indices, one integer index per split string token. BINARY: Outputs a single int array per batch, of either vocab_size or max_tokens size, containing 1s in all elements where the token mapped to that index exists at least once in the batch item. COUNT: As BINARY, but the int array contains a count of the number of times the token at that index appeared in the batch item. TFIDF: As BINARY, but the TF-IDF algorithm is applied to find the value in each token slot.
output_sequence_length Optional length for the output tensor. If set, the output will be padded or truncated to this value in INT mode.
pad_to_max_tokens If True, BINARY, COUNT, and TFIDF modes will have their outputs padded to max_tokens, even if the number of unique tokens in the vocabulary is less than max_tokens. Methods adapt View source
adapt(
data, reset_state=True
)
Fits the state of the preprocessing layer to the dataset. Overrides the default adapt method to apply relevant preprocessing to the inputs before passing to the combiner.
Arguments
data The data to train on. It can be passed either as a tf.data Dataset, as a NumPy array, a string tensor, or as a list of texts.
reset_state Optional argument specifying whether to clear the state of the layer at the start of the call to adapt. This must be True for this layer, which does not support repeated calls to adapt. get_vocabulary View source
get_vocabulary()
set_vocabulary View source
set_vocabulary(
vocab, df_data=None, oov_df_value=None
)
Sets vocabulary (and optionally document frequency) data for this layer. This method sets the vocabulary and DF data for this layer directly, instead of analyzing a dataset through 'adapt'. It should be used whenever the vocab (and optionally document frequency) information is already known. If vocabulary data is already present in the layer, this method will replace it.
Arguments
vocab An array of string tokens.
df_data An array of document frequency data. Only necessary if the layer output_mode is TFIDF.
oov_df_value The document frequency of the OOV token. Only necessary if output_mode is TFIDF.
Raises
ValueError If there are too many inputs, the inputs do not match, or input data is missing.
RuntimeError If the vocabulary cannot be set when this function is called. This happens in "binary", "count", and "tfidf" modes when "pad_to_max_tokens" is False and the layer itself has already been called. | tensorflow.compat.v1.keras.layers.experimental.preprocessing.textvectorization
tf.compat.v1.keras.layers.GRU Gated Recurrent Unit - Cho et al. 2014. Inherits From: RNN, Layer, Module
tf.compat.v1.keras.layers.GRU(
units, activation='tanh',
recurrent_activation='hard_sigmoid', use_bias=True,
kernel_initializer='glorot_uniform',
recurrent_initializer='orthogonal',
bias_initializer='zeros', kernel_regularizer=None,
recurrent_regularizer=None, bias_regularizer=None, activity_regularizer=None,
kernel_constraint=None, recurrent_constraint=None, bias_constraint=None,
dropout=0.0, recurrent_dropout=0.0, return_sequences=False, return_state=False,
go_backwards=False, stateful=False, unroll=False, reset_after=False, **kwargs
)
There are two variants. The default one is based on 1406.1078v3 and has the reset gate applied to the hidden state before matrix multiplication. The other one is based on the original 1406.1078v1 and has the order reversed. The second variant is compatible with CuDNNGRU (GPU-only) and allows inference on CPU; it therefore has separate biases for kernel and recurrent_kernel. To use this variant, set reset_after=True and recurrent_activation='sigmoid'.
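A minimal usage sketch of the CuDNN-compatible variant just described (assuming eager execution; the shapes are arbitrary):
import tensorflow as tf
gru = tf.compat.v1.keras.layers.GRU(
    4, reset_after=True, recurrent_activation='sigmoid')
output = gru(tf.random.normal([32, 10, 8]))  # -> shape (32, 4)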
Arguments
units Positive integer, dimensionality of the output space.
activation Activation function to use. Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (ie. "linear" activation: a(x) = x).
recurrent_activation Activation function to use for the recurrent step. Default: hard sigmoid (hard_sigmoid). If you pass None, no activation is applied (ie. "linear" activation: a(x) = x).
use_bias Boolean, whether the layer uses a bias vector.
kernel_initializer Initializer for the kernel weights matrix, used for the linear transformation of the inputs.
recurrent_initializer Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state.
bias_initializer Initializer for the bias vector.
kernel_regularizer Regularizer function applied to the kernel weights matrix.
recurrent_regularizer Regularizer function applied to the recurrent_kernel weights matrix.
bias_regularizer Regularizer function applied to the bias vector.
activity_regularizer Regularizer function applied to the output of the layer (its "activation").
kernel_constraint Constraint function applied to the kernel weights matrix.
recurrent_constraint Constraint function applied to the recurrent_kernel weights matrix.
bias_constraint Constraint function applied to the bias vector.
dropout Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
recurrent_dropout Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
return_sequences Boolean. Whether to return the last output in the output sequence, or the full sequence.
return_state Boolean. Whether to return the last state in addition to the output.
go_backwards Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.
stateful Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.
unroll Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed-up a RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.
time_major The shape format of the inputs and outputs tensors. If True, the inputs and outputs will be in shape (timesteps, batch, ...), whereas in the False case, it will be (batch, timesteps, ...). Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.
reset_after GRU convention (whether to apply reset gate after or before matrix multiplication). False = "before" (default), True = "after" (CuDNN compatible). Call arguments:
inputs: A 3D tensor.
mask: Binary tensor of shape (samples, timesteps) indicating whether a given timestep should be masked.
training: Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the cell when calling it. This is only relevant if dropout or recurrent_dropout is used.
initial_state: List of initial state tensors to be passed to the first call of the cell.
Attributes
activation
bias_constraint
bias_initializer
bias_regularizer
dropout
implementation
kernel_constraint
kernel_initializer
kernel_regularizer
recurrent_activation
recurrent_constraint
recurrent_dropout
recurrent_initializer
recurrent_regularizer
reset_after
states
units
use_bias
Methods reset_states View source
reset_states(
states=None
)
Reset the recorded states for the stateful RNN layer. Can only be used when the RNN layer is constructed with stateful = True. Args: states: Numpy arrays that contain the values for the initial states, which will be fed to the cell at the first time step. When the value is None, a zero-filled numpy array will be created based on the cell state size.
Raises
AttributeError When the RNN layer is not stateful.
ValueError When the batch size of the RNN layer is unknown.
ValueError When the input numpy array is not compatible with the RNN layer state, either size-wise or dtype-wise. | tensorflow.compat.v1.keras.layers.gru
tf.compat.v1.keras.layers.GRUCell Cell class for the GRU layer. Inherits From: Layer, Module
tf.compat.v1.keras.layers.GRUCell(
units, activation='tanh',
recurrent_activation='hard_sigmoid', use_bias=True,
kernel_initializer='glorot_uniform',
recurrent_initializer='orthogonal',
bias_initializer='zeros', kernel_regularizer=None,
recurrent_regularizer=None, bias_regularizer=None, kernel_constraint=None,
recurrent_constraint=None, bias_constraint=None, dropout=0.0,
recurrent_dropout=0.0, reset_after=False, **kwargs
)
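A minimal sketch of wrapping the cell in an RNN layer (assuming eager execution; the shapes are arbitrary):
import tensorflow as tf
cell = tf.compat.v1.keras.layers.GRUCell(4)
rnn = tf.keras.layers.RNN(cell)
output = rnn(tf.zeros([32, 10, 8]))  # -> shape (32, 4)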
Arguments
units Positive integer, dimensionality of the output space.
activation Activation function to use. Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (ie. "linear" activation: a(x) = x).
recurrent_activation Activation function to use for the recurrent step. Default: hard sigmoid (hard_sigmoid). If you pass None, no activation is applied (ie. "linear" activation: a(x) = x).
use_bias Boolean, whether the layer uses a bias vector.
kernel_initializer Initializer for the kernel weights matrix, used for the linear transformation of the inputs.
recurrent_initializer Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state.
bias_initializer Initializer for the bias vector.
kernel_regularizer Regularizer function applied to the kernel weights matrix.
recurrent_regularizer Regularizer function applied to the recurrent_kernel weights matrix.
bias_regularizer Regularizer function applied to the bias vector.
kernel_constraint Constraint function applied to the kernel weights matrix.
recurrent_constraint Constraint function applied to the recurrent_kernel weights matrix.
bias_constraint Constraint function applied to the bias vector.
dropout Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
recurrent_dropout Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
reset_after GRU convention (whether to apply reset gate after or before matrix multiplication). False = "before" (default), True = "after" (CuDNN compatible). Call arguments:
inputs: A 2D tensor.
states: List of state tensors corresponding to the previous timestep.
training: Python boolean indicating whether the layer should behave in training mode or in inference mode. Only relevant when dropout or recurrent_dropout is used. Methods get_dropout_mask_for_cell View source
get_dropout_mask_for_cell(
inputs, training, count=1
)
Get the dropout mask for the RNN cell's input. It will create a mask based on context if there is no existing cached mask. If a new mask is generated, it will update the cache in the cell.
Args
inputs The input tensor whose shape will be used to generate dropout mask.
training Boolean tensor, whether its in training mode, dropout will be ignored in non-training mode.
count Int, how many dropout masks will be generated. This is useful for cells that have internal weights fused together.
Returns List of mask tensor, generated or cached mask based on context.
get_initial_state View source
get_initial_state(
inputs=None, batch_size=None, dtype=None
)
get_recurrent_dropout_mask_for_cell View source
get_recurrent_dropout_mask_for_cell(
inputs, training, count=1
)
Get the recurrent dropout mask for the RNN cell. It will create a mask based on context if there is no existing cached mask. If a new mask is generated, it will update the cache in the cell.
Args
inputs The input tensor whose shape will be used to generate dropout mask.
training Boolean tensor, whether its in training mode, dropout will be ignored in non-training mode.
count Int, how many dropout masks will be generated. This is useful for cells that have internal weights fused together.
Returns List of mask tensor, generated or cached mask based on context.
reset_dropout_mask View source
reset_dropout_mask()
Reset the cached dropout masks if any. It is important for the RNN layer to invoke this in its call() method so that the cached mask is cleared before calling cell.call(). The mask should be cached across the timesteps within the same batch, but shouldn't be cached between batches. Otherwise it will introduce unreasonable bias against certain indices of data within the batch. reset_recurrent_dropout_mask View source
reset_recurrent_dropout_mask()
Reset the cached recurrent dropout masks if any. It is important for the RNN layer to invoke this in its call() method so that the cached mask is cleared before calling cell.call(). The mask should be cached across the timesteps within the same batch, but shouldn't be cached between batches. Otherwise it will introduce unreasonable bias against certain indices of data within the batch. | tensorflow.compat.v1.keras.layers.grucell
tf.compat.v1.keras.layers.LSTM Long Short-Term Memory layer - Hochreiter 1997. Inherits From: RNN, Layer, Module
tf.compat.v1.keras.layers.LSTM(
units, activation='tanh',
recurrent_activation='hard_sigmoid', use_bias=True,
kernel_initializer='glorot_uniform',
recurrent_initializer='orthogonal',
bias_initializer='zeros', unit_forget_bias=True,
kernel_regularizer=None, recurrent_regularizer=None, bias_regularizer=None,
activity_regularizer=None, kernel_constraint=None, recurrent_constraint=None,
bias_constraint=None, dropout=0.0, recurrent_dropout=0.0,
return_sequences=False, return_state=False, go_backwards=False, stateful=False,
unroll=False, **kwargs
)
Note that this cell is not optimized for performance on GPU. Please use tf.compat.v1.keras.layers.CuDNNLSTM for better performance on GPU.
Arguments
units Positive integer, dimensionality of the output space.
activation Activation function to use. Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (i.e. "linear" activation: a(x) = x).
recurrent_activation Activation function to use for the recurrent step. Default: hard sigmoid (hard_sigmoid). If you pass None, no activation is applied (i.e. "linear" activation: a(x) = x).
use_bias Boolean, whether the layer uses a bias vector.
kernel_initializer Initializer for the kernel weights matrix, used for the linear transformation of the inputs.
recurrent_initializer Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state.
bias_initializer Initializer for the bias vector.
unit_forget_bias Boolean. If True, add 1 to the bias of the forget gate at initialization. Setting it to true will also force bias_initializer="zeros". This is recommended in Jozefowicz et al., 2015.
kernel_regularizer Regularizer function applied to the kernel weights matrix.
recurrent_regularizer Regularizer function applied to the recurrent_kernel weights matrix.
bias_regularizer Regularizer function applied to the bias vector.
activity_regularizer Regularizer function applied to the output of the layer (its "activation").
kernel_constraint Constraint function applied to the kernel weights matrix.
recurrent_constraint Constraint function applied to the recurrent_kernel weights matrix.
bias_constraint Constraint function applied to the bias vector.
dropout Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
recurrent_dropout Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
return_sequences Boolean. Whether to return the last output in the output sequence, or the full sequence.
return_state Boolean. Whether to return the last state in addition to the output.
go_backwards Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.
stateful Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.
unroll Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed-up a RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.
time_major The shape format of the inputs and outputs tensors. If True, the inputs and outputs will be in shape (timesteps, batch, ...), whereas in the False case, it will be (batch, timesteps, ...). Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form. Call arguments:
inputs: A 3D tensor.
mask: Binary tensor of shape (samples, timesteps) indicating whether a given timestep should be masked.
training: Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the cell when calling it. This is only relevant if dropout or recurrent_dropout is used.
initial_state: List of initial state tensors to be passed to the first call of the cell.
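A minimal usage sketch (the shapes below are illustrative assumptions):
import tensorflow as tf

inputs = tf.keras.Input(shape=(10, 8))  # (timesteps, features)
lstm = tf.compat.v1.keras.layers.LSTM(4, return_sequences=True, return_state=True)
whole_seq_output, final_h, final_c = lstm(inputs)
# whole_seq_output: (batch, 10, 4); final_h and final_c: (batch, 4) each
model = tf.keras.Model(inputs, [whole_seq_output, final_h, final_c])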
Attributes
activation
bias_constraint
bias_initializer
bias_regularizer
dropout
implementation
kernel_constraint
kernel_initializer
kernel_regularizer
recurrent_activation
recurrent_constraint
recurrent_dropout
recurrent_initializer
recurrent_regularizer
states
unit_forget_bias
units
use_bias
Methods reset_states View source
reset_states(
states=None
)
Reset the recorded states for the stateful RNN layer. Can only be used when the RNN layer is constructed with stateful = True. Args: states: Numpy arrays that contain the values for the initial states, which will be fed to the cell at the first time step. When the value is None, zero-filled numpy arrays will be created based on the cell state size.
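A minimal sketch (the batch size, timesteps, and feature sizes below are assumptions; a stateful RNN requires a fixed batch size):
import numpy as np
import tensorflow as tf

lstm = tf.compat.v1.keras.layers.LSTM(4, stateful=True, batch_input_shape=(2, 10, 8))
model = tf.keras.Sequential([lstm])
model.predict(np.zeros((2, 10, 8)))  # builds the layer and records states
lstm.reset_states()                  # zero-fill the recorded states
lstm.reset_states(states=[np.ones((2, 4)), np.zeros((2, 4))])  # explicit [h, c] values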
Raises
AttributeError When the RNN layer is not stateful.
ValueError When the batch size of the RNN layer is unknown.
ValueError When the input numpy array is not compatible with the RNN layer state, either size-wise or dtype-wise. | tensorflow.compat.v1.keras.layers.lstm
tf.compat.v1.keras.layers.LSTMCell Cell class for the LSTM layer. Inherits From: Layer, Module
tf.compat.v1.keras.layers.LSTMCell(
units, activation='tanh',
recurrent_activation='hard_sigmoid', use_bias=True,
kernel_initializer='glorot_uniform',
recurrent_initializer='orthogonal',
bias_initializer='zeros', unit_forget_bias=True,
kernel_regularizer=None, recurrent_regularizer=None, bias_regularizer=None,
kernel_constraint=None, recurrent_constraint=None, bias_constraint=None,
dropout=0.0, recurrent_dropout=0.0, **kwargs
)
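A minimal sketch of wrapping the cell in a tf.keras.layers.RNN layer to process a whole sequence (the shapes below are assumptions):
import tensorflow as tf

cell = tf.compat.v1.keras.layers.LSTMCell(4)
layer = tf.keras.layers.RNN(cell)
inputs = tf.keras.Input(shape=(10, 8))  # (timesteps, features)
output = layer(inputs)                  # last output: (batch, 4)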
Arguments
units Positive integer, dimensionality of the output space.
activation Activation function to use. Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (i.e. "linear" activation: a(x) = x).
recurrent_activation Activation function to use for the recurrent step. Default: hard sigmoid (hard_sigmoid). If you pass None, no activation is applied (i.e. "linear" activation: a(x) = x).
use_bias Boolean, whether the layer uses a bias vector.
kernel_initializer Initializer for the kernel weights matrix, used for the linear transformation of the inputs.
recurrent_initializer Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state.
bias_initializer Initializer for the bias vector.
unit_forget_bias Boolean. If True, add 1 to the bias of the forget gate at initialization. Setting it to True will also force bias_initializer="zeros". This is recommended in Jozefowicz et al., 2015.
kernel_regularizer Regularizer function applied to the kernel weights matrix.
recurrent_regularizer Regularizer function applied to the recurrent_kernel weights matrix.
bias_regularizer Regularizer function applied to the bias vector.
kernel_constraint Constraint function applied to the kernel weights matrix.
recurrent_constraint Constraint function applied to the recurrent_kernel weights matrix.
bias_constraint Constraint function applied to the bias vector.
dropout Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
recurrent_dropout Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Call arguments:
inputs: A 2D tensor.
states: List of state tensors corresponding to the previous timestep.
training: Python boolean indicating whether the layer should behave in training mode or in inference mode. Only relevant when dropout or recurrent_dropout is used. Methods get_dropout_mask_for_cell View source
get_dropout_mask_for_cell(
inputs, training, count=1
)
Get the dropout mask for the RNN cell's input. It will create a mask based on context if there is no existing cached mask. If a new mask is generated, it will update the cache in the cell.
Args
inputs The input tensor whose shape will be used to generate the dropout mask.
training Boolean tensor, whether it is in training mode; dropout will be ignored in non-training mode.
count Int, how many dropout masks will be generated. This is useful for cells that have internal weights fused together.
Returns List of mask tensors, generated or cached masks based on context.
get_initial_state View source
get_initial_state(
inputs=None, batch_size=None, dtype=None
)
get_recurrent_dropout_mask_for_cell View source
get_recurrent_dropout_mask_for_cell(
inputs, training, count=1
)
Get the recurrent dropout mask for the RNN cell. It will create a mask based on context if there is no existing cached mask. If a new mask is generated, it will update the cache in the cell.
Args
inputs The input tensor whose shape will be used to generate the dropout mask.
training Boolean tensor, whether it is in training mode; dropout will be ignored in non-training mode.
count Int, how many dropout masks will be generated. This is useful for cells that have internal weights fused together.
Returns List of mask tensors, generated or cached masks based on context.
reset_dropout_mask View source
reset_dropout_mask()
Reset the cached dropout masks, if any. It is important for the RNN layer to invoke this in its call() method so that the cached mask is cleared before calling cell.call(). The mask should be cached across timesteps within the same batch, but shouldn't be cached between batches. Otherwise it will introduce unreasonable bias against certain indices of data within the batch. reset_recurrent_dropout_mask View source
reset_recurrent_dropout_mask()
Reset the cached recurrent dropout masks, if any. It is important for the RNN layer to invoke this in its call() method so that the cached mask is cleared before calling cell.call(). The mask should be cached across timesteps within the same batch, but shouldn't be cached between batches. Otherwise it will introduce unreasonable bias against certain indices of data within the batch. | tensorflow.compat.v1.keras.layers.lstmcell
Module: tf.compat.v1.keras.losses Built-in loss functions. Classes class BinaryCrossentropy: Computes the cross-entropy loss between true labels and predicted labels. class CategoricalCrossentropy: Computes the crossentropy loss between the labels and predictions. class CategoricalHinge: Computes the categorical hinge loss between y_true and y_pred. class CosineSimilarity: Computes the cosine similarity between labels and predictions. class Hinge: Computes the hinge loss between y_true and y_pred. class Huber: Computes the Huber loss between y_true and y_pred. class KLDivergence: Computes Kullback-Leibler divergence loss between y_true and y_pred. class LogCosh: Computes the logarithm of the hyperbolic cosine of the prediction error. class Loss: Loss base class. class MeanAbsoluteError: Computes the mean of absolute difference between labels and predictions. class MeanAbsolutePercentageError: Computes the mean absolute percentage error between y_true and y_pred. class MeanSquaredError: Computes the mean of squares of errors between labels and predictions. class MeanSquaredLogarithmicError: Computes the mean squared logarithmic error between y_true and y_pred. class Poisson: Computes the Poisson loss between y_true and y_pred. class SparseCategoricalCrossentropy: Computes the crossentropy loss between the labels and predictions. class SquaredHinge: Computes the squared hinge loss between y_true and y_pred. Functions KLD(...): Computes Kullback-Leibler divergence loss between y_true and y_pred. MAE(...): Computes the mean absolute error between labels and predictions. MAPE(...): Computes the mean absolute percentage error between y_true and y_pred. MSE(...): Computes the mean squared error between labels and predictions. MSLE(...): Computes the mean squared logarithmic error between y_true and y_pred. binary_crossentropy(...): Computes the binary crossentropy loss. categorical_crossentropy(...): Computes the categorical crossentropy loss. categorical_hinge(...): Computes the categorical hinge loss between y_true and y_pred. cosine(...): Computes the cosine similarity between labels and predictions. cosine_proximity(...): Computes the cosine similarity between labels and predictions. cosine_similarity(...): Computes the cosine similarity between labels and predictions. deserialize(...): Deserializes a serialized loss class/function instance. get(...): Retrieves a Keras loss as a function/Loss class instance. hinge(...): Computes the hinge loss between y_true and y_pred. kl_divergence(...): Computes Kullback-Leibler divergence loss between y_true and y_pred. kld(...): Computes Kullback-Leibler divergence loss between y_true and y_pred. kullback_leibler_divergence(...): Computes Kullback-Leibler divergence loss between y_true and y_pred. log_cosh(...): Logarithm of the hyperbolic cosine of the prediction error. logcosh(...): Logarithm of the hyperbolic cosine of the prediction error. mae(...): Computes the mean absolute error between labels and predictions. mape(...): Computes the mean absolute percentage error between y_true and y_pred. mean_absolute_error(...): Computes the mean absolute error between labels and predictions. mean_absolute_percentage_error(...): Computes the mean absolute percentage error between y_true and y_pred. mean_squared_error(...): Computes the mean squared error between labels and predictions. mean_squared_logarithmic_error(...): Computes the mean squared logarithmic error between y_true and y_pred. 
mse(...): Computes the mean squared error between labels and predictions. msle(...): Computes the mean squared logarithmic error between y_true and y_pred. poisson(...): Computes the Poisson loss between y_true and y_pred. serialize(...): Serializes loss function or Loss instance. sparse_categorical_crossentropy(...): Computes the sparse categorical crossentropy loss. squared_hinge(...): Computes the squared hinge loss between y_true and y_pred. | tensorflow.compat.v1.keras.losses |
Module: tf.compat.v1.keras.metrics Built-in metrics. Classes class AUC: Computes the approximate AUC (Area under the curve) via a Riemann sum. class Accuracy: Calculates how often predictions equal labels. class BinaryAccuracy: Calculates how often predictions match binary labels. class BinaryCrossentropy: Computes the crossentropy metric between the labels and predictions. class CategoricalAccuracy: Calculates how often predictions match one-hot labels. class CategoricalCrossentropy: Computes the crossentropy metric between the labels and predictions. class CategoricalHinge: Computes the categorical hinge metric between y_true and y_pred. class CosineSimilarity: Computes the cosine similarity between the labels and predictions. class FalseNegatives: Calculates the number of false negatives. class FalsePositives: Calculates the number of false positives. class Hinge: Computes the hinge metric between y_true and y_pred. class KLDivergence: Computes Kullback-Leibler divergence metric between y_true and y_pred. class LogCoshError: Computes the logarithm of the hyperbolic cosine of the prediction error. class Mean: Computes the (weighted) mean of the given values. class MeanAbsoluteError: Computes the mean absolute error between the labels and predictions. class MeanAbsolutePercentageError: Computes the mean absolute percentage error between y_true and y_pred. class MeanIoU: Computes the mean Intersection-Over-Union metric. class MeanRelativeError: Computes the mean relative error by normalizing with the given values. class MeanSquaredError: Computes the mean squared error between y_true and y_pred. class MeanSquaredLogarithmicError: Computes the mean squared logarithmic error between y_true and y_pred. class MeanTensor: Computes the element-wise (weighted) mean of the given tensors. class Metric: Encapsulates metric logic and state. class Poisson: Computes the Poisson metric between y_true and y_pred. class Precision: Computes the precision of the predictions with respect to the labels. class PrecisionAtRecall: Computes best precision where recall is >= specified value. class Recall: Computes the recall of the predictions with respect to the labels. class RecallAtPrecision: Computes best recall where precision is >= specified value. class RootMeanSquaredError: Computes root mean squared error metric between y_true and y_pred. class SensitivityAtSpecificity: Computes best sensitivity where specificity is >= specified value. class SparseCategoricalAccuracy: Calculates how often predictions match integer labels. class SparseCategoricalCrossentropy: Computes the crossentropy metric between the labels and predictions. class SparseTopKCategoricalAccuracy: Computes how often integer targets are in the top K predictions. class SpecificityAtSensitivity: Computes best specificity where sensitivity is >= specified value. class SquaredHinge: Computes the squared hinge metric between y_true and y_pred. class Sum: Computes the (weighted) sum of the given values. class TopKCategoricalAccuracy: Computes how often targets are in the top K predictions. class TrueNegatives: Calculates the number of true negatives. class TruePositives: Calculates the number of true positives. Functions KLD(...): Computes Kullback-Leibler divergence loss between y_true and y_pred. MAE(...): Computes the mean absolute error between labels and predictions. MAPE(...): Computes the mean absolute percentage error between y_true and y_pred. MSE(...): Computes the mean squared error between labels and predictions.
MSLE(...): Computes the mean squared logarithmic error between y_true and y_pred. binary_accuracy(...): Calculates how often predictions match binary labels. binary_crossentropy(...): Computes the binary crossentropy loss. categorical_accuracy(...): Calculates how often predictions match one-hot labels. categorical_crossentropy(...): Computes the categorical crossentropy loss. cosine(...): Computes the cosine similarity between labels and predictions. cosine_proximity(...): Computes the cosine similarity between labels and predictions. deserialize(...): Deserializes a serialized metric class/function instance. get(...): Retrieves a Keras metric as a function/Metric class instance. hinge(...): Computes the hinge loss between y_true and y_pred. kl_divergence(...): Computes Kullback-Leibler divergence loss between y_true and y_pred. kld(...): Computes Kullback-Leibler divergence loss between y_true and y_pred. kullback_leibler_divergence(...): Computes Kullback-Leibler divergence loss between y_true and y_pred. log_cosh(...): Logarithm of the hyperbolic cosine of the prediction error. logcosh(...): Logarithm of the hyperbolic cosine of the prediction error. mae(...): Computes the mean absolute error between labels and predictions. mape(...): Computes the mean absolute percentage error between y_true and y_pred. mean_absolute_error(...): Computes the mean absolute error between labels and predictions. mean_absolute_percentage_error(...): Computes the mean absolute percentage error between y_true and y_pred. mean_squared_error(...): Computes the mean squared error between labels and predictions. mean_squared_logarithmic_error(...): Computes the mean squared logarithmic error between y_true and y_pred. mse(...): Computes the mean squared error between labels and predictions. msle(...): Computes the mean squared logarithmic error between y_true and y_pred. poisson(...): Computes the Poisson loss between y_true and y_pred. serialize(...): Serializes metric function or Metric instance. sparse_categorical_accuracy(...): Calculates how often predictions match integer labels. sparse_categorical_crossentropy(...): Computes the sparse categorical crossentropy loss. sparse_top_k_categorical_accuracy(...): Computes how often integer targets are in the top K predictions. squared_hinge(...): Computes the squared hinge loss between y_true and y_pred. top_k_categorical_accuracy(...): Computes how often targets are in the top K predictions. | tensorflow.compat.v1.keras.metrics
Module: tf.compat.v1.keras.mixed_precision Keras mixed precision API. See the mixed precision guide to learn how to use the API. Modules experimental module: Public API for tf.keras.mixed_precision.experimental namespace. Classes class LossScaleOptimizer: An optimizer that applies loss scaling to prevent numeric underflow. | tensorflow.compat.v1.keras.mixed_precision |
Module: tf.compat.v1.keras.mixed_precision.experimental Public API for tf.keras.mixed_precision.experimental namespace. Classes class LossScaleOptimizer: A deprecated optimizer that applies loss scaling. | tensorflow.compat.v1.keras.mixed_precision.experimental
Module: tf.compat.v1.keras.models Code for model cloning, plus model-related API entries. Classes class Model: Model groups layers into an object with training and inference features. class Sequential: Sequential groups a linear stack of layers into a tf.keras.Model. Functions clone_model(...): Clone any Model instance. load_model(...): Loads a model saved via model.save(). model_from_config(...): Instantiates a Keras model from its config. model_from_json(...): Parses a JSON model configuration string and returns a model instance. model_from_yaml(...): Parses a yaml model configuration file and returns a model instance. save_model(...): Saves a model as a TensorFlow SavedModel or HDF5 file. | tensorflow.compat.v1.keras.models |
Module: tf.compat.v1.keras.optimizers Built-in optimizer classes. For more examples see the base class tf.keras.optimizers.Optimizer. Modules schedules module: Public API for tf.keras.optimizers.schedules namespace. Classes class Adadelta: Optimizer that implements the Adadelta algorithm. class Adagrad: Optimizer that implements the Adagrad algorithm. class Adam: Optimizer that implements the Adam algorithm. class Adamax: Optimizer that implements the Adamax algorithm. class Ftrl: Optimizer that implements the FTRL algorithm. class Nadam: Optimizer that implements the NAdam algorithm. class Optimizer: Base class for Keras optimizers. class RMSprop: Optimizer that implements the RMSprop algorithm. class SGD: Gradient descent (with momentum) optimizer. Functions deserialize(...): Inverse of the serialize function. get(...): Retrieves a Keras Optimizer instance. serialize(...) | tensorflow.compat.v1.keras.optimizers |
Module: tf.compat.v1.keras.optimizers.schedules Public API for tf.keras.optimizers.schedules namespace. Classes class ExponentialDecay: A LearningRateSchedule that uses an exponential decay schedule. class InverseTimeDecay: A LearningRateSchedule that uses an inverse time decay schedule. class LearningRateSchedule: A serializable learning rate decay schedule. class PiecewiseConstantDecay: A LearningRateSchedule that uses a piecewise constant decay schedule. class PolynomialDecay: A LearningRateSchedule that uses a polynomial decay schedule. Functions deserialize(...) serialize(...) | tensorflow.compat.v1.keras.optimizers.schedules |
Module: tf.compat.v1.keras.preprocessing Keras data preprocessing utils. Modules image module: Set of tools for real-time data augmentation on image data. sequence module: Utilities for preprocessing sequence data. text module: Utilities for text input preprocessing. | tensorflow.compat.v1.keras.preprocessing |
Module: tf.compat.v1.keras.preprocessing.image Set of tools for real-time data augmentation on image data. Classes class DirectoryIterator: Iterator capable of reading images from a directory on disk. class ImageDataGenerator: Generate batches of tensor image data with real-time data augmentation. class Iterator: Base class for image data iterators. class NumpyArrayIterator: Iterator yielding data from a Numpy array. Functions apply_affine_transform(...): Applies an affine transformation specified by the parameters given. apply_brightness_shift(...): Performs a brightness shift. apply_channel_shift(...): Performs a channel shift. array_to_img(...): Converts a 3D Numpy array to a PIL Image instance. img_to_array(...): Converts a PIL Image instance to a Numpy array. load_img(...): Loads an image into PIL format. random_brightness(...): Performs a random brightness shift. random_channel_shift(...): Performs a random channel shift. random_rotation(...): Performs a random rotation of a Numpy image tensor. random_shear(...): Performs a random spatial shear of a Numpy image tensor. random_shift(...): Performs a random spatial shift of a Numpy image tensor. random_zoom(...): Performs a random spatial zoom of a Numpy image tensor. save_img(...): Saves an image stored as a Numpy array to a path or file object. | tensorflow.compat.v1.keras.preprocessing.image |
Module: tf.compat.v1.keras.preprocessing.sequence Utilities for preprocessing sequence data. Classes class TimeseriesGenerator: Utility class for generating batches of temporal data. Functions make_sampling_table(...): Generates a word rank-based probabilistic sampling table. pad_sequences(...): Pads sequences to the same length. skipgrams(...): Generates skipgram word pairs. | tensorflow.compat.v1.keras.preprocessing.sequence |
Module: tf.compat.v1.keras.preprocessing.text Utilities for text input preprocessing. Classes class Tokenizer: Text tokenization utility class. Functions hashing_trick(...): Converts a text to a sequence of indexes in a fixed-size hashing space. one_hot(...): One-hot encodes a text into a list of word indexes of size n. text_to_word_sequence(...): Converts a text to a sequence of words (or tokens). tokenizer_from_json(...): Parses a JSON tokenizer configuration file and returns a tokenizer instance. | tensorflow.compat.v1.keras.preprocessing.text
Module: tf.compat.v1.keras.regularizers Built-in regularizers. Classes class L1: A regularizer that applies an L1 regularization penalty. class L1L2: A regularizer that applies both L1 and L2 regularization penalties. class L2: A regularizer that applies an L2 regularization penalty. class Regularizer: Regularizer base class. class l1: A regularizer that applies an L1 regularization penalty. class l2: A regularizer that applies an L2 regularization penalty. Functions deserialize(...) get(...): Retrieve a regularizer instance from a config or identifier. l1_l2(...): Create a regularizer that applies both L1 and L2 penalties. serialize(...) | tensorflow.compat.v1.keras.regularizers
Module: tf.compat.v1.keras.utils Public API for tf.keras.utils namespace. Classes class CustomObjectScope: Exposes custom classes/functions to Keras deserialization internals. class GeneratorEnqueuer: Builds a queue out of a data generator. class OrderedEnqueuer: Builds an Enqueuer from a Sequence. class Progbar: Displays a progress bar. class Sequence: Base object for fitting to a sequence of data, such as a dataset. class SequenceEnqueuer: Base class to enqueue inputs. class custom_object_scope: Exposes custom classes/functions to Keras deserialization internals. Functions deserialize_keras_object(...): Turns the serialized form of a Keras object back into an actual object. get_custom_objects(...): Retrieves a live reference to the global dictionary of custom objects. get_file(...): Downloads a file from a URL if it is not already in the cache. get_registered_name(...): Returns the name registered to an object within the Keras framework. get_registered_object(...): Returns the class associated with name if it is registered with Keras. get_source_inputs(...): Returns the list of input tensors necessary to compute tensor. model_to_dot(...): Converts a Keras model to dot format. normalize(...): Normalizes a Numpy array. plot_model(...): Converts a Keras model to dot format and saves it to a file. register_keras_serializable(...): Registers an object with the Keras serialization framework. serialize_keras_object(...): Serializes a Keras object into a JSON-compatible representation. to_categorical(...): Converts a class vector (integers) to binary class matrix. | tensorflow.compat.v1.keras.utils
Module: tf.compat.v1.keras.wrappers Public API for tf.keras.wrappers namespace. Modules scikit_learn module: Wrapper for using the Scikit-Learn API with Keras models. | tensorflow.compat.v1.keras.wrappers |
Module: tf.compat.v1.keras.wrappers.scikit_learn Wrapper for using the Scikit-Learn API with Keras models. Classes class KerasClassifier: Implementation of the scikit-learn classifier API for Keras. class KerasRegressor: Implementation of the scikit-learn regressor API for Keras. | tensorflow.compat.v1.keras.wrappers.scikit_learn |
Module: tf.compat.v1.layers Public API for tf.layers namespace. Modules experimental module: Public API for tf.layers.experimental namespace. Classes class AveragePooling1D: Average Pooling layer for 1D inputs. class AveragePooling2D: Average pooling layer for 2D inputs (e.g. images). class AveragePooling3D: Average pooling layer for 3D inputs (e.g. volumes). class BatchNormalization: Batch Normalization layer from (Ioffe et al., 2015). class Conv1D: 1D convolution layer (e.g. temporal convolution). class Conv2D: 2D convolution layer (e.g. spatial convolution over images). class Conv2DTranspose: Transposed 2D convolution layer (sometimes called 2D Deconvolution). class Conv3D: 3D convolution layer (e.g. spatial convolution over volumes). class Conv3DTranspose: Transposed 3D convolution layer (sometimes called 3D Deconvolution). class Dense: Densely-connected layer class. class Dropout: Applies Dropout to the input. class Flatten: Flattens an input tensor while preserving the batch axis (axis 0). class InputSpec: Specifies the rank, dtype and shape of every input to a layer. class Layer: Base layer class. class MaxPooling1D: Max Pooling layer for 1D inputs. class MaxPooling2D: Max pooling layer for 2D inputs (e.g. images). class MaxPooling3D: Max pooling layer for 3D inputs (e.g. volumes). class SeparableConv1D: Depthwise separable 1D convolution. class SeparableConv2D: Depthwise separable 2D convolution. Functions average_pooling1d(...): Average Pooling layer for 1D inputs. average_pooling2d(...): Average pooling layer for 2D inputs (e.g. images). average_pooling3d(...): Average pooling layer for 3D inputs (e.g. volumes). batch_normalization(...): Functional interface for the batch normalization layer from (Ioffe et al., 2015). conv1d(...): Functional interface for 1D convolution layer (e.g. temporal convolution). conv2d(...): Functional interface for the 2D convolution layer. conv2d_transpose(...): Functional interface for transposed 2D convolution layer. conv3d(...): Functional interface for the 3D convolution layer. conv3d_transpose(...): Functional interface for transposed 3D convolution layer. dense(...): Functional interface for the densely-connected layer. dropout(...): Applies Dropout to the input. flatten(...): Flattens an input tensor while preserving the batch axis (axis 0). max_pooling1d(...): Max Pooling layer for 1D inputs. max_pooling2d(...): Max pooling layer for 2D inputs (e.g. images). max_pooling3d(...): Max pooling layer for 3D inputs (e.g. volumes). separable_conv1d(...): Functional interface for the depthwise separable 1D convolution layer. separable_conv2d(...): Functional interface for the depthwise separable 2D convolution layer. | tensorflow.compat.v1.layers
tf.compat.v1.layers.AveragePooling1D Average Pooling layer for 1D inputs. Inherits From: AveragePooling1D, Layer, Layer, Module
tf.compat.v1.layers.AveragePooling1D(
pool_size, strides, padding='valid',
data_format='channels_last', name=None, **kwargs
)
Arguments
pool_size An integer or tuple/list of a single integer, representing the size of the pooling window.
strides An integer or tuple/list of a single integer, specifying the strides of the pooling operation.
padding A string. The padding method, either 'valid' or 'same'. Case-insensitive.
data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, length, channels) while channels_first corresponds to inputs with shape (batch, channels, length).
name A string, the name of the layer.
Attributes
graph
scope_name | tensorflow.compat.v1.layers.averagepooling1d |
tf.compat.v1.layers.AveragePooling2D Average pooling layer for 2D inputs (e.g. images). Inherits From: AveragePooling2D, Layer, Layer, Module
tf.compat.v1.layers.AveragePooling2D(
pool_size, strides, padding='valid',
data_format='channels_last', name=None, **kwargs
)
Arguments
pool_size An integer or tuple/list of 2 integers: (pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
strides An integer or tuple/list of 2 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
padding A string. The padding method, either 'valid' or 'same'. Case-insensitive.
data_format A string. The ordering of the dimensions in the inputs. channels_last (default) and channels_first are supported. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width).
name A string, the name of the layer.
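A minimal graph-mode sketch (the input shape is an assumption; like the functional tf.compat.v1.layers APIs, this layer targets graph mode):
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, [None, 28, 28, 3])  # (batch, height, width, channels)
pool = tf.compat.v1.layers.AveragePooling2D(pool_size=2, strides=2)
y = pool(x)  # halves the spatial dimensions: (None, 14, 14, 3)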
Attributes
graph
scope_name | tensorflow.compat.v1.layers.averagepooling2d |
tf.compat.v1.layers.AveragePooling3D Average pooling layer for 3D inputs (e.g. volumes). Inherits From: AveragePooling3D, Layer, Layer, Module
tf.compat.v1.layers.AveragePooling3D(
pool_size, strides, padding='valid',
data_format='channels_last', name=None, **kwargs
)
Arguments
pool_size An integer or tuple/list of 3 integers: (pool_depth, pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
strides An integer or tuple/list of 3 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
padding A string. The padding method, either 'valid' or 'same'. Case-insensitive.
data_format A string. The ordering of the dimensions in the inputs. channels_last (default) and channels_first are supported. channels_last corresponds to inputs with shape (batch, depth, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, depth, height, width).
name A string, the name of the layer.
Attributes
graph
scope_name | tensorflow.compat.v1.layers.averagepooling3d |
tf.compat.v1.layers.average_pooling1d Average Pooling layer for 1D inputs.
tf.compat.v1.layers.average_pooling1d(
inputs, pool_size, strides, padding='valid',
data_format='channels_last', name=None
)
Arguments
inputs The tensor over which to pool. Must have rank 3.
pool_size An integer or tuple/list of a single integer, representing the size of the pooling window.
strides An integer or tuple/list of a single integer, specifying the strides of the pooling operation.
padding A string. The padding method, either 'valid' or 'same'. Case-insensitive.
data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, length, channels) while channels_first corresponds to inputs with shape (batch, channels, length).
name A string, the name of the layer.
Returns The output tensor, of rank 3.
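A minimal graph-mode sketch (shapes are assumptions; note the eager-execution restriction below):
import tensorflow as tf
tf.compat.v1.disable_eager_execution()  # this function raises ValueError under eager execution

x = tf.compat.v1.placeholder(tf.float32, [None, 100, 8])  # (batch, length, channels)
y = tf.compat.v1.layers.average_pooling1d(x, pool_size=2, strides=2)
# With 'valid' padding the pooled length is 100 // 2 = 50, so y is (None, 50, 8).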
Raises
ValueError if eager execution is enabled. | tensorflow.compat.v1.layers.average_pooling1d |
tf.compat.v1.layers.average_pooling2d Average pooling layer for 2D inputs (e.g. images).
tf.compat.v1.layers.average_pooling2d(
inputs, pool_size, strides, padding='valid',
data_format='channels_last', name=None
)
Arguments
inputs The tensor over which to pool. Must have rank 4.
pool_size An integer or tuple/list of 2 integers: (pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
strides An integer or tuple/list of 2 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
padding A string. The padding method, either 'valid' or 'same'. Case-insensitive.
data_format A string. The ordering of the dimensions in the inputs. channels_last (default) and channels_first are supported. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width).
name A string, the name of the layer.
Returns Output tensor.
Raises
ValueError if eager execution is enabled. | tensorflow.compat.v1.layers.average_pooling2d |
tf.compat.v1.layers.average_pooling3d Average pooling layer for 3D inputs (e.g. volumes).
tf.compat.v1.layers.average_pooling3d(
inputs, pool_size, strides, padding='valid',
data_format='channels_last', name=None
)
Arguments
inputs The tensor over which to pool. Must have rank 5.
pool_size An integer or tuple/list of 3 integers: (pool_depth, pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
strides An integer or tuple/list of 3 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
padding A string. The padding method, either 'valid' or 'same'. Case-insensitive.
data_format A string. The ordering of the dimensions in the inputs. channels_last (default) and channels_first are supported. channels_last corresponds to inputs with shape (batch, depth, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, depth, height, width).
name A string, the name of the layer.
Returns Output tensor.
Raises
ValueError if eager execution is enabled. | tensorflow.compat.v1.layers.average_pooling3d |
tf.compat.v1.layers.BatchNormalization Batch Normalization layer from (Ioffe et al., 2015). Inherits From: BatchNormalization, Layer, Layer, Module
tf.compat.v1.layers.BatchNormalization(
axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True,
beta_initializer=tf.zeros_initializer(),
gamma_initializer=tf.ones_initializer(),
moving_mean_initializer=tf.zeros_initializer(),
moving_variance_initializer=tf.ones_initializer(), beta_regularizer=None,
gamma_regularizer=None, beta_constraint=None, gamma_constraint=None,
renorm=False, renorm_clipping=None, renorm_momentum=0.99, fused=None,
trainable=True, virtual_batch_size=None, adjustment=None, name=None, **kwargs
)
Keras APIs handle BatchNormalization updates to the moving_mean and moving_variance as part of their fit() and evaluate() loops. However, if a custom training loop is used with an instance of Model, these updates need to be explicitly included. Here's a simple example of how it can be done:
# model is an instance of Model that contains a BatchNormalization layer.
# `features`, `optimizer`, and `loss` are assumed to be defined elsewhere.
update_ops = model.get_updates_for(None) + model.get_updates_for(features)
train_op = optimizer.minimize(loss)
# Group the update ops with the training op so they run together.
train_op = tf.group([train_op, update_ops])
Arguments
axis An int or list of int, the axis or axes that should be normalized, typically the features axis/axes. For instance, after a Conv2D layer with data_format="channels_first", set axis=1. If a list of axes is provided, each axis in axis will be normalized simultaneously. Default is -1 which uses the last axis. Note: when using multi-axis batch norm, the beta, gamma, moving_mean, and moving_variance variables are the same rank as the input Tensor, with dimension size 1 in all reduced (non-axis) dimensions.
momentum Momentum for the moving average.
epsilon Small float added to variance to avoid dividing by zero.
center If True, add offset of beta to normalized tensor. If False, beta is ignored.
scale If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling can be done by the next layer.
beta_initializer Initializer for the beta weight.
gamma_initializer Initializer for the gamma weight.
moving_mean_initializer Initializer for the moving mean.
moving_variance_initializer Initializer for the moving variance.
beta_regularizer Optional regularizer for the beta weight.
gamma_regularizer Optional regularizer for the gamma weight.
beta_constraint An optional projection function to be applied to the beta weight after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
gamma_constraint An optional projection function to be applied to the gamma weight after being updated by an Optimizer.
renorm Whether to use Batch Renormalization (Ioffe, 2017). This adds extra variables during training. The inference is the same for either value of this parameter.
renorm_clipping A dictionary that may map keys 'rmax', 'rmin', 'dmax' to scalar Tensors used to clip the renorm correction. The correction (r, d) is used as corrected_value = normalized_value * r + d, with r clipped to [rmin, rmax], and d to [-dmax, dmax]. Missing rmax, rmin, dmax are set to inf, 0, inf, respectively.
renorm_momentum Momentum used to update the moving means and standard deviations with renorm. Unlike momentum, this affects training and should be neither too small (which would add noise) nor too large (which would give stale estimates). Note that momentum is still applied to get the means and variances for inference.
fused if None or True, use a faster, fused implementation if possible. If False, use the system recommended implementation.
trainable Boolean, if True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
virtual_batch_size An int. By default, virtual_batch_size is None, which means batch normalization is performed across the whole batch. When virtual_batch_size is not None, instead perform "Ghost Batch Normalization", which creates virtual sub-batches which are each normalized separately (with shared gamma, beta, and moving statistics). Must divide the actual batch size during execution.
adjustment A function taking the Tensor containing the (dynamic) shape of the input tensor and returning a pair (scale, bias) to apply to the normalized values (before gamma and beta), only during training. For example, if axis==-1, adjustment = lambda shape: ( tf.random.uniform(shape[-1:], 0.93, 1.07), tf.random.uniform(shape[-1:], -0.1, 0.1)) will scale the normalized value by up to 7% up or down, then shift the result by up to 0.1 (with independent scaling and bias for each feature but shared across all examples), and finally apply gamma and/or beta. If None, no adjustment is applied. Cannot be specified if virtual_batch_size is specified.
name A string, the name of the layer. References: Batch Normalization - Accelerating Deep Network Training by Reducing Internal Covariate Shift: Ioffe et al., 2015 (pdf) Batch Renormalization - Towards Reducing Minibatch Dependence in Batch-Normalized Models: Ioffe, 2017 (pdf)
Attributes
graph
scope_name | tensorflow.compat.v1.layers.batchnormalization |
tf.compat.v1.layers.batch_normalization Functional interface for the batch normalization layer from (Ioffe et al., 2015).
tf.compat.v1.layers.batch_normalization(
inputs, axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True,
beta_initializer=tf.zeros_initializer(),
gamma_initializer=tf.ones_initializer(),
moving_mean_initializer=tf.zeros_initializer(),
moving_variance_initializer=tf.ones_initializer(), beta_regularizer=None,
gamma_regularizer=None, beta_constraint=None, gamma_constraint=None,
training=False, trainable=True, name=None, reuse=None, renorm=False,
renorm_clipping=None, renorm_momentum=0.99, fused=None, virtual_batch_size=None,
adjustment=None
)
Note: when training, the moving_mean and moving_variance need to be updated. By default the update ops are placed in tf.GraphKeys.UPDATE_OPS, so they need to be executed alongside the train_op. Also, be sure to add any batch_normalization ops before getting the update_ops collection. Otherwise, update_ops will be empty, and training/inference will not work properly. For example:
x_norm = tf.compat.v1.layers.batch_normalization(x, training=training)
# ...
update_ops = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS)
train_op = optimizer.minimize(loss)
train_op = tf.group([train_op, update_ops])
Arguments
inputs Tensor input.
axis An int, the axis that should be normalized (typically the features axis). For instance, after a Convolution2D layer with data_format="channels_first", set axis=1 in BatchNormalization.
momentum Momentum for the moving average.
epsilon Small float added to variance to avoid dividing by zero.
center If True, add offset of beta to normalized tensor. If False, beta is ignored.
scale If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling can be done by the next layer.
beta_initializer Initializer for the beta weight.
gamma_initializer Initializer for the gamma weight.
moving_mean_initializer Initializer for the moving mean.
moving_variance_initializer Initializer for the moving variance.
beta_regularizer Optional regularizer for the beta weight.
gamma_regularizer Optional regularizer for the gamma weight.
beta_constraint An optional projection function to be applied to the beta weight after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
gamma_constraint An optional projection function to be applied to the gamma weight after being updated by an Optimizer.
training Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (normalized with statistics of the current batch) or in inference mode (normalized with moving statistics). NOTE: make sure to set this parameter correctly, or else your training/inference will not work properly.
trainable Boolean, if True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
name String, the name of the layer.
reuse Boolean, whether to reuse the weights of a previous layer by the same name.
renorm Whether to use Batch Renormalization (Ioffe, 2017). This adds extra variables during training. The inference is the same for either value of this parameter.
renorm_clipping A dictionary that may map keys 'rmax', 'rmin', 'dmax' to scalar Tensors used to clip the renorm correction. The correction (r, d) is used as corrected_value = normalized_value * r + d, with r clipped to [rmin, rmax], and d to [-dmax, dmax]. Missing rmax, rmin, dmax are set to inf, 0, inf, respectively.
renorm_momentum Momentum used to update the moving means and standard deviations with renorm. Unlike momentum, this affects training and should be neither too small (which would add noise) nor too large (which would give stale estimates). Note that momentum is still applied to get the means and variances for inference.
fused if None or True, use a faster, fused implementation if possible. If False, use the system recommended implementation.
virtual_batch_size An int. By default, virtual_batch_size is None, which means batch normalization is performed across the whole batch. When virtual_batch_size is not None, instead perform "Ghost Batch Normalization", which creates virtual sub-batches which are each normalized separately (with shared gamma, beta, and moving statistics). Must divide the actual batch size during execution.
adjustment A function taking the Tensor containing the (dynamic) shape of the input tensor and returning a pair (scale, bias) to apply to the normalized values (before gamma and beta), only during training. For example, if axis==-1, adjustment = lambda shape: ( tf.random.uniform(shape[-1:], 0.93, 1.07), tf.random.uniform(shape[-1:], -0.1, 0.1)) will scale the normalized value by up to 7% up or down, then shift the result by up to 0.1 (with independent scaling and bias for each feature but shared across all examples), and finally apply gamma and/or beta. If None, no adjustment is applied. Cannot be specified if virtual_batch_size is specified.
Returns Output tensor.
Raises
ValueError if eager execution is enabled. References: Batch Normalization - Accelerating Deep Network Training by Reducing Internal Covariate Shift: Ioffe et al., 2015 (pdf) Batch Renormalization - Towards Reducing Minibatch Dependence in Batch-Normalized Models: Ioffe, 2017 (pdf) | tensorflow.compat.v1.layers.batch_normalization |
tf.compat.v1.layers.Conv1D 1D convolution layer (e.g. temporal convolution). Inherits From: Conv1D, Layer, Layer, Module
tf.compat.v1.layers.Conv1D(
filters, kernel_size, strides=1, padding='valid',
data_format='channels_last', dilation_rate=1, activation=None,
use_bias=True, kernel_initializer=None, bias_initializer=tf.zeros_initializer(),
kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None,
kernel_constraint=None, bias_constraint=None, trainable=True, name=None,
**kwargs
)
This layer creates a convolution kernel that is convolved (actually cross-correlated) with the layer input to produce a tensor of outputs. If use_bias is True (and a bias_initializer is provided), a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well.
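A minimal graph-mode sketch (the shapes below are assumptions):
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, [None, 100, 8])  # (batch, length, channels)
conv = tf.compat.v1.layers.Conv1D(filters=16, kernel_size=5, padding='same',
                                  activation=tf.nn.relu)
y = conv(x)  # 'same' padding preserves the length: (None, 100, 16)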
Arguments
filters Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
kernel_size An integer or tuple/list of a single integer, specifying the length of the 1D convolution window.
strides An integer or tuple/list of a single integer, specifying the stride length of the convolution. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
padding One of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input.
data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, length, channels) while channels_first corresponds to inputs with shape (batch, channels, length).
dilation_rate An integer or tuple/list of a single integer, specifying the dilation rate to use for dilated convolution. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any strides value != 1.
activation Activation function. Set it to None to maintain a linear activation.
use_bias Boolean, whether the layer uses a bias.
kernel_initializer An initializer for the convolution kernel.
bias_initializer An initializer for the bias vector. If None, the default initializer will be used.
kernel_regularizer Optional regularizer for the convolution kernel.
bias_regularizer Optional regularizer for the bias vector.
activity_regularizer Optional regularizer function for the output.
kernel_constraint Optional projection function to be applied to the kernel after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
bias_constraint Optional projection function to be applied to the bias after being updated by an Optimizer.
trainable Boolean, if True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
name A string, the name of the layer.
Attributes
graph
scope_name | tensorflow.compat.v1.layers.conv1d |
tf.compat.v1.layers.Conv2D 2D convolution layer (e.g. spatial convolution over images). Inherits From: Conv2D, Layer, Layer, Module
tf.compat.v1.layers.Conv2D(
filters, kernel_size, strides=(1, 1), padding='valid',
data_format='channels_last', dilation_rate=(1, 1), activation=None,
use_bias=True, kernel_initializer=None, bias_initializer=tf.zeros_initializer(),
kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None,
kernel_constraint=None, bias_constraint=None, trainable=True, name=None,
**kwargs
)
This layer creates a convolution kernel that is convolved (actually cross-correlated) with the layer input to produce a tensor of outputs. If use_bias is True (and a bias_initializer is provided), a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well.
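A minimal graph-mode sketch (the shapes below are assumptions):
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, [None, 28, 28, 1])  # (batch, height, width, channels)
conv = tf.compat.v1.layers.Conv2D(filters=32, kernel_size=3, strides=2,
                                  padding='same', activation=tf.nn.relu)
y = conv(x)  # stride 2 halves the spatial dimensions: (None, 14, 14, 32)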
Arguments
filters Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
kernel_size An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
strides An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
padding One of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input.
data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width).
dilation_rate An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1.
activation Activation function. Set it to None to maintain a linear activation.
use_bias Boolean, whether the layer uses a bias.
kernel_initializer An initializer for the convolution kernel.
bias_initializer An initializer for the bias vector. If None, the default initializer will be used.
kernel_regularizer Optional regularizer for the convolution kernel.
bias_regularizer Optional regularizer for the bias vector.
activity_regularizer Optional regularizer function for the output.
kernel_constraint Optional projection function to be applied to the kernel after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
bias_constraint Optional projection function to be applied to the bias after being updated by an Optimizer.
trainable Boolean, if True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
name A string, the name of the layer.
Attributes
graph
scope_name | tensorflow.compat.v1.layers.conv2d |
tf.compat.v1.layers.Conv2DTranspose Transposed 2D convolution layer (sometimes called 2D Deconvolution). Inherits From: Conv2DTranspose, Conv2D, Layer, Layer, Module
tf.compat.v1.layers.Conv2DTranspose(
filters, kernel_size, strides=(1, 1), padding='valid',
data_format='channels_last', activation=None, use_bias=True,
kernel_initializer=None, bias_initializer=tf.zeros_initializer(),
kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None,
kernel_constraint=None, bias_constraint=None, trainable=True, name=None,
**kwargs
)
The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution.
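For example, a minimal graph-mode sketch that upsamples a feature map (the shapes below are assumptions):
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, [None, 16, 16, 8])  # e.g. an encoder feature map
deconv = tf.compat.v1.layers.Conv2DTranspose(filters=3, kernel_size=3, strides=2,
                                             padding='same')
y = deconv(x)  # 'same' padding with stride 2 doubles the spatial dims: (None, 32, 32, 3)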
Arguments
filters Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
kernel_size A tuple or list of 2 positive integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.
strides A tuple or list of 2 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions.
padding One of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input.
data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width).
activation Activation function. Set it to None to maintain a linear activation.
use_bias Boolean, whether the layer uses a bias.
kernel_initializer An initializer for the convolution kernel.
bias_initializer An initializer for the bias vector. If None, the default initializer will be used.
kernel_regularizer Optional regularizer for the convolution kernel.
bias_regularizer Optional regularizer for the bias vector.
activity_regularizer Optional regularizer function for the output.
kernel_constraint Optional projection function to be applied to the kernel after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
bias_constraint Optional projection function to be applied to the bias after being updated by an Optimizer.
trainable Boolean, if True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
name A string, the name of the layer.
Attributes
graph
scope_name | tensorflow.compat.v1.layers.conv2dtranspose |
tf.compat.v1.layers.conv2d_transpose Functional interface for transposed 2D convolution layer.
tf.compat.v1.layers.conv2d_transpose(
inputs, filters, kernel_size, strides=(1, 1), padding='valid',
data_format='channels_last', activation=None, use_bias=True,
kernel_initializer=None, bias_initializer=tf.zeros_initializer(),
kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None,
kernel_constraint=None, bias_constraint=None, trainable=True, name=None,
reuse=None
)
The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution: from something that has the shape of the output of some convolution to something that has the shape of its input, while maintaining a connectivity pattern compatible with that convolution.
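A minimal sketch of the functional form (hypothetical shapes and names; note the Raises entry below, so this only works in graph mode):
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # required: raises ValueError under eager execution

x = tf.placeholder(tf.float32, [None, 8, 8, 64])
y1 = tf.layers.conv2d_transpose(x, filters=32, kernel_size=3, strides=2,
                                padding='same', name='up1')
# reuse=True shares the weights of the earlier layer named 'up1'.
y2 = tf.layers.conv2d_transpose(x, filters=32, kernel_size=3, strides=2,
                                padding='same', name='up1', reuse=True)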
Arguments
inputs Input tensor.
filters Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
kernel_size A tuple or list of 2 positive integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.
strides A tuple or list of 2 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions.
padding One of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input.
data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width).
activation Activation function. Set it to None to maintain a linear activation.
use_bias Boolean, whether the layer uses a bias.
kernel_initializer An initializer for the convolution kernel.
bias_initializer An initializer for the bias vector. If None, the default initializer will be used.
kernel_regularizer Optional regularizer for the convolution kernel.
bias_regularizer Optional regularizer for the bias vector.
activity_regularizer Optional regularizer function for the output.
kernel_constraint Optional projection function to be applied to the kernel after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
bias_constraint Optional projection function to be applied to the bias after being updated by an Optimizer.
trainable Boolean, if True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
name A string, the name of the layer.
reuse Boolean, whether to reuse the weights of a previous layer by the same name.
Returns Output tensor.
Raises
ValueError if eager execution is enabled. | tensorflow.compat.v1.layers.conv2d_transpose |
tf.compat.v1.layers.Conv3D 3D convolution layer (e.g. spatial convolution over volumes). Inherits From: Conv3D, Layer, Layer, Module
tf.compat.v1.layers.Conv3D(
filters, kernel_size, strides=(1, 1, 1), padding='valid',
data_format='channels_last', dilation_rate=(1, 1, 1), activation=None,
use_bias=True, kernel_initializer=None, bias_initializer=tf.zeros_initializer(),
kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None,
kernel_constraint=None, bias_constraint=None, trainable=True, name=None,
**kwargs
)
This layer creates a convolution kernel that is convolved (actually cross-correlated) with the layer input to produce a tensor of outputs. If use_bias is True (and a bias_initializer is provided), a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well.
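A minimal sketch (the volume shape and filter count are illustrative assumptions):
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# channels_last 3D input: a batch of 16x16x16 single-channel volumes.
volumes = tf.placeholder(tf.float32, [None, 16, 16, 16, 1])
conv3d = tf.layers.Conv3D(filters=8, kernel_size=3, padding='same',
                          activation=tf.nn.relu, name='conv3d_1')
out = conv3d(volumes)
print(out.shape)  # -> (batch, 16, 16, 16, 8)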
Arguments
filters Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
kernel_size An integer or tuple/list of 3 integers, specifying the depth, height and width of the 3D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
strides An integer or tuple/list of 3 integers, specifying the strides of the convolution along the depth, height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
padding One of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input.
data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, depth, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, depth, height, width).
dilation_rate An integer or tuple/list of 3 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1.
activation Activation function. Set it to None to maintain a linear activation.
use_bias Boolean, whether the layer uses a bias.
kernel_initializer An initializer for the convolution kernel.
bias_initializer An initializer for the bias vector. If None, the default initializer will be used.
kernel_regularizer Optional regularizer for the convolution kernel.
bias_regularizer Optional regularizer for the bias vector.
activity_regularizer Optional regularizer function for the output.
kernel_constraint Optional projection function to be applied to the kernel after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
bias_constraint Optional projection function to be applied to the bias after being updated by an Optimizer.
trainable Boolean, if True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
name A string, the name of the layer.
Attributes
graph
scope_name | tensorflow.compat.v1.layers.conv3d |
tf.compat.v1.layers.Conv3DTranspose Transposed 3D convolution layer (sometimes called 3D Deconvolution). Inherits From: Conv3DTranspose, Conv3D, Layer, Layer, Module
tf.compat.v1.layers.Conv3DTranspose(
filters, kernel_size, strides=(1, 1, 1), padding='valid',
data_format='channels_last', activation=None, use_bias=True,
kernel_initializer=None, bias_initializer=tf.zeros_initializer(),
kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None,
kernel_constraint=None, bias_constraint=None, trainable=True, name=None,
**kwargs
)
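As with the 2D case, the transposed 3D layer inverts the shape effect of a strided Conv3D. A minimal sketch (hypothetical shapes):
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

x = tf.placeholder(tf.float32, [None, 4, 4, 4, 32])
up = tf.layers.Conv3DTranspose(filters=16, kernel_size=3, strides=2,
                               padding='same', name='up3d')
print(up(x).shape)  # -> (batch, 8, 8, 8, 16)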
Arguments
filters Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
kernel_size An integer or tuple/list of 3 integers, specifying the depth, height and width of the 3D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
strides An integer or tuple/list of 3 integers, specifying the strides of the convolution along the depth, height and width. Can be a single integer to specify the same value for all spatial dimensions.
padding One of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input.
data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, depth, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, depth, height, width).
activation Activation function. Set it to None to maintain a linear activation.
use_bias Boolean, whether the layer uses a bias.
kernel_initializer An initializer for the convolution kernel.
bias_initializer An initializer for the bias vector. If None, the default initializer will be used.
kernel_regularizer Optional regularizer for the convolution kernel.
bias_regularizer Optional regularizer for the bias vector.
activity_regularizer Optional regularizer function for the output.
kernel_constraint Optional projection function to be applied to the kernel after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
bias_constraint Optional projection function to be applied to the bias after being updated by an Optimizer.
trainable Boolean, if True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
name A string, the name of the layer.
Attributes
graph
scope_name | tensorflow.compat.v1.layers.conv3dtranspose |
tf.compat.v1.layers.conv3d_transpose Functional interface for transposed 3D convolution layer.
tf.compat.v1.layers.conv3d_transpose(
inputs, filters, kernel_size, strides=(1, 1, 1), padding='valid',
data_format='channels_last', activation=None, use_bias=True,
kernel_initializer=None, bias_initializer=tf.zeros_initializer(),
kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None,
kernel_constraint=None, bias_constraint=None, trainable=True, name=None,
reuse=None
)
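A minimal sketch of the functional form (hypothetical shapes; as the Raises entry below notes, it is graph-mode only):
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # raises ValueError under eager execution

x = tf.placeholder(tf.float32, [None, 4, 4, 4, 32])
y = tf.layers.conv3d_transpose(x, filters=16, kernel_size=3, strides=2,
                               padding='same', name='up3d')
print(y.shape)  # -> (batch, 8, 8, 8, 16)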
Arguments
inputs Input tensor.
filters Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
kernel_size A tuple or list of 3 positive integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.
strides A tuple or list of 3 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions.
padding One of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input.
data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, depth, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, depth, height, width).
activation Activation function. Set it to None to maintain a linear activation.
use_bias Boolean, whether the layer uses a bias.
kernel_initializer An initializer for the convolution kernel.
bias_initializer An initializer for the bias vector. If None, the default initializer will be used.
kernel_regularizer Optional regularizer for the convolution kernel.
bias_regularizer Optional regularizer for the bias vector.
activity_regularizer Optional regularizer function for the output.
kernel_constraint Optional projection function to be applied to the kernel after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
bias_constraint Optional projection function to be applied to the bias after being updated by an Optimizer.
trainable Boolean, if True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
name A string, the name of the layer.
reuse Boolean, whether to reuse the weights of a previous layer by the same name.
Returns Output tensor.
Raises
ValueError if eager execution is enabled. | tensorflow.compat.v1.layers.conv3d_transpose |
tf.compat.v1.layers.Dense Densely-connected layer class. Inherits From: Dense, Layer, Layer, Module
tf.compat.v1.layers.Dense(
units, activation=None, use_bias=True, kernel_initializer=None,
bias_initializer=tf.zeros_initializer(), kernel_regularizer=None,
bias_regularizer=None, activity_regularizer=None, kernel_constraint=None,
bias_constraint=None, trainable=True, name=None, **kwargs
)
This layer implements the operation outputs = activation(inputs * kernel + bias), where activation is the activation function passed as the activation argument (if not None), kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (only if use_bias is True).
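A minimal sketch of that operation (the input width and unit count are illustrative):
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

x = tf.placeholder(tf.float32, [None, 128])
dense = tf.layers.Dense(units=64, activation=tf.nn.relu, name='fc1')
h = dense(x)  # h = relu(x @ kernel + bias)
print(dense.kernel.shape)  # (128, 64) -- variables are created on first call
print(dense.bias.shape)    # (64,)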
Arguments
units Integer or Long, dimensionality of the output space.
activation Activation function (callable). Set it to None to maintain a linear activation.
use_bias Boolean, whether the layer uses a bias.
kernel_initializer Initializer function for the weight matrix. If None (default), weights are initialized using the default initializer used by tf.compat.v1.get_variable.
bias_initializer Initializer function for the bias.
kernel_regularizer Regularizer function for the weight matrix.
bias_regularizer Regularizer function for the bias.
activity_regularizer Regularizer function for the output.
kernel_constraint An optional projection function to be applied to the kernel after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
bias_constraint An optional projection function to be applied to the bias after being updated by an Optimizer.
trainable Boolean, if True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
name String, the name of the layer. Layers with the same name will share weights, but to avoid mistakes we require reuse=True in such cases.
_reuse Boolean, whether to reuse the weights of a previous layer by the same name.
Properties:
units: Python integer, dimensionality of the output space.
activation: Activation function (callable).
use_bias: Boolean, whether the layer uses a bias.
kernel_initializer: Initializer instance (or name) for the kernel matrix.
bias_initializer: Initializer instance (or name) for the bias.
kernel_regularizer: Regularizer instance for the kernel matrix (callable)
bias_regularizer: Regularizer instance for the bias (callable).
activity_regularizer: Regularizer instance for the output (callable)
kernel_constraint: Constraint function for the kernel matrix.
bias_constraint: Constraint function for the bias.
kernel: Weight matrix (TensorFlow variable or tensor).
bias: Bias vector, if applicable (TensorFlow variable or tensor).
Attributes
graph
scope_name | tensorflow.compat.v1.layers.dense |
tf.compat.v1.layers.Dropout Applies Dropout to the input. Inherits From: Dropout, Layer, Layer, Module
tf.compat.v1.layers.Dropout(
rate=0.5, noise_shape=None, seed=None, name=None, **kwargs
)
Dropout consists of randomly setting a fraction rate of input units to 0 at each update during training time, which helps prevent overfitting. The units that are kept are scaled by 1 / (1 - rate), so that their sum is unchanged at training time and inference time.
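A minimal sketch of the training/inference difference (values are illustrative; the zeroed positions are random):
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

x = tf.placeholder(tf.float32, [None, 4])
drop = tf.layers.Dropout(rate=0.5)
y_train = drop(x, training=True)   # zeroes ~50% of units, scales survivors by 2
y_infer = drop(x, training=False)  # identity at inference time

with tf.Session() as sess:
    ones = np.ones((1, 4), dtype=np.float32)
    print(sess.run(y_train, {x: ones}))  # e.g. [[2. 0. 2. 0.]]
    print(sess.run(y_infer, {x: ones}))  # [[1. 1. 1. 1.]]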
Arguments
rate The dropout rate, between 0 and 1. E.g. rate=0.1 would drop out 10% of input units.
noise_shape 1D tensor of type int32 representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape (batch_size, timesteps, features), and you want the dropout mask to be the same for all timesteps, you can use noise_shape=[batch_size, 1, features].
seed A Python integer. Used to create random seeds. See tf.compat.v1.set_random_seed for behavior.
name The name of the layer (string).
Attributes
graph
scope_name | tensorflow.compat.v1.layers.dropout |
Module: tf.compat.v1.layers.experimental Public API for tf.layers.experimental namespace. Functions keras_style_scope(...): Use Keras-style variable management. set_keras_style(...): Use Keras-style variable management. | tensorflow.compat.v1.layers.experimental |
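A brief sketch of the scope form, summarized from the functions' docstrings (the layer names are hypothetical): inside keras_style_scope, layers manage variables Keras-style, so each layer instance owns its variables and weight sharing is done by reusing the layer object rather than by variable-scope names.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

with tf.layers.experimental.keras_style_scope():
    shared = tf.layers.Dense(10, name='shared_fc')     # owns its kernel/bias
    a = shared(tf.placeholder(tf.float32, [None, 4]))
    b = shared(tf.placeholder(tf.float32, [None, 4]))  # reuses the same weights
set_keras_style() applies the same behavior globally instead of within a scope.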