tf.compat.v1.flags.register_validator Adds a constraint, which will be enforced during program execution. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.register_validator tf.compat.v1.flags.register_validator( flag_name, checker, message='Flag validation failed', flag_values=_flagvalues.FLAGS ) The constraint is validated when flags are initially parsed, and after each change of the corresponding flag's value. Args: flag_name: str, name of the flag to be checked. checker: callable, a function to validate the flag. input - A single positional argument: The value of the corresponding flag (string, boolean, etc. This value will be passed to checker by the library). output - bool, True if validator constraint is satisfied. If constraint is not satisfied, it should either return False or raise flags.ValidationError(desired_error_message). message: str, error text to be shown to the user if checker returns False. If checker raises flags.ValidationError, message from the raised error will be shown. flag_values: flags.FlagValues, optional FlagValues instance to validate against. Raises: AttributeError: Raised when flag_name is not registered as a valid flag name.
tensorflow.compat.v1.flags.register_validator
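A minimal sketch of wiring up register_validator through the absl-backed flags module exposed as tf.compat.v1.flags; the flag name 'port' and its bounds are hypothetical, used only for illustration:

import tensorflow as tf

flags = tf.compat.v1.flags

# Hypothetical flag, defined here only so the validator has something to check.
flags.DEFINE_integer('port', 8080, 'Port to listen on.')

def _check_port(value):
    # checker contract: receives the flag's current value and returns True when
    # the constraint holds; returning False (or raising flags.ValidationError)
    # fails validation with the registered message.
    return 0 < value < 65536

flags.register_validator(
    'port',
    _check_port,
    message='--port must be in the range (0, 65536)')

# The constraint is enforced when flags are parsed (e.g. inside tf.compat.v1.app.run())
# and again whenever the flag's value changes.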
tf.compat.v1.flags.text_wrap Wraps a given text to a maximum line length and returns it. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.text_wrap tf.compat.v1.flags.text_wrap( text, length=None, indent='', firstline_indent=None ) It turns lines that only contain whitespace into empty lines, keeps new lines, and expands tabs using 4 spaces. Args text str, text to wrap. length int, maximum length of a line, including indentation. If this is None, get_help_width() is used. indent str, indent for all but the first line. firstline_indent str, indent for the first line; if None, falls back to indent. Returns str, the wrapped text. Raises ValueError Raised if indent or firstline_indent is not shorter than length.
tensorflow.compat.v1.flags.text_wrap
Module: tf.compat.v1.flags.tf_decorator Base TFDecorator class and utility functions for working with decorators. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.tf_decorator There are two ways to create decorators that TensorFlow can introspect into. This is important for documentation generation purposes, so that function signatures aren't obscured by the (*args, **kwds) signature that decorators often provide. Call tf_decorator.make_decorator on your wrapper function. If your decorator is stateless, or can capture all of the variables it needs to work with through lexical closure, this is the simplest option. Create your wrapper function as usual, but instead of returning it, return tf_decorator.make_decorator(target, your_wrapper). This will attach some decorator introspection metadata onto your wrapper and return it. Example: def print_hello_before_calling(target): def wrapper(*args, **kwargs): print('hello') return target(*args, **kwargs) return tf_decorator.make_decorator(target, wrapper) Derive from TFDecorator. If your decorator needs to be stateful, you can implement it in terms of a TFDecorator. Store whatever state you need in your derived class, and implement the __call__ method to do your work before calling into your target. You can retrieve the target via super(MyDecoratorClass, self).decorated_target, and call it with whatever parameters it needs. Example: class CallCounter(tf_decorator.TFDecorator): def __init__(self, target): super(CallCounter, self).__init__('count_calls', target) self.call_count = 0 def __call__(self, *args, **kwargs): self.call_count += 1 return super(CallCounter, self).decorated_target(*args, **kwargs) def count_calls(target): return CallCounter(target) Modules tf_stack module: Functions used to extract and analyze stacks. Faster than Python libs. Classes class TFDecorator: Base class for all TensorFlow decorators. Functions make_decorator(...): Make a decorator from a wrapper and a target. rewrap(...): Injects a new target into a function built by make_decorator. unwrap(...): Unwraps an object into a list of TFDecorators and a final target.
tensorflow.compat.v1.flags.tf_decorator
tf.compat.v1.flags.tf_decorator.make_decorator Make a decorator from a wrapper and a target. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.tf_decorator.make_decorator tf.compat.v1.flags.tf_decorator.make_decorator( target, decorator_func, decorator_name=None, decorator_doc='', decorator_argspec=None ) Args target The final callable to be wrapped. decorator_func The wrapper function. decorator_name The name of the decorator. If None, the name of the function calling make_decorator. decorator_doc Documentation specific to this application of decorator_func to target. decorator_argspec The new callable signature of this decorator. Returns The decorator_func argument with new metadata attached.
tensorflow.compat.v1.flags.tf_decorator.make_decorator
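A short sketch of the make_decorator pattern described in the module overview, accessed through the alias documented here; log_calls is an illustrative decorator name, not part of TensorFlow:

import tensorflow as tf

tf_decorator = tf.compat.v1.flags.tf_decorator

def log_calls(target):
    def wrapper(*args, **kwargs):
        print('calling', target.__name__)
        return target(*args, **kwargs)
    # Attach introspection metadata to the wrapper and return it, so tools
    # (and tf_decorator.unwrap) can still see the original target.
    return tf_decorator.make_decorator(target, wrapper)

@log_calls
def add(a, b):
    return a + b

print(add(1, 2))  # prints "calling add", then 3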
tf.compat.v1.flags.tf_decorator.rewrap Injects a new target into a function built by make_decorator. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.tf_decorator.rewrap tf.compat.v1.flags.tf_decorator.rewrap( decorator_func, previous_target, new_target ) This function allows replacing a function wrapped by decorator_func, assuming the decorator that wraps the function is written as described below. The decorator function must use <decorator name>.__wrapped__ instead of the wrapped function that is normally used: Example: Instead of this: def simple_parametrized_wrapper(*args, **kwds): return wrapped_fn(*args, **kwds) tf_decorator.make_decorator(simple_parametrized_wrapper, wrapped_fn) Write this: def simple_parametrized_wrapper(*args, **kwds): return simple_parametrized_wrapper.__wrapped__(*args, **kwds) tf_decorator.make_decorator(simple_parametrized_wrapper, wrapped_fn) Note that this process modifies decorator_func. Args decorator_func Callable returned by make_decorator. previous_target Callable that needs to be replaced. new_target Callable to replace previous_target with. Returns The updated decorator. If decorator_func is not a tf_decorator, new_target is returned.
tensorflow.compat.v1.flags.tf_decorator.rewrap
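A sketch of rewrap under the __wrapped__ convention described above; the function names are illustrative, and, as documented, decorator_func is modified in place:

import tensorflow as tf

tf_decorator = tf.compat.v1.flags.tf_decorator

def original(x):
    return x + 1

def replacement(x):
    return x * 10

def wrapper(*args, **kwargs):
    # Call through __wrapped__ so rewrap can later swap in a new target.
    return wrapper.__wrapped__(*args, **kwargs)

decorated = tf_decorator.make_decorator(original, wrapper)
print(decorated(3))  # 4, still routed to `original`

decorated = tf_decorator.rewrap(decorated, original, replacement)
print(decorated(3))  # 30, now routed to `replacement`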
tf.compat.v1.flags.tf_decorator.TFDecorator Base class for all TensorFlow decorators. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.tf_decorator.TFDecorator tf.compat.v1.flags.tf_decorator.TFDecorator( decorator_name, target, decorator_doc='', decorator_argspec=None ) TFDecorator captures and exposes the wrapped target, and provides details about the current decorator. Attributes decorated_target decorator_argspec decorator_doc decorator_name Methods __call__ View source __call__( *args, **kwargs ) Call self as a function.
tensorflow.compat.v1.flags.tf_decorator.tfdecorator
Module: tf.compat.v1.flags.tf_decorator.tf_stack Functions used to extract and analyze stacks. Faster than Python libs. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.tf_decorator.tf_stack Classes class CurrentModuleFilter: Filters stack frames from the module where this is used (best effort). class FrameSummary class StackSummary class StackTraceFilter: Allows filtering traceback information by removing superfluous frames. class StackTraceMapper: Allows remapping traceback information to different source code. class StackTraceTransform: Base class for stack trace transformation functions. Functions extract_stack(...): A lightweight, extensible re-implementation of traceback.extract_stack.
tensorflow.compat.v1.flags.tf_decorator.tf_stack
tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter Filters stack frames from the module where this is used (best effort). Inherits From: StackTraceFilter, StackTraceTransform View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.tf_decorator.tf_stack.CurrentModuleFilter tf.compat.v1.flags.tf_decorator.tf_stack.CurrentModuleFilter() Methods get_filtered_filenames View source get_filtered_filenames() reset View source reset() __enter__ View source __enter__() __exit__ View source __exit__( unused_type, unused_value, unused_traceback )
tensorflow.compat.v1.flags.tf_decorator.tf_stack.currentmodulefilter
tf.compat.v1.flags.tf_decorator.tf_stack.extract_stack A lightweight, extensible re-implementation of traceback.extract_stack. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.tf_decorator.tf_stack.extract_stack tf.compat.v1.flags.tf_decorator.tf_stack.extract_stack( limit=-1 ) NOTE(mrry): traceback.extract_stack eagerly retrieves the line of code for each stack frame using linecache, which results in an abundance of stat() calls. This implementation does not retrieve the code, and any consumer should apply _convert_stack to the result to obtain a traceback that can be formatted etc. using traceback methods. Args limit A limit on the number of frames to return. Returns A sequence of FrameSummary objects (filename, lineno, name, line) corresponding to the call stack of the current thread.
tensorflow.compat.v1.flags.tf_decorator.tf_stack.extract_stack
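A minimal illustration of extract_stack through the alias documented here; the actual frame contents depend on the call site:

import tensorflow as tf

tf_stack = tf.compat.v1.flags.tf_decorator.tf_stack

def inner():
    # Returns a StackSummary of FrameSummary objects for the current thread.
    return tf_stack.extract_stack()

frames = inner()
for frame in list(frames)[-2:]:
    # Each FrameSummary exposes filename, lineno, name and line.
    print(frame.filename, frame.lineno, frame.name)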
tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.tf_decorator.tf_stack.FrameSummary tf.compat.v1.flags.tf_decorator.tf_stack.FrameSummary( *args, **kwargs ) Attributes filename line lineno name Methods __eq__ __eq__(self: tensorflow.python._tf_stack.FrameSummary, arg0: tensorflow.python._tf_stack.FrameSummary) -> bool __getitem__ __getitem__(self: tensorflow.python._tf_stack.FrameSummary, arg0: object) -> object __iter__ __iter__(self: tensorflow.python._tf_stack.FrameSummary) -> iterator __len__ __len__(self: tensorflow.python._tf_stack.FrameSummary) -> int __ne__ __ne__(self: tensorflow.python._tf_stack.FrameSummary, arg0: tensorflow.python._tf_stack.FrameSummary) -> bool
tensorflow.compat.v1.flags.tf_decorator.tf_stack.framesummary
tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.tf_decorator.tf_stack.StackSummary tf.compat.v1.flags.tf_decorator.tf_stack.StackSummary() Methods append append(self: tensorflow.python._tf_stack.StackSummary, x: tensorflow.python._tf_stack.FrameSummary) -> None Add an item to the end of the list count count(self: tensorflow.python._tf_stack.StackSummary, x: tensorflow.python._tf_stack.FrameSummary) -> int Return the number of times x appears in the list extend extend(*args, **kwargs) Overloaded function. extend(self: tensorflow.python._tf_stack.StackSummary, L: tensorflow.python._tf_stack.StackSummary) -> None Extend the list by appending all the items in the given list extend(self: tensorflow.python._tf_stack.StackSummary, L: iterable) -> None Extend the list by appending all the items in the given list insert insert(self: tensorflow.python._tf_stack.StackSummary, i: int, x: tensorflow.python._tf_stack.FrameSummary) -> None Insert an item at a given position. pop pop(*args, **kwargs) Overloaded function. pop(self: tensorflow.python._tf_stack.StackSummary) -> tensorflow.python._tf_stack.FrameSummary Remove and return the last item pop(self: tensorflow.python._tf_stack.StackSummary, i: int) -> tensorflow.python._tf_stack.FrameSummary Remove and return the item at index i remove remove(self: tensorflow.python._tf_stack.StackSummary, x: tensorflow.python._tf_stack.FrameSummary) -> None Remove the first item from the list whose value is x. It is an error if there is no such item. __bool__ __bool__(self: tensorflow.python._tf_stack.StackSummary) -> bool Check whether the list is nonempty __contains__ __contains__(self: tensorflow.python._tf_stack.StackSummary, x: tensorflow.python._tf_stack.FrameSummary) -> bool Return true if the container contains x __eq__ __eq__(self: tensorflow.python._tf_stack.StackSummary, arg0: tensorflow.python._tf_stack.StackSummary) -> bool __getitem__ __getitem__(*args, **kwargs) Overloaded function. __getitem__(self: tensorflow.python._tf_stack.StackSummary, s: slice) -> tensorflow.python._tf_stack.StackSummary Retrieve list elements using a slice object __getitem__(self: tensorflow.python._tf_stack.StackSummary, arg0: int) -> tensorflow.python._tf_stack.FrameSummary __iter__ __iter__(self: tensorflow.python._tf_stack.StackSummary) -> iterator __len__ __len__(self: tensorflow.python._tf_stack.StackSummary) -> int __ne__ __ne__(self: tensorflow.python._tf_stack.StackSummary, arg0: tensorflow.python._tf_stack.StackSummary) -> bool
tensorflow.compat.v1.flags.tf_decorator.tf_stack.stacksummary
tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceFilter Allows filtering traceback information by removing superfluous frames. Inherits From: StackTraceTransform View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceFilter Methods get_filtered_filenames View source get_filtered_filenames() reset View source reset() __enter__ View source __enter__() __exit__ View source __exit__( unused_type, unused_value, unused_traceback )
tensorflow.compat.v1.flags.tf_decorator.tf_stack.stacktracefilter
tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceMapper Allows remapping traceback information to different source code. Inherits From: StackTraceTransform View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceMapper Methods get_effective_source_map View source get_effective_source_map() Returns a map (filename, lineno) -> (filename, lineno, function_name). reset View source reset() __enter__ View source __enter__() __exit__ View source __exit__( unused_type, unused_value, unused_traceback )
tensorflow.compat.v1.flags.tf_decorator.tf_stack.stacktracemapper
tf.compat.v1.flags.tf_decorator.tf_stack.StackTraceTransform Base class for stack trace transformation functions. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.tf_decorator.tf_stack.StackTraceTransform Methods reset View source reset() __enter__ View source __enter__() __exit__ View source __exit__( unused_type, unused_value, unused_traceback )
tensorflow.compat.v1.flags.tf_decorator.tf_stack.stacktracetransform
tf.compat.v1.flags.tf_decorator.unwrap Unwraps an object into a list of TFDecorators and a final target. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.tf_decorator.unwrap tf.compat.v1.flags.tf_decorator.unwrap( maybe_tf_decorator ) Args maybe_tf_decorator Any callable object. Returns A tuple whose first element is an list of TFDecorator-derived objects that were applied to the final callable target, and whose second element is the final undecorated callable target. If the maybe_tf_decorator parameter is not decorated by any TFDecorators, the first tuple element will be an empty list. The TFDecorator list is ordered from outermost to innermost decorators.
tensorflow.compat.v1.flags.tf_decorator.unwrap
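A short sketch of what unwrap returns for a callable built with make_decorator; the function names are illustrative:

import tensorflow as tf

tf_decorator = tf.compat.v1.flags.tf_decorator

def target_fn(x):
    return x * 2

def wrapper(*args, **kwargs):
    return target_fn(*args, **kwargs)

decorated = tf_decorator.make_decorator(target_fn, wrapper)

decorators, undecorated = tf_decorator.unwrap(decorated)
print(len(decorators))           # 1: the single TFDecorator applied, ordered outermost first
print(undecorated is target_fn)  # True: the final undecorated target

# A plain, undecorated callable yields an empty decorator list.
print(tf_decorator.unwrap(target_fn)[0])  # []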
tf.compat.v1.flags.UnparsedFlagAccessError Raised when accessing the flag value from unparsed FlagValues. Inherits From: Error View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.UnparsedFlagAccessError
tensorflow.compat.v1.flags.unparsedflagaccesserror
tf.compat.v1.flags.UnrecognizedFlagError Raised when a flag is unrecognized. Inherits From: Error View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.UnrecognizedFlagError tf.compat.v1.flags.UnrecognizedFlagError( flagname, flagvalue='', suggestions=None ) Attributes flagname str, the name of the unrecognized flag. flagvalue The value of the flag, empty if the flag is not defined.
tensorflow.compat.v1.flags.unrecognizedflagerror
tf.compat.v1.flags.ValidationError Raised when flag validator constraint is not satisfied. Inherits From: Error View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.ValidationError
tensorflow.compat.v1.flags.validationerror
tf.compat.v1.flags.validator A function decorator for defining a flag validator. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.validator tf.compat.v1.flags.validator( flag_name, message='Flag validation failed', flag_values=_flagvalues.FLAGS ) Registers the decorated function as a validator for flag_name, e.g. @flags.validator('foo') def _CheckFoo(foo): ... See register_validator() for the specification of checker function. Args flag_name str, name of the flag to be checked. message str, error text to be shown to the user if checker returns False. If checker raises flags.ValidationError, message from the raised error will be shown. flag_values flags.FlagValues, optional FlagValues instance to validate against. Returns A function decorator that registers its function argument as a validator. Raises AttributeError Raised when flag_name is not registered as a valid flag name.
tensorflow.compat.v1.flags.validator
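A sketch of the decorator form shown above, assuming a hypothetical string flag named 'log_dir' defined through the same module:

import tensorflow as tf

flags = tf.compat.v1.flags

# Hypothetical flag used only to illustrate the decorator.
flags.DEFINE_string('log_dir', '/tmp/logs', 'Directory for log output.')

@flags.validator('log_dir', message='--log_dir must be an absolute path')
def _check_log_dir(value):
    # Same checker contract as register_validator: return True when valid,
    # or return False / raise flags.ValidationError otherwise.
    return value.startswith('/')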
tf.compat.v1.flags.WhitespaceSeparatedListParser Parser for a whitespace-separated list of strings. Inherits From: BaseListParser, ArgumentParser View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.WhitespaceSeparatedListParser tf.compat.v1.flags.WhitespaceSeparatedListParser( comma_compat=False ) Args comma_compat bool, whether to support comma as an additional separator. If False then only whitespace is supported. This is intended only for backwards compatibility with flags that used to be comma-separated. Methods flag_type flag_type() See base class. parse parse( argument ) Parses argument as whitespace-separated list of strings. It also parses argument as comma-separated list of strings if requested. Args argument string argument passed in the commandline. Returns [str], the parsed flag value. Class Variables syntactic_help ''
tensorflow.compat.v1.flags.whitespaceseparatedlistparser
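A small usage sketch of the parser on its own, outside a flag definition; the input string is arbitrary:

import tensorflow as tf

parser = tf.compat.v1.flags.WhitespaceSeparatedListParser(comma_compat=True)

# Whitespace always separates items; commas are also accepted here because
# comma_compat=True was passed.
print(parser.parse('a b\tc,d'))  # ['a', 'b', 'c', 'd']
print(parser.flag_type())        # a short description of the flag type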
tf.compat.v1.floor_div Returns x // y element-wise. tf.compat.v1.floor_div( x, y, name=None ) Note: floor_div supports broadcasting. More about broadcasting here Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
tensorflow.compat.v1.floor_div
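A quick numeric illustration of the elementwise x // y behavior, assuming TF1-style graph execution (disable_eager_execution is only needed when running under TF2):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

x = tf.constant([7, -7, 8], dtype=tf.int32)
y = tf.constant([2, 2, 3], dtype=tf.int32)

z = tf.compat.v1.floor_div(x, y)  # elementwise floor division, with broadcasting

with tf.compat.v1.Session() as sess:
    print(sess.run(z))  # [ 3 -4  2]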
tf.compat.v1.foldl foldl on the list of tensors unpacked from elems on dimension 0. tf.compat.v1.foldl( fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None ) This foldl operator repeatedly applies the callable fn to a sequence of elements from first to last. The elements are made of the tensors unpacked from elems on dimension 0. The callable fn takes two tensors as arguments. The first argument is the accumulated value computed from the preceding invocation of fn, and the second is the value at the current position of elems. If initializer is None, elems must contain at least one element, and its first element is used as the initializer. Suppose that elems is unpacked into values, a list of tensors. The shape of the result tensor is fn(initializer, values[0]).shape. This method also allows multi-arity elems and output of fn. If elems is a (possibly nested) list or tuple of tensors, then each of these tensors must have a matching first (unpack) dimension. The signature of fn may match the structure of elems. That is, if elems is (t1, [t2, t3, [t4, t5]]), then an appropriate signature for fn is: fn = lambda (t1, [t2, t3, [t4, t5]]):. Args fn The callable to be performed. elems A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be the first argument to fn. initializer (optional) A tensor or (possibly nested) sequence of tensors, as the initial value for the accumulator. parallel_iterations (optional) The number of iterations allowed to run in parallel. back_prop (optional) True enables support for back propagation. swap_memory (optional) True enables GPU-CPU memory swapping. name (optional) Name prefix for the returned tensors. Returns A tensor or (possibly nested) sequence of tensors, resulting from applying fn consecutively to the list of tensors unpacked from elems, from first to last. Raises TypeError if fn is not callable. Example: elems = tf.constant([1, 2, 3, 4, 5, 6]) sum = foldl(lambda a, x: a + x, elems) # sum == 21
tensorflow.compat.v1.foldl
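A slightly fuller sketch than the example above, adding an explicit initializer (TF1-style graph execution assumed):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

elems = tf.constant([1, 2, 3, 4, 5, 6])

# Running sum, starting from an explicit accumulator value of 100.
total = tf.compat.v1.foldl(lambda acc, x: acc + x, elems,
                           initializer=tf.constant(100))

with tf.compat.v1.Session() as sess:
    print(sess.run(total))  # 121 (= 100 + 21)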
tf.compat.v1.foldr foldr on the list of tensors unpacked from elems on dimension 0. tf.compat.v1.foldr( fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None ) This foldr operator repeatedly applies the callable fn to a sequence of elements from last to first. The elements are made of the tensors unpacked from elems. The callable fn takes two tensors as arguments. The first argument is the accumulated value computed from the preceding invocation of fn, and the second is the value at the current position of elems. If initializer is None, elems must contain at least one element, and its first element is used as the initializer. Suppose that elems is unpacked into values, a list of tensors. The shape of the result tensor is fn(initializer, values[0]).shape. This method also allows multi-arity elems and output of fn. If elems is a (possibly nested) list or tuple of tensors, then each of these tensors must have a matching first (unpack) dimension. The signature of fn may match the structure of elems. That is, if elems is (t1, [t2, t3, [t4, t5]]), then an appropriate signature for fn is: fn = lambda (t1, [t2, t3, [t4, t5]]):. Args fn The callable to be performed. elems A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be the first argument to fn. initializer (optional) A tensor or (possibly nested) sequence of tensors, as the initial value for the accumulator. parallel_iterations (optional) The number of iterations allowed to run in parallel. back_prop (optional) True enables support for back propagation. swap_memory (optional) True enables GPU-CPU memory swapping. name (optional) Name prefix for the returned tensors. Returns A tensor or (possibly nested) sequence of tensors, resulting from applying fn consecutively to the list of tensors unpacked from elems, from last to first. Raises TypeError if fn is not callable. Example: elems = [1, 2, 3, 4, 5, 6] sum = foldr(lambda a, x: a + x, elems) # sum == 21
tensorflow.compat.v1.foldr
tf.compat.v1.gather Gather slices from params axis axis according to indices. tf.compat.v1.gather( params, indices, validate_indices=None, name=None, axis=None, batch_dims=0 ) Gather slices from params axis axis according to indices. indices must be an integer tensor of any dimension (usually 0-D or 1-D). For 0-D (scalar) indices: $$\begin{align*} output[p_0, ..., p_{axis-1}, && &&& p_{axis + 1}, ..., p_{N-1}] = \\ params[p_0, ..., p_{axis-1}, && indices, &&& p_{axis + 1}, ..., p_{N-1}] \end{align*}$$ Where N = ndims(params). For 1-D (vector) indices with batch_dims=0: $$\begin{align*} output[p_0, ..., p_{axis-1}, && &i, &&p_{axis + 1}, ..., p_{N-1}] =\\ params[p_0, ..., p_{axis-1}, && indices[&i], &&p_{axis + 1}, ..., p_{N-1}] \end{align*}$$ In the general case, produces an output tensor where: $$\begin{align*} output[p_0, &..., p_{axis-1}, & &i_{B}, ..., i_{M-1}, & p_{axis + 1}, &..., p_{N-1}] = \\ params[p_0, &..., p_{axis-1}, & indices[p_0, ..., p_{B-1}, &i_{B}, ..., i_{M-1}], & p_{axis + 1}, &..., p_{N-1}] \end{align*}$$ Where N = ndims(params), M = ndims(indices), and B = batch_dims. Note that params.shape[:batch_dims] must be identical to indices.shape[:batch_dims]. The shape of the output tensor is: output.shape = params.shape[:axis] + indices.shape[batch_dims:] + params.shape[axis + 1:]. Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, a 0 is stored in the corresponding output value. See also tf.gather_nd. Args params The Tensor from which to gather values. Must be at least rank axis + 1. indices The index Tensor. Must be one of the following types: int32, int64. Must be in range [0, params.shape[axis]). validate_indices Deprecated, does nothing. axis A Tensor. Must be one of the following types: int32, int64. The axis in params to gather indices from. Must be greater than or equal to batch_dims. Defaults to the first non-batch dimension. Supports negative indexes. batch_dims An integer. The number of batch dimensions. Must be less than or equal to rank(indices). name A name for the operation (optional). Returns A Tensor. Has the same type as params.
tensorflow.compat.v1.gather
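A short illustration of the axis and batch_dims arguments (TF1-style session execution assumed; the values are arbitrary):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

params = tf.constant([[10, 11, 12],
                      [20, 21, 22]])

# Gather columns 2 and 0 along axis 1.
cols = tf.compat.v1.gather(params, [2, 0], axis=1)

# With batch_dims=1, each row of `indices` indexes its matching row of `params`.
per_row = tf.compat.v1.gather(params, [[1], [2]], axis=1, batch_dims=1)

with tf.compat.v1.Session() as sess:
    print(sess.run(cols))     # [[12 10] [22 20]]
    print(sess.run(per_row))  # [[11] [22]]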
tf.compat.v1.gather_nd Gather slices from params into a Tensor with shape specified by indices. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.manip.gather_nd tf.compat.v1.gather_nd( params, indices, name=None, batch_dims=0 ) indices is an K-dimensional integer tensor, best thought of as a (K-1)-dimensional tensor of indices into params, where each element defines a slice of params: output[\\(i_0, ..., i_{K-2}\\)] = params[indices[\\(i_0, ..., i_{K-2}\\)]] Whereas in tf.gather indices defines slices into the first dimension of params, in tf.gather_nd, indices defines slices into the first N dimensions of params, where N = indices.shape[-1]. The last dimension of indices can be at most the rank of params: indices.shape[-1] <= params.rank The last dimension of indices corresponds to elements (if indices.shape[-1] == params.rank) or slices (if indices.shape[-1] < params.rank) along dimension indices.shape[-1] of params. The output tensor has shape indices.shape[:-1] + params.shape[indices.shape[-1]:] Additionally both 'params' and 'indices' can have M leading batch dimensions that exactly match. In this case 'batch_dims' must be M. Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, a 0 is stored in the corresponding output value. Some examples below. Simple indexing into a matrix: indices = [[0, 0], [1, 1]] params = [['a', 'b'], ['c', 'd']] output = ['a', 'd'] Slice indexing into a matrix: indices = [[1], [0]] params = [['a', 'b'], ['c', 'd']] output = [['c', 'd'], ['a', 'b']] Indexing into a 3-tensor: indices = [[1]] params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]] output = [[['a1', 'b1'], ['c1', 'd1']]] indices = [[0, 1], [1, 0]] params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]] output = [['c0', 'd0'], ['a1', 'b1']] indices = [[0, 0, 1], [1, 0, 1]] params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]] output = ['b0', 'b1'] The examples below are for the case when only indices have leading extra dimensions. If both 'params' and 'indices' have leading batch dimensions, use the 'batch_dims' parameter to run gather_nd in batch mode. Batched indexing into a matrix: indices = [[[0, 0]], [[0, 1]]] params = [['a', 'b'], ['c', 'd']] output = [['a'], ['b']] Batched slice indexing into a matrix: indices = [[[1]], [[0]]] params = [['a', 'b'], ['c', 'd']] output = [[['c', 'd']], [['a', 'b']]] Batched indexing into a 3-tensor: indices = [[[1]], [[0]]] params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]] output = [[[['a1', 'b1'], ['c1', 'd1']]], [[['a0', 'b0'], ['c0', 'd0']]]] indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]] params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]] output = [[['c0', 'd0'], ['a1', 'b1']], [['a0', 'b0'], ['c1', 'd1']]] indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]] params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]] output = [['b0', 'b1'], ['d0', 'c1']] Examples with batched 'params' and 'indices': batch_dims = 1 indices = [[1], [0]] params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]] output = [['c0', 'd0'], ['a1', 'b1']] batch_dims = 1 indices = [[[1]], [[0]]] params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]] output = [[['c0', 'd0']], [['a1', 'b1']]] batch_dims = 1 indices = [[[1, 0]], [[0, 1]]] params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]] output = [['c0'], ['b1']] See also tf.gather. 
Args params A Tensor. The tensor from which to gather values. indices A Tensor. Must be one of the following types: int32, int64. Index tensor. name A name for the operation (optional). batch_dims An integer or a scalar 'Tensor'. The number of batch dimensions. Returns A Tensor. Has the same type as params.
tensorflow.compat.v1.gather_nd
tf.compat.v1.get_collection Wrapper for Graph.get_collection() using the default graph. tf.compat.v1.get_collection( key, scope=None ) See tf.Graph.get_collection for more details. Args key The key for the collection. For example, the GraphKeys class contains many standard names for collections. scope (Optional.) If supplied, the resulting list is filtered to include only items whose name attribute matches scope using re.match. Items without a name attribute are never returned if a scope is supplied. The choice of re.match means that a scope without special tokens filters by prefix. Returns The list of values in the collection with the given name, or an empty list if no value has been added to that collection. The list contains the values in the order under which they were collected. Eager Compatibility Collections are not supported when eager execution is enabled.
tensorflow.compat.v1.get_collection
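A small sketch of collections and scope filtering with get_collection (graph mode assumed; the scope and variable names are illustrative):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

with tf.compat.v1.variable_scope('layer1'):
    w = tf.compat.v1.get_variable('w', shape=[2, 2])
with tf.compat.v1.variable_scope('layer2'):
    b = tf.compat.v1.get_variable('b', shape=[2])

# All global variables in the default graph.
all_vars = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.GLOBAL_VARIABLES)

# Only items whose name matches the 'layer1' prefix (re.match semantics).
layer1_vars = tf.compat.v1.get_collection(
    tf.compat.v1.GraphKeys.GLOBAL_VARIABLES, scope='layer1')

print([v.name for v in all_vars])     # ['layer1/w:0', 'layer2/b:0']
print([v.name for v in layer1_vars])  # ['layer1/w:0']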
tf.compat.v1.get_collection_ref Wrapper for Graph.get_collection_ref() using the default graph. tf.compat.v1.get_collection_ref( key ) See tf.Graph.get_collection_ref for more details. Args key The key for the collection. For example, the GraphKeys class contains many standard names for collections. Returns The list of values in the collection with the given name, or an empty list if no value has been added to that collection. Note that this returns the collection list itself, which can be modified in place to change the collection. Eager Compatibility Collections are not supported when eager execution is enabled.
tensorflow.compat.v1.get_collection_ref
tf.compat.v1.get_default_graph Returns the default graph for the current thread. tf.compat.v1.get_default_graph() The returned graph will be the innermost graph on which a Graph.as_default() context has been entered, or a global default graph if none has been explicitly created. Note: The default graph is a property of the current thread. If you create a new thread, and wish to use the default graph in that thread, you must explicitly add a with g.as_default(): in that thread's function. Returns The default Graph being used in the current thread.
tensorflow.compat.v1.get_default_graph
tf.compat.v1.get_default_session Returns the default session for the current thread. tf.compat.v1.get_default_session() The returned Session will be the innermost session on which a Session or Session.as_default() context has been entered. Note: The default session is a property of the current thread. If you create a new thread, and wish to use the default session in that thread, you must explicitly add a with sess.as_default(): in that thread's function. Returns The default Session being used in the current thread.
tensorflow.compat.v1.get_default_session
tf.compat.v1.get_local_variable Gets an existing local variable or creates a new one. tf.compat.v1.get_local_variable( name, shape=None, dtype=None, initializer=None, regularizer=None, trainable=False, collections=None, caching_device=None, partitioner=None, validate_shape=True, use_resource=None, custom_getter=None, constraint=None, synchronization=tf.VariableSynchronization.AUTO, aggregation=tf.compat.v1.VariableAggregation.NONE ) Behavior is the same as in get_variable, except that variables are added to the LOCAL_VARIABLES collection and trainable is set to False. This function prefixes the name with the current variable scope and performs reuse checks. See the Variable Scope How To for an extensive description of how reusing works. Here is a basic example: def foo(): with tf.variable_scope("foo", reuse=tf.AUTO_REUSE): v = tf.get_variable("v", [1]) return v v1 = foo() # Creates v. v2 = foo() # Gets the same, existing v. assert v1 == v2 If initializer is None (the default), the default initializer passed in the variable scope will be used. If that one is None too, a glorot_uniform_initializer will be used. The initializer can also be a Tensor, in which case the variable is initialized to this value and shape. Similarly, if the regularizer is None (the default), the default regularizer passed in the variable scope will be used (if that is None too, then by default no regularization is performed). If a partitioner is provided, a PartitionedVariable is returned. Accessing this object as a Tensor returns the shards concatenated along the partition axis. Some useful partitioners are available. See, e.g., variable_axis_size_partitioner and min_max_variable_partitioner. Args name The name of the new or existing variable. shape Shape of the new or existing variable. dtype Type of the new or existing variable (defaults to DT_FLOAT). initializer Initializer for the variable if one is created. Can either be an initializer object or a Tensor. If it's a Tensor, its shape must be known unless validate_shape is False. regularizer A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization. collections List of graph collections keys to add the Variable to. Defaults to [GraphKeys.LOCAL_VARIABLES] (see tf.Variable). caching_device Optional device string or function describing where the Variable should be cached for reading. Defaults to the Variable's device. If not None, caches on another device. Typical use is to cache on the device where the Ops using the Variable reside, to deduplicate copying through Switch and other conditional statements. partitioner Optional callable that accepts a fully defined TensorShape and dtype of the Variable to be created, and returns a list of partitions for each axis (currently only one axis can be partitioned). validate_shape If False, allows the variable to be initialized with a value of unknown shape. If True, the default, the shape of initial_value must be known. For this to be used the initializer must be a Tensor and not an initializer object. use_resource If False, creates a regular Variable. If true, creates an experimental ResourceVariable instead with well-defined semantics. Defaults to False (will later change to True). When eager execution is enabled this argument is always forced to be True. custom_getter Callable that takes as a first argument the true getter, and allows overwriting the internal get_variable method. 
The signature of custom_getter should match that of this method, but the most future-proof version will allow for changes: def custom_getter(getter, *args, **kwargs). Direct access to all get_variable parameters is also allowed: def custom_getter(getter, name, *args, **kwargs). A simple identity custom getter that creates variables with modified names is: def custom_getter(getter, name, *args, **kwargs): return getter(name + '_suffix', *args, **kwargs) constraint An optional projection function to be applied to the variable after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected Tensor representing the value of the variable and return the Tensor for the projected value (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training. synchronization Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. aggregation Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation. Returns The created or existing Variable (or PartitionedVariable, if a partitioner was used). Raises ValueError when creating a new variable and shape is not declared, when violating reuse during variable creation, or when initializer dtype and dtype don't match. Reuse is set inside variable_scope.
tensorflow.compat.v1.get_local_variable
tf.compat.v1.get_seed Returns the local seeds an operation should use given an op-specific seed. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.random.get_seed tf.compat.v1.get_seed( op_seed ) Given operation-specific seed, op_seed, this helper function returns two seeds derived from graph-level and op-level seeds. Many random operations internally use the two seeds to allow user to change the seed globally for a graph, or for only specific operations. For details on how the graph-level seed interacts with op seeds, see tf.compat.v1.random.set_random_seed. Args op_seed integer. Returns A tuple of two integers that should be used for the local seed of this operation.
tensorflow.compat.v1.get_seed
tf.compat.v1.get_session_handle Return the handle of data. tf.compat.v1.get_session_handle( data, name=None ) This is EXPERIMENTAL and subject to change. Keep data "in-place" in the runtime and create a handle that can be used to retrieve data in a subsequent run(). Combined with get_session_tensor, we can keep a tensor produced in one run call in place, and use it as the input in a future run call. Args data A tensor to be stored in the session. name Optional name prefix for the return tensor. Returns A scalar string tensor representing a unique handle for data. Raises TypeError if data is not a Tensor. Example: c = tf.multiply(a, b) h = tf.compat.v1.get_session_handle(c) h = sess.run(h) p, a = tf.compat.v1.get_session_tensor(h.handle, tf.float32) b = tf.multiply(a, 10) c = sess.run(b, feed_dict={p: h.handle})
tensorflow.compat.v1.get_session_handle
tf.compat.v1.get_session_tensor Get the tensor of type dtype by feeding a tensor handle. tf.compat.v1.get_session_tensor( handle, dtype, name=None ) This is EXPERIMENTAL and subject to change. Get the value of the tensor from a tensor handle. The tensor is produced in a previous run() and stored in the state of the session. Args handle The string representation of a persistent tensor handle. dtype The type of the output tensor. name Optional name prefix for the return tensor. Returns A pair of tensors. The first is a placeholder for feeding a tensor handle and the second is the tensor in the session state keyed by the tensor handle. Example: c = tf.multiply(a, b) h = tf.compat.v1.get_session_handle(c) h = sess.run(h) p, a = tf.compat.v1.get_session_tensor(h.handle, tf.float32) b = tf.multiply(a, 10) c = sess.run(b, feed_dict={p: h.handle})
tensorflow.compat.v1.get_session_tensor
tf.compat.v1.get_variable Gets an existing variable with these parameters or create a new one. tf.compat.v1.get_variable( name, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, collections=None, caching_device=None, partitioner=None, validate_shape=True, use_resource=None, custom_getter=None, constraint=None, synchronization=tf.VariableSynchronization.AUTO, aggregation=tf.compat.v1.VariableAggregation.NONE ) This function prefixes the name with the current variable scope and performs reuse checks. See the Variable Scope How To for an extensive description of how reusing works. Here is a basic example: def foo(): with tf.variable_scope("foo", reuse=tf.AUTO_REUSE): v = tf.get_variable("v", [1]) return v v1 = foo() # Creates v. v2 = foo() # Gets the same, existing v. assert v1 == v2 If initializer is None (the default), the default initializer passed in the variable scope will be used. If that one is None too, a glorot_uniform_initializer will be used. The initializer can also be a Tensor, in which case the variable is initialized to this value and shape. Similarly, if the regularizer is None (the default), the default regularizer passed in the variable scope will be used (if that is None too, then by default no regularization is performed). If a partitioner is provided, a PartitionedVariable is returned. Accessing this object as a Tensor returns the shards concatenated along the partition axis. Some useful partitioners are available. See, e.g., variable_axis_size_partitioner and min_max_variable_partitioner. Args name The name of the new or existing variable. shape Shape of the new or existing variable. dtype Type of the new or existing variable (defaults to DT_FLOAT). initializer Initializer for the variable if one is created. Can either be an initializer object or a Tensor. If it's a Tensor, its shape must be known unless validate_shape is False. regularizer A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization. trainable If True also add the variable to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable). collections List of graph collections keys to add the Variable to. Defaults to [GraphKeys.GLOBAL_VARIABLES] (see tf.Variable). caching_device Optional device string or function describing where the Variable should be cached for reading. Defaults to the Variable's device. If not None, caches on another device. Typical use is to cache on the device where the Ops using the Variable reside, to deduplicate copying through Switch and other conditional statements. partitioner Optional callable that accepts a fully defined TensorShape and dtype of the Variable to be created, and returns a list of partitions for each axis (currently only one axis can be partitioned). validate_shape If False, allows the variable to be initialized with a value of unknown shape. If True, the default, the shape of initial_value must be known. For this to be used the initializer must be a Tensor and not an initializer object. use_resource If False, creates a regular Variable. If true, creates an experimental ResourceVariable instead with well-defined semantics. Defaults to False (will later change to True). When eager execution is enabled this argument is always forced to be True. custom_getter Callable that takes as a first argument the true getter, and allows overwriting the internal get_variable method. 
The signature of custom_getter should match that of this method, but the most future-proof version will allow for changes: def custom_getter(getter, *args, **kwargs). Direct access to all get_variable parameters is also allowed: def custom_getter(getter, name, *args, **kwargs). A simple identity custom getter that creates variables with modified names is: def custom_getter(getter, name, *args, **kwargs): return getter(name + '_suffix', *args, **kwargs) constraint An optional projection function to be applied to the variable after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected Tensor representing the value of the variable and return the Tensor for the projected value (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training. synchronization Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. aggregation Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation. Returns The created or existing Variable (or PartitionedVariable, if a partitioner was used). Raises ValueError when creating a new variable and shape is not declared, when violating reuse during variable creation, or when initializer dtype and dtype don't match. Reuse is set inside variable_scope.
tensorflow.compat.v1.get_variable
tf.compat.v1.get_variable_scope Returns the current variable scope. tf.compat.v1.get_variable_scope()
tensorflow.compat.v1.get_variable_scope
Module: tf.compat.v1.gfile Import router for file_io. Classes class FastGFile: File I/O wrappers without thread locking. class GFile: File I/O wrappers without thread locking. class Open: File I/O wrappers without thread locking. Functions Copy(...): Copies data from oldpath to newpath. DeleteRecursively(...): Deletes everything under dirname recursively. Exists(...): Determines whether a path exists or not. Glob(...): Returns a list of files that match the given pattern(s). IsDirectory(...): Returns whether the path is a directory or not. ListDirectory(...): Returns a list of entries contained within a directory. MakeDirs(...): Creates a directory and all parent/intermediate directories. MkDir(...): Creates a directory with the name dirname. Remove(...): Deletes the file located at 'filename'. Rename(...): Rename or move a file / directory. Stat(...): Returns file statistics for a given path. Walk(...): Recursive directory tree generator for directories.
tensorflow.compat.v1.gfile
tf.compat.v1.gfile.Copy Copies data from oldpath to newpath. tf.compat.v1.gfile.Copy( oldpath, newpath, overwrite=False ) Args oldpath string, name of the file whose contents need to be copied newpath string, name of the file to copy to overwrite boolean, if false it's an error for newpath to be occupied by an existing file. Raises errors.OpError If the operation fails.
tensorflow.compat.v1.gfile.copy
tf.compat.v1.gfile.DeleteRecursively Deletes everything under dirname recursively. tf.compat.v1.gfile.DeleteRecursively( dirname ) Args dirname string, a path to a directory Raises errors.OpError If the operation fails.
tensorflow.compat.v1.gfile.deleterecursively
tf.compat.v1.gfile.Exists Determines whether a path exists or not. tf.compat.v1.gfile.Exists( filename ) Args filename string, a path Returns True if the path exists, whether it's a file or a directory. False if the path does not exist and there are no filesystem errors. Raises errors.OpError Propagates any errors reported by the FileSystem API.
tensorflow.compat.v1.gfile.exists
tf.compat.v1.gfile.FastGFile File I/O wrappers without thread locking. tf.compat.v1.gfile.FastGFile( name, mode='r' ) Note, that this is somewhat like builtin Python file I/O, but there are semantic differences to make it more efficient for some backing filesystems. For example, a write mode file will not be opened until the first write call (to minimize RPC invocations in network filesystems). Attributes mode Returns the mode in which the file was opened. name Returns the file name. Methods close View source close() Closes FileIO. Should be called for the WritableFile to be flushed. flush View source flush() Flushes the Writable file. This only ensures that the data has made its way out of the process without any guarantees on whether it's written to disk. This means that the data would survive an application crash but not necessarily an OS crash. next View source next() read View source read( n=-1 ) Returns the contents of a file as a string. Starts reading from current position in file. Args n Read n bytes if n != -1. If n = -1, reads to end of file. Returns n bytes of the file (or whole file) in bytes mode or n bytes of the string if in string (regular) mode. readline View source readline() Reads the next line, keeping \n. At EOF, returns ''. readlines View source readlines() Returns all lines from the file in a list. seek View source seek( offset=None, whence=0, position=None ) Seeks to the offset in the file. (deprecated arguments) Warning: SOME ARGUMENTS ARE DEPRECATED: (position). They will be removed in a future version. Instructions for updating: position is deprecated in favor of the offset argument. Args offset The byte count relative to the whence argument. whence Valid values for whence are: 0: start of the file (default) 1: relative to the current position of the file 2: relative to the end of file. offset is usually negative. seekable View source seekable() Returns True as FileIO supports random access ops of seek()/tell() size View source size() Returns the size of the file. tell View source tell() Returns the current position in the file. write View source write( file_content ) Writes file_content to the file. Appends to the end of the file. __enter__ View source __enter__() Make usable with "with" statement. __exit__ View source __exit__( unused_type, unused_value, unused_traceback ) Make usable with "with" statement. __iter__ View source __iter__()
tensorflow.compat.v1.gfile.fastgfile
tf.compat.v1.gfile.Glob Returns a list of files that match the given pattern(s). tf.compat.v1.gfile.Glob( filename ) Args filename string or iterable of strings. The glob pattern(s). Returns A list of strings containing filenames that match the given pattern(s). Raises errors.OpError: If there are filesystem / directory listing errors. errors.NotFoundError: If pattern to be matched is an invalid directory.
tensorflow.compat.v1.gfile.glob
tf.compat.v1.gfile.IsDirectory Returns whether the path is a directory or not. tf.compat.v1.gfile.IsDirectory( dirname ) Args dirname string, path to a potential directory Returns True, if the path is a directory; False otherwise
tensorflow.compat.v1.gfile.isdirectory
tf.compat.v1.gfile.ListDirectory Returns a list of entries contained within a directory. tf.compat.v1.gfile.ListDirectory( dirname ) The list is in arbitrary order. It does not contain the special entries "." and "..". Args dirname string, path to a directory Returns [filename1, filename2, ... filenameN] as strings Raises errors.NotFoundError if directory doesn't exist
tensorflow.compat.v1.gfile.listdirectory
tf.compat.v1.gfile.MakeDirs Creates a directory and all parent/intermediate directories. tf.compat.v1.gfile.MakeDirs( dirname ) It succeeds if dirname already exists and is writable. Args dirname string, name of the directory to be created Raises errors.OpError If the operation fails.
tensorflow.compat.v1.gfile.makedirs
tf.compat.v1.gfile.MkDir Creates a directory with the name dirname. tf.compat.v1.gfile.MkDir( dirname ) Args dirname string, name of the directory to be created Notes: The parent directories need to exist. Use tf.io.gfile.makedirs instead if there is the possibility that the parent dirs don't exist. Raises errors.OpError If the operation fails.
tensorflow.compat.v1.gfile.mkdir
tf.compat.v1.gfile.Remove Deletes the file located at 'filename'. tf.compat.v1.gfile.Remove( filename ) Args filename string, a filename Raises errors.OpError Propagates any errors reported by the FileSystem API. E.g., NotFoundError if the file does not exist.
tensorflow.compat.v1.gfile.remove
tf.compat.v1.gfile.Rename Rename or move a file / directory. tf.compat.v1.gfile.Rename( oldname, newname, overwrite=False ) Args oldname string, pathname for a file newname string, pathname to which the file needs to be moved overwrite boolean, if false it's an error for newname to be occupied by an existing file. Raises errors.OpError If the operation fails.
tensorflow.compat.v1.gfile.rename
tf.compat.v1.gfile.Stat Returns file statistics for a given path. tf.compat.v1.gfile.Stat( filename ) Args filename string, path to a file Returns FileStatistics struct that contains information about the path Raises errors.OpError If the operation fails.
tensorflow.compat.v1.gfile.stat
tf.compat.v1.gfile.Walk Recursive directory tree generator for directories. tf.compat.v1.gfile.Walk( top, in_order=True ) Args top string, a Directory name in_order bool, Traverse in order if True, post order if False. Errors that happen while listing directories are ignored. Yields Each yield is a 3-tuple: the pathname of a directory, followed by lists of all its subdirectories and leaf files. That is, each yield looks like: (dirname, [subdirname, subdirname, ...], [filename, filename, ...]). Each item is a string.
tensorflow.compat.v1.gfile.walk
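A short sketch of walking a directory tree with the gfile API; '/tmp/data' is a placeholder path used only for illustration:

import os
import tensorflow as tf

for dirname, subdirs, files in tf.compat.v1.gfile.Walk('/tmp/data'):
    print('dir:', dirname)
    for fname in files:
        path = os.path.join(dirname, fname)
        # Stat returns a FileStatistics struct; length is the file size in bytes.
        print('  file:', fname, tf.compat.v1.gfile.Stat(path).length)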
tf.compat.v1.global_variables Returns global variables. tf.compat.v1.global_variables( scope=None ) Global variables are variables that are shared across machines in a distributed environment. The Variable() constructor or get_variable() automatically adds new variables to the graph collection GraphKeys.GLOBAL_VARIABLES. This convenience function returns the contents of that collection. An alternative to global variables are local variables. See tf.compat.v1.local_variables Args scope (Optional.) A string. If supplied, the resulting list is filtered to include only items whose name attribute matches scope using re.match. Items without a name attribute are never returned if a scope is supplied. The choice of re.match means that a scope without special tokens filters by prefix. Returns A list of Variable objects.
tensorflow.compat.v1.global_variables
tf.compat.v1.global_variables_initializer Returns an Op that initializes global variables. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.initializers.global_variables tf.compat.v1.global_variables_initializer() This is just a shortcut for variables_initializer(global_variables()) Returns An Op that initializes global variables in the graph.
tensorflow.compat.v1.global_variables_initializer
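The usual TF1 initialization idiom that this shortcut supports (graph mode assumed):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

v = tf.compat.v1.get_variable(
    'v', shape=[3], initializer=tf.compat.v1.zeros_initializer())

init_op = tf.compat.v1.global_variables_initializer()

with tf.compat.v1.Session() as sess:
    sess.run(init_op)   # must run before reading any uninitialized variable
    print(sess.run(v))  # [0. 0. 0.]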
tf.compat.v1.GPUOptions A ProtocolMessage Attributes allocator_type string allocator_type allow_growth bool allow_growth deferred_deletion_bytes int64 deferred_deletion_bytes experimental Experimental experimental force_gpu_compatible bool force_gpu_compatible per_process_gpu_memory_fraction double per_process_gpu_memory_fraction polling_active_delay_usecs int32 polling_active_delay_usecs polling_inactive_delay_msecs int32 polling_inactive_delay_msecs visible_device_list string visible_device_list Child Classes class Experimental
tensorflow.compat.v1.gpuoptions
tf.compat.v1.GPUOptions.Experimental A ProtocolMessage Attributes collective_ring_order string collective_ring_order kernel_tracker_max_bytes int32 kernel_tracker_max_bytes kernel_tracker_max_interval int32 kernel_tracker_max_interval kernel_tracker_max_pending int32 kernel_tracker_max_pending num_dev_to_dev_copy_streams int32 num_dev_to_dev_copy_streams timestamped_allocator bool timestamped_allocator use_unified_memory bool use_unified_memory virtual_devices repeated VirtualDevices virtual_devices Child Classes class VirtualDevices
tensorflow.compat.v1.gpuoptions.experimental
tf.compat.v1.GPUOptions.Experimental.VirtualDevices A ProtocolMessage Attributes memory_limit_mb repeated float memory_limit_mb priority repeated int32 priority
tensorflow.compat.v1.gpuoptions.experimental.virtualdevices
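A sketch of filling in GPUOptions (including the Experimental.VirtualDevices message) and handing it to a session config; the memory fraction and limits are arbitrary example values:

import tensorflow as tf

gpu_options = tf.compat.v1.GPUOptions(
    allow_growth=True,
    per_process_gpu_memory_fraction=0.5,  # example value
    visible_device_list='0')

# Optionally split the first visible GPU into two virtual devices (limits in MB).
gpu_options.experimental.virtual_devices.add(memory_limit_mb=[1024, 1024])

config = tf.compat.v1.ConfigProto(gpu_options=gpu_options)
# sess = tf.compat.v1.Session(config=config)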
tf.compat.v1.gradients Constructs symbolic derivatives of sum of ys w.r.t. x in xs. tf.compat.v1.gradients( ys, xs, grad_ys=None, name='gradients', colocate_gradients_with_ops=False, gate_gradients=False, aggregation_method=None, stop_gradients=None, unconnected_gradients=tf.UnconnectedGradients.NONE ) ys and xs are each a Tensor or a list of tensors. grad_ys is a list of Tensor, holding the gradients received by the ys. The list must be the same length as ys. gradients() adds ops to the graph to output the derivatives of ys with respect to xs. It returns a list of Tensor of length len(xs) where each tensor is the sum(dy/dx) for y in ys and for x in xs. grad_ys is a list of tensors of the same length as ys that holds the initial gradients for each y in ys. When grad_ys is None, we fill in a tensor of '1's of the shape of y for each y in ys. A user can provide their own initial grad_ys to compute the derivatives using a different initial gradient for each y (e.g., if one wanted to weight the gradient differently for each value in each y). stop_gradients is a Tensor or a list of tensors to be considered constant with respect to all xs. These tensors will not be backpropagated through, as though they had been explicitly disconnected using stop_gradient. Among other things, this allows computation of partial derivatives as opposed to total derivatives. For example: a = tf.constant(0.) b = 2 * a g = tf.gradients(a + b, [a, b], stop_gradients=[a, b]) Here the partial derivatives g evaluate to [1.0, 1.0], compared to the total derivatives tf.gradients(a + b, [a, b]), which take into account the influence of a on b and evaluate to [3.0, 1.0]. Note that the above is equivalent to: a = tf.stop_gradient(tf.constant(0.)) b = tf.stop_gradient(2 * a) g = tf.gradients(a + b, [a, b]) stop_gradients provides a way of stopping gradient after the graph has already been constructed, as compared to tf.stop_gradient which is used during graph construction. When the two approaches are combined, backpropagation stops at both tf.stop_gradient nodes and nodes in stop_gradients, whichever is encountered first. All integer tensors are considered constant with respect to all xs, as if they were included in stop_gradients. unconnected_gradients determines the value returned for each x in xs if it is unconnected in the graph to ys. By default this is None to safeguard against errors. Mathematically these gradients are zero which can be requested using the 'zero' option. tf.UnconnectedGradients provides the following options and behaviors: a = tf.ones([1, 2]) b = tf.ones([3, 1]) g1 = tf.gradients([b], [a], unconnected_gradients='none') sess.run(g1) # [None] g2 = tf.gradients([b], [a], unconnected_gradients='zero') sess.run(g2) # [array([[0., 0.]], dtype=float32)] Consider a practical example from the backpropagation phase, where this function evaluates the derivatives of the cost function with respect to weights Ws and biases bs. The sample implementation below illustrates what it is used for: Ws = tf.constant(0.) bs = 2 * Ws cost = Ws + bs # This is just an example; the formula itself is not meaningful. g = tf.gradients(cost, [Ws, bs]) dCost_dW, dCost_db = g Args ys A Tensor or list of tensors to be differentiated. xs A Tensor or list of tensors to be used for differentiation. grad_ys Optional. A Tensor or list of tensors the same size as ys and holding the gradients computed for each y in ys. name Optional name to use for grouping all the gradient ops together.
defaults to 'gradients'. colocate_gradients_with_ops If True, try colocating gradients with the corresponding op. gate_gradients If True, add a tuple around the gradients returned for an operations. This avoids some race conditions. aggregation_method Specifies the method used to combine gradient terms. Accepted values are constants defined in the class AggregationMethod. stop_gradients Optional. A Tensor or list of tensors not to differentiate through. unconnected_gradients Optional. Specifies the gradient value returned when the given input tensors are unconnected. Accepted values are constants defined in the class tf.UnconnectedGradients and the default value is none. Returns A list of Tensor of length len(xs) where each tensor is the sum(dy/dx) for y in ys and for x in xs. Raises LookupError if one of the operations between x and y does not have a registered gradient function. ValueError if the arguments are invalid. RuntimeError if called in Eager mode.
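The grad_ys argument is described above without an inline example. The following minimal sketch (the tensors and the 0.5/1.0 weights are illustrative assumptions, not part of the original documentation) shows how grad_ys can weight the gradient contribution of each y:
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.constant([3.0, 4.0])
y1 = x * x      # dy1/dx = 2x
y2 = 2.0 * x    # dy2/dx = 2

# Weight y1's gradient by 0.5 and y2's by 1.0 via grad_ys.
g = tf.gradients([y1, y2], [x],
                 grad_ys=[tf.constant([0.5, 0.5]), tf.constant([1.0, 1.0])])

with tf.Session() as sess:
    print(sess.run(g))  # [array([5., 6.], dtype=float32)], i.e. 0.5 * 2x + 1.0 * 2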
tensorflow.compat.v1.gradients
tf.compat.v1.GraphDef A ProtocolMessage Attributes library FunctionDefLibrary library node repeated NodeDef node version int32 version versions VersionDef versions
tensorflow.compat.v1.graphdef
tf.compat.v1.GraphKeys Standard names to use for graph collections. The standard library uses various well-known names to collect and retrieve values associated with a graph. For example, the tf.Optimizer subclasses default to optimizing the variables collected under tf.GraphKeys.TRAINABLE_VARIABLES if none is specified, but it is also possible to pass an explicit list of variables. The following standard keys are defined: GLOBAL_VARIABLES: the default collection of Variable objects, shared across a distributed environment (model variables are a subset of these). See tf.compat.v1.global_variables for more details. Commonly, all TRAINABLE_VARIABLES variables will be in MODEL_VARIABLES, and all MODEL_VARIABLES variables will be in GLOBAL_VARIABLES. LOCAL_VARIABLES: the subset of Variable objects that are local to each machine. Usually used for temporary variables, like counters. Note: use tf.contrib.framework.local_variable to add to this collection. MODEL_VARIABLES: the subset of Variable objects that are used in the model for inference (feed forward). Note: use tf.contrib.framework.model_variable to add to this collection. TRAINABLE_VARIABLES: the subset of Variable objects that will be trained by an optimizer. See tf.compat.v1.trainable_variables for more details. SUMMARIES: the summary Tensor objects that have been created in the graph. See tf.compat.v1.summary.merge_all for more details. QUEUE_RUNNERS: the QueueRunner objects that are used to produce input for a computation. See tf.compat.v1.train.start_queue_runners for more details. MOVING_AVERAGE_VARIABLES: the subset of Variable objects that will also keep moving averages. See tf.compat.v1.moving_average_variables for more details. REGULARIZATION_LOSSES: regularization losses collected during graph construction. The following standard keys are defined, but their collections are not automatically populated as many of the others are: WEIGHTS BIASES ACTIVATIONS Class Variables ACTIVATIONS 'activations' ASSET_FILEPATHS 'asset_filepaths' BIASES 'biases' CONCATENATED_VARIABLES 'concatenated_variables' COND_CONTEXT 'cond_context' EVAL_STEP 'eval_step' GLOBAL_STEP 'global_step' GLOBAL_VARIABLES 'variables' INIT_OP 'init_op' LOCAL_INIT_OP 'local_init_op' LOCAL_RESOURCES 'local_resources' LOCAL_VARIABLES 'local_variables' LOSSES 'losses' METRIC_VARIABLES 'metric_variables' MODEL_VARIABLES 'model_variables' MOVING_AVERAGE_VARIABLES 'moving_average_variables' QUEUE_RUNNERS 'queue_runners' READY_FOR_LOCAL_INIT_OP 'ready_for_local_init_op' READY_OP 'ready_op' REGULARIZATION_LOSSES 'regularization_losses' RESOURCES 'resources' SAVEABLE_OBJECTS 'saveable_objects' SAVERS 'savers' SUMMARIES 'summaries' SUMMARY_OP 'summary_op' TABLE_INITIALIZERS 'table_initializer' TRAINABLE_RESOURCE_VARIABLES 'trainable_resource_variables' TRAINABLE_VARIABLES 'trainable_variables' TRAIN_OP 'train_op' UPDATE_OPS 'update_ops' VARIABLES 'variables' WEIGHTS 'weights' WHILE_CONTEXT 'while_context'
tensorflow.compat.v1.graphkeys
tf.compat.v1.GraphOptions A ProtocolMessage Attributes build_cost_model int64 build_cost_model build_cost_model_after int64 build_cost_model_after enable_bfloat16_sendrecv bool enable_bfloat16_sendrecv enable_recv_scheduling bool enable_recv_scheduling infer_shapes bool infer_shapes optimizer_options OptimizerOptions optimizer_options place_pruned_graph bool place_pruned_graph rewrite_options RewriterConfig rewrite_options timeline_step int32 timeline_step
tensorflow.compat.v1.graphoptions
Module: tf.compat.v1.graph_util Helpers to manipulate a tensor graph in python. Functions convert_variables_to_constants(...): Replaces all the variables in a graph with constants of the same values. (deprecated) extract_sub_graph(...): Extract the subgraph that can reach any of the nodes in 'dest_nodes'. (deprecated) import_graph_def(...): Imports the graph from graph_def into the current default Graph. (deprecated arguments) must_run_on_cpu(...): Returns True if the given node_def must run on CPU, otherwise False. (deprecated) remove_training_nodes(...): Prunes out nodes that aren't needed for inference. (deprecated) tensor_shape_from_node_def_name(...): Convenience function to get a shape from a NodeDef's input string. (deprecated)
tensorflow.compat.v1.graph_util
tf.compat.v1.graph_util.convert_variables_to_constants Replaces all the variables in a graph with constants of the same values. (deprecated) tf.compat.v1.graph_util.convert_variables_to_constants( sess, input_graph_def, output_node_names, variable_names_whitelist=None, variable_names_blacklist=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.graph_util.convert_variables_to_constants If you have a trained graph containing Variable ops, it can be convenient to convert them all to Const ops holding the same values. This makes it possible to describe the network fully with a single GraphDef file, and allows the removal of a lot of ops related to loading and saving the variables. Args sess Active TensorFlow session containing the variables. input_graph_def GraphDef object holding the network. output_node_names List of name strings for the result nodes of the graph. variable_names_whitelist The set of variable names to convert (by default, all variables are converted). variable_names_blacklist The set of variable names to omit converting to constants. Returns GraphDef containing a simplified version of the original. Raises RuntimeError if a DT_RESOURCE op is found whose ancestor Variables are both denylisted AND whitelisted for freezing.
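A minimal freezing sketch, assuming a toy graph (the variable name 'w' and output node name 'y' are illustrative, not from the original documentation):
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=[None], name='x')
w = tf.Variable(2.0, name='w')
y = tf.multiply(x, w, name='y')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Replace the Variable 'w' with a Const node holding its current value.
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=['y'])
# frozen_graph_def is a self-contained GraphDef that can be written to a single file.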
tensorflow.compat.v1.graph_util.convert_variables_to_constants
tf.compat.v1.graph_util.extract_sub_graph Extract the subgraph that can reach any of the nodes in 'dest_nodes'. (deprecated) tf.compat.v1.graph_util.extract_sub_graph( graph_def, dest_nodes ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.graph_util.extract_sub_graph Args graph_def A graph_pb2.GraphDef proto. dest_nodes A list of strings specifying the destination node names. Returns The GraphDef of the sub-graph. Raises TypeError If 'graph_def' is not a graph_pb2.GraphDef proto.
tensorflow.compat.v1.graph_util.extract_sub_graph
tf.compat.v1.graph_util.must_run_on_cpu Returns True if the given node_def must run on CPU, otherwise False. (deprecated) tf.compat.v1.graph_util.must_run_on_cpu( node, pin_variables_on_cpu=False ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.graph_util.must_run_on_cpu Args node The node to be assigned to a device. Could be either an ops.Operation or NodeDef. pin_variables_on_cpu If True, this function will return False if node_def represents a variable-related op. Returns True if the given node must run on CPU, otherwise False.
tensorflow.compat.v1.graph_util.must_run_on_cpu
tf.compat.v1.graph_util.remove_training_nodes Prunes out nodes that aren't needed for inference. (deprecated) tf.compat.v1.graph_util.remove_training_nodes( input_graph, protected_nodes=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.graph_util.remove_training_nodes There are nodes like Identity and CheckNumerics that are only useful during training, and can be removed in graphs that will be used for nothing but inference. Here we identify and remove them, returning an equivalent graph. To be specific, CheckNumerics nodes are always removed, and Identity nodes that aren't involved in control edges are spliced out so that their input and outputs are directly connected. Args input_graph Model to analyze and prune. protected_nodes An optional list of names of nodes to be kept unconditionally. This is for example useful to preserve Identity output nodes. Returns A list of nodes with the unnecessary ones removed.
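A brief sketch of the intended use, assuming a toy graph (the node names are illustrative): a redundant Identity node can be spliced out while the protected output node is kept.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, name='x')
    # The inner Identity is not needed for inference and may be spliced out;
    # 'y' is listed in protected_nodes, so it is kept unconditionally.
    y = tf.identity(tf.identity(x), name='y')

pruned = tf.graph_util.remove_training_nodes(g.as_graph_def(),
                                             protected_nodes=['y'])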
tensorflow.compat.v1.graph_util.remove_training_nodes
tf.compat.v1.graph_util.tensor_shape_from_node_def_name Convenience function to get a shape from a NodeDef's input string. (deprecated) tf.compat.v1.graph_util.tensor_shape_from_node_def_name( graph, input_name ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.graph_util.tensor_shape_from_node_def_name
tensorflow.compat.v1.graph_util.tensor_shape_from_node_def_name
tf.compat.v1.hessians Constructs the Hessian of sum of ys with respect to x in xs. tf.compat.v1.hessians( ys, xs, name='hessians', colocate_gradients_with_ops=False, gate_gradients=False, aggregation_method=None ) hessians() adds ops to the graph to output the Hessian matrix of ys with respect to xs. It returns a list of Tensor of length len(xs) where each tensor is the Hessian of sum(ys). The Hessian is a matrix of second-order partial derivatives of a scalar tensor (see https://en.wikipedia.org/wiki/Hessian_matrix for more details). Args ys A Tensor or list of tensors to be differentiated. xs A Tensor or list of tensors to be used for differentiation. name Optional name to use for grouping all the gradient ops together. Defaults to 'hessians'. colocate_gradients_with_ops See gradients() documentation for details. gate_gradients See gradients() documentation for details. aggregation_method See gradients() documentation for details. Returns A list of Hessian matrices of sum(ys) for each x in xs. Raises LookupError if one of the operations between xs and ys does not have a registered gradient function.
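A minimal numeric check, assuming the simple quadratic below (the function is illustrative, not part of the original documentation):
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.constant([1.0, 2.0])
# f(x) = x0^2 + 3*x0*x1 + x1^2, whose analytic Hessian is [[2, 3], [3, 2]].
f = x[0] ** 2 + 3.0 * x[0] * x[1] + x[1] ** 2

hess = tf.hessians(f, x)[0]

with tf.Session() as sess:
    print(sess.run(hess))  # approximately [[2., 3.], [3., 2.]]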
tensorflow.compat.v1.hessians
tf.compat.v1.HistogramProto A ProtocolMessage Attributes bucket repeated double bucket bucket_limit repeated double bucket_limit max double max min double min num double num sum double sum sum_squares double sum_squares
tensorflow.compat.v1.histogramproto
tf.compat.v1.IdentityReader A Reader that outputs the queued work as both the key and value. Inherits From: ReaderBase tf.compat.v1.IdentityReader( name=None ) To use, enqueue strings in a Queue. Read will take the front work string and output (work, work). See ReaderBase for supported methods. Args name A name for the operation (optional). Eager Compatibility Readers are not compatible with eager execution. Instead, please use tf.data to get data into your model. Attributes reader_ref Op that implements the reader. supports_serialize Whether the Reader implementation can serialize its state. Methods num_records_produced View source num_records_produced( name=None ) Returns the number of records this reader has produced. This is the same as the number of Read executions that have succeeded. Args name A name for the operation (optional). Returns An int64 Tensor. num_work_units_completed View source num_work_units_completed( name=None ) Returns the number of work units this reader has finished processing. Args name A name for the operation (optional). Returns An int64 Tensor. read View source read( queue, name=None ) Returns the next record (key, value) pair produced by a reader. Will dequeue a work unit from queue if necessary (e.g. when the Reader needs to start reading from a new file since it has finished with the previous file). Args queue A Queue or a mutable string Tensor representing a handle to a Queue, with string work items. name A name for the operation (optional). Returns A tuple of Tensors (key, value). key A string scalar Tensor. value A string scalar Tensor. read_up_to View source read_up_to( queue, num_records, name=None ) Returns up to num_records (key, value) pairs produced by a reader. Will dequeue a work unit from queue if necessary (e.g., when the Reader needs to start reading from a new file since it has finished with the previous file). It may return less than num_records even before the last batch. Args queue A Queue or a mutable string Tensor representing a handle to a Queue, with string work items. num_records Number of records to read. name A name for the operation (optional). Returns A tuple of Tensors (keys, values). keys A 1-D string Tensor. values A 1-D string Tensor. reset View source reset( name=None ) Restore a reader to its initial clean state. Args name A name for the operation (optional). Returns The created Operation. restore_state View source restore_state( state, name=None ) Restore a reader to a previously saved state. Not all Readers support being restored, so this can produce an Unimplemented error. Args state A string Tensor. Result of a SerializeState of a Reader with matching type. name A name for the operation (optional). Returns The created Operation. serialize_state View source serialize_state( name=None ) Produce a string tensor that encodes the state of a reader. Not all Readers support being serialized, so this can produce an Unimplemented error. Args name A name for the operation (optional). Returns A string Tensor.
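Because readers only work in graph mode, a minimal sketch might look like this (the queue contents are illustrative assumptions):
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

queue = tf.FIFOQueue(capacity=10, dtypes=tf.string)
enqueue = queue.enqueue_many(tf.constant(['file-a', 'file-b']))

reader = tf.IdentityReader()
key, value = reader.read(queue)

with tf.Session() as sess:
    sess.run(enqueue)
    print(sess.run([key, value]))  # both should be b'file-a'
    print(sess.run([key, value]))  # both should be b'file-b'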
tensorflow.compat.v1.identityreader
Module: tf.compat.v1.image Image ops. The tf.image module contains various functions for image processing and decoding-encoding Ops. Many of the encoding/decoding functions are also available in the core tf.io module. Image processing Resizing The resizing Ops accept input images as tensors of several types. They always output resized images as float32 tensors. The convenience function tf.image.resize supports both 4-D and 3-D tensors as input and output. 4-D tensors are for batches of images, 3-D tensors for individual images. Resized images will be distorted if their original aspect ratio is not the same as size. To avoid distortions see tf.image.resize_with_pad. tf.image.resize tf.image.resize_with_pad tf.image.resize_with_crop_or_pad The class tf.image.ResizeMethod provides various resize methods like bilinear, nearest_neighbor. Converting Between Colorspaces Image ops work either on individual images or on batches of images, depending on the shape of their input Tensor. If 3-D, the shape is [height, width, channels], and the Tensor represents one image. If 4-D, the shape is [batch_size, height, width, channels], and the Tensor represents batch_size images. Currently, channels can usefully be 1, 2, 3, or 4. Single-channel images are grayscale, images with 3 channels are encoded as either RGB or HSV. Images with 2 or 4 channels include an alpha channel, which has to be stripped from the image before passing the image to most image processing functions (and can be re-attached later). Internally, images are either stored as one float32 per channel per pixel (implicitly, values are assumed to lie in [0,1)) or one uint8 per channel per pixel (values are assumed to lie in [0,255]). TensorFlow can convert between images in RGB or HSV or YIQ. tf.image.rgb_to_grayscale, tf.image.grayscale_to_rgb tf.image.rgb_to_hsv, tf.image.hsv_to_rgb tf.image.rgb_to_yiq, tf.image.yiq_to_rgb tf.image.rgb_to_yuv, tf.image.yuv_to_rgb tf.image.image_gradients tf.image.convert_image_dtype Image Adjustments TensorFlow provides functions to adjust images in various ways: brightness, contrast, hue, and saturation. Each adjustment can be done with predefined parameters or with random parameters picked from predefined intervals. Random adjustments are often useful to expand a training set and reduce overfitting. If several adjustments are chained, it is advisable to minimize the number of redundant conversions by first converting the images to the most natural data type and representation.
tf.image.adjust_brightness tf.image.adjust_contrast tf.image.adjust_gamma tf.image.adjust_hue tf.image.adjust_jpeg_quality tf.image.adjust_saturation tf.image.random_brightness tf.image.random_contrast tf.image.random_hue tf.image.random_saturation tf.image.per_image_standardization Working with Bounding Boxes tf.image.draw_bounding_boxes tf.image.combined_non_max_suppression tf.image.generate_bounding_box_proposals tf.image.non_max_suppression tf.image.non_max_suppression_overlaps tf.image.non_max_suppression_padded tf.image.non_max_suppression_with_scores tf.image.pad_to_bounding_box tf.image.sample_distorted_bounding_box Cropping tf.image.central_crop tf.image.crop_and_resize tf.image.crop_to_bounding_box tf.io.decode_and_crop_jpeg tf.image.extract_glimpse tf.image.random_crop tf.image.resize_with_crop_or_pad Flipping, Rotating and Transposing tf.image.flip_left_right tf.image.flip_up_down tf.image.random_flip_left_right tf.image.random_flip_up_down tf.image.rot90 tf.image.transpose Image decoding and encoding TensorFlow provides Ops to decode and encode JPEG and PNG formats. Encoded images are represented by scalar string Tensors, decoded images by 3-D uint8 tensors of shape [height, width, channels]. (PNG also supports uint16.) Note: decode_gif returns a 4-D array [num_frames, height, width, 3] The encode and decode Ops apply to one image at a time. Their input and output are all of variable size. If you need fixed size images, pass the output of the decode Ops to one of the cropping and resizing Ops. tf.io.decode_bmp tf.io.decode_gif tf.io.decode_image tf.io.decode_jpeg tf.io.decode_and_crop_jpeg tf.io.decode_png tf.io.encode_jpeg tf.io.encode_png Classes class ResizeMethod: See v1.image.resize for details. Functions adjust_brightness(...): Adjust the brightness of RGB or Grayscale images. adjust_contrast(...): Adjust contrast of RGB or grayscale images. adjust_gamma(...): Performs Gamma Correction. adjust_hue(...): Adjust hue of RGB images. adjust_jpeg_quality(...): Adjust jpeg encoding quality of an image. adjust_saturation(...): Adjust saturation of RGB images. central_crop(...): Crop the central region of the image(s). combined_non_max_suppression(...): Greedily selects a subset of bounding boxes in descending order of score. convert_image_dtype(...): Convert image to dtype, scaling its values if needed. crop_and_resize(...): Extracts crops from the input image tensor and resizes them. crop_to_bounding_box(...): Crops an image to a specified bounding box. decode_and_crop_jpeg(...): Decode and Crop a JPEG-encoded image to a uint8 tensor. decode_bmp(...): Decode the first frame of a BMP-encoded image to a uint8 tensor. decode_gif(...): Decode the frame(s) of a GIF-encoded image to a uint8 tensor. decode_image(...): Function for decode_bmp, decode_gif, decode_jpeg, and decode_png. decode_jpeg(...): Decode a JPEG-encoded image to a uint8 tensor. decode_png(...): Decode a PNG-encoded image to a uint8 or uint16 tensor. draw_bounding_boxes(...): Draw bounding boxes on a batch of images. encode_jpeg(...): JPEG-encode an image. encode_png(...): PNG-encode an image. extract_glimpse(...): Extracts a glimpse from the input tensor. extract_image_patches(...): Extract patches from images and put them in the "depth" output dimension. extract_jpeg_shape(...): Extract the shape information of a JPEG-encoded image. extract_patches(...): Extract patches from images. flip_left_right(...): Flip an image horizontally (left to right). flip_up_down(...): Flip an image vertically (upside down). 
generate_bounding_box_proposals(...): Generate bounding box proposals from encoded bounding boxes. grayscale_to_rgb(...): Converts one or more images from Grayscale to RGB. hsv_to_rgb(...): Convert one or more images from HSV to RGB. image_gradients(...): Returns image gradients (dy, dx) for each color channel. is_jpeg(...): Convenience function to check if the 'contents' encodes a JPEG image. non_max_suppression(...): Greedily selects a subset of bounding boxes in descending order of score. non_max_suppression_overlaps(...): Greedily selects a subset of bounding boxes in descending order of score. non_max_suppression_padded(...): Greedily selects a subset of bounding boxes in descending order of score. non_max_suppression_with_scores(...): Greedily selects a subset of bounding boxes in descending order of score. pad_to_bounding_box(...): Pad image with zeros to the specified height and width. per_image_standardization(...): Linearly scales each image in image to have mean 0 and variance 1. psnr(...): Returns the Peak Signal-to-Noise Ratio between a and b. random_brightness(...): Adjust the brightness of images by a random factor. random_contrast(...): Adjust the contrast of an image or images by a random factor. random_crop(...): Randomly crops a tensor to a given size. random_flip_left_right(...): Randomly flip an image horizontally (left to right). random_flip_up_down(...): Randomly flips an image vertically (upside down). random_hue(...): Adjust the hue of RGB images by a random factor. random_jpeg_quality(...): Randomly changes jpeg encoding quality for inducing jpeg noise. random_saturation(...): Adjust the saturation of RGB images by a random factor. resize(...): Resize images to size using the specified method. resize_area(...): Resize images to size using area interpolation. resize_bicubic(...) resize_bilinear(...) resize_image_with_crop_or_pad(...): Crops and/or pads an image to a target width and height. resize_image_with_pad(...): Resizes and pads an image to a target width and height. resize_images(...): Resize images to size using the specified method. resize_nearest_neighbor(...) resize_with_crop_or_pad(...): Crops and/or pads an image to a target width and height. rgb_to_grayscale(...): Converts one or more images from RGB to Grayscale. rgb_to_hsv(...): Converts one or more images from RGB to HSV. rgb_to_yiq(...): Converts one or more images from RGB to YIQ. rgb_to_yuv(...): Converts one or more images from RGB to YUV. rot90(...): Rotate image(s) counter-clockwise by 90 degrees. sample_distorted_bounding_box(...): Generate a single randomly distorted bounding box for an image. (deprecated) sobel_edges(...): Returns a tensor holding Sobel edge maps. ssim(...): Computes SSIM index between img1 and img2. ssim_multiscale(...): Computes the MS-SSIM between img1 and img2. total_variation(...): Calculate and return the total variation for one or more images. transpose(...): Transpose image(s) by swapping the height and width dimension. transpose_image(...): Transpose image(s) by swapping the height and width dimension. yiq_to_rgb(...): Converts one or more images from YIQ to RGB. yuv_to_rgb(...): Converts one or more images from YUV to RGB.
tensorflow.compat.v1.image
tf.compat.v1.image.crop_and_resize Extracts crops from the input image tensor and resizes them. tf.compat.v1.image.crop_and_resize( image, boxes, box_ind=None, crop_size=None, method='bilinear', extrapolation_value=0, name=None, box_indices=None ) Extracts crops from the input image tensor and resizes them using bilinear sampling or nearest neighbor sampling (possibly with aspect ratio change) to a common output size specified by crop_size. This is more general than the crop_to_bounding_box op which extracts a fixed size slice from the input image and does not allow resizing or aspect ratio change. Returns a tensor with crops from the input image at positions defined at the bounding box locations in boxes. The cropped boxes are all resized (with bilinear or nearest neighbor interpolation) to a fixed size = [crop_height, crop_width]. The result is a 4-D tensor [num_boxes, crop_height, crop_width, depth]. The resizing is corner aligned. In particular, if boxes = [[0, 0, 1, 1]], the method will give identical results to using tf.image.resize_bilinear() or tf.image.resize_nearest_neighbor() (depending on the method argument) with align_corners=True. Args image A Tensor. Must be one of the following types: uint8, uint16, int8, int16, int32, int64, half, float32, float64. A 4-D tensor of shape [batch, image_height, image_width, depth]. Both image_height and image_width need to be positive. boxes A Tensor of type float32. A 2-D tensor of shape [num_boxes, 4]. The i-th row of the tensor specifies the coordinates of a box in the box_ind[i] image and is specified in normalized coordinates [y1, x1, y2, x2]. A normalized coordinate value of y is mapped to the image coordinate at y * (image_height - 1), so the [0, 1] interval of normalized image height is mapped to [0, image_height - 1] in image height coordinates. We do allow y1 > y2, in which case the sampled crop is an up-down flipped version of the original image. The width dimension is treated similarly. Normalized coordinates outside the [0, 1] range are allowed, in which case we use extrapolation_value to extrapolate the input image values. box_ind A Tensor of type int32. A 1-D tensor of shape [num_boxes] with int32 values in [0, batch). The value of box_ind[i] specifies the image that the i-th box refers to. crop_size A Tensor of type int32. A 1-D tensor of 2 elements, size = [crop_height, crop_width]. All cropped image patches are resized to this size. The aspect ratio of the image content is not preserved. Both crop_height and crop_width need to be positive. method An optional string from: "bilinear", "nearest". Defaults to "bilinear". A string specifying the sampling method for resizing. It can be either "bilinear" or "nearest" and defaults to "bilinear". Currently two sampling methods are supported: Bilinear and Nearest Neighbor. extrapolation_value An optional float. Defaults to 0. Value used for extrapolation, when applicable. name A name for the operation (optional). Returns A Tensor of type float32.
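A small sketch (the 4x4 test image and the box are illustrative assumptions, not from the original documentation):
import tensorflow.compat.v1 as tf

# A batch containing one 4x4 single-channel image with values 0..15.
image = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])

# Crop the top-left quarter (normalized [y1, x1, y2, x2]) and resize it to 2x2.
boxes = [[0.0, 0.0, 0.5, 0.5]]
crops = tf.image.crop_and_resize(image, boxes, box_ind=[0], crop_size=[2, 2])
# With corner-aligned bilinear sampling this should be approximately
# [[0., 1.5], [6., 7.5]] for the single channel.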
tensorflow.compat.v1.image.crop_and_resize
tf.compat.v1.image.draw_bounding_boxes Draw bounding boxes on a batch of images. tf.compat.v1.image.draw_bounding_boxes( images, boxes, name=None, colors=None ) Outputs a copy of images but draws on top of the pixels zero or more bounding boxes specified by the locations in boxes. The coordinates of each bounding box in boxes are encoded as [y_min, x_min, y_max, x_max]. The bounding box coordinates are floats in [0.0, 1.0] relative to the width and the height of the underlying image. For example, if an image is 100 x 200 pixels (height x width) and the bounding box is [0.1, 0.2, 0.5, 0.9], the upper-left and bottom-right coordinates of the bounding box will be (40, 10) to (180, 50) (in (x,y) coordinates). Parts of the bounding box may fall outside the image. Args images A Tensor. Must be one of the following types: float32, half. 4-D with shape [batch, height, width, depth]. A batch of images. boxes A Tensor of type float32. 3-D with shape [batch, num_bounding_boxes, 4] containing bounding boxes. name A name for the operation (optional). colors A Tensor of type float32. 2-D. A list of RGBA colors to cycle through for the boxes. Returns A Tensor. Has the same type as images. Usage Example: # create an empty image img = tf.zeros([1, 3, 3, 3]) # draw a box around the image box = np.array([0, 0, 1, 1]) boxes = box.reshape([1, 1, 4]) # alternate between red and blue colors = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]) tf.image.draw_bounding_boxes(img, boxes, colors) <tf.Tensor: shape=(1, 3, 3, 3), dtype=float32, numpy= array([[[[1., 0., 0.], [1., 0., 0.], [1., 0., 0.]], [[1., 0., 0.], [0., 0., 0.], [1., 0., 0.]], [[1., 0., 0.], [1., 0., 0.], [1., 0., 0.]]]], dtype=float32)>
tensorflow.compat.v1.image.draw_bounding_boxes
tf.compat.v1.image.extract_glimpse Extracts a glimpse from the input tensor. tf.compat.v1.image.extract_glimpse( input, size, offsets, centered=True, normalized=True, uniform_noise=True, name=None ) Returns a set of windows called glimpses extracted at location offsets from the input tensor. If the windows only partially overlap the inputs, the non-overlapping areas will be filled with random noise. The result is a 4-D tensor of shape [batch_size, glimpse_height, glimpse_width, channels]. The channels and batch dimensions are the same as that of the input tensor. The height and width of the output windows are specified in the size parameter. The arguments normalized and centered control how the windows are built: If the coordinates are normalized but not centered, 0.0 and 1.0 correspond to the minimum and maximum of each height and width dimension. If the coordinates are both normalized and centered, they range from -1.0 to 1.0. The coordinates (-1.0, -1.0) correspond to the upper left corner, the lower right corner is located at (1.0, 1.0) and the center is at (0, 0). If the coordinates are not normalized they are interpreted as numbers of pixels. Usage Example: x = [[[[0.0], [1.0], [2.0]], [[3.0], [4.0], [5.0]], [[6.0], [7.0], [8.0]]]] tf.compat.v1.image.extract_glimpse(x, size=(2, 2), offsets=[[1, 1]], centered=False, normalized=False) <tf.Tensor: shape=(1, 2, 2, 1), dtype=float32, numpy= array([[[[0.], [1.]], [[3.], [4.]]]], dtype=float32)> Args input A Tensor of type float32. A 4-D float tensor of shape [batch_size, height, width, channels]. size A Tensor of type int32. A 1-D tensor of 2 elements containing the size of the glimpses to extract. The glimpse height must be specified first, followed by the glimpse width. offsets A Tensor of type float32. A 2-D integer tensor of shape [batch_size, 2] containing the y, x locations of the center of each window. centered An optional bool. Defaults to True. Indicates if the offset coordinates are centered relative to the image, in which case the (0, 0) offset is relative to the center of the input images. If false, the (0,0) offset corresponds to the upper left corner of the input images. normalized An optional bool. Defaults to True. Indicates if the offset coordinates are normalized. uniform_noise An optional bool. Defaults to True. Indicates if the noise should be generated using a uniform distribution or a Gaussian distribution. name A name for the operation (optional). Returns A Tensor of type float32.
tensorflow.compat.v1.image.extract_glimpse
tf.compat.v1.image.resize Resize images to size using the specified method. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.image.resize_images tf.compat.v1.image.resize( images, size, method=ResizeMethodV1.BILINEAR, align_corners=False, preserve_aspect_ratio=False, name=None ) Resized images will be distorted if their original aspect ratio is not the same as size. To avoid distortions see tf.image.resize_with_pad or tf.image.resize_with_crop_or_pad. The method can be one of: tf.image.ResizeMethod.BILINEAR: Bilinear interpolation. tf.image.ResizeMethod.NEAREST_NEIGHBOR: Nearest neighbor interpolation. tf.image.ResizeMethod.BICUBIC: Bicubic interpolation. tf.image.ResizeMethod.AREA: Area interpolation. The return value has the same type as images if method is tf.image.ResizeMethod.NEAREST_NEIGHBOR. It will also have the same type as images if the size of images can be statically determined to be the same as size, because images is returned in this case. Otherwise, the return value has type float32. Args images 4-D Tensor of shape [batch, height, width, channels] or 3-D Tensor of shape [height, width, channels]. size A 1-D int32 Tensor of 2 elements: new_height, new_width. The new size for the images. method ResizeMethod. Defaults to tf.image.ResizeMethod.BILINEAR. align_corners bool. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Defaults to False. preserve_aspect_ratio Whether to preserve the aspect ratio. If this is set, then images will be resized to a size that fits in size while preserving the aspect ratio of the original image. Scales up the image if size is bigger than the current size of the image. Defaults to False. name A name for this operation (optional). Raises ValueError if the shape of images is incompatible with the shape arguments to this function ValueError if size has invalid shape or type. ValueError if an unsupported resize method is specified. Returns If images was 4-D, a 4-D float Tensor of shape [batch, new_height, new_width, channels]. If images was 3-D, a 3-D float Tensor of shape [new_height, new_width, channels].
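A brief sketch of the two modes (the shapes are illustrative assumptions):
import tensorflow.compat.v1 as tf

image = tf.zeros([1, 4, 6, 3])  # a batch of one 4x6 RGB image

# Plain bilinear resize to 8x12; the aspect ratio may change.
resized = tf.image.resize(image, [8, 12])

# Preserve the aspect ratio: the image is scaled so that it fits within 8x8.
fitted = tf.image.resize(image, [8, 8], preserve_aspect_ratio=True)
# resized has shape (1, 8, 12, 3); fitted keeps the original 4:6 ratio.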
tensorflow.compat.v1.image.resize
tf.compat.v1.image.ResizeMethod See v1.image.resize for details. Class Variables AREA 3 BICUBIC 2 BILINEAR 0 NEAREST_NEIGHBOR 1
tensorflow.compat.v1.image.resizemethod
tf.compat.v1.image.resize_area Resize images to size using area interpolation. tf.compat.v1.image.resize_area( images, size, align_corners=False, name=None ) Input images can be of different types but output images are always float. The range of pixel values for the output image might be slightly different from the range for the input image because of limited numerical precision. To guarantee an output range, for example [0.0, 1.0], apply tf.clip_by_value to the output. Each output pixel is computed by first transforming the pixel's footprint into the input tensor and then averaging the pixels that intersect the footprint. An input pixel's contribution to the average is weighted by the fraction of its area that intersects the footprint. This is the same as OpenCV's INTER_AREA. Args images A Tensor. Must be one of the following types: int8, uint8, int16, uint16, int32, int64, half, float32, float64, bfloat16. 4-D with shape [batch, height, width, channels]. size A 1-D int32 Tensor of 2 elements: new_height, new_width. The new size for the images. align_corners An optional bool. Defaults to False. If true, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Defaults to false. name A name for the operation (optional). Returns A Tensor of type float32.
tensorflow.compat.v1.image.resize_area
tf.compat.v1.image.resize_bicubic tf.compat.v1.image.resize_bicubic( images, size, align_corners=False, name=None, half_pixel_centers=False )
tensorflow.compat.v1.image.resize_bicubic
tf.compat.v1.image.resize_bilinear tf.compat.v1.image.resize_bilinear( images, size, align_corners=False, name=None, half_pixel_centers=False )
tensorflow.compat.v1.image.resize_bilinear
tf.compat.v1.image.resize_image_with_pad Resizes and pads an image to a target width and height. tf.compat.v1.image.resize_image_with_pad( image, target_height, target_width, method=ResizeMethodV1.BILINEAR, align_corners=False ) Resizes an image to a target width and height by keeping the aspect ratio the same without distortion. If the target dimensions don't match the image dimensions, the image is resized and then padded with zeroes to match requested dimensions. Args image 4-D Tensor of shape [batch, height, width, channels] or 3-D Tensor of shape [height, width, channels]. target_height Target height. target_width Target width. method Method to use for resizing image. See resize_images() align_corners bool. If True, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Defaults to False. Raises ValueError if target_height or target_width are zero or negative. Returns Resized and padded image. If images was 4-D, a 4-D float Tensor of shape [batch, new_height, new_width, channels]. If images was 3-D, a 3-D float Tensor of shape [new_height, new_width, channels].
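For example (the shapes are illustrative assumptions):
import tensorflow.compat.v1 as tf

image = tf.ones([4, 6, 3])  # a single 4x6 RGB image

# Scale to fit within 8x8 while keeping the aspect ratio, then pad with zeros.
padded = tf.image.resize_image_with_pad(image, target_height=8, target_width=8)
# padded has shape (8, 8, 3); the rows added to reach the target are zero padding.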
tensorflow.compat.v1.image.resize_image_with_pad
tf.compat.v1.image.resize_nearest_neighbor tf.compat.v1.image.resize_nearest_neighbor( images, size, align_corners=False, name=None, half_pixel_centers=False )
tensorflow.compat.v1.image.resize_nearest_neighbor
tf.compat.v1.image.sample_distorted_bounding_box Generate a single randomly distorted bounding box for an image. (deprecated) tf.compat.v1.image.sample_distorted_bounding_box( image_size, bounding_boxes, seed=None, seed2=None, min_object_covered=0.1, aspect_ratio_range=None, area_range=None, max_attempts=None, use_image_if_no_bounding_boxes=None, name=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: seed2 arg is deprecated. Use sample_distorted_bounding_box_v2 instead. Bounding box annotations are often supplied in addition to ground-truth labels in image recognition or object localization tasks. A common technique for training such a system is to randomly distort an image while preserving its content, i.e. data augmentation. This Op outputs a randomly distorted localization of an object, i.e. bounding box, given an image_size, bounding_boxes and a series of constraints. The output of this Op is a single bounding box that may be used to crop the original image. The output is returned as 3 tensors: begin, size and bboxes. The first 2 tensors can be fed directly into tf.slice to crop the image. The latter may be supplied to tf.image.draw_bounding_boxes to visualize what the bounding box looks like. Bounding boxes are supplied and returned as [y_min, x_min, y_max, x_max]. The bounding box coordinates are floats in [0.0, 1.0] relative to the width and height of the underlying image. For example, # Generate a single distorted bounding box. begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box( tf.shape(image), bounding_boxes=bounding_boxes, min_object_covered=0.1) # Draw the bounding box in an image summary. image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0), bbox_for_draw) tf.compat.v1.summary.image('images_with_box', image_with_box) # Employ the bounding box to distort the image. distorted_image = tf.slice(image, begin, size) Note that if no bounding box information is available, setting use_image_if_no_bounding_boxes = True will assume there is a single implicit bounding box covering the whole image. If use_image_if_no_bounding_boxes is false and no bounding boxes are supplied, an error is raised. Args image_size A Tensor. Must be one of the following types: uint8, int8, int16, int32, int64. 1-D, containing [height, width, channels]. bounding_boxes A Tensor of type float32. 3-D with shape [batch, N, 4] describing the N bounding boxes associated with the image. seed An optional int. Defaults to 0. If either seed or seed2 are set to non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed. seed2 An optional int. Defaults to 0. A second seed to avoid seed collision. min_object_covered A Tensor of type float32. Defaults to 0.1. The cropped area of the image must contain at least this fraction of any bounding box supplied. The value of this parameter should be non-negative. In the case of 0, the cropped area does not need to overlap any of the bounding boxes supplied. aspect_ratio_range An optional list of floats. Defaults to [0.75, 1.33]. The cropped area of the image must have an aspect ratio = width / height within this range. area_range An optional list of floats. Defaults to [0.05, 1]. The cropped area of the image must contain a fraction of the supplied image within this range. max_attempts An optional int. Defaults to 100. Number of attempts at generating a cropped region of the image with the specified constraints.
After max_attempts failures, return the entire image. use_image_if_no_bounding_boxes An optional bool. Defaults to False. Controls behavior if no bounding boxes supplied. If true, assume an implicit bounding box covering the whole input. If false, raise an error. name A name for the operation (optional). Returns A tuple of Tensor objects (begin, size, bboxes). begin A Tensor. Has the same type as image_size. 1-D, containing [offset_height, offset_width, 0]. Provide as input to tf.slice. size A Tensor. Has the same type as image_size. 1-D, containing [target_height, target_width, -1]. Provide as input to tf.slice. bboxes A Tensor of type float32. 3-D with shape [1, 1, 4] containing the distorted bounding box. Provide as input to tf.image.draw_bounding_boxes.
tensorflow.compat.v1.image.sample_distorted_bounding_box
Module: tf.compat.v1.initializers Public API for tf.initializers namespace. Classes class constant: Initializer that generates tensors with constant values. class glorot_normal: The Glorot normal initializer, also called Xavier normal initializer. class glorot_uniform: The Glorot uniform initializer, also called Xavier uniform initializer. class identity: Initializer that generates the identity matrix. class ones: Initializer that generates tensors initialized to 1. class orthogonal: Initializer that generates an orthogonal matrix. class random_normal: Initializer that generates tensors with a normal distribution. class random_uniform: Initializer that generates tensors with a uniform distribution. class truncated_normal: Initializer that generates a truncated normal distribution. class uniform_unit_scaling: Initializer that generates tensors without scaling variance. class variance_scaling: Initializer capable of adapting its scale to the shape of weights tensors. class zeros: Initializer that generates tensors initialized to 0. Functions global_variables(...): Returns an Op that initializes global variables. he_normal(...): He normal initializer. he_uniform(...): He uniform variance scaling initializer. lecun_normal(...): LeCun normal initializer. lecun_uniform(...): LeCun uniform initializer. local_variables(...): Returns an Op that initializes all local variables. tables_initializer(...): Returns an Op that initializes all tables of the default graph. variables(...): Returns an Op that initializes a list of variables.
tensorflow.compat.v1.initializers
tf.compat.v1.initializers.he_normal He normal initializer. tf.compat.v1.initializers.he_normal( seed=None ) It draws samples from a truncated normal distribution centered on 0 with standard deviation (after truncation) given by stddev = sqrt(2 / fan_in) where fan_in is the number of input units in the weight tensor. Arguments seed A Python integer. Used to seed the random generator. Returns An initializer. References: He et al., 2015 (pdf)
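A hedged sketch of using it with tf.compat.v1.get_variable (the layer shape is an illustrative assumption):
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

init = tf.initializers.he_normal(seed=42)
w = tf.get_variable('w', shape=[784, 256], initializer=init)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # The sample standard deviation should be close to sqrt(2 / 784), about 0.05.
    print(sess.run(tf.math.reduce_std(w)))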
tensorflow.compat.v1.initializers.he_normal
tf.compat.v1.initializers.he_uniform He uniform variance scaling initializer. tf.compat.v1.initializers.he_uniform( seed=None ) It draws samples from a uniform distribution within [-limit, limit] where limit is sqrt(6 / fan_in) where fan_in is the number of input units in the weight tensor. Arguments seed A Python integer. Used to seed the random generator. Returns An initializer. References: He et al., 2015 (pdf)
tensorflow.compat.v1.initializers.he_uniform
tf.compat.v1.initializers.lecun_normal LeCun normal initializer. tf.compat.v1.initializers.lecun_normal( seed=None ) It draws samples from a truncated normal distribution centered on 0 with standard deviation (after truncation) given by stddev = sqrt(1 / fan_in) where fan_in is the number of input units in the weight tensor. Arguments seed A Python integer. Used to seed the random generator. Returns An initializer. References: Self-Normalizing Neural Networks, Klambauer et al., 2017 (pdf) Efficient Backprop, Lecun et al., 1998
tensorflow.compat.v1.initializers.lecun_normal
tf.compat.v1.initializers.lecun_uniform LeCun uniform initializer. tf.compat.v1.initializers.lecun_uniform( seed=None ) It draws samples from a uniform distribution within [-limit, limit] where limit is sqrt(3 / fan_in) where fan_in is the number of input units in the weight tensor. Arguments seed A Python integer. Used to seed the random generator. Returns An initializer. References: Self-Normalizing Neural Networks, Klambauer et al., 2017 (pdf) Efficient Backprop, Lecun et al., 1998
tensorflow.compat.v1.initializers.lecun_uniform
tf.compat.v1.initialize_all_tables Returns an Op that initializes all tables of the default graph. (deprecated) tf.compat.v1.initialize_all_tables( name='init_all_tables' ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.tables_initializer instead. Args name Optional name for the initialization op. Returns An Op that initializes all tables. Note that if there are no tables the returned Op is a NoOp.
tensorflow.compat.v1.initialize_all_tables
tf.compat.v1.initialize_all_variables See tf.compat.v1.global_variables_initializer. (deprecated) tf.compat.v1.initialize_all_variables() Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02. Instructions for updating: Use tf.global_variables_initializer instead. Note: The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.
tensorflow.compat.v1.initialize_all_variables
tf.compat.v1.initialize_local_variables See tf.compat.v1.local_variables_initializer. (deprecated) tf.compat.v1.initialize_local_variables() Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02. Instructions for updating: Use tf.local_variables_initializer instead. Note: The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.
tensorflow.compat.v1.initialize_local_variables
tf.compat.v1.initialize_variables See tf.compat.v1.variables_initializer. (deprecated) tf.compat.v1.initialize_variables( var_list, name='init' ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02. Instructions for updating: Use tf.variables_initializer instead. Note: The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.
tensorflow.compat.v1.initialize_variables
tf.compat.v1.InteractiveSession A TensorFlow Session for use in interactive contexts, such as a shell. tf.compat.v1.InteractiveSession( target='', graph=None, config=None ) The only difference with a regular Session is that an InteractiveSession installs itself as the default session on construction. The methods tf.Tensor.eval and tf.Operation.run will use that session to run ops. This is convenient in interactive shells and IPython notebooks, as it avoids having to pass an explicit Session object to run ops. For example: sess = tf.compat.v1.InteractiveSession() a = tf.constant(5.0) b = tf.constant(6.0) c = a * b # We can just use 'c.eval()' without passing 'sess' print(c.eval()) sess.close() Note that a regular session installs itself as the default session when it is created in a with statement. The common usage in non-interactive programs is to follow that pattern: a = tf.constant(5.0) b = tf.constant(6.0) c = a * b with tf.compat.v1.Session(): # We can also use 'c.eval()' here. print(c.eval()) Args target (Optional.) The execution engine to connect to. Defaults to using an in-process engine. graph (Optional.) The Graph to be launched (described above). config (Optional) ConfigProto proto used to configure the session. Attributes graph The graph that was launched in this session. graph_def A serializable version of the underlying TensorFlow graph. sess_str The TensorFlow process to which this session will connect. Methods as_default View source as_default() Returns a context manager that makes this object the default session. Use with the with keyword to specify that calls to tf.Operation.run or tf.Tensor.eval should be executed in this session. c = tf.constant(..) sess = tf.compat.v1.Session() with sess.as_default(): assert tf.compat.v1.get_default_session() is sess print(c.eval()) To get the current default session, use tf.compat.v1.get_default_session. Note: The as_default context manager does not close the session when you exit the context, and you must close the session explicitly. c = tf.constant(...) sess = tf.compat.v1.Session() with sess.as_default(): print(c.eval()) # ... with sess.as_default(): print(c.eval()) sess.close() Alternatively, you can use with tf.compat.v1.Session(): to create a session that is automatically closed on exiting the context, including when an uncaught exception is raised. Note: The default session is a property of the current thread. If you create a new thread, and wish to use the default session in that thread, you must explicitly add a with sess.as_default(): in that thread's function. Note: Entering a with sess.as_default(): block does not affect the current default graph. If you are using multiple graphs, and sess.graph is different from the value of tf.compat.v1.get_default_graph, you must explicitly enter a with sess.graph.as_default(): block to make sess.graph the default graph. Returns A context manager using this session as the default session. close View source close() Closes an InteractiveSession. list_devices View source list_devices() Lists available devices in this session. devices = sess.list_devices() for d in devices: print(d.name) Where: Each element in the list has the following properties name: A string with the full name of the device. ex: /job:worker/replica:0/task:3/device:CPU:0 device_type: The type of the device (e.g. CPU, GPU, TPU.) memory_limit: The maximum amount of memory available on the device. Note: depending on the device, it is possible the usable memory could be substantially less. 
Raises tf.errors.OpError If it encounters an error (e.g. session is in an invalid state, or network errors occur). Returns A list of devices in the session. make_callable View source make_callable( fetches, feed_list=None, accept_options=False ) Returns a Python callable that runs a particular step. The returned callable will take len(feed_list) arguments whose types must be compatible feed values for the respective elements of feed_list. For example, if element i of feed_list is a tf.Tensor, the ith argument to the returned callable must be a numpy ndarray (or something convertible to an ndarray) with matching element type and shape. See tf.Session.run for details of the allowable feed key and value types. The returned callable will have the same return type as tf.Session.run(fetches, ...). For example, if fetches is a tf.Tensor, the callable will return a numpy ndarray; if fetches is a tf.Operation, it will return None. Args fetches A value or list of values to fetch. See tf.Session.run for details of the allowable fetch types. feed_list (Optional.) A list of feed_dict keys. See tf.Session.run for details of the allowable feed key types. accept_options (Optional.) If True, the returned Callable will be able to accept tf.compat.v1.RunOptions and tf.compat.v1.RunMetadata as optional keyword arguments options and run_metadata, respectively, with the same syntax and semantics as tf.Session.run, which is useful for certain use cases (profiling and debugging) but will result in measurable slowdown of the Callable's performance. Default: False. Returns A function that when called will execute the step defined by feed_list and fetches in this session. Raises TypeError If fetches or feed_list cannot be interpreted as arguments to tf.Session.run. partial_run View source partial_run( handle, fetches, feed_dict=None ) Continues the execution with more feeds and fetches. This is EXPERIMENTAL and subject to change. To use partial execution, a user first calls partial_run_setup() and then a sequence of partial_run(). partial_run_setup specifies the list of feeds and fetches that will be used in the subsequent partial_run calls. The optional feed_dict argument allows the caller to override the value of tensors in the graph. See run() for more information. Below is a simple example: a = array_ops.placeholder(dtypes.float32, shape=[]) b = array_ops.placeholder(dtypes.float32, shape=[]) c = array_ops.placeholder(dtypes.float32, shape=[]) r1 = math_ops.add(a, b) r2 = math_ops.multiply(r1, c) h = sess.partial_run_setup([r1, r2], [a, b, c]) res = sess.partial_run(h, r1, feed_dict={a: 1, b: 2}) res = sess.partial_run(h, r2, feed_dict={c: res}) Args handle A handle for a sequence of partial runs. fetches A single graph element, a list of graph elements, or a dictionary whose values are graph elements or lists of graph elements (see documentation for run). feed_dict A dictionary that maps graph elements to values (described above). Returns Either a single value if fetches is a single graph element, or a list of values if fetches is a list, or a dictionary with the same keys as fetches if that is a dictionary (see documentation for run). Raises tf.errors.OpError Or one of its subclasses on error. partial_run_setup View source partial_run_setup( fetches, feeds=None ) Sets up a graph with feeds and fetches for partial run. This is EXPERIMENTAL and subject to change. Note that contrary to run, feeds only specifies the graph elements. The tensors will be supplied by the subsequent partial_run calls. 
Args fetches A single graph element, or a list of graph elements. feeds A single graph element, or a list of graph elements. Returns A handle for partial run. Raises RuntimeError If this Session is in an invalid state (e.g. has been closed). TypeError If fetches or feed_dict keys are of an inappropriate type. tf.errors.OpError Or one of its subclasses if a TensorFlow error happens. run View source run( fetches, feed_dict=None, options=None, run_metadata=None ) Runs operations and evaluates tensors in fetches. This method runs one "step" of TensorFlow computation, by running the necessary graph fragment to execute every Operation and evaluate every Tensor in fetches, substituting the values in feed_dict for the corresponding input values. The fetches argument may be a single graph element, or an arbitrarily nested list, tuple, namedtuple, dict, or OrderedDict containing graph elements at its leaves. A graph element can be one of the following types: A tf.Operation. The corresponding fetched value will be None. A tf.Tensor. The corresponding fetched value will be a numpy ndarray containing the value of that tensor. A tf.sparse.SparseTensor. The corresponding fetched value will be a tf.compat.v1.SparseTensorValue containing the value of that sparse tensor. A get_tensor_handle op. The corresponding fetched value will be a numpy ndarray containing the handle of that tensor. A string which is the name of a tensor or operation in the graph. The value returned by run() has the same shape as the fetches argument, where the leaves are replaced by the corresponding values returned by TensorFlow. Example: a = tf.constant([10, 20]) b = tf.constant([1.0, 2.0]) # 'fetches' can be a singleton v = session.run(a) # v is the numpy array [10, 20] # 'fetches' can be a list. v = session.run([a, b]) # v is a Python list with 2 numpy arrays: the 1-D array [10, 20] and the # 1-D array [1.0, 2.0] # 'fetches' can be arbitrary lists, tuples, namedtuple, dicts: MyData = collections.namedtuple('MyData', ['a', 'b']) v = session.run({'k1': MyData(a, b), 'k2': [b, a]}) # v is a dict with # v['k1'] is a MyData namedtuple with 'a' (the numpy array [10, 20]) and # 'b' (the numpy array [1.0, 2.0]) # v['k2'] is a list with the numpy array [1.0, 2.0] and the numpy array # [10, 20]. The optional feed_dict argument allows the caller to override the value of tensors in the graph. Each key in feed_dict can be one of the following types: If the key is a tf.Tensor, the value may be a Python scalar, string, list, or numpy ndarray that can be converted to the same dtype as that tensor. Additionally, if the key is a tf.compat.v1.placeholder, the shape of the value will be checked for compatibility with the placeholder. If the key is a tf.sparse.SparseTensor, the value should be a tf.compat.v1.SparseTensorValue. If the key is a nested tuple of Tensors or SparseTensors, the value should be a nested tuple with the same structure that maps to their corresponding values as above. Each value in feed_dict must be convertible to a numpy array of the dtype of the corresponding key. The optional options argument expects a [RunOptions] proto. The options allow controlling the behavior of this particular step (e.g. turning tracing on). The optional run_metadata argument expects a [RunMetadata] proto. When appropriate, the non-Tensor output of this step will be collected there. For example, when users turn on tracing in options, the profiled info will be collected into this argument and passed back. 
Args fetches A single graph element, a list of graph elements, or a dictionary whose values are graph elements or lists of graph elements (described above). feed_dict A dictionary that maps graph elements to values (described above). options A RunOptions protocol buffer. run_metadata A RunMetadata protocol buffer. Returns Either a single value if fetches is a single graph element, or a list of values if fetches is a list, or a dictionary with the same keys as fetches if that is a dictionary (described above). The order in which the fetches operations are evaluated inside the call is undefined. Raises RuntimeError If this Session is in an invalid state (e.g. has been closed). TypeError If fetches or feed_dict keys are of an inappropriate type. ValueError If fetches or feed_dict keys are invalid or refer to a Tensor that doesn't exist.
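The make_callable method documented earlier on this page has no inline example. A minimal sketch in TF 1.x graph mode, with illustrative tensor names and values:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

x = tf.placeholder(tf.float32, shape=[])
y = x * 2.0

with tf.Session() as sess:
    # One positional argument per element of feed_list, in the same order.
    double = sess.make_callable(y, feed_list=[x])
    print(double(3.0))  # 6.0, same return type as sess.run(y, feed_dict={x: 3.0})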
tensorflow.compat.v1.interactivesession
Module: tf.compat.v1.io Public API for tf.io namespace. Modules gfile module: Public API for tf.io.gfile namespace. Classes class FixedLenFeature: Configuration for parsing a fixed-length input feature. class FixedLenSequenceFeature: Configuration for parsing a variable-length input feature into a Tensor. class PaddingFIFOQueue: A FIFOQueue that supports batching variable-sized tensors by padding. class PriorityQueue: A queue implementation that dequeues elements in prioritized order. class QueueBase: Base class for queue implementations. class RaggedFeature: Configuration for passing a RaggedTensor input feature. class RandomShuffleQueue: A queue implementation that dequeues elements in a random order. class SparseFeature: Configuration for parsing a sparse input feature from an Example. class TFRecordCompressionType: The type of compression for the record. class TFRecordOptions: Options used for manipulating TFRecord files. class TFRecordWriter: A class to write records to a TFRecords file. class VarLenFeature: Configuration for parsing a variable-length input feature. Functions decode_and_crop_jpeg(...): Decode and crop a JPEG-encoded image to a uint8 tensor. decode_base64(...): Decode web-safe base64-encoded strings. decode_bmp(...): Decode the first frame of a BMP-encoded image to a uint8 tensor. decode_compressed(...): Decompress strings. decode_csv(...): Convert CSV records to tensors. Each column maps to one tensor. decode_gif(...): Decode the frame(s) of a GIF-encoded image to a uint8 tensor. decode_image(...): Function for decode_bmp, decode_gif, decode_jpeg, and decode_png. decode_jpeg(...): Decode a JPEG-encoded image to a uint8 tensor. decode_json_example(...): Convert JSON-encoded Example records to binary protocol buffer strings. decode_png(...): Decode a PNG-encoded image to a uint8 or uint16 tensor. decode_proto(...): The op extracts fields from a serialized protocol buffers message into tensors. decode_raw(...): Convert raw byte strings into tensors. (deprecated arguments) deserialize_many_sparse(...): Deserialize and concatenate SparseTensors from a serialized minibatch. encode_base64(...): Encode strings into web-safe base64 format. encode_jpeg(...): JPEG-encode an image. encode_png(...): PNG-encode an image. encode_proto(...): The op serializes protobuf messages provided in the input tensors. extract_jpeg_shape(...): Extract the shape information of a JPEG-encoded image. is_jpeg(...): Convenience function to check if the 'contents' encodes a JPEG image. match_filenames_once(...): Save the list of files matching pattern, so it is only computed once. matching_files(...): Returns the set of files matching one or more glob patterns. parse_example(...): Parses Example protos into a dict of tensors. parse_sequence_example(...): Parses a batch of SequenceExample protos. parse_single_example(...): Parses a single Example proto. parse_single_sequence_example(...): Parses a single SequenceExample proto. parse_tensor(...): Transforms a serialized tensorflow.TensorProto proto into a Tensor. read_file(...): Reads and outputs the entire contents of the input filename. serialize_many_sparse(...): Serialize an N-minibatch SparseTensor into an [N, 3] Tensor. serialize_sparse(...): Serialize a SparseTensor into a 3-vector (1-D Tensor) object. serialize_tensor(...): Transforms a Tensor into a serialized TensorProto proto. tf_record_iterator(...): An iterator that reads the records from a TFRecords file. (deprecated) write_file(...): Writes contents to the file at input filename. Creates the file, and recursively creates the directory, if it does not exist. write_graph(...): Writes a graph proto to a file.
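As a small illustration of two of the functions listed above, a minimal sketch of the serialize_tensor / parse_tensor round trip in TF 1.x graph mode (the tensor values are illustrative):

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

t = tf.constant([[1, 2], [3, 4]], dtype=tf.int32)
serialized = tf.io.serialize_tensor(t)                        # scalar string tensor holding a TensorProto
restored = tf.io.parse_tensor(serialized, out_type=tf.int32)  # out_type must match the original dtype

with tf.Session() as sess:
    print(sess.run(restored))  # [[1 2] [3 4]]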
tensorflow.compat.v1.io
Module: tf.compat.v1.io.gfile Public API for tf.io.gfile namespace. Classes class GFile: File I/O wrappers without thread locking. Functions copy(...): Copies data from src to dst. exists(...): Determines whether a path exists or not. glob(...): Returns a list of files that match the given pattern(s). isdir(...): Returns whether the path is a directory or not. listdir(...): Returns a list of entries contained within a directory. makedirs(...): Creates a directory and all parent/intermediate directories. mkdir(...): Creates a directory with the name given by path. remove(...): Deletes the path located at 'path'. rename(...): Rename or move a file / directory. rmtree(...): Deletes everything under path recursively. stat(...): Returns file statistics for a given path. walk(...): Recursive directory tree generator for directories.
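A minimal sketch of a few of the gfile helpers listed above; the local path is illustrative, and any filesystem supported by gfile (local, GCS, HDFS) could be substituted:

import tensorflow.compat.v1 as tf

tf.io.gfile.makedirs('/tmp/gfile_demo')                   # creates intermediate directories as needed

with tf.io.gfile.GFile('/tmp/gfile_demo/hello.txt', 'w') as f:
    f.write('hello\n')

print(tf.io.gfile.exists('/tmp/gfile_demo/hello.txt'))    # True
print(tf.io.gfile.listdir('/tmp/gfile_demo'))             # ['hello.txt']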
tensorflow.compat.v1.io.gfile
tf.compat.v1.io.TFRecordCompressionType The type of compression for the record. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.python_io.TFRecordCompressionType Class Variables GZIP 2 NONE 0 ZLIB 1
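A minimal sketch showing how these compression constants are typically passed to TFRecordOptions when writing a TFRecord file; the output path and record contents are illustrative:

import tensorflow.compat.v1 as tf

options = tf.io.TFRecordOptions(
    compression_type=tf.io.TFRecordCompressionType.GZIP)

with tf.io.TFRecordWriter('/tmp/data.tfrecord.gz', options=options) as writer:
    writer.write(b'first record')    # records are arbitrary byte strings
    writer.write(b'second record')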
tensorflow.compat.v1.io.tfrecordcompressiontype
tf.compat.v1.io.tf_record_iterator An iterator that reads the records from a TFRecords file. (deprecated) View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.python_io.tf_record_iterator tf.compat.v1.io.tf_record_iterator( path, options=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use eager execution and: tf.data.TFRecordDataset(path) Args path The path to the TFRecords file. options (optional) A TFRecordOptions object. Returns An iterator of serialized TFRecords. Raises IOError If path cannot be opened for reading.
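A minimal sketch of the deprecated call next to the replacement suggested in the deprecation notice above; the path is illustrative and assumes a TFRecord file already exists:

import tensorflow as tf

# Deprecated: iterate serialized records directly.
for record in tf.compat.v1.io.tf_record_iterator('/tmp/data.tfrecord'):
    print(len(record))          # each record is a bytes object

# Recommended replacement (eager execution, TF 2.x):
for raw in tf.data.TFRecordDataset('/tmp/data.tfrecord'):
    print(raw.numpy()[:10])     # serialized bytes of each record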
tensorflow.compat.v1.io.tf_record_iterator
tf.compat.v1.is_variable_initialized Tests if a variable has been initialized. tf.compat.v1.is_variable_initialized( variable ) Args variable A Variable. Returns A scalar boolean Tensor: True if the variable has been initialized, False otherwise. Note: The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.
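A minimal sketch in TF 1.x graph mode; consuming the output through Session.run also satisfies the "should be used" note above. The variable's initial value is illustrative:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

v = tf.Variable(0.0)
is_init = tf.is_variable_initialized(v)

with tf.Session() as sess:
    print(sess.run(is_init))    # False: the variable has not been initialized yet
    sess.run(v.initializer)
    print(sess.run(is_init))    # True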
tensorflow.compat.v1.is_variable_initialized
Module: tf.compat.v1.keras Implementation of the Keras API meant to be a high-level API for TensorFlow. Detailed documentation and user guides are available at tensorflow.org. Modules activations module: Built-in activation functions. applications module: Keras Applications are canned architectures with pre-trained weights. backend module: Keras backend API. callbacks module: Callbacks: utilities called at certain points during model training. constraints module: Constraints: functions that impose constraints on weight values. datasets module: Public API for tf.keras.datasets namespace. estimator module: Keras estimator API. experimental module: Public API for tf.keras.experimental namespace. initializers module: Keras initializer serialization / deserialization. layers module: Keras layers API. losses module: Built-in loss functions. metrics module: Built-in metrics. mixed_precision module: Keras mixed precision API. models module: Code for model cloning, plus model-related API entries. optimizers module: Built-in optimizer classes. preprocessing module: Keras data preprocessing utils. regularizers module: Built-in regularizers. utils module: Public API for tf.keras.utils namespace. wrappers module: Public API for tf.keras.wrappers namespace. Classes class Model: Model groups layers into an object with training and inference features. class Sequential: Sequential groups a linear stack of layers into a tf.keras.Model. Functions Input(...): Input() is used to instantiate a Keras tensor.
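To show how a few of the pieces listed above fit together, a minimal Sequential model sketch; the layer sizes, optimizer, loss, and random training data are all illustrative:

import numpy as np
import tensorflow.compat.v1 as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
model.fit(np.random.rand(32, 4), np.random.rand(32, 1), epochs=1, verbose=0)
model.summary()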
tensorflow.compat.v1.keras
Module: tf.compat.v1.keras.activations Built-in activation functions. Functions deserialize(...): Returns activation function given a string identifier. elu(...): Exponential Linear Unit. exponential(...): Exponential activation function. get(...): Returns function. hard_sigmoid(...): Hard sigmoid activation function. linear(...): Linear activation function (pass-through). relu(...): Applies the rectified linear unit activation function. selu(...): Scaled Exponential Linear Unit (SELU). serialize(...): Returns the string identifier of an activation function. sigmoid(...): Sigmoid activation function, sigmoid(x) = 1 / (1 + exp(-x)). softmax(...): Softmax converts a real vector to a vector of categorical probabilities. softplus(...): Softplus activation function, softplus(x) = log(exp(x) + 1). softsign(...): Softsign activation function, softsign(x) = x / (abs(x) + 1). swish(...): Swish activation function, swish(x) = x * sigmoid(x). tanh(...): Hyperbolic tangent activation function.
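A minimal sketch of the string-based lookup round trip provided by get and serialize, plus the linear pass-through listed above:

import tensorflow.compat.v1 as tf

relu_fn = tf.keras.activations.get('relu')          # resolve an activation by its string identifier
print(tf.keras.activations.serialize(relu_fn))      # 'relu'
print(tf.keras.activations.linear(5.0))             # linear is a pass-through: prints 5.0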
tensorflow.compat.v1.keras.activations
Module: tf.compat.v1.keras.applications Keras Applications are canned architectures with pre-trained weights. Modules densenet module: DenseNet models for Keras. efficientnet module: EfficientNet models for Keras. imagenet_utils module: Utilities for ImageNet data preprocessing & prediction decoding. inception_resnet_v2 module: Inception-ResNet V2 model for Keras. inception_v3 module: Inception V3 model for Keras. mobilenet module: MobileNet v1 models for Keras. mobilenet_v2 module: MobileNet v2 models for Keras. mobilenet_v3 module: MobileNet v3 models for Keras. nasnet module: NASNet-A models for Keras. resnet module: ResNet models for Keras. resnet50 module: Public API for tf.keras.applications.resnet50 namespace. resnet_v2 module: ResNet v2 models for Keras. vgg16 module: VGG16 model for Keras. vgg19 module: VGG19 model for Keras. xception module: Xception V1 model for Keras. Functions DenseNet121(...): Instantiates the Densenet121 architecture. DenseNet169(...): Instantiates the Densenet169 architecture. DenseNet201(...): Instantiates the Densenet201 architecture. EfficientNetB0(...): Instantiates the EfficientNetB0 architecture. EfficientNetB1(...): Instantiates the EfficientNetB1 architecture. EfficientNetB2(...): Instantiates the EfficientNetB2 architecture. EfficientNetB3(...): Instantiates the EfficientNetB3 architecture. EfficientNetB4(...): Instantiates the EfficientNetB4 architecture. EfficientNetB5(...): Instantiates the EfficientNetB5 architecture. EfficientNetB6(...): Instantiates the EfficientNetB6 architecture. EfficientNetB7(...): Instantiates the EfficientNetB7 architecture. InceptionResNetV2(...): Instantiates the Inception-ResNet v2 architecture. InceptionV3(...): Instantiates the Inception v3 architecture. MobileNet(...): Instantiates the MobileNet architecture. MobileNetV2(...): Instantiates the MobileNetV2 architecture. MobileNetV3Large(...): Instantiates the MobileNetV3Large architecture. MobileNetV3Small(...): Instantiates the MobileNetV3Small architecture. NASNetLarge(...): Instantiates a NASNet model in ImageNet mode. NASNetMobile(...): Instantiates a Mobile NASNet model in ImageNet mode. ResNet101(...): Instantiates the ResNet101 architecture. ResNet101V2(...): Instantiates the ResNet101V2 architecture. ResNet152(...): Instantiates the ResNet152 architecture. ResNet152V2(...): Instantiates the ResNet152V2 architecture. ResNet50(...): Instantiates the ResNet50 architecture. ResNet50V2(...): Instantiates the ResNet50V2 architecture. VGG16(...): Instantiates the VGG16 model. VGG19(...): Instantiates the VGG19 architecture. Xception(...): Instantiates the Xception architecture.
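A minimal sketch of instantiating one of these canned architectures; weights='imagenet' downloads pretrained weights on first use, and the random batch is only a stand-in for a real, properly preprocessed image batch:

import numpy as np
import tensorflow.compat.v1 as tf

model = tf.keras.applications.MobileNetV2(weights='imagenet')   # default 224x224 RGB input
batch = np.random.rand(1, 224, 224, 3).astype('float32')        # stand-in for a real image batch
preds = model.predict(batch)
print(preds.shape)                                               # (1, 1000) ImageNet class scores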
tensorflow.compat.v1.keras.applications
Module: tf.compat.v1.keras.applications.densenet DenseNet models for Keras. Reference: Densely Connected Convolutional Networks (CVPR 2017) Functions DenseNet121(...): Instantiates the Densenet121 architecture. DenseNet169(...): Instantiates the Densenet169 architecture. DenseNet201(...): Instantiates the Densenet201 architecture. decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images.
tensorflow.compat.v1.keras.applications.densenet
Module: tf.compat.v1.keras.applications.efficientnet EfficientNet models for Keras. Reference: EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (ICML 2019) Functions EfficientNetB0(...): Instantiates the EfficientNetB0 architecture. EfficientNetB1(...): Instantiates the EfficientNetB1 architecture. EfficientNetB2(...): Instantiates the EfficientNetB2 architecture. EfficientNetB3(...): Instantiates the EfficientNetB3 architecture. EfficientNetB4(...): Instantiates the EfficientNetB4 architecture. EfficientNetB5(...): Instantiates the EfficientNetB5 architecture. EfficientNetB6(...): Instantiates the EfficientNetB6 architecture. EfficientNetB7(...): Instantiates the EfficientNetB7 architecture. decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...)
tensorflow.compat.v1.keras.applications.efficientnet
Module: tf.compat.v1.keras.applications.imagenet_utils Utilities for ImageNet data preprocessing & prediction decoding. Functions decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images.
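A minimal sketch of the two helpers above, reusing the MobileNetV2 model from the applications module page; the random input is a stand-in for a real image, and a model's own preprocess_input (e.g. mobilenet_v2.preprocess_input) would normally be preferred for that architecture:

import numpy as np
import tensorflow.compat.v1 as tf

utils = tf.keras.applications.imagenet_utils
model = tf.keras.applications.MobileNetV2(weights='imagenet')
batch = utils.preprocess_input(np.random.rand(1, 224, 224, 3) * 255.0)
preds = model.predict(batch)
print(utils.decode_predictions(preds, top=3))   # [(class_id, class_name, score), ...] per image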
tensorflow.compat.v1.keras.applications.imagenet_utils