class numpy.longlong[source]
Signed integer type, compatible with C long long. Character code
'q' | numpy.reference.arrays.scalars#numpy.longlong |
numpy.lookfor numpy.lookfor(what, module=None, import_modules=True, regenerate=False, output=None)[source]
Do a keyword search on docstrings. A list of objects that matched the search is displayed, sorted by relevance. All given keywords need to be found in the docstring for it to be returned as a result, but the order does not matter. Parameters
whatstr
String containing words to look for.
modulestr or list, optional
Name of module(s) whose docstrings to go through.
import_modulesbool, optional
Whether to import sub-modules in packages. Default is True.
regeneratebool, optional
Whether to re-generate the docstring cache. Default is False.
outputfile-like, optional
File-like object to write the output to. If omitted, use a pager. See also
source, info
Notes Relevance is determined only roughly, by checking if the keywords occur in the function name, at the start of a docstring, etc. Examples >>> np.lookfor('binary representation')
Search results for 'binary representation'
------------------------------------------
numpy.binary_repr
Return the binary representation of the input number as a string.
numpy.core.setup_common.long_double_representation
Given a binary dump as given by GNU od -b, look for long double
numpy.base_repr
Return a string representation of a number in the given base system.
... | numpy.reference.generated.numpy.lookfor |
Constants of the numpy.ma module In addition to the MaskedArray class, the numpy.ma module defines several constants. numpy.ma.masked
The masked constant is a special case of MaskedArray, with a float datatype and a null shape. It is used to test whether a specific entry of a masked array is masked, or to mask one or several entries of a masked array: >>> x = ma.array([1, 2, 3], mask=[0, 1, 0])
>>> x[1] is ma.masked
True
>>> x[-1] = ma.masked
>>> x
masked_array(data=[1, --, --],
mask=[False, True, True],
fill_value=999999)
numpy.ma.nomask
Value indicating that a masked array has no invalid entry. nomask is used internally to speed up computations when the mask is not needed. It is represented internally as np.False_.
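A small added illustration (assuming numpy.ma is imported as ma, as in the example above): an array created without an explicit mask has nomask as its mask attribute. >>> ma.array([1, 2, 3]).mask is ma.nomask
True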
numpy.ma.masked_print_options
String used in lieu of missing data when a masked array is printed. By default, this string is '--'. | numpy.reference.maskedarray.baseclass |
numpy.ma.masked_array numpy.ma.masked_array[source]
alias of numpy.ma.core.MaskedArray | numpy.reference.generated.numpy.ma.masked_array |
numpy.ma.masked_print_options
String used in lieu of missing data when a masked array is printed. By default, this string is '--'. | numpy.reference.maskedarray.baseclass#numpy.ma.masked_print_options |
numpy.ma.MaskType numpy.ma.MaskType[source]
alias of numpy.bool_ | numpy.reference.generated.numpy.ma.masktype |
numpy.ma.nomask
Value indicating that a masked array has no invalid entry. nomask is used internally to speed up computations when the mask is not needed. It is represented internally as np.False_. | numpy.reference.maskedarray.baseclass#numpy.ma.nomask |
numpy.MachAr class numpy.MachAr(float_conv=<class 'float'>, int_conv=<class 'int'>, float_to_float=<class 'float'>, float_to_str=<function MachAr.<lambda>>, title='Python floating point number')[source]
Diagnosing machine parameters. Parameters
float_convfunction, optional
Function that converts an integer or integer array to a float or float array. Default is float.
int_convfunction, optional
Function that converts a float or float array to an integer or integer array. Default is int.
float_to_floatfunction, optional
Function that converts a float array to float. Default is float. Note that this does not seem to do anything useful in the current implementation.
float_to_strfunction, optional
Function that converts a single float to a string. Default is lambda v:'%24.16e' %v.
titlestr, optional
Title that is printed in the string representation of MachAr. See also finfo
Machine limits for floating point types. iinfo
Machine limits for integer types. References 1
Press, Teukolsky, Vetterling and Flannery, “Numerical Recipes in C++,” 2nd ed, Cambridge University Press, 2002, p. 31. Attributes
ibetaint
Radix in which numbers are represented.
itint
Number of base-ibeta digits in the floating point mantissa M.
machepint
Exponent of the smallest (most negative) power of ibeta that, added to 1.0, gives something different from 1.0
epsfloat
Floating-point number beta**machep (floating point precision)
negepint
Exponent of the smallest power of ibeta that, subtracted from 1.0, gives something different from 1.0.
epsnegfloat
Floating-point number beta**negep.
iexpint
Number of bits in the exponent (including its sign and bias).
minexpint
Smallest (most negative) power of ibeta consistent with there being no leading zeros in the mantissa.
xminfloat
Floating-point number beta**minexp (the smallest [in magnitude] positive floating point number with full precision).
maxexpint
Smallest (positive) power of ibeta that causes overflow.
xmaxfloat
(1-epsneg) * beta**maxexp (the largest [in magnitude] usable floating value).
irndint
In range(6), information on what kind of rounding is done in addition, and on how underflow is handled.
ngrdint
Number of ‘guard digits’ used when truncating the product of two mantissas to fit the representation.
epsilonfloat
Same as eps.
tinyfloat
An alias for smallest_normal, kept for backwards compatibility.
hugefloat
Same as xmax.
precisionfloat
- int(-log10(eps))
resolutionfloat
- 10**(-precision)
smallest_normalfloat
The smallest positive floating point number with 1 as leading bit in the mantissa following IEEE-754. Same as xmin.
smallest_subnormalfloat
The smallest positive floating point number with 0 as leading bit in the mantissa following IEEE-754. | numpy.reference.generated.numpy.machar |
numpy.mask_indices numpy.mask_indices(n, mask_func, k=0)[source]
Return the indices to access (n, n) arrays, given a masking function. Assume mask_func is a function that, for a square array a of size (n, n) with a possible offset argument k, when called as mask_func(a, k) returns a new array with zeros in certain locations (functions like triu or tril do precisely this). Then this function returns the indices where the non-zero values would be located. Parameters
nint
The returned indices will be valid to access arrays of shape (n, n).
mask_funccallable
A function whose call signature is similar to that of triu, tril. That is, mask_func(x, k) returns a boolean array, shaped like x. k is an optional argument to the function.
kscalar
An optional argument which is passed through to mask_func. Functions like triu, tril take a second argument that is interpreted as an offset. Returns
indicestuple of arrays.
The n arrays of indices corresponding to the locations where mask_func(np.ones((n, n)), k) is True. See also
triu, tril, triu_indices, tril_indices
Notes New in version 1.4.0. Examples These are the indices that would allow you to access the upper triangular part of any 3x3 array: >>> iu = np.mask_indices(3, np.triu)
For example, if a is a 3x3 array: >>> a = np.arange(9).reshape(3, 3)
>>> a
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
>>> a[iu]
array([0, 1, 2, 4, 5, 8])
An offset can be passed also to the masking function. This gets us the indices starting on the first diagonal right of the main one: >>> iu1 = np.mask_indices(3, np.triu, 1)
with which we now extract only three elements: >>> a[iu1]
array([1, 2, 5]) | numpy.reference.generated.numpy.mask_indices |
numpy.mat numpy.mat(data, dtype=None)[source]
Interpret the input as a matrix. Unlike matrix, asmatrix does not make a copy if the input is already a matrix or an ndarray. Equivalent to matrix(data, copy=False). Parameters
dataarray_like
Input data.
dtypedata-type
Data-type of the output matrix. Returns
matmatrix
data interpreted as a matrix. Examples >>> x = np.array([[1, 2], [3, 4]])
>>> m = np.asmatrix(x)
>>> x[0,0] = 5
>>> m
matrix([[5, 2],
[3, 4]]) | numpy.reference.generated.numpy.mat |
numpy.matmul numpy.matmul(x1, x2, /, out=None, *, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj, axes, axis]) = <ufunc 'matmul'>
Matrix product of two arrays. Parameters
x1, x2array_like
Input arrays, scalars not allowed.
outndarray, optional
A location into which the result is stored. If provided, it must have a shape that matches the signature (n,k),(k,m)->(n,m). If not provided or None, a freshly-allocated array is returned. **kwargs
For other keyword-only arguments, see the ufunc docs. New in version 1.16: Now handles ufunc kwargs Returns
yndarray
The matrix product of the inputs. This is a scalar only when both x1, x2 are 1-d vectors. Raises
ValueError
If the last dimension of x1 is not the same size as the second-to-last dimension of x2. If a scalar value is passed in. See also vdot
Complex-conjugating dot product. tensordot
Sum products over arbitrary axes. einsum
Einstein summation convention. dot
alternative matrix product with different broadcasting rules. Notes The behavior depends on the arguments in the following way. If both arguments are 2-D they are multiplied like conventional matrices. If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly. If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed. If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed. matmul differs from dot in two important ways: Multiplication by scalars is not allowed, use * instead.
Stacks of matrices are broadcast together as if the matrices were elements, respecting the signature (n,k),(k,m)->(n,m): >>> a = np.ones([9, 5, 7, 4])
>>> c = np.ones([9, 5, 4, 3])
>>> np.dot(a, c).shape
(9, 5, 7, 9, 5, 3)
>>> np.matmul(a, c).shape
(9, 5, 7, 3)
>>> # n is 7, k is 4, m is 3
The matmul function implements the semantics of the @ operator introduced in Python 3.5 following PEP 465. Examples For 2-D arrays it is the matrix product: >>> a = np.array([[1, 0],
... [0, 1]])
>>> b = np.array([[4, 1],
... [2, 2]])
>>> np.matmul(a, b)
array([[4, 1],
[2, 2]])
For 2-D mixed with 1-D, the result is the usual. >>> a = np.array([[1, 0],
... [0, 1]])
>>> b = np.array([1, 2])
>>> np.matmul(a, b)
array([1, 2])
>>> np.matmul(b, a)
array([1, 2])
Broadcasting is conventional for stacks of arrays >>> a = np.arange(2 * 2 * 4).reshape((2, 2, 4))
>>> b = np.arange(2 * 2 * 4).reshape((2, 4, 2))
>>> np.matmul(a,b).shape
(2, 2, 2)
>>> np.matmul(a, b)[0, 1, 1]
98
>>> sum(a[0, 1, :] * b[0 , :, 1])
98
Vector, vector returns the scalar inner product, but neither argument is complex-conjugated: >>> np.matmul([2j, 3j], [2j, 3j])
(-13+0j)
Scalar multiplication raises an error. >>> np.matmul([1,2], 3)
Traceback (most recent call last):
...
ValueError: matmul: Input operand 1 does not have enough dimensions ...
The @ operator can be used as a shorthand for np.matmul on ndarrays. >>> x1 = np.array([2j, 3j])
>>> x2 = np.array([2j, 3j])
>>> x1 @ x2
(-13+0j)
New in version 1.10.0. | numpy.reference.generated.numpy.matmul |
numpy.matrix class numpy.matrix(data, dtype=None, copy=True)[source]
Note It is no longer recommended to use this class, even for linear algebra. Instead use regular arrays. The class may be removed in the future. Returns a matrix from an array-like object, or from a string of data. A matrix is a specialized 2-D array that retains its 2-D nature through operations. It has certain special operators, such as * (matrix multiplication) and ** (matrix power). Parameters
dataarray_like or string
If data is a string, it is interpreted as a matrix with commas or spaces separating columns, and semicolons separating rows.
dtypedata-type
Data-type of the output matrix.
copybool
If data is already an ndarray, then this flag determines whether the data is copied (the default), or whether a view is constructed. See also array
Examples >>> a = np.matrix('1 2; 3 4')
>>> a
matrix([[1, 2],
[3, 4]])
>>> np.matrix([[1, 2], [3, 4]])
matrix([[1, 2],
[3, 4]])
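The special * (matrix multiplication) and ** (matrix power) operators mentioned above can be seen with the same matrix; this is a small added illustration, not part of the original examples. >>> a * a
matrix([[ 7, 10],
        [15, 22]])
>>> a ** 2
matrix([[ 7, 10],
        [15, 22]])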
Attributes
A
Return self as an ndarray object. A1
Return self as a flattened ndarray. H
Returns the (complex) conjugate transpose of self. I
Returns the (multiplicative) inverse of invertible self. T
Returns the transpose of the matrix. base
Base object if memory is from some other object. ctypes
An object to simplify the interaction of the array with the ctypes module. data
Python buffer object pointing to the start of the array’s data. dtype
Data-type of the array’s elements. flags
Information about the memory layout of the array. flat
A 1-D iterator over the array. imag
The imaginary part of the array. itemsize
Length of one array element in bytes. nbytes
Total bytes consumed by the elements of the array. ndim
Number of array dimensions. real
The real part of the array. shape
Tuple of array dimensions. size
Number of elements in the array. strides
Tuple of bytes to step in each dimension when traversing an array. Methods
all([axis, out]) Test whether all matrix elements along a given axis evaluate to True.
any([axis, out]) Test whether any array element along a given axis evaluates to True.
argmax([axis, out]) Indexes of the maximum values along an axis.
argmin([axis, out]) Indexes of the minimum values along an axis.
argpartition(kth[, axis, kind, order]) Returns the indices that would partition this array.
argsort([axis, kind, order]) Returns the indices that would sort this array.
astype(dtype[, order, casting, subok, copy]) Copy of the array, cast to a specified type.
byteswap([inplace]) Swap the bytes of the array elements
choose(choices[, out, mode]) Use an index array to construct a new array from a set of choices.
clip([min, max, out]) Return an array whose values are limited to [min, max].
compress(condition[, axis, out]) Return selected slices of this array along given axis.
conj() Complex-conjugate all elements.
conjugate() Return the complex conjugate, element-wise.
copy([order]) Return a copy of the array.
cumprod([axis, dtype, out]) Return the cumulative product of the elements along the given axis.
cumsum([axis, dtype, out]) Return the cumulative sum of the elements along the given axis.
diagonal([offset, axis1, axis2]) Return specified diagonals.
dump(file) Dump a pickle of the array to the specified file.
dumps() Returns the pickle of the array as a string.
fill(value) Fill the array with a scalar value.
flatten([order]) Return a flattened copy of the matrix.
getA() Return self as an ndarray object.
getA1() Return self as a flattened ndarray.
getH() Returns the (complex) conjugate transpose of self.
getI() Returns the (multiplicative) inverse of invertible self.
getT() Returns the transpose of the matrix.
getfield(dtype[, offset]) Returns a field of the given array as a certain type.
item(*args) Copy an element of an array to a standard Python scalar and return it.
itemset(*args) Insert scalar into an array (scalar is cast to array's dtype, if possible)
max([axis, out]) Return the maximum value along an axis.
mean([axis, dtype, out]) Returns the average of the matrix elements along the given axis.
min([axis, out]) Return the minimum value along an axis.
newbyteorder([new_order]) Return the array with the same data viewed with a different byte order.
nonzero() Return the indices of the elements that are non-zero.
partition(kth[, axis, kind, order]) Rearranges the elements in the array in such a way that the value of the element in kth position is in the position it would be in a sorted array.
prod([axis, dtype, out]) Return the product of the array elements over the given axis.
ptp([axis, out]) Peak-to-peak (maximum - minimum) value along the given axis.
put(indices, values[, mode]) Set a.flat[n] = values[n] for all n in indices.
ravel([order]) Return a flattened matrix.
repeat(repeats[, axis]) Repeat elements of an array.
reshape(shape[, order]) Returns an array containing the same data with a new shape.
resize(new_shape[, refcheck]) Change shape and size of array in-place.
round([decimals, out]) Return a with each element rounded to the given number of decimals.
searchsorted(v[, side, sorter]) Find indices where elements of v should be inserted in a to maintain order.
setfield(val, dtype[, offset]) Put a value into a specified place in a field defined by a data-type.
setflags([write, align, uic]) Set array flags WRITEABLE, ALIGNED, (WRITEBACKIFCOPY and UPDATEIFCOPY), respectively.
sort([axis, kind, order]) Sort an array in-place.
squeeze([axis]) Return a possibly reshaped matrix.
std([axis, dtype, out, ddof]) Return the standard deviation of the array elements along the given axis.
sum([axis, dtype, out]) Returns the sum of the matrix elements, along the given axis.
swapaxes(axis1, axis2) Return a view of the array with axis1 and axis2 interchanged.
take(indices[, axis, out, mode]) Return an array formed from the elements of a at the given indices.
tobytes([order]) Construct Python bytes containing the raw data bytes in the array.
tofile(fid[, sep, format]) Write array to a file as text or binary (default).
tolist() Return the matrix as a (possibly nested) list.
tostring([order]) A compatibility alias for tobytes, with exactly the same behavior.
trace([offset, axis1, axis2, dtype, out]) Return the sum along diagonals of the array.
transpose(*axes) Returns a view of the array with axes transposed.
var([axis, dtype, out, ddof]) Returns the variance of the matrix elements, along the given axis.
view([dtype][, type]) New view of array with the same data.
dot | numpy.reference.generated.numpy.matrix |
numpy.maximum numpy.maximum(x1, x2, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj]) = <ufunc 'maximum'>
Element-wise maximum of array elements. Compare two arrays and return a new array containing the element-wise maxima. If one of the elements being compared is a NaN, then that element is returned. If both elements are NaNs then the first is returned. The latter distinction is important for complex NaNs, which are defined as at least one of the real or imaginary parts being a NaN. The net effect is that NaNs are propagated. Parameters
x1, x2array_like
The arrays holding the elements to be compared. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
wherearray_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
yndarray or scalar
The maximum of x1 and x2, element-wise. This is a scalar if both x1 and x2 are scalars. See also minimum
Element-wise minimum of two arrays, propagates NaNs. fmax
Element-wise maximum of two arrays, ignores NaNs. amax
The maximum value of an array along a given axis, propagates NaNs. nanmax
The maximum value of an array along a given axis, ignores NaNs.
fmin, amin, nanmin
Notes The maximum is equivalent to np.where(x1 >= x2, x1, x2) when neither x1 nor x2 are nans, but it is faster and does proper broadcasting. Examples >>> np.maximum([2, 3, 4], [1, 5, 2])
array([2, 5, 4])
>>> np.maximum(np.eye(2), [0.5, 2]) # broadcasting
array([[ 1. , 2. ],
[ 0.5, 2. ]])
>>> np.maximum([np.nan, 0, np.nan], [0, np.nan, np.nan])
array([nan, nan, nan])
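For contrast with fmax, listed under See also as the NaN-ignoring variant (a small added illustration, not part of the original examples): >>> np.fmax([np.nan, 0, np.nan], [0, np.nan, np.nan])
array([ 0.,  0., nan])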
>>> np.maximum(np.Inf, 1)
inf | numpy.reference.generated.numpy.maximum |
numpy.maximum_sctype numpy.maximum_sctype(t)[source]
Return the scalar type of highest precision of the same kind as the input. Parameters
tdtype or dtype specifier
The input data type. This can be a dtype object or an object that is convertible to a dtype. Returns
outdtype
The highest precision data type of the same kind (dtype.kind) as t. See also
obj2sctype, mintypecode, sctype2char
dtype
Examples >>> np.maximum_sctype(int)
<class 'numpy.int64'>
>>> np.maximum_sctype(np.uint8)
<class 'numpy.uint64'>
>>> np.maximum_sctype(complex)
<class 'numpy.complex256'> # may vary
>>> np.maximum_sctype(str)
<class 'numpy.str_'>
>>> np.maximum_sctype('i2')
<class 'numpy.int64'>
>>> np.maximum_sctype('f4')
<class 'numpy.float128'> # may vary | numpy.reference.generated.numpy.maximum_sctype |
numpy.may_share_memory numpy.may_share_memory(a, b, /, max_work=None)
Determine if two arrays might share memory. A return of True does not necessarily mean that the two arrays share any element. It just means that they might. Only the memory bounds of a and b are checked by default. Parameters
a, bndarray
Input arrays
max_workint, optional
Effort to spend on solving the overlap problem. See shares_memory for details. Default for may_share_memory is to do a bounds check. Returns
outbool
See also shares_memory
Examples >>> np.may_share_memory(np.array([1,2]), np.array([5,8,9]))
False
>>> x = np.zeros([3, 4])
>>> np.may_share_memory(x[:,0], x[:,1])
True | numpy.reference.generated.numpy.may_share_memory |
numpy.mean numpy.mean(a, axis=None, dtype=None, out=None, keepdims=<no value>, *, where=<no value>)[source]
Compute the arithmetic mean along the specified axis. Returns the average of the array elements. The average is taken over the flattened array by default, otherwise over the specified axis. float64 intermediate and return values are used for integer inputs. Parameters
aarray_like
Array containing numbers whose mean is desired. If a is not an array, a conversion is attempted.
axisNone or int or tuple of ints, optional
Axis or axes along which the means are computed. The default is to compute the mean of the flattened array. New in version 1.7.0. If this is a tuple of ints, a mean is performed over multiple axes, instead of a single axis or all the axes as before.
dtypedata-type, optional
Type to use in computing the mean. For integer inputs, the default is float64; for floating point inputs, it is the same as the input dtype.
outndarray, optional
Alternate output array in which to place the result. The default is None; if provided, it must have the same shape as the expected output, but the type will be cast if necessary. See Output type determination for more details.
keepdimsbool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then keepdims will not be passed through to the mean method of sub-classes of ndarray; however, any non-default value will be. If the sub-class’ method does not implement keepdims, any exceptions will be raised.
wherearray_like of bool, optional
Elements to include in the mean. See reduce for details. New in version 1.20.0. Returns
mndarray, see dtype parameter above
If out=None, returns a new array containing the mean values, otherwise a reference to the output array is returned. See also average
Weighted average
std, var, nanmean, nanstd, nanvar
Notes The arithmetic mean is the sum of the elements along the axis divided by the number of elements. Note that for floating-point input, the mean is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for float32 (see example below). Specifying a higher-precision accumulator using the dtype keyword can alleviate this issue. By default, float16 results are computed using float32 intermediates for extra precision. Examples >>> a = np.array([[1, 2], [3, 4]])
>>> np.mean(a)
2.5
>>> np.mean(a, axis=0)
array([2., 3.])
>>> np.mean(a, axis=1)
array([1.5, 3.5])
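The keepdims option described above keeps the reduced axis as a dimension of size one (a small added illustration): >>> np.mean(a, axis=1, keepdims=True)
array([[1.5],
       [3.5]])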
In single precision, mean can be inaccurate: >>> a = np.zeros((2, 512*512), dtype=np.float32)
>>> a[0, :] = 1.0
>>> a[1, :] = 0.1
>>> np.mean(a)
0.54999924
Computing the mean in float64 is more accurate: >>> np.mean(a, dtype=np.float64)
0.55000000074505806 # may vary
Specifying a where argument: >>> a = np.array([[5, 9, 13], [14, 10, 12], [11, 15, 19]])
>>> np.mean(a)
12.0
>>> np.mean(a, where=[[True], [False], [False]])
9.0 | numpy.reference.generated.numpy.mean |
numpy.median numpy.median(a, axis=None, out=None, overwrite_input=False, keepdims=False)[source]
Compute the median along the specified axis. Returns the median of the array elements. Parameters
aarray_like
Input array or object that can be converted to an array.
axis{int, sequence of int, None}, optional
Axis or axes along which the medians are computed. The default is to compute the median along a flattened version of the array. A sequence of axes is supported since version 1.9.0.
outndarray, optional
Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary.
overwrite_inputbool, optional
If True, then allow use of memory of input array a for calculations. The input array will be modified by the call to median. This will save memory when you do not need to preserve the contents of the input array. Treat the input as undefined, but it will probably be fully or partially sorted. Default is False. If overwrite_input is True and a is not already an ndarray, an error will be raised.
keepdimsbool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original arr. New in version 1.9.0. Returns
medianndarray
A new array holding the result. If the input contains integers or floats smaller than float64, then the output data-type is np.float64. Otherwise, the data-type of the output is the same as that of the input. If out is specified, that array is returned instead. See also
mean, percentile
Notes Given a vector V of length N, the median of V is the middle value of a sorted copy of V, V_sorted - i.e., V_sorted[(N-1)/2], when N is odd, and the average of the two middle values of V_sorted when N is even. Examples >>> a = np.array([[10, 7, 4], [3, 2, 1]])
>>> a
array([[10, 7, 4],
[ 3, 2, 1]])
>>> np.median(a)
3.5
>>> np.median(a, axis=0)
array([6.5, 4.5, 2.5])
>>> np.median(a, axis=1)
array([7., 2.])
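The keepdims option described above keeps the reduced axis as a dimension of size one (a small added illustration): >>> np.median(a, axis=1, keepdims=True)
array([[7.],
       [2.]])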
>>> m = np.median(a, axis=0)
>>> out = np.zeros_like(m)
>>> np.median(a, axis=0, out=m)
array([6.5, 4.5, 2.5])
>>> m
array([6.5, 4.5, 2.5])
>>> b = a.copy()
>>> np.median(b, axis=1, overwrite_input=True)
array([7., 2.])
>>> assert not np.all(a==b)
>>> b = a.copy()
>>> np.median(b, axis=None, overwrite_input=True)
3.5
>>> assert not np.all(a==b) | numpy.reference.generated.numpy.median |
numpy.memmap class numpy.memmap(filename, dtype=<class 'numpy.ubyte'>, mode='r+', offset=0, shape=None, order='C')[source]
Create a memory-map to an array stored in a binary file on disk. Memory-mapped files are used for accessing small segments of large files on disk, without reading the entire file into memory. NumPy’s memmap’s are array-like objects. This differs from Python’s mmap module, which uses file-like objects. This subclass of ndarray has some unpleasant interactions with some operations, because it doesn’t quite fit properly as a subclass. An alternative to using this subclass is to create the mmap object yourself, then create an ndarray with ndarray.__new__ directly, passing the object created in its ‘buffer=’ parameter. This class may at some point be turned into a factory function which returns a view into an mmap buffer. Flush the memmap instance to write the changes to the file. Currently there is no API to close the underlying mmap. It is tricky to ensure the resource is actually closed, since it may be shared between different memmap instances. Parameters
filenamestr, file-like object, or pathlib.Path instance
The file name or file object to be used as the array data buffer.
dtypedata-type, optional
The data-type used to interpret the file contents. Default is uint8.
mode{‘r+’, ‘r’, ‘w+’, ‘c’}, optional
The file is opened in this mode:
‘r’ Open existing file for reading only.
‘r+’ Open existing file for reading and writing.
‘w+’ Create or overwrite existing file for reading and writing.
‘c’ Copy-on-write: assignments affect data in memory, but changes are not saved to disk. The file on disk is read-only. Default is ‘r+’.
offsetint, optional
In the file, array data starts at this offset. Since offset is measured in bytes, it should normally be a multiple of the byte-size of dtype. When mode != 'r', even positive offsets beyond end of file are valid; The file will be extended to accommodate the additional data. By default, memmap will start at the beginning of the file, even if filename is a file pointer fp and fp.tell() != 0.
shapetuple, optional
The desired shape of the array. If mode == 'r' and the number of remaining bytes after offset is not a multiple of the byte-size of dtype, you must specify shape. By default, the returned array will be 1-D with the number of elements determined by file size and data-type.
order{‘C’, ‘F’}, optional
Specify the order of the ndarray memory layout: row-major, C-style or column-major, Fortran-style. This only has an effect if the shape is greater than 1-D. The default order is ‘C’. See also lib.format.open_memmap
Create or load a memory-mapped .npy file. Notes The memmap object can be used anywhere an ndarray is accepted. Given a memmap fp, isinstance(fp, numpy.ndarray) returns True. Memory-mapped files cannot be larger than 2GB on 32-bit systems. When a memmap causes a file to be created or extended beyond its current size in the filesystem, the contents of the new part are unspecified. On systems with POSIX filesystem semantics, the extended part will be filled with zero bytes. Examples >>> data = np.arange(12, dtype='float32')
>>> data.resize((3,4))
This example uses a temporary file so that doctest doesn’t write files to your directory. You would use a ‘normal’ filename. >>> from tempfile import mkdtemp
>>> import os.path as path
>>> filename = path.join(mkdtemp(), 'newfile.dat')
Create a memmap with dtype and shape that matches our data: >>> fp = np.memmap(filename, dtype='float32', mode='w+', shape=(3,4))
>>> fp
memmap([[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]], dtype=float32)
Write data to memmap array: >>> fp[:] = data[:]
>>> fp
memmap([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]], dtype=float32)
>>> fp.filename == path.abspath(filename)
True
Flushes memory changes to disk in order to read them back >>> fp.flush()
Load the memmap and verify data was stored: >>> newfp = np.memmap(filename, dtype='float32', mode='r', shape=(3,4))
>>> newfp
memmap([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]], dtype=float32)
Read-only memmap: >>> fpr = np.memmap(filename, dtype='float32', mode='r', shape=(3,4))
>>> fpr.flags.writeable
False
Copy-on-write memmap: >>> fpc = np.memmap(filename, dtype='float32', mode='c', shape=(3,4))
>>> fpc.flags.writeable
True
It’s possible to assign to copy-on-write array, but values are only written into the memory copy of the array, and not written to disk: >>> fpc
memmap([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]], dtype=float32)
>>> fpc[0,:] = 0
>>> fpc
memmap([[ 0., 0., 0., 0.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]], dtype=float32)
File on disk is unchanged: >>> fpr
memmap([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]], dtype=float32)
Offset into a memmap: >>> fpo = np.memmap(filename, dtype='float32', mode='r', offset=16)
>>> fpo
memmap([ 4., 5., 6., 7., 8., 9., 10., 11.], dtype=float32)
Attributes
filenamestr or pathlib.Path instance
Path to the mapped file.
offsetint
Offset position in the file.
modestr
File mode. Methods
flush() Write any changes in the array to the file on disk. | numpy.reference.generated.numpy.memmap |
numpy.meshgrid numpy.meshgrid(*xi, copy=True, sparse=False, indexing='xy')[source]
Return coordinate matrices from coordinate vectors. Make N-D coordinate arrays for vectorized evaluations of N-D scalar/vector fields over N-D grids, given one-dimensional coordinate arrays x1, x2,…, xn. Changed in version 1.9: 1-D and 0-D cases are allowed. Parameters
x1, x2,…, xnarray_like
1-D arrays representing the coordinates of a grid.
indexing{‘xy’, ‘ij’}, optional
Cartesian (‘xy’, default) or matrix (‘ij’) indexing of output. See Notes for more details. New in version 1.7.0.
sparsebool, optional
If True the shape of the returned coordinate array for dimension i is reduced from (N1, ..., Ni, ... Nn) to (1, ..., 1, Ni, 1, ..., 1). These sparse coordinate grids are intended to be used with broadcasting. When all coordinates are used in an expression, broadcasting still leads to a fully-dimensional result array. Default is False. New in version 1.7.0.
copybool, optional
If False, a view into the original arrays are returned in order to conserve memory. Default is True. Please note that sparse=False, copy=False will likely return non-contiguous arrays. Furthermore, more than one element of a broadcast array may refer to a single memory location. If you need to write to the arrays, make copies first. New in version 1.7.0. Returns
X1, X2,…, XNndarray
For vectors x1, x2,…, ‘xn’ with lengths Ni=len(xi) , return (N1, N2, N3,...Nn) shaped arrays if indexing=’ij’ or (N2, N1, N3,...Nn) shaped arrays if indexing=’xy’ with the elements of xi repeated to fill the matrix along the first dimension for x1, the second for x2 and so on. See also mgrid
Construct a multi-dimensional “meshgrid” using indexing notation. ogrid
Construct an open multi-dimensional “meshgrid” using indexing notation. Notes This function supports both indexing conventions through the indexing keyword argument. Giving the string ‘ij’ returns a meshgrid with matrix indexing, while ‘xy’ returns a meshgrid with Cartesian indexing. In the 2-D case with inputs of length M and N, the outputs are of shape (N, M) for ‘xy’ indexing and (M, N) for ‘ij’ indexing. In the 3-D case with inputs of length M, N and P, outputs are of shape (N, M, P) for ‘xy’ indexing and (M, N, P) for ‘ij’ indexing. The difference is illustrated by the following code snippet: xv, yv = np.meshgrid(x, y, indexing='ij')
for i in range(nx):
for j in range(ny):
# treat xv[i,j], yv[i,j]
xv, yv = np.meshgrid(x, y, indexing='xy')
for i in range(nx):
for j in range(ny):
# treat xv[j,i], yv[j,i]
In the 1-D and 0-D case, the indexing and sparse keywords have no effect. Examples >>> nx, ny = (3, 2)
>>> x = np.linspace(0, 1, nx)
>>> y = np.linspace(0, 1, ny)
>>> xv, yv = np.meshgrid(x, y)
>>> xv
array([[0. , 0.5, 1. ],
[0. , 0.5, 1. ]])
>>> yv
array([[0., 0., 0.],
[1., 1., 1.]])
>>> xv, yv = np.meshgrid(x, y, sparse=True) # make sparse output arrays
>>> xv
array([[0. , 0.5, 1. ]])
>>> yv
array([[0.],
[1.]])
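The matrix ('ij') indexing convention described in the Notes transposes the first two output dimensions relative to the default 'xy' convention; a small added illustration using the same x and y: >>> xi, yi = np.meshgrid(x, y, indexing='ij')
>>> xi.shape, yi.shape
((3, 2), (3, 2))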
meshgrid is very useful to evaluate functions on a grid. If the function depends on all coordinates, you can use the parameter sparse=True to save memory and computation time. >>> x = np.linspace(-5, 5, 101)
>>> y = np.linspace(-5, 5, 101)
>>> # full coordinate arrays
>>> xx, yy = np.meshgrid(x, y)
>>> zz = np.sqrt(xx**2 + yy**2)
>>> xx.shape, yy.shape, zz.shape
((101, 101), (101, 101), (101, 101))
>>> # sparse coordinate arrays
>>> xs, ys = np.meshgrid(x, y, sparse=True)
>>> zs = np.sqrt(xs**2 + ys**2)
>>> xs.shape, ys.shape, zs.shape
((1, 101), (101, 1), (101, 101))
>>> np.array_equal(zz, zs)
True
>>> import matplotlib.pyplot as plt
>>> h = plt.contourf(x, y, zs)
>>> plt.axis('scaled')
>>> plt.colorbar()
>>> plt.show() | numpy.reference.generated.numpy.meshgrid |
numpy.mgrid numpy.mgrid = <numpy.lib.index_tricks.MGridClass object>
nd_grid instance which returns a dense multi-dimensional “meshgrid”. An instance of numpy.lib.index_tricks.nd_grid which returns a dense (or fleshed out) mesh-grid when indexed, so that each returned argument has the same shape. The dimensions and number of the output arrays are equal to the number of indexing dimensions. If the step length is not a complex number, then the stop is not inclusive. However, if the step length is a complex number (e.g. 5j), then the integer part of its magnitude is interpreted as specifying the number of points to create between the start and stop values, where the stop value is inclusive. Returns
mesh-grid ndarrays all of the same dimensions
See also numpy.lib.index_tricks.nd_grid
class of ogrid and mgrid objects ogrid
like mgrid but returns open (not fleshed out) mesh grids r_
array concatenator Examples >>> np.mgrid[0:5,0:5]
array([[[0, 0, 0, 0, 0],
[1, 1, 1, 1, 1],
[2, 2, 2, 2, 2],
[3, 3, 3, 3, 3],
[4, 4, 4, 4, 4]],
[[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4]]])
>>> np.mgrid[-1:1:5j]
array([-1. , -0.5, 0. , 0.5, 1. ]) | numpy.reference.generated.numpy.mgrid |
numpy.min_scalar_type numpy.min_scalar_type(a, /)
For scalar a, returns the data type with the smallest size and smallest scalar kind which can hold its value. For non-scalar array a, returns the vector’s dtype unmodified. Floating point values are not demoted to integers, and complex values are not demoted to floats. Parameters
ascalar or array_like
The value whose minimal data type is to be found. Returns
outdtype
The minimal data type. See also
result_type, promote_types, dtype, can_cast
Notes New in version 1.6.0. Examples >>> np.min_scalar_type(10)
dtype('uint8')
>>> np.min_scalar_type(-260)
dtype('int16')
>>> np.min_scalar_type(3.1)
dtype('float16')
>>> np.min_scalar_type(1e50)
dtype('float64')
>>> np.min_scalar_type(np.arange(4,dtype='f8'))
dtype('float64') | numpy.reference.generated.numpy.min_scalar_type |
numpy.minimum numpy.minimum(x1, x2, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj]) = <ufunc 'minimum'>
Element-wise minimum of array elements. Compare two arrays and return a new array containing the element-wise minima. If one of the elements being compared is a NaN, then that element is returned. If both elements are NaNs then the first is returned. The latter distinction is important for complex NaNs, which are defined as at least one of the real or imaginary parts being a NaN. The net effect is that NaNs are propagated. Parameters
x1, x2array_like
The arrays holding the elements to be compared. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
wherearray_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
yndarray or scalar
The minimum of x1 and x2, element-wise. This is a scalar if both x1 and x2 are scalars. See also maximum
Element-wise maximum of two arrays, propagates NaNs. fmin
Element-wise minimum of two arrays, ignores NaNs. amin
The minimum value of an array along a given axis, propagates NaNs. nanmin
The minimum value of an array along a given axis, ignores NaNs.
fmax, amax, nanmax
Notes The minimum is equivalent to np.where(x1 <= x2, x1, x2) when neither x1 nor x2 are NaNs, but it is faster and does proper broadcasting. Examples >>> np.minimum([2, 3, 4], [1, 5, 2])
array([1, 3, 2])
>>> np.minimum(np.eye(2), [0.5, 2]) # broadcasting
array([[ 0.5, 0. ],
[ 0. , 1. ]])
>>> np.minimum([np.nan, 0, np.nan],[0, np.nan, np.nan])
array([nan, nan, nan])
>>> np.minimum(-np.Inf, 1)
-inf | numpy.reference.generated.numpy.minimum |
numpy.mintypecode numpy.mintypecode(typechars, typeset='GDFgdf', default='d')[source]
Return the character for the minimum-size type to which given types can be safely cast. The returned type character must represent the smallest size dtype such that an array of the returned type can handle the data from an array of all types in typechars (or if typechars is an array, then its dtype.char). Parameters
typecharslist of str or array_like
If a list of strings, each string should represent a dtype. If array_like, the character representation of the array dtype is used.
typesetstr or list of str, optional
The set of characters that the returned character is chosen from. The default set is ‘GDFgdf’.
defaultstr, optional
The default character, this is returned if none of the characters in typechars matches a character in typeset. Returns
typecharstr
The character representing the minimum-size type that was found. See also
dtype, sctype2char, maximum_sctype
Examples >>> np.mintypecode(['d', 'f', 'S'])
'd'
>>> x = np.array([1.1, 2-3.j])
>>> np.mintypecode(x)
'D'
>>> np.mintypecode('abceh', default='G')
'G' | numpy.reference.generated.numpy.mintypecode |
numpy.mod numpy.mod(x1, x2, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj]) = <ufunc 'remainder'>
Returns the element-wise remainder of division. Computes the remainder complementary to the floor_divide function. It is equivalent to the Python modulus operator ``x1 % x2`` and has the same sign as the divisor x2. The MATLAB function equivalent to np.remainder is mod. Warning This should not be confused with: Python 3.7’s math.remainder and C’s remainder, which compute the IEEE remainder, the complement to round(x1 / x2); or the MATLAB rem function and the C % operator, which are the complement to int(x1 / x2). Parameters
x1array_like
Dividend array.
x2array_like
Divisor array. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
wherearray_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
yndarray
The element-wise remainder of the quotient floor_divide(x1, x2). This is a scalar if both x1 and x2 are scalars. See also floor_divide
Equivalent of Python // operator. divmod
Simultaneous floor division and remainder. fmod
Equivalent of the MATLAB rem function.
divide, floor
Notes Returns 0 when x2 is 0 and both x1 and x2 are (arrays of) integers. mod is an alias of remainder. Examples >>> np.remainder([4, 7], [2, 3])
array([0, 1])
>>> np.remainder(np.arange(7), 5)
array([0, 1, 2, 3, 4, 0, 1])
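The difference from the MATLAB rem / C % behavior noted in the warning above shows up with negative operands: np.remainder follows the sign of the divisor, while np.fmod follows the sign of the dividend (a small added illustration). >>> np.remainder([-3, 3], 2)
array([1, 1])
>>> np.fmod([-3, 3], 2)
array([-1,  1])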
The % operator can be used as a shorthand for np.remainder on ndarrays. >>> x1 = np.arange(7)
>>> x1 % 5
array([0, 1, 2, 3, 4, 0, 1]) | numpy.reference.generated.numpy.mod |
numpy.modf numpy.modf(x, [out1, out2, ]/, [out=(None, None), ]*, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj]) = <ufunc 'modf'>
Return the fractional and integral parts of an array, element-wise. The fractional and integral parts are negative if the given number is negative. Parameters
xarray_like
Input array.
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
wherearray_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
y1ndarray
Fractional part of x. This is a scalar if x is a scalar.
y2ndarray
Integral part of x. This is a scalar if x is a scalar. See also divmod
divmod(x, 1) is equivalent to modf with the return values switched, except it always has a positive remainder. Notes For integer input the return values are floats. Examples >>> np.modf([0, 3.5])
(array([ 0. , 0.5]), array([ 0., 3.]))
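The relationship to divmod noted under See also can be checked directly (a small added illustration): divmod returns the quotient and a positive remainder, with the two return values switched relative to modf. >>> np.divmod(-0.5, 1)
(-1.0, 0.5)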
>>> np.modf(-0.5)
(-0.5, -0.0) | numpy.reference.generated.numpy.modf |
numpy.moveaxis numpy.moveaxis(a, source, destination)[source]
Move axes of an array to new positions. Other axes remain in their original order. New in version 1.11.0. Parameters
anp.ndarray
The array whose axes should be reordered.
sourceint or sequence of int
Original positions of the axes to move. These must be unique.
destinationint or sequence of int
Destination positions for each of the original axes. These must also be unique. Returns
resultnp.ndarray
Array with moved axes. This array is a view of the input array. See also transpose
Permute the dimensions of an array. swapaxes
Interchange two axes of an array. Examples >>> x = np.zeros((3, 4, 5))
>>> np.moveaxis(x, 0, -1).shape
(4, 5, 3)
>>> np.moveaxis(x, -1, 0).shape
(5, 3, 4)
These all achieve the same result: >>> np.transpose(x).shape
(5, 4, 3)
>>> np.swapaxes(x, 0, -1).shape
(5, 4, 3)
>>> np.moveaxis(x, [0, 1], [-1, -2]).shape
(5, 4, 3)
>>> np.moveaxis(x, [0, 1, 2], [-1, -2, -3]).shape
(5, 4, 3) | numpy.reference.generated.numpy.moveaxis |
numpy.msort numpy.msort(a)[source]
Return a copy of an array sorted along the first axis. Parameters
aarray_like
Array to be sorted. Returns
sorted_arrayndarray
Array of the same type and shape as a. See also sort
Notes np.msort(a) is equivalent to np.sort(a, axis=0). | numpy.reference.generated.numpy.msort |
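Since the entry above gives no example, here is a minimal added illustration of msort and the equivalence noted for it: >>> a = np.array([[3, 1], [0, 2]])
>>> np.msort(a)
array([[0, 1],
       [3, 2]])
>>> np.array_equal(np.msort(a), np.sort(a, axis=0))
True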
numpy.multiply numpy.multiply(x1, x2, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj]) = <ufunc 'multiply'>
Multiply arguments element-wise. Parameters
x1, x2array_like
Input arrays to be multiplied. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
wherearray_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
yndarray
The product of x1 and x2, element-wise. This is a scalar if both x1 and x2 are scalars. Notes Equivalent to x1 * x2 in terms of array broadcasting. Examples >>> np.multiply(2.0, 4.0)
8.0
>>> x1 = np.arange(9.0).reshape((3, 3))
>>> x2 = np.arange(3.0)
>>> np.multiply(x1, x2)
array([[ 0., 1., 4.],
[ 0., 4., 10.],
[ 0., 7., 16.]])
The * operator can be used as a shorthand for np.multiply on ndarrays. >>> x1 = np.arange(9.0).reshape((3, 3))
>>> x2 = np.arange(3.0)
>>> x1 * x2
array([[ 0., 1., 4.],
[ 0., 4., 10.],
[ 0., 7., 16.]]) | numpy.reference.generated.numpy.multiply |
numpy.NAN
IEEE 754 floating point representation of Not a Number (NaN). NaN and NAN are equivalent definitions of nan. Please use nan instead of NAN. See Also nan | numpy.reference.constants#numpy.NAN |
numpy.NaN
IEEE 754 floating point representation of Not a Number (NaN). NaN and NAN are equivalent definitions of nan. Please use nan instead of NaN. See Also nan | numpy.reference.constants#numpy.NaN |
numpy.nan
IEEE 754 floating point representation of Not a Number (NaN). Returns y : A floating point representation of Not a Number. See Also isnan : Shows which elements are Not a Number. isfinite : Shows which elements are finite (not one of Not a Number, positive infinity and negative infinity) Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity. NaN and NAN are aliases of nan. Examples >>> np.nan
nan
>>> np.log(-1)
nan
>>> np.log([-1, 1, 2])
array([ NaN, 0. , 0.69314718]) | numpy.reference.constants#numpy.nan |
numpy.nan_to_num numpy.nan_to_num(x, copy=True, nan=0.0, posinf=None, neginf=None)[source]
Replace NaN with zero and infinity with large finite numbers (default behaviour) or with the numbers defined by the user using the nan, posinf and/or neginf keywords. If x is inexact, NaN is replaced by zero or by the user defined value in nan keyword, infinity is replaced by the largest finite floating point values representable by x.dtype or by the user defined value in posinf keyword and -infinity is replaced by the most negative finite floating point values representable by x.dtype or by the user defined value in neginf keyword. For complex dtypes, the above is applied to each of the real and imaginary components of x separately. If x is not inexact, then no replacements are made. Parameters
xscalar or array_like
Input data.
copybool, optional
Whether to create a copy of x (True) or to replace values in-place (False). The in-place operation only occurs if casting to an array does not require a copy. Default is True. New in version 1.13.
nanint, float, optional
Value to be used to fill NaN values. If no value is passed then NaN values will be replaced with 0.0. New in version 1.17.
posinfint, float, optional
Value to be used to fill positive infinity values. If no value is passed then positive infinity values will be replaced with a very large number. New in version 1.17.
neginfint, float, optional
Value to be used to fill negative infinity values. If no value is passed then negative infinity values will be replaced with a very small (or negative) number. New in version 1.17. Returns
outndarray
x, with the non-finite values replaced. If copy is False, this may be x itself. See also isinf
Shows which elements are positive or negative infinity. isneginf
Shows which elements are negative infinity. isposinf
Shows which elements are positive infinity. isnan
Shows which elements are Not a Number (NaN). isfinite
Shows which elements are finite (not NaN, not infinity) Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity. Examples >>> np.nan_to_num(np.inf)
1.7976931348623157e+308
>>> np.nan_to_num(-np.inf)
-1.7976931348623157e+308
>>> np.nan_to_num(np.nan)
0.0
>>> x = np.array([np.inf, -np.inf, np.nan, -128, 128])
>>> np.nan_to_num(x)
array([ 1.79769313e+308, -1.79769313e+308, 0.00000000e+000, # may vary
-1.28000000e+002, 1.28000000e+002])
>>> np.nan_to_num(x, nan=-9999, posinf=33333333, neginf=33333333)
array([ 3.3333333e+07, 3.3333333e+07, -9.9990000e+03,
-1.2800000e+02, 1.2800000e+02])
>>> y = np.array([complex(np.inf, np.nan), np.nan, complex(np.nan, np.inf)])
>>> np.nan_to_num(y)
array([ 1.79769313e+308 +0.00000000e+000j, # may vary
0.00000000e+000 +0.00000000e+000j,
0.00000000e+000 +1.79769313e+308j])
>>> np.nan_to_num(y, nan=111111, posinf=222222)
array([222222.+111111.j, 111111. +0.j, 111111.+222222.j]) | numpy.reference.generated.numpy.nan_to_num |
numpy.nanargmax numpy.nanargmax(a, axis=None, out=None, *, keepdims=<no value>)[source]
Return the indices of the maximum values in the specified axis ignoring NaNs. For all-NaN slices ValueError is raised. Warning: the results cannot be trusted if a slice contains only NaNs and -Infs. Parameters
aarray_like
Input data.
axisint, optional
Axis along which to operate. By default flattened input is used.
outarray, optional
If provided, the result will be inserted into this array. It should be of the appropriate shape and dtype. New in version 1.22.0.
keepdimsbool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. New in version 1.22.0. Returns
index_arrayndarray
An array of indices or a single index value. See also
argmax, nanargmin
Examples >>> a = np.array([[np.nan, 4], [2, 3]])
>>> np.argmax(a)
0
>>> np.nanargmax(a)
1
>>> np.nanargmax(a, axis=0)
array([1, 0])
>>> np.nanargmax(a, axis=1)
array([1, 1]) | numpy.reference.generated.numpy.nanargmax |
numpy.nanargmin numpy.nanargmin(a, axis=None, out=None, *, keepdims=<no value>)[source]
Return the indices of the minimum values in the specified axis ignoring NaNs. For all-NaN slices ValueError is raised. Warning: the results cannot be trusted if a slice contains only NaNs and Infs. Parameters
aarray_like
Input data.
axisint, optional
Axis along which to operate. By default flattened input is used.
outarray, optional
If provided, the result will be inserted into this array. It should be of the appropriate shape and dtype. New in version 1.22.0.
keepdimsbool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. New in version 1.22.0. Returns
index_arrayndarray
An array of indices or a single index value. See also
argmin, nanargmax
Examples >>> a = np.array([[np.nan, 4], [2, 3]])
>>> np.argmin(a)
0
>>> np.nanargmin(a)
2
>>> np.nanargmin(a, axis=0)
array([1, 1])
>>> np.nanargmin(a, axis=1)
array([1, 0]) | numpy.reference.generated.numpy.nanargmin |
numpy.nancumprod numpy.nancumprod(a, axis=None, dtype=None, out=None)[source]
Return the cumulative product of array elements over a given axis treating Not a Numbers (NaNs) as one. The cumulative product does not change when NaNs are encountered and leading NaNs are replaced by ones. Ones are returned for slices that are all-NaN or empty. New in version 1.12.0. Parameters
aarray_like
Input array.
axisint, optional
Axis along which the cumulative product is computed. By default the input is flattened.
dtypedtype, optional
Type of the returned array, as well as of the accumulator in which the elements are multiplied. If dtype is not specified, it defaults to the dtype of a, unless a has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used instead.
outndarray, optional
Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type of the resulting values will be cast if necessary. Returns
nancumprodndarray
A new array holding the result is returned unless out is specified, in which case it is returned. See also numpy.cumprod
Cumulative product across array propagating NaNs. isnan
Show which elements are NaN. Examples >>> np.nancumprod(1)
array([1])
>>> np.nancumprod([1])
array([1])
>>> np.nancumprod([1, np.nan])
array([1., 1.])
>>> a = np.array([[1, 2], [3, np.nan]])
>>> np.nancumprod(a)
array([1., 2., 6., 6.])
>>> np.nancumprod(a, axis=0)
array([[1., 2.],
[3., 2.]])
>>> np.nancumprod(a, axis=1)
array([[1., 2.],
[3., 3.]]) | numpy.reference.generated.numpy.nancumprod |
numpy.nancumsum numpy.nancumsum(a, axis=None, dtype=None, out=None)[source]
Return the cumulative sum of array elements over a given axis treating Not a Numbers (NaNs) as zero. The cumulative sum does not change when NaNs are encountered and leading NaNs are replaced by zeros. Zeros are returned for slices that are all-NaN or empty. New in version 1.12.0. Parameters
aarray_like
Input array.
axisint, optional
Axis along which the cumulative sum is computed. The default (None) is to compute the cumsum over the flattened array.
dtypedtype, optional
Type of the returned array and of the accumulator in which the elements are summed. If dtype is not specified, it defaults to the dtype of a, unless a has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used.
outndarray, optional
Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type will be cast if necessary. See Output type determination for more details. Returns
nancumsumndarray.
A new array holding the result is returned unless out is specified, in which case it is returned. The result has the same size as a, and the same shape as a if axis is not None or a is a 1-d array. See also numpy.cumsum
Cumulative sum across array propagating NaNs. isnan
Show which elements are NaN. Examples >>> np.nancumsum(1)
array([1])
>>> np.nancumsum([1])
array([1])
>>> np.nancumsum([1, np.nan])
array([1., 1.])
>>> a = np.array([[1, 2], [3, np.nan]])
>>> np.nancumsum(a)
array([1., 3., 6., 6.])
>>> np.nancumsum(a, axis=0)
array([[1., 2.],
[4., 2.]])
>>> np.nancumsum(a, axis=1)
array([[1., 3.],
[3., 3.]]) | numpy.reference.generated.numpy.nancumsum |
numpy.nanmax numpy.nanmax(a, axis=None, out=None, keepdims=<no value>, initial=<no value>, where=<no value>)[source]
Return the maximum of an array or maximum along an axis, ignoring any NaNs. When all-NaN slices are encountered a RuntimeWarning is raised and NaN is returned for that slice. Parameters
aarray_like
Array containing numbers whose maximum is desired. If a is not an array, a conversion is attempted.
axis{int, tuple of int, None}, optional
Axis or axes along which the maximum is computed. The default is to compute the maximum of the flattened array.
outndarray, optional
Alternate output array in which to place the result. The default is None; if provided, it must have the same shape as the expected output, but the type will be cast if necessary. See Output type determination for more details. New in version 1.8.0.
keepdimsbool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original a. If the value is anything but the default, then keepdims will be passed through to the max method of sub-classes of ndarray. If the sub-class's method does not implement keepdims, any exceptions will be raised. New in version 1.8.0.
initialscalar, optional
The minimum value of an output element. Must be present to allow computation on an empty slice. See reduce for details. New in version 1.22.0.
wherearray_like of bool, optional
Elements to compare for the maximum. See reduce for details. New in version 1.22.0. Returns
nanmaxndarray
An array with the same shape as a, with the specified axis removed. If a is a 0-d array, or if axis is None, an ndarray scalar is returned. The same dtype as a is returned. See also nanmin
The minimum value of an array along a given axis, ignoring any NaNs. amax
The maximum value of an array along a given axis, propagating any NaNs. fmax
Element-wise maximum of two arrays, ignoring any NaNs. maximum
Element-wise maximum of two arrays, propagating any NaNs. isnan
Shows which elements are Not a Number (NaN). isfinite
Shows which elements are neither NaN nor infinity.
amin, fmin, minimum
Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity. Positive infinity is treated as a very large number and negative infinity is treated as a very small (i.e. negative) number. If the input has an integer type, the function is equivalent to np.max. Examples >>> a = np.array([[1, 2], [3, np.nan]])
>>> np.nanmax(a)
3.0
>>> np.nanmax(a, axis=0)
array([3., 2.])
>>> np.nanmax(a, axis=1)
array([2., 3.])
When positive infinity and negative infinity are present: >>> np.nanmax([1, 2, np.nan, np.NINF])
2.0
>>> np.nanmax([1, 2, np.nan, np.inf])
inf | numpy.reference.generated.numpy.nanmax |
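A minimal supplementary sketch, assuming NumPy 1.22 or later and the usual import numpy as np: initial acts as an extra candidate for the maximum, and where restricts which elements are compared (an initial value is then needed in case a slice has nothing left to compare). >>> a = np.array([[1, 2], [3, np.nan]])
>>> np.nanmax(a, axis=1, initial=5)
array([5., 5.])
>>> np.nanmax(a, axis=1, where=[True, False], initial=-np.inf)
array([1., 3.])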
numpy.nanmean numpy.nanmean(a, axis=None, dtype=None, out=None, keepdims=<no value>, *, where=<no value>)[source]
Compute the arithmetic mean along the specified axis, ignoring NaNs. Returns the average of the array elements. The average is taken over the flattened array by default, otherwise over the specified axis. float64 intermediate and return values are used for integer inputs. For all-NaN slices, NaN is returned and a RuntimeWarning is raised. New in version 1.8.0. Parameters
aarray_like
Array containing numbers whose mean is desired. If a is not an array, a conversion is attempted.
axis{int, tuple of int, None}, optional
Axis or axes along which the means are computed. The default is to compute the mean of the flattened array.
dtypedata-type, optional
Type to use in computing the mean. For integer inputs, the default is float64; for inexact inputs, it is the same as the input dtype.
outndarray, optional
Alternate output array in which to place the result. The default is None; if provided, it must have the same shape as the expected output, but the type will be cast if necessary. See Output type determination for more details.
keepdimsbool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original a. If the value is anything but the default, then keepdims will be passed through to the mean or sum methods of sub-classes of ndarray. If the sub-class's method does not implement keepdims, any exceptions will be raised.
wherearray_like of bool, optional
Elements to include in the mean. See reduce for details. New in version 1.22.0. Returns
mndarray, see dtype parameter above
If out=None, returns a new array containing the mean values, otherwise a reference to the output array is returned. NaN is returned for slices that contain only NaNs. See also average
Weighted average mean
Arithmetic mean taken while not ignoring NaNs
var, nanvar
Notes The arithmetic mean is the sum of the non-NaN elements along the axis divided by the number of non-NaN elements. Note that for floating-point input, the mean is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for float32. Specifying a higher-precision accumulator using the dtype keyword can alleviate this issue. Examples >>> a = np.array([[1, np.nan], [3, 4]])
>>> np.nanmean(a)
2.6666666666666665
>>> np.nanmean(a, axis=0)
array([2., 4.])
>>> np.nanmean(a, axis=1)
array([1., 3.5]) # may vary | numpy.reference.generated.numpy.nanmean |
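A minimal supplementary sketch, assuming NumPy 1.22 or later and the usual import numpy as np: where selects which elements enter the mean, on top of the NaN handling. >>> a = np.array([[1, np.nan], [3, 4]])
>>> np.nanmean(a, axis=1, where=[True, False])
array([1., 3.])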
numpy.nanmedian numpy.nanmedian(a, axis=None, out=None, overwrite_input=False, keepdims=<no value>)[source]
Compute the median along the specified axis, while ignoring NaNs. Returns the median of the array elements. New in version 1.9.0. Parameters
aarray_like
Input array or object that can be converted to an array.
axis{int, sequence of int, None}, optional
Axis or axes along which the medians are computed. The default is to compute the median along a flattened version of the array. A sequence of axes is supported since version 1.9.0.
outndarray, optional
Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary.
overwrite_inputbool, optional
If True, then allow use of memory of input array a for calculations. The input array will be modified by the call to median. This will save memory when you do not need to preserve the contents of the input array. Treat the input as undefined, but it will probably be fully or partially sorted. Default is False. If overwrite_input is True and a is not already an ndarray, an error will be raised.
keepdimsbool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original a. If this is anything but the default value it will be passed through (in the special case of an empty array) to the mean function of the underlying array. If the array is a sub-class and mean does not have the kwarg keepdims this will raise a RuntimeError. Returns
medianndarray
A new array holding the result. If the input contains integers or floats smaller than float64, then the output data-type is np.float64. Otherwise, the data-type of the output is the same as that of the input. If out is specified, that array is returned instead. See also
mean, median, percentile
Notes Given a vector V of length N, the median of V is the middle value of a sorted copy of V, V_sorted - i.e., V_sorted[(N-1)/2], when N is odd and the average of the two middle values of V_sorted when N is even. Examples >>> a = np.array([[10.0, 7, 4], [3, 2, 1]])
>>> a[0, 1] = np.nan
>>> a
array([[10., nan, 4.],
[ 3., 2., 1.]])
>>> np.median(a)
nan
>>> np.nanmedian(a)
3.0
>>> np.nanmedian(a, axis=0)
array([6.5, 2. , 2.5])
>>> np.median(a, axis=1)
array([nan, 2.])
>>> b = a.copy()
>>> np.nanmedian(b, axis=1, overwrite_input=True)
array([7., 2.])
>>> assert not np.all(a==b)
>>> b = a.copy()
>>> np.nanmedian(b, axis=None, overwrite_input=True)
3.0
>>> assert not np.all(a==b) | numpy.reference.generated.numpy.nanmedian |
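A small illustrative sketch, assuming the usual import numpy as np: with keepdims=True the reduced axis is kept, so the result broadcasts against the original array. >>> a = np.array([[10., np.nan, 4.], [3., 2., 1.]])
>>> np.nanmedian(a, axis=1, keepdims=True)
array([[7.],
       [2.]])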
numpy.nanmin numpy.nanmin(a, axis=None, out=None, keepdims=<no value>, initial=<no value>, where=<no value>)[source]
Return the minimum of an array or minimum along an axis, ignoring any NaNs. When all-NaN slices are encountered a RuntimeWarning is raised and NaN is returned for that slice. Parameters
aarray_like
Array containing numbers whose minimum is desired. If a is not an array, a conversion is attempted.
axis{int, tuple of int, None}, optional
Axis or axes along which the minimum is computed. The default is to compute the minimum of the flattened array.
outndarray, optional
Alternate output array in which to place the result. The default is None; if provided, it must have the same shape as the expected output, but the type will be cast if necessary. See Output type determination for more details. New in version 1.8.0.
keepdimsbool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original a. If the value is anything but the default, then keepdims will be passed through to the min method of sub-classes of ndarray. If the sub-class's method does not implement keepdims, any exceptions will be raised. New in version 1.8.0.
initialscalar, optional
The maximum value of an output element. Must be present to allow computation on an empty slice. See reduce for details. New in version 1.22.0.
wherearray_like of bool, optional
Elements to compare for the minimum. See reduce for details. New in version 1.22.0. Returns
nanminndarray
An array with the same shape as a, with the specified axis removed. If a is a 0-d array, or if axis is None, an ndarray scalar is returned. The same dtype as a is returned. See also nanmax
The maximum value of an array along a given axis, ignoring any NaNs. amin
The minimum value of an array along a given axis, propagating any NaNs. fmin
Element-wise minimum of two arrays, ignoring any NaNs. minimum
Element-wise minimum of two arrays, propagating any NaNs. isnan
Shows which elements are Not a Number (NaN). isfinite
Shows which elements are neither NaN nor infinity.
amax, fmax, maximum
Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity. Positive infinity is treated as a very large number and negative infinity is treated as a very small (i.e. negative) number. If the input has an integer type, the function is equivalent to np.min. Examples >>> a = np.array([[1, 2], [3, np.nan]])
>>> np.nanmin(a)
1.0
>>> np.nanmin(a, axis=0)
array([1., 2.])
>>> np.nanmin(a, axis=1)
array([1., 3.])
When positive infinity and negative infinity are present: >>> np.nanmin([1, 2, np.nan, np.inf])
1.0
>>> np.nanmin([1, 2, np.nan, np.NINF])
-inf | numpy.reference.generated.numpy.nanmin |
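A minimal supplementary sketch, assuming NumPy 1.22 or later and the usual import numpy as np: initial acts as an extra candidate for the minimum, and where restricts which elements are compared. >>> a = np.array([[1, 2], [3, np.nan]])
>>> np.nanmin(a, axis=1, initial=0)
array([0., 0.])
>>> np.nanmin(a, axis=1, where=[True, False], initial=np.inf)
array([1., 3.])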
numpy.nanpercentile numpy.nanpercentile(a, q, axis=None, out=None, overwrite_input=False, method='linear', keepdims=<no value>, *, interpolation=None)[source]
Compute the qth percentile of the data along the specified axis, while ignoring nan values. Returns the qth percentile(s) of the array elements. New in version 1.9.0. Parameters
aarray_like
Input array or object that can be converted to an array, containing nan values to be ignored.
qarray_like of float
Percentile or sequence of percentiles to compute, which must be between 0 and 100 inclusive.
axis{int, tuple of int, None}, optional
Axis or axes along which the percentiles are computed. The default is to compute the percentile(s) along a flattened version of the array.
outndarray, optional
Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary.
overwrite_inputbool, optional
If True, then allow the input array a to be modified by intermediate calculations, to save memory. In this case, the contents of the input a after this function completes is undefined.
methodstr, optional
This parameter specifies the method to use for estimating the percentile. There are many different methods, some unique to NumPy. See the notes for explanation. The options, sorted by their R type as summarized in the H&F paper [1], are: ‘inverted_cdf’, ‘averaged_inverted_cdf’, ‘closest_observation’, ‘interpolated_inverted_cdf’, ‘hazen’, ‘weibull’, ‘linear’ (default), ‘median_unbiased’, ‘normal_unbiased’. The first three methods are discontinuous. NumPy further defines the following discontinuous variations of the default ‘linear’ (7.) option: ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’. Changed in version 1.22.0: This argument was previously called “interpolation” and only offered the “linear” default and last four options.
keepdimsbool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original array a. If this is anything but the default value it will be passed through (in the special case of an empty array) to the mean function of the underlying array. If the array is a sub-class and mean does not have the kwarg keepdims this will raise a RuntimeError.
interpolationstr, optional
Deprecated name for the method keyword argument. Deprecated since version 1.22.0. Returns
percentilescalar or ndarray
If q is a single percentile and axis=None, then the result is a scalar. If multiple percentiles are given, first axis of the result corresponds to the percentiles. The other axes are the axes that remain after the reduction of a. If the input contains integers or floats smaller than float64, the output data-type is float64. Otherwise, the output data-type is the same as that of the input. If out is specified, that array is returned instead. See also nanmean
nanmedian
equivalent to nanpercentile(..., 50)
percentile, median, mean
nanquantile
equivalent to nanpercentile, except q in range [0, 1]. Notes For more information please see numpy.percentile References 1
R. J. Hyndman and Y. Fan, “Sample quantiles in statistical packages,” The American Statistician, 50(4), pp. 361-365, 1996 Examples >>> a = np.array([[10., 7., 4.], [3., 2., 1.]])
>>> a[0][1] = np.nan
>>> a
array([[10., nan, 4.],
[ 3., 2., 1.]])
>>> np.percentile(a, 50)
nan
>>> np.nanpercentile(a, 50)
3.0
>>> np.nanpercentile(a, 50, axis=0)
array([6.5, 2. , 2.5])
>>> np.nanpercentile(a, 50, axis=1, keepdims=True)
array([[7.],
[2.]])
>>> m = np.nanpercentile(a, 50, axis=0)
>>> out = np.zeros_like(m)
>>> np.nanpercentile(a, 50, axis=0, out=out)
array([6.5, 2. , 2.5])
>>> m
array([6.5, 2. , 2.5])
>>> b = a.copy()
>>> np.nanpercentile(b, 50, axis=1, overwrite_input=True)
array([7., 2.])
>>> assert not np.all(a==b) | numpy.reference.generated.numpy.nanpercentile |
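A small illustrative sketch, assuming NumPy 1.22 or later and the usual import numpy as np: several percentiles can be requested at once, and the discrete methods avoid interpolating between the sorted non-NaN values (here [1, 2, 3, 4, 10]). >>> a = np.array([[10., np.nan, 4.], [3., 2., 1.]])
>>> np.nanpercentile(a, [25, 50, 75])
array([2., 3., 4.])
>>> np.nanpercentile(a, 60, method='midpoint')
3.5
>>> np.nanpercentile(a, 60, method='lower')
3.0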
numpy.nanprod numpy.nanprod(a, axis=None, dtype=None, out=None, keepdims=<no value>, initial=<no value>, where=<no value>)[source]
Return the product of array elements over a given axis treating Not a Numbers (NaNs) as ones. One is returned for slices that are all-NaN or empty. New in version 1.10.0. Parameters
aarray_like
Array containing numbers whose product is desired. If a is not an array, a conversion is attempted.
axis{int, tuple of int, None}, optional
Axis or axes along which the product is computed. The default is to compute the product of the flattened array.
dtypedata-type, optional
The type of the returned array and of the accumulator in which the elements are summed. By default, the dtype of a is used. An exception is when a has an integer type with less precision than the platform (u)intp. In that case, the default will be either (u)int32 or (u)int64 depending on whether the platform is 32 or 64 bits. For inexact inputs, dtype must be inexact.
outndarray, optional
Alternate output array in which to place the result. The default is None. If provided, it must have the same shape as the expected output, but the type will be cast if necessary. See Output type determination for more details. The casting of NaN to integer can yield unexpected results.
keepdimsbool, optional
If True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original arr.
initialscalar, optional
The starting value for this product. See reduce for details. New in version 1.22.0.
wherearray_like of bool, optional
Elements to include in the product. See reduce for details. New in version 1.22.0. Returns
nanprodndarray
A new array holding the result is returned unless out is specified, in which case it is returned. See also numpy.prod
Product across array propagating NaNs. isnan
Show which elements are NaN. Examples >>> np.nanprod(1)
1
>>> np.nanprod([1])
1
>>> np.nanprod([1, np.nan])
1.0
>>> a = np.array([[1, 2], [3, np.nan]])
>>> np.nanprod(a)
6.0
>>> np.nanprod(a, axis=0)
array([3., 2.]) | numpy.reference.generated.numpy.nanprod |
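A small illustrative sketch, assuming the usual import numpy as np: with an axis and keepdims=True the reduced axis is kept with size one. >>> a = np.array([[1, 2], [3, np.nan]])
>>> np.nanprod(a, axis=1)
array([2., 3.])
>>> np.nanprod(a, axis=1, keepdims=True)
array([[2.],
       [3.]])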
numpy.nanquantile numpy.nanquantile(a, q, axis=None, out=None, overwrite_input=False, method='linear', keepdims=<no value>, *, interpolation=None)[source]
Compute the qth quantile of the data along the specified axis, while ignoring nan values. Returns the qth quantile(s) of the array elements. New in version 1.15.0. Parameters
aarray_like
Input array or object that can be converted to an array, containing nan values to be ignored
qarray_like of float
Quantile or sequence of quantiles to compute, which must be between 0 and 1 inclusive.
axis{int, tuple of int, None}, optional
Axis or axes along which the quantiles are computed. The default is to compute the quantile(s) along a flattened version of the array.
outndarray, optional
Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary.
overwrite_inputbool, optional
If True, then allow the input array a to be modified by intermediate calculations, to save memory. In this case, the contents of the input a after this function completes is undefined.
methodstr, optional
This parameter specifies the method to use for estimating the quantile. There are many different methods, some unique to NumPy. See the notes for explanation. The options, sorted by their R type as summarized in the H&F paper [1], are: ‘inverted_cdf’, ‘averaged_inverted_cdf’, ‘closest_observation’, ‘interpolated_inverted_cdf’, ‘hazen’, ‘weibull’, ‘linear’ (default), ‘median_unbiased’, ‘normal_unbiased’. The first three methods are discontinuous. NumPy further defines the following discontinuous variations of the default ‘linear’ (7.) option: ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’. Changed in version 1.22.0: This argument was previously called “interpolation” and only offered the “linear” default and last four options.
keepdimsbool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original array a. If this is anything but the default value it will be passed through (in the special case of an empty array) to the mean function of the underlying array. If the array is a sub-class and mean does not have the kwarg keepdims this will raise a RuntimeError.
interpolationstr, optional
Deprecated name for the method keyword argument. Deprecated since version 1.22.0. Returns
quantilescalar or ndarray
If q is a single quantile and axis=None, then the result is a scalar. If multiple quantiles are given, the first axis of the result corresponds to the quantiles. The other axes are the axes that remain after the reduction of a. If the input contains integers or floats smaller than float64, the output data-type is float64. Otherwise, the output data-type is the same as that of the input. If out is specified, that array is returned instead. See also quantile
nanmean, nanmedian
nanmedian
equivalent to nanquantile(..., 0.5) nanpercentile
same as nanquantile, but with q in the range [0, 100]. Notes For more information please see numpy.quantile References 1
R. J. Hyndman and Y. Fan, “Sample quantiles in statistical packages,” The American Statistician, 50(4), pp. 361-365, 1996 Examples >>> a = np.array([[10., 7., 4.], [3., 2., 1.]])
>>> a[0][1] = np.nan
>>> a
array([[10., nan, 4.],
[ 3., 2., 1.]])
>>> np.quantile(a, 0.5)
nan
>>> np.nanquantile(a, 0.5)
3.0
>>> np.nanquantile(a, 0.5, axis=0)
array([6.5, 2. , 2.5])
>>> np.nanquantile(a, 0.5, axis=1, keepdims=True)
array([[7.],
[2.]])
>>> m = np.nanquantile(a, 0.5, axis=0)
>>> out = np.zeros_like(m)
>>> np.nanquantile(a, 0.5, axis=0, out=out)
array([6.5, 2. , 2.5])
>>> m
array([6.5, 2. , 2.5])
>>> b = a.copy()
>>> np.nanquantile(b, 0.5, axis=1, overwrite_input=True)
array([7., 2.])
>>> assert not np.all(a==b) | numpy.reference.generated.numpy.nanquantile |
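A small illustrative sketch, assuming NumPy 1.22 or later and the usual import numpy as np: a sequence of quantiles can be requested at once, and method selects how positions between the sorted non-NaN values are resolved. >>> a = np.array([[10., np.nan, 4.], [3., 2., 1.]])
>>> np.nanquantile(a, [0.25, 0.75])
array([2., 4.])
>>> np.nanquantile(a, 0.6, method='midpoint')
3.5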
numpy.nanstd numpy.nanstd(a, axis=None, dtype=None, out=None, ddof=0, keepdims=<no value>, *, where=<no value>)[source]
Compute the standard deviation along the specified axis, while ignoring NaNs. Returns the standard deviation, a measure of the spread of a distribution, of the non-NaN array elements. The standard deviation is computed for the flattened array by default, otherwise over the specified axis. For all-NaN slices or slices with zero degrees of freedom, NaN is returned and a RuntimeWarning is raised. New in version 1.8.0. Parameters
aarray_like
Calculate the standard deviation of the non-NaN values.
axis{int, tuple of int, None}, optional
Axis or axes along which the standard deviation is computed. The default is to compute the standard deviation of the flattened array.
dtypedtype, optional
Type to use in computing the standard deviation. For arrays of integer type the default is float64, for arrays of float types it is the same as the array type.
outndarray, optional
Alternative output array in which to place the result. It must have the same shape as the expected output but the type (of the calculated values) will be cast if necessary.
ddofint, optional
“Delta Degrees of Freedom”: the divisor used in calculations is N - ddof, where N represents the number of non-NaN elements. By default ddof is zero.
keepdimsbool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original a. If this value is anything but the default it is passed through as-is to the relevant functions of the sub-classes. If these functions do not have a keepdims kwarg, a RuntimeError will be raised.
wherearray_like of bool, optional
Elements to include in the standard deviation. See reduce for details. New in version 1.22.0. Returns
standard_deviationndarray, see dtype parameter above.
If out is None, return a new array containing the standard deviation, otherwise return a reference to the output array. If ddof is >= the number of non-NaN elements in a slice or the slice contains only NaNs, then the result for that slice is NaN. See also
var, mean, std
nanvar, nanmean
Output type determination
Notes The standard deviation is the square root of the average of the squared deviations from the mean: std = sqrt(mean(abs(x - x.mean())**2)). The average squared deviation is normally calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of the infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables. The standard deviation computed in this function is the square root of the estimated variance, so even with ddof=1, it will not be an unbiased estimate of the standard deviation per se. Note that, for complex numbers, std takes the absolute value before squaring, so that the result is always real and nonnegative. For floating-point input, the std is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for float32 (see example below). Specifying a higher-accuracy accumulator using the dtype keyword can alleviate this issue. Examples >>> a = np.array([[1, np.nan], [3, 4]])
>>> np.nanstd(a)
1.247219128924647
>>> np.nanstd(a, axis=0)
array([1., 0.])
>>> np.nanstd(a, axis=1)
array([0., 0.5]) # may vary | numpy.reference.generated.numpy.nanstd |
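A small illustrative sketch, assuming the usual import numpy as np: ddof switches the divisor from N to N - ddof, where N counts only the non-NaN elements. >>> np.nanstd([1., 3., np.nan])
1.0
>>> np.nanstd([1., 3., np.nan], ddof=1)
1.4142135623730951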
numpy.nansum numpy.nansum(a, axis=None, dtype=None, out=None, keepdims=<no value>, initial=<no value>, where=<no value>)[source]
Return the sum of array elements over a given axis treating Not a Numbers (NaNs) as zero. In NumPy versions <= 1.9.0 NaN is returned for slices that are all-NaN or empty. In later versions zero is returned. Parameters
aarray_like
Array containing numbers whose sum is desired. If a is not an array, a conversion is attempted.
axis{int, tuple of int, None}, optional
Axis or axes along which the sum is computed. The default is to compute the sum of the flattened array.
dtypedata-type, optional
The type of the returned array and of the accumulator in which the elements are summed. By default, the dtype of a is used. An exception is when a has an integer type with less precision than the platform (u)intp. In that case, the default will be either (u)int32 or (u)int64 depending on whether the platform is 32 or 64 bits. For inexact inputs, dtype must be inexact. New in version 1.8.0.
outndarray, optional
Alternate output array in which to place the result. The default is None. If provided, it must have the same shape as the expected output, but the type will be cast if necessary. See Output type determination for more details. The casting of NaN to integer can yield unexpected results. New in version 1.8.0.
keepdimsbool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original a. If the value is anything but the default, then keepdims will be passed through to the mean or sum methods of sub-classes of ndarray. If the sub-class's method does not implement keepdims, any exceptions will be raised. New in version 1.8.0.
initialscalar, optional
Starting value for the sum. See reduce for details. New in version 1.22.0.
wherearray_like of bool, optional
Elements to include in the sum. See reduce for details. New in version 1.22.0. Returns
nansumndarray.
A new array holding the result is returned unless out is specified, in which case it is returned. The result has the same size as a, and the same shape as a if axis is not None or a is a 1-d array. See also numpy.sum
Sum across array propagating NaNs. isnan
Show which elements are NaN. isfinite
Show which elements are not NaN or +/-inf. Notes If both positive and negative infinity are present, the sum will be Not A Number (NaN). Examples >>> np.nansum(1)
1
>>> np.nansum([1])
1
>>> np.nansum([1, np.nan])
1.0
>>> a = np.array([[1, 1], [1, np.nan]])
>>> np.nansum(a)
3.0
>>> np.nansum(a, axis=0)
array([2., 1.])
>>> np.nansum([1, np.nan, np.inf])
inf
>>> np.nansum([1, np.nan, np.NINF])
-inf
>>> from numpy.testing import suppress_warnings
>>> with suppress_warnings() as sup:
... sup.filter(RuntimeWarning)
... np.nansum([1, np.nan, np.inf, -np.inf]) # both +/- infinity present
nan | numpy.reference.generated.numpy.nansum |
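A minimal supplementary sketch, assuming NumPy 1.22 or later and the usual import numpy as np: initial is added into every slice's sum, and where restricts which elements are summed. >>> a = np.array([[1, 1], [1, np.nan]])
>>> np.nansum(a, axis=1, initial=10)
array([12., 11.])
>>> np.nansum(a, axis=1, where=[True, False])
array([1., 1.])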
numpy.nanvar numpy.nanvar(a, axis=None, dtype=None, out=None, ddof=0, keepdims=<no value>, *, where=<no value>)[source]
Compute the variance along the specified axis, while ignoring NaNs. Returns the variance of the array elements, a measure of the spread of a distribution. The variance is computed for the flattened array by default, otherwise over the specified axis. For all-NaN slices or slices with zero degrees of freedom, NaN is returned and a RuntimeWarning is raised. New in version 1.8.0. Parameters
aarray_like
Array containing numbers whose variance is desired. If a is not an array, a conversion is attempted.
axis{int, tuple of int, None}, optional
Axis or axes along which the variance is computed. The default is to compute the variance of the flattened array.
dtypedata-type, optional
Type to use in computing the variance. For arrays of integer type the default is float64; for arrays of float types it is the same as the array type.
outndarray, optional
Alternate output array in which to place the result. It must have the same shape as the expected output, but the type is cast if necessary.
ddofint, optional
“Delta Degrees of Freedom”: the divisor used in the calculation is N - ddof, where N represents the number of non-NaN elements. By default ddof is zero.
keepdimsbool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original a.
wherearray_like of bool, optional
Elements to include in the variance. See reduce for details. New in version 1.22.0. Returns
variancendarray, see dtype parameter above
If out is None, return a new array containing the variance, otherwise return a reference to the output array. If ddof is >= the number of non-NaN elements in a slice or the slice contains only NaNs, then the result for that slice is NaN. See also std
Standard deviation mean
Average var
Variance while not ignoring NaNs
nanstd, nanmean
Output type determination
Notes The variance is the average of the squared deviations from the mean, i.e., var = mean(abs(x - x.mean())**2). The mean is normally calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of a hypothetical infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables. Note that for complex numbers, the absolute value is taken before squaring, so that the result is always real and nonnegative. For floating-point input, the variance is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for float32 (see example below). Specifying a higher-accuracy accumulator using the dtype keyword can alleviate this issue. For this function to work on sub-classes of ndarray, they must define sum with the kwarg keepdims Examples >>> a = np.array([[1, np.nan], [3, 4]])
>>> np.nanvar(a)
1.5555555555555554
>>> np.nanvar(a, axis=0)
array([1., 0.])
>>> np.nanvar(a, axis=1)
array([0., 0.25]) # may vary | numpy.reference.generated.numpy.nanvar |
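A small illustrative sketch, assuming the usual import numpy as np: ddof switches the divisor from N to N - ddof, where N counts only the non-NaN elements; nanstd returns the square root of these values. >>> np.nanvar([1., 3., np.nan])
1.0
>>> np.nanvar([1., 3., np.nan], ddof=1)
2.0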
numpy.ndarray class numpy.ndarray(shape, dtype=float, buffer=None, offset=0, strides=None, order=None)[source]
An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using array, zeros or empty (refer to the See Also section below). The parameters given here refer to a low-level method (ndarray(…)) for instantiating an array. For more information, refer to the numpy module and examine the methods and attributes of an array. Parameters
(for the __new__ method; see Notes below)
shapetuple of ints
Shape of created array.
dtypedata-type, optional
Any object that can be interpreted as a numpy data type.
bufferobject exposing buffer interface, optional
Used to fill the array with data.
offsetint, optional
Offset of array data in buffer.
stridestuple of ints, optional
Strides of data in memory.
order{‘C’, ‘F’}, optional
Row-major (C-style) or column-major (Fortran-style) order. See also array
Construct an array. zeros
Create an array, each element of which is zero. empty
Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). dtype
Create a data-type. numpy.typing.NDArray
An ndarray alias generic w.r.t. its dtype.type. Notes There are two modes of creating an array using __new__: If buffer is None, then only shape, dtype, and order are used. If buffer is an object exposing the buffer interface, then all keywords are interpreted. No __init__ method is needed because the array is fully initialized after the __new__ method. Examples These examples illustrate the low-level ndarray constructor. Refer to the See Also section above for easier ways of constructing an ndarray. First mode, buffer is None: >>> np.ndarray(shape=(2,2), dtype=float, order='F')
array([[0.0e+000, 0.0e+000], # random
[ nan, 2.5e-323]])
Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]),
... offset=np.int_().itemsize,
... dtype=int) # offset = 1*itemsize, i.e. skip first element
array([2, 3])
Attributes
Tndarray
The transposed array.
databuffer
Python buffer object pointing to the start of the array’s data.
dtypedtype object
Data-type of the array’s elements.
flagsdict
Information about the memory layout of the array.
flatnumpy.flatiter object
A 1-D iterator over the array.
imagndarray
The imaginary part of the array.
realndarray
The real part of the array.
sizeint
Number of elements in the array.
itemsizeint
Length of one array element in bytes.
nbytesint
Total bytes consumed by the elements of the array.
ndimint
Number of array dimensions.
shapetuple of ints
Tuple of array dimensions.
stridestuple of ints
Tuple of bytes to step in each dimension when traversing an array.
ctypesctypes object
An object to simplify the interaction of the array with the ctypes module.
basendarray
Base object if memory is from some other object. Methods
all([axis, out, keepdims, where]) Returns True if all elements evaluate to True.
any([axis, out, keepdims, where]) Returns True if any of the elements of a evaluate to True.
argmax([axis, out]) Return indices of the maximum values along the given axis.
argmin([axis, out]) Return indices of the minimum values along the given axis.
argpartition(kth[, axis, kind, order]) Returns the indices that would partition this array.
argsort([axis, kind, order]) Returns the indices that would sort this array.
astype(dtype[, order, casting, subok, copy]) Copy of the array, cast to a specified type.
byteswap([inplace]) Swap the bytes of the array elements
choose(choices[, out, mode]) Use an index array to construct a new array from a set of choices.
clip([min, max, out]) Return an array whose values are limited to [min, max].
compress(condition[, axis, out]) Return selected slices of this array along given axis.
conj() Complex-conjugate all elements.
conjugate() Return the complex conjugate, element-wise.
copy([order]) Return a copy of the array.
cumprod([axis, dtype, out]) Return the cumulative product of the elements along the given axis.
cumsum([axis, dtype, out]) Return the cumulative sum of the elements along the given axis.
diagonal([offset, axis1, axis2]) Return specified diagonals.
dump(file) Dump a pickle of the array to the specified file.
dumps() Returns the pickle of the array as a string.
fill(value) Fill the array with a scalar value.
flatten([order]) Return a copy of the array collapsed into one dimension.
getfield(dtype[, offset]) Returns a field of the given array as a certain type.
item(*args) Copy an element of an array to a standard Python scalar and return it.
itemset(*args) Insert scalar into an array (scalar is cast to array's dtype, if possible)
max([axis, out, keepdims, initial, where]) Return the maximum along a given axis.
mean([axis, dtype, out, keepdims, where]) Returns the average of the array elements along given axis.
min([axis, out, keepdims, initial, where]) Return the minimum along a given axis.
newbyteorder([new_order]) Return the array with the same data viewed with a different byte order.
nonzero() Return the indices of the elements that are non-zero.
partition(kth[, axis, kind, order]) Rearranges the elements in the array in such a way that the value of the element in kth position is in the position it would be in a sorted array.
prod([axis, dtype, out, keepdims, initial, ...]) Return the product of the array elements over the given axis
ptp([axis, out, keepdims]) Peak to peak (maximum - minimum) value along a given axis.
put(indices, values[, mode]) Set a.flat[n] = values[n] for all n in indices.
ravel([order]) Return a flattened array.
repeat(repeats[, axis]) Repeat elements of an array.
reshape(shape[, order]) Returns an array containing the same data with a new shape.
resize(new_shape[, refcheck]) Change shape and size of array in-place.
round([decimals, out]) Return a with each element rounded to the given number of decimals.
searchsorted(v[, side, sorter]) Find indices where elements of v should be inserted in a to maintain order.
setfield(val, dtype[, offset]) Put a value into a specified place in a field defined by a data-type.
setflags([write, align, uic]) Set array flags WRITEABLE, ALIGNED, (WRITEBACKIFCOPY and UPDATEIFCOPY), respectively.
sort([axis, kind, order]) Sort an array in-place.
squeeze([axis]) Remove axes of length one from a.
std([axis, dtype, out, ddof, keepdims, where]) Returns the standard deviation of the array elements along given axis.
sum([axis, dtype, out, keepdims, initial, where]) Return the sum of the array elements over the given axis.
swapaxes(axis1, axis2) Return a view of the array with axis1 and axis2 interchanged.
take(indices[, axis, out, mode]) Return an array formed from the elements of a at the given indices.
tobytes([order]) Construct Python bytes containing the raw data bytes in the array.
tofile(fid[, sep, format]) Write array to a file as text or binary (default).
tolist() Return the array as an a.ndim-levels deep nested list of Python scalars.
tostring([order]) A compatibility alias for tobytes, with exactly the same behavior.
trace([offset, axis1, axis2, dtype, out]) Return the sum along diagonals of the array.
transpose(*axes) Returns a view of the array with axes transposed.
var([axis, dtype, out, ddof, keepdims, where]) Returns the variance of the array elements, along given axis.
view([dtype][, type]) New view of array with the same data.
dot | numpy.reference.generated.numpy.ndarray |
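A small illustrative sketch, assuming the usual import numpy as np: the attributes listed above can be inspected directly; for a C-ordered (2, 3) int32 array the row stride is 3 elements * 4 bytes and the column stride is 4 bytes. >>> x = np.zeros((2, 3), dtype=np.int32)
>>> x.shape, x.ndim, x.itemsize, x.nbytes
((2, 3), 2, 4, 24)
>>> x.strides
(12, 4)
>>> x.T.shape
(3, 2)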
numpy.ndenumerate class numpy.ndenumerate(arr)[source]
Multidimensional index iterator. Return an iterator yielding pairs of array coordinates and values. Parameters
arrndarray
Input array. See also
ndindex, flatiter
Examples >>> a = np.array([[1, 2], [3, 4]])
>>> for index, x in np.ndenumerate(a):
... print(index, x)
(0, 0) 1
(0, 1) 2
(1, 0) 3
(1, 1) 4 | numpy.reference.generated.numpy.ndenumerate |
numpy.ndindex class numpy.ndindex(*shape)[source]
An N-dimensional iterator object to index arrays. Given the shape of an array, an ndindex instance iterates over the N-dimensional index of the array. At each iteration a tuple of indices is returned, the last dimension is iterated over first. Parameters
shapeints, or a single tuple of ints
The size of each dimension of the array can be passed as individual parameters or as the elements of a tuple. See also
ndenumerate, flatiter
Examples Dimensions as individual arguments >>> for index in np.ndindex(3, 2, 1):
... print(index)
(0, 0, 0)
(0, 1, 0)
(1, 0, 0)
(1, 1, 0)
(2, 0, 0)
(2, 1, 0)
Same dimensions - but in a tuple (3, 2, 1) >>> for index in np.ndindex((3, 2, 1)):
... print(index)
(0, 0, 0)
(0, 1, 0)
(1, 0, 0)
(1, 1, 0)
(2, 0, 0)
(2, 1, 0)
Methods
ndincr() Increment the multi-dimensional index by one. | numpy.reference.generated.numpy.ndindex |
numpy.nditer class numpy.nditer(op, flags=None, op_flags=None, op_dtypes=None, order='K', casting='safe', op_axes=None, itershape=None, buffersize=0)[source]
Efficient multi-dimensional iterator object to iterate over arrays. To get started using this object, see the introductory guide to array iteration. Parameters
opndarray or sequence of array_like
The array(s) to iterate over.
flagssequence of str, optional
Flags to control the behavior of the iterator.
buffered enables buffering when required.
c_index causes a C-order index to be tracked.
f_index causes a Fortran-order index to be tracked.
multi_index causes a multi-index, or a tuple of indices with one per iteration dimension, to be tracked.
common_dtype causes all the operands to be converted to a common data type, with copying or buffering as necessary.
copy_if_overlap causes the iterator to determine if read operands have overlap with write operands, and make temporary copies as necessary to avoid overlap. False positives (needless copying) are possible in some cases.
delay_bufalloc delays allocation of the buffers until a reset() call is made. Allows allocate operands to be initialized before their values are copied into the buffers.
external_loop causes the values given to be one-dimensional arrays with multiple values instead of zero-dimensional arrays.
grow_inner allows the value array sizes to be made larger than the buffer size when both buffered and external_loop is used.
ranged allows the iterator to be restricted to a sub-range of the iterindex values.
refs_ok enables iteration of reference types, such as object arrays.
reduce_ok enables iteration of readwrite operands which are broadcasted, also known as reduction operands.
zerosize_ok allows itersize to be zero.
op_flagslist of list of str, optional
This is a list of flags for each operand. At minimum, one of readonly, readwrite, or writeonly must be specified.
readonly indicates the operand will only be read from.
readwrite indicates the operand will be read from and written to.
writeonly indicates the operand will only be written to.
no_broadcast prevents the operand from being broadcasted.
contig forces the operand data to be contiguous.
aligned forces the operand data to be aligned.
nbo forces the operand data to be in native byte order.
copy allows a temporary read-only copy if required.
updateifcopy allows a temporary read-write copy if required.
allocate causes the array to be allocated if it is None in the op parameter.
no_subtype prevents an allocate operand from using a subtype.
arraymask indicates that this operand is the mask to use for selecting elements when writing to operands with the ‘writemasked’ flag set. The iterator does not enforce this, but when writing from a buffer back to the array, it only copies those elements indicated by this mask.
writemasked indicates that only elements where the chosen arraymask operand is True will be written to.
overlap_assume_elementwise can be used to mark operands that are accessed only in the iterator order, to allow less conservative copying when copy_if_overlap is present.
op_dtypesdtype or tuple of dtype(s), optional
The required data type(s) of the operands. If copying or buffering is enabled, the data will be converted to/from their original types.
order{‘C’, ‘F’, ‘A’, ‘K’}, optional
Controls the iteration order. ‘C’ means C order, ‘F’ means Fortran order, ‘A’ means ‘F’ order if all the arrays are Fortran contiguous, ‘C’ order otherwise, and ‘K’ means as close to the order the array elements appear in memory as possible. This also affects the element memory order of allocate operands, as they are allocated to be compatible with iteration order. Default is ‘K’.
casting{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional
Controls what kind of data casting may occur when making a copy or buffering. Setting this to ‘unsafe’ is not recommended, as it can adversely affect accumulations. ‘no’ means the data types should not be cast at all. ‘equiv’ means only byte-order changes are allowed. ‘safe’ means only casts which can preserve values are allowed. ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed. ‘unsafe’ means any data conversions may be done.
op_axeslist of list of ints, optional
If provided, it is a list of ints or None for each operand. The list of axes for an operand is a mapping from the dimensions of the iterator to the dimensions of the operand. A value of -1 can be placed for entries, causing that dimension to be treated as newaxis.
itershapetuple of ints, optional
The desired shape of the iterator. This allows allocate operands with a dimension mapped by op_axes not corresponding to a dimension of a different operand to get a value not equal to 1 for that dimension.
buffersizeint, optional
When buffering is enabled, controls the size of the temporary buffers. Set to 0 for the default value. Notes nditer supersedes flatiter. The iterator implementation behind nditer is also exposed by the NumPy C API. The Python exposure supplies two iteration interfaces, one which follows the Python iterator protocol, and another which mirrors the C-style do-while pattern. The native Python approach is better in most cases, but if you need the coordinates or index of an iterator, use the C-style pattern. Examples Here is how we might write an iter_add function, using the Python iterator protocol: >>> def iter_add_py(x, y, out=None):
... addop = np.add
... it = np.nditer([x, y, out], [],
... [['readonly'], ['readonly'], ['writeonly','allocate']])
... with it:
... for (a, b, c) in it:
... addop(a, b, out=c)
... return it.operands[2]
Here is the same function, but following the C-style pattern: >>> def iter_add(x, y, out=None):
... addop = np.add
... it = np.nditer([x, y, out], [],
... [['readonly'], ['readonly'], ['writeonly','allocate']])
... with it:
... while not it.finished:
... addop(it[0], it[1], out=it[2])
... it.iternext()
... return it.operands[2]
Here is an example outer product function: >>> def outer_it(x, y, out=None):
... mulop = np.multiply
... it = np.nditer([x, y, out], ['external_loop'],
... [['readonly'], ['readonly'], ['writeonly', 'allocate']],
... op_axes=[list(range(x.ndim)) + [-1] * y.ndim,
... [-1] * x.ndim + list(range(y.ndim)),
... None])
... with it:
... for (a, b, c) in it:
... mulop(a, b, out=c)
... return it.operands[2]
>>> a = np.arange(2)+1
>>> b = np.arange(3)+1
>>> outer_it(a,b)
array([[1, 2, 3],
[2, 4, 6]])
Here is an example function which operates like a “lambda” ufunc: >>> def luf(lamdaexpr, *args, **kwargs):
... '''luf(lambdaexpr, op1, ..., opn, out=None, order='K', casting='safe', buffersize=0)'''
... nargs = len(args)
... op = (kwargs.get('out',None),) + args
... it = np.nditer(op, ['buffered','external_loop'],
... [['writeonly','allocate','no_broadcast']] +
... [['readonly','nbo','aligned']]*nargs,
... order=kwargs.get('order','K'),
... casting=kwargs.get('casting','safe'),
... buffersize=kwargs.get('buffersize',0))
... while not it.finished:
... it[0] = lambdaexpr(*it[1:])
... it.iternext()
... return it.operands[0]
>>> a = np.arange(5)
>>> b = np.ones(5)
>>> luf(lambda i,j:i*i + j/2, a, b)
array([ 0.5, 1.5, 4.5, 9.5, 16.5])
If operand flags “writeonly” or “readwrite” are used the operands may be views into the original data with the WRITEBACKIFCOPY flag. In this case nditer must be used as a context manager or the nditer.close method must be called before using the result. The temporary data will be written back to the original data when the __exit__ function is called but not before: >>> a = np.arange(6, dtype='i4')[::-2]
>>> with np.nditer(a, [],
... [['writeonly', 'updateifcopy']],
... casting='unsafe',
... op_dtypes=[np.dtype('f4')]) as i:
... x = i.operands[0]
... x[:] = [-1, -2, -3]
... # a still unchanged here
>>> a, x
(array([-1, -2, -3], dtype=int32), array([-1., -2., -3.], dtype=float32))
It is important to note that once the iterator is exited, dangling references (like x in the example) may or may not share data with the original data a. If writeback semantics were active, i.e. if x.base.flags.writebackifcopy is True, then exiting the iterator will sever the connection between x and a, writing to x will no longer write to a. If writeback semantics are not active, then x.data will still point at some part of a.data, and writing to one will affect the other. Context management and the close method appeared in version 1.15.0. Attributes
dtypestuple of dtype(s)
The data types of the values provided in value. This may be different from the operand data types if buffering is enabled. Valid only before the iterator is closed.
finishedbool
Whether the iteration over the operands is finished or not.
has_delayed_bufallocbool
If True, the iterator was created with the delay_bufalloc flag, and no reset() function was called on it yet.
has_indexbool
If True, the iterator was created with either the c_index or the f_index flag, and the property index can be used to retrieve it.
has_multi_indexbool
If True, the iterator was created with the multi_index flag, and the property multi_index can be used to retrieve it. index
When the c_index or f_index flag was used, this property provides access to the index. Raises a ValueError if accessed and has_index is False.
iterationneedsapibool
Whether iteration requires access to the Python API, for example if one of the operands is an object array.
iterindexint
An index which matches the order of iteration.
itersizeint
Size of the iterator. itviews
Structured view(s) of operands in memory, matching the reordered and optimized iterator access pattern. Valid only before the iterator is closed. multi_index
When the multi_index flag was used, this property provides access to the index. Raises a ValueError if accessed and has_multi_index is False.
ndimint
The dimensions of the iterator.
nopint
The number of iterator operands.
operandstuple of operand(s)
operands[Slice]
shapetuple of ints
Shape tuple, the shape of the iterator. value
Value of operands at current iteration. Normally, this is a tuple of array scalars, but if the flag external_loop is used, it is a tuple of one dimensional arrays. Methods
close() Resolve all writeback semantics in writeable operands.
copy() Get a copy of the iterator in its current state.
debug_print() Print the current state of the nditer instance and debug info to stdout.
enable_external_loop() When the "external_loop" was not used during construction, but is desired, this modifies the iterator to behave as if the flag was specified.
iternext() Check whether iterations are left, and perform a single internal iteration without returning the result.
remove_axis(i, /) Removes axis i from the iterator.
remove_multi_index() When the "multi_index" flag was specified, this removes it, allowing the internal iteration structure to be optimized further.
reset() Reset the iterator to its initial state. | numpy.reference.generated.numpy.nditer |
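A small illustrative sketch, assuming the usual import numpy as np: with the multi_index flag the iterator tracks the current coordinates, available through the multi_index property during iteration. >>> a = np.arange(6).reshape(2, 3)
>>> it = np.nditer(a, flags=['multi_index'])
>>> for x in it:
...     print(it.multi_index, x)
(0, 0) 0
(0, 1) 1
(0, 2) 2
(1, 0) 3
(1, 1) 4
(1, 2) 5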
numpy.negative numpy.negative(x, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj]) = <ufunc 'negative'>
Numerical negative, element-wise. Parameters
xarray_like or scalar
Input array.
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
wherearray_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
yndarray or scalar
Returned array or scalar: y = -x. This is a scalar if x is a scalar. Examples >>> np.negative([1.,-1.])
array([-1., 1.])
The unary - operator can be used as a shorthand for np.negative on ndarrays. >>> x1 = np.array(([1., -1.]))
>>> -x1
array([-1., 1.]) | numpy.reference.generated.numpy.negative |
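A minimal supplementary sketch, assuming the usual import numpy as np: combining out and where negates only the selected elements and leaves the rest of the output untouched. >>> x = np.array([1., -2., 3.])
>>> out = np.zeros_like(x)
>>> np.negative(x, out=out, where=x > 0)
array([-1.,  0., -3.])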
numpy.nested_iters numpy.nested_iters(op, axes, flags=None, op_flags=None, op_dtypes=None, order='K', casting='safe', buffersize=0)
Create nditers for use in nested loops Create a tuple of nditer objects which iterate in nested loops over different axes of the op argument. The first iterator is used in the outermost loop, the last in the innermost loop. Advancing one will change the subsequent iterators to point at its new element. Parameters
opndarray or sequence of array_like
The array(s) to iterate over.
axeslist of list of int
Each item is used as an “op_axes” argument to an nditer flags, op_flags, op_dtypes, order, casting, buffersize (optional)
See nditer parameters of the same name Returns
iterstuple of nditer
An nditer for each item in axes, outermost first See also nditer
Examples Basic usage. Note how y is the “flattened” version of [a[:, 0, :], a[:, 1, 0], a[:, 2, :]] since we specified the first iter’s axes as [1] >>> a = np.arange(12).reshape(2, 3, 2)
>>> i, j = np.nested_iters(a, [[1], [0, 2]], flags=["multi_index"])
>>> for x in i:
... print(i.multi_index)
... for y in j:
... print('', j.multi_index, y)
(0,)
(0, 0) 0
(0, 1) 1
(1, 0) 6
(1, 1) 7
(1,)
(0, 0) 2
(0, 1) 3
(1, 0) 8
(1, 1) 9
(2,)
(0, 0) 4
(0, 1) 5
(1, 0) 10
(1, 1) 11 | numpy.reference.generated.numpy.nested_iters |
numpy.newaxis
A convenient alias for None, useful for indexing arrays. Examples >>> newaxis is None
True
>>> x = np.arange(3)
>>> x
array([0, 1, 2])
>>> x[:, newaxis]
array([[0],
[1],
[2]])
>>> x[:, newaxis, newaxis]
array([[[0]],
[[1]],
[[2]]])
>>> x[:, newaxis] * x
array([[0, 0, 0],
[0, 1, 2],
[0, 2, 4]])
Outer product, same as outer(x, y): >>> y = np.arange(3, 6)
>>> x[:, newaxis] * y
array([[ 0, 0, 0],
[ 3, 4, 5],
[ 6, 8, 10]])
x[newaxis, :] is equivalent to x[newaxis] and x[None]: >>> x[newaxis, :].shape
(1, 3)
>>> x[newaxis].shape
(1, 3)
>>> x[None].shape
(1, 3)
>>> x[:, newaxis].shape
(3, 1) | numpy.reference.constants#numpy.newaxis |
numpy.nextafter numpy.nextafter(x1, x2, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj]) = <ufunc 'nextafter'>
Return the next floating-point value after x1 towards x2, element-wise. Parameters
x1array_like
Values to find the next representable value of.
x2array_like
The direction where to look for the next representable value of x1. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
wherearray_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
outndarray or scalar
The next representable values of x1 in the direction of x2. This is a scalar if both x1 and x2 are scalars. Examples >>> eps = np.finfo(np.float64).eps
>>> np.nextafter(1, 2) == eps + 1
True
>>> np.nextafter([1, 2], [2, 1]) == [eps + 1, 2 - eps]
array([ True, True]) | numpy.reference.generated.numpy.nextafter |
numpy.NINF
IEEE 754 floating point representation of negative infinity. Returns yfloat
A floating point representation of negative infinity. See Also isinf : Shows which elements are positive or negative infinity isposinf : Shows which elements are positive infinity isneginf : Shows which elements are negative infinity isnan : Shows which elements are Not a Number isfinite : Shows which elements are finite (not one of Not a Number, positive infinity and negative infinity) Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity. Also that positive infinity is not equivalent to negative infinity. But infinity is equivalent to positive infinity. Examples >>> np.NINF
-inf
>>> np.log(0)
-inf | numpy.reference.constants#numpy.NINF |
numpy.nonzero numpy.nonzero(a)[source]
Return the indices of the elements that are non-zero. Returns a tuple of arrays, one for each dimension of a, containing the indices of the non-zero elements in that dimension. The values in a are always tested and returned in row-major, C-style order. To group the indices by element, rather than dimension, use argwhere, which returns a row for each non-zero element. Note When called on a zero-d array or scalar, nonzero(a) is treated as nonzero(atleast_1d(a)). Deprecated since version 1.17.0: Use atleast_1d explicitly if this behavior is deliberate. Parameters
aarray_like
Input array. Returns
tuple_of_arraystuple
Indices of elements that are non-zero. See also flatnonzero
Return indices that are non-zero in the flattened version of the input array. ndarray.nonzero
Equivalent ndarray method. count_nonzero
Counts the number of non-zero elements in the input array. Notes While the nonzero values can be obtained with a[nonzero(a)], it is recommended to use x[x.astype(bool)] or x[x != 0] instead, which will correctly handle 0-d arrays. Examples >>> x = np.array([[3, 0, 0], [0, 4, 0], [5, 6, 0]])
>>> x
array([[3, 0, 0],
[0, 4, 0],
[5, 6, 0]])
>>> np.nonzero(x)
(array([0, 1, 2, 2]), array([0, 1, 0, 1]))
>>> x[np.nonzero(x)]
array([3, 4, 5, 6])
>>> np.transpose(np.nonzero(x))
array([[0, 0],
[1, 1],
[2, 0],
[2, 1]])
A common use for nonzero is to find the indices of an array, where a condition is True. Given an array a, the condition a > 3 is a boolean array and since False is interpreted as 0, np.nonzero(a > 3) yields the indices of the a where the condition is true. >>> a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> a > 3
array([[False, False, False],
[ True, True, True],
[ True, True, True]])
>>> np.nonzero(a > 3)
(array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
Using this result to index a is equivalent to using the mask directly: >>> a[np.nonzero(a > 3)]
array([4, 5, 6, 7, 8, 9])
>>> a[a > 3] # prefer this spelling
array([4, 5, 6, 7, 8, 9])
nonzero can also be called as a method of the array. >>> (a > 3).nonzero()
(array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) | numpy.reference.generated.numpy.nonzero |
numpy.not_equal numpy.not_equal(x1, x2, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj]) = <ufunc 'not_equal'>
Return (x1 != x2) element-wise. Parameters
x1, x2array_like
Input arrays. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
wherearray_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
outndarray or scalar
Output array, element-wise comparison of x1 and x2. Typically of type bool, unless dtype=object is passed. This is a scalar if both x1 and x2 are scalars. See also
equal, greater, greater_equal, less, less_equal
Examples >>> np.not_equal([1.,2.], [1., 3.])
array([False, True])
>>> np.not_equal([1, 2], [[1, 3],[1, 4]])
array([[False, True],
[False, True]])
The != operator can be used as a shorthand for np.not_equal on ndarrays. >>> a = np.array([1., 2.])
>>> b = np.array([1., 3.])
>>> a != b
array([False, True]) | numpy.reference.generated.numpy.not_equal |
class numpy.number[source]
Abstract base class of all numeric scalar types. | numpy.reference.arrays.scalars#numpy.number |
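A brief usage sketch (added for illustration): the abstract numeric classes such as numpy.number are convenient with np.issubdtype for checking whether a dtype is numeric:
np.issubdtype(np.float64, np.number)              # True
np.issubdtype(np.array([1, 2]).dtype, np.number)  # True
np.issubdtype(np.dtype('U8'), np.number)          # False; strings are not numeric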
numpy.NZERO
IEEE 754 floating point representation of negative zero. Returns yfloat
A floating point representation of negative zero. See Also PZERO : Defines positive zero. isinf : Shows which elements are positive or negative infinity. isposinf : Shows which elements are positive infinity. isneginf : Shows which elements are negative infinity. isnan : Shows which elements are Not a Number. isfinite : Shows which elements are finite - not one of
Not a Number, positive infinity and negative infinity. Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). Negative zero is considered to be a finite number. Examples >>> np.NZERO
-0.0
>>> np.PZERO
0.0
>>> np.isfinite([np.NZERO])
array([ True])
>>> np.isnan([np.NZERO])
array([False])
>>> np.isinf([np.NZERO])
array([False]) | numpy.reference.constants#numpy.NZERO |
numpy.obj2sctype numpy.obj2sctype(rep, default=None)[source]
Return the scalar dtype or NumPy equivalent of Python type of an object. Parameters
repany
The object of which the type is returned.
defaultany, optional
If given, this is returned for objects whose types can not be determined. If not given, None is returned for those objects. Returns
dtypedtype or Python type
The data type of rep. See also
sctype2char, issctype, issubsctype, issubdtype, maximum_sctype
Examples >>> np.obj2sctype(np.int32)
<class 'numpy.int32'>
>>> np.obj2sctype(np.array([1., 2.]))
<class 'numpy.float64'>
>>> np.obj2sctype(np.array([1.j]))
<class 'numpy.complex128'>
>>> np.obj2sctype(dict)
<class 'numpy.object_'>
>>> np.obj2sctype('string')
>>> np.obj2sctype(1, default=list)
<class 'list'> | numpy.reference.generated.numpy.obj2sctype |
class numpy.object_[source]
Any Python object. Character code
'O' | numpy.reference.arrays.scalars#numpy.object_ |
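For illustration (an addition, not part of the original entry): an array with dtype=object stores references to arbitrary Python objects rather than fixed-size numeric data:
a = np.array([{'a': 1}, [1, 2, 3], 'text'], dtype=object)
a.dtype                      # dtype('O')
[type(item) for item in a]   # [dict, list, str]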
numpy.ogrid numpy.ogrid = <numpy.lib.index_tricks.OGridClass object>
nd_grid instance which returns an open multi-dimensional “meshgrid”. An instance of numpy.lib.index_tricks.nd_grid which returns an open (i.e. not fleshed out) mesh-grid when indexed, so that only one dimension of each returned array is greater than 1. The dimension and number of the output arrays are equal to the number of indexing dimensions. If the step length is not a complex number, then the stop is not inclusive. However, if the step length is a complex number (e.g. 5j), then the integer part of its magnitude is interpreted as specifying the number of points to create between the start and stop values, where the stop value is inclusive. Returns
mesh-grid
ndarrays with only one dimension not equal to 1 See also np.lib.index_tricks.nd_grid
class of ogrid and mgrid objects mgrid
like ogrid but returns dense (or fleshed out) mesh grids r_
array concatenator Examples >>> from numpy import ogrid
>>> ogrid[-1:1:5j]
array([-1. , -0.5, 0. , 0.5, 1. ])
>>> ogrid[0:5,0:5]
[array([[0],
[1],
[2],
[3],
[4]]), array([[0, 1, 2, 3, 4]])] | numpy.reference.generated.numpy.ogrid |
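As an additional sketch, the open grids returned by ogrid rely on broadcasting, which is the main reason to prefer them over mgrid when memory matters; for example, evaluating a function on a 2-D grid:
y, x = np.ogrid[0:5, 0:5]   # y.shape == (5, 1), x.shape == (1, 5)
r2 = x**2 + y**2            # broadcasting produces the full (5, 5) result
r2.shape                    # (5, 5)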
numpy.ones numpy.ones(shape, dtype=None, order='C', *, like=None)[source]
Return a new array of given shape and type, filled with ones. Parameters
shapeint or sequence of ints
Shape of the new array, e.g., (2, 3) or 2.
dtypedata-type, optional
The desired data-type for the array, e.g., numpy.int8. Default is numpy.float64.
order{‘C’, ‘F’}, optional, default: C
Whether to store multi-dimensional data in row-major (C-style) or column-major (Fortran-style) order in memory.
likearray_like
Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as like supports the __array_function__ protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns
outndarray
Array of ones with the given shape, dtype, and order. See also ones_like
Return an array of ones with shape and type of input. empty
Return a new uninitialized array. zeros
Return a new array setting values to zero. full
Return a new array of given shape filled with value. Examples >>> np.ones(5)
array([1., 1., 1., 1., 1.])
>>> np.ones((5,), dtype=int)
array([1, 1, 1, 1, 1])
>>> np.ones((2, 1))
array([[1.],
[1.]])
>>> s = (2,2)
>>> np.ones(s)
array([[1., 1.],
[1., 1.]]) | numpy.reference.generated.numpy.ones |
numpy.ones_like numpy.ones_like(a, dtype=None, order='K', subok=True, shape=None)[source]
Return an array of ones with the same shape and type as a given array. Parameters
aarray_like
The shape and data-type of a define these same attributes of the returned array.
dtypedata-type, optional
Overrides the data type of the result. New in version 1.6.0.
order{‘C’, ‘F’, ‘A’, or ‘K’}, optional
Overrides the memory layout of the result. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if a is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of a as closely as possible. New in version 1.6.0.
subokbool, optional.
If True, then the newly created array will use the sub-class type of a, otherwise it will be a base-class array. Defaults to True.
shapeint or sequence of ints, optional.
Overrides the shape of the result. If order=’K’ and the number of dimensions is unchanged, will try to keep order, otherwise, order=’C’ is implied. New in version 1.17.0. Returns
outndarray
Array of ones with the same shape and type as a. See also empty_like
Return an empty array with shape and type of input. zeros_like
Return an array of zeros with shape and type of input. full_like
Return a new array with shape of input filled with value. ones
Return a new array setting values to one. Examples >>> x = np.arange(6)
>>> x = x.reshape((2, 3))
>>> x
array([[0, 1, 2],
[3, 4, 5]])
>>> np.ones_like(x)
array([[1, 1, 1],
[1, 1, 1]])
>>> y = np.arange(3, dtype=float)
>>> y
array([0., 1., 2.])
>>> np.ones_like(y)
array([1., 1., 1.]) | numpy.reference.generated.numpy.ones_like |
numpy.outer numpy.outer(a, b, out=None)[source]
Compute the outer product of two vectors. Given two vectors, a = [a0, a1, ..., aM] and b = [b0, b1, ..., bN], the outer product [1] is: [[a0*b0 a0*b1 ... a0*bN ]
[a1*b0 .
[ ... .
[aM*b0 aM*bN ]]
Parameters
a(M,) array_like
First input vector. Input is flattened if not already 1-dimensional.
b(N,) array_like
Second input vector. Input is flattened if not already 1-dimensional.
out(M, N) ndarray, optional
A location where the result is stored New in version 1.9.0. Returns
out(M, N) ndarray
out[i, j] = a[i] * b[j] See also inner
einsum
einsum('i,j->ij', a.ravel(), b.ravel()) is the equivalent. ufunc.outer
A generalization to dimensions other than 1D and other operations. np.multiply.outer(a.ravel(), b.ravel()) is the equivalent. tensordot
np.tensordot(a.ravel(), b.ravel(), axes=((), ())) is the equivalent. References 1
G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed., Baltimore, MD, Johns Hopkins University Press, 1996, pg. 8. Examples Make a (very coarse) grid for computing a Mandelbrot set: >>> rl = np.outer(np.ones((5,)), np.linspace(-2, 2, 5))
>>> rl
array([[-2., -1., 0., 1., 2.],
[-2., -1., 0., 1., 2.],
[-2., -1., 0., 1., 2.],
[-2., -1., 0., 1., 2.],
[-2., -1., 0., 1., 2.]])
>>> im = np.outer(1j*np.linspace(2, -2, 5), np.ones((5,)))
>>> im
array([[0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j],
[0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j],
[0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
[0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j],
[0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j]])
>>> grid = rl + im
>>> grid
array([[-2.+2.j, -1.+2.j, 0.+2.j, 1.+2.j, 2.+2.j],
[-2.+1.j, -1.+1.j, 0.+1.j, 1.+1.j, 2.+1.j],
[-2.+0.j, -1.+0.j, 0.+0.j, 1.+0.j, 2.+0.j],
[-2.-1.j, -1.-1.j, 0.-1.j, 1.-1.j, 2.-1.j],
[-2.-2.j, -1.-2.j, 0.-2.j, 1.-2.j, 2.-2.j]])
An example using a “vector” of letters: >>> x = np.array(['a', 'b', 'c'], dtype=object)
>>> np.outer(x, [1, 2, 3])
array([['a', 'aa', 'aaa'],
['b', 'bb', 'bbb'],
['c', 'cc', 'ccc']], dtype=object) | numpy.reference.generated.numpy.outer |
numpy.packbits numpy.packbits(a, /, axis=None, bitorder='big')
Packs the elements of a binary-valued array into bits in a uint8 array. The result is padded to full bytes by inserting zero bits at the end. Parameters
aarray_like
An array of integers or booleans whose elements should be packed to bits.
axisint, optional
The dimension over which bit-packing is done. None implies packing the flattened array.
bitorder{‘big’, ‘little’}, optional
The order of the input bits. ‘big’ will mimic bin(val), [0, 0, 0, 0, 0, 0, 1, 1] => 3 = 0b00000011, ‘little’ will reverse the order so [1, 1, 0, 0, 0, 0, 0, 0] => 3. Defaults to ‘big’. New in version 1.17.0. Returns
packedndarray
Array of type uint8 whose elements represent bits corresponding to the logical (0 or nonzero) value of the input elements. The shape of packed has the same number of dimensions as the input (unless axis is None, in which case the output is 1-D). See also unpackbits
Unpacks elements of a uint8 array into a binary-valued output array. Examples >>> a = np.array([[[1,0,1],
... [0,1,0]],
... [[1,1,0],
... [0,0,1]]])
>>> b = np.packbits(a, axis=-1)
>>> b
array([[[160],
[ 64]],
[[192],
[ 32]]], dtype=uint8)
Note that in binary 160 = 1010 0000, 64 = 0100 0000, 192 = 1100 0000, and 32 = 0010 0000. | numpy.reference.generated.numpy.packbits |
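A small round-trip sketch (illustrative, continuing the example above and assuming NumPy >= 1.17 for the count argument of unpackbits): unpacking and trimming the zero padding recovers the original bits:
bits = np.unpackbits(b, axis=-1, count=a.shape[-1])   # drop the zero padding
np.array_equal(bits, a)                               # True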
numpy.pad numpy.pad(array, pad_width, mode='constant', **kwargs)[source]
Pad an array. Parameters
arrayarray_like of rank N
The array to pad.
pad_width{sequence, array_like, int}
Number of values padded to the edges of each axis. ((before_1, after_1), … (before_N, after_N)) unique pad widths for each axis. ((before, after),) yields same before and after pad for each axis. (pad,) or int is a shortcut for before = after = pad width for all axes.
modestr or function, optional
One of the following string values or a user supplied function. ‘constant’ (default)
Pads with a constant value. ‘edge’
Pads with the edge values of array. ‘linear_ramp’
Pads with the linear ramp between end_value and the array edge value. ‘maximum’
Pads with the maximum value of all or part of the vector along each axis. ‘mean’
Pads with the mean value of all or part of the vector along each axis. ‘median’
Pads with the median value of all or part of the vector along each axis. ‘minimum’
Pads with the minimum value of all or part of the vector along each axis. ‘reflect’
Pads with the reflection of the vector mirrored on the first and last values of the vector along each axis. ‘symmetric’
Pads with the reflection of the vector mirrored along the edge of the array. ‘wrap’
Pads with the wrap of the vector along the axis. The first values are used to pad the end and the end values are used to pad the beginning. ‘empty’
Pads with undefined values. New in version 1.17. <function>
Padding function, see Notes.
stat_lengthsequence or int, optional
Used in ‘maximum’, ‘mean’, ‘median’, and ‘minimum’. Number of values at edge of each axis used to calculate the statistic value. ((before_1, after_1), … (before_N, after_N)) unique statistic lengths for each axis. ((before, after),) yields same before and after statistic lengths for each axis. (stat_length,) or int is a shortcut for before = after = statistic length for all axes. Default is None, to use the entire axis.
constant_valuessequence or scalar, optional
Used in ‘constant’. The values to set the padded values for each axis. ((before_1, after_1), ... (before_N, after_N)) unique pad constants for each axis. ((before, after),) yields same before and after constants for each axis. (constant,) or constant is a shortcut for before = after = constant for all axes. Default is 0.
end_valuessequence or scalar, optional
Used in ‘linear_ramp’. The values used for the ending value of the linear_ramp and that will form the edge of the padded array. ((before_1, after_1), ... (before_N, after_N)) unique end values for each axis. ((before, after),) yields same before and after end values for each axis. (constant,) or constant is a shortcut for before = after = constant for all axes. Default is 0.
reflect_type{‘even’, ‘odd’}, optional
Used in ‘reflect’, and ‘symmetric’. The ‘even’ style is the default with an unaltered reflection around the edge value. For the ‘odd’ style, the extended part of the array is created by subtracting the reflected values from two times the edge value. Returns
padndarray
Padded array of rank equal to array with shape increased according to pad_width. Notes New in version 1.7.0. For an array with rank greater than 1, some of the padding of later axes is calculated from padding of previous axes. This is easiest to think about with a rank 2 array where the corners of the padded array are calculated by using padded values from the first axis. The padding function, if used, should modify a rank 1 array in-place. It has the following signature: padding_func(vector, iaxis_pad_width, iaxis, kwargs)
where vectorndarray
A rank 1 array already padded with zeros. Padded values are vector[:iaxis_pad_width[0]] and vector[-iaxis_pad_width[1]:]. iaxis_pad_widthtuple
A 2-tuple of ints, iaxis_pad_width[0] represents the number of values padded at the beginning of vector where iaxis_pad_width[1] represents the number of values padded at the end of vector. iaxisint
The axis currently being calculated. kwargsdict
Any keyword arguments the function requires. Examples >>> a = [1, 2, 3, 4, 5]
>>> np.pad(a, (2, 3), 'constant', constant_values=(4, 6))
array([4, 4, 1, ..., 6, 6, 6])
>>> np.pad(a, (2, 3), 'edge')
array([1, 1, 1, ..., 5, 5, 5])
>>> np.pad(a, (2, 3), 'linear_ramp', end_values=(5, -4))
array([ 5, 3, 1, 2, 3, 4, 5, 2, -1, -4])
>>> np.pad(a, (2,), 'maximum')
array([5, 5, 1, 2, 3, 4, 5, 5, 5])
>>> np.pad(a, (2,), 'mean')
array([3, 3, 1, 2, 3, 4, 5, 3, 3])
>>> np.pad(a, (2,), 'median')
array([3, 3, 1, 2, 3, 4, 5, 3, 3])
>>> a = [[1, 2], [3, 4]]
>>> np.pad(a, ((3, 2), (2, 3)), 'minimum')
array([[1, 1, 1, 2, 1, 1, 1],
[1, 1, 1, 2, 1, 1, 1],
[1, 1, 1, 2, 1, 1, 1],
[1, 1, 1, 2, 1, 1, 1],
[3, 3, 3, 4, 3, 3, 3],
[1, 1, 1, 2, 1, 1, 1],
[1, 1, 1, 2, 1, 1, 1]])
>>> a = [1, 2, 3, 4, 5]
>>> np.pad(a, (2, 3), 'reflect')
array([3, 2, 1, 2, 3, 4, 5, 4, 3, 2])
>>> np.pad(a, (2, 3), 'reflect', reflect_type='odd')
array([-1, 0, 1, 2, 3, 4, 5, 6, 7, 8])
>>> np.pad(a, (2, 3), 'symmetric')
array([2, 1, 1, 2, 3, 4, 5, 5, 4, 3])
>>> np.pad(a, (2, 3), 'symmetric', reflect_type='odd')
array([0, 1, 1, 2, 3, 4, 5, 5, 6, 7])
>>> np.pad(a, (2, 3), 'wrap')
array([4, 5, 1, 2, 3, 4, 5, 1, 2, 3])
>>> def pad_with(vector, pad_width, iaxis, kwargs):
... pad_value = kwargs.get('padder', 10)
... vector[:pad_width[0]] = pad_value
... vector[-pad_width[1]:] = pad_value
>>> a = np.arange(6)
>>> a = a.reshape((2, 3))
>>> np.pad(a, 2, pad_with)
array([[10, 10, 10, 10, 10, 10, 10],
[10, 10, 10, 10, 10, 10, 10],
[10, 10, 0, 1, 2, 10, 10],
[10, 10, 3, 4, 5, 10, 10],
[10, 10, 10, 10, 10, 10, 10],
[10, 10, 10, 10, 10, 10, 10]])
>>> np.pad(a, 2, pad_with, padder=100)
array([[100, 100, 100, 100, 100, 100, 100],
[100, 100, 100, 100, 100, 100, 100],
[100, 100, 0, 1, 2, 100, 100],
[100, 100, 3, 4, 5, 100, 100],
[100, 100, 100, 100, 100, 100, 100],
[100, 100, 100, 100, 100, 100, 100]]) | numpy.reference.generated.numpy.pad |
numpy.partition numpy.partition(a, kth, axis=- 1, kind='introselect', order=None)[source]
Return a partitioned copy of an array. Creates a copy of the array with its elements rearranged in such a way that the value of the element in k-th position is in the position it would be in a sorted array. All elements smaller than the k-th element are moved before this element and all equal or greater are moved behind it. The ordering of the elements in the two partitions is undefined. New in version 1.8.0. Parameters
aarray_like
Array to be sorted.
kthint or sequence of ints
Element index to partition by. The k-th value of the element will be in its final sorted position and all smaller elements will be moved before it and all equal or greater elements behind it. The order of all elements in the partitions is undefined. If provided with a sequence of k-th it will partition all elements indexed by k-th of them into their sorted position at once. Deprecated since version 1.22.0: Passing booleans as index is deprecated.
axisint or None, optional
Axis along which to sort. If None, the array is flattened before sorting. The default is -1, which sorts along the last axis.
kind{‘introselect’}, optional
Selection algorithm. Default is ‘introselect’.
orderstr or list of str, optional
When a is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string. Not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties. Returns
partitioned_arrayndarray
Array of the same type and shape as a. See also ndarray.partition
Method to sort an array in-place. argpartition
Indirect partition. sort
Full sorting Notes The various selection algorithms are characterized by their average speed, worst case performance, work space size, and whether they are stable. A stable sort keeps items with the same key in the same relative order. The available algorithms have the following properties:
kind            speed   worst case   work space   stable
‘introselect’     1       O(n)           0          no
All the partition algorithms make temporary copies of the data when partitioning along any but the last axis. Consequently, partitioning along the last axis is faster and uses less space than partitioning along any other axis. The sort order for complex numbers is lexicographic. If both the real and imaginary parts are non-nan then the order is determined by the real parts except when they are equal, in which case the order is determined by the imaginary parts. Examples >>> a = np.array([3, 4, 2, 1])
>>> np.partition(a, 3)
array([2, 1, 3, 4])
>>> np.partition(a, (1, 3))
array([1, 2, 3, 4]) | numpy.reference.generated.numpy.partition |
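A typical use (a sketch added for illustration): selecting the k smallest values of an array without a full sort, which is what makes partition attractive for large inputs:
data = np.array([7, 1, 9, 4, 3, 8, 2])
k = 3
np.partition(data, k - 1)[:k]     # the 3 smallest values; order within the slice is unspecified
np.argpartition(data, k - 1)[:k]  # their positions in data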
numpy.percentile numpy.percentile(a, q, axis=None, out=None, overwrite_input=False, method='linear', keepdims=False, *, interpolation=None)[source]
Compute the q-th percentile of the data along the specified axis. Returns the q-th percentile(s) of the array elements. Parameters
aarray_like
Input array or object that can be converted to an array.
qarray_like of float
Percentile or sequence of percentiles to compute, which must be between 0 and 100 inclusive.
axis{int, tuple of int, None}, optional
Axis or axes along which the percentiles are computed. The default is to compute the percentile(s) along a flattened version of the array. Changed in version 1.9.0: A tuple of axes is supported
outndarray, optional
Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary.
overwrite_inputbool, optional
If True, then allow the input array a to be modified by intermediate calculations, to save memory. In this case, the contents of the input a after this function completes is undefined.
methodstr, optional
This parameter specifies the method to use for estimating the percentile. There are many different methods, some unique to NumPy. See the notes for explanation. The options sorted by their R type as summarized in the H&F paper [1] are: ‘inverted_cdf’ ‘averaged_inverted_cdf’ ‘closest_observation’ ‘interpolated_inverted_cdf’ ‘hazen’ ‘weibull’ ‘linear’ (default) ‘median_unbiased’ ‘normal_unbiased’ The first three methods are discontinuous. NumPy further defines the following discontinuous variations of the default ‘linear’ (7.) option: ‘lower’ ‘higher’ ‘midpoint’ ‘nearest’ Changed in version 1.22.0: This argument was previously called “interpolation” and only offered the “linear” default and last four options.
keepdimsbool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original array a. New in version 1.9.0.
interpolationstr, optional
Deprecated name for the method keyword argument. Deprecated since version 1.22.0. Returns
percentilescalar or ndarray
If q is a single percentile and axis=None, then the result is a scalar. If multiple percentiles are given, first axis of the result corresponds to the percentiles. The other axes are the axes that remain after the reduction of a. If the input contains integers or floats smaller than float64, the output data-type is float64. Otherwise, the output data-type is the same as that of the input. If out is specified, that array is returned instead. See also mean
median
equivalent to percentile(..., 50) nanpercentile
quantile
equivalent to percentile, except q in the range [0, 1]. Notes Given a vector V of length N, the q-th percentile of V is the value q/100 of the way from the minimum to the maximum in a sorted copy of V. The values and distances of the two nearest neighbors as well as the method parameter will determine the percentile if the normalized ranking does not match the location of q exactly. This function is the same as the median if q=50, the same as the minimum if q=0 and the same as the maximum if q=100. The optional method parameter specifies the method to use when the desired quantile lies between two data points i < j. Here g is the fractional part of the index surrounded by i and j, and alpha and beta are correction constants modifying i and j. Below, ‘q’ is the quantile value, ‘n’ is the sample size and alpha and beta are constants. The following formula gives an interpolation “i + g” of where the quantile would be in the sorted sample, with ‘i’ being the floor and ‘g’ the fractional part of the result. \[i + g = (q - alpha) / ( n - alpha - beta + 1 )\] The different methods then work as follows. inverted_cdf:
method 1 of H&F [1]. This method gives discontinuous results: * if g > 0 ; then take j * if g = 0 ; then take i averaged_inverted_cdf:
method 2 of H&F [1]. This method gives discontinuous results: * if g > 0 ; then take j * if g = 0 ; then average between bounds closest_observation:
method 3 of H&F [1]. This method gives discontinuous results: * if g > 0 ; then take j * if g = 0 and index is odd ; then take j * if g = 0 and index is even ; then take i interpolated_inverted_cdf:
method 4 of H&F [1]. This method gives continuous results using: * alpha = 0 * beta = 1 hazen:
method 5 of H&F [1]. This method gives continuous results using: * alpha = 1/2 * beta = 1/2 weibull:
method 6 of H&F [1]. This method gives continuous results using: * alpha = 0 * beta = 0 linear:
method 7 of H&F [1]. This method gives continuous results using: * alpha = 1 * beta = 1 median_unbiased:
method 8 of H&F [1]. This method is probably the best method if the sample distribution function is unknown (see reference). This method gives continuous results using: * alpha = 1/3 * beta = 1/3 normal_unbiased:
method 9 of H&F [1]. This method is probably the best method if the sample distribution function is known to be normal. This method gives continuous results using: * alpha = 3/8 * beta = 3/8 lower:
NumPy method kept for backwards compatibility. Takes i as the interpolation point. higher:
NumPy method kept for backwards compatibility. Takes j as the interpolation point. nearest:
NumPy method kept for backwards compatibility. Takes i or j, whichever is nearest. midpoint:
NumPy method kept for backwards compatibility. Uses (i + j) / 2. References
1
R. J. Hyndman and Y. Fan, “Sample quantiles in statistical packages,” The American Statistician, 50(4), pp. 361-365, 1996 Examples >>> a = np.array([[10, 7, 4], [3, 2, 1]])
>>> a
array([[10, 7, 4],
[ 3, 2, 1]])
>>> np.percentile(a, 50)
3.5
>>> np.percentile(a, 50, axis=0)
array([6.5, 4.5, 2.5])
>>> np.percentile(a, 50, axis=1)
array([7., 2.])
>>> np.percentile(a, 50, axis=1, keepdims=True)
array([[7.],
[2.]])
>>> m = np.percentile(a, 50, axis=0)
>>> out = np.zeros_like(m)
>>> np.percentile(a, 50, axis=0, out=out)
array([6.5, 4.5, 2.5])
>>> m
array([6.5, 4.5, 2.5])
>>> b = a.copy()
>>> np.percentile(b, 50, axis=1, overwrite_input=True)
array([7., 2.])
>>> assert not np.all(a == b)
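Before the graphical comparison, a small numeric sketch (added here for illustration; it assumes NumPy >= 1.22 where the method keyword is available) of how a few methods differ on the same data:
x = np.array([1, 2, 3, 4])
np.percentile(x, 40)                     # 2.2  ('linear', the default)
np.percentile(x, 40, method='lower')     # 2.0  (takes the lower data point)
np.percentile(x, 40, method='higher')    # 3.0  (takes the higher data point)
np.percentile(x, 40, method='midpoint')  # 2.5  (average of the two)
np.percentile(x, 40, method='nearest')   # 2.0  (closest data point)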
The different methods can be visualized graphically:
import matplotlib.pyplot as plt
a = np.arange(4)
p = np.linspace(0, 100, 6001)
ax = plt.gca()
lines = [
('linear', '-', 'C0'),
('inverted_cdf', ':', 'C1'),
# Almost the same as `inverted_cdf`:
('averaged_inverted_cdf', '-.', 'C1'),
('closest_observation', ':', 'C2'),
('interpolated_inverted_cdf', '--', 'C1'),
('hazen', '--', 'C3'),
('weibull', '-.', 'C4'),
('median_unbiased', '--', 'C5'),
('normal_unbiased', '-.', 'C6'),
]
for method, style, color in lines:
ax.plot(
p, np.percentile(a, p, method=method),
label=method, linestyle=style, color=color)
ax.set(
title='Percentiles for different methods and data: ' + str(a),
xlabel='Percentile',
ylabel='Estimated percentile value',
yticks=a)
ax.legend()
plt.show() | numpy.reference.generated.numpy.percentile |
numpy.pi
pi = 3.1415926535897932384626433... References https://en.wikipedia.org/wiki/Pi | numpy.reference.constants#numpy.pi |
numpy.piecewise numpy.piecewise(x, condlist, funclist, *args, **kw)[source]
Evaluate a piecewise-defined function. Given a set of conditions and corresponding functions, evaluate each function on the input data wherever its condition is true. Parameters
xndarray or scalar
The input domain.
condlistlist of bool arrays or bool scalars
Each boolean array corresponds to a function in funclist. Wherever condlist[i] is True, funclist[i](x) is used as the output value. Each boolean array in condlist selects a piece of x, and should therefore be of the same shape as x. The length of condlist must correspond to that of funclist. If one extra function is given, i.e. if len(funclist) == len(condlist) + 1, then that extra function is the default value, used wherever all conditions are false.
funclistlist of callables, f(x,*args,**kw), or scalars
Each function is evaluated over x wherever its corresponding condition is True. It should take a 1d array as input and give a 1d array or a scalar value as output. If, instead of a callable, a scalar is provided then a constant function (lambda x: scalar) is assumed.
argstuple, optional
Any further arguments given to piecewise are passed to the functions upon execution, i.e., if called piecewise(..., ..., 1, 'a'), then each function is called as f(x, 1, 'a').
kwdict, optional
Keyword arguments used in calling piecewise are passed to the functions upon execution, i.e., if called piecewise(..., ..., alpha=1), then each function is called as f(x, alpha=1). Returns
outndarray
The output is the same shape and type as x and is found by calling the functions in funclist on the appropriate portions of x, as defined by the boolean arrays in condlist. Portions not covered by any condition have a default value of 0. See also
choose, select, where
Notes This is similar to choose or select, except that functions are evaluated on elements of x that satisfy the corresponding condition from condlist. The result is:
      |--
      |funclist[0](x[condlist[0]])
out = |funclist[1](x[condlist[1]])
      |...
      |funclist[n2](x[condlist[n2]])
      |--
Examples Define the sigma function, which is -1 for x < 0 and +1 for x >= 0. >>> x = np.linspace(-2.5, 2.5, 6)
>>> np.piecewise(x, [x < 0, x >= 0], [-1, 1])
array([-1., -1., -1., 1., 1., 1.])
Define the absolute value, which is -x for x <0 and x for x >= 0. >>> np.piecewise(x, [x < 0, x >= 0], [lambda x: -x, lambda x: x])
array([2.5, 1.5, 0.5, 0.5, 1.5, 2.5])
Apply the same function to a scalar value. >>> y = -2
>>> np.piecewise(y, [y < 0, y >= 0], [lambda x: -x, lambda x: x])
array(2) | numpy.reference.generated.numpy.piecewise |
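One more sketch (an illustrative addition): extra positional arguments given to piecewise are forwarded to every function in funclist, for example a leaky ramp whose negative-side slope is passed as an argument:
x = np.linspace(-2, 2, 5)
np.piecewise(x, [x < 0, x >= 0], [lambda v, slope: slope * v, lambda v, slope: v], 0.1)
# array([-0.2, -0.1,  0. ,  1. ,  2. ])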
numpy.PINF
IEEE 754 floating point representation of (positive) infinity. Use inf because Inf, Infinity, PINF and infty are aliases for inf. For more details, see inf. See Also inf | numpy.reference.constants#numpy.PINF |
numpy.place numpy.place(arr, mask, vals)[source]
Change elements of an array based on conditional and input values. Similar to np.copyto(arr, vals, where=mask), the difference is that place uses the first N elements of vals, where N is the number of True values in mask, while copyto uses the elements where mask is True. Note that extract does the exact opposite of place. Parameters
arrndarray
Array to put data into.
maskarray_like
Boolean mask array. Must have the same size as a.
vals1-D sequence
Values to put into a. Only the first N elements are used, where N is the number of True values in mask. If vals is smaller than N, it will be repeated, and if elements of a are to be masked, this sequence must be non-empty. See also
copyto, put, take, extract
Examples >>> arr = np.arange(6).reshape(2, 3)
>>> np.place(arr, arr>2, [44, 55])
>>> arr
array([[ 0, 1, 2],
[44, 55, 44]]) | numpy.reference.generated.numpy.place |
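To make the contrast with copyto concrete, a small side-by-side sketch (an illustrative addition): place consumes vals in order, while copyto reads vals at the masked positions:
arr1 = np.arange(6)
np.place(arr1, arr1 > 2, [10, 20])   # vals repeated as needed: 10, 20, 10
arr1                                 # array([ 0,  1,  2, 10, 20, 10])
arr2 = np.arange(6)
np.copyto(arr2, np.array([0, 0, 0, 10, 20, 30]), where=arr2 > 2)
arr2                                 # array([ 0,  1,  2, 10, 20, 30])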
numpy.poly numpy.poly(seq_of_zeros)[source]
Find the coefficients of a polynomial with the given sequence of roots. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in numpy.polynomial is preferred. A summary of the differences can be found in the transition guide. Returns the coefficients of the polynomial whose leading coefficient is one for the given sequence of zeros (multiple roots must be included in the sequence as many times as their multiplicity; see Examples). A square matrix (or array, which will be treated as a matrix) can also be given, in which case the coefficients of the characteristic polynomial of the matrix are returned. Parameters
seq_of_zerosarray_like, shape (N,) or (N, N)
A sequence of polynomial roots, or a square array or matrix object. Returns
cndarray
1D array of polynomial coefficients from highest to lowest degree: c[0] * x**(N) + c[1] * x**(N-1) + ... + c[N-1] * x + c[N] where c[0] always equals 1. Raises
ValueError
If input is the wrong shape (the input must be a 1-D or square 2-D array). See also polyval
Compute polynomial values. roots
Return the roots of a polynomial. polyfit
Least squares polynomial fit. poly1d
A one-dimensional polynomial class. Notes Specifying the roots of a polynomial still leaves one degree of freedom, typically represented by an undetermined leading coefficient. [1] In the case of this function, that coefficient - the first one in the returned array - is always taken as one. (If for some reason you have one other point, the only automatic way presently to leverage that information is to use polyfit.) The characteristic polynomial, \(p_a(t)\), of an n-by-n matrix A is given by \(p_a(t) = \mathrm{det}(t\, \mathbf{I} - \mathbf{A})\), where I is the n-by-n identity matrix. [2] References 1
M. Sullivan and M. Sullivan, III, “Algebra and Trigonometry, Enhanced With Graphing Utilities,” Prentice-Hall, pg. 318, 1996. 2
G. Strang, “Linear Algebra and Its Applications, 2nd Edition,” Academic Press, pg. 182, 1980. Examples Given a sequence of a polynomial’s zeros: >>> np.poly((0, 0, 0)) # Multiple root example
array([1., 0., 0., 0.])
The line above represents z**3 + 0*z**2 + 0*z + 0. >>> np.poly((-1./2, 0, 1./2))
array([ 1. , 0. , -0.25, 0. ])
The line above represents z**3 - z/4 >>> np.poly((np.random.random(1)[0], 0, np.random.random(1)[0]))
array([ 1. , -0.77086955, 0.08618131, 0. ]) # random
Given a square array object: >>> P = np.array([[0, 1./3], [-1./2, 0]])
>>> np.poly(P)
array([1. , 0. , 0.16666667])
Note how in all cases the leading coefficient is always 1. | numpy.reference.generated.numpy.poly |
numpy.poly1d class numpy.poly1d(c_or_r, r=False, variable=None)[source]
A one-dimensional polynomial class. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in numpy.polynomial is preferred. A summary of the differences can be found in the transition guide. A convenience class, used to encapsulate “natural” operations on polynomials so that said operations may take on their customary form in code (see Examples). Parameters
c_or_rarray_like
The polynomial’s coefficients, in decreasing powers, or if the value of the second parameter is True, the polynomial’s roots (values where the polynomial evaluates to 0). For example, poly1d([1, 2, 3]) returns an object that represents \(x^2 + 2x + 3\), whereas poly1d([1, 2, 3], True) returns one that represents \((x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x -6\).
rbool, optional
If True, c_or_r specifies the polynomial’s roots; the default is False.
variablestr, optional
Changes the variable used when printing p from x to variable (see Examples). Examples Construct the polynomial \(x^2 + 2x + 3\): >>> p = np.poly1d([1, 2, 3])
>>> print(np.poly1d(p))
2
1 x + 2 x + 3
Evaluate the polynomial at \(x = 0.5\): >>> p(0.5)
4.25
Find the roots: >>> p.r
array([-1.+1.41421356j, -1.-1.41421356j])
>>> p(p.r)
array([ -4.44089210e-16+0.j, -4.44089210e-16+0.j]) # may vary
These numbers in the previous line represent (0, 0) to machine precision Show the coefficients: >>> p.c
array([1, 2, 3])
Display the order (the leading zero-coefficients are removed): >>> p.order
2
Show the coefficient of the k-th power in the polynomial (which is equivalent to p.c[-(i+1)]): >>> p[1]
2
Polynomials can be added, subtracted, multiplied, and divided (returns quotient and remainder): >>> p * p
poly1d([ 1, 4, 10, 12, 9])
>>> (p**3 + 4) / p
(poly1d([ 1., 4., 10., 12., 9.]), poly1d([4.]))
asarray(p) gives the coefficient array, so polynomials can be used in all functions that accept arrays: >>> p**2 # square of polynomial
poly1d([ 1, 4, 10, 12, 9])
>>> np.square(p) # square of individual coefficients
array([1, 4, 9])
The variable used in the string representation of p can be modified, using the variable parameter: >>> p = np.poly1d([1,2,3], variable='z')
>>> print(p)
2
1 z + 2 z + 3
Construct a polynomial from its roots: >>> np.poly1d([1, 2], True)
poly1d([ 1., -3., 2.])
This is the same polynomial as obtained by: >>> np.poly1d([1, -1]) * np.poly1d([1, -2])
poly1d([ 1, -3, 2])
Attributes
c
The polynomial coefficients coef
The polynomial coefficients coefficients
The polynomial coefficients coeffs
The polynomial coefficients o
The order or degree of the polynomial order
The order or degree of the polynomial r
The roots of the polynomial, where self(x) == 0 roots
The roots of the polynomial, where self(x) == 0 variable
The name of the polynomial variable Methods
__call__(val) Call self as a function.
deriv([m]) Return a derivative of this polynomial.
integ([m, k]) Return an antiderivative (indefinite integral) of this polynomial. | numpy.reference.generated.numpy.poly1d |
numpy.polyadd numpy.polyadd(a1, a2)[source]
Find the sum of two polynomials. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in numpy.polynomial is preferred. A summary of the differences can be found in the transition guide. Returns the polynomial resulting from the sum of two input polynomials. Each input must be either a poly1d object or a 1D sequence of polynomial coefficients, from highest to lowest degree. Parameters
a1, a2array_like or poly1d object
Input polynomials. Returns
outndarray or poly1d object
The sum of the inputs. If either input is a poly1d object, then the output is also a poly1d object. Otherwise, it is a 1D array of polynomial coefficients from highest to lowest degree. See also poly1d
A one-dimensional polynomial class.
poly, polyadd, polyder, polydiv, polyfit, polyint, polysub, polyval
Examples >>> np.polyadd([1, 2], [9, 5, 4])
array([9, 6, 6])
Using poly1d objects: >>> p1 = np.poly1d([1, 2])
>>> p2 = np.poly1d([9, 5, 4])
>>> print(p1)
1 x + 2
>>> print(p2)
2
9 x + 5 x + 4
>>> print(np.polyadd(p1, p2))
2
9 x + 6 x + 6 | numpy.reference.generated.numpy.polyadd |
numpy.polyder numpy.polyder(p, m=1)[source]
Return the derivative of the specified order of a polynomial. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in numpy.polynomial is preferred. A summary of the differences can be found in the transition guide. Parameters
ppoly1d or sequence
Polynomial to differentiate. A sequence is interpreted as polynomial coefficients, see poly1d.
mint, optional
Order of differentiation (default: 1) Returns
derpoly1d
A new polynomial representing the derivative. See also polyint
Anti-derivative of a polynomial. poly1d
Class for one-dimensional polynomials. Examples The derivative of the polynomial \(x^3 + x^2 + x^1 + 1\) is: >>> p = np.poly1d([1,1,1,1])
>>> p2 = np.polyder(p)
>>> p2
poly1d([3, 2, 1])
which evaluates to: >>> p2(2.)
17.0
We can verify this, approximating the derivative with (f(x + h) - f(x))/h: >>> (p(2. + 0.001) - p(2.)) / 0.001
17.007000999997857
The fourth-order derivative of a 3rd-order polynomial is zero: >>> np.polyder(p, 2)
poly1d([6, 2])
>>> np.polyder(p, 3)
poly1d([6])
>>> np.polyder(p, 4)
poly1d([0]) | numpy.reference.generated.numpy.polyder |
numpy.polydiv numpy.polydiv(u, v)[source]
Returns the quotient and remainder of polynomial division. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in numpy.polynomial is preferred. A summary of the differences can be found in the transition guide. The input arrays are the coefficients (including any coefficients equal to zero) of the “numerator” (dividend) and “denominator” (divisor) polynomials, respectively. Parameters
uarray_like or poly1d
Dividend polynomial’s coefficients.
varray_like or poly1d
Divisor polynomial’s coefficients. Returns
qndarray
Coefficients, including those equal to zero, of the quotient.
rndarray
Coefficients, including those equal to zero, of the remainder. See also
poly, polyadd, polyder, polydiv, polyfit, polyint, polymul, polysub
polyval
Notes Both u and v must be 0-d or 1-d (ndim = 0 or 1), but u.ndim need not equal v.ndim. In other words, all four possible combinations - u.ndim = v.ndim = 0, u.ndim = v.ndim = 1, u.ndim = 1, v.ndim = 0, and u.ndim = 0, v.ndim = 1 - work. Examples \[\frac{3x^2 + 5x + 2}{2x + 1} = 1.5x + 1.75, remainder 0.25\] >>> x = np.array([3.0, 5.0, 2.0])
>>> y = np.array([2.0, 1.0])
>>> np.polydiv(x, y)
(array([1.5 , 1.75]), array([0.25])) | numpy.reference.generated.numpy.polydiv |
numpy.polyfit numpy.polyfit(x, y, deg, rcond=None, full=False, w=None, cov=False)[source]
Least squares polynomial fit. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in numpy.polynomial is preferred. A summary of the differences can be found in the transition guide. Fit a polynomial p(x) = p[0] * x**deg + ... + p[deg] of degree deg to points (x, y). Returns a vector of coefficients p that minimises the squared error in the order deg, deg-1, … 0. The Polynomial.fit class method is recommended for new code as it is more stable numerically. See the documentation of the method for more information. Parameters
xarray_like, shape (M,)
x-coordinates of the M sample points (x[i], y[i]).
yarray_like, shape (M,) or (M, K)
y-coordinates of the sample points. Several data sets of sample points sharing the same x-coordinates can be fitted at once by passing in a 2D-array that contains one dataset per column.
degint
Degree of the fitting polynomial
rcondfloat, optional
Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is len(x)*eps, where eps is the relative precision of the float type, about 2e-16 in most cases.
fullbool, optional
Switch determining nature of return value. When it is False (the default) just the coefficients are returned, when True diagnostic information from the singular value decomposition is also returned.
warray_like, shape (M,), optional
Weights. If not None, the weight w[i] applies to the unsquared residual y[i] - y_hat[i] at x[i]. Ideally the weights are chosen so that the errors of the products w[i]*y[i] all have the same variance. When using inverse-variance weighting, use w[i] = 1/sigma(y[i]). The default value is None.
covbool or str, optional
If given and not False, return not just the estimate but also its covariance matrix. By default, the covariance are scaled by chi2/dof, where dof = M - (deg + 1), i.e., the weights are presumed to be unreliable except in a relative sense and everything is scaled such that the reduced chi2 is unity. This scaling is omitted if cov='unscaled', as is relevant for the case that the weights are w = 1/sigma, with sigma known to be a reliable estimate of the uncertainty. Returns
pndarray, shape (deg + 1,) or (deg + 1, K)
Polynomial coefficients, highest power first. If y was 2-D, the coefficients for k-th data set are in p[:,k]. residuals, rank, singular_values, rcond
These values are only returned if full == True:
residuals – sum of squared residuals of the least squares fit
rank – the effective rank of the scaled Vandermonde coefficient matrix
singular_values – singular values of the scaled Vandermonde coefficient matrix
rcond – value of rcond. For more details, see numpy.linalg.lstsq.
Vndarray, shape (M,M) or (M,M,K)
Present only if full == False and cov == True. The covariance matrix of the polynomial coefficient estimates. The diagonal of this matrix are the variance estimates for each coefficient. If y is a 2-D array, then the covariance matrix for the k-th data set are in V[:,:,k] Warns
RankWarning
The rank of the coefficient matrix in the least-squares fit is deficient. The warning is only raised if full == False. The warnings can be turned off by >>> import warnings
>>> warnings.simplefilter('ignore', np.RankWarning)
See also polyval
Compute polynomial values. linalg.lstsq
Computes a least-squares fit. scipy.interpolate.UnivariateSpline
Computes spline fits. Notes The solution minimizes the squared error \[E = \sum_{j=0}^k |p(x_j) - y_j|^2\] in the equations: x[0]**n * p[0] + ... + x[0] * p[n-1] + p[n] = y[0]
x[1]**n * p[0] + ... + x[1] * p[n-1] + p[n] = y[1]
...
x[k]**n * p[0] + ... + x[k] * p[n-1] + p[n] = y[k]
The coefficient matrix of the coefficients p is a Vandermonde matrix. polyfit issues a RankWarning when the least-squares fit is badly conditioned. This implies that the best fit is not well-defined due to numerical error. The results may be improved by lowering the polynomial degree or by replacing x by x - x.mean(). The rcond parameter can also be set to a value smaller than its default, but the resulting fit may be spurious: including contributions from the small singular values can add numerical noise to the result. Note that fitting polynomial coefficients is inherently badly conditioned when the degree of the polynomial is large or the interval of sample points is badly centered. The quality of the fit should always be checked in these cases. When polynomial fits are not satisfactory, splines may be a good alternative. References 1
Wikipedia, “Curve fitting”, https://en.wikipedia.org/wiki/Curve_fitting 2
Wikipedia, “Polynomial interpolation”, https://en.wikipedia.org/wiki/Polynomial_interpolation Examples >>> import warnings
>>> x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
>>> y = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0])
>>> z = np.polyfit(x, y, 3)
>>> z
array([ 0.08703704, -0.81349206, 1.69312169, -0.03968254]) # may vary
It is convenient to use poly1d objects for dealing with polynomials: >>> p = np.poly1d(z)
>>> p(0.5)
0.6143849206349179 # may vary
>>> p(3.5)
-0.34732142857143039 # may vary
>>> p(10)
22.579365079365115 # may vary
High-order polynomials may oscillate wildly: >>> with warnings.catch_warnings():
... warnings.simplefilter('ignore', np.RankWarning)
... p30 = np.poly1d(np.polyfit(x, y, 30))
...
>>> p30(4)
-0.80000000000000204 # may vary
>>> p30(5)
-0.99999999999999445 # may vary
>>> p30(4.5)
-0.10547061179440398 # may vary
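A brief sketch of the cov option (added for illustration, not part of the original example set): the diagonal of the returned covariance matrix gives variance estimates for the fitted coefficients, reusing x and y from above:
coefs, V = np.polyfit(x, y, 3, cov=True)
perr = np.sqrt(np.diag(V))   # one-sigma uncertainties of the coefficients, highest power first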
Illustration: >>> import matplotlib.pyplot as plt
>>> xp = np.linspace(-2, 6, 100)
>>> _ = plt.plot(x, y, '.', xp, p(xp), '-', xp, p30(xp), '--')
>>> plt.ylim(-2,2)
(-2, 2)
>>> plt.show() | numpy.reference.generated.numpy.polyfit |
numpy.polyint numpy.polyint(p, m=1, k=None)[source]
Return an antiderivative (indefinite integral) of a polynomial. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in numpy.polynomial is preferred. A summary of the differences can be found in the transition guide. The returned order m antiderivative P of polynomial p satisfies \(\frac{d^m}{dx^m}P(x) = p(x)\) and is defined up to m - 1 integration constants k. The constants determine the low-order polynomial part \[\frac{k_{m-1}}{0!} x^0 + \ldots + \frac{k_0}{(m-1)!}x^{m-1}\] of P so that \(P^{(j)}(0) = k_{m-j-1}\). Parameters
parray_like or poly1d
Polynomial to integrate. A sequence is interpreted as polynomial coefficients, see poly1d.
mint, optional
Order of the antiderivative. (Default: 1)
klist of m scalars or scalar, optional
Integration constants. They are given in the order of integration: those corresponding to highest-order terms come first. If None (default), all constants are assumed to be zero. If m = 1, a single scalar can be given instead of a list. See also polyder
derivative of a polynomial poly1d.integ
equivalent method Examples The defining property of the antiderivative: >>> p = np.poly1d([1,1,1])
>>> P = np.polyint(p)
>>> P
poly1d([ 0.33333333, 0.5 , 1. , 0. ]) # may vary
>>> np.polyder(P) == p
True
The integration constants default to zero, but can be specified: >>> P = np.polyint(p, 3)
>>> P(0)
0.0
>>> np.polyder(P)(0)
0.0
>>> np.polyder(P, 2)(0)
0.0
>>> P = np.polyint(p, 3, k=[6,5,3])
>>> P
poly1d([ 0.01666667, 0.04166667, 0.16666667, 3. , 5. , 3. ]) # may vary
Note that 3 = 6 / 2!, and that the constants are given in the order of integrations. Constant of the highest-order polynomial term comes first: >>> np.polyder(P, 2)(0)
6.0
>>> np.polyder(P, 1)(0)
5.0
>>> P(0)
3.0 | numpy.reference.generated.numpy.polyint |
numpy.polymul numpy.polymul(a1, a2)[source]
Find the product of two polynomials. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in numpy.polynomial is preferred. A summary of the differences can be found in the transition guide. Finds the polynomial resulting from the multiplication of the two input polynomials. Each input must be either a poly1d object or a 1D sequence of polynomial coefficients, from highest to lowest degree. Parameters
a1, a2array_like or poly1d object
Input polynomials. Returns
outndarray or poly1d object
The polynomial resulting from the multiplication of the inputs. If either inputs is a poly1d object, then the output is also a poly1d object. Otherwise, it is a 1D array of polynomial coefficients from highest to lowest degree. See also poly1d
A one-dimensional polynomial class.
poly, polyadd, polyder, polydiv, polyfit, polyint, polysub, polyval
convolve
Array convolution. Same output as polymul, but has parameter for overlap mode. Examples >>> np.polymul([1, 2, 3], [9, 5, 1])
array([ 9, 23, 38, 17, 3])
Using poly1d objects: >>> p1 = np.poly1d([1, 2, 3])
>>> p2 = np.poly1d([9, 5, 1])
>>> print(p1)
2
1 x + 2 x + 3
>>> print(p2)
2
9 x + 5 x + 1
>>> print(np.polymul(p1, p2))
4 3 2
9 x + 23 x + 38 x + 17 x + 3 | numpy.reference.generated.numpy.polymul |
numpy.polynomial.chebyshev.Chebyshev class numpy.polynomial.chebyshev.Chebyshev(coef, domain=None, window=None)[source]
A Chebyshev series class. The Chebyshev class provides the standard Python numerical methods ‘+’, ‘-’, ‘*’, ‘//’, ‘%’, ‘divmod’, ‘**’, and ‘()’ as well as the methods listed below. Parameters
coefarray_like
Chebyshev coefficients in order of increasing degree, i.e., (1, 2, 3) gives 1*T_0(x) + 2*T_1(x) + 3*T_2(x).
domain(2,) array_like, optional
Domain to use. The interval [domain[0], domain[1]] is mapped to the interval [window[0], window[1]] by shifting and scaling. The default value is [-1, 1].
window(2,) array_like, optional
Window, see domain for its use. The default value is [-1, 1]. New in version 1.6.0. Methods
__call__(arg) Call self as a function.
basis(deg[, domain, window]) Series basis polynomial of degree deg.
cast(series[, domain, window]) Convert series to series of this class.
convert([domain, kind, window]) Convert series to a different kind and/or domain and/or window.
copy() Return a copy.
cutdeg(deg) Truncate series to the given degree.
degree() The degree of the series.
deriv([m]) Differentiate.
fit(x, y, deg[, domain, rcond, full, w, window]) Least squares fit to data.
fromroots(roots[, domain, window]) Return series instance that has the specified roots.
has_samecoef(other) Check if coefficients match.
has_samedomain(other) Check if domains match.
has_sametype(other) Check if types match.
has_samewindow(other) Check if windows match.
identity([domain, window]) Identity function.
integ([m, k, lbnd]) Integrate.
interpolate(func, deg[, domain, args]) Interpolate a function at the Chebyshev points of the first kind.
linspace([n, domain]) Return x, y values at equally spaced points in domain.
mapparms() Return the mapping parameters.
roots() Return the roots of the series polynomial.
trim([tol]) Remove trailing coefficients
truncate(size) Truncate series to length size. | numpy.reference.generated.numpy.polynomial.chebyshev.chebyshev |
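Since this class reference lists no examples, here is a minimal usage sketch (an illustrative addition): constructing a series from coefficients, evaluating it, and fitting data with the fit class method:
from numpy.polynomial import Chebyshev
c = Chebyshev([1, 2, 3])                       # 1*T_0(x) + 2*T_1(x) + 3*T_2(x)
c(0.5)                                         # 0.5
x = np.linspace(-1, 1, 50)
fit = Chebyshev.fit(x, np.sin(3 * x), deg=5)   # least squares Chebyshev fit of degree 5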
numpy.polynomial.hermite.Hermite class numpy.polynomial.hermite.Hermite(coef, domain=None, window=None)[source]
An Hermite series class. The Hermite class provides the standard Python numerical methods ‘+’, ‘-’, ‘*’, ‘//’, ‘%’, ‘divmod’, ‘**’, and ‘()’ as well as the attributes and methods listed in the ABCPolyBase documentation. Parameters
coefarray_like
Hermite coefficients in order of increasing degree, i.e., (1, 2, 3) gives 1*H_0(x) + 2*H_1(x) + 3*H_2(x).
domain(2,) array_like, optional
Domain to use. The interval [domain[0], domain[1]] is mapped to the interval [window[0], window[1]] by shifting and scaling. The default value is [-1, 1].
window(2,) array_like, optional
Window, see domain for its use. The default value is [-1, 1]. New in version 1.6.0. Methods
__call__(arg) Call self as a function.
basis(deg[, domain, window]) Series basis polynomial of degree deg.
cast(series[, domain, window]) Convert series to series of this class.
convert([domain, kind, window]) Convert series to a different kind and/or domain and/or window.
copy() Return a copy.
cutdeg(deg) Truncate series to the given degree.
degree() The degree of the series.
deriv([m]) Differentiate.
fit(x, y, deg[, domain, rcond, full, w, window]) Least squares fit to data.
fromroots(roots[, domain, window]) Return series instance that has the specified roots.
has_samecoef(other) Check if coefficients match.
has_samedomain(other) Check if domains match.
has_sametype(other) Check if types match.
has_samewindow(other) Check if windows match.
identity([domain, window]) Identity function.
integ([m, k, lbnd]) Integrate.
linspace([n, domain]) Return x, y values at equally spaced points in domain.
mapparms() Return the mapping parameters.
roots() Return the roots of the series polynomial.
trim([tol]) Remove trailing coefficients
truncate(size) Truncate series to length size. | numpy.reference.generated.numpy.polynomial.hermite.hermite |
numpy.polynomial.hermite_e.HermiteE class numpy.polynomial.hermite_e.HermiteE(coef, domain=None, window=None)[source]
An HermiteE series class. The HermiteE class provides the standard Python numerical methods ‘+’, ‘-’, ‘*’, ‘//’, ‘%’, ‘divmod’, ‘**’, and ‘()’ as well as the attributes and methods listed in the ABCPolyBase documentation. Parameters
coefarray_like
HermiteE coefficients in order of increasing degree, i.e., (1, 2, 3) gives 1*He_0(x) + 2*He_1(x) + 3*He_2(x).
domain(2,) array_like, optional
Domain to use. The interval [domain[0], domain[1]] is mapped to the interval [window[0], window[1]] by shifting and scaling. The default value is [-1, 1].
window(2,) array_like, optional
Window, see domain for its use. The default value is [-1, 1]. New in version 1.6.0. Methods
__call__(arg) Call self as a function.
basis(deg[, domain, window]) Series basis polynomial of degree deg.
cast(series[, domain, window]) Convert series to series of this class.
convert([domain, kind, window]) Convert series to a different kind and/or domain and/or window.
copy() Return a copy.
cutdeg(deg) Truncate series to the given degree.
degree() The degree of the series.
deriv([m]) Differentiate.
fit(x, y, deg[, domain, rcond, full, w, window]) Least squares fit to data.
fromroots(roots[, domain, window]) Return series instance that has the specified roots.
has_samecoef(other) Check if coefficients match.
has_samedomain(other) Check if domains match.
has_sametype(other) Check if types match.
has_samewindow(other) Check if windows match.
identity([domain, window]) Identity function.
integ([m, k, lbnd]) Integrate.
linspace([n, domain]) Return x, y values at equally spaced points in domain.
mapparms() Return the mapping parameters.
roots() Return the roots of the series polynomial.
trim([tol]) Remove trailing coefficients
truncate(size) Truncate series to length size. | numpy.reference.generated.numpy.polynomial.hermite_e.hermitee |
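An illustrative sketch of fromroots and roots for HermiteE (assuming numpy is imported as np; the roots are recovered only up to floating point rounding):
>>> from numpy.polynomial import HermiteE
>>> he = HermiteE.fromroots([-1, 0, 1])  # series with the given roots
>>> np.allclose(np.sort(he.roots()), [-1., 0., 1.])
True
>>> np.allclose(he(0.0), 0.0)  # 0 is a root, so the series vanishes there
True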
numpy.polynomial.laguerre.Laguerre class numpy.polynomial.laguerre.Laguerre(coef, domain=None, window=None)[source]
A Laguerre series class. The Laguerre class provides the standard Python numerical methods ‘+’, ‘-’, ‘*’, ‘//’, ‘%’, ‘divmod’, ‘**’, and ‘()’ as well as the attributes and methods listed in the ABCPolyBase documentation. Parameters
coefarray_like
Laguerre coefficients in order of increasing degree, i.e., (1, 2, 3) gives 1*L_0(x) + 2*L_1(x) + 3*L_2(x).
domain(2,) array_like, optional
Domain to use. The interval [domain[0], domain[1]] is mapped to the interval [window[0], window[1]] by shifting and scaling. The default value is [0, 1].
window(2,) array_like, optional
Window, see domain for its use. The default value is [0, 1]. New in version 1.6.0. Methods
__call__(arg) Call self as a function.
basis(deg[, domain, window]) Series basis polynomial of degree deg.
cast(series[, domain, window]) Convert series to series of this class.
convert([domain, kind, window]) Convert series to a different kind and/or domain and/or window.
copy() Return a copy.
cutdeg(deg) Truncate series to the given degree.
degree() The degree of the series.
deriv([m]) Differentiate.
fit(x, y, deg[, domain, rcond, full, w, window]) Least squares fit to data.
fromroots(roots[, domain, window]) Return series instance that has the specified roots.
has_samecoef(other) Check if coefficients match.
has_samedomain(other) Check if domains match.
has_sametype(other) Check if types match.
has_samewindow(other) Check if windows match.
identity([domain, window]) Identity function.
integ([m, k, lbnd]) Integrate.
linspace([n, domain]) Return x, y values at equally spaced points in domain.
mapparms() Return the mapping parameters.
roots() Return the roots of the series polynomial.
trim([tol]) Remove trailing coefficients
truncate(size) Truncate series to length size. | numpy.reference.generated.numpy.polynomial.laguerre.laguerre |
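An illustrative sketch pairing deriv and integ on a Laguerre series (assuming numpy is imported as np); integrating the derivative with the value at 0 as integration constant recovers the original series up to rounding:
>>> from numpy.polynomial import Laguerre
>>> lag = Laguerre([1, 2, 3])
>>> lag2 = lag.deriv().integ(lbnd=0, k=lag(0))  # undo the derivative, pinning the value at 0
>>> xs = np.linspace(0, 1, 5)
>>> np.allclose(lag2(xs), lag(xs))
True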
numpy.polynomial.legendre.Legendre class numpy.polynomial.legendre.Legendre(coef, domain=None, window=None)[source]
A Legendre series class. The Legendre class provides the standard Python numerical methods ‘+’, ‘-’, ‘*’, ‘//’, ‘%’, ‘divmod’, ‘**’, and ‘()’ as well as the attributes and methods listed in the ABCPolyBase documentation. Parameters
coefarray_like
Legendre coefficients in order of increasing degree, i.e., (1, 2, 3) gives 1*P_0(x) + 2*P_1(x) + 3*P_2(x).
domain(2,) array_like, optional
Domain to use. The interval [domain[0], domain[1]] is mapped to the interval [window[0], window[1]] by shifting and scaling. The default value is [-1, 1].
window(2,) array_like, optional
Window, see domain for its use. The default value is [-1, 1]. New in version 1.6.0. Methods
__call__(arg) Call self as a function.
basis(deg[, domain, window]) Series basis polynomial of degree deg.
cast(series[, domain, window]) Convert series to series of this class.
convert([domain, kind, window]) Convert series to a different kind and/or domain and/or window.
copy() Return a copy.
cutdeg(deg) Truncate series to the given degree.
degree() The degree of the series.
deriv([m]) Differentiate.
fit(x, y, deg[, domain, rcond, full, w, window]) Least squares fit to data.
fromroots(roots[, domain, window]) Return series instance that has the specified roots.
has_samecoef(other) Check if coefficients match.
has_samedomain(other) Check if domains match.
has_sametype(other) Check if types match.
has_samewindow(other) Check if windows match.
identity([domain, window]) Identity function.
integ([m, k, lbnd]) Integrate.
linspace([n, domain]) Return x, y values at equally spaced points in domain.
mapparms() Return the mapping parameters.
roots() Return the roots of the series polynomial.
trim([tol]) Remove trailing coefficients
truncate(size) Truncate series to length size. | numpy.reference.generated.numpy.polynomial.legendre.legendre |
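An illustrative sketch of convert and mapparms for a Legendre series (assuming numpy is imported as np; the expected coefficients follow from P_2(x) = (3*x**2 - 1)/2):
>>> from numpy.polynomial import Legendre, Polynomial
>>> leg = Legendre([1, 2, 3])            # 1*P_0 + 2*P_1 + 3*P_2
>>> pol = leg.convert(kind=Polynomial)   # the same function in the power basis
>>> np.allclose(pol.coef, [-0.5, 2.0, 4.5])
True
>>> off, scl = leg.mapparms()            # identity map when domain equals window
>>> (off, scl) == (0.0, 1.0)
True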
numpy.polynomial.polynomial.Polynomial class numpy.polynomial.polynomial.Polynomial(coef, domain=None, window=None)[source]
A power series class. The Polynomial class provides the standard Python numerical methods ‘+’, ‘-’, ‘*’, ‘//’, ‘%’, ‘divmod’, ‘**’, and ‘()’ as well as the attributes and methods listed in the ABCPolyBase documentation. Parameters
coefarray_like
Polynomial coefficients in order of increasing degree, i.e., (1, 2, 3) gives 1 + 2*x + 3*x**2.
domain(2,) array_like, optional
Domain to use. The interval [domain[0], domain[1]] is mapped to the interval [window[0], window[1]] by shifting and scaling. The default value is [-1, 1].
window(2,) array_like, optional
Window, see domain for its use. The default value is [-1, 1]. New in version 1.6.0. Attributes
basis_name
Methods
__call__(arg) Call self as a function.
basis(deg[, domain, window]) Series basis polynomial of degree deg.
cast(series[, domain, window]) Convert series to series of this class.
convert([domain, kind, window]) Convert series to a different kind and/or domain and/or window.
copy() Return a copy.
cutdeg(deg) Truncate series to the given degree.
degree() The degree of the series.
deriv([m]) Differentiate.
fit(x, y, deg[, domain, rcond, full, w, window]) Least squares fit to data.
fromroots(roots[, domain, window]) Return series instance that has the specified roots.
has_samecoef(other) Check if coefficients match.
has_samedomain(other) Check if domains match.
has_sametype(other) Check if types match.
has_samewindow(other) Check if windows match.
identity([domain, window]) Identity function.
integ([m, k, lbnd]) Integrate.
linspace([n, domain]) Return x, y values at equally spaced points in domain.
mapparms() Return the mapping parameters.
roots() Return the roots of the series polynomial.
trim([tol]) Remove trailing coefficients
truncate(size) Truncate series to length size. | numpy.reference.generated.numpy.polynomial.polynomial.polynomial |
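An illustrative sketch of basic arithmetic and evaluation with Polynomial (assuming numpy is imported as np; printed array formatting may vary slightly between NumPy versions):
>>> from numpy.polynomial import Polynomial
>>> p = Polynomial([1, 2, 3])   # 1 + 2*x + 3*x**2
>>> q = Polynomial([0, 1])      # x
>>> r = p * q + 1               # the usual Python operators work
>>> r.coef
array([1., 1., 2., 3.])
>>> r(2.0)                      # evaluate with '()'
35.0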
numpy.polysub numpy.polysub(a1, a2)[source]
Difference (subtraction) of two polynomials. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in numpy.polynomial is preferred. A summary of the differences can be found in the transition guide. Given two polynomials a1 and a2, returns a1 - a2. a1 and a2 can be either array_like sequences of the polynomials’ coefficients (including coefficients equal to zero), or poly1d objects. Parameters
a1, a2array_like or poly1d
Minuend and subtrahend polynomials, respectively. Returns
outndarray or poly1d
Array or poly1d object of the difference polynomial’s coefficients. See also
polyval, polydiv, polymul, polyadd
Examples \[(2 x^2 + 10 x - 2) - (3 x^2 + 10 x -4) = (-x^2 + 2)\] >>> np.polysub([2, 10, -2], [3, 10, -4])
array([-1, 0, 2]) | numpy.reference.generated.numpy.polysub |
numpy.polyval numpy.polyval(p, x)[source]
Evaluate a polynomial at specific values. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in numpy.polynomial is preferred. A summary of the differences can be found in the transition guide. If p is of length N, this function returns the value: p[0]*x**(N-1) + p[1]*x**(N-2) + ... + p[N-2]*x + p[N-1] If x is a sequence, then p(x) is returned for each element of x. If x is another polynomial then the composite polynomial p(x(t)) is returned. Parameters
parray_like or poly1d object
1D array of polynomial coefficients (including coefficients equal to zero) from highest degree to the constant term, or an instance of poly1d.
xarray_like or poly1d object
A number, an array of numbers, or an instance of poly1d, at which to evaluate p. Returns
valuesndarray or poly1d
If x is a poly1d instance, the result is the composition of the two polynomials, i.e., x is “substituted” in p and the simplified result is returned. In addition, the type of x - array_like or poly1d - governs the type of the output: if x is array_like, values is array_like; if x is a poly1d object, so is values. See also poly1d
A polynomial class. Notes Horner’s scheme [1] is used to evaluate the polynomial. Even so, for polynomials of high degree the values may be inaccurate due to rounding errors. Use carefully. If x is a subtype of ndarray the return value will be of the same type. References 1
I. N. Bronshtein, K. A. Semendyayev, and K. A. Hirsch (Eng. trans. Ed.), Handbook of Mathematics, New York, Van Nostrand Reinhold Co., 1985, pg. 720. Examples >>> np.polyval([3,0,1], 5) # 3 * 5**2 + 0 * 5**1 + 1
76
>>> np.polyval([3,0,1], np.poly1d(5))
poly1d([76])
>>> np.polyval(np.poly1d([3,0,1]), 5)
76
>>> np.polyval(np.poly1d([3,0,1]), np.poly1d(5))
poly1d([76]) | numpy.reference.generated.numpy.polyval |
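The remark above about sequence arguments can be illustrated with a small sketch (assuming numpy is imported as np): passing an array for x evaluates the polynomial element-wise.
>>> np.polyval([3, 0, 1], [1, 2, 3])  # 3*x**2 + 1 at x = 1, 2, 3
array([ 4, 13, 28])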
numpy.positive numpy.positive(x, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj]) = <ufunc 'positive'>
Numerical positive, element-wise. New in version 1.13.0. Parameters
xarray_like or scalar
Input array. Returns
yndarray or scalar
Returned array or scalar: y = +x. This is a scalar if x is a scalar. Notes Equivalent to x.copy(), but only defined for types that support arithmetic. Examples >>> x1 = np.array(([1., -1.]))
>>> np.positive(x1)
array([ 1., -1.])
The unary + operator can be used as a shorthand for np.positive on ndarrays. >>> x1 = np.array(([1., -1.]))
>>> +x1
array([ 1., -1.]) | numpy.reference.generated.numpy.positive |
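As the Notes say, np.positive behaves like x.copy(); a small sketch (assuming numpy is imported as np) showing that modifying the result leaves the input untouched:
>>> x = np.array([1., -1.])
>>> y = np.positive(x)
>>> y[0] = 99.
>>> x   # unchanged, because positive returned a copy
array([ 1., -1.])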
numpy.power numpy.power(x1, x2, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj]) = <ufunc 'power'>
First array elements raised to powers from second array, element-wise. Raise each base in x1 to the positionally-corresponding power in x2. x1 and x2 must be broadcastable to the same shape. An integer type raised to a negative integer power will raise a ValueError. Negative values raised to a non-integral value will return nan. To get complex results, cast the input to complex, or specify the dtype to be complex (see the example below). Parameters
x1array_like
The bases.
x2array_like
The exponents. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
wherearray_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
yndarray
The bases in x1 raised to the exponents in x2. This is a scalar if both x1 and x2 are scalars. See also float_power
power function that promotes integers to float Examples Cube each element in an array. >>> x1 = np.arange(6)
>>> x1
array([0, 1, 2, 3, 4, 5])
>>> np.power(x1, 3)
array([ 0, 1, 8, 27, 64, 125])
Raise the bases to different exponents. >>> x2 = [1.0, 2.0, 3.0, 3.0, 2.0, 1.0]
>>> np.power(x1, x2)
array([ 0., 1., 8., 27., 16., 5.])
The effect of broadcasting. >>> x2 = np.array([[1, 2, 3, 3, 2, 1], [1, 2, 3, 3, 2, 1]])
>>> x2
array([[1, 2, 3, 3, 2, 1],
[1, 2, 3, 3, 2, 1]])
>>> np.power(x1, x2)
array([[ 0, 1, 8, 27, 16, 5],
[ 0, 1, 8, 27, 16, 5]])
The ** operator can be used as a shorthand for np.power on ndarrays. >>> x2 = np.array([1, 2, 3, 3, 2, 1])
>>> x1 = np.arange(6)
>>> x1 ** x2
array([ 0, 1, 8, 27, 16, 5])
Negative values raised to a non-integral value will result in nan (and a warning will be generated). >>> x3 = np.array([-1.0, -4.0])
>>> with np.errstate(invalid='ignore'):
... p = np.power(x3, 1.5)
...
>>> p
array([nan, nan])
To get complex results, give the argument dtype=complex. >>> np.power(x3, 1.5, dtype=complex)
array([-1.83697020e-16-1.j, -1.46957616e-15-8.j]) | numpy.reference.generated.numpy.power |
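A short sketch of the ValueError mentioned above for integer bases with negative integer exponents, and float_power as one way around it (assuming numpy is imported as np):
>>> np.power(2, -1)
Traceback (most recent call last):
    ...
ValueError: Integers to negative integer powers are not allowed.
>>> np.float_power(2, -1)   # promotes the operands to floating point
0.5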
numpy.printoptions numpy.printoptions(*args, **kwargs)[source]
Context manager for setting print options. Set print options for the scope of the with block, and restore the old options at the end. See set_printoptions for the full description of available options. See also
set_printoptions, get_printoptions
Examples >>> from numpy.testing import assert_equal
>>> with np.printoptions(precision=2):
... np.array([2.0]) / 3
array([0.67])
The as-clause of the with-statement gives the current print options: >>> with np.printoptions(precision=2) as opts:
... assert_equal(opts, np.get_printoptions()) | numpy.reference.generated.numpy.printoptions |
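A further sketch showing that the previous options are restored when the block exits (assuming numpy is imported as np and the default precision of 8):
>>> np.array([2.0]) / 3
array([0.66666667])
>>> with np.printoptions(precision=3):
...     np.array([2.0]) / 3
array([0.667])
>>> np.array([2.0]) / 3   # the earlier precision is back
array([0.66666667])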
numpy.prod numpy.prod(a, axis=None, dtype=None, out=None, keepdims=<no value>, initial=<no value>, where=<no value>)[source]
Return the product of array elements over a given axis. Parameters
aarray_like
Input data.
axisNone or int or tuple of ints, optional
Axis or axes along which a product is performed. The default, axis=None, will calculate the product of all the elements in the input array. If axis is negative it counts from the last to the first axis. New in version 1.7.0. If axis is a tuple of ints, a product is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before.
dtypedtype, optional
The type of the returned array, as well as of the accumulator in which the elements are multiplied. The dtype of a is used by default unless a has an integer dtype of less precision than the default platform integer. In that case, if a is signed then the platform integer is used while if a is unsigned then an unsigned integer of the same precision as the platform integer is used.
outndarray, optional
Alternative output array in which to place the result. It must have the same shape as the expected output, but the type of the output values will be cast if necessary.
keepdimsbool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then keepdims will not be passed through to the prod method of sub-classes of ndarray, however any non-default value will be. If the sub-class’ method does not implement keepdims any exceptions will be raised.
initialscalar, optional
The starting value for this product. See reduce for details. New in version 1.15.0.
wherearray_like of bool, optional
Elements to include in the product. See reduce for details. New in version 1.17.0. Returns
product_along_axisndarray, see dtype parameter above.
An array shaped as a but with the specified axis removed. Returns a reference to out if specified. See also ndarray.prod
equivalent method Output type determination
Notes Arithmetic is modular when using integer types, and no error is raised on overflow. That means that, on a 32-bit platform: >>> x = np.array([536870910, 536870910, 536870910, 536870910])
>>> np.prod(x)
16 # may vary
The product of an empty array is the neutral element 1: >>> np.prod([])
1.0
Examples By default, calculate the product of all elements: >>> np.prod([1.,2.])
2.0
Even when the input array is two-dimensional: >>> np.prod([[1.,2.],[3.,4.]])
24.0
But we can also specify the axis over which to multiply: >>> np.prod([[1.,2.],[3.,4.]], axis=1)
array([ 2., 12.])
Or select specific elements to include: >>> np.prod([1., np.nan, 3.], where=[True, False, True])
3.0
If the type of x is unsigned, then the output type is the unsigned platform integer: >>> x = np.array([1, 2, 3], dtype=np.uint8)
>>> np.prod(x).dtype == np.uint
True
If x is of a signed integer type, then the output type is the default platform integer: >>> x = np.array([1, 2, 3], dtype=np.int8)
>>> np.prod(x).dtype == int
True
You can also start the product with a value other than one: >>> np.prod([1, 2], initial=5)
10 | numpy.reference.generated.numpy.prod |
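The keepdims and tuple-axis behaviour described above, as a small sketch (assuming numpy is imported as np):
>>> a = np.array([[1., 2.], [3., 4.]])
>>> np.prod(a, axis=1, keepdims=True)   # reduced axis kept with size one
array([[ 2.],
       [12.]])
>>> np.prod(a, axis=(0, 1))             # reduce over both axes at once
24.0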
numpy.promote_types numpy.promote_types(type1, type2)
Returns the data type with the smallest size and smallest scalar kind to which both type1 and type2 may be safely cast. The returned data type is always in native byte order. This function is symmetric, but rarely associative. Parameters
type1dtype or dtype specifier
First data type.
type2dtype or dtype specifier
Second data type. Returns
outdtype
The promoted data type. See also
result_type, dtype, can_cast
Notes New in version 1.6.0. Starting in NumPy 1.9, the promote_types function returns a valid string length when given an integer or float dtype as one argument and a string dtype as another argument. Previously it always returned the input string dtype, even if it wasn’t long enough to store the max integer/float value converted to a string. Examples >>> np.promote_types('f4', 'f8')
dtype('float64')
>>> np.promote_types('i8', 'f4')
dtype('float64')
>>> np.promote_types('>i8', '<c8')
dtype('complex128')
>>> np.promote_types('i4', 'S8')
dtype('S11')
An example of a non-associative case: >>> p = np.promote_types
>>> p('S', p('i1', 'u1'))
dtype('S6')
>>> p(p('S', 'i1'), 'u1')
dtype('S4') | numpy.reference.generated.numpy.promote_types |
numpy.ptp numpy.ptp(a, axis=None, out=None, keepdims=<no value>)[source]
Range of values (maximum - minimum) along an axis. The name of the function comes from the acronym for ‘peak to peak’. Warning ptp preserves the data type of the array. This means the return value for an input of signed integers with n bits (e.g. np.int8, np.int16, etc) is also a signed integer with n bits. In that case, peak-to-peak values greater than 2**(n-1)-1 will be returned as negative values. An example with a work-around is shown below. Parameters
aarray_like
Input values.
axisNone or int or tuple of ints, optional
Axis along which to find the peaks. By default, flatten the array. axis may be negative, in which case it counts from the last to the first axis. New in version 1.15.0. If this is a tuple of ints, a reduction is performed on multiple axes, instead of a single axis or all the axes as before.
outarray_like
Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type of the output values will be cast if necessary.
keepdimsbool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then keepdims will not be passed through to the ptp method of sub-classes of ndarray, however any non-default value will be. If the sub-class’ method does not implement keepdims any exceptions will be raised. Returns
ptpndarray
A new array holding the result, unless out was specified, in which case a reference to out is returned. Examples >>> x = np.array([[4, 9, 2, 10],
... [6, 9, 7, 12]])
>>> np.ptp(x, axis=1)
array([8, 6])
>>> np.ptp(x, axis=0)
array([2, 0, 5, 2])
>>> np.ptp(x)
10
This example shows that a negative value can be returned when the input is an array of signed integers. >>> y = np.array([[1, 127],
... [0, 127],
... [-1, 127],
... [-2, 127]], dtype=np.int8)
>>> np.ptp(y, axis=1)
array([ 126, 127, -128, -127], dtype=int8)
A work-around is to use the view() method to view the result as unsigned integers with the same bit width: >>> np.ptp(y, axis=1).view(np.uint8)
array([126, 127, 128, 129], dtype=uint8) | numpy.reference.generated.numpy.ptp |
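The keepdims option described above, as a short sketch (assuming numpy is imported as np):
>>> x = np.array([[4, 9, 2, 10],
...               [6, 9, 7, 12]])
>>> np.ptp(x, axis=1, keepdims=True)   # result broadcasts against x
array([[8],
       [6]])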
numpy.put numpy.put(a, ind, v, mode='raise')[source]
Replaces specified elements of an array with given values. The indexing works on the flattened target array. put is roughly equivalent to: a.flat[ind] = v
Parameters
andarray
Target array.
indarray_like
Target indices, interpreted as integers.
varray_like
Values to place in a at target indices. If v is shorter than ind it will be repeated as necessary.
mode{‘raise’, ‘wrap’, ‘clip’}, optional
Specifies how out-of-bounds indices will behave. ‘raise’ – raise an error (default) ‘wrap’ – wrap around ‘clip’ – clip to the range ‘clip’ mode means that all indices that are too large are replaced by the index that addresses the last element along that axis. Note that this disables indexing with negative numbers. In ‘raise’ mode, if an exception occurs the target array may still be modified. See also
putmask, place
put_along_axis
Put elements by matching the array and the index arrays Examples >>> a = np.arange(5)
>>> np.put(a, [0, 2], [-44, -55])
>>> a
array([-44, 1, -55, 3, 4])
>>> a = np.arange(5)
>>> np.put(a, 22, -5, mode='clip')
>>> a
array([ 0, 1, 2, 3, -5]) | numpy.reference.generated.numpy.put |
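A sketch of the repetition rule mentioned above, where v is shorter than ind (assuming numpy is imported as np):
>>> a = np.arange(6)
>>> np.put(a, [0, 2, 4], [-1, -2])   # v repeats to cover all three indices
>>> a
array([-1,  1, -2,  3, -1,  5])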
numpy.put_along_axis numpy.put_along_axis(arr, indices, values, axis)[source]
Put values into the destination array by matching 1d index and data slices. This iterates over matching 1d slices oriented along the specified axis in the index and data arrays, and uses the former to place values into the latter. These slices can be different lengths. Functions returning an index along an axis, like argsort and argpartition, produce suitable indices for this function. New in version 1.15.0. Parameters
arrndarray (Ni…, M, Nk…)
Destination array.
indicesndarray (Ni…, J, Nk…)
Indices to change along each 1d slice of arr. This must match the dimension of arr, but dimensions in Ni and Nk may be 1 to broadcast against arr.
valuesarray_like (Ni…, J, Nk…)
values to insert at those indices. Its shape and dimension are broadcast to match that of indices.
axisint
The axis to take 1d slices along. If axis is None, the destination array is treated as if a flattened 1d view had been created of it. See also take_along_axis
Take values from the input array by matching 1d index and data slices Notes This is equivalent to (but faster than) the following use of ndindex and s_, which sets each of ii and kk to a tuple of indices: Ni, M, Nk = a.shape[:axis], a.shape[axis], a.shape[axis+1:]
J = indices.shape[axis] # Need not equal M
for ii in ndindex(Ni):
for kk in ndindex(Nk):
a_1d = a [ii + s_[:,] + kk]
indices_1d = indices[ii + s_[:,] + kk]
values_1d = values [ii + s_[:,] + kk]
for j in range(J):
a_1d[indices_1d[j]] = values_1d[j]
Equivalently, eliminating the inner loop, the last two lines would be: a_1d[indices_1d] = values_1d
Examples For this sample array >>> a = np.array([[10, 30, 20], [60, 40, 50]])
We can replace the maximum values with: >>> ai = np.expand_dims(np.argmax(a, axis=1), axis=1)
>>> ai
array([[1],
[0]])
>>> np.put_along_axis(a, ai, 99, axis=1)
>>> a
array([[10, 99, 20],
[99, 40, 50]]) | numpy.reference.generated.numpy.put_along_axis |
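A sketch using argsort-produced indices, as suggested above, to overwrite the two smallest entries of each row (assuming numpy is imported as np):
>>> a = np.array([[10, 30, 20], [60, 40, 50]])
>>> ai = np.argsort(a, axis=1)[:, :2]   # indices of the two smallest values per row
>>> np.put_along_axis(a, ai, 0, axis=1)
>>> a
array([[ 0, 30,  0],
       [60,  0,  0]])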
numpy.putmask numpy.putmask(a, mask, values)
Changes elements of an array based on conditional and input values. Sets a.flat[n] = values[n] for each n where mask.flat[n]==True. If values is not the same size as a and mask then it will repeat. This gives behavior different from a[mask] = values. Parameters
andarray
Target array.
maskarray_like
Boolean mask array. It has to be the same shape as a.
valuesarray_like
Values to put into a where mask is True. If values is smaller than a it will be repeated. See also
place, put, take, copyto
Examples >>> x = np.arange(6).reshape(2, 3)
>>> np.putmask(x, x>2, x**2)
>>> x
array([[ 0, 1, 2],
[ 9, 16, 25]])
If values is smaller than a it is repeated: >>> x = np.arange(5)
>>> np.putmask(x, x>1, [-33, -44])
>>> x
array([ 0, 1, -33, -44, -33]) | numpy.reference.generated.numpy.putmask |
numpy.PZERO
IEEE 754 floating point representation of positive zero. Returns yfloat
A floating point representation of positive zero. See Also NZERO : Defines negative zero. isinf : Shows which elements are positive or negative infinity. isposinf : Shows which elements are positive infinity. isneginf : Shows which elements are negative infinity. isnan : Shows which elements are Not a Number. isfinite : Shows which elements are finite - not one of
Not a Number, positive infinity and negative infinity. Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). Positive zero is considered to be a finite number. Examples >>> np.PZERO
0.0
>>> np.NZERO
-0.0
>>> np.isfinite([np.PZERO])
array([ True])
>>> np.isnan([np.PZERO])
array([False])
>>> np.isinf([np.PZERO])
array([False]) | numpy.reference.constants#numpy.PZERO |
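Since the two zeros compare equal, signbit is one way to tell them apart; a short sketch (assuming numpy is imported as np):
>>> np.PZERO == np.NZERO
True
>>> np.signbit(np.PZERO)
False
>>> np.signbit(np.NZERO)
True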
numpy.quantile numpy.quantile(a, q, axis=None, out=None, overwrite_input=False, method='linear', keepdims=False, *, interpolation=None)[source]
Compute the q-th quantile of the data along the specified axis. New in version 1.15.0. Parameters
aarray_like
Input array or object that can be converted to an array.
qarray_like of float
Quantile or sequence of quantiles to compute, which must be between 0 and 1 inclusive.
axis{int, tuple of int, None}, optional
Axis or axes along which the quantiles are computed. The default is to compute the quantile(s) along a flattened version of the array.
outndarray, optional
Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary.
overwrite_inputbool, optional
If True, then allow the input array a to be modified by intermediate calculations, to save memory. In this case, the contents of the input a after this function completes is undefined.
methodstr, optional
This parameter specifies the method to use for estimating the quantile. There are many different methods, some unique to NumPy. See the notes for explanation. The options sorted by their R type as summarized in the H&F paper [1] are: ‘inverted_cdf’, ‘averaged_inverted_cdf’, ‘closest_observation’, ‘interpolated_inverted_cdf’, ‘hazen’, ‘weibull’, ‘linear’ (default), ‘median_unbiased’, ‘normal_unbiased’. The first three methods are discontinuous. NumPy further defines the following discontinuous variations of the default ‘linear’ (7.) option: ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’. Changed in version 1.22.0: This argument was previously called “interpolation” and only offered the “linear” default and last four options.
keepdimsbool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original array a.
interpolationstr, optional
Deprecated name for the method keyword argument. Deprecated since version 1.22.0. Returns
quantilescalar or ndarray
If q is a single quantile and axis=None, then the result is a scalar. If multiple quantiles are given, first axis of the result corresponds to the quantiles. The other axes are the axes that remain after the reduction of a. If the input contains integers or floats smaller than float64, the output data-type is float64. Otherwise, the output data-type is the same as that of the input. If out is specified, that array is returned instead. See also mean
percentile
equivalent to quantile, but with q in the range [0, 100]. median
equivalent to quantile(..., 0.5) nanquantile
Notes Given a vector V of length N, the q-th quantile of V is the value q of the way from the minimum to the maximum in a sorted copy of V. The values and distances of the two nearest neighbors as well as the method parameter will determine the quantile if the normalized ranking does not match the location of q exactly. This function is the same as the median if q=0.5, the same as the minimum if q=0.0 and the same as the maximum if q=1.0. This optional method parameter specifies the method to use when the desired quantile lies between two data points i < j. Here g is the fractional part of the index surrounded by i, and alpha and beta are correction constants modifying i and j: \[i + g = (q - alpha) / ( n - alpha - beta + 1 )\] The different methods then work as follows. inverted_cdf:
method 1 of H&F [1]. This method gives discontinuous results: * if g > 0 ; then take j * if g = 0 ; then take i averaged_inverted_cdf:
method 2 of H&F [1]. This method gives discontinuous results: * if g > 0 ; then take j * if g = 0 ; then average between bounds closest_observation:
method 3 of H&F [1]. This method gives discontinuous results: * if g > 0 ; then take j * if g = 0 and index is odd ; then take j * if g = 0 and index is even ; then take i interpolated_inverted_cdf:
method 4 of H&F [1]. This method gives continuous results using: * alpha = 0 * beta = 1 hazen:
method 5 of H&F [1]. This method gives continuous results using: * alpha = 1/2 * beta = 1/2 weibull:
method 6 of H&F [1]. This method gives continuous results using: * alpha = 0 * beta = 0 linear:
method 7 of H&F [1]. This method gives continuous results using: * alpha = 1 * beta = 1 median_unbiased:
method 8 of H&F [1]. This method is probably the best method if the sample distribution function is unknown (see reference). This method gives continuous results using: * alpha = 1/3 * beta = 1/3 normal_unbiased:
method 9 of H&F [1]. This method is probably the best method if the sample distribution function is known to be normal. This method gives continuous results using: * alpha = 3/8 * beta = 3/8 lower:
NumPy method kept for backwards compatibility. Takes i as the interpolation point. higher:
NumPy method kept for backwards compatibility. Takes j as the interpolation point. nearest:
NumPy method kept for backwards compatibility. Takes i or j, whichever is nearest. midpoint:
NumPy method kept for backwards compatibility. Uses (i + j) / 2. References
1
R. J. Hyndman and Y. Fan, “Sample quantiles in statistical packages,” The American Statistician, 50(4), pp. 361-365, 1996 Examples >>> a = np.array([[10, 7, 4], [3, 2, 1]])
>>> a
array([[10, 7, 4],
[ 3, 2, 1]])
>>> np.quantile(a, 0.5)
3.5
>>> np.quantile(a, 0.5, axis=0)
array([6.5, 4.5, 2.5])
>>> np.quantile(a, 0.5, axis=1)
array([7., 2.])
>>> np.quantile(a, 0.5, axis=1, keepdims=True)
array([[7.],
[2.]])
>>> m = np.quantile(a, 0.5, axis=0)
>>> out = np.zeros_like(m)
>>> np.quantile(a, 0.5, axis=0, out=out)
array([6.5, 4.5, 2.5])
>>> m
array([6.5, 4.5, 2.5])
>>> b = a.copy()
>>> np.quantile(b, 0.5, axis=1, overwrite_input=True)
array([7., 2.])
>>> assert not np.all(a == b)
See also numpy.percentile for a visualization of most methods. | numpy.reference.generated.numpy.quantile |
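A short sketch contrasting a few of the methods listed above on the same data (assuming numpy is imported as np; float() is used here only so that plain Python scalars are printed, since the exact NumPy scalar or array repr can vary between versions):
>>> a = np.array([3, 5, 7, 8, 9])
>>> float(np.quantile(a, 0.4))   # default 'linear': 5 + 0.6*(7 - 5)
6.2
>>> float(np.quantile(a, 0.4, method='lower'))
5.0
>>> float(np.quantile(a, 0.4, method='higher'))
7.0
>>> float(np.quantile(a, 0.4, method='midpoint'))
6.0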