title | summary | context | path
---|---|---|---|
pandas.DataFrame.rank | `pandas.DataFrame.rank`
Compute numerical data ranks (1 through n) along axis.
```
>>> df = pd.DataFrame(data={'Animal': ['cat', 'penguin', 'dog',
... 'spider', 'snake'],
... 'Number_legs': [4, 2, 4, 8, np.nan]})
>>> df
Animal Number_legs
0 cat 4.0
1 penguin 2.0
2 dog 4.0
3 spider 8.0
4 snake NaN
``` | DataFrame.rank(axis=0, method='average', numeric_only=_NoDefault.no_default, na_option='keep', ascending=True, pct=False)[source]#
Compute numerical data ranks (1 through n) along axis.
By default, equal values are assigned a rank that is the average of the
ranks of those values.
Parameters
axis{0 or ‘index’, 1 or ‘columns’}, default 0Index to direct ranking.
For Series this parameter is unused and defaults to 0.
method{‘average’, ‘min’, ‘max’, ‘first’, ‘dense’}, default ‘average’How to rank the group of records that have the same value (i.e. ties):
average: average rank of the group
min: lowest rank in the group
max: highest rank in the group
first: ranks assigned in order they appear in the array
dense: like ‘min’, but rank always increases by 1 between groups.
numeric_onlybool, optionalFor DataFrame objects, rank only numeric columns if set to True.
na_option{‘keep’, ‘top’, ‘bottom’}, default ‘keep’How to rank NaN values:
keep: assign NaN rank to NaN values
top: assign lowest rank to NaN values
bottom: assign highest rank to NaN values
ascendingbool, default TrueWhether or not the elements should be ranked in ascending order.
pctbool, default FalseWhether or not to display the returned rankings in percentile
form.
Returns
same type as callerReturn a Series or DataFrame with data ranks as values.
See also
core.groupby.GroupBy.rankRank of values within each group.
Examples
>>> df = pd.DataFrame(data={'Animal': ['cat', 'penguin', 'dog',
... 'spider', 'snake'],
... 'Number_legs': [4, 2, 4, 8, np.nan]})
>>> df
Animal Number_legs
0 cat 4.0
1 penguin 2.0
2 dog 4.0
3 spider 8.0
4 snake NaN
Ties are assigned the mean of the ranks (by default) for the group.
>>> s = pd.Series(range(5), index=list("abcde"))
>>> s["d"] = s["b"]
>>> s.rank()
a 1.0
b 2.5
c 4.0
d 2.5
e 5.0
dtype: float64
The following example shows how the method behaves with the above
parameters:
default_rank: this is the default behaviour obtained without using
any parameter.
max_rank: setting method = 'max' the records that have the
same values are ranked using the highest rank (e.g.: since ‘cat’
and ‘dog’ are both in the 2nd and 3rd position, rank 3 is assigned.)
NA_bottom: choosing na_option = 'bottom', if there are records
with NaN values they are placed at the bottom of the ranking.
pct_rank: when setting pct = True, the ranking is expressed as
percentile rank.
>>> df['default_rank'] = df['Number_legs'].rank()
>>> df['max_rank'] = df['Number_legs'].rank(method='max')
>>> df['NA_bottom'] = df['Number_legs'].rank(na_option='bottom')
>>> df['pct_rank'] = df['Number_legs'].rank(pct=True)
>>> df
Animal Number_legs default_rank max_rank NA_bottom pct_rank
0 cat 4.0 2.5 3.0 2.5 0.625
1 penguin 2.0 1.0 1.0 1.0 0.250
2 dog 4.0 2.5 3.0 2.5 0.625
3 spider 8.0 4.0 4.0 4.0 1.000
4 snake NaN NaN NaN 5.0 NaN
| reference/api/pandas.DataFrame.rank.html |
pandas.tseries.offsets.Minute.normalize | pandas.tseries.offsets.Minute.normalize | Minute.normalize#
| reference/api/pandas.tseries.offsets.Minute.normalize.html |
pandas.tseries.offsets.CustomBusinessMonthBegin.normalize | pandas.tseries.offsets.CustomBusinessMonthBegin.normalize | CustomBusinessMonthBegin.normalize#
| reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.normalize.html |
pandas.io.formats.style.Styler.concat | `pandas.io.formats.style.Styler.concat`
Append another Styler to combine the output into a single table.
```
>>> df = DataFrame([[4, 6], [1, 9], [3, 4], [5, 5], [9,6]],
... columns=["Mike", "Jim"],
... index=["Mon", "Tue", "Wed", "Thurs", "Fri"])
>>> styler = df.style.concat(df.agg(["sum"]).style)
``` | Styler.concat(other)[source]#
Append another Styler to combine the output into a single table.
New in version 1.5.0.
Parameters
otherStylerThe other Styler object which has already been styled and formatted. The
data for this Styler must have the same columns as the original, and the
number of index levels must also be the same to render correctly.
Returns
selfStyler
Notes
The purpose of this method is to extend existing styled dataframes with other
metrics that may be useful but may not conform to the original’s structure.
For example adding a sub total row, or displaying metrics such as means,
variance or counts.
Styles that are applied using the apply, applymap, apply_index
and applymap_index, and formatting applied with format and
format_index will be preserved.
Warning
Only the output methods to_html, to_string and to_latex
currently work with concatenated Stylers.
Other output methods, including to_excel, do not work with
concatenated Stylers.
The following should be noted:
table_styles, table_attributes, caption and uuid are all
inherited from the original Styler and not other.
hidden columns and hidden index levels will be inherited from the
original Styler
css will be inherited from the original Styler, and the value of
keys data, row_heading and row will be prepended with
foot0_. If more concats are chained, their styles will be prepended
with foot1_, ‘’foot_2’’, etc., and if a concatenated style have
another concatanated style, the second style will be prepended with
foot{parent}_foot{child}_.
A common use case is to concatenate user defined functions with
DataFrame.agg or with described statistics via DataFrame.describe.
See examples.
Examples
A common use case is adding totals rows, or otherwise, via methods calculated
in DataFrame.agg.
>>> df = DataFrame([[4, 6], [1, 9], [3, 4], [5, 5], [9,6]],
... columns=["Mike", "Jim"],
... index=["Mon", "Tue", "Wed", "Thurs", "Fri"])
>>> styler = df.style.concat(df.agg(["sum"]).style)
Since the concatenated object is a Styler the existing functionality can be
used to conditionally format it as well as the original.
>>> descriptors = df.agg(["sum", "mean", lambda s: s.dtype])
>>> descriptors.index = ["Total", "Average", "dtype"]
>>> other = (descriptors.style
... .highlight_max(axis=1, subset=(["Total", "Average"], slice(None)))
... .format(subset=("Average", slice(None)), precision=2, decimal=",")
... .applymap(lambda v: "font-weight: bold;"))
>>> styler = (df.style
... .highlight_max(color="salmon")
... .set_table_styles([{"selector": ".foot_row0",
... "props": "border-top: 1px solid black;"}]))
>>> styler.concat(other)
When other has fewer index levels than the original Styler it is possible
to extend the index in other, with placeholder levels.
>>> df = DataFrame([[1], [2]], index=pd.MultiIndex.from_product([[0], [1, 2]]))
>>> descriptors = df.agg(["sum"])
>>> descriptors.index = pd.MultiIndex.from_product([[""], descriptors.index])
>>> df.style.concat(descriptors.style)
| reference/api/pandas.io.formats.style.Styler.concat.html |
pandas.io.formats.style.Styler.to_string | `pandas.io.formats.style.Styler.to_string`
Write Styler to a file, buffer or string in text format. | Styler.to_string(buf=None, *, encoding=None, sparse_index=None, sparse_columns=None, max_rows=None, max_columns=None, delimiter=' ')[source]#
Write Styler to a file, buffer or string in text format.
New in version 1.5.0.
Parameters
bufstr, path object, file-like object, optionalString, path object (implementing os.PathLike[str]), or file-like
object implementing a string write() function. If None, the result is
returned as a string.
encodingstr, optionalCharacter encoding setting for file output (and meta tags if available).
Defaults to pandas.options.styler.render.encoding value of “utf-8”.
sparse_indexbool, optionalWhether to sparsify the display of a hierarchical index. Setting to False
will display each explicit level element in a hierarchical key for each row.
Defaults to pandas.options.styler.sparse.index value.
sparse_columnsbool, optionalWhether to sparsify the display of a hierarchical index. Setting to False
will display each explicit level element in a hierarchical key for each
column. Defaults to pandas.options.styler.sparse.columns value.
max_rowsint, optionalThe maximum number of rows that will be rendered. Defaults to
pandas.options.styler.render.max_rows, which is None.
max_columnsint, optionalThe maximum number of columns that will be rendered. Defaults to
pandas.options.styler.render.max_columns, which is None.
Rows and columns may be reduced if the number of total elements is
large. This value is set to pandas.options.styler.render.max_elements,
which is 262144 (18 bit browser rendering).
delimiterstr, default single spaceThe separator between data elements.
Returns
str or NoneIf buf is None, returns the result as a string. Otherwise returns None.
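A minimal sketch of rendering a Styler to plain text, assuming pandas >= 1.5; the DataFrame and values below are illustrative only:
```
>>> import pandas as pd
>>> df = pd.DataFrame({"A": [1, 2], "B": [3, 4]})
>>> text = df.style.to_string(delimiter="|")  # render the styled frame as plain text
>>> isinstance(text, str)  # with buf=None the result is returned as a string
True
```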
| reference/api/pandas.io.formats.style.Styler.to_string.html |
pandas.tseries.offsets.WeekOfMonth.is_year_start | `pandas.tseries.offsets.WeekOfMonth.is_year_start`
Return boolean whether a timestamp occurs on the year start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
``` | WeekOfMonth.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
| reference/api/pandas.tseries.offsets.WeekOfMonth.is_year_start.html |
pandas.api.extensions.ExtensionArray.take | `pandas.api.extensions.ExtensionArray.take`
Take elements from an array.
Indices to be taken. | ExtensionArray.take(indices, *, allow_fill=False, fill_value=None)[source]#
Take elements from an array.
Parameters
indicessequence of int or one-dimensional np.ndarray of intIndices to be taken.
allow_fillbool, default FalseHow to handle negative values in indices.
False: negative values in indices indicate positional indices
from the right (the default). This is similar to
numpy.take().
True: negative values in indices indicate
missing values. These values are set to fill_value. Any other
negative values raise a ValueError.
fill_valueany, optionalFill value to use for NA-indices when allow_fill is True.
This may be None, in which case the default NA value for
the type, self.dtype.na_value, is used.
For many ExtensionArrays, there will be two representations of
fill_value: a user-facing “boxed” scalar, and a low-level
physical NA value. fill_value should be the user-facing version,
and the implementation should handle translating that to the
physical version for processing the take if necessary.
Returns
ExtensionArray
Raises
IndexErrorWhen the indices are out of bounds for the array.
ValueErrorWhen indices contains negative values other than -1
and allow_fill is True.
See also
numpy.takeTake elements from an array along an axis.
api.extensions.takeTake elements from an array.
Notes
ExtensionArray.take is called by Series.__getitem__, .loc,
iloc, when indices is a sequence of values. Additionally,
it’s called by Series.reindex(), or any other method
that causes realignment, with a fill_value.
Examples
Here’s an example implementation, which relies on casting the
extension array to object dtype. This uses the helper method
pandas.api.extensions.take().
def take(self, indices, allow_fill=False, fill_value=None):
    from pandas.core.algorithms import take

    # If the ExtensionArray is backed by an ndarray, then
    # just pass that here instead of coercing to object.
    data = self.astype(object)

    if allow_fill and fill_value is None:
        fill_value = self.dtype.na_value

    # fill value should always be translated from the scalar
    # type for the array, to the physical storage type for
    # the data, before passing to take.
    result = take(data, indices, fill_value=fill_value,
                  allow_fill=allow_fill)
    return self._from_sequence(result, dtype=self.dtype)
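As a hedged usage sketch (the Int64 array below is only an illustration), allow_fill switches the meaning of -1 from "count from the end" to "missing value":
```
>>> import pandas as pd
>>> arr = pd.array([10, 20, 30], dtype="Int64")
>>> arr.take([0, -1])  # default: -1 counts from the end, like numpy.take
<IntegerArray>
[10, 30]
Length: 2, dtype: Int64
>>> arr.take([0, -1], allow_fill=True)  # -1 now marks a missing position
<IntegerArray>
[10, <NA>]
Length: 2, dtype: Int64
```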
| reference/api/pandas.api.extensions.ExtensionArray.take.html |
pandas.tseries.offsets.SemiMonthBegin.is_year_end | `pandas.tseries.offsets.SemiMonthBegin.is_year_end`
Return boolean whether a timestamp occurs on the year end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
``` | SemiMonthBegin.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
| reference/api/pandas.tseries.offsets.SemiMonthBegin.is_year_end.html |
pandas.tseries.offsets.Milli.base | `pandas.tseries.offsets.Milli.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal. | Milli.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
| reference/api/pandas.tseries.offsets.Milli.base.html |
pandas.Timestamp.is_month_start | `pandas.Timestamp.is_month_start`
Return True if date is first day of month.
Examples
```
>>> ts = pd.Timestamp(2020, 3, 14)
>>> ts.is_month_start
False
``` | Timestamp.is_month_start#
Return True if date is first day of month.
Examples
>>> ts = pd.Timestamp(2020, 3, 14)
>>> ts.is_month_start
False
>>> ts = pd.Timestamp(2020, 1, 1)
>>> ts.is_month_start
True
| reference/api/pandas.Timestamp.is_month_start.html |
pandas.CategoricalIndex.reorder_categories | `pandas.CategoricalIndex.reorder_categories`
Reorder categories as specified in new_categories.
new_categories need to include all old categories and no new category
items. | CategoricalIndex.reorder_categories(*args, **kwargs)[source]#
Reorder categories as specified in new_categories.
new_categories need to include all old categories and no new category
items.
Parameters
new_categoriesIndex-likeThe categories in new order.
orderedbool, optionalWhether or not the categorical is treated as an ordered categorical.
If not given, do not change the ordered information.
inplacebool, default FalseWhether or not to reorder the categories inplace or return a copy of
this categorical with reordered categories.
Deprecated since version 1.3.0.
Returns
catCategorical or NoneCategorical with reordered categories or None if inplace=True.
Raises
ValueErrorIf the new categories do not contain all old category items or any
new ones
See also
rename_categoriesRename categories.
add_categoriesAdd new categories.
remove_categoriesRemove the specified categories.
remove_unused_categoriesRemove categories which are not used.
set_categoriesSet the categories to the specified ones.
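A small illustrative sketch (the index values are made up); the new order must contain exactly the old categories:
```
>>> import pandas as pd
>>> idx = pd.CategoricalIndex(["a", "b", "c"], categories=["a", "b", "c"])
>>> idx.reorder_categories(["c", "b", "a"], ordered=True)
CategoricalIndex(['a', 'b', 'c'], categories=['c', 'b', 'a'], ordered=True, dtype='category')
```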
| reference/api/pandas.CategoricalIndex.reorder_categories.html |
pandas.tseries.offsets.CustomBusinessMonthBegin.apply | pandas.tseries.offsets.CustomBusinessMonthBegin.apply | CustomBusinessMonthBegin.apply()#
| reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.apply.html |
pandas.Interval.mid | `pandas.Interval.mid`
Return the midpoint of the Interval. | Interval.mid#
Return the midpoint of the Interval.
| reference/api/pandas.Interval.mid.html |
pandas.io.formats.style.Styler.bar | `pandas.io.formats.style.Styler.bar`
Draw bar chart in the cell backgrounds. | Styler.bar(subset=None, axis=0, *, color=None, cmap=None, width=100, height=100, align='mid', vmin=None, vmax=None, props='width: 10em;')[source]#
Draw bar chart in the cell backgrounds.
Changed in version 1.4.0.
Parameters
subsetlabel, array-like, IndexSlice, optionalA valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input
or single key, to DataFrame.loc[:, <subset>] where the columns are
prioritised, to limit data to before applying the function.
axis{0 or ‘index’, 1 or ‘columns’, None}, default 0Apply to each column (axis=0 or 'index'), to each row
(axis=1 or 'columns'), or to the entire DataFrame at once
with axis=None.
colorstr or 2-tuple/listIf a str is passed, the color is the same for both
negative and positive numbers. If 2-tuple/list is used, the
first element is the color_negative and the second is the
color_positive (eg: [‘#d65f5f’, ‘#5fba7d’]).
cmapstr, matplotlib.cm.ColorMapA string name of a matplotlib Colormap, or a Colormap object. Cannot be
used together with color.
New in version 1.4.0.
widthfloat, default 100The percentage of the cell, measured from the left, in which to draw the
bars, in [0, 100].
heightfloat, default 100The percentage height of the bar in the cell, centrally aligned, in [0,100].
New in version 1.4.0.
alignstr, int, float, callable, default ‘mid’How to align the bars within the cells relative to a width adjusted center.
If string must be one of:
‘left’ : bars are drawn rightwards from the minimum data value.
‘right’ : bars are drawn leftwards from the maximum data value.
‘zero’ : a value of zero is located at the center of the cell.
‘mid’ : a value of (max-min)/2 is located at the center of the cell,
or if all values are negative (positive) the zero is
aligned at the right (left) of the cell.
‘mean’ : the mean value of the data is located at the center of the cell.
If a float or integer is given this will indicate the center of the cell.
If a callable should take a 1d or 2d array and return a scalar.
Changed in version 1.4.0.
vminfloat, optionalMinimum bar value, defining the left hand limit
of the bar drawing range, lower values are clipped to vmin.
When None (default): the minimum value of the data will be used.
vmaxfloat, optionalMaximum bar value, defining the right hand limit
of the bar drawing range, higher values are clipped to vmax.
When None (default): the maximum value of the data will be used.
propsstr, optionalThe base CSS of the cell that is extended to add the bar chart. Defaults to
“width: 10em;”.
New in version 1.4.0.
Returns
selfStyler
Notes
This section of the user guide:
Table Visualization gives
a number of examples for different settings and color coordination.
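A minimal sketch with an illustrative DataFrame; the two-color form maps the first color to negative and the second to positive values:
```
>>> import pandas as pd
>>> df = pd.DataFrame({"x": [-3, 1, 4], "y": [2, 5, -1]})
>>> styler = df.style.bar(color=["#d65f5f", "#5fba7d"], align="mid", height=60)
>>> html = styler.to_html()  # bars are emitted as CSS backgrounds in the cells
```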
| reference/api/pandas.io.formats.style.Styler.bar.html |
pandas.core.groupby.DataFrameGroupBy.count | `pandas.core.groupby.DataFrameGroupBy.count`
Compute count of group, excluding missing values.
Count of values within each group. | DataFrameGroupBy.count()[source]#
Compute count of group, excluding missing values.
Returns
Series or DataFrameCount of values within each group.
See also
Series.groupbyApply a function groupby to a Series.
DataFrame.groupbyApply a function groupby to each row or column of a DataFrame.
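A short illustrative sketch showing that missing values are excluded from the per-group count (the column names are hypothetical):
```
>>> import pandas as pd
>>> df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1.0, None, 3.0]})
>>> counts = df.groupby("key").count()  # the NaN in 'val' is not counted
>>> counts.loc["a", "val"]
1
>>> counts.loc["b", "val"]
1
```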
| reference/api/pandas.core.groupby.DataFrameGroupBy.count.html |
Input/output | Input/output | Pickling#
read_pickle(filepath_or_buffer[, ...])
Load pickled pandas object (or any object) from file.
DataFrame.to_pickle(path[, compression, ...])
Pickle (serialize) object to file.
Flat file#
read_table(filepath_or_buffer, *[, sep, ...])
Read general delimited file into DataFrame.
read_csv(filepath_or_buffer, *[, sep, ...])
Read a comma-separated values (csv) file into DataFrame.
DataFrame.to_csv([path_or_buf, sep, na_rep, ...])
Write object to a comma-separated values (csv) file.
read_fwf(filepath_or_buffer, *[, colspecs, ...])
Read a table of fixed-width formatted lines into DataFrame.
Clipboard#
read_clipboard([sep])
Read text from clipboard and pass to read_csv.
DataFrame.to_clipboard([excel, sep])
Copy object to the system clipboard.
Excel#
read_excel(io[, sheet_name, header, names, ...])
Read an Excel file into a pandas DataFrame.
DataFrame.to_excel(excel_writer[, ...])
Write object to an Excel sheet.
ExcelFile.parse([sheet_name, header, names, ...])
Parse specified sheet(s) into a DataFrame.
Styler.to_excel(excel_writer[, sheet_name, ...])
Write Styler to an Excel sheet.
ExcelWriter(path[, engine, date_format, ...])
Class for writing DataFrame objects into excel sheets.
JSON#
read_json(path_or_buf, *[, orient, typ, ...])
Convert a JSON string to pandas object.
json_normalize(data[, record_path, meta, ...])
Normalize semi-structured JSON data into a flat table.
DataFrame.to_json([path_or_buf, orient, ...])
Convert the object to a JSON string.
build_table_schema(data[, index, ...])
Create a Table schema from data.
HTML#
read_html(io, *[, match, flavor, header, ...])
Read HTML tables into a list of DataFrame objects.
DataFrame.to_html([buf, columns, col_space, ...])
Render a DataFrame as an HTML table.
Styler.to_html([buf, table_uuid, ...])
Write Styler to a file, buffer or string in HTML-CSS format.
XML#
read_xml(path_or_buffer, *[, xpath, ...])
Read XML document into a DataFrame object.
DataFrame.to_xml([path_or_buffer, index, ...])
Render a DataFrame to an XML document.
Latex#
DataFrame.to_latex([buf, columns, ...])
Render object to a LaTeX tabular, longtable, or nested table.
Styler.to_latex([buf, column_format, ...])
Write Styler to a file, buffer or string in LaTeX format.
HDFStore: PyTables (HDF5)#
read_hdf(path_or_buf[, key, mode, errors, ...])
Read from the store, close it if we opened it.
HDFStore.put(key, value[, format, index, ...])
Store object in HDFStore.
HDFStore.append(key, value[, format, axes, ...])
Append to Table in file.
HDFStore.get(key)
Retrieve pandas object stored in file.
HDFStore.select(key[, where, start, stop, ...])
Retrieve pandas object stored in file, optionally based on where criteria.
HDFStore.info()
Print detailed information on the store.
HDFStore.keys([include])
Return a list of keys corresponding to objects stored in HDFStore.
HDFStore.groups()
Return a list of all the top-level nodes.
HDFStore.walk([where])
Walk the pytables group hierarchy for pandas objects.
Warning
One can store a subclass of DataFrame or Series to HDF5,
but the type of the subclass is lost upon storing.
Feather#
read_feather(path[, columns, use_threads, ...])
Load a feather-format object from the file path.
DataFrame.to_feather(path, **kwargs)
Write a DataFrame to the binary Feather format.
Parquet#
read_parquet(path[, engine, columns, ...])
Load a parquet object from the file path, returning a DataFrame.
DataFrame.to_parquet([path, engine, ...])
Write a DataFrame to the binary parquet format.
ORC#
read_orc(path[, columns])
Load an ORC object from the file path, returning a DataFrame.
DataFrame.to_orc([path, engine, index, ...])
Write a DataFrame to the ORC format.
SAS#
read_sas(filepath_or_buffer, *[, format, ...])
Read SAS files stored as either XPORT or SAS7BDAT format files.
SPSS#
read_spss(path[, usecols, convert_categoricals])
Load an SPSS file from the file path, returning a DataFrame.
SQL#
read_sql_table(table_name, con[, schema, ...])
Read SQL database table into a DataFrame.
read_sql_query(sql, con[, index_col, ...])
Read SQL query into a DataFrame.
read_sql(sql, con[, index_col, ...])
Read SQL query or database table into a DataFrame.
DataFrame.to_sql(name, con[, schema, ...])
Write records stored in a DataFrame to a SQL database.
Google BigQuery#
read_gbq(query[, project_id, index_col, ...])
Load data from Google BigQuery.
STATA#
read_stata(filepath_or_buffer, *[, ...])
Read Stata file into DataFrame.
DataFrame.to_stata(path, *[, convert_dates, ...])
Export DataFrame object to Stata dta format.
StataReader.data_label
Return data label of Stata file.
StataReader.value_labels()
Return a nested dict associating each variable name to its value and label.
StataReader.variable_labels()
Return a dict associating each variable name with corresponding label.
StataWriter.write_file()
Export DataFrame object to Stata dta format.
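As a hedged illustration of the read/write pairs listed above, a CSV round trip through an in-memory buffer (the frame is made up):
```
>>> import io
>>> import pandas as pd
>>> df = pd.DataFrame({"A": [1, 2], "B": ["x", "y"]})
>>> buf = io.StringIO()
>>> df.to_csv(buf, index=False)  # write with DataFrame.to_csv
>>> _ = buf.seek(0)
>>> pd.read_csv(buf).equals(df)  # read back with read_csv
True
```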
| reference/io.html |
pandas.tseries.offsets.YearEnd.__call__ | `pandas.tseries.offsets.YearEnd.__call__`
Call self as a function. | YearEnd.__call__(*args, **kwargs)#
Call self as a function.
| reference/api/pandas.tseries.offsets.YearEnd.__call__.html |
pandas.tseries.offsets.Easter.rollforward | `pandas.tseries.offsets.Easter.rollforward`
Roll provided date forward to next offset only if not on offset. | Easter.rollforward()#
Roll provided date forward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
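A small sketch: 2022-01-01 is not an Easter date, so rollforward moves it to the next offset date:
```
>>> import pandas as pd
>>> ts = pd.Timestamp(2022, 1, 1)
>>> pd.offsets.Easter().rollforward(ts)  # next Easter Sunday on or after ts
Timestamp('2022-04-17 00:00:00')
```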
| reference/api/pandas.tseries.offsets.Easter.rollforward.html |
pandas.TimedeltaIndex.days | `pandas.TimedeltaIndex.days`
Number of days for each element. | property TimedeltaIndex.days[source]#
Number of days for each element.
| reference/api/pandas.TimedeltaIndex.days.html |
pandas.tseries.offsets.Week.is_anchored | `pandas.tseries.offsets.Week.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
``` | Week.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
| reference/api/pandas.tseries.offsets.Week.is_anchored.html |
pandas arrays, scalars, and data types | pandas arrays, scalars, and data types
For most data types, pandas uses NumPy arrays as the concrete
objects contained within an Index, Series, or
DataFrame.
For some data types, pandas extends NumPy’s type system. String aliases for these types
can be found at dtypes.
Kind of Data
pandas Data Type
Scalar | Objects#
For most data types, pandas uses NumPy arrays as the concrete
objects contained within an Index, Series, or
DataFrame.
For some data types, pandas extends NumPy’s type system. String aliases for these types
can be found at dtypes.
Kind of Data | pandas Data Type | Scalar | Array
TZ-aware datetime | DatetimeTZDtype | Timestamp | Datetimes
Timedeltas | (none) | Timedelta | Timedeltas
Period (time spans) | PeriodDtype | Period | Periods
Intervals | IntervalDtype | Interval | Intervals
Nullable Integer | Int64Dtype, … | (none) | Nullable integer
Categorical | CategoricalDtype | (none) | Categoricals
Sparse | SparseDtype | (none) | Sparse
Strings | StringDtype | str | Strings
Boolean (with NA) | BooleanDtype | bool | Nullable Boolean
PyArrow | ArrowDtype | Python Scalars or NA | PyArrow
pandas and third-party libraries can extend NumPy’s type system (see Extension types).
The top-level array() method can be used to create a new array, which may be
stored in a Series, Index, or as a column in a DataFrame.
array(data[, dtype, copy])
Create an array.
PyArrow#
Warning
This feature is experimental, and the API can change in a future release without warning.
The arrays.ArrowExtensionArray is backed by a pyarrow.ChunkedArray with a
pyarrow.DataType instead of a NumPy array and data type. The .dtype of a arrays.ArrowExtensionArray
is an ArrowDtype.
Pyarrow provides similar array and data type
support as NumPy including first-class nullability support for all data types, immutability and more.
Note
For string types (pyarrow.string(), string[pyarrow]), PyArrow support is still facilitated
by arrays.ArrowStringArray and StringDtype("pyarrow"). See the string section
below.
While individual values in an arrays.ArrowExtensionArray are stored as PyArrow objects, scalars are returned
as Python scalars corresponding to the data type, e.g. a PyArrow int64 will be returned as Python int, or NA for missing
values.
arrays.ArrowExtensionArray(values)
Pandas ExtensionArray backed by a PyArrow ChunkedArray.
ArrowDtype(pyarrow_dtype)
An ExtensionDtype for PyArrow data types.
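A minimal sketch, assuming pandas >= 1.5 with pyarrow installed; the values are illustrative:
```
>>> import pandas as pd
>>> import pyarrow as pa
>>> ser = pd.Series([1, 2, None], dtype=pd.ArrowDtype(pa.int64()))
>>> ser.dtype
int64[pyarrow]
>>> ser[2] is pd.NA  # missing values come back as pd.NA
True
```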
Datetimes#
NumPy cannot natively represent timezone-aware datetimes. pandas supports this
with the arrays.DatetimeArray extension array, which can hold timezone-naive
or timezone-aware values.
Timestamp, a subclass of datetime.datetime, is pandas’
scalar type for timezone-naive or timezone-aware datetime data.
Timestamp([ts_input, freq, tz, unit, year, ...])
Pandas replacement for python datetime.datetime object.
Properties#
Timestamp.asm8
Return numpy datetime64 format in nanoseconds.
Timestamp.day
Timestamp.dayofweek
Return day of the week.
Timestamp.day_of_week
Return day of the week.
Timestamp.dayofyear
Return the day of the year.
Timestamp.day_of_year
Return the day of the year.
Timestamp.days_in_month
Return the number of days in the month.
Timestamp.daysinmonth
Return the number of days in the month.
Timestamp.fold
Timestamp.hour
Timestamp.is_leap_year
Return True if year is a leap year.
Timestamp.is_month_end
Return True if date is last day of month.
Timestamp.is_month_start
Return True if date is first day of month.
Timestamp.is_quarter_end
Return True if date is last day of the quarter.
Timestamp.is_quarter_start
Return True if date is first day of the quarter.
Timestamp.is_year_end
Return True if date is last day of the year.
Timestamp.is_year_start
Return True if date is first day of the year.
Timestamp.max
Timestamp.microsecond
Timestamp.min
Timestamp.minute
Timestamp.month
Timestamp.nanosecond
Timestamp.quarter
Return the quarter of the year.
Timestamp.resolution
Timestamp.second
Timestamp.tz
Alias for tzinfo.
Timestamp.tzinfo
Timestamp.value
Timestamp.week
Return the week number of the year.
Timestamp.weekofyear
Return the week number of the year.
Timestamp.year
Methods#
Timestamp.astimezone(tz)
Convert timezone-aware Timestamp to another time zone.
Timestamp.ceil(freq[, ambiguous, nonexistent])
Return a new Timestamp ceiled to this resolution.
Timestamp.combine(date, time)
Combine date, time into datetime with same date and time fields.
Timestamp.ctime
Return ctime() style string.
Timestamp.date
Return date object with same year, month and day.
Timestamp.day_name
Return the day name of the Timestamp with specified locale.
Timestamp.dst
Return self.tzinfo.dst(self).
Timestamp.floor(freq[, ambiguous, nonexistent])
Return a new Timestamp floored to this resolution.
Timestamp.freq
Timestamp.freqstr
Return the frequency string.
Timestamp.fromordinal(ordinal[, freq, tz])
Construct a timestamp from a proleptic Gregorian ordinal.
Timestamp.fromtimestamp(ts)
Transform timestamp[, tz] to tz's local time from POSIX timestamp.
Timestamp.isocalendar
Return a 3-tuple containing ISO year, week number, and weekday.
Timestamp.isoformat
Return the time formatted according to ISO 8601.
Timestamp.isoweekday()
Return the day of the week represented by the date.
Timestamp.month_name
Return the month name of the Timestamp with specified locale.
Timestamp.normalize
Normalize Timestamp to midnight, preserving tz information.
Timestamp.now([tz])
Return new Timestamp object representing current time local to tz.
Timestamp.replace([year, month, day, hour, ...])
Implements datetime.replace, handles nanoseconds.
Timestamp.round(freq[, ambiguous, nonexistent])
Round the Timestamp to the specified resolution.
Timestamp.strftime(format)
Return a formatted string of the Timestamp.
Timestamp.strptime(string, format)
Function is not implemented.
Timestamp.time
Return time object with same time but with tzinfo=None.
Timestamp.timestamp
Return POSIX timestamp as float.
Timestamp.timetuple
Return time tuple, compatible with time.localtime().
Timestamp.timetz
Return time object with same time and tzinfo.
Timestamp.to_datetime64
Return a numpy.datetime64 object with 'ns' precision.
Timestamp.to_numpy
Convert the Timestamp to a NumPy datetime64.
Timestamp.to_julian_date()
Convert TimeStamp to a Julian Date.
Timestamp.to_period
Return a Period of which this timestamp is an observation.
Timestamp.to_pydatetime
Convert a Timestamp object to a native Python datetime object.
Timestamp.today([tz])
Return the current time in the local timezone.
Timestamp.toordinal
Return proleptic Gregorian ordinal.
Timestamp.tz_convert(tz)
Convert timezone-aware Timestamp to another time zone.
Timestamp.tz_localize(tz[, ambiguous, ...])
Localize the Timestamp to a timezone.
Timestamp.tzname
Return self.tzinfo.tzname(self).
Timestamp.utcfromtimestamp(ts)
Construct a naive UTC datetime from a POSIX timestamp.
Timestamp.utcnow()
Return a new Timestamp representing UTC day and time.
Timestamp.utcoffset
Return self.tzinfo.utcoffset(self).
Timestamp.utctimetuple
Return UTC time tuple, compatible with time.localtime().
Timestamp.weekday()
Return the day of the week represented by the date.
A collection of timestamps may be stored in a arrays.DatetimeArray.
For timezone-aware data, the .dtype of a arrays.DatetimeArray is a
DatetimeTZDtype. For timezone-naive data, np.dtype("datetime64[ns]")
is used.
If the data are timezone-aware, then every value in the array must have the same timezone.
arrays.DatetimeArray(values[, dtype, freq, copy])
Pandas ExtensionArray for tz-naive or tz-aware datetime data.
DatetimeTZDtype([unit, tz])
An ExtensionDtype for timezone-aware datetime data.
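A brief sketch of the two dtypes described above (the dates are illustrative):
```
>>> import pandas as pd
>>> naive = pd.Series(pd.date_range("2022-01-01", periods=2))
>>> naive.dtype  # timezone-naive data uses a plain NumPy dtype
dtype('<M8[ns]')
>>> aware = pd.Series(pd.date_range("2022-01-01", periods=2, tz="US/Eastern"))
>>> aware.dtype  # timezone-aware data uses DatetimeTZDtype
datetime64[ns, US/Eastern]
```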
Timedeltas#
NumPy can natively represent timedeltas. pandas provides Timedelta
for symmetry with Timestamp.
Timedelta([value, unit])
Represents a duration, the difference between two dates or times.
Properties#
Timedelta.asm8
Return a numpy timedelta64 array scalar view.
Timedelta.components
Return a components namedtuple-like.
Timedelta.days
Timedelta.delta
(DEPRECATED) Return the timedelta in nanoseconds (ns), for internal compatibility.
Timedelta.freq
(DEPRECATED) Freq property.
Timedelta.is_populated
(DEPRECATED) Is_populated property.
Timedelta.max
Timedelta.microseconds
Timedelta.min
Timedelta.nanoseconds
Return the number of nanoseconds (n), where 0 <= n < 1 microsecond.
Timedelta.resolution
Timedelta.seconds
Timedelta.value
Timedelta.view
Array view compatibility.
Methods#
Timedelta.ceil(freq)
Return a new Timedelta ceiled to this resolution.
Timedelta.floor(freq)
Return a new Timedelta floored to this resolution.
Timedelta.isoformat
Format the Timedelta as ISO 8601 Duration.
Timedelta.round(freq)
Round the Timedelta to the specified resolution.
Timedelta.to_pytimedelta
Convert a pandas Timedelta object into a python datetime.timedelta object.
Timedelta.to_timedelta64
Return a numpy.timedelta64 object with 'ns' precision.
Timedelta.to_numpy
Convert the Timedelta to a NumPy timedelta64.
Timedelta.total_seconds
Total seconds in the duration.
A collection of Timedelta may be stored in a TimedeltaArray.
arrays.TimedeltaArray(values[, dtype, freq, ...])
Pandas ExtensionArray for timedelta data.
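A short illustrative sketch of Timedelta as the scalar for duration data:
```
>>> import pandas as pd
>>> td = pd.Timestamp("2022-01-02") - pd.Timestamp("2022-01-01 12:00")
>>> td
Timedelta('0 days 12:00:00')
>>> td.total_seconds()
43200.0
```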
Periods#
pandas represents spans of times as Period objects.
Period#
Period([value, freq, ordinal, year, month, ...])
Represents a period of time.
Properties#
Period.day
Get day of the month that a Period falls on.
Period.dayofweek
Day of the week the period lies in, with Monday=0 and Sunday=6.
Period.day_of_week
Day of the week the period lies in, with Monday=0 and Sunday=6.
Period.dayofyear
Return the day of the year.
Period.day_of_year
Return the day of the year.
Period.days_in_month
Get the total number of days in the month that this period falls on.
Period.daysinmonth
Get the total number of days of the month that this period falls on.
Period.end_time
Get the Timestamp for the end of the period.
Period.freq
Period.freqstr
Return a string representation of the frequency.
Period.hour
Get the hour of the day component of the Period.
Period.is_leap_year
Return True if the period's year is in a leap year.
Period.minute
Get minute of the hour component of the Period.
Period.month
Return the month this Period falls on.
Period.ordinal
Period.quarter
Return the quarter this Period falls on.
Period.qyear
Fiscal year the Period lies in according to its starting-quarter.
Period.second
Get the second component of the Period.
Period.start_time
Get the Timestamp for the start of the period.
Period.week
Get the week of the year on the given Period.
Period.weekday
Day of the week the period lies in, with Monday=0 and Sunday=6.
Period.weekofyear
Get the week of the year on the given Period.
Period.year
Return the year this Period falls on.
Methods#
Period.asfreq
Convert Period to desired frequency, at the start or end of the interval.
Period.now
Return the period of now's date.
Period.strftime
Returns a formatted string representation of the Period.
Period.to_timestamp
Return the Timestamp representation of the Period.
A collection of Period may be stored in a arrays.PeriodArray.
Every period in a arrays.PeriodArray must have the same freq.
arrays.PeriodArray(values[, dtype, freq, copy])
Pandas ExtensionArray for storing Period data.
PeriodDtype([freq])
An ExtensionDtype for Period data.
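A minimal sketch of Period as a time-span scalar (the dates are illustrative):
```
>>> import pandas as pd
>>> p = pd.Period("2022-01", freq="M")
>>> p.start_time, p.end_time  # the span covers the whole month
(Timestamp('2022-01-01 00:00:00'), Timestamp('2022-01-31 23:59:59.999999999'))
>>> p.asfreq("Q")  # convert the span to quarterly frequency
Period('2022Q1', 'Q-DEC')
```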
Intervals#
Arbitrary intervals can be represented as Interval objects.
Interval
Immutable object implementing an Interval, a bounded slice-like interval.
Properties#
Interval.closed
String describing the inclusive side of the interval.
Interval.closed_left
Check if the interval is closed on the left side.
Interval.closed_right
Check if the interval is closed on the right side.
Interval.is_empty
Indicates if an interval is empty, meaning it contains no points.
Interval.left
Left bound for the interval.
Interval.length
Return the length of the Interval.
Interval.mid
Return the midpoint of the Interval.
Interval.open_left
Check if the interval is open on the left side.
Interval.open_right
Check if the interval is open on the right side.
Interval.overlaps
Check whether two Interval objects overlap.
Interval.right
Right bound for the interval.
A collection of intervals may be stored in an arrays.IntervalArray.
arrays.IntervalArray(data[, closed, dtype, ...])
Pandas array for interval data that are closed on the same side.
IntervalDtype([subtype, closed])
An ExtensionDtype for Interval data.
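A brief sketch of an interval array built from breakpoints (the breaks are illustrative):
```
>>> import pandas as pd
>>> arr = pd.arrays.IntervalArray.from_breaks([0, 1, 2, 3])
>>> arr
<IntervalArray>
[(0, 1], (1, 2], (2, 3]]
Length: 3, dtype: interval[int64, right]
>>> arr[0].mid  # scalar elements are Interval objects
0.5
```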
Nullable integer#
numpy.ndarray cannot natively represent integer-data with missing values.
pandas provides this through arrays.IntegerArray.
arrays.IntegerArray(values, mask[, copy])
Array of integer (optional missing) values.
Int8Dtype()
An ExtensionDtype for int8 integer data.
Int16Dtype()
An ExtensionDtype for int16 integer data.
Int32Dtype()
An ExtensionDtype for int32 integer data.
Int64Dtype()
An ExtensionDtype for int64 integer data.
UInt8Dtype()
An ExtensionDtype for uint8 integer data.
UInt16Dtype()
An ExtensionDtype for uint16 integer data.
UInt32Dtype()
An ExtensionDtype for uint32 integer data.
UInt64Dtype()
An ExtensionDtype for uint64 integer data.
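A minimal sketch; pd.array builds an IntegerArray from the "Int64" alias (the values are illustrative):
```
>>> import pandas as pd
>>> arr = pd.array([1, 2, None], dtype="Int64")
>>> arr
<IntegerArray>
[1, 2, <NA>]
Length: 3, dtype: Int64
```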
Categoricals#
pandas defines a custom data type for representing data that can take only a
limited, fixed set of values. The dtype of a Categorical can be described by
a CategoricalDtype.
CategoricalDtype([categories, ordered])
Type for categorical data with the categories and orderedness.
CategoricalDtype.categories
An Index containing the unique categories allowed.
CategoricalDtype.ordered
Whether the categories have an ordered relationship.
Categorical data can be stored in a pandas.Categorical
Categorical(values[, categories, ordered, ...])
Represent a categorical variable in classic R / S-plus fashion.
The alternative Categorical.from_codes() constructor can be used when you
have the categories and integer codes already:
Categorical.from_codes(codes[, categories, ...])
Make a Categorical type from codes and categories or dtype.
The dtype information is available on the Categorical
Categorical.dtype
The CategoricalDtype for this instance.
Categorical.categories
The categories of this categorical.
Categorical.ordered
Whether the categories have an ordered relationship.
Categorical.codes
The category codes of this categorical.
np.asarray(categorical) works by implementing the array interface. Be aware, that this converts
the Categorical back to a NumPy array, so categories and order information is not preserved!
Categorical.__array__([dtype])
The numpy array interface.
A Categorical can be stored in a Series or DataFrame.
To create a Series of dtype category, use cat = s.astype(dtype) or
Series(..., dtype=dtype) where dtype is either
the string 'category' or
an instance of CategoricalDtype.
If the Series is of dtype CategoricalDtype, Series.cat can be used to change the categorical
data. See Categorical accessor for more.
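A small sketch of both construction routes mentioned above (the data are illustrative):
```
>>> import pandas as pd
>>> s = pd.Series(["a", "b", "a"], dtype="category")  # via the 'category' alias
>>> s.cat.categories
Index(['a', 'b'], dtype='object')
>>> dtype = pd.CategoricalDtype(["a", "b", "c"], ordered=True)
>>> s.astype(dtype).cat.ordered  # via an explicit CategoricalDtype instance
True
```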
Sparse#
Data where a single value is repeated many times (e.g. 0 or NaN) may
be stored efficiently as a arrays.SparseArray.
arrays.SparseArray(data[, sparse_index, ...])
An ExtensionArray for storing sparse data.
SparseDtype([dtype, fill_value])
Dtype for data stored in SparseArray.
The Series.sparse accessor may be used to access sparse-specific attributes
and methods if the Series contains sparse values. See
Sparse accessor and the user guide for more.
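A brief sketch of sparse storage of mostly-zero data (the values are illustrative):
```
>>> import pandas as pd
>>> s = pd.Series(pd.arrays.SparseArray([0, 0, 1, 0]))
>>> s.dtype
Sparse[int64, 0]
>>> s.sparse.density  # only one of four values is stored
0.25
```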
Strings#
When working with text data, where each valid element is a string or missing,
we recommend using StringDtype (with the alias "string").
arrays.StringArray(values[, copy])
Extension array for string data.
arrays.ArrowStringArray(values)
Extension array for string data in a pyarrow.ChunkedArray.
StringDtype([storage])
Extension dtype for string data.
The Series.str accessor is available for Series backed by a arrays.StringArray.
See String handling for more.
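A minimal sketch, assuming the default Python storage for StringDtype; the values are illustrative:
```
>>> import pandas as pd
>>> s = pd.Series(["a", None, "c"], dtype="string")
>>> s.str.upper().tolist()  # missing entries stay pd.NA rather than becoming NaN
['A', <NA>, 'C']
```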
Nullable Boolean#
The boolean dtype (with the alias "boolean") provides support for storing
boolean data (True, False) with missing values, which is not possible
with a bool numpy.ndarray.
arrays.BooleanArray(values, mask[, copy])
Array of boolean (True/False) data with missing values.
BooleanDtype()
Extension dtype for boolean data.
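A short sketch of three-valued (Kleene) logic with the nullable boolean dtype (the values are illustrative):
```
>>> import pandas as pd
>>> s = pd.Series([True, False, None], dtype="boolean")
>>> (s | True).tolist()  # True | NA is True under Kleene logic
[True, True, True]
>>> (s & True).tolist()  # True & NA stays NA
[True, False, <NA>]
```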
Utilities#
Constructors#
api.types.union_categoricals(to_union[, ...])
Combine list-like of Categorical-like, unioning categories.
api.types.infer_dtype
Return a string label of the type of a scalar or list-like of values.
api.types.pandas_dtype(dtype)
Convert input into a pandas only dtype object or a numpy dtype object.
Data type introspection#
api.types.is_bool_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a boolean dtype.
api.types.is_categorical_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Categorical dtype.
api.types.is_complex_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a complex dtype.
api.types.is_datetime64_any_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the datetime64 dtype.
api.types.is_datetime64_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the datetime64 dtype.
api.types.is_datetime64_ns_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the datetime64[ns] dtype.
api.types.is_datetime64tz_dtype(arr_or_dtype)
Check whether an array-like or dtype is of a DatetimeTZDtype dtype.
api.types.is_extension_type(arr)
(DEPRECATED) Check whether an array-like is of a pandas extension class instance.
api.types.is_extension_array_dtype(arr_or_dtype)
Check if an object is a pandas extension array type.
api.types.is_float_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a float dtype.
api.types.is_int64_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the int64 dtype.
api.types.is_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of an integer dtype.
api.types.is_interval_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Interval dtype.
api.types.is_numeric_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a numeric dtype.
api.types.is_object_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the object dtype.
api.types.is_period_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Period dtype.
api.types.is_signed_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a signed integer dtype.
api.types.is_string_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the string dtype.
api.types.is_timedelta64_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the timedelta64 dtype.
api.types.is_timedelta64_ns_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the timedelta64[ns] dtype.
api.types.is_unsigned_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of an unsigned integer dtype.
api.types.is_sparse(arr)
Check whether an array-like is a 1-D pandas sparse array.
Iterable introspection#
api.types.is_dict_like(obj)
Check if the object is dict-like.
api.types.is_file_like(obj)
Check if the object is a file-like object.
api.types.is_list_like
Check if the object is list-like.
api.types.is_named_tuple(obj)
Check if the object is a named tuple.
api.types.is_iterator
Check if the object is an iterator.
Scalar introspection#
api.types.is_bool
Return True if given object is boolean.
api.types.is_categorical(arr)
(DEPRECATED) Check whether an array-like is a Categorical instance.
api.types.is_complex
Return True if given object is complex.
api.types.is_float
Return True if given object is float.
api.types.is_hashable(obj)
Return True if hash(obj) will succeed, False otherwise.
api.types.is_integer
Return True if given object is integer.
api.types.is_interval
api.types.is_number(obj)
Check if the object is a number.
api.types.is_re(obj)
Check if the object is a regex pattern instance.
api.types.is_re_compilable(obj)
Check if the object can be compiled into a regex pattern instance.
api.types.is_scalar
Return True if given object is scalar.
| reference/arrays.html |
pandas.tseries.offsets.YearEnd.copy | `pandas.tseries.offsets.YearEnd.copy`
Return a copy of the frequency.
Examples
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
``` | YearEnd.copy()#
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
| reference/api/pandas.tseries.offsets.YearEnd.copy.html |
pandas.PeriodIndex.start_time | `pandas.PeriodIndex.start_time`
Get the Timestamp for the start of the period.
```
>>> period = pd.Period('2012-1-1', freq='D')
>>> period
Period('2012-01-01', 'D')
``` | property PeriodIndex.start_time[source]#
Get the Timestamp for the start of the period.
Returns
Timestamp
See also
Period.end_timeReturn the end Timestamp.
Period.dayofyearReturn the day of year.
Period.daysinmonthReturn the days in that month.
Period.dayofweekReturn the day of the week.
Examples
>>> period = pd.Period('2012-1-1', freq='D')
>>> period
Period('2012-01-01', 'D')
>>> period.start_time
Timestamp('2012-01-01 00:00:00')
>>> period.end_time
Timestamp('2012-01-01 23:59:59.999999999')
| reference/api/pandas.PeriodIndex.start_time.html |
pandas.Index.shape | `pandas.Index.shape`
Return a tuple of the shape of the underlying data. | property Index.shape[source]#
Return a tuple of the shape of the underlying data.
| reference/api/pandas.Index.shape.html |
pandas.Timestamp.to_julian_date | `pandas.Timestamp.to_julian_date`
Convert TimeStamp to a Julian Date.
```
>>> ts = pd.Timestamp('2020-03-14T15:32:52')
>>> ts.to_julian_date()
2458923.147824074
``` | Timestamp.to_julian_date()#
Convert TimeStamp to a Julian Date.
Julian date 0 is noon on January 1, 4713 BC.
Examples
>>> ts = pd.Timestamp('2020-03-14T15:32:52')
>>> ts.to_julian_date()
2458923.147824074
| reference/api/pandas.Timestamp.to_julian_date.html |
pandas.Series.cumsum | `pandas.Series.cumsum`
Return cumulative sum over a DataFrame or Series axis.
Returns a DataFrame or Series of the same size containing the cumulative
sum.
```
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
``` | Series.cumsum(axis=None, skipna=True, *args, **kwargs)[source]#
Return cumulative sum over a DataFrame or Series axis.
Returns a DataFrame or Series of the same size containing the cumulative
sum.
Parameters
axis{0 or ‘index’, 1 or ‘columns’}, default 0The index or the name of the axis. 0 is equivalent to None or ‘index’.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values. If an entire row/column is NA, the result
will be NA.
*args, **kwargsAdditional keywords have no effect but might be accepted for
compatibility with NumPy.
Returns
scalar or SeriesReturn cumulative sum of scalar or Series.
See also
core.window.expanding.Expanding.sumSimilar functionality but ignores NaN values.
Series.sumReturn the sum over Series axis.
Series.cummaxReturn cumulative maximum over Series axis.
Series.cumminReturn cumulative minimum over Series axis.
Series.cumsumReturn cumulative sum over Series axis.
Series.cumprodReturn cumulative product over Series axis.
Examples
Series
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
By default, NA values are ignored.
>>> s.cumsum()
0 2.0
1 NaN
2 7.0
3 6.0
4 6.0
dtype: float64
To include NA values in the operation, use skipna=False
>>> s.cumsum(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0],
... [3.0, np.nan],
... [1.0, 0.0]],
... columns=list('AB'))
>>> df
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
By default, iterates over rows and finds the sum
in each column. This is equivalent to axis=None or axis='index'.
>>> df.cumsum()
A B
0 2.0 1.0
1 5.0 NaN
2 6.0 1.0
To iterate over columns and find the sum in each row,
use axis=1
>>> df.cumsum(axis=1)
A B
0 2.0 3.0
1 3.0 NaN
2 1.0 1.0
| reference/api/pandas.Series.cumsum.html |
pandas.RangeIndex.from_range | `pandas.RangeIndex.from_range`
Create RangeIndex from a range object. | classmethod RangeIndex.from_range(data, name=None, dtype=None)[source]#
Create RangeIndex from a range object.
Returns
RangeIndex
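A minimal illustrative sketch (the name is made up):
```
>>> import pandas as pd
>>> pd.RangeIndex.from_range(range(0, 10, 2), name="even")
RangeIndex(start=0, stop=10, step=2, name='even')
```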
| reference/api/pandas.RangeIndex.from_range.html |
pandas.Series.mul | `pandas.Series.mul`
Return Multiplication of series and other, element-wise (binary operator mul).
```
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.multiply(b, fill_value=0)
a 1.0
b 0.0
c 0.0
d 0.0
e NaN
dtype: float64
``` | Series.mul(other, level=None, fill_value=None, axis=0)[source]#
Return Multiplication of series and other, element-wise (binary operator mul).
Equivalent to series * other, but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
otherSeries or scalar value
levelint or nameBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valueNone or float value, default None (NaN)Fill existing missing (NaN) values, and any new element needed for
successful Series alignment, with this value before computation.
If data in both corresponding Series locations is missing
the result of filling (at that location) will be missing.
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
Returns
SeriesThe result of the operation.
See also
Series.rmulReverse of the Multiplication operator, see Python documentation for more details.
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.multiply(b, fill_value=0)
a 1.0
b 0.0
c 0.0
d 0.0
e NaN
dtype: float64
| reference/api/pandas.Series.mul.html |
pandas.tseries.offsets.FY5253Quarter.is_quarter_start | `pandas.tseries.offsets.FY5253Quarter.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
``` | FY5253Quarter.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
| reference/api/pandas.tseries.offsets.FY5253Quarter.is_quarter_start.html |
pandas.Series.str.title | `pandas.Series.str.title`
Convert strings in the Series/Index to titlecase.
Equivalent to str.title().
```
>>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe'])
>>> s
0 lower
1 CAPITALS
2 this is a sentence
3 SwApCaSe
dtype: object
``` | Series.str.title()[source]#
Convert strings in the Series/Index to titlecase.
Equivalent to str.title().
Returns
Series or Index of object
See also
Series.str.lowerConverts all characters to lowercase.
Series.str.upperConverts all characters to uppercase.
Series.str.titleConverts first character of each word to uppercase and remaining to lowercase.
Series.str.capitalizeConverts first character to uppercase and remaining to lowercase.
Series.str.swapcaseConverts uppercase to lowercase and lowercase to uppercase.
Series.str.casefoldRemoves all case distinctions in the string.
Examples
>>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe'])
>>> s
0 lower
1 CAPITALS
2 this is a sentence
3 SwApCaSe
dtype: object
>>> s.str.lower()
0 lower
1 capitals
2 this is a sentence
3 swapcase
dtype: object
>>> s.str.upper()
0 LOWER
1 CAPITALS
2 THIS IS A SENTENCE
3 SWAPCASE
dtype: object
>>> s.str.title()
0 Lower
1 Capitals
2 This Is A Sentence
3 Swapcase
dtype: object
>>> s.str.capitalize()
0 Lower
1 Capitals
2 This is a sentence
3 Swapcase
dtype: object
>>> s.str.swapcase()
0 LOWER
1 capitals
2 THIS IS A SENTENCE
3 sWaPcAsE
dtype: object
| reference/api/pandas.Series.str.title.html |
pandas.tseries.offsets.FY5253Quarter.nanos | pandas.tseries.offsets.FY5253Quarter.nanos | FY5253Quarter.nanos#
| reference/api/pandas.tseries.offsets.FY5253Quarter.nanos.html |
pandas.tseries.offsets.LastWeekOfMonth.name | `pandas.tseries.offsets.LastWeekOfMonth.name`
Return a string representing the base frequency.
```
>>> pd.offsets.Hour().name
'H'
``` | LastWeekOfMonth.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
| reference/api/pandas.tseries.offsets.LastWeekOfMonth.name.html |
pandas.tseries.offsets.BQuarterBegin.nanos | pandas.tseries.offsets.BQuarterBegin.nanos | BQuarterBegin.nanos#
| reference/api/pandas.tseries.offsets.BQuarterBegin.nanos.html |
pandas.DataFrame.first_valid_index | `pandas.DataFrame.first_valid_index`
Return index for first non-NA value or None, if no non-NA value is found.
Notes | DataFrame.first_valid_index()[source]#
Return index for first non-NA value or None, if no non-NA value is found.
Returns
scalartype of index
Notes
If all elements are non-NA/null, returns None.
Also returns None for empty Series/DataFrame.
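A small illustrative sketch (the column and index labels are made up):
```
>>> import pandas as pd
>>> df = pd.DataFrame({"A": [None, 2.0, 3.0]}, index=["x", "y", "z"])
>>> df.first_valid_index()  # 'x' holds only NA, so the first valid label is 'y'
'y'
>>> pd.DataFrame({"A": [None, None]}).first_valid_index() is None
True
```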
| reference/api/pandas.DataFrame.first_valid_index.html |
pandas.io.formats.style.Styler.set_na_rep | `pandas.io.formats.style.Styler.set_na_rep`
Set the missing data representation on a Styler. | Styler.set_na_rep(na_rep)[source]#
Set the missing data representation on a Styler.
New in version 1.0.0.
Deprecated since version 1.3.0.
Parameters
na_repstr
Returns
selfStyler
Notes
This method is deprecated. See Styler.format()
| reference/api/pandas.io.formats.style.Styler.set_na_rep.html |
pandas.DataFrame.to_clipboard | `pandas.DataFrame.to_clipboard`
Copy object to the system clipboard.
```
>>> df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=['A', 'B', 'C'])
``` | DataFrame.to_clipboard(excel=True, sep=None, **kwargs)[source]#
Copy object to the system clipboard.
Write a text representation of object to the system clipboard.
This can be pasted into Excel, for example.
Parameters
excelbool, default TrueProduce output in a csv format for easy pasting into excel.
True, use the provided separator for csv pasting.
False, write a string representation of the object to the clipboard.
sepstr, default '\t'Field delimiter.
**kwargsThese parameters will be passed to DataFrame.to_csv.
See also
DataFrame.to_csvWrite a DataFrame to a comma-separated values (csv) file.
read_clipboardRead text from clipboard and pass to read_csv.
Notes
Requirements for your platform.
Linux : xclip, or xsel (with PyQt4 modules)
Windows : none
macOS : none
This method uses the processes developed for the package pyperclip. A
solution to render any output string format is given in the examples.
Examples
Copy the contents of a DataFrame to the clipboard.
>>> df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=['A', 'B', 'C'])
>>> df.to_clipboard(sep=',')
... # Wrote the following to the system clipboard:
... # ,A,B,C
... # 0,1,2,3
... # 1,4,5,6
We can omit the index by passing the keyword index and setting
it to false.
>>> df.to_clipboard(sep=',', index=False)
... # Wrote the following to the system clipboard:
... # A,B,C
... # 1,2,3
... # 4,5,6
Using the original pyperclip package for any string output format.
import pyperclip
html = df.style.to_html()
pyperclip.copy(html)
| reference/api/pandas.DataFrame.to_clipboard.html |
pandas.tseries.offsets.WeekOfMonth | `pandas.tseries.offsets.WeekOfMonth`
Describes monthly dates like “the Tuesday of the 2nd week of each month”.
A specific integer for the week of the month.
e.g. 0 is 1st week of month, 1 is the 2nd week, etc.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> ts + pd.offsets.WeekOfMonth()
Timestamp('2022-01-03 00:00:00')
``` | class pandas.tseries.offsets.WeekOfMonth#
Describes monthly dates like “the Tuesday of the 2nd week of each month”.
Parameters
nint
weekint {0, 1, 2, 3, …}, default 0A specific integer for the week of the month.
e.g. 0 is 1st week of month, 1 is the 2nd week, etc.
weekdayint {0, 1, …, 6}, default 0A specific integer for the day of the week.
0 is Monday
1 is Tuesday
2 is Wednesday
3 is Thursday
4 is Friday
5 is Saturday
6 is Sunday.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> ts + pd.offsets.WeekOfMonth()
Timestamp('2022-01-03 00:00:00')
Attributes
base
Returns a copy of the calling offset object with n=1 and all other attributes equal.
freqstr
Return a string representing the frequency.
kwds
Return a dict of extra parameters for the offset.
name
Return a string representing the base frequency.
n
nanos
normalize
rule_code
week
weekday
Methods
__call__(*args, **kwargs)
Call self as a function.
apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
copy
Return a copy of the frequency.
is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
is_month_end
Return boolean whether a timestamp occurs on the month end.
is_month_start
Return boolean whether a timestamp occurs on the month start.
is_on_offset
Return boolean whether a timestamp intersects with this frequency.
is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
is_year_end
Return boolean whether a timestamp occurs on the year end.
is_year_start
Return boolean whether a timestamp occurs on the year start.
rollback
Roll provided date backward to next offset only if not on offset.
rollforward
Roll provided date forward to next offset only if not on offset.
apply
isAnchored
onOffset
| reference/api/pandas.tseries.offsets.WeekOfMonth.html |
pandas.Index.is_monotonic_decreasing | `pandas.Index.is_monotonic_decreasing`
Return a boolean if the values are equal or decreasing.
```
>>> Index([3, 2, 1]).is_monotonic_decreasing
True
>>> Index([3, 2, 2]).is_monotonic_decreasing
True
>>> Index([3, 1, 2]).is_monotonic_decreasing
False
``` | property Index.is_monotonic_decreasing[source]#
Return a boolean if the values are equal or decreasing.
Examples
>>> Index([3, 2, 1]).is_monotonic_decreasing
True
>>> Index([3, 2, 2]).is_monotonic_decreasing
True
>>> Index([3, 1, 2]).is_monotonic_decreasing
False
| reference/api/pandas.Index.is_monotonic_decreasing.html |
pandas.errors.PerformanceWarning | `pandas.errors.PerformanceWarning`
Warning raised when there is a possible performance impact. | exception pandas.errors.PerformanceWarning[source]#
Warning raised when there is a possible performance impact.
| reference/api/pandas.errors.PerformanceWarning.html |
pandas.Index.unique | `pandas.Index.unique`
Return unique values in the index. | Index.unique(level=None)[source]#
Return unique values in the index.
Unique values are returned in order of appearance; this does NOT sort.
Parameters
level : int or hashable, optional
Only return values from specified level (for MultiIndex).
If int, gets the level by integer position, else by level name.
Returns
Index
See also
unique : Numpy array of unique values in that column.
Series.unique : Return unique values of Series object.
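A brief example (not part of the upstream docstring) showing that order of appearance is preserved:
>>> pd.Index(['b', 'a', 'b', 'c']).unique()
Index(['b', 'a', 'c'], dtype='object')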
| reference/api/pandas.Index.unique.html |
Comparison with Stata | Comparison with Stata
For potential users coming from Stata
this page is meant to demonstrate how different Stata operations would be
performed in pandas.
If you’re new to pandas, you might want to first read through 10 Minutes to pandas
to familiarize yourself with the library.
As is customary, we import pandas and NumPy as follows:
pandas
Stata | For potential users coming from Stata
this page is meant to demonstrate how different Stata operations would be
performed in pandas.
If you’re new to pandas, you might want to first read through 10 Minutes to pandas
to familiarize yourself with the library.
As is customary, we import pandas and NumPy as follows:
In [1]: import pandas as pd
In [2]: import numpy as np
Data structures#
General terminology translation#
pandas       Stata
DataFrame    data set
column       variable
row          observation
groupby      bysort
NaN          .
DataFrame#
A DataFrame in pandas is analogous to a Stata data set – a two-dimensional
data source with labeled columns that can be of different types. As will be
shown in this document, almost any operation that can be applied to a data set
in Stata can also be accomplished in pandas.
Series#
A Series is the data structure that represents one column of a
DataFrame. Stata doesn’t have a separate data structure for a single column,
but in general, working with a Series is analogous to referencing a column
of a data set in Stata.
Index#
Every DataFrame and Series has an Index – labels on the
rows of the data. Stata does not have an exactly analogous concept. In Stata, a data set’s
rows are essentially unlabeled, other than an implicit integer index that can be
accessed with _n.
In pandas, if no index is specified, an integer index is also used by default
(first row = 0, second row = 1, and so on). While using a labeled Index or
MultiIndex can enable sophisticated analyses and is ultimately an important
part of pandas to understand, for this comparison we will essentially ignore the
Index and just treat the DataFrame as a collection of columns. Please
see the indexing documentation for much more on how to use an
Index effectively.
Copies vs. in place operations#
Most pandas operations return copies of the Series/DataFrame. To make the changes “stick”,
you’ll need to either assign to a new variable:
sorted_df = df.sort_values("col1")
or overwrite the original one:
df = df.sort_values("col1")
Note
You will see an inplace=True keyword argument available for some methods:
df.sort_values("col1", inplace=True)
Its use is discouraged. More information.
Data input / output#
Constructing a DataFrame from values#
A Stata data set can be built from specified values by
placing the data after an input statement and
specifying the column names.
input x y
1 2
3 4
5 6
end
A pandas DataFrame can be constructed in many different ways,
but for a small number of values, it is often convenient to specify it as
a Python dictionary, where the keys are the column names
and the values are the data.
In [3]: df = pd.DataFrame({"x": [1, 3, 5], "y": [2, 4, 6]})
In [4]: df
Out[4]:
x y
0 1 2
1 3 4
2 5 6
Reading external data#
Like Stata, pandas provides utilities for reading in data from
many formats. The tips data set, found within the pandas
tests (csv)
will be used in many of the following examples.
Stata provides import delimited to read csv data into a data set in memory.
If the tips.csv file is in the current working directory, we can import it as follows.
import delimited tips.csv
The pandas method is read_csv(), which works similarly. Additionally, it will automatically download
the data set if presented with a url.
In [5]: url = (
...: "https://raw.githubusercontent.com/pandas-dev"
...: "/pandas/main/pandas/tests/io/data/csv/tips.csv"
...: )
...:
In [6]: tips = pd.read_csv(url)
In [7]: tips
Out[7]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
.. ... ... ... ... ... ... ...
239 29.03 5.92 Male No Sat Dinner 3
240 27.18 2.00 Female Yes Sat Dinner 2
241 22.67 2.00 Male Yes Sat Dinner 2
242 17.82 1.75 Male No Sat Dinner 2
243 18.78 3.00 Female No Thur Dinner 2
[244 rows x 7 columns]
Like import delimited, read_csv() can take a number of parameters to specify
how the data should be parsed. For example, if the data were instead tab delimited,
did not have column names, and existed in the current working directory,
the pandas command would be:
tips = pd.read_csv("tips.csv", sep="\t", header=None)
# alternatively, read_table is an alias to read_csv with tab delimiter
tips = pd.read_table("tips.csv", header=None)
pandas can also read Stata data sets in .dta format with the read_stata() function.
df = pd.read_stata("data.dta")
In addition to text/csv and Stata files, pandas supports a variety of other data formats
such as Excel, SAS, HDF5, Parquet, and SQL databases. These are all read via a pd.read_*
function. See the IO documentation for more details.
Limiting output#
By default, pandas will truncate output of large DataFrames to show the first and last rows.
This can be overridden by changing the pandas options, or using
DataFrame.head() or DataFrame.tail().
In [8]: tips.head(5)
Out[8]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
The equivalent in Stata would be:
list in 1/5
Exporting data#
The inverse of import delimited in Stata is export delimited
export delimited tips2.csv
Similarly in pandas, the opposite of read_csv is DataFrame.to_csv().
tips.to_csv("tips2.csv")
pandas can also export to Stata file format with the DataFrame.to_stata() method.
tips.to_stata("tips2.dta")
Data operations#
Operations on columns#
In Stata, arbitrary math expressions can be used with the generate and
replace commands on new or existing columns. The drop command drops
the column from the data set.
replace total_bill = total_bill - 2
generate new_bill = total_bill / 2
drop new_bill
pandas provides vectorized operations by specifying the individual Series in the
DataFrame. New columns can be assigned in the same way. The DataFrame.drop() method drops
a column from the DataFrame.
In [9]: tips["total_bill"] = tips["total_bill"] - 2
In [10]: tips["new_bill"] = tips["total_bill"] / 2
In [11]: tips
Out[11]:
total_bill tip sex smoker day time size new_bill
0 14.99 1.01 Female No Sun Dinner 2 7.495
1 8.34 1.66 Male No Sun Dinner 3 4.170
2 19.01 3.50 Male No Sun Dinner 3 9.505
3 21.68 3.31 Male No Sun Dinner 2 10.840
4 22.59 3.61 Female No Sun Dinner 4 11.295
.. ... ... ... ... ... ... ... ...
239 27.03 5.92 Male No Sat Dinner 3 13.515
240 25.18 2.00 Female Yes Sat Dinner 2 12.590
241 20.67 2.00 Male Yes Sat Dinner 2 10.335
242 15.82 1.75 Male No Sat Dinner 2 7.910
243 16.78 3.00 Female No Thur Dinner 2 8.390
[244 rows x 8 columns]
In [12]: tips = tips.drop("new_bill", axis=1)
Filtering#
Filtering in Stata is done with an if clause on one or more columns.
list if total_bill > 10
DataFrames can be filtered in multiple ways; the most intuitive of which is using
boolean indexing.
In [13]: tips[tips["total_bill"] > 10]
Out[13]:
total_bill tip sex smoker day time size
0 14.99 1.01 Female No Sun Dinner 2
2 19.01 3.50 Male No Sun Dinner 3
3 21.68 3.31 Male No Sun Dinner 2
4 22.59 3.61 Female No Sun Dinner 4
5 23.29 4.71 Male No Sun Dinner 4
.. ... ... ... ... ... ... ...
239 27.03 5.92 Male No Sat Dinner 3
240 25.18 2.00 Female Yes Sat Dinner 2
241 20.67 2.00 Male Yes Sat Dinner 2
242 15.82 1.75 Male No Sat Dinner 2
243 16.78 3.00 Female No Thur Dinner 2
[204 rows x 7 columns]
The above statement is simply passing a Series of True/False objects to the DataFrame,
returning all rows with True.
In [14]: is_dinner = tips["time"] == "Dinner"
In [15]: is_dinner
Out[15]:
0 True
1 True
2 True
3 True
4 True
...
239 True
240 True
241 True
242 True
243 True
Name: time, Length: 244, dtype: bool
In [16]: is_dinner.value_counts()
Out[16]:
True 176
False 68
Name: time, dtype: int64
In [17]: tips[is_dinner]
Out[17]:
total_bill tip sex smoker day time size
0 14.99 1.01 Female No Sun Dinner 2
1 8.34 1.66 Male No Sun Dinner 3
2 19.01 3.50 Male No Sun Dinner 3
3 21.68 3.31 Male No Sun Dinner 2
4 22.59 3.61 Female No Sun Dinner 4
.. ... ... ... ... ... ... ...
239 27.03 5.92 Male No Sat Dinner 3
240 25.18 2.00 Female Yes Sat Dinner 2
241 20.67 2.00 Male Yes Sat Dinner 2
242 15.82 1.75 Male No Sat Dinner 2
243 16.78 3.00 Female No Thur Dinner 2
[176 rows x 7 columns]
If/then logic#
In Stata, an if clause can also be used to create new columns.
generate bucket = "low" if total_bill < 10
replace bucket = "high" if total_bill >= 10
The same operation in pandas can be accomplished using
the where method from numpy.
In [18]: tips["bucket"] = np.where(tips["total_bill"] < 10, "low", "high")
In [19]: tips
Out[19]:
total_bill tip sex smoker day time size bucket
0 14.99 1.01 Female No Sun Dinner 2 high
1 8.34 1.66 Male No Sun Dinner 3 low
2 19.01 3.50 Male No Sun Dinner 3 high
3 21.68 3.31 Male No Sun Dinner 2 high
4 22.59 3.61 Female No Sun Dinner 4 high
.. ... ... ... ... ... ... ... ...
239 27.03 5.92 Male No Sat Dinner 3 high
240 25.18 2.00 Female Yes Sat Dinner 2 high
241 20.67 2.00 Male Yes Sat Dinner 2 high
242 15.82 1.75 Male No Sat Dinner 2 high
243 16.78 3.00 Female No Thur Dinner 2 high
[244 rows x 8 columns]
Date functionality#
Stata provides a variety of functions to do operations on
date/datetime columns.
generate date1 = mdy(1, 15, 2013)
generate date2 = date("Feb152015", "MDY")
generate date1_year = year(date1)
generate date2_month = month(date2)
* shift date to beginning of next month
generate date1_next = mdy(month(date1) + 1, 1, year(date1)) if month(date1) != 12
replace date1_next = mdy(1, 1, year(date1) + 1) if month(date1) == 12
generate months_between = mofd(date2) - mofd(date1)
list date1 date2 date1_year date2_month date1_next months_between
The equivalent pandas operations are shown below. In addition to these
functions, pandas supports other Time Series features
not available in Stata (such as time zone handling and custom offsets) –
see the timeseries documentation for more details.
In [20]: tips["date1"] = pd.Timestamp("2013-01-15")
In [21]: tips["date2"] = pd.Timestamp("2015-02-15")
In [22]: tips["date1_year"] = tips["date1"].dt.year
In [23]: tips["date2_month"] = tips["date2"].dt.month
In [24]: tips["date1_next"] = tips["date1"] + pd.offsets.MonthBegin()
In [25]: tips["months_between"] = tips["date2"].dt.to_period("M") - tips[
....: "date1"
....: ].dt.to_period("M")
....:
In [26]: tips[
....: ["date1", "date2", "date1_year", "date2_month", "date1_next", "months_between"]
....: ]
....:
Out[26]:
date1 date2 date1_year date2_month date1_next months_between
0 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
1 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
2 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
3 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
4 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
.. ... ... ... ... ... ...
239 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
240 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
241 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
242 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
243 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
[244 rows x 6 columns]
Selection of columns#
Stata provides keywords to select, drop, and rename columns.
keep sex total_bill tip
drop sex
rename total_bill total_bill_2
The same operations are expressed in pandas below.
Keep certain columns#
In [27]: tips[["sex", "total_bill", "tip"]]
Out[27]:
sex total_bill tip
0 Female 14.99 1.01
1 Male 8.34 1.66
2 Male 19.01 3.50
3 Male 21.68 3.31
4 Female 22.59 3.61
.. ... ... ...
239 Male 27.03 5.92
240 Female 25.18 2.00
241 Male 20.67 2.00
242 Male 15.82 1.75
243 Female 16.78 3.00
[244 rows x 3 columns]
Drop a column#
In [28]: tips.drop("sex", axis=1)
Out[28]:
total_bill tip smoker day time size
0 14.99 1.01 No Sun Dinner 2
1 8.34 1.66 No Sun Dinner 3
2 19.01 3.50 No Sun Dinner 3
3 21.68 3.31 No Sun Dinner 2
4 22.59 3.61 No Sun Dinner 4
.. ... ... ... ... ... ...
239 27.03 5.92 No Sat Dinner 3
240 25.18 2.00 Yes Sat Dinner 2
241 20.67 2.00 Yes Sat Dinner 2
242 15.82 1.75 No Sat Dinner 2
243 16.78 3.00 No Thur Dinner 2
[244 rows x 6 columns]
Rename a column#
In [29]: tips.rename(columns={"total_bill": "total_bill_2"})
Out[29]:
total_bill_2 tip sex smoker day time size
0 14.99 1.01 Female No Sun Dinner 2
1 8.34 1.66 Male No Sun Dinner 3
2 19.01 3.50 Male No Sun Dinner 3
3 21.68 3.31 Male No Sun Dinner 2
4 22.59 3.61 Female No Sun Dinner 4
.. ... ... ... ... ... ... ...
239 27.03 5.92 Male No Sat Dinner 3
240 25.18 2.00 Female Yes Sat Dinner 2
241 20.67 2.00 Male Yes Sat Dinner 2
242 15.82 1.75 Male No Sat Dinner 2
243 16.78 3.00 Female No Thur Dinner 2
[244 rows x 7 columns]
Sorting by values#
Sorting in Stata is accomplished via sort
sort sex total_bill
pandas has a DataFrame.sort_values() method, which takes a list of columns to sort by.
In [30]: tips = tips.sort_values(["sex", "total_bill"])
In [31]: tips
Out[31]:
total_bill tip sex smoker day time size
67 1.07 1.00 Female Yes Sat Dinner 1
92 3.75 1.00 Female Yes Fri Dinner 2
111 5.25 1.00 Female No Sat Dinner 1
145 6.35 1.50 Female No Thur Lunch 2
135 6.51 1.25 Female No Thur Lunch 2
.. ... ... ... ... ... ... ...
182 43.35 3.50 Male Yes Sun Dinner 3
156 46.17 5.00 Male No Sun Dinner 6
59 46.27 6.73 Male No Sat Dinner 4
212 46.33 9.00 Male No Sat Dinner 4
170 48.81 10.00 Male Yes Sat Dinner 3
[244 rows x 7 columns]
String processing#
Finding length of string#
Stata determines the length of a character string with the strlen() and
ustrlen() functions for ASCII and Unicode strings, respectively.
generate strlen_time = strlen(time)
generate ustrlen_time = ustrlen(time)
You can find the length of a character string with Series.str.len().
In Python 3, all strings are Unicode strings. len includes trailing blanks.
Use len and rstrip to exclude trailing blanks.
In [32]: tips["time"].str.len()
Out[32]:
67 6
92 6
111 6
145 5
135 5
..
182 6
156 6
59 6
212 6
170 6
Name: time, Length: 244, dtype: int64
In [33]: tips["time"].str.rstrip().str.len()
Out[33]:
67 6
92 6
111 6
145 5
135 5
..
182 6
156 6
59 6
212 6
170 6
Name: time, Length: 244, dtype: int64
Finding position of substring#
Stata determines the position of a character in a string with the strpos() function.
This takes the string defined by the first argument and searches for the
first position of the substring you supply as the second argument.
generate str_position = strpos(sex, "ale")
You can find the position of a character in a column of strings with the Series.str.find()
method. find searches for the first position of the substring. If the substring is found, the
method returns its position. If not found, it returns -1. Keep in mind that Python indexes are
zero-based.
In [34]: tips["sex"].str.find("ale")
Out[34]:
67 3
92 3
111 3
145 3
135 3
..
182 1
156 1
59 1
212 1
170 1
Name: sex, Length: 244, dtype: int64
Extracting substring by position#
Stata extracts a substring from a string based on its position with the substr() function.
generate short_sex = substr(sex, 1, 1)
With pandas you can use [] notation to extract a substring
from a string by position locations. Keep in mind that Python
indexes are zero-based.
In [35]: tips["sex"].str[0:1]
Out[35]:
67 F
92 F
111 F
145 F
135 F
..
182 M
156 M
59 M
212 M
170 M
Name: sex, Length: 244, dtype: object
Extracting nth word#
The Stata word() function returns the nth word from a string.
The first argument is the string you want to parse and the
second argument specifies which word you want to extract.
clear
input str20 string
"John Smith"
"Jane Cook"
end
generate first_name = word(string, 1)
generate last_name = word(string, -1)
The simplest way to extract words in pandas is to split the strings by spaces, then reference the
word by index. Note there are more powerful approaches should you need them.
In [36]: firstlast = pd.DataFrame({"String": ["John Smith", "Jane Cook"]})
In [37]: firstlast["First_Name"] = firstlast["String"].str.split(" ", expand=True)[0]
In [38]: firstlast["Last_Name"] = firstlast["String"].str.rsplit(" ", expand=True)[1]
In [39]: firstlast
Out[39]:
String First_Name Last_Name
0 John Smith John Smith
1 Jane Cook Jane Cook
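One of those more powerful approaches, shown here only as an optional sketch, is Series.str.extract() with a regular expression using named groups, which builds both columns in one step:
firstlast = pd.DataFrame({"String": ["John Smith", "Jane Cook"]})
# named groups become the column names of the extracted DataFrame
names = firstlast["String"].str.extract(r"(?P<First_Name>\w+)\s+(?P<Last_Name>\w+)")
firstlast = firstlast.join(names)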
Changing case#
The Stata strupper(), strlower(), strproper(),
ustrupper(), ustrlower(), and ustrtitle() functions
change the case of ASCII and Unicode strings, respectively.
clear
input str20 string
"John Smith"
"Jane Cook"
end
generate upper = strupper(string)
generate lower = strlower(string)
generate title = strproper(string)
list
The equivalent pandas methods are Series.str.upper(), Series.str.lower(), and
Series.str.title().
In [40]: firstlast = pd.DataFrame({"string": ["John Smith", "Jane Cook"]})
In [41]: firstlast["upper"] = firstlast["string"].str.upper()
In [42]: firstlast["lower"] = firstlast["string"].str.lower()
In [43]: firstlast["title"] = firstlast["string"].str.title()
In [44]: firstlast
Out[44]:
string upper lower title
0 John Smith JOHN SMITH john smith John Smith
1 Jane Cook JANE COOK jane cook Jane Cook
Merging#
The following tables will be used in the merge examples:
In [45]: df1 = pd.DataFrame({"key": ["A", "B", "C", "D"], "value": np.random.randn(4)})
In [46]: df1
Out[46]:
key value
0 A 0.469112
1 B -0.282863
2 C -1.509059
3 D -1.135632
In [47]: df2 = pd.DataFrame({"key": ["B", "D", "D", "E"], "value": np.random.randn(4)})
In [48]: df2
Out[48]:
key value
0 B 1.212112
1 D -0.173215
2 D 0.119209
3 E -1.044236
In Stata, to perform a merge, one data set must be in memory
and the other must be referenced as a file name on disk. In
contrast, Python must have both DataFrames already in memory.
By default, Stata performs an outer join, where all observations
from both data sets are left in memory after the merge. One can
keep only observations from the initial data set, the merged data set,
or the intersection of the two by using the values created in the
_merge variable.
* First create df2 and save to disk
clear
input str1 key
B
D
D
E
end
generate value = rnormal()
save df2.dta
* Now create df1 in memory
clear
input str1 key
A
B
C
D
end
generate value = rnormal()
preserve
* Left join
merge 1:n key using df2.dta
keep if _merge == 1
* Right join
restore, preserve
merge 1:n key using df2.dta
keep if _merge == 2
* Inner join
restore, preserve
merge 1:n key using df2.dta
keep if _merge == 3
* Outer join
restore
merge 1:n key using df2.dta
pandas DataFrames have a merge() method, which provides similar functionality. The
data does not have to be sorted ahead of time, and different join types are accomplished via the
how keyword.
In [49]: inner_join = df1.merge(df2, on=["key"], how="inner")
In [50]: inner_join
Out[50]:
key value_x value_y
0 B -0.282863 1.212112
1 D -1.135632 -0.173215
2 D -1.135632 0.119209
In [51]: left_join = df1.merge(df2, on=["key"], how="left")
In [52]: left_join
Out[52]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
In [53]: right_join = df1.merge(df2, on=["key"], how="right")
In [54]: right_join
Out[54]:
key value_x value_y
0 B -0.282863 1.212112
1 D -1.135632 -0.173215
2 D -1.135632 0.119209
3 E NaN -1.044236
In [55]: outer_join = df1.merge(df2, on=["key"], how="outer")
In [56]: outer_join
Out[56]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
5 E NaN -1.044236
Missing data#
Both pandas and Stata have a representation for missing data.
pandas represents missing data with the special float value NaN (not a number). Many of the
semantics are the same; for example missing data propagates through numeric operations, and is
ignored by default for aggregations.
In [57]: outer_join
Out[57]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
5 E NaN -1.044236
In [58]: outer_join["value_x"] + outer_join["value_y"]
Out[58]:
0 NaN
1 0.929249
2 NaN
3 -1.308847
4 -1.016424
5 NaN
dtype: float64
In [59]: outer_join["value_x"].sum()
Out[59]: -3.5940742896293765
One difference is that missing data cannot be compared to its sentinel value.
For example, in Stata you could do this to filter missing values.
* Keep missing values
list if value_x == .
* Keep non-missing values
list if value_x != .
In pandas, Series.isna() and Series.notna() can be used to filter the rows.
In [60]: outer_join[outer_join["value_x"].isna()]
Out[60]:
key value_x value_y
5 E NaN -1.044236
In [61]: outer_join[outer_join["value_x"].notna()]
Out[61]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
pandas provides a variety of methods to work with missing data. Here are some examples:
Drop rows with missing values#
In [62]: outer_join.dropna()
Out[62]:
key value_x value_y
1 B -0.282863 1.212112
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
Forward fill from previous rows#
In [63]: outer_join.fillna(method="ffill")
Out[63]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 1.212112
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
5 E -1.135632 -1.044236
Replace missing values with a specified value#
Using the mean:
In [64]: outer_join["value_x"].fillna(outer_join["value_x"].mean())
Out[64]:
0 0.469112
1 -0.282863
2 -1.509059
3 -1.135632
4 -1.135632
5 -0.718815
Name: value_x, dtype: float64
GroupBy#
Aggregation#
Stata’s collapse can be used to group by one or
more key variables and compute aggregations on
numeric columns.
collapse (sum) total_bill tip, by(sex smoker)
pandas provides a flexible groupby mechanism that allows similar aggregations. See the
groupby documentation for more details and examples.
In [65]: tips_summed = tips.groupby(["sex", "smoker"])[["total_bill", "tip"]].sum()
In [66]: tips_summed
Out[66]:
total_bill tip
sex smoker
Female No 869.68 149.77
Yes 527.27 96.74
Male No 1725.75 302.00
Yes 1217.07 183.07
Transformation#
In Stata, if the group aggregations need to be used with the
original data set, one would usually use bysort with egen().
For example, to subtract the mean for each observation by smoker group.
bysort sex smoker: egen group_bill = mean(total_bill)
generate adj_total_bill = total_bill - group_bill
pandas provides a Transformation mechanism that allows these type of operations to be
succinctly expressed in one operation.
In [67]: gb = tips.groupby("smoker")["total_bill"]
In [68]: tips["adj_total_bill"] = tips["total_bill"] - gb.transform("mean")
In [69]: tips
Out[69]:
total_bill tip sex smoker day time size adj_total_bill
67 1.07 1.00 Female Yes Sat Dinner 1 -17.686344
92 3.75 1.00 Female Yes Fri Dinner 2 -15.006344
111 5.25 1.00 Female No Sat Dinner 1 -11.938278
145 6.35 1.50 Female No Thur Lunch 2 -10.838278
135 6.51 1.25 Female No Thur Lunch 2 -10.678278
.. ... ... ... ... ... ... ... ...
182 43.35 3.50 Male Yes Sun Dinner 3 24.593656
156 46.17 5.00 Male No Sun Dinner 6 28.981722
59 46.27 6.73 Male No Sat Dinner 4 29.081722
212 46.33 9.00 Male No Sat Dinner 4 29.141722
170 48.81 10.00 Male Yes Sat Dinner 3 30.053656
[244 rows x 8 columns]
By group processing#
In addition to aggregation, pandas groupby can be used to
replicate most other bysort processing from Stata. For example,
the following example lists the first observation in the current
sort order by sex/smoker group.
bysort sex smoker: list if _n == 1
In pandas this would be written as:
In [70]: tips.groupby(["sex", "smoker"]).first()
Out[70]:
total_bill tip day time size adj_total_bill
sex smoker
Female No 5.25 1.00 Sat Dinner 1 -11.938278
Yes 1.07 1.00 Sat Dinner 1 -17.686344
Male No 5.51 2.00 Thur Lunch 2 -11.678278
Yes 5.25 5.15 Sun Dinner 2 -13.506344
Other considerations#
Disk vs memory#
pandas and Stata both operate exclusively in memory. This means that the size of
data able to be loaded in pandas is limited by your machine’s memory.
If out of core processing is needed, one possibility is the
dask.dataframe
library, which provides a subset of pandas functionality for an
on-disk DataFrame.
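As a rough sketch only (assuming dask is installed; the file pattern below is illustrative), the dask API mirrors much of pandas while keeping computation lazy:
import dask.dataframe as dd

# lazily build a task graph over one or more csv files on disk
ddf = dd.read_csv("tips*.csv")

# nothing is computed until .compute() is called
ddf.groupby("sex")["total_bill"].mean().compute()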
| getting_started/comparison/comparison_with_stata.html |
pandas.melt | `pandas.melt`
Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
This function is useful to massage a DataFrame into a format where one
or more columns are identifier variables (id_vars), while all other
columns, considered measured variables (value_vars), are “unpivoted” to
the row axis, leaving just two non-identifier columns, ‘variable’ and
‘value’.
```
>>> df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
... 'B': {0: 1, 1: 3, 2: 5},
... 'C': {0: 2, 1: 4, 2: 6}})
>>> df
A B C
0 a 1 2
1 b 3 4
2 c 5 6
``` | pandas.melt(frame, id_vars=None, value_vars=None, var_name=None, value_name='value', col_level=None, ignore_index=True)[source]#
Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
This function is useful to massage a DataFrame into a format where one
or more columns are identifier variables (id_vars), while all other
columns, considered measured variables (value_vars), are “unpivoted” to
the row axis, leaving just two non-identifier columns, ‘variable’ and
‘value’.
Parameters
id_vars : tuple, list, or ndarray, optional
Column(s) to use as identifier variables.
value_vars : tuple, list, or ndarray, optional
Column(s) to unpivot. If not specified, uses all columns that are not set as id_vars.
var_name : scalar
Name to use for the ‘variable’ column. If None it uses frame.columns.name or ‘variable’.
value_name : scalar, default ‘value’
Name to use for the ‘value’ column.
col_level : int or str, optional
If columns are a MultiIndex then use this level to melt.
ignore_index : bool, default True
If True, original index is ignored. If False, the original index is retained. Index labels will be repeated as necessary.
New in version 1.1.0.
Returns
DataFrame : Unpivoted DataFrame.
See also
DataFrame.melt : Identical method.
pivot_table : Create a spreadsheet-style pivot table as a DataFrame.
DataFrame.pivot : Return reshaped DataFrame organized by given index / column values.
DataFrame.explode : Explode a DataFrame from list-like columns to long format.
Notes
Reference the user guide for more examples.
Examples
>>> df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
... 'B': {0: 1, 1: 3, 2: 5},
... 'C': {0: 2, 1: 4, 2: 6}})
>>> df
A B C
0 a 1 2
1 b 3 4
2 c 5 6
>>> pd.melt(df, id_vars=['A'], value_vars=['B'])
A variable value
0 a B 1
1 b B 3
2 c B 5
>>> pd.melt(df, id_vars=['A'], value_vars=['B', 'C'])
A variable value
0 a B 1
1 b B 3
2 c B 5
3 a C 2
4 b C 4
5 c C 6
The names of ‘variable’ and ‘value’ columns can be customized:
>>> pd.melt(df, id_vars=['A'], value_vars=['B'],
... var_name='myVarname', value_name='myValname')
A myVarname myValname
0 a B 1
1 b B 3
2 c B 5
Original index values can be kept around:
>>> pd.melt(df, id_vars=['A'], value_vars=['B', 'C'], ignore_index=False)
A variable value
0 a B 1
1 b B 3
2 c B 5
0 a C 2
1 b C 4
2 c C 6
If you have multi-index columns:
>>> df.columns = [list('ABC'), list('DEF')]
>>> df
A B C
D E F
0 a 1 2
1 b 3 4
2 c 5 6
>>> pd.melt(df, col_level=0, id_vars=['A'], value_vars=['B'])
A variable value
0 a B 1
1 b B 3
2 c B 5
>>> pd.melt(df, id_vars=[('A', 'D')], value_vars=[('B', 'E')])
(A, D) variable_0 variable_1 value
0 a B E 1
1 b B E 3
2 c B E 5
| reference/api/pandas.melt.html |
pandas.Index.is_ | `pandas.Index.is_`
More flexible, faster check like is but that works through views. | final Index.is_(other)[source]#
More flexible, faster check like is but that works through views.
Note: this is not the same as Index.identical(), which checks
that metadata is also the same.
Parameters
other : object
Other object to compare against.
Returns
bool
True if both have same underlying data, False otherwise.
See also
Index.identical : Works like Index.is_ but also checks metadata.
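A short illustrative example (a view shares the underlying data, a copy does not):
>>> idx1 = pd.Index(['1', '2', '3'])
>>> idx1.is_(idx1.view())
True
>>> idx1.is_(idx1.copy())
False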
| reference/api/pandas.Index.is_.html |
pandas.tseries.offsets.CustomBusinessMonthBegin.calendar | pandas.tseries.offsets.CustomBusinessMonthBegin.calendar | CustomBusinessMonthBegin.calendar#
| reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.calendar.html |
pandas.tseries.offsets.Easter.__call__ | `pandas.tseries.offsets.Easter.__call__`
Call self as a function. | Easter.__call__(*args, **kwargs)#
Call self as a function.
| reference/api/pandas.tseries.offsets.Easter.__call__.html |
pandas.Series.dt.freq | pandas.Series.dt.freq | Series.dt.freq[source]#
| reference/api/pandas.Series.dt.freq.html |
pandas.core.window.rolling.Rolling.var | `pandas.core.window.rolling.Rolling.var`
Calculate the rolling variance.
```
>>> s = pd.Series([5, 5, 6, 7, 5, 5, 5])
>>> s.rolling(3).var()
0 NaN
1 NaN
2 0.333333
3 1.000000
4 1.000000
5 1.333333
6 0.000000
dtype: float64
``` | Rolling.var(ddof=1, numeric_only=False, *args, engine=None, engine_kwargs=None, **kwargs)[source]#
Calculate the rolling variance.
Parameters
ddof : int, default 1
Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements.
numeric_only : bool, default False
Include only float, int, boolean columns.
New in version 1.5.0.
*args
For NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
engine : str, default None
'cython' : Runs the operation through C-extensions from cython.
'numba' : Runs the operation through JIT compiled code from numba.
None : Defaults to 'cython' or globally setting compute.use_numba
New in version 1.4.0.
engine_kwargs : dict, default None
For 'cython' engine, there are no accepted engine_kwargs
For 'numba' engine, the engine can accept nopython, nogil
and parallel dictionary keys. The values must either be True or
False. The default engine_kwargs for the 'numba' engine is
{'nopython': True, 'nogil': False, 'parallel': False}
New in version 1.4.0.
**kwargs
For NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
Returns
Series or DataFrame : Return type is the same as the original object with np.float64 dtype.
See also
numpy.var : Equivalent method for NumPy array.
pandas.Series.rolling : Calling rolling with Series data.
pandas.DataFrame.rolling : Calling rolling with DataFrames.
pandas.Series.var : Aggregating var for Series.
pandas.DataFrame.var : Aggregating var for DataFrame.
Notes
The default ddof of 1 used in Series.var() is different
than the default ddof of 0 in numpy.var().
A minimum of one period is required for the rolling calculation.
Examples
>>> s = pd.Series([5, 5, 6, 7, 5, 5, 5])
>>> s.rolling(3).var()
0 NaN
1 NaN
2 0.333333
3 1.000000
4 1.000000
5 1.333333
6 0.000000
dtype: float64
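As an additional illustration using the same s, passing ddof=0 switches to the population formula, i.e. the divisor becomes N instead of N - 1:
>>> s.rolling(3).var(ddof=0)
0         NaN
1         NaN
2    0.222222
3    0.666667
4    0.666667
5    0.888889
6    0.000000
dtype: float64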
| reference/api/pandas.core.window.rolling.Rolling.var.html |
pandas.tseries.offsets.QuarterEnd.n | pandas.tseries.offsets.QuarterEnd.n | QuarterEnd.n#
| reference/api/pandas.tseries.offsets.QuarterEnd.n.html |
pandas.tseries.offsets.BusinessMonthEnd.is_quarter_start | `pandas.tseries.offsets.BusinessMonthEnd.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
``` | BusinessMonthEnd.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
| reference/api/pandas.tseries.offsets.BusinessMonthEnd.is_quarter_start.html |
pandas.tseries.offsets.Day.rule_code | pandas.tseries.offsets.Day.rule_code | Day.rule_code#
| reference/api/pandas.tseries.offsets.Day.rule_code.html |
pandas.tseries.offsets.CustomBusinessHour.rollforward | `pandas.tseries.offsets.CustomBusinessHour.rollforward`
Roll provided date forward to next offset only if not on offset. | CustomBusinessHour.rollforward(other)#
Roll provided date forward to next offset only if not on offset.
| reference/api/pandas.tseries.offsets.CustomBusinessHour.rollforward.html |
pandas.Timestamp.fromordinal | `pandas.Timestamp.fromordinal`
Construct a timestamp from a proleptic Gregorian ordinal.
Date corresponding to a proleptic Gregorian ordinal.
```
>>> pd.Timestamp.fromordinal(737425)
Timestamp('2020-01-01 00:00:00')
``` | classmethod Timestamp.fromordinal(ordinal, freq=None, tz=None)#
Construct a timestamp from a proleptic Gregorian ordinal.
Parameters
ordinal : int
Date corresponding to a proleptic Gregorian ordinal.
freq : str, DateOffset
Offset to apply to the Timestamp.
tz : str, pytz.timezone, dateutil.tz.tzfile or None
Time zone for the Timestamp.
Notes
By definition there cannot be any tz info on the ordinal itself.
Examples
>>> pd.Timestamp.fromordinal(737425)
Timestamp('2020-01-01 00:00:00')
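The tz parameter can be used to attach a time zone (an additional example, not in the upstream docstring):
>>> pd.Timestamp.fromordinal(737425, tz='UTC')
Timestamp('2020-01-01 00:00:00+0000', tz='UTC')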
| reference/api/pandas.Timestamp.fromordinal.html |
pandas.tseries.offsets.SemiMonthBegin.is_month_end | `pandas.tseries.offsets.SemiMonthBegin.is_month_end`
Return boolean whether a timestamp occurs on the month end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
``` | SemiMonthBegin.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
| reference/api/pandas.tseries.offsets.SemiMonthBegin.is_month_end.html |
pandas.DataFrame.sparse.to_dense | `pandas.DataFrame.sparse.to_dense`
Convert a DataFrame with sparse values to dense.
```
>>> df = pd.DataFrame({"A": pd.arrays.SparseArray([0, 1, 0])})
>>> df.sparse.to_dense()
A
0 0
1 1
2 0
``` | DataFrame.sparse.to_dense()[source]#
Convert a DataFrame with sparse values to dense.
New in version 0.25.0.
Returns
DataFrame : A DataFrame with the same values stored as dense arrays.
Examples
>>> df = pd.DataFrame({"A": pd.arrays.SparseArray([0, 1, 0])})
>>> df.sparse.to_dense()
A
0 0
1 1
2 0
| reference/api/pandas.DataFrame.sparse.to_dense.html |
pandas.ExcelWriter.if_sheet_exists | `pandas.ExcelWriter.if_sheet_exists`
How to behave when writing to a sheet that already exists in append mode. | property ExcelWriter.if_sheet_exists[source]#
How to behave when writing to a sheet that already exists in append mode.
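A minimal sketch of how this property is typically set (the file and sheet names are illustrative, and the example assumes openpyxl is installed and report.xlsx already exists, since if_sheet_exists is only valid in append mode):
import pandas as pd

df = pd.DataFrame({"total": [1, 2, 3]})

# append to an existing workbook, replacing the sheet if it is already there
with pd.ExcelWriter(
    "report.xlsx", mode="a", engine="openpyxl", if_sheet_exists="replace"
) as writer:
    df.to_excel(writer, sheet_name="summary")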
| reference/api/pandas.ExcelWriter.if_sheet_exists.html |
pandas.tseries.offsets.SemiMonthEnd.rule_code | pandas.tseries.offsets.SemiMonthEnd.rule_code | SemiMonthEnd.rule_code#
| reference/api/pandas.tseries.offsets.SemiMonthEnd.rule_code.html |
pandas maintenance | pandas maintenance
This guide is for pandas’ maintainers. It may also be interesting to contributors
looking to understand the pandas development process and what steps are necessary
to become a maintainer.
The main contributing guide is available at Contributing to pandas.
pandas uses two levels of permissions: triage and core team members.
Triage members can label and close issues and pull requests.
Core team members can label and close issues and pull requests, and can merge
pull requests. | This guide is for pandas’ maintainers. It may also be interesting to contributors
looking to understand the pandas development process and what steps are necessary
to become a maintainer.
The main contributing guide is available at Contributing to pandas.
Roles#
pandas uses two levels of permissions: triage and core team members.
Triage members can label and close issues and pull requests.
Core team members can label and close issues and pull requests, and can merge
pull requests.
GitHub publishes the full list of permissions.
Tasks#
pandas is largely a volunteer project, so these tasks shouldn’t be read as
“expectations” of triage and maintainers. Rather, they’re general descriptions
of what it means to be a maintainer.
Triage newly filed issues (see Issue triage)
Review newly opened pull requests
Respond to updates on existing issues and pull requests
Drive discussion and decisions on stalled issues and pull requests
Provide experience / wisdom on API design questions to ensure consistency and maintainability
Project organization (run / attend developer meetings, represent pandas)
https://matthewrocklin.com/blog/2019/05/18/maintainer may be interesting background
reading.
Issue triage#
Here’s a typical workflow for triaging a newly opened issue.
Thank the reporter for opening an issue
The issue tracker is many people’s first interaction with the pandas project itself,
beyond just using the library. As such, we want it to be a welcoming, pleasant
experience.
Is the necessary information provided?
Ideally reporters would fill out the issue template, but many don’t.
If crucial information (like the version of pandas they used) is missing,
feel free to ask for that and label the issue with “Needs info”. The
report should follow the guidelines in Bug reports and enhancement requests.
You may want to link to that if they didn’t follow the template.
Make sure that the title accurately reflects the issue. Edit it yourself
if it’s not clear.
Is this a duplicate issue?
We have many open issues. If a new issue is clearly a duplicate, label the
new issue as “Duplicate”, assign the milestone “No Action”, and close the issue
with a link to the original issue. Make sure to still thank the reporter, and
encourage them to chime in on the original issue, and perhaps try to fix it.
If the new issue provides relevant information, such as a better or slightly
different example, add it to the original issue as a comment or an edit to
the original post.
Is the issue minimal and reproducible?
For bug reports, we ask that the reporter provide a minimal reproducible
example. See https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
for a good explanation. If the example is not reproducible, or if it’s
clearly not minimal, feel free to ask the reporter if they can provide
an example or simplify the provided one. Do acknowledge that writing
minimal reproducible examples is hard work. If the reporter is struggling,
you can try to write one yourself and we’ll edit the original post to include it.
If a reproducible example can’t be provided, add the “Needs info” label.
If a reproducible example is provided, but you see a simplification,
edit the original post with your simpler reproducible example.
Is this a clearly defined feature request?
Generally, pandas prefers to discuss and design new features in issues, before
a pull request is made. Encourage the submitter to include a proposed API
for the new feature. Having them write a full docstring is a good way to
pin down specifics.
We’ll need a discussion from several pandas maintainers before deciding whether
the proposal is in scope for pandas.
Is this a usage question?
We prefer that usage questions are asked on StackOverflow with the pandas
tag. https://stackoverflow.com/questions/tagged/pandas
If it’s easy to answer, feel free to link to the relevant documentation section,
let them know that in the future this kind of question should be on
StackOverflow, and close the issue.
What labels and milestones should I add?
Apply the relevant labels. This is a bit of an art, and comes with experience.
Look at similar issues to get a feel for how things are labeled.
If the issue is clearly defined and the fix seems relatively straightforward,
label the issue as “Good first issue”.
Typically, new issues will be assigned the “Contributions welcome” milestone,
unless it’s known that this issue should be addressed in a specific release (say
because it’s a large regression).
Closing issues#
Be delicate here: many people interpret closing an issue as us saying that the
conversation is over. It’s typically best to give the reporter some time to
respond or self-close their issue if it’s determined that the behavior is not a bug,
or the feature is out of scope. Sometimes reporters just go away though, and
we’ll close the issue after the conversation has died.
Reviewing pull requests#
Anybody can review a pull request: regular contributors, triagers, or core-team
members. But only core-team members can merge pull requests when they’re ready.
Here are some things to check when reviewing a pull request.
Tests should be in a sensible location: in the same file as closely related tests.
New public APIs should be included somewhere in doc/source/reference/.
New / changed API should use the versionadded or versionchanged directives in the docstring.
User-facing changes should have a whatsnew in the appropriate file.
Regression tests should reference the original GitHub issue number like # GH-1234.
The pull request should be labeled and assigned the appropriate milestone (the next patch release
for regression fixes and small bug fixes, the next minor milestone otherwise)
Changes should comply with our Version policy.
Backporting#
pandas supports point releases (e.g. 1.4.3) that aim to:
Fix bugs in new features introduced in the first minor version release.
e.g. If a new feature was added in 1.4 and contains a bug, a fix can be applied in 1.4.3
Fix regressions: bugs in behavior that worked in a few prior minor releases. There should be agreement between core team members that a backport is appropriate.
e.g. If a feature worked in 1.2 and stopped working since 1.3, a fix can be applied in 1.4.3.
Since pandas minor releases are based on Github branches (e.g. point releases of 1.4 are based off the 1.4.x branch),
“backporting” means merging a pull request fix to the main branch and to the correct minor branch associated with the next point release.
By default, if a pull request is assigned to the next point release milestone within the Github interface,
the backporting process should happen automatically by the @meeseeksdev bot once the pull request is merged.
A new pull request will be made backporting the pull request to the correct version branch.
Sometimes due to merge conflicts, a manual pull request will need to be made addressing the code conflict.
If the bot does not automatically start the backporting process, you can also write a Github comment in the merged pull request
to trigger the backport:
@meeseeksdev backport version-branch
This will trigger a workflow which will backport a given change to a branch
(e.g. @meeseeksdev backport 1.4.x)
Cleaning up old issues#
Every open issue in pandas has a cost. Open issues make finding duplicates harder,
and can make it harder to know what needs to be done in pandas. That said, closing
issues isn’t a goal on its own. Our goal is to make pandas the best it can be,
and that’s best done by ensuring that the quality of our open issues is high.
Occasionally, bugs are fixed but the issue isn’t linked to in the Pull Request.
In these cases, comment that “This has been fixed, but could use a test.” and
label the issue as “Good First Issue” and “Needs Test”.
If an older issue doesn’t follow our issue template, edit the original post to
include a minimal example, the actual output, and the expected output. Uniformity
in issue reports is valuable.
If an older issue lacks a reproducible example, label it as “Needs Info” and
ask them to provide one (or write one yourself if possible). If one isn’t
provided reasonably soon, close it according to the policies in Closing issues.
Cleaning up old pull requests#
Occasionally, contributors are unable to finish off a pull request.
If some time has passed (two weeks, say) since the last review requesting changes,
gently ask if they’re still interested in working on this. If another two weeks or
so passes with no response, thank them for their work and close the pull request.
Comment on the original issue that “There’s a stalled PR at #1234 that may be
helpful.”, and perhaps label the issue as “Good first issue” if the PR was relatively
close to being accepted.
Additionally, core-team members can push to contributors branches. This can be
helpful for pushing an important PR across the line, or for fixing a small
merge conflict.
Becoming a pandas maintainer#
The full process is outlined in our governance documents. In summary,
we’re happy to give triage permissions to anyone who shows interest by
being helpful on the issue tracker.
The required steps for adding a maintainer are:
Contact the contributor and ask their interest to join.
Add the contributor to the appropriate Github Team if they accepted the invitation.
pandas-core is for core team members
pandas-triage is for pandas triage members
Add the contributor to the pandas Google group.
Create a pull request to add the contributor’s Github handle to pandas-dev/pandas/web/pandas/config.yml.
Create a pull request to add the contributor’s name/Github handle to the governance document.
The current list of core-team members is at
https://github.com/pandas-dev/pandas-governance/blob/master/people.md
Merging pull requests#
Only core team members can merge pull requests. We have a few guidelines.
You should typically not self-merge your own pull requests. Exceptions include
things like small changes to fix CI (e.g. pinning a package version).
You should not merge pull requests that have an active discussion, or pull
requests that have any -1 votes from a core maintainer. pandas operates
by consensus.
For larger changes, it’s good to have a +1 from at least two core team members.
In addition to the items listed in Closing issues, you should verify
that the pull request is assigned the correct milestone.
Pull requests merged with a patch-release milestone will typically be backported
by our bot. Verify that the bot noticed the merge (it will leave a comment within
a minute typically). If a manual backport is needed please do that, and remove
the “Needs backport” label once you’ve done it manually. If you forget to assign
a milestone before tagging, you can request the bot to backport it with:
@Meeseeksdev backport <branch>
Benchmark machine#
The team currently owns dedicated hardware for hosting a website for pandas’ ASV performance benchmark. The results
are published to http://pandas.pydata.org/speed/pandas/
Configuration#
The machine can be configured with the Ansible playbook in https://github.com/tomaugspurger/asv-runner.
Publishing#
The results are published to another Github repository, https://github.com/tomaugspurger/asv-collection.
Finally, we have a cron job on our docs server to pull from https://github.com/tomaugspurger/asv-collection, to serve them from /speed.
Ask Tom or Joris for access to the webserver.
Debugging#
The benchmarks are scheduled by Airflow. It has a dashboard for viewing and debugging the results. You’ll need to set up an SSH tunnel to view them
ssh -L 8080:localhost:8080 pandas@panda.likescandy.com
Release process#
The process for releasing a new version of pandas can be found at https://github.com/pandas-dev/pandas-release
| development/maintaining.html |
Extending pandas | Extending pandas
While pandas provides a rich set of methods, containers, and data types, your
needs may not be fully satisfied. pandas offers a few options for extending
pandas.
Libraries can use the decorators
pandas.api.extensions.register_dataframe_accessor(),
pandas.api.extensions.register_series_accessor(), and
pandas.api.extensions.register_index_accessor(), to add additional
“namespaces” to pandas objects. All of these follow a similar convention: you
decorate a class, providing the name of attribute to add. The class’s
__init__ method gets the object being decorated. For example:
Now users can access your methods using the geo namespace:
This can be a convenient way to extend pandas objects without subclassing them.
If you write a custom accessor, make a pull request adding it to our
pandas ecosystem page.
We highly recommend validating the data in your accessor’s __init__.
In our GeoAccessor, we validate that the data contains the expected columns,
raising an AttributeError when the validation fails.
For a Series accessor, you should validate the dtype if the accessor
applies only to certain dtypes. | While pandas provides a rich set of methods, containers, and data types, your
needs may not be fully satisfied. pandas offers a few options for extending
pandas.
Registering custom accessors#
Libraries can use the decorators
pandas.api.extensions.register_dataframe_accessor(),
pandas.api.extensions.register_series_accessor(), and
pandas.api.extensions.register_index_accessor(), to add additional
“namespaces” to pandas objects. All of these follow a similar convention: you
decorate a class, providing the name of attribute to add. The class’s
__init__ method gets the object being decorated. For example:
@pd.api.extensions.register_dataframe_accessor("geo")
class GeoAccessor:
    def __init__(self, pandas_obj):
        self._validate(pandas_obj)
        self._obj = pandas_obj

    @staticmethod
    def _validate(obj):
        # verify there is a column latitude and a column longitude
        if "latitude" not in obj.columns or "longitude" not in obj.columns:
            raise AttributeError("Must have 'latitude' and 'longitude'.")

    @property
    def center(self):
        # return the geographic center point of this DataFrame
        lat = self._obj.latitude
        lon = self._obj.longitude
        return (float(lon.mean()), float(lat.mean()))

    def plot(self):
        # plot this array's data on a map, e.g., using Cartopy
        pass
Now users can access your methods using the geo namespace:
>>> ds = pd.DataFrame(
... {"longitude": np.linspace(0, 10), "latitude": np.linspace(0, 20)}
... )
>>> ds.geo.center
(5.0, 10.0)
>>> ds.geo.plot()
# plots data on a map
This can be a convenient way to extend pandas objects without subclassing them.
If you write a custom accessor, make a pull request adding it to our
pandas ecosystem page.
We highly recommend validating the data in your accessor’s __init__.
In our GeoAccessor, we validate that the data contains the expected columns,
raising an AttributeError when the validation fails.
For a Series accessor, you should validate the dtype if the accessor
applies only to certain dtypes.
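For example, a minimal sketch of such a check (the accessor name and its helper are hypothetical):
import pandas as pd


@pd.api.extensions.register_series_accessor("zipcode")
class ZipCodeAccessor:
    def __init__(self, pandas_obj):
        # this accessor only makes sense for string data
        if not pd.api.types.is_string_dtype(pandas_obj):
            raise AttributeError("Can only use the .zipcode accessor with string values")
        self._obj = pandas_obj

    @property
    def prefix(self):
        # hypothetical helper: the first three digits of a US ZIP code
        return self._obj.str[:3]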
Extension types#
Note
The pandas.api.extensions.ExtensionDtype and pandas.api.extensions.ExtensionArray APIs were
experimental prior to pandas 1.5. Starting with version 1.5, future changes will follow
the pandas deprecation policy.
pandas defines an interface for implementing data types and arrays that extend
NumPy’s type system. pandas itself uses the extension system for some types
that aren’t built into NumPy (categorical, period, interval, datetime with
timezone).
Libraries can define a custom array and data type. When pandas encounters these
objects, they will be handled properly (i.e. not converted to an ndarray of
objects). Many methods like pandas.isna() will dispatch to the extension
type’s implementation.
If you’re building a library that implements the interface, please publicize it
on Extension data types.
The interface consists of two classes.
ExtensionDtype#
A pandas.api.extensions.ExtensionDtype is similar to a numpy.dtype object. It describes the
data type. Implementors are responsible for a few unique items like the name.
One particularly important item is the type property. This should be the
class that is the scalar type for your data. For example, if you were writing an
extension array for IP Address data, this might be ipaddress.IPv4Address.
See the extension dtype source for interface definition.
pandas.api.extensions.ExtensionDtype can be registered to pandas to allow creation via a string dtype name.
This allows one to instantiate Series and .astype() with a registered string name, for
example 'category' is a registered string alias for the CategoricalDtype.
See the extension dtypes documentation for more on how to register dtypes.
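A hedged, minimal sketch of such a registration (IPv4Dtype and the IPv4Array class it refers to are hypothetical, and a complete dtype needs the rest of the interface as well):
import ipaddress

from pandas.api.extensions import ExtensionDtype, register_extension_dtype


@register_extension_dtype
class IPv4Dtype(ExtensionDtype):
    # the registered string name, usable e.g. in .astype("ipv4")
    name = "ipv4"
    # the scalar type held by the associated extension array
    type = ipaddress.IPv4Address

    @classmethod
    def construct_array_type(cls):
        # IPv4Array would be a hypothetical ExtensionArray subclass defined elsewhere
        return IPv4Array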
ExtensionArray#
This class provides all the array-like functionality. ExtensionArrays are
limited to 1 dimension. An ExtensionArray is linked to an ExtensionDtype via the
dtype attribute.
pandas makes no restrictions on how an extension array is created via its
__new__ or __init__, and puts no restrictions on how you store your
data. We do require that your array be convertible to a NumPy array, even if
this is relatively expensive (as it is for Categorical).
They may be backed by none, one, or many NumPy arrays. For example,
pandas.Categorical is an extension array backed by two arrays,
one for codes and one for categories. An array of IPv6 addresses may
be backed by a NumPy structured array with two fields, one for the
lower 64 bits and one for the upper 64 bits. Or they may be backed
by some other storage type, like Python lists.
See the extension array source for the interface definition. The docstrings
and comments contain guidance for properly implementing the interface.
ExtensionArray operator support#
By default, there are no operators defined for the class ExtensionArray.
There are two approaches for providing operator support for your ExtensionArray:
Define each of the operators on your ExtensionArray subclass.
Use an operator implementation from pandas that depends on operators that are already defined
on the underlying elements (scalars) of the ExtensionArray.
Note
Regardless of the approach, you may want to set __array_priority__
if you want your implementation to be called when involved in binary operations
with NumPy arrays.
For the first approach, you define selected operators, e.g., __add__, __le__, etc. that
you want your ExtensionArray subclass to support.
The second approach assumes that the underlying elements (i.e., scalar type) of the ExtensionArray
have the individual operators already defined. In other words, if your ExtensionArray
named MyExtensionArray is implemented so that each element is an instance
of the class MyExtensionElement, then if the operators are defined
for MyExtensionElement, the second approach will automatically
define the operators for MyExtensionArray.
A mixin class, ExtensionScalarOpsMixin supports this second
approach. If you are developing an ExtensionArray subclass, for example MyExtensionArray,
you can simply include ExtensionScalarOpsMixin as a parent class of MyExtensionArray,
and then call the methods _add_arithmetic_ops() and/or
_add_comparison_ops() to hook the operators into
your MyExtensionArray class, as follows:
from pandas.api.extensions import ExtensionArray, ExtensionScalarOpsMixin


class MyExtensionArray(ExtensionArray, ExtensionScalarOpsMixin):
    pass


MyExtensionArray._add_arithmetic_ops()
MyExtensionArray._add_comparison_ops()
Note
Since pandas automatically calls the underlying operator on each
element one-by-one, this might not be as performant as implementing your own
version of the associated operators directly on the ExtensionArray.
For arithmetic operations, this implementation will try to reconstruct a new
ExtensionArray with the result of the element-wise operation. Whether
or not that succeeds depends on whether the operation returns a result
that’s valid for the ExtensionArray. If an ExtensionArray cannot
be reconstructed, an ndarray containing the scalars is returned instead.
For ease of implementation and consistency with operations between pandas
and NumPy ndarrays, we recommend not handling Series and Indexes in your binary ops.
Instead, you should detect these cases and return NotImplemented.
When pandas encounters an operation like op(Series, ExtensionArray), pandas
will:
1. unbox the array from the Series (Series.array)
2. call result = op(values, ExtensionArray)
3. re-box the result in a Series
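A minimal sketch of that pattern (illustrative only; the actual element-wise logic is elided):
```
import pandas as pd
from pandas.api.extensions import ExtensionArray


class MyExtensionArray(ExtensionArray):
    def __add__(self, other):
        # Detect pandas containers and defer: pandas unboxes the container
        # and calls this method again with the underlying array.
        if isinstance(other, (pd.Series, pd.DataFrame, pd.Index)):
            return NotImplemented
        ...  # element-wise addition against ``other`` goes here
```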
NumPy universal functions#
Series implements __array_ufunc__. As part of the implementation,
pandas unboxes the ExtensionArray from the Series, applies the ufunc,
and re-boxes it if necessary.
If applicable, we highly recommend that you implement __array_ufunc__ in your
extension array to avoid coercion to an ndarray. See
the NumPy documentation
for an example.
As part of your implementation, we require that you defer to pandas when a pandas
container (Series, DataFrame, Index) is detected in inputs.
If any of those is present, you should return NotImplemented. pandas will take care of
unboxing the array from the container and re-calling the ufunc with the unwrapped input.
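A rough sketch of that deferral (the _ndarray backing attribute is an assumption for illustration, not part of the interface):
```
import pandas as pd
from pandas.api.extensions import ExtensionArray


class MyExtensionArray(ExtensionArray):
    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        # Defer to pandas if any input is a pandas container; pandas will
        # unbox it and re-call the ufunc with the unwrapped array.
        if any(isinstance(x, (pd.Series, pd.DataFrame, pd.Index)) for x in inputs):
            return NotImplemented
        # Otherwise operate on the underlying NumPy data.
        arrays = [x._ndarray if isinstance(x, MyExtensionArray) else x for x in inputs]
        result = getattr(ufunc, method)(*arrays, **kwargs)
        # Re-wrap into MyExtensionArray here when the result is still valid
        # for this dtype; this sketch returns the raw result.
        return result
```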
Testing extension arrays#
We provide a test suite for ensuring that your extension arrays satisfy the expected
behavior. To use the test suite, you must provide several pytest fixtures and inherit
from the base test class. The required fixtures are found in
https://github.com/pandas-dev/pandas/blob/main/pandas/tests/extension/conftest.py.
To use a test, subclass it:
from pandas.tests.extension import base
class TestConstructors(base.BaseConstructorsTests):
    pass
See https://github.com/pandas-dev/pandas/blob/main/pandas/tests/extension/base/__init__.py
for a list of all the tests available.
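A rough sketch of how such a test module might look; the fixture names come from the conftest linked above, MyExtensionDtype and MyExtensionArray stand in for your own classes, and the exact set of required fixtures should be checked against that file:
```
import pytest

from pandas.tests.extension import base


@pytest.fixture
def dtype():
    return MyExtensionDtype()


@pytest.fixture
def data():
    # The base tests expect a length-100 array where data[0] != data[1].
    return MyExtensionArray(range(100))


class TestConstructors(base.BaseConstructorsTests):
    pass


class TestGetitem(base.BaseGetitemTests):
    pass
```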
Compatibility with Apache Arrow#
An ExtensionArray can support conversion to / from pyarrow arrays
(and thus support for example serialization to the Parquet file format)
by implementing two methods: ExtensionArray.__arrow_array__ and
ExtensionDtype.__from_arrow__.
The ExtensionArray.__arrow_array__ ensures that pyarrow knows how
to convert the specific extension array into a pyarrow.Array (also when
included as a column in a pandas DataFrame):
class MyExtensionArray(ExtensionArray):
    ...

    def __arrow_array__(self, type=None):
        # convert the underlying array values to a pyarrow Array
        import pyarrow

        return pyarrow.array(..., type=type)
The ExtensionDtype.__from_arrow__ method then controls the conversion
back from pyarrow to a pandas ExtensionArray. This method receives a pyarrow
Array or ChunkedArray as only argument and is expected to return the
appropriate pandas ExtensionArray for this dtype and the passed values:
class ExtensionDtype:
    ...

    def __from_arrow__(self, array: pyarrow.Array/ChunkedArray) -> ExtensionArray:
        ...
See more in the Arrow documentation.
Those methods have been implemented for the nullable integer and string extension
dtypes included in pandas, and ensure roundtrip to pyarrow and the Parquet file format.
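For example, the nullable integer dtype round-trips through Parquet (this assumes pyarrow is installed):
```
import pandas as pd

df = pd.DataFrame({"a": pd.array([1, 2, None], dtype="Int64")})
df.to_parquet("roundtrip.parquet")               # pyarrow uses __arrow_array__
restored = pd.read_parquet("roundtrip.parquet")  # __from_arrow__ restores Int64
print(restored["a"].dtype)                       # Int64
```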
Subclassing pandas data structures#
Warning
There are some easier alternatives before considering subclassing pandas data structures.
Extensible method chains with pipe
Use composition. See here.
Extending by registering an accessor
Extending by extension type
This section describes how to subclass pandas data structures to meet more specific needs. There are two points that need attention:
Override constructor properties.
Define original properties
Note
You can find a nice example in geopandas project.
Override constructor properties#
Each data structure has several constructor properties for returning a new
data structure as the result of an operation. By overriding these properties,
you can retain subclasses through pandas data manipulations.
There are 3 possible constructor properties to be defined on a subclass:
DataFrame/Series._constructor: Used when a manipulation result has the same dimension as the original.
DataFrame._constructor_sliced: Used when a DataFrame (sub-)class manipulation result should be a Series (sub-)class.
Series._constructor_expanddim: Used when a Series (sub-)class manipulation result should be a DataFrame (sub-)class, e.g. Series.to_frame().
The example below shows how to define SubclassedSeries and SubclassedDataFrame by overriding the constructor properties.
class SubclassedSeries(pd.Series):
    @property
    def _constructor(self):
        return SubclassedSeries

    @property
    def _constructor_expanddim(self):
        return SubclassedDataFrame


class SubclassedDataFrame(pd.DataFrame):
    @property
    def _constructor(self):
        return SubclassedDataFrame

    @property
    def _constructor_sliced(self):
        return SubclassedSeries
>>> s = SubclassedSeries([1, 2, 3])
>>> type(s)
<class '__main__.SubclassedSeries'>
>>> to_framed = s.to_frame()
>>> type(to_framed)
<class '__main__.SubclassedDataFrame'>
>>> df = SubclassedDataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]})
>>> df
A B C
0 1 4 7
1 2 5 8
2 3 6 9
>>> type(df)
<class '__main__.SubclassedDataFrame'>
>>> sliced1 = df[["A", "B"]]
>>> sliced1
A B
0 1 4
1 2 5
2 3 6
>>> type(sliced1)
<class '__main__.SubclassedDataFrame'>
>>> sliced2 = df["A"]
>>> sliced2
0 1
1 2
2 3
Name: A, dtype: int64
>>> type(sliced2)
<class '__main__.SubclassedSeries'>
Define original properties#
To let original data structures have additional properties, you should let pandas know which properties are added. pandas maps unknown properties to data (column) names by overriding __getattribute__. Original properties can be defined in one of two ways:
Define _internal_names and _internal_names_set for temporary properties which WILL NOT be passed to manipulation results.
Define _metadata for normal properties which will be passed to manipulation results.
Below is an example defining two original properties: “internal_cache” as a temporary property and “added_property” as a normal property:
class SubclassedDataFrame2(pd.DataFrame):

    # temporary properties
    _internal_names = pd.DataFrame._internal_names + ["internal_cache"]
    _internal_names_set = set(_internal_names)

    # normal properties
    _metadata = ["added_property"]

    @property
    def _constructor(self):
        return SubclassedDataFrame2
>>> df = SubclassedDataFrame2({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]})
>>> df
A B C
0 1 4 7
1 2 5 8
2 3 6 9
>>> df.internal_cache = "cached"
>>> df.added_property = "property"
>>> df.internal_cache
cached
>>> df.added_property
property
# properties defined in _internal_names are reset after manipulation
>>> df[["A", "B"]].internal_cache
AttributeError: 'SubclassedDataFrame2' object has no attribute 'internal_cache'
# properties defined in _metadata are retained
>>> df[["A", "B"]].added_property
property
Plotting backends#
Starting in 0.25, pandas can be extended with third-party plotting backends. The
main idea is to let users select a plotting backend other than the default one
based on Matplotlib. For example:
>>> pd.set_option("plotting.backend", "backend.module")
>>> pd.Series([1, 2, 3]).plot()
This would be more or less equivalent to:
>>> import backend.module
>>> backend.module.plot(pd.Series([1, 2, 3]))
The backend module can then use other visualization tools (Bokeh, Altair,…)
to generate the plots.
Libraries implementing the plotting backend should use entry points
to make their backend discoverable to pandas. The key is "pandas_plotting_backends". For example, pandas
registers the default “matplotlib” backend as follows.
# in setup.py
setup(  # noqa: F821
    ...,
    entry_points={
        "pandas_plotting_backends": [
            "matplotlib = pandas:plotting._matplotlib",
        ],
    },
)
More information on how to implement a third-party plotting backend can be found at
https://github.com/pandas-dev/pandas/blob/main/pandas/plotting/__init__.py#L1.
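As a very rough sketch of what such a backend module could contain (the exact set of hooks pandas looks up is described in the file linked above, so treat this shape as an assumption):
```
# backend/module.py -- hypothetical third-party plotting backend
def plot(data, kind=None, **kwargs):
    # Receive the Series/DataFrame being plotted and the requested plot
    # ``kind`` ("line", "bar", ...), render it with your visualization
    # library of choice, and return the resulting figure object.
    ...
```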
| development/extending.html |
pandas.tseries.offsets.BYearEnd.apply | pandas.tseries.offsets.BYearEnd.apply | BYearEnd.apply()#
| reference/api/pandas.tseries.offsets.BYearEnd.apply.html |
pandas.Series.dt.timetz | `pandas.Series.dt.timetz`
Returns numpy array of datetime.time objects with timezones. | Series.dt.timetz[source]#
Returns numpy array of datetime.time objects with timezones.
The time part of the Timestamps.
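A small illustration with a timezone-aware datetime Series (output shown approximately):
```
>>> s = pd.Series(pd.date_range("2023-01-01", periods=2, freq="H", tz="UTC"))
>>> s.dt.timetz
0    00:00:00+00:00
1    01:00:00+00:00
dtype: object
```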
| reference/api/pandas.Series.dt.timetz.html |
pandas.tseries.offsets.Hour.n | pandas.tseries.offsets.Hour.n | Hour.n#
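For illustration, n is the integer multiple of the base frequency:
```
>>> pd.offsets.Hour(5).n
5
```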
| reference/api/pandas.tseries.offsets.Hour.n.html |
pandas.Series.iloc | `pandas.Series.iloc`
Purely integer-location based indexing for selection by position.
.iloc[] is primarily integer position based (from 0 to
length-1 of the axis), but may also be used with a boolean
array.
```
>>> mydict = [{'a': 1, 'b': 2, 'c': 3, 'd': 4},
... {'a': 100, 'b': 200, 'c': 300, 'd': 400},
... {'a': 1000, 'b': 2000, 'c': 3000, 'd': 4000 }]
>>> df = pd.DataFrame(mydict)
>>> df
a b c d
0 1 2 3 4
1 100 200 300 400
2 1000 2000 3000 4000
``` | property Series.iloc[source]#
Purely integer-location based indexing for selection by position.
.iloc[] is primarily integer position based (from 0 to
length-1 of the axis), but may also be used with a boolean
array.
Allowed inputs are:
An integer, e.g. 5.
A list or array of integers, e.g. [4, 3, 0].
A slice object with ints, e.g. 1:7.
A boolean array.
A callable function with one argument (the calling Series or
DataFrame) and that returns valid output for indexing (one of the above).
This is useful in method chains, when you don’t have a reference to the
calling object, but would like to base your selection on some value.
A tuple of row and column indexes. The tuple elements consist of one of the
above inputs, e.g. (0, 1).
.iloc will raise IndexError if a requested indexer is
out-of-bounds, except slice indexers which allow out-of-bounds
indexing (this conforms with python/numpy slice semantics).
See more at Selection by Position.
See also
DataFrame.iatFast integer location scalar accessor.
DataFrame.locPurely label-location based indexer for selection by label.
Series.ilocPurely integer-location based indexing for selection by position.
Examples
>>> mydict = [{'a': 1, 'b': 2, 'c': 3, 'd': 4},
... {'a': 100, 'b': 200, 'c': 300, 'd': 400},
... {'a': 1000, 'b': 2000, 'c': 3000, 'd': 4000 }]
>>> df = pd.DataFrame(mydict)
>>> df
a b c d
0 1 2 3 4
1 100 200 300 400
2 1000 2000 3000 4000
Indexing just the rows
With a scalar integer.
>>> type(df.iloc[0])
<class 'pandas.core.series.Series'>
>>> df.iloc[0]
a 1
b 2
c 3
d 4
Name: 0, dtype: int64
With a list of integers.
>>> df.iloc[[0]]
a b c d
0 1 2 3 4
>>> type(df.iloc[[0]])
<class 'pandas.core.frame.DataFrame'>
>>> df.iloc[[0, 1]]
a b c d
0 1 2 3 4
1 100 200 300 400
With a slice object.
>>> df.iloc[:3]
a b c d
0 1 2 3 4
1 100 200 300 400
2 1000 2000 3000 4000
With a boolean mask the same length as the index.
>>> df.iloc[[True, False, True]]
a b c d
0 1 2 3 4
2 1000 2000 3000 4000
With a callable, useful in method chains. The x passed
to the lambda is the DataFrame being sliced. This selects
the rows whose index label is even.
>>> df.iloc[lambda x: x.index % 2 == 0]
a b c d
0 1 2 3 4
2 1000 2000 3000 4000
Indexing both axes
You can mix the indexer types for the index and columns. Use : to
select the entire axis.
With scalar integers.
>>> df.iloc[0, 1]
2
With lists of integers.
>>> df.iloc[[0, 2], [1, 3]]
b d
0 2 4
2 2000 4000
With slice objects.
>>> df.iloc[1:3, 0:3]
a b c
1 100 200 300
2 1000 2000 3000
With a boolean array whose length matches the columns.
>>> df.iloc[:, [True, False, True, False]]
a c
0 1 3
1 100 300
2 1000 3000
With a callable function that expects the Series or DataFrame.
>>> df.iloc[:, lambda df: [0, 2]]
a c
0 1 3
1 100 300
2 1000 3000
| reference/api/pandas.Series.iloc.html |
pandas.tseries.offsets.FY5253Quarter.is_year_end | `pandas.tseries.offsets.FY5253Quarter.is_year_end`
Return boolean whether a timestamp occurs on the year end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
``` | FY5253Quarter.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
| reference/api/pandas.tseries.offsets.FY5253Quarter.is_year_end.html |
pandas.tseries.offsets.CustomBusinessDay.is_year_end | `pandas.tseries.offsets.CustomBusinessDay.is_year_end`
Return boolean whether a timestamp occurs on the year end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
``` | CustomBusinessDay.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
| reference/api/pandas.tseries.offsets.CustomBusinessDay.is_year_end.html |
pandas.Timedelta.to_numpy | `pandas.Timedelta.to_numpy`
Convert the Timedelta to a NumPy timedelta64. | Timedelta.to_numpy()#
Convert the Timedelta to a NumPy timedelta64.
New in version 0.25.0.
This is an alias method for Timedelta.to_timedelta64(). The dtype and
copy parameters are available here only for compatibility. Their values
will not affect the return value.
Returns
numpy.timedelta64
See also
Series.to_numpySimilar method for Series.
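A quick illustration (the repr shown is what a typical NumPy installation prints):
```
>>> pd.Timedelta("1 day").to_numpy()
numpy.timedelta64(86400000000000,'ns')
```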
| reference/api/pandas.Timedelta.to_numpy.html |
pandas.core.window.expanding.Expanding.min | `pandas.core.window.expanding.Expanding.min`
Calculate the expanding minimum.
Include only float, int, boolean columns. | Expanding.min(numeric_only=False, *args, engine=None, engine_kwargs=None, **kwargs)[source]#
Calculate the expanding minimum.
Parameters
numeric_onlybool, default FalseInclude only float, int, boolean columns.
New in version 1.5.0.
*argsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
enginestr, default None
'cython' : Runs the operation through C-extensions from cython.
'numba' : Runs the operation through JIT compiled code from numba.
None : Defaults to 'cython' or globally setting compute.use_numba
New in version 1.3.0.
engine_kwargsdict, default None
For 'cython' engine, there are no accepted engine_kwargs
For 'numba' engine, the engine can accept nopython, nogil
and parallel dictionary keys. The values must either be True or
False. The default engine_kwargs for the 'numba' engine is
{'nopython': True, 'nogil': False, 'parallel': False}
New in version 1.3.0.
**kwargsFor NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
Returns
Series or DataFrameReturn type is the same as the original object with np.float64 dtype.
See also
pandas.Series.expandingCalling expanding with Series data.
pandas.DataFrame.expandingCalling expanding with DataFrames.
pandas.Series.minAggregating min for Series.
pandas.DataFrame.minAggregating min for DataFrame.
Notes
See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine.
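A brief example of the expanding minimum:
```
>>> s = pd.Series([3, 2, 4, 1])
>>> s.expanding().min()
0    3.0
1    2.0
2    2.0
3    1.0
dtype: float64
```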
| reference/api/pandas.core.window.expanding.Expanding.min.html |
pandas.DataFrame.last | `pandas.DataFrame.last`
Select final periods of time series data based on a date offset.
```
>>> i = pd.date_range('2018-04-09', periods=4, freq='2D')
>>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)
>>> ts
A
2018-04-09 1
2018-04-11 2
2018-04-13 3
2018-04-15 4
``` | DataFrame.last(offset)[source]#
Select final periods of time series data based on a date offset.
For a DataFrame with a sorted DatetimeIndex, this function
selects the last few rows based on a date offset.
Parameters
offsetstr, DateOffset, dateutil.relativedeltaThe offset length of the data that will be selected. For instance,
‘3D’ will display all the rows having their index within the last 3 days.
Returns
Series or DataFrameA subset of the caller.
Raises
TypeErrorIf the index is not a DatetimeIndex
See also
firstSelect initial periods of time series based on a date offset.
at_timeSelect values at a particular time of the day.
between_timeSelect values between particular times of the day.
Examples
>>> i = pd.date_range('2018-04-09', periods=4, freq='2D')
>>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)
>>> ts
A
2018-04-09 1
2018-04-11 2
2018-04-13 3
2018-04-15 4
Get the rows for the last 3 days:
>>> ts.last('3D')
A
2018-04-13 3
2018-04-15 4
Notice that the data for the last 3 calendar days were returned, not the last
3 observed days in the dataset; therefore, data for 2018-04-11 was
not returned.
| reference/api/pandas.DataFrame.last.html |
pandas.DataFrame.boxplot | `pandas.DataFrame.boxplot`
Make a box plot from DataFrame columns.
```
>>> np.random.seed(1234)
>>> df = pd.DataFrame(np.random.randn(10, 4),
... columns=['Col1', 'Col2', 'Col3', 'Col4'])
>>> boxplot = df.boxplot(column=['Col1', 'Col2', 'Col3'])
``` | DataFrame.boxplot(column=None, by=None, ax=None, fontsize=None, rot=0, grid=True, figsize=None, layout=None, return_type=None, backend=None, **kwargs)[source]#
Make a box plot from DataFrame columns.
Make a box-and-whisker plot from DataFrame columns, optionally grouped
by some other columns. A box plot is a method for graphically depicting
groups of numerical data through their quartiles.
The box extends from the Q1 to Q3 quartile values of the data,
with a line at the median (Q2). The whiskers extend from the edges
of box to show the range of the data. By default, they extend no more than
1.5 * IQR (IQR = Q3 - Q1) from the edges of the box, ending at the farthest
data point within that interval. Outliers are plotted as separate dots.
For further details see
Wikipedia’s entry for boxplot.
Parameters
columnstr or list of str, optionalColumn name or list of names, or vector.
Can be any valid input to pandas.DataFrame.groupby().
bystr or array-like, optionalColumn in the DataFrame to pandas.DataFrame.groupby().
One box-plot will be done per value of columns in by.
axobject of class matplotlib.axes.Axes, optionalThe matplotlib axes to be used by boxplot.
fontsizefloat or strTick label font size in points or as a string (e.g., large).
rotint or float, default 0The rotation angle of labels (in degrees)
with respect to the screen coordinate system.
gridbool, default TrueSetting this to True will show the grid.
figsizeA tuple (width, height) in inchesThe size of the figure to create in matplotlib.
layouttuple (rows, columns), optionalFor example, (3, 5) will display the subplots
using 3 columns and 5 rows, starting from the top-left.
return_type{‘axes’, ‘dict’, ‘both’} or None, default ‘axes’The kind of object to return. The default is axes.
‘axes’ returns the matplotlib axes the boxplot is drawn on.
‘dict’ returns a dictionary whose values are the matplotlib
Lines of the boxplot.
‘both’ returns a namedtuple with the axes and dict.
When grouping with by, a Series mapping columns to
return_type is returned.
If return_type is None, a NumPy array
of axes with the same shape as layout is returned.
backendstr, default NoneBackend to use instead of the backend specified in the option
plotting.backend. For instance, ‘matplotlib’. Alternatively, to
specify the plotting.backend for the whole session, set
pd.options.plotting.backend.
New in version 1.0.0.
**kwargsAll other plotting keyword arguments to be passed to
matplotlib.pyplot.boxplot().
Returns
resultSee Notes.
See also
Series.plot.histMake a histogram.
matplotlib.pyplot.boxplotMatplotlib equivalent plot.
Notes
The return type depends on the return_type parameter:
‘axes’ : object of class matplotlib.axes.Axes
‘dict’ : dict of matplotlib.lines.Line2D objects
‘both’ : a namedtuple with structure (ax, lines)
For data grouped with by, return a Series of the above or a numpy
array:
Series
array (for return_type = None)
Use return_type='dict' when you want to tweak the appearance
of the lines after plotting. In this case a dict containing the Lines
making up the boxes, caps, fliers, medians, and whiskers is returned.
Examples
Boxplots can be created for every column in the dataframe
by df.boxplot() or indicating the columns to be used:
>>> np.random.seed(1234)
>>> df = pd.DataFrame(np.random.randn(10, 4),
... columns=['Col1', 'Col2', 'Col3', 'Col4'])
>>> boxplot = df.boxplot(column=['Col1', 'Col2', 'Col3'])
Boxplots of variables distributions grouped by the values of a third
variable can be created using the option by. For instance:
>>> df = pd.DataFrame(np.random.randn(10, 2),
... columns=['Col1', 'Col2'])
>>> df['X'] = pd.Series(['A', 'A', 'A', 'A', 'A',
... 'B', 'B', 'B', 'B', 'B'])
>>> boxplot = df.boxplot(by='X')
A list of strings (i.e. ['X', 'Y']) can be passed to boxplot
in order to group the data by combination of the variables in the x-axis:
>>> df = pd.DataFrame(np.random.randn(10, 3),
... columns=['Col1', 'Col2', 'Col3'])
>>> df['X'] = pd.Series(['A', 'A', 'A', 'A', 'A',
... 'B', 'B', 'B', 'B', 'B'])
>>> df['Y'] = pd.Series(['A', 'B', 'A', 'B', 'A',
... 'B', 'A', 'B', 'A', 'B'])
>>> boxplot = df.boxplot(column=['Col1', 'Col2'], by=['X', 'Y'])
The layout of boxplot can be adjusted giving a tuple to layout:
>>> boxplot = df.boxplot(column=['Col1', 'Col2'], by='X',
... layout=(2, 1))
Additional formatting can be done to the boxplot, like suppressing the grid
(grid=False), rotating the labels in the x-axis (i.e. rot=45)
or changing the fontsize (i.e. fontsize=15):
>>> boxplot = df.boxplot(grid=False, rot=45, fontsize=15)
The parameter return_type can be used to select the type of element
returned by boxplot. When return_type='axes' is selected,
the matplotlib axes on which the boxplot is drawn are returned:
>>> boxplot = df.boxplot(column=['Col1', 'Col2'], return_type='axes')
>>> type(boxplot)
<class 'matplotlib.axes._subplots.AxesSubplot'>
When grouping with by, a Series mapping columns to return_type
is returned:
>>> boxplot = df.boxplot(column=['Col1', 'Col2'], by='X',
... return_type='axes')
>>> type(boxplot)
<class 'pandas.core.series.Series'>
If return_type is None, a NumPy array of axes with the same shape
as layout is returned:
>>> boxplot = df.boxplot(column=['Col1', 'Col2'], by='X',
... return_type=None)
>>> type(boxplot)
<class 'numpy.ndarray'>
| reference/api/pandas.DataFrame.boxplot.html |
pandas.Series.unique | `pandas.Series.unique`
Return unique values of Series object.
Uniques are returned in order of appearance. Hash table-based unique,
therefore does NOT sort.
```
>>> pd.Series([2, 1, 3, 3], name='A').unique()
array([2, 1, 3])
``` | Series.unique()[source]#
Return unique values of Series object.
Uniques are returned in order of appearance. Hash table-based unique,
therefore does NOT sort.
Returns
ndarray or ExtensionArrayThe unique values returned as a NumPy array. See Notes.
See also
Series.drop_duplicatesReturn Series with duplicate values removed.
uniqueTop-level unique method for any 1-d array-like object.
Index.uniqueReturn Index with unique values from an Index object.
Notes
Returns the unique values as a NumPy array. In case of an
extension-array backed Series, a new
ExtensionArray of that type with just
the unique values is returned. This includes
Categorical
Period
Datetime with Timezone
Interval
Sparse
IntegerNA
See Examples section.
Examples
>>> pd.Series([2, 1, 3, 3], name='A').unique()
array([2, 1, 3])
>>> pd.Series([pd.Timestamp('2016-01-01') for _ in range(3)]).unique()
array(['2016-01-01T00:00:00.000000000'], dtype='datetime64[ns]')
>>> pd.Series([pd.Timestamp('2016-01-01', tz='US/Eastern')
... for _ in range(3)]).unique()
<DatetimeArray>
['2016-01-01 00:00:00-05:00']
Length: 1, dtype: datetime64[ns, US/Eastern]
A Categorical will return categories in the order of
appearance and with the same dtype.
>>> pd.Series(pd.Categorical(list('baabc'))).unique()
['b', 'a', 'c']
Categories (3, object): ['a', 'b', 'c']
>>> pd.Series(pd.Categorical(list('baabc'), categories=list('abc'),
... ordered=True)).unique()
['b', 'a', 'c']
Categories (3, object): ['a' < 'b' < 'c']
| reference/api/pandas.Series.unique.html |
pandas.Series.ndim | `pandas.Series.ndim`
Number of dimensions of the underlying data, by definition 1. | property Series.ndim[source]#
Number of dimensions of the underlying data, by definition 1.
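A minimal example:
```
>>> pd.Series([1, 2, 3]).ndim
1
```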
| reference/api/pandas.Series.ndim.html |
pandas.tseries.offsets.CustomBusinessDay.name | `pandas.tseries.offsets.CustomBusinessDay.name`
Return a string representing the base frequency.
```
>>> pd.offsets.Hour().name
'H'
``` | CustomBusinessDay.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
| reference/api/pandas.tseries.offsets.CustomBusinessDay.name.html |
pandas.tseries.offsets.WeekOfMonth.is_month_start | `pandas.tseries.offsets.WeekOfMonth.is_month_start`
Return boolean whether a timestamp occurs on the month start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
``` | WeekOfMonth.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
| reference/api/pandas.tseries.offsets.WeekOfMonth.is_month_start.html |
pandas.Period.freq | pandas.Period.freq | Period.freq#
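For illustration, freq returns the offset object the Period was constructed with:
```
>>> pd.Period("2023-01", freq="M").freq
<MonthEnd>
```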
| reference/api/pandas.Period.freq.html |
pandas.tseries.offsets.SemiMonthBegin.apply_index | `pandas.tseries.offsets.SemiMonthBegin.apply_index`
Vectorized apply of DateOffset to DatetimeIndex. | SemiMonthBegin.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
indexDatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedErrorWhen the specific offset subclass does not have a vectorized
implementation.
| reference/api/pandas.tseries.offsets.SemiMonthBegin.apply_index.html |
pandas.api.types.is_dict_like | `pandas.api.types.is_dict_like`
Check if the object is dict-like.
Whether obj has dict-like properties.
```
>>> is_dict_like({1: 2})
True
>>> is_dict_like([1, 2, 3])
False
>>> is_dict_like(dict)
False
>>> is_dict_like(dict())
True
``` | pandas.api.types.is_dict_like(obj)[source]#
Check if the object is dict-like.
Parameters
objThe object to check
Returns
is_dict_likeboolWhether obj has dict-like properties.
Examples
>>> is_dict_like({1: 2})
True
>>> is_dict_like([1, 2, 3])
False
>>> is_dict_like(dict)
False
>>> is_dict_like(dict())
True
| reference/api/pandas.api.types.is_dict_like.html |
pandas.Series.idxmin | `pandas.Series.idxmin`
Return the row label of the minimum value.
```
>>> s = pd.Series(data=[1, None, 4, 1],
... index=['A', 'B', 'C', 'D'])
>>> s
A 1.0
B NaN
C 4.0
D 1.0
dtype: float64
``` | Series.idxmin(axis=0, skipna=True, *args, **kwargs)[source]#
Return the row label of the minimum value.
If multiple values equal the minimum, the first row label with that
value is returned.
Parameters
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
skipnabool, default TrueExclude NA/null values. If the entire Series is NA, the result
will be NA.
*args, **kwargsAdditional arguments and keywords have no effect but might be
accepted for compatibility with NumPy.
Returns
IndexLabel of the minimum value.
Raises
ValueErrorIf the Series is empty.
See also
numpy.argminReturn indices of the minimum values along the given axis.
DataFrame.idxminReturn index of first occurrence of minimum over requested axis.
Series.idxmaxReturn index label of the first occurrence of maximum of values.
Notes
This method is the Series version of ndarray.argmin. This method
returns the label of the minimum, while ndarray.argmin returns
the position. To get the position, use series.values.argmin().
Examples
>>> s = pd.Series(data=[1, None, 4, 1],
... index=['A', 'B', 'C', 'D'])
>>> s
A 1.0
B NaN
C 4.0
D 1.0
dtype: float64
>>> s.idxmin()
'A'
If skipna is False and there is an NA value in the data,
the function returns nan.
>>> s.idxmin(skipna=False)
nan
| reference/api/pandas.Series.idxmin.html |
pandas.api.extensions.ExtensionArray.view | `pandas.api.extensions.ExtensionArray.view`
Return a view on the array. | ExtensionArray.view(dtype=None)[source]#
Return a view on the array.
Parameters
dtypestr, np.dtype, or ExtensionDtype, optionalDefault None.
Returns
ExtensionArray or np.ndarrayA view on the ExtensionArray’s data.
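A small illustration with a built-in extension array (output shown as expected for the nullable integer dtype; treat it as approximate):
```
>>> arr = pd.array([1, 2, 3], dtype="Int64")
>>> arr.view()
<IntegerArray>
[1, 2, 3]
Length: 3, dtype: Int64
```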
| reference/api/pandas.api.extensions.ExtensionArray.view.html |
pandas.MultiIndex.get_level_values | `pandas.MultiIndex.get_level_values`
Return vector of label values for requested level.
```
>>> mi = pd.MultiIndex.from_arrays((list('abc'), list('def')))
>>> mi.names = ['level_1', 'level_2']
``` | MultiIndex.get_level_values(level)[source]#
Return vector of label values for requested level.
Length of returned vector is equal to the length of the index.
Parameters
levelint or strlevel is either the integer position of the level in the
MultiIndex, or the name of the level.
Returns
valuesIndexValues is a level of this MultiIndex converted to
a single Index (or subclass thereof).
Notes
If the level contains missing values, the result may be cast to
float with missing values specified as NaN. This is because
the level is converted to a regular Index.
Examples
Create a MultiIndex:
>>> mi = pd.MultiIndex.from_arrays((list('abc'), list('def')))
>>> mi.names = ['level_1', 'level_2']
Get level values by supplying level as either integer or name:
>>> mi.get_level_values(0)
Index(['a', 'b', 'c'], dtype='object', name='level_1')
>>> mi.get_level_values('level_2')
Index(['d', 'e', 'f'], dtype='object', name='level_2')
If a level contains missing values, the return type of the level
may be cast to float.
>>> pd.MultiIndex.from_arrays([[1, None, 2], [3, 4, 5]]).dtypes
level_0 int64
level_1 int64
dtype: object
>>> pd.MultiIndex.from_arrays([[1, None, 2], [3, 4, 5]]).get_level_values(0)
Float64Index([1.0, nan, 2.0], dtype='float64')
| reference/api/pandas.MultiIndex.get_level_values.html |
pandas.tseries.offsets.SemiMonthBegin.isAnchored | pandas.tseries.offsets.SemiMonthBegin.isAnchored | SemiMonthBegin.isAnchored()#
| reference/api/pandas.tseries.offsets.SemiMonthBegin.isAnchored.html |
pandas ecosystem | Increasingly, packages are being built on top of pandas to address specific needs
in data preparation, analysis and visualization.
This is encouraging because it means pandas is not only helping users to handle
their data tasks but also that it provides a better starting point for developers to
build powerful and more focused data tools.
The creation of libraries that complement pandas’ functionality also allows pandas
development to remain focused on its original requirements.
This is an inexhaustive list of projects that build on pandas in order to provide
tools in the PyData space. For a list of projects that depend on pandas,
see the
Github network dependents for pandas
or search pypi for pandas.
We’d like to make it easier for users to find these projects. If you know of other
substantial projects that you feel should be on this list, please let us know.
Data cleaning and validation#
Pyjanitor#
Pyjanitor provides a clean API for cleaning data, using method chaining.
Pandera#
Pandera provides a flexible and expressive API for performing data validation on dataframes
to make data processing pipelines more readable and robust.
Dataframes contain information that pandera explicitly validates at runtime. This is useful in
production-critical data pipelines or reproducible research settings.
pandas-path#
Since Python 3.4, pathlib has been
included in the Python standard library. Path objects provide a simple
and delightful way to interact with the file system. The pandas-path package enables the
Path API for pandas through a custom accessor .path. Getting just the filenames from
a series of full file paths is as simple as my_files.path.name. Other convenient operations like
joining paths, replacing file extensions, and checking if files exist are also available.
Statistics and machine learning#
pandas-tfrecords#
pandas-tfrecords makes it easy to save a pandas DataFrame to TensorFlow’s TFRecords format and to read TFRecords back into pandas.
Statsmodels#
Statsmodels is the prominent Python “statistics and econometrics library” and it has
a long-standing special relationship with pandas. Statsmodels provides powerful statistics,
econometrics, analysis and modeling functionality that is out of pandas’ scope.
Statsmodels leverages pandas objects as the underlying data container for computation.
sklearn-pandas#
Use pandas DataFrames in your scikit-learn
ML pipeline.
Featuretools#
Featuretools is a Python library for automated feature engineering built on top of pandas. It excels at transforming temporal and relational datasets into feature matrices for machine learning using reusable feature engineering “primitives”. Users can contribute their own primitives in Python and share them with the rest of the community.
Compose#
Compose is a machine learning tool for labeling data and prediction engineering. It allows you to structure the labeling process by parameterizing prediction problems and transforming time-driven relational data into target values with cutoff times that can be used for supervised learning.
STUMPY#
STUMPY is a powerful and scalable Python library for modern time series analysis.
At its core, STUMPY efficiently computes something called a
matrix profile,
which can be used for a wide variety of time series data mining tasks.
Visualization#
Pandas has its own Styler class for table visualization, and while
pandas also has built-in support for data visualization through charts with matplotlib,
there are a number of other pandas-compatible libraries.
Altair#
Altair is a declarative statistical visualization library for Python.
With Altair, you can spend more time understanding your data and its
meaning. Altair’s API is simple, friendly and consistent and built on
top of the powerful Vega-Lite JSON specification. This elegant
simplicity produces beautiful and effective visualizations with a
minimal amount of code. Altair works with pandas DataFrames.
Bokeh#
Bokeh is a Python interactive visualization library for large datasets that natively uses
the latest web technologies. Its goal is to provide elegant, concise construction of novel
graphics in the style of Protovis/D3, while delivering high-performance interactivity over
large data to thin clients.
Pandas-Bokeh provides a high level API
for Bokeh that can be loaded as a native pandas plotting backend via
pd.set_option("plotting.backend", "pandas_bokeh")
It is very similar to the matplotlib plotting backend, but provides interactive
web-based charts and maps.
Seaborn#
Seaborn is a Python visualization library based on
matplotlib. It provides a high-level, dataset-oriented
interface for creating attractive statistical graphics. The plotting functions
in seaborn understand pandas objects and leverage pandas grouping operations
internally to support concise specification of complex visualizations. Seaborn
also goes beyond matplotlib and pandas with the option to perform statistical
estimation while plotting, aggregating across observations and visualizing the
fit of statistical models to emphasize patterns in a dataset.
plotnine#
Hadley Wickham’s ggplot2 is a foundational exploratory visualization package for the R language.
Based on “The Grammar of Graphics” it
provides a powerful, declarative and extremely general way to generate bespoke plots of any kind of data.
Various implementations to other languages are available.
A good implementation for Python users is has2k1/plotnine.
IPython vega#
IPython Vega leverages Vega to create plots within Jupyter Notebook.
Plotly#
Plotly’s Python API enables interactive figures and web shareability. Maps, 2D, 3D, and live-streaming graphs are rendered with WebGL and D3.js. The library supports plotting directly from a pandas DataFrame and cloud-based collaboration. Users of matplotlib, ggplot for Python, and Seaborn can convert figures into interactive web-based plots. Plots can be drawn in IPython Notebooks, edited with R or MATLAB, modified in a GUI, or embedded in apps and dashboards. Plotly is free for unlimited sharing, and has offline, or on-premise accounts for private use.
Lux#
Lux is a Python library that facilitates fast and easy experimentation with data by automating the visual data exploration process. To use Lux, simply add an extra import alongside pandas:
import lux
import pandas as pd
df = pd.read_csv("data.csv")
df # discover interesting insights!
By printing out a dataframe, Lux automatically recommends a set of visualizations that highlights interesting trends and patterns in the dataframe. Users can leverage any existing pandas commands without modifying their code, while being able to visualize their pandas data structures (e.g., DataFrame, Series, Index) at the same time. Lux also offers a powerful, intuitive language that allow users to create Altair, matplotlib, or Vega-Lite visualizations without having to think at the level of code.
Qtpandas#
Spun off from the main pandas library, the qtpandas
library enables DataFrame visualization and manipulation in PyQt4 and PySide applications.
D-Tale#
D-Tale is a lightweight web client for visualizing pandas data structures. It
provides a rich spreadsheet-style grid which acts as a wrapper for a lot of
pandas functionality (query, sort, describe, corr…) so users can quickly
manipulate their data. There is also an interactive chart-builder using Plotly
Dash allowing users to build nice portable visualizations. D-Tale can be
invoked with the following command
import dtale
dtale.show(df)
D-Tale integrates seamlessly with Jupyter notebooks, Python terminals, Kaggle
& Google Colab. Here are some demos of the grid.
hvplot#
hvPlot is a high-level plotting API for the PyData ecosystem built on HoloViews.
It can be loaded as a native pandas plotting backend via
pd.set_option("plotting.backend", "hvplot")
IDE#
IPython#
IPython is an interactive command shell and distributed computing
environment. IPython tab completion works with pandas methods and also
attributes like DataFrame columns.
Jupyter Notebook / Jupyter Lab#
Jupyter Notebook is a web application for creating Jupyter notebooks.
A Jupyter notebook is a JSON document containing an ordered list
of input/output cells which can contain code, text, mathematics, plots
and rich media.
Jupyter notebooks can be converted to a number of open standard output formats
(HTML, HTML presentation slides, LaTeX, PDF, ReStructuredText, Markdown,
Python) through ‘Download As’ in the web interface and jupyter convert
in a shell.
pandas DataFrames implement _repr_html_ and _repr_latex_ methods
which are utilized by Jupyter Notebook for displaying
(abbreviated) HTML or LaTeX tables. LaTeX output is properly escaped.
(Note: HTML tables may or may not be
compatible with non-HTML Jupyter output formats.)
See Options and Settings and
Available Options
for pandas display settings.
Quantopian/qgrid#
qgrid is “an interactive grid for sorting and filtering
DataFrames in IPython Notebook” built with SlickGrid.
Spyder#
Spyder is a cross-platform PyQt-based IDE combining the editing, analysis,
debugging and profiling functionality of a software development tool with the
data exploration, interactive execution, deep inspection and rich visualization
capabilities of a scientific environment like MATLAB or Rstudio.
Its Variable Explorer
allows users to view, manipulate and edit pandas Index, Series,
and DataFrame objects like a “spreadsheet”, including copying and modifying
values, sorting, displaying a “heatmap”, converting data types and more.
pandas objects can also be renamed, duplicated, new columns added,
copied/pasted to/from the clipboard (as TSV), and saved/loaded to/from a file.
Spyder can also import data from a variety of plain text and binary files
or the clipboard into a new pandas DataFrame via a sophisticated import wizard.
Most pandas classes, methods and data attributes can be autocompleted in
Spyder’s Editor and
IPython Console,
and Spyder’s Help pane can retrieve
and render Numpydoc documentation on pandas objects in rich text with Sphinx
both automatically and on-demand.
API#
pandas-datareader#
pandas-datareader is a remote data access library for pandas (PyPI:pandas-datareader).
It is based on functionality that was located in pandas.io.data and pandas.io.wb but was
split off in v0.19.
See more in the pandas-datareader docs:
The following data feeds are available:
Google Finance
Tiingo
Morningstar
IEX
Robinhood
Enigma
Quandl
FRED
Fama/French
World Bank
OECD
Eurostat
TSP Fund Data
Nasdaq Trader Symbol Definitions
Stooq Index Data
MOEX Data
Quandl/Python#
Quandl API for Python wraps the Quandl REST API to return
pandas DataFrames with timeseries indexes.
Pydatastream#
PyDatastream is a Python interface to the
Refinitiv Datastream (DWS)
REST API to return indexed pandas DataFrames with financial data.
This package requires valid credentials for this API (non free).
pandaSDMX#
pandaSDMX is a library to retrieve and acquire statistical data
and metadata disseminated in
SDMX 2.1, an ISO-standard
widely used by institutions such as statistics offices, central banks,
and international organisations. pandaSDMX can expose datasets and related
structural metadata including data flows, code-lists,
and data structure definitions as pandas Series
or MultiIndexed DataFrames.
fredapi#
fredapi is a Python interface to the Federal Reserve Economic Data (FRED)
provided by the Federal Reserve Bank of St. Louis. It works with both the FRED database and ALFRED database that
contains point-in-time data (i.e. historic data revisions). fredapi provides a wrapper in Python to the FRED
HTTP API, and also provides several convenient methods for parsing and analyzing point-in-time data from ALFRED.
fredapi makes use of pandas and returns data in a Series or DataFrame. This module requires a FRED API key that
you can obtain for free on the FRED website.
dataframe_sql#
dataframe_sql is a Python package that translates SQL syntax directly into
operations on pandas DataFrames. This is useful when migrating from a database to
using pandas or for users more comfortable with SQL looking for a way to interface
with pandas.
Domain specific#
Geopandas#
Geopandas extends pandas data objects to include geographic information which support
geometric operations. If your work entails maps and geographical coordinates, and
you love pandas, you should take a close look at Geopandas.
staircase#
staircase is a data analysis package, built upon pandas and numpy, for modelling and
manipulation of mathematical step functions. It provides a rich variety of arithmetic
operations, relational operations, logical operations, statistical operations and
aggregations for step functions defined over real numbers, datetime and timedelta domains.
xarray#
xarray brings the labeled data power of pandas to the physical sciences by
providing N-dimensional variants of the core pandas data structures. It aims to
provide a pandas-like and pandas-compatible toolkit for analytics on multi-
dimensional arrays, rather than the tabular data for which pandas excels.
IO#
BCPandas#
BCPandas provides high performance writes from pandas to Microsoft SQL Server,
far exceeding the performance of the native df.to_sql method. Internally, it uses
Microsoft’s BCP utility, but the complexity is fully abstracted away from the end user.
Rigorously tested, it is a complete replacement for df.to_sql.
Deltalake#
Deltalake python package lets you access tables stored in
Delta Lake natively in Python without the need to use Spark or
JVM. It provides the delta_table.to_pyarrow_table().to_pandas() method to convert
any Delta table into a pandas DataFrame.
Out-of-core#
Blaze#
Blaze provides a standard API for doing computations with various
in-memory and on-disk backends: NumPy, pandas, SQLAlchemy, MongoDB, PyTables,
PySpark.
Cylon#
Cylon is a fast, scalable, distributed memory parallel runtime with a pandas
like Python DataFrame API. “Core Cylon” is implemented with C++ using Apache
Arrow format to represent the data in-memory. Cylon DataFrame API implements
most of the core operators of pandas such as merge, filter, join, concat,
group-by, drop_duplicates, etc. These operators are designed to work across
thousands of cores to scale applications. It can interoperate with pandas
DataFrame by reading data from pandas or converting data to pandas so users
can selectively scale parts of their pandas DataFrame applications.
from pycylon import read_csv, DataFrame, CylonEnv
from pycylon.net import MPIConfig
# Initialize Cylon distributed environment
config: MPIConfig = MPIConfig()
env: CylonEnv = CylonEnv(config=config, distributed=True)
df1: DataFrame = read_csv('/tmp/csv1.csv')
df2: DataFrame = read_csv('/tmp/csv2.csv')
# Using 1000s of cores across the cluster to compute the join
df3: Table = df1.join(other=df2, on=[0], algorithm="hash", env=env)
print(df3)
Dask#
Dask is a flexible parallel computing library for analytics. Dask
provides a familiar DataFrame interface for out-of-core, parallel and distributed computing.
Dask-ML#
Dask-ML enables parallel and distributed machine learning using Dask alongside existing machine learning libraries like Scikit-Learn, XGBoost, and TensorFlow.
Ibis#
Ibis offers a standard way to write analytics code, that can be run in multiple engines. It helps in bridging the gap between local Python environments (like pandas) and remote storage and execution systems like Hadoop components (like HDFS, Impala, Hive, Spark) and SQL databases (Postgres, etc.).
Koalas#
Koalas provides a familiar pandas DataFrame interface on top of Apache Spark. It enables users to leverage multi-cores on one machine or a cluster of machines to speed up or scale their DataFrame code.
Modin#
The modin.pandas DataFrame is a parallel and distributed drop-in replacement
for pandas. This means that you can use Modin with existing pandas code or write
new code with the existing pandas API. Modin can leverage your entire machine or
cluster to speed up and scale your pandas workloads, including traditionally
time-consuming tasks like ingesting data (read_csv, read_excel,
read_parquet, etc.).
# import pandas as pd
import modin.pandas as pd
df = pd.read_csv("big.csv") # use all your cores!
Odo#
Odo provides a uniform API for moving data between different formats. It uses
pandas own read_csv for CSV IO and leverages many existing packages such as
PyTables, h5py, and pymongo to move data between non pandas formats. Its graph
based approach is also extensible by end users for custom formats that may be
too specific for the core of odo.
Pandarallel#
Pandarallel provides a simple way to parallelize your pandas operations on all your CPUs by changing only one line of code.
It also displays progress bars.
from pandarallel import pandarallel
pandarallel.initialize(progress_bar=True)
# df.apply(func)
df.parallel_apply(func)
Vaex#
Vaex is a Python library for Out-of-Core DataFrames (similar to pandas), to visualize and explore big tabular datasets. It can calculate statistics such as mean, sum, count, standard deviation, etc., on an N-dimensional grid at up to a billion (10^9) objects/rows per second. Visualization is done using histograms, density plots and 3d volume rendering, allowing interactive exploration of big data. Vaex uses memory mapping, a zero memory copy policy and lazy computations for best performance (no memory wasted).
vaex.from_pandas
vaex.to_pandas_df
Extension data types#
pandas provides an interface for defining
extension types to extend NumPy’s type
system. The following libraries implement that interface to provide types not
found in NumPy or pandas, which work well with pandas’ data containers.
Cyberpandas#
Cyberpandas provides an extension type for storing arrays of IP Addresses. These
arrays can be stored inside pandas’ Series and DataFrame.
Pandas-Genomics#
Pandas-Genomics provides extension types, extension arrays, and extension accessors for working with genomics data
Pint-Pandas#
Pint-Pandas provides an extension type for
storing numeric arrays with units. These arrays can be stored inside pandas’
Series and DataFrame. Operations between Series and DataFrame columns which
use pint’s extension array are then units aware.
Text Extensions for Pandas#
Text Extensions for Pandas
provides extension types to cover common data structures for representing natural language
data, plus library integrations that convert the outputs of popular natural language
processing libraries into Pandas DataFrames.
Accessors#
A directory of projects providing
extension accessors. This is for users to
discover new accessors and for library authors to coordinate on the namespace.
| Library | Accessor | Classes | Description |
|---|---|---|---|
| cyberpandas | ip | Series | Provides common operations for working with IP addresses. |
| pdvega | vgplot | Series, DataFrame | Provides plotting functions from the Altair library. |
| pandas-genomics | genomics | Series, DataFrame | Provides common operations for quality control and analysis of genomics data. |
| pandas_path | path | Index, Series | Provides pathlib.Path functions for Series. |
| pint-pandas | pint | Series, DataFrame | Provides units support for numeric Series and DataFrames. |
| composeml | slice | DataFrame | Provides a generator for enhanced data slicing. |
| datatest | validate | Series, DataFrame, Index | Provides validation, differences, and acceptance managers. |
| woodwork | ww | Series, DataFrame | Provides physical, logical, and semantic data typing information for Series and DataFrames. |
| staircase | sc | Series | Provides methods for querying, aggregating and plotting step functions. |
Development tools#
pandas-stubs#
While the pandas repository is partially typed, the package itself doesn’t expose this information for external use.
Install pandas-stubs to enable basic type coverage of pandas API.
Learn more by reading through GH14468, GH26766, GH28142.
See installation and usage instructions on the github page.
| ecosystem.html | null |
pandas.tseries.offsets.CustomBusinessDay.is_month_start | `pandas.tseries.offsets.CustomBusinessDay.is_month_start`
Return boolean whether a timestamp occurs on the month start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
``` | CustomBusinessDay.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
| reference/api/pandas.tseries.offsets.CustomBusinessDay.is_month_start.html |
pandas.api.types.is_datetime64_dtype | `pandas.api.types.is_datetime64_dtype`
Check whether an array-like or dtype is of the datetime64 dtype.
```
>>> is_datetime64_dtype(object)
False
>>> is_datetime64_dtype(np.datetime64)
True
>>> is_datetime64_dtype(np.array([], dtype=int))
False
>>> is_datetime64_dtype(np.array([], dtype=np.datetime64))
True
>>> is_datetime64_dtype([1, 2, 3])
False
``` | pandas.api.types.is_datetime64_dtype(arr_or_dtype)[source]#
Check whether an array-like or dtype is of the datetime64 dtype.
Parameters
arr_or_dtypearray-like or dtypeThe array-like or dtype to check.
Returns
booleanWhether or not the array-like or dtype is of the datetime64 dtype.
Examples
>>> is_datetime64_dtype(object)
False
>>> is_datetime64_dtype(np.datetime64)
True
>>> is_datetime64_dtype(np.array([], dtype=int))
False
>>> is_datetime64_dtype(np.array([], dtype=np.datetime64))
True
>>> is_datetime64_dtype([1, 2, 3])
False
| reference/api/pandas.api.types.is_datetime64_dtype.html |
pandas.DataFrame.drop | `pandas.DataFrame.drop`
Drop specified labels from rows or columns.
```
>>> df = pd.DataFrame(np.arange(12).reshape(3, 4),
... columns=['A', 'B', 'C', 'D'])
>>> df
A B C D
0 0 1 2 3
1 4 5 6 7
2 8 9 10 11
``` | DataFrame.drop(labels=None, *, axis=0, index=None, columns=None, level=None, inplace=False, errors='raise')[source]#
Drop specified labels from rows or columns.
Remove rows or columns by specifying label names and corresponding
axis, or by specifying directly index or column names. When using a
multi-index, labels on different levels can be removed by specifying
the level. See the user guide
for more information about the now unused levels.
Parameters
labelssingle label or list-likeIndex or column labels to drop. A tuple will be used as a single
label and not treated as a list-like.
axis{0 or ‘index’, 1 or ‘columns’}, default 0Whether to drop labels from the index (0 or ‘index’) or
columns (1 or ‘columns’).
indexsingle label or list-likeAlternative to specifying axis (labels, axis=0
is equivalent to index=labels).
columnssingle label or list-likeAlternative to specifying axis (labels, axis=1
is equivalent to columns=labels).
levelint or level name, optionalFor MultiIndex, level from which the labels will be removed.
inplacebool, default FalseIf False, return a copy. Otherwise, do operation
inplace and return None.
errors{‘ignore’, ‘raise’}, default ‘raise’If ‘ignore’, suppress error and only existing labels are
dropped.
Returns
DataFrame or NoneDataFrame without the removed index or column labels or
None if inplace=True.
Raises
KeyErrorIf any of the labels is not found in the selected axis.
See also
DataFrame.locLabel-location based indexer for selection by label.
DataFrame.dropnaReturn DataFrame with labels on given axis omitted where (all or any) data are missing.
DataFrame.drop_duplicatesReturn DataFrame with duplicate rows removed, optionally only considering certain columns.
Series.dropReturn Series with specified index labels removed.
Examples
>>> df = pd.DataFrame(np.arange(12).reshape(3, 4),
... columns=['A', 'B', 'C', 'D'])
>>> df
A B C D
0 0 1 2 3
1 4 5 6 7
2 8 9 10 11
Drop columns
>>> df.drop(['B', 'C'], axis=1)
A D
0 0 3
1 4 7
2 8 11
>>> df.drop(columns=['B', 'C'])
A D
0 0 3
1 4 7
2 8 11
Drop a row by index
>>> df.drop([0, 1])
A B C D
2 8 9 10 11
Drop columns and/or rows of MultiIndex DataFrame
>>> midx = pd.MultiIndex(levels=[['lama', 'cow', 'falcon'],
... ['speed', 'weight', 'length']],
... codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2],
... [0, 1, 2, 0, 1, 2, 0, 1, 2]])
>>> df = pd.DataFrame(index=midx, columns=['big', 'small'],
... data=[[45, 30], [200, 100], [1.5, 1], [30, 20],
... [250, 150], [1.5, 0.8], [320, 250],
... [1, 0.8], [0.3, 0.2]])
>>> df
big small
lama speed 45.0 30.0
weight 200.0 100.0
length 1.5 1.0
cow speed 30.0 20.0
weight 250.0 150.0
length 1.5 0.8
falcon speed 320.0 250.0
weight 1.0 0.8
length 0.3 0.2
Drop a specific index combination from the MultiIndex
DataFrame, i.e., drop the combination 'falcon' and
'weight', which deletes only the corresponding row
>>> df.drop(index=('falcon', 'weight'))
big small
lama speed 45.0 30.0
weight 200.0 100.0
length 1.5 1.0
cow speed 30.0 20.0
weight 250.0 150.0
length 1.5 0.8
falcon speed 320.0 250.0
length 0.3 0.2
>>> df.drop(index='cow', columns='small')
big
lama speed 45.0
weight 200.0
length 1.5
falcon speed 320.0
weight 1.0
length 0.3
>>> df.drop(index='length', level=1)
big small
lama speed 45.0 30.0
weight 200.0 100.0
cow speed 30.0 20.0
weight 250.0 150.0
falcon speed 320.0 250.0
weight 1.0 0.8
| reference/api/pandas.DataFrame.drop.html |
pandas.tseries.offsets.Minute.copy | `pandas.tseries.offsets.Minute.copy`
Return a copy of the frequency.
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
``` | Minute.copy()#
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
| reference/api/pandas.tseries.offsets.Minute.copy.html |
pandas.DataFrame.sort_index | `pandas.DataFrame.sort_index`
Sort object by labels (along an axis).
```
>>> df = pd.DataFrame([1, 2, 3, 4, 5], index=[100, 29, 234, 1, 150],
... columns=['A'])
>>> df.sort_index()
A
1 4
29 2
100 1
150 5
234 3
``` | DataFrame.sort_index(*, axis=0, level=None, ascending=True, inplace=False, kind='quicksort', na_position='last', sort_remaining=True, ignore_index=False, key=None)[source]#
Sort object by labels (along an axis).
Returns a new DataFrame sorted by label if inplace argument is
False, otherwise updates the original DataFrame and returns None.
Parameters
axis{0 or ‘index’, 1 or ‘columns’}, default 0The axis along which to sort. The value 0 identifies the rows,
and 1 identifies the columns.
levelint or level name or list of ints or list of level namesIf not None, sort on values in specified index level(s).
ascendingbool or list-like of bools, default TrueSort ascending vs. descending. When the index is a MultiIndex the
sort direction can be controlled for each level individually.
inplacebool, default FalseWhether to modify the DataFrame rather than creating a new one.
kind{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, default ‘quicksort’Choice of sorting algorithm. See also numpy.sort() for more
information. mergesort and stable are the only stable algorithms. For
DataFrames, this option is only applied when sorting on a single
column or label.
na_position{‘first’, ‘last’}, default ‘last’Puts NaNs at the beginning if first; last puts NaNs at the end.
Not implemented for MultiIndex.
sort_remainingbool, default TrueIf True and sorting by level and index is multilevel, sort by other
levels too (in order) after sorting by specified level.
ignore_indexbool, default FalseIf True, the resulting axis will be labeled 0, 1, …, n - 1.
New in version 1.0.0.
keycallable, optionalIf not None, apply the key function to the index values
before sorting. This is similar to the key argument in the
builtin sorted() function, with the notable difference that
this key function should be vectorized. It should expect an
Index and return an Index of the same shape. For MultiIndex
inputs, the key is applied per level.
New in version 1.1.0.
Returns
DataFrame or NoneThe original DataFrame sorted by the labels or None if inplace=True.
See also
Series.sort_indexSort Series by the index.
DataFrame.sort_valuesSort DataFrame by the value.
Series.sort_valuesSort Series by the value.
Examples
>>> df = pd.DataFrame([1, 2, 3, 4, 5], index=[100, 29, 234, 1, 150],
... columns=['A'])
>>> df.sort_index()
A
1 4
29 2
100 1
150 5
234 3
By default, it sorts in ascending order, to sort in descending order,
use ascending=False
>>> df.sort_index(ascending=False)
A
234 3
150 5
100 1
29 2
1 4
A key function can be specified which is applied to the index before
sorting. For a MultiIndex this is applied to each level separately.
>>> df = pd.DataFrame({"a": [1, 2, 3, 4]}, index=['A', 'b', 'C', 'd'])
>>> df.sort_index(key=lambda x: x.str.lower())
a
A 1
b 2
C 3
d 4
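Not shown above: ignore_index discards the sorted labels and relabels the result 0, 1, …, n - 1; a minimal sketch (the values are made up):
>>> df = pd.DataFrame([1, 2, 3], index=[10, 5, 7], columns=['A'])
>>> df.sort_index(ignore_index=True)
   A
0  2
1  3
2  1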
| reference/api/pandas.DataFrame.sort_index.html |
pandas.DataFrame.notnull | `pandas.DataFrame.notnull`
DataFrame.notnull is an alias for DataFrame.notna.
```
>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
... toy=[None, 'Batmobile', 'Joker']))
>>> df
age born name toy
0 5.0 NaT Alfred None
1 6.0 1939-05-27 Batman Batmobile
2 NaN 1940-04-25 Joker
``` | DataFrame.notnull()[source]#
DataFrame.notnull is an alias for DataFrame.notna.
Detect existing (non-missing) values.
Return a boolean same-sized object indicating if the values are not NA.
Non-missing values get mapped to True. Characters such as empty
strings '' or numpy.inf are not considered NA values
(unless you set pandas.options.mode.use_inf_as_na = True).
NA values, such as None or numpy.NaN, get mapped to False
values.
Returns
DataFrameMask of bool values for each element in DataFrame that
indicates whether an element is not an NA value.
See also
DataFrame.notnullAlias of notna.
DataFrame.isnaBoolean inverse of notna.
DataFrame.dropnaOmit axes labels with missing values.
notnaTop-level notna.
Examples
Show which entries in a DataFrame are not NA.
>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
... toy=[None, 'Batmobile', 'Joker']))
>>> df
age born name toy
0 5.0 NaT Alfred None
1 6.0 1939-05-27 Batman Batmobile
2 NaN 1940-04-25 Joker
>>> df.notna()
age born name toy
0 True False True False
1 True True True True
2 False True True True
Show which entries in a Series are not NA.
>>> ser = pd.Series([5, 6, np.NaN])
>>> ser
0 5.0
1 6.0
2 NaN
dtype: float64
>>> ser.notna()
0 True
1 True
2 False
dtype: bool
| reference/api/pandas.DataFrame.notnull.html |
pandas.DataFrame.infer_objects | `pandas.DataFrame.infer_objects`
Attempt to infer better dtypes for object columns.
```
>>> df = pd.DataFrame({"A": ["a", 1, 2, 3]})
>>> df = df.iloc[1:]
>>> df
A
1 1
2 2
3 3
``` | DataFrame.infer_objects()[source]#
Attempt to infer better dtypes for object columns.
Attempts soft conversion of object-dtyped
columns, leaving non-object and unconvertible
columns unchanged. The inference rules are the
same as during normal Series/DataFrame construction.
Returns
convertedsame type as input object
See also
to_datetimeConvert argument to datetime.
to_timedeltaConvert argument to timedelta.
to_numericConvert argument to numeric type.
convert_dtypesConvert argument to best possible dtype.
Examples
>>> df = pd.DataFrame({"A": ["a", 1, 2, 3]})
>>> df = df.iloc[1:]
>>> df
A
1 1
2 2
3 3
>>> df.dtypes
A object
dtype: object
>>> df.infer_objects().dtypes
A int64
dtype: object
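A short sketch of the "unconvertible columns unchanged" behaviour described above (the column names are illustrative): the string column keeps its object dtype while the numeric column is inferred.
>>> df = pd.DataFrame({"A": [1, 2], "B": ["x", "y"]}, dtype=object)
>>> df.infer_objects().dtypes
A     int64
B    object
dtype: object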
| reference/api/pandas.DataFrame.infer_objects.html |
pandas.core.groupby.DataFrameGroupBy.value_counts | `pandas.core.groupby.DataFrameGroupBy.value_counts`
Return a Series or DataFrame containing counts of unique rows.
New in version 1.4.0.
```
>>> df = pd.DataFrame({
... 'gender': ['male', 'male', 'female', 'male', 'female', 'male'],
... 'education': ['low', 'medium', 'high', 'low', 'high', 'low'],
... 'country': ['US', 'FR', 'US', 'FR', 'FR', 'FR']
... })
``` | DataFrameGroupBy.value_counts(subset=None, normalize=False, sort=True, ascending=False, dropna=True)[source]#
Return a Series or DataFrame containing counts of unique rows.
New in version 1.4.0.
Parameters
subsetlist-like, optionalColumns to use when counting unique combinations.
normalizebool, default FalseReturn proportions rather than frequencies.
sortbool, default TrueSort by frequencies.
ascendingbool, default FalseSort in ascending order.
dropnabool, default TrueDon’t include counts of rows that contain NA values.
Returns
Series or DataFrameSeries if the groupby as_index is True, otherwise DataFrame.
See also
Series.value_countsEquivalent method on Series.
DataFrame.value_countsEquivalent method on DataFrame.
SeriesGroupBy.value_countsEquivalent method on SeriesGroupBy.
Notes
If the groupby as_index is True then the returned Series will have a
MultiIndex with one level per input column.
If the groupby as_index is False then the returned DataFrame will have an
additional column with the value_counts. The column is labelled ‘count’ or
‘proportion’, depending on the normalize parameter.
By default, rows that contain any NA values are omitted from
the result.
By default, the result will be in descending order so that the
first element of each group is the most frequently-occurring row.
Examples
>>> df = pd.DataFrame({
... 'gender': ['male', 'male', 'female', 'male', 'female', 'male'],
... 'education': ['low', 'medium', 'high', 'low', 'high', 'low'],
... 'country': ['US', 'FR', 'US', 'FR', 'FR', 'FR']
... })
>>> df
gender education country
0 male low US
1 male medium FR
2 female high US
3 male low FR
4 female high FR
5 male low FR
>>> df.groupby('gender').value_counts()
gender education country
female high FR 1
US 1
male low FR 2
US 1
medium FR 1
dtype: int64
>>> df.groupby('gender').value_counts(ascending=True)
gender education country
female high FR 1
US 1
male low US 1
medium FR 1
low FR 2
dtype: int64
>>> df.groupby('gender').value_counts(normalize=True)
gender education country
female high FR 0.50
US 0.50
male low FR 0.50
US 0.25
medium FR 0.25
dtype: float64
>>> df.groupby('gender', as_index=False).value_counts()
gender education country count
0 female high FR 1
1 female high US 1
2 male low FR 2
3 male low US 1
4 male medium FR 1
>>> df.groupby('gender', as_index=False).value_counts(normalize=True)
gender education country proportion
0 female high FR 0.50
1 female high US 0.50
2 male low FR 0.50
3 male low US 0.25
4 male medium FR 0.25
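The subset parameter is not exercised above; a sketch with the same df, counting only the education column within each gender group (output layout approximate):
>>> df.groupby('gender').value_counts(subset=['education'])
gender  education
female  high         2
male    low          3
        medium       1
dtype: int64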
| reference/api/pandas.core.groupby.DataFrameGroupBy.value_counts.html |
Categorical data | Categorical data | This is an introduction to the pandas categorical data type, including a short comparison
with R’s factor.
Categoricals are a pandas data type corresponding to categorical variables in
statistics. A categorical variable takes on a limited, and usually fixed,
number of possible values (categories; levels in R). Examples are gender,
social class, blood type, country affiliation, observation time or rating via
Likert scales.
In contrast to statistical categorical variables, categorical data might have an order (e.g.
‘strongly agree’ vs ‘agree’ or ‘first observation’ vs. ‘second observation’), but numerical
operations (additions, divisions, …) are not possible.
All values of categorical data are either in categories or np.nan. Order is defined by
the order of categories, not lexical order of the values. Internally, the data structure
consists of a categories array and an integer array of codes which point to the real value in
the categories array.
The categorical data type is useful in the following cases:
A string variable consisting of only a few different values. Converting such a string
variable to a categorical variable will save some memory, see here.
The lexical order of a variable is not the same as the logical order (“one”, “two”, “three”).
By converting to a categorical and specifying an order on the categories, sorting and
min/max will use the logical order instead of the lexical order, see here.
As a signal to other Python libraries that this column should be treated as a categorical
variable (e.g. to use suitable statistical methods or plot types).
See also the API docs on categoricals.
Object creation#
Series creation#
Categorical Series or columns in a DataFrame can be created in several ways:
By specifying dtype="category" when constructing a Series:
In [1]: s = pd.Series(["a", "b", "c", "a"], dtype="category")
In [2]: s
Out[2]:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): ['a', 'b', 'c']
By converting an existing Series or column to a category dtype:
In [3]: df = pd.DataFrame({"A": ["a", "b", "c", "a"]})
In [4]: df["B"] = df["A"].astype("category")
In [5]: df
Out[5]:
A B
0 a a
1 b b
2 c c
3 a a
By using special functions, such as cut(), which groups data into
discrete bins. See the example on tiling in the docs.
In [6]: df = pd.DataFrame({"value": np.random.randint(0, 100, 20)})
In [7]: labels = ["{0} - {1}".format(i, i + 9) for i in range(0, 100, 10)]
In [8]: df["group"] = pd.cut(df.value, range(0, 105, 10), right=False, labels=labels)
In [9]: df.head(10)
Out[9]:
value group
0 65 60 - 69
1 49 40 - 49
2 56 50 - 59
3 43 40 - 49
4 43 40 - 49
5 91 90 - 99
6 32 30 - 39
7 87 80 - 89
8 36 30 - 39
9 8 0 - 9
By passing a pandas.Categorical object to a Series or assigning it to a DataFrame.
In [10]: raw_cat = pd.Categorical(
....: ["a", "b", "c", "a"], categories=["b", "c", "d"], ordered=False
....: )
....:
In [11]: s = pd.Series(raw_cat)
In [12]: s
Out[12]:
0 NaN
1 b
2 c
3 NaN
dtype: category
Categories (3, object): ['b', 'c', 'd']
In [13]: df = pd.DataFrame({"A": ["a", "b", "c", "a"]})
In [14]: df["B"] = raw_cat
In [15]: df
Out[15]:
A B
0 a NaN
1 b b
2 c c
3 a NaN
Categorical data has a specific category dtype:
In [16]: df.dtypes
Out[16]:
A object
B category
dtype: object
DataFrame creation#
Similar to the previous section where a single column was converted to categorical, all columns in a
DataFrame can be batch converted to categorical either during or after construction.
This can be done during construction by specifying dtype="category" in the DataFrame constructor:
In [17]: df = pd.DataFrame({"A": list("abca"), "B": list("bccd")}, dtype="category")
In [18]: df.dtypes
Out[18]:
A category
B category
dtype: object
Note that the categories present in each column differ; the conversion is done column by column, so
only labels present in a given column are categories:
In [19]: df["A"]
Out[19]:
0 a
1 b
2 c
3 a
Name: A, dtype: category
Categories (3, object): ['a', 'b', 'c']
In [20]: df["B"]
Out[20]:
0 b
1 c
2 c
3 d
Name: B, dtype: category
Categories (3, object): ['b', 'c', 'd']
Analogously, all columns in an existing DataFrame can be batch converted using DataFrame.astype():
In [21]: df = pd.DataFrame({"A": list("abca"), "B": list("bccd")})
In [22]: df_cat = df.astype("category")
In [23]: df_cat.dtypes
Out[23]:
A category
B category
dtype: object
This conversion is likewise done column by column:
In [24]: df_cat["A"]
Out[24]:
0 a
1 b
2 c
3 a
Name: A, dtype: category
Categories (3, object): ['a', 'b', 'c']
In [25]: df_cat["B"]
Out[25]:
0 b
1 c
2 c
3 d
Name: B, dtype: category
Categories (3, object): ['b', 'c', 'd']
Controlling behavior#
In the examples above where we passed dtype='category', we used the default
behavior:
Categories are inferred from the data.
Categories are unordered.
To control those behaviors, instead of passing 'category', use an instance
of CategoricalDtype.
In [26]: from pandas.api.types import CategoricalDtype
In [27]: s = pd.Series(["a", "b", "c", "a"])
In [28]: cat_type = CategoricalDtype(categories=["b", "c", "d"], ordered=True)
In [29]: s_cat = s.astype(cat_type)
In [30]: s_cat
Out[30]:
0 NaN
1 b
2 c
3 NaN
dtype: category
Categories (3, object): ['b' < 'c' < 'd']
Similarly, a CategoricalDtype can be used with a DataFrame to ensure that categories
are consistent among all columns.
In [31]: from pandas.api.types import CategoricalDtype
In [32]: df = pd.DataFrame({"A": list("abca"), "B": list("bccd")})
In [33]: cat_type = CategoricalDtype(categories=list("abcd"), ordered=True)
In [34]: df_cat = df.astype(cat_type)
In [35]: df_cat["A"]
Out[35]:
0 a
1 b
2 c
3 a
Name: A, dtype: category
Categories (4, object): ['a' < 'b' < 'c' < 'd']
In [36]: df_cat["B"]
Out[36]:
0 b
1 c
2 c
3 d
Name: B, dtype: category
Categories (4, object): ['a' < 'b' < 'c' < 'd']
Note
To perform table-wise conversion, where all labels in the entire DataFrame are used as
categories for each column, the categories parameter can be determined programmatically by
categories = pd.unique(df.to_numpy().ravel()).
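A minimal sketch of that programmatic approach (the intermediate names are illustrative): both columns end up with the same categories.
>>> df = pd.DataFrame({"A": list("abca"), "B": list("bccd")})
>>> categories = pd.unique(df.to_numpy().ravel())
>>> df_cat = df.astype(pd.CategoricalDtype(categories=categories))
>>> df_cat["A"].cat.categories.equals(df_cat["B"].cat.categories)
True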
If you already have codes and categories, you can use the
from_codes() constructor to skip the factorization step
that the normal constructor would otherwise perform:
In [37]: splitter = np.random.choice([0, 1], 5, p=[0.5, 0.5])
In [38]: s = pd.Series(pd.Categorical.from_codes(splitter, categories=["train", "test"]))
Regaining original data#
To get back to the original Series or NumPy array, use
Series.astype(original_dtype) or np.asarray(categorical):
In [39]: s = pd.Series(["a", "b", "c", "a"])
In [40]: s
Out[40]:
0 a
1 b
2 c
3 a
dtype: object
In [41]: s2 = s.astype("category")
In [42]: s2
Out[42]:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): ['a', 'b', 'c']
In [43]: s2.astype(str)
Out[43]:
0 a
1 b
2 c
3 a
dtype: object
In [44]: np.asarray(s2)
Out[44]: array(['a', 'b', 'c', 'a'], dtype=object)
Note
In contrast to R’s factor function, categorical data does not convert input values to
strings; the categories end up with the same data type as the original values.
Note
In contrast to R’s factor function, there is currently no way to assign/change labels at
creation time. Use categories to change the categories after creation time.
CategoricalDtype#
A categorical’s type is fully described by
categories: a sequence of unique values and no missing values
ordered: a boolean
This information can be stored in a CategoricalDtype.
The categories argument is optional, which implies that the actual categories
should be inferred from whatever is present in the data when the
pandas.Categorical is created. The categories are assumed to be unordered
by default.
In [45]: from pandas.api.types import CategoricalDtype
In [46]: CategoricalDtype(["a", "b", "c"])
Out[46]: CategoricalDtype(categories=['a', 'b', 'c'], ordered=False)
In [47]: CategoricalDtype(["a", "b", "c"], ordered=True)
Out[47]: CategoricalDtype(categories=['a', 'b', 'c'], ordered=True)
In [48]: CategoricalDtype()
Out[48]: CategoricalDtype(categories=None, ordered=False)
A CategoricalDtype can be used in any place pandas
expects a dtype. For example pandas.read_csv(),
pandas.DataFrame.astype(), or in the Series constructor.
Note
As a convenience, you can use the string 'category' in place of a
CategoricalDtype when you want the default behavior of
the categories being unordered, and equal to the set values present in the
array. In other words, dtype='category' is equivalent to
dtype=CategoricalDtype().
Equality semantics#
Two instances of CategoricalDtype compare equal
whenever they have the same categories and order. When comparing two
unordered categoricals, the order of the categories is not considered.
In [49]: c1 = CategoricalDtype(["a", "b", "c"], ordered=False)
# Equal, since order is not considered when ordered=False
In [50]: c1 == CategoricalDtype(["b", "c", "a"], ordered=False)
Out[50]: True
# Unequal, since the second CategoricalDtype is ordered
In [51]: c1 == CategoricalDtype(["a", "b", "c"], ordered=True)
Out[51]: False
All instances of CategoricalDtype compare equal to the string 'category'.
In [52]: c1 == "category"
Out[52]: True
Warning
Since dtype='category' is essentially CategoricalDtype(None, False),
and since all instances CategoricalDtype compare equal to 'category',
all instances of CategoricalDtype compare equal to a
CategoricalDtype(None, False), regardless of categories or
ordered.
Description#
Using describe() on categorical data will produce similar
output to a Series or DataFrame of type string.
In [53]: cat = pd.Categorical(["a", "c", "c", np.nan], categories=["b", "a", "c"])
In [54]: df = pd.DataFrame({"cat": cat, "s": ["a", "c", "c", np.nan]})
In [55]: df.describe()
Out[55]:
cat s
count 3 3
unique 2 2
top c c
freq 2 2
In [56]: df["cat"].describe()
Out[56]:
count 3
unique 2
top c
freq 2
Name: cat, dtype: object
Working with categories#
Categorical data has a categories and an ordered property, which list its
possible values and whether the ordering matters. These properties are
exposed as s.cat.categories and s.cat.ordered. If you don’t manually
specify categories and ordering, they are inferred from the passed arguments.
In [57]: s = pd.Series(["a", "b", "c", "a"], dtype="category")
In [58]: s.cat.categories
Out[58]: Index(['a', 'b', 'c'], dtype='object')
In [59]: s.cat.ordered
Out[59]: False
It’s also possible to pass in the categories in a specific order:
In [60]: s = pd.Series(pd.Categorical(["a", "b", "c", "a"], categories=["c", "b", "a"]))
In [61]: s.cat.categories
Out[61]: Index(['c', 'b', 'a'], dtype='object')
In [62]: s.cat.ordered
Out[62]: False
Note
New categorical data are not automatically ordered. You must explicitly
pass ordered=True to indicate an ordered Categorical.
Note
The result of unique() is not always the same as Series.cat.categories,
because Series.unique() has a couple of guarantees, namely that it returns categories
in the order of appearance, and it only includes values that are actually present.
In [63]: s = pd.Series(list("babc")).astype(CategoricalDtype(list("abcd")))
In [64]: s
Out[64]:
0 b
1 a
2 b
3 c
dtype: category
Categories (4, object): ['a', 'b', 'c', 'd']
# categories
In [65]: s.cat.categories
Out[65]: Index(['a', 'b', 'c', 'd'], dtype='object')
# uniques
In [66]: s.unique()
Out[66]:
['b', 'a', 'c']
Categories (4, object): ['a', 'b', 'c', 'd']
Renaming categories#
Renaming categories is done by using the
rename_categories() method:
In [67]: s = pd.Series(["a", "b", "c", "a"], dtype="category")
In [68]: s
Out[68]:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): ['a', 'b', 'c']
In [69]: new_categories = ["Group %s" % g for g in s.cat.categories]
In [70]: s = s.cat.rename_categories(new_categories)
In [71]: s
Out[71]:
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (3, object): ['Group a', 'Group b', 'Group c']
# You can also pass a dict-like object to map the renaming
In [72]: s = s.cat.rename_categories({1: "x", 2: "y", 3: "z"})
In [73]: s
Out[73]:
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (3, object): ['Group a', 'Group b', 'Group c']
Note
In contrast to R’s factor, categorical data can have categories of other types than string.
Note
Be aware that assigning new categories is an inplace operation, while most other operations
under Series.cat by default return a new Series of dtype category.
Categories must be unique or a ValueError is raised:
In [74]: try:
....: s = s.cat.rename_categories([1, 1, 1])
....: except ValueError as e:
....: print("ValueError:", str(e))
....:
ValueError: Categorical categories must be unique
Categories must also not be NaN or a ValueError is raised:
In [75]: try:
....: s = s.cat.rename_categories([1, 2, np.nan])
....: except ValueError as e:
....: print("ValueError:", str(e))
....:
ValueError: Categorical categories cannot be null
Appending new categories#
Appending categories can be done by using the
add_categories() method:
In [76]: s = s.cat.add_categories([4])
In [77]: s.cat.categories
Out[77]: Index(['Group a', 'Group b', 'Group c', 4], dtype='object')
In [78]: s
Out[78]:
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (4, object): ['Group a', 'Group b', 'Group c', 4]
Removing categories#
Removing categories can be done by using the
remove_categories() method. Values which are removed
are replaced by np.nan.:
In [79]: s = s.cat.remove_categories([4])
In [80]: s
Out[80]:
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (3, object): ['Group a', 'Group b', 'Group c']
Removing unused categories#
Removing unused categories can also be done:
In [81]: s = pd.Series(pd.Categorical(["a", "b", "a"], categories=["a", "b", "c", "d"]))
In [82]: s
Out[82]:
0 a
1 b
2 a
dtype: category
Categories (4, object): ['a', 'b', 'c', 'd']
In [83]: s.cat.remove_unused_categories()
Out[83]:
0 a
1 b
2 a
dtype: category
Categories (2, object): ['a', 'b']
Setting categories#
If you want to remove and add new categories in one step (which has some
speed advantage), or simply set the categories to a predefined scale,
use set_categories().
In [84]: s = pd.Series(["one", "two", "four", "-"], dtype="category")
In [85]: s
Out[85]:
0 one
1 two
2 four
3 -
dtype: category
Categories (4, object): ['-', 'four', 'one', 'two']
In [86]: s = s.cat.set_categories(["one", "two", "three", "four"])
In [87]: s
Out[87]:
0 one
1 two
2 four
3 NaN
dtype: category
Categories (4, object): ['one', 'two', 'three', 'four']
Note
Be aware that Categorical.set_categories() cannot know whether some category is omitted
intentionally or because it is misspelled or (under Python3) due to a type difference (e.g.,
NumPy S1 dtype and Python strings). This can result in surprising behaviour!
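A minimal sketch of that surprise (the misspelled category is made up): any value whose category is omitted silently becomes NaN.
>>> s = pd.Series(["low", "high"], dtype="category")
>>> s.cat.set_categories(["low", "hgh"])  # "high" misspelled as "hgh"
0    low
1    NaN
dtype: category
Categories (2, object): ['low', 'hgh']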
Sorting and order#
If categorical data is ordered (s.cat.ordered == True), then the order of the categories has a
meaning and certain operations are possible. If the categorical is unordered, .min()/.max() will raise a TypeError.
In [88]: s = pd.Series(pd.Categorical(["a", "b", "c", "a"], ordered=False))
In [89]: s.sort_values(inplace=True)
In [90]: s = pd.Series(["a", "b", "c", "a"]).astype(CategoricalDtype(ordered=True))
In [91]: s.sort_values(inplace=True)
In [92]: s
Out[92]:
0 a
3 a
1 b
2 c
dtype: category
Categories (3, object): ['a' < 'b' < 'c']
In [93]: s.min(), s.max()
Out[93]: ('a', 'c')
You can set categorical data to be ordered by using as_ordered() or unordered by using as_unordered(). These will by
default return a new object.
In [94]: s.cat.as_ordered()
Out[94]:
0 a
3 a
1 b
2 c
dtype: category
Categories (3, object): ['a' < 'b' < 'c']
In [95]: s.cat.as_unordered()
Out[95]:
0 a
3 a
1 b
2 c
dtype: category
Categories (3, object): ['a', 'b', 'c']
Sorting will use the order defined by categories, not any lexical order present on the data type.
This is even true for strings and numeric data:
In [96]: s = pd.Series([1, 2, 3, 1], dtype="category")
In [97]: s = s.cat.set_categories([2, 3, 1], ordered=True)
In [98]: s
Out[98]:
0 1
1 2
2 3
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
In [99]: s.sort_values(inplace=True)
In [100]: s
Out[100]:
1 2
2 3
0 1
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
In [101]: s.min(), s.max()
Out[101]: (2, 1)
Reordering#
Reordering the categories is possible via the Categorical.reorder_categories() and
the Categorical.set_categories() methods. For Categorical.reorder_categories(), all
old categories must be included in the new categories and no new categories are allowed. This will
necessarily make the sort order the same as the categories order.
In [102]: s = pd.Series([1, 2, 3, 1], dtype="category")
In [103]: s = s.cat.reorder_categories([2, 3, 1], ordered=True)
In [104]: s
Out[104]:
0 1
1 2
2 3
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
In [105]: s.sort_values(inplace=True)
In [106]: s
Out[106]:
1 2
2 3
0 1
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
In [107]: s.min(), s.max()
Out[107]: (2, 1)
Note
Note the difference between assigning new categories and reordering the categories: the first
renames categories and therefore the individual values in the Series, but if the first
position was sorted last, the renamed value will still be sorted last. Reordering means that the
way values are sorted is different afterwards, but not that individual values in the
Series are changed.
Note
If the Categorical is not ordered, Series.min() and Series.max() will raise
TypeError. Numeric operations like +, -, *, / and operations based on them
(e.g. Series.median(), which would need to compute the mean between two values if the length
of an array is even) do not work and raise a TypeError.
Multi column sorting#
A categorical dtyped column will participate in a multi-column sort in a similar manner to other columns.
The ordering of the categorical is determined by the categories of that column.
In [108]: dfs = pd.DataFrame(
.....: {
.....: "A": pd.Categorical(
.....: list("bbeebbaa"),
.....: categories=["e", "a", "b"],
.....: ordered=True,
.....: ),
.....: "B": [1, 2, 1, 2, 2, 1, 2, 1],
.....: }
.....: )
.....:
In [109]: dfs.sort_values(by=["A", "B"])
Out[109]:
A B
2 e 1
3 e 2
7 a 1
6 a 2
0 b 1
5 b 1
1 b 2
4 b 2
Reordering the categories changes a future sort.
In [110]: dfs["A"] = dfs["A"].cat.reorder_categories(["a", "b", "e"])
In [111]: dfs.sort_values(by=["A", "B"])
Out[111]:
A B
7 a 1
6 a 2
0 b 1
5 b 1
1 b 2
4 b 2
2 e 1
3 e 2
Comparisons#
Comparing categorical data with other objects is possible in three cases:
Comparing equality (== and !=) to a list-like object (list, Series, array,
…) of the same length as the categorical data.
All comparisons (==, !=, >, >=, <, and <=) of categorical data to
another categorical Series, when ordered==True and the categories are the same.
All comparisons of a categorical data to a scalar.
All other comparisons, especially “non-equality” comparisons of two categoricals with different
categories or a categorical with any list-like object, will raise a TypeError.
Note
Any “non-equality” comparisons of categorical data with a Series, np.array, list or
categorical data with different categories or ordering will raise a TypeError because custom
categories ordering could be interpreted in two ways: one with taking into account the
ordering and one without.
In [112]: cat = pd.Series([1, 2, 3]).astype(CategoricalDtype([3, 2, 1], ordered=True))
In [113]: cat_base = pd.Series([2, 2, 2]).astype(CategoricalDtype([3, 2, 1], ordered=True))
In [114]: cat_base2 = pd.Series([2, 2, 2]).astype(CategoricalDtype(ordered=True))
In [115]: cat
Out[115]:
0 1
1 2
2 3
dtype: category
Categories (3, int64): [3 < 2 < 1]
In [116]: cat_base
Out[116]:
0 2
1 2
2 2
dtype: category
Categories (3, int64): [3 < 2 < 1]
In [117]: cat_base2
Out[117]:
0 2
1 2
2 2
dtype: category
Categories (1, int64): [2]
Comparing to a categorical with the same categories and ordering or to a scalar works:
In [118]: cat > cat_base
Out[118]:
0 True
1 False
2 False
dtype: bool
In [119]: cat > 2
Out[119]:
0 True
1 False
2 False
dtype: bool
Equality comparisons work with any list-like object of same length and scalars:
In [120]: cat == cat_base
Out[120]:
0 False
1 True
2 False
dtype: bool
In [121]: cat == np.array([1, 2, 3])
Out[121]:
0 True
1 True
2 True
dtype: bool
In [122]: cat == 2
Out[122]:
0 False
1 True
2 False
dtype: bool
This doesn’t work because the categories are not the same:
In [123]: try:
.....: cat > cat_base2
.....: except TypeError as e:
.....: print("TypeError:", str(e))
.....:
TypeError: Categoricals can only be compared if 'categories' are the same.
If you want to do a “non-equality” comparison of a categorical series with a list-like object
which is not categorical data, you need to be explicit and convert the categorical data back to
the original values:
In [124]: base = np.array([1, 2, 3])
In [125]: try:
.....: cat > base
.....: except TypeError as e:
.....: print("TypeError:", str(e))
.....:
TypeError: Cannot compare a Categorical for op __gt__ with type <class 'numpy.ndarray'>.
If you want to compare values, use 'np.asarray(cat) <op> other'.
In [126]: np.asarray(cat) > base
Out[126]: array([False, False, False])
When you compare two unordered categoricals with the same categories, the order is not considered:
In [127]: c1 = pd.Categorical(["a", "b"], categories=["a", "b"], ordered=False)
In [128]: c2 = pd.Categorical(["a", "b"], categories=["b", "a"], ordered=False)
In [129]: c1 == c2
Out[129]: array([ True, True])
Operations#
Apart from Series.min(), Series.max() and Series.mode(), the
following operations are possible with categorical data:
Series methods like Series.value_counts() will use all categories,
even if some categories are not present in the data:
In [130]: s = pd.Series(pd.Categorical(["a", "b", "c", "c"], categories=["c", "a", "b", "d"]))
In [131]: s.value_counts()
Out[131]:
c 2
a 1
b 1
d 0
dtype: int64
DataFrame methods like DataFrame.sum() also show “unused” categories.
In [132]: columns = pd.Categorical(
.....: ["One", "One", "Two"], categories=["One", "Two", "Three"], ordered=True
.....: )
.....:
In [133]: df = pd.DataFrame(
.....: data=[[1, 2, 3], [4, 5, 6]],
.....: columns=pd.MultiIndex.from_arrays([["A", "B", "B"], columns]),
.....: )
.....:
In [134]: df.groupby(axis=1, level=1).sum()
Out[134]:
One Two Three
0 3 3 0
1 9 6 0
Groupby will also show “unused” categories:
In [135]: cats = pd.Categorical(
.....: ["a", "b", "b", "b", "c", "c", "c"], categories=["a", "b", "c", "d"]
.....: )
.....:
In [136]: df = pd.DataFrame({"cats": cats, "values": [1, 2, 2, 2, 3, 4, 5]})
In [137]: df.groupby("cats").mean()
Out[137]:
values
cats
a 1.0
b 2.0
c 4.0
d NaN
In [138]: cats2 = pd.Categorical(["a", "a", "b", "b"], categories=["a", "b", "c"])
In [139]: df2 = pd.DataFrame(
.....: {
.....: "cats": cats2,
.....: "B": ["c", "d", "c", "d"],
.....: "values": [1, 2, 3, 4],
.....: }
.....: )
.....:
In [140]: df2.groupby(["cats", "B"]).mean()
Out[140]:
values
cats B
a c 1.0
d 2.0
b c 3.0
d 4.0
c c NaN
d NaN
Pivot tables:
In [141]: raw_cat = pd.Categorical(["a", "a", "b", "b"], categories=["a", "b", "c"])
In [142]: df = pd.DataFrame({"A": raw_cat, "B": ["c", "d", "c", "d"], "values": [1, 2, 3, 4]})
In [143]: pd.pivot_table(df, values="values", index=["A", "B"])
Out[143]:
values
A B
a c 1
d 2
b c 3
d 4
Data munging#
The optimized pandas data access methods .loc, .iloc, .at, and .iat
work as normal. The only difference is the return type (for getting) and
that only values already in categories can be assigned.
Getting#
If the slicing operation returns either a DataFrame or a column of type
Series, the category dtype is preserved.
In [144]: idx = pd.Index(["h", "i", "j", "k", "l", "m", "n"])
In [145]: cats = pd.Series(["a", "b", "b", "b", "c", "c", "c"], dtype="category", index=idx)
In [146]: values = [1, 2, 2, 2, 3, 4, 5]
In [147]: df = pd.DataFrame({"cats": cats, "values": values}, index=idx)
In [148]: df.iloc[2:4, :]
Out[148]:
cats values
j b 2
k b 2
In [149]: df.iloc[2:4, :].dtypes
Out[149]:
cats category
values int64
dtype: object
In [150]: df.loc["h":"j", "cats"]
Out[150]:
h a
i b
j b
Name: cats, dtype: category
Categories (3, object): ['a', 'b', 'c']
In [151]: df[df["cats"] == "b"]
Out[151]:
cats values
i b 2
j b 2
k b 2
An example where the category type is not preserved is taking one single
row: the resulting Series is of dtype object:
# get the complete "h" row as a Series
In [152]: df.loc["h", :]
Out[152]:
cats a
values 1
Name: h, dtype: object
Returning a single item from categorical data will also return the value, not a categorical
of length “1”.
In [153]: df.iat[0, 0]
Out[153]: 'a'
In [154]: df["cats"] = df["cats"].cat.rename_categories(["x", "y", "z"])
In [155]: df.at["h", "cats"] # returns a string
Out[155]: 'x'
Note
This is in contrast to R’s factor function, where factor(c(1,2,3))[1]
returns a single value factor.
To get a single value Series of type category, you pass in a list with
a single value:
In [156]: df.loc[["h"], "cats"]
Out[156]:
h x
Name: cats, dtype: category
Categories (3, object): ['x', 'y', 'z']
String and datetime accessors#
The accessors .dt and .str will work if the s.cat.categories are of
an appropriate type:
In [157]: str_s = pd.Series(list("aabb"))
In [158]: str_cat = str_s.astype("category")
In [159]: str_cat
Out[159]:
0 a
1 a
2 b
3 b
dtype: category
Categories (2, object): ['a', 'b']
In [160]: str_cat.str.contains("a")
Out[160]:
0 True
1 True
2 False
3 False
dtype: bool
In [161]: date_s = pd.Series(pd.date_range("1/1/2015", periods=5))
In [162]: date_cat = date_s.astype("category")
In [163]: date_cat
Out[163]:
0 2015-01-01
1 2015-01-02
2 2015-01-03
3 2015-01-04
4 2015-01-05
dtype: category
Categories (5, datetime64[ns]): [2015-01-01, 2015-01-02, 2015-01-03, 2015-01-04, 2015-01-05]
In [164]: date_cat.dt.day
Out[164]:
0 1
1 2
2 3
3 4
4 5
dtype: int64
Note
The returned Series (or DataFrame) is of the same type as if you used the
.str.<method> / .dt.<method> on a Series of that type (and not of
type category!).
That means, that the returned values from methods and properties on the accessors of a
Series and the returned values from methods and properties on the accessors of this
Series transformed to one of type category will be equal:
In [165]: ret_s = str_s.str.contains("a")
In [166]: ret_cat = str_cat.str.contains("a")
In [167]: ret_s.dtype == ret_cat.dtype
Out[167]: True
In [168]: ret_s == ret_cat
Out[168]:
0 True
1 True
2 True
3 True
dtype: bool
Note
The work is done on the categories and then a new Series is constructed. This has
some performance implications if you have a Series of type string, where lots of elements
are repeated (i.e. the number of unique elements in the Series is a lot smaller than the
length of the Series). In this case it can be faster to convert the original Series
to one of type category and use .str.<method> or .dt.<property> on that.
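A minimal sketch of that pattern (the strings and length are made up; the actual speed-up depends on the data): both calls give identical results, but the categorical version only evaluates .str.contains on the two unique categories.
>>> long_s = pd.Series(["apple", "banana"] * 100_000)
>>> cat_s = long_s.astype("category")
>>> long_s.str.contains("an").equals(cat_s.str.contains("an"))
True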
Setting#
Setting values in a categorical column (or Series) works as long as the
value is included in the categories:
In [169]: idx = pd.Index(["h", "i", "j", "k", "l", "m", "n"])
In [170]: cats = pd.Categorical(["a", "a", "a", "a", "a", "a", "a"], categories=["a", "b"])
In [171]: values = [1, 1, 1, 1, 1, 1, 1]
In [172]: df = pd.DataFrame({"cats": cats, "values": values}, index=idx)
In [173]: df.iloc[2:4, :] = [["b", 2], ["b", 2]]
In [174]: df
Out[174]:
cats values
h a 1
i a 1
j b 2
k b 2
l a 1
m a 1
n a 1
In [175]: try:
.....: df.iloc[2:4, :] = [["c", 3], ["c", 3]]
.....: except TypeError as e:
.....: print("TypeError:", str(e))
.....:
TypeError: Cannot setitem on a Categorical with a new category, set the categories first
Setting values by assigning categorical data will also check that the categories match:
In [176]: df.loc["j":"k", "cats"] = pd.Categorical(["a", "a"], categories=["a", "b"])
In [177]: df
Out[177]:
cats values
h a 1
i a 1
j a 2
k a 2
l a 1
m a 1
n a 1
In [178]: try:
.....: df.loc["j":"k", "cats"] = pd.Categorical(["b", "b"], categories=["a", "b", "c"])
.....: except TypeError as e:
.....: print("TypeError:", str(e))
.....:
TypeError: Cannot set a Categorical with another, without identical categories
Assigning a Categorical to parts of a column of other types will use the values:
In [179]: df = pd.DataFrame({"a": [1, 1, 1, 1, 1], "b": ["a", "a", "a", "a", "a"]})
In [180]: df.loc[1:2, "a"] = pd.Categorical(["b", "b"], categories=["a", "b"])
In [181]: df.loc[2:3, "b"] = pd.Categorical(["b", "b"], categories=["a", "b"])
In [182]: df
Out[182]:
a b
0 1 a
1 b a
2 b b
3 1 b
4 1 a
In [183]: df.dtypes
Out[183]:
a object
b object
dtype: object
Merging / concatenation#
By default, combining Series or DataFrames which contain the same
categories results in category dtype, otherwise results will depend on the
dtype of the underlying categories. Merges that result in non-categorical
dtypes will likely have higher memory usage. Use .astype or
union_categoricals to ensure category results.
In [184]: from pandas.api.types import union_categoricals
# same categories
In [185]: s1 = pd.Series(["a", "b"], dtype="category")
In [186]: s2 = pd.Series(["a", "b", "a"], dtype="category")
In [187]: pd.concat([s1, s2])
Out[187]:
0 a
1 b
0 a
1 b
2 a
dtype: category
Categories (2, object): ['a', 'b']
# different categories
In [188]: s3 = pd.Series(["b", "c"], dtype="category")
In [189]: pd.concat([s1, s3])
Out[189]:
0 a
1 b
0 b
1 c
dtype: object
# Output dtype is inferred based on categories values
In [190]: int_cats = pd.Series([1, 2], dtype="category")
In [191]: float_cats = pd.Series([3.0, 4.0], dtype="category")
In [192]: pd.concat([int_cats, float_cats])
Out[192]:
0 1.0
1 2.0
0 3.0
1 4.0
dtype: float64
In [193]: pd.concat([s1, s3]).astype("category")
Out[193]:
0 a
1 b
0 b
1 c
dtype: category
Categories (3, object): ['a', 'b', 'c']
In [194]: union_categoricals([s1.array, s3.array])
Out[194]:
['a', 'b', 'b', 'c']
Categories (3, object): ['a', 'b', 'c']
The following table summarizes the results of merging Categoricals:
arg1                 arg2                 identical    result
category             category             True         category
category (object)    category (object)    False        object (dtype is inferred)
category (int)       category (float)     False        float (dtype is inferred)
See also the section on merge dtypes for notes about
preserving merge dtypes and performance.
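A short sketch of the "identical categories" row of the table above, applied to a merge (the column names are made up): the key column keeps its category dtype.
>>> left = pd.DataFrame({"key": pd.Categorical(["a", "b"]), "lval": [1, 2]})
>>> right = pd.DataFrame({"key": pd.Categorical(["a", "b"]), "rval": [3, 4]})
>>> pd.merge(left, right, on="key")["key"].dtype
CategoricalDtype(categories=['a', 'b'], ordered=False)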
Unioning#
If you want to combine categoricals that do not necessarily have the same
categories, the union_categoricals() function will
combine a list-like of categoricals. The new categories will be the union of
the categories being combined.
In [195]: from pandas.api.types import union_categoricals
In [196]: a = pd.Categorical(["b", "c"])
In [197]: b = pd.Categorical(["a", "b"])
In [198]: union_categoricals([a, b])
Out[198]:
['b', 'c', 'a', 'b']
Categories (3, object): ['b', 'c', 'a']
By default, the resulting categories will be ordered as
they appear in the data. If you want the categories to
be lexsorted, use sort_categories=True argument.
In [199]: union_categoricals([a, b], sort_categories=True)
Out[199]:
['b', 'c', 'a', 'b']
Categories (3, object): ['a', 'b', 'c']
union_categoricals also works with the “easy” case of combining two
categoricals of the same categories and order information
(e.g. what you could also use append for).
In [200]: a = pd.Categorical(["a", "b"], ordered=True)
In [201]: b = pd.Categorical(["a", "b", "a"], ordered=True)
In [202]: union_categoricals([a, b])
Out[202]:
['a', 'b', 'a', 'b', 'a']
Categories (2, object): ['a' < 'b']
The below raises TypeError because the categories are ordered and not identical.
In [1]: a = pd.Categorical(["a", "b"], ordered=True)
In [2]: b = pd.Categorical(["a", "b", "c"], ordered=True)
In [3]: union_categoricals([a, b])
Out[3]:
TypeError: to union ordered Categoricals, all categories must be the same
Ordered categoricals with different categories or orderings can be combined by
using the ignore_order=True argument.
In [203]: a = pd.Categorical(["a", "b", "c"], ordered=True)
In [204]: b = pd.Categorical(["c", "b", "a"], ordered=True)
In [205]: union_categoricals([a, b], ignore_order=True)
Out[205]:
['a', 'b', 'c', 'c', 'b', 'a']
Categories (3, object): ['a', 'b', 'c']
union_categoricals() also works with a
CategoricalIndex, or Series containing categorical data, but note that
the resulting array will always be a plain Categorical:
In [206]: a = pd.Series(["b", "c"], dtype="category")
In [207]: b = pd.Series(["a", "b"], dtype="category")
In [208]: union_categoricals([a, b])
Out[208]:
['b', 'c', 'a', 'b']
Categories (3, object): ['b', 'c', 'a']
Note
union_categoricals may recode the integer codes for categories
when combining categoricals. This is likely what you want,
but if you are relying on the exact numbering of the categories, be
aware.
In [209]: c1 = pd.Categorical(["b", "c"])
In [210]: c2 = pd.Categorical(["a", "b"])
In [211]: c1
Out[211]:
['b', 'c']
Categories (2, object): ['b', 'c']
# "b" is coded to 0
In [212]: c1.codes
Out[212]: array([0, 1], dtype=int8)
In [213]: c2
Out[213]:
['a', 'b']
Categories (2, object): ['a', 'b']
# "b" is coded to 1
In [214]: c2.codes
Out[214]: array([0, 1], dtype=int8)
In [215]: c = union_categoricals([c1, c2])
In [216]: c
Out[216]:
['b', 'c', 'a', 'b']
Categories (3, object): ['b', 'c', 'a']
# "b" is coded to 0 throughout, same as c1, different from c2
In [217]: c.codes
Out[217]: array([0, 1, 2, 0], dtype=int8)
Getting data in/out#
You can write data that contains category dtypes to a HDFStore.
See here for an example and caveats.
It is also possible to write data to and read data from Stata format files.
See here for an example and caveats.
Writing to a CSV file will convert the data, effectively removing any information about the
categorical (categories and ordering). So if you read back the CSV file you have to convert the
relevant columns back to category and assign the right categories and categories ordering.
In [218]: import io
In [219]: s = pd.Series(pd.Categorical(["a", "b", "b", "a", "a", "d"]))
# rename the categories
In [220]: s = s.cat.rename_categories(["very good", "good", "bad"])
# reorder the categories and add missing categories
In [221]: s = s.cat.set_categories(["very bad", "bad", "medium", "good", "very good"])
In [222]: df = pd.DataFrame({"cats": s, "vals": [1, 2, 3, 4, 5, 6]})
In [223]: csv = io.StringIO()
In [224]: df.to_csv(csv)
In [225]: df2 = pd.read_csv(io.StringIO(csv.getvalue()))
In [226]: df2.dtypes
Out[226]:
Unnamed: 0 int64
cats object
vals int64
dtype: object
In [227]: df2["cats"]
Out[227]:
0 very good
1 good
2 good
3 very good
4 very good
5 bad
Name: cats, dtype: object
# Redo the category
In [228]: df2["cats"] = df2["cats"].astype("category")
In [229]: df2["cats"].cat.set_categories(
.....: ["very bad", "bad", "medium", "good", "very good"], inplace=True
.....: )
.....:
In [230]: df2.dtypes
Out[230]:
Unnamed: 0 int64
cats category
vals int64
dtype: object
In [231]: df2["cats"]
Out[231]:
0 very good
1 good
2 good
3 very good
4 very good
5 bad
Name: cats, dtype: category
Categories (5, object): ['very bad', 'bad', 'medium', 'good', 'very good']
The same holds for writing to a SQL database with to_sql.
Missing data#
pandas primarily uses the value np.nan to represent missing data. It is by
default not included in computations. See the Missing Data section.
Missing values should not be included in the Categorical’s categories,
only in the values.
Instead, it is understood that NaN is different, and is always a possibility.
When working with the Categorical’s codes, missing values will always have
a code of -1.
In [232]: s = pd.Series(["a", "b", np.nan, "a"], dtype="category")
# only two categories
In [233]: s
Out[233]:
0 a
1 b
2 NaN
3 a
dtype: category
Categories (2, object): ['a', 'b']
In [234]: s.cat.codes
Out[234]:
0 0
1 1
2 -1
3 0
dtype: int8
Methods for working with missing data, e.g. isna(), fillna(),
dropna(), all work normally:
In [235]: s = pd.Series(["a", "b", np.nan], dtype="category")
In [236]: s
Out[236]:
0 a
1 b
2 NaN
dtype: category
Categories (2, object): ['a', 'b']
In [237]: pd.isna(s)
Out[237]:
0 False
1 False
2 True
dtype: bool
In [238]: s.fillna("a")
Out[238]:
0 a
1 b
2 a
dtype: category
Categories (2, object): ['a', 'b']
Differences to R’s factor#
The following differences to R’s factor functions can be observed:
R’s levels are named categories.
R’s levels are always of type string, while categories in pandas can be of any dtype.
It’s not possible to specify labels at creation time. Use s.cat.rename_categories(new_labels)
afterwards.
In contrast to R’s factor function, using categorical data as the sole input to create a
new categorical series will not remove unused categories but create a new categorical series
which is equal to the passed in one!
R allows for missing values to be included in its levels (pandas’ categories). pandas
does not allow NaN categories, but missing values can still be in the values.
Gotchas#
Memory usage#
The memory usage of a Categorical is proportional to the number of categories plus the length of the data. In contrast,
an object dtype is a constant times the length of the data.
In [239]: s = pd.Series(["foo", "bar"] * 1000)
# object dtype
In [240]: s.nbytes
Out[240]: 16000
# category dtype
In [241]: s.astype("category").nbytes
Out[241]: 2016
Note
If the number of categories approaches the length of the data, the Categorical will use nearly the same or
more memory than an equivalent object dtype representation.
In [242]: s = pd.Series(["foo%04d" % i for i in range(2000)])
# object dtype
In [243]: s.nbytes
Out[243]: 16000
# category dtype
In [244]: s.astype("category").nbytes
Out[244]: 20000
Categorical is not a numpy array#
Currently, categorical data and the underlying Categorical are implemented as a Python
object and not as a low-level NumPy array dtype. This leads to some problems.
NumPy itself doesn’t know about the new dtype:
In [245]: try:
.....: np.dtype("category")
.....: except TypeError as e:
.....: print("TypeError:", str(e))
.....:
TypeError: data type 'category' not understood
In [246]: dtype = pd.Categorical(["a"]).dtype
In [247]: try:
.....: np.dtype(dtype)
.....: except TypeError as e:
.....: print("TypeError:", str(e))
.....:
TypeError: Cannot interpret 'CategoricalDtype(categories=['a'], ordered=False)' as a data type
Dtype comparisons work:
In [248]: dtype == np.str_
Out[248]: False
In [249]: np.str_ == dtype
Out[249]: False
To check if a Series contains Categorical data, use hasattr(s, 'cat'):
In [250]: hasattr(pd.Series(["a"], dtype="category"), "cat")
Out[250]: True
In [251]: hasattr(pd.Series(["a"]), "cat")
Out[251]: False
Using NumPy functions on a Series of type category should not work as Categoricals
are not numeric data (even in the case that .categories is numeric).
In [252]: s = pd.Series(pd.Categorical([1, 2, 3, 4]))
In [253]: try:
.....: np.sum(s)
.....: except TypeError as e:
.....: print("TypeError:", str(e))
.....:
TypeError: 'Categorical' with dtype category does not support reduction 'sum'
Note
If such a function works, please file a bug at https://github.com/pandas-dev/pandas!
dtype in apply#
pandas currently does not preserve the dtype in apply functions: if you apply along rows you get
a Series of object dtype (the same as getting a row: getting one element returns a
basic type), and applying along columns will also convert to object. NaN values are unaffected.
You can use fillna to handle missing values before applying a function.
In [254]: df = pd.DataFrame(
.....: {
.....: "a": [1, 2, 3, 4],
.....: "b": ["a", "b", "c", "d"],
.....: "cats": pd.Categorical([1, 2, 3, 2]),
.....: }
.....: )
.....:
In [255]: df.apply(lambda row: type(row["cats"]), axis=1)
Out[255]:
0 <class 'int'>
1 <class 'int'>
2 <class 'int'>
3 <class 'int'>
dtype: object
In [256]: df.apply(lambda col: col.dtype, axis=0)
Out[256]:
a int64
b object
cats category
dtype: object
Categorical index#
CategoricalIndex is a type of index that is useful for supporting
indexing with duplicates. This is a container around a Categorical
and allows efficient indexing and storage of an index with a large number of duplicated elements.
See the advanced indexing docs for a more detailed
explanation.
Setting the index will create a CategoricalIndex:
In [257]: cats = pd.Categorical([1, 2, 3, 4], categories=[4, 2, 3, 1])
In [258]: strings = ["a", "b", "c", "d"]
In [259]: values = [4, 2, 3, 1]
In [260]: df = pd.DataFrame({"strings": strings, "values": values}, index=cats)
In [261]: df.index
Out[261]: CategoricalIndex([1, 2, 3, 4], categories=[4, 2, 3, 1], ordered=False, dtype='category')
# This now sorts by the categories order
In [262]: df.sort_index()
Out[262]:
strings values
4 d 1
2 b 2
3 c 3
1 a 4
Side effects#
Constructing a Series from a Categorical will not copy the input
Categorical. This means that changes to the Series will in most cases
change the original Categorical:
In [263]: cat = pd.Categorical([1, 2, 3, 10], categories=[1, 2, 3, 4, 10])
In [264]: s = pd.Series(cat, name="cat")
In [265]: cat
Out[265]:
[1, 2, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]
In [266]: s.iloc[0:2] = 10
In [267]: cat
Out[267]:
[10, 10, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]
In [268]: df = pd.DataFrame(s)
In [269]: df["cat"].cat.categories = [1, 2, 3, 4, 5]
In [270]: cat
Out[270]:
[5, 5, 3, 5]
Categories (5, int64): [1, 2, 3, 4, 5]
Use copy=True to prevent such a behaviour or simply don’t reuse Categoricals:
In [271]: cat = pd.Categorical([1, 2, 3, 10], categories=[1, 2, 3, 4, 10])
In [272]: s = pd.Series(cat, name="cat", copy=True)
In [273]: cat
Out[273]:
[1, 2, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]
In [274]: s.iloc[0:2] = 10
In [275]: cat
Out[275]:
[1, 2, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]
Note
This also happens in some cases when you supply a NumPy array instead of a Categorical:
using an int array (e.g. np.array([1,2,3,4])) will exhibit the same behavior, while using
a string array (e.g. np.array(["a","b","c","a"])) will not.
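A minimal sketch of the int-array case (the values are made up; the behaviour shown assumes the default, non-copying construction of this pandas version):
>>> arr = np.array([1, 2, 3, 4])
>>> s = pd.Series(arr)
>>> s.iloc[0] = 10
>>> arr  # the original array was modified as well
array([10,  2,  3,  4])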
| user_guide/categorical.html |
pandas.read_fwf | `pandas.read_fwf`
Read a table of fixed-width formatted lines into DataFrame.
```
>>> pd.read_fwf('data.csv')
``` | pandas.read_fwf(filepath_or_buffer, *, colspecs='infer', widths=None, infer_nrows=100, **kwds)[source]#
Read a table of fixed-width formatted lines into DataFrame.
Also supports optionally iterating or breaking of the file
into chunks.
Additional help can be found in the online docs for IO Tools.
Parameters
filepath_or_bufferstr, path object, or file-like objectString, path object (implementing os.PathLike[str]), or file-like
object implementing a text read() function. The string could be a URL.
Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is
expected. A local file could be:
file://localhost/path/to/table.csv.
colspecslist of tuple (int, int) or ‘infer’. optionalA list of tuples giving the extents of the fixed-width
fields of each line as half-open intervals (i.e., [from, to[ ).
String value ‘infer’ can be used to instruct the parser to try
detecting the column specifications from the first 100 rows of
the data which are not being skipped via skiprows (default=’infer’).
widthslist of int, optionalA list of field widths which can be used instead of ‘colspecs’ if
the intervals are contiguous.
infer_nrowsint, default 100The number of rows to consider when letting the parser determine the
colspecs.
**kwdsoptionalOptional keyword arguments can be passed to TextFileReader.
Returns
DataFrame or TextFileReaderThe parsed file is returned as a two-dimensional
data structure with labeled axes.
See also
DataFrame.to_csvWrite DataFrame to a comma-separated values (csv) file.
read_csvRead a comma-separated values (csv) file into DataFrame.
Examples
>>> pd.read_fwf('data.csv')
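The example above assumes a local file; a self-contained sketch with an in-memory buffer and explicit colspecs (the data and column positions are made up):
>>> import io
>>> data = "id  name\n 1  alfa\n 2  beta"
>>> pd.read_fwf(io.StringIO(data), colspecs=[(0, 3), (4, 8)])
   id  name
0   1  alfa
1   2  beta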
| reference/api/pandas.read_fwf.html |
pandas.tseries.offsets.BusinessMonthEnd.name | `pandas.tseries.offsets.BusinessMonthEnd.name`
Return a string representing the base frequency.
```
>>> pd.offsets.Hour().name
'H'
``` | BusinessMonthEnd.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
| reference/api/pandas.tseries.offsets.BusinessMonthEnd.name.html |
pandas.tseries.offsets.FY5253.normalize | pandas.tseries.offsets.FY5253.normalize | FY5253.normalize#
| reference/api/pandas.tseries.offsets.FY5253.normalize.html |
pandas.Index.T | `pandas.Index.T`
Return the transpose, which is by definition self. | property Index.T[source]#
Return the transpose, which is by definition self.
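A minimal sketch (the labels are made up); since the transpose of a one-dimensional Index is the Index itself, .T returns the same object:
>>> idx = pd.Index(['a', 'b', 'c'])
>>> idx.T is idx
True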
| reference/api/pandas.Index.T.html |
pandas.api.extensions.ExtensionArray._from_sequence_of_strings | `pandas.api.extensions.ExtensionArray._from_sequence_of_strings`
Construct a new ExtensionArray from a sequence of strings. | classmethod ExtensionArray._from_sequence_of_strings(strings, *, dtype=None, copy=False)[source]#
Construct a new ExtensionArray from a sequence of strings.
Parameters
stringsSequenceEach element will be an instance of the scalar type for this
array, cls.dtype.type.
dtypedtype, optionalConstruct for this particular dtype. This should be a Dtype
compatible with the ExtensionArray.
copybool, default FalseIf True, copy the underlying data.
Returns
ExtensionArray
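There is no example here; this classmethod is mostly called indirectly, e.g. by the CSV parser when an extension dtype is requested. A hedged sketch (the data is made up):
>>> import io
>>> pd.read_csv(io.StringIO("a\n1\n2\n"), dtype="Int64")["a"]
0    1
1    2
Name: a, dtype: Int64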
| reference/api/pandas.api.extensions.ExtensionArray._from_sequence_of_strings.html |
pandas.tseries.offsets.FY5253Quarter.is_year_end | `pandas.tseries.offsets.FY5253Quarter.is_year_end`
Return boolean whether a timestamp occurs on the year end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
``` | FY5253Quarter.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
| reference/api/pandas.tseries.offsets.FY5253Quarter.is_year_end.html |
pandas.ExcelWriter.engine | `pandas.ExcelWriter.engine`
Name of engine. | property ExcelWriter.engine[source]#
Name of engine.
| reference/api/pandas.ExcelWriter.engine.html |
pandas.timedelta_range | `pandas.timedelta_range`
Return a fixed frequency TimedeltaIndex, with day as the default frequency.
```
>>> pd.timedelta_range(start='1 day', periods=4)
TimedeltaIndex(['1 days', '2 days', '3 days', '4 days'],
dtype='timedelta64[ns]', freq='D')
``` | pandas.timedelta_range(start=None, end=None, periods=None, freq=None, name=None, closed=None)[source]#
Return a fixed frequency TimedeltaIndex, with day as the default frequency.
Parameters
startstr or timedelta-like, default NoneLeft bound for generating timedeltas.
endstr or timedelta-like, default NoneRight bound for generating timedeltas.
periodsint, default NoneNumber of periods to generate.
freqstr or DateOffset, default ‘D’Frequency strings can have multiples, e.g. ‘5H’.
namestr, default NoneName of the resulting TimedeltaIndex.
closedstr, default NoneMake the interval closed with respect to the given frequency to
the ‘left’, ‘right’, or both sides (None).
Returns
TimedeltaIndex
Notes
Of the four parameters start, end, periods, and freq,
exactly three must be specified. If freq is omitted, the resulting
TimedeltaIndex will have periods linearly spaced elements between
start and end (closed on both sides).
To learn more about the frequency strings, please see this link.
Examples
>>> pd.timedelta_range(start='1 day', periods=4)
TimedeltaIndex(['1 days', '2 days', '3 days', '4 days'],
dtype='timedelta64[ns]', freq='D')
The closed parameter specifies which endpoint is included. The default
behavior is to include both endpoints.
>>> pd.timedelta_range(start='1 day', periods=4, closed='right')
TimedeltaIndex(['2 days', '3 days', '4 days'],
dtype='timedelta64[ns]', freq='D')
The freq parameter specifies the frequency of the TimedeltaIndex.
Only fixed frequencies can be passed; non-fixed frequencies such as
‘M’ (month end) will raise.
>>> pd.timedelta_range(start='1 day', end='2 days', freq='6H')
TimedeltaIndex(['1 days 00:00:00', '1 days 06:00:00', '1 days 12:00:00',
'1 days 18:00:00', '2 days 00:00:00'],
dtype='timedelta64[ns]', freq='6H')
Specify start, end, and periods; the frequency is generated
automatically (linearly spaced).
>>> pd.timedelta_range(start='1 day', end='5 days', periods=4)
TimedeltaIndex(['1 days 00:00:00', '2 days 08:00:00', '3 days 16:00:00',
'5 days 00:00:00'],
dtype='timedelta64[ns]', freq=None)
| reference/api/pandas.timedelta_range.html |
pandas.DataFrame.iloc | `pandas.DataFrame.iloc`
Purely integer-location based indexing for selection by position.
```
>>> mydict = [{'a': 1, 'b': 2, 'c': 3, 'd': 4},
... {'a': 100, 'b': 200, 'c': 300, 'd': 400},
... {'a': 1000, 'b': 2000, 'c': 3000, 'd': 4000 }]
>>> df = pd.DataFrame(mydict)
>>> df
a b c d
0 1 2 3 4
1 100 200 300 400
2 1000 2000 3000 4000
``` | property DataFrame.iloc[source]#
Purely integer-location based indexing for selection by position.
.iloc[] is primarily integer position based (from 0 to
length-1 of the axis), but may also be used with a boolean
array.
Allowed inputs are:
An integer, e.g. 5.
A list or array of integers, e.g. [4, 3, 0].
A slice object with ints, e.g. 1:7.
A boolean array.
A callable function with one argument (the calling Series or
DataFrame) and that returns valid output for indexing (one of the above).
This is useful in method chains, when you don’t have a reference to the
calling object, but would like to base your selection on some value.
A tuple of row and column indexes. The tuple elements consist of one of the
above inputs, e.g. (0, 1).
.iloc will raise IndexError if a requested indexer is
out-of-bounds, except slice indexers which allow out-of-bounds
indexing (this conforms with python/numpy slice semantics).
See more at Selection by Position.
See also
DataFrame.iatFast integer location scalar accessor.
DataFrame.locPurely label-location based indexer for selection by label.
Series.ilocPurely integer-location based indexing for selection by position.
Examples
>>> mydict = [{'a': 1, 'b': 2, 'c': 3, 'd': 4},
... {'a': 100, 'b': 200, 'c': 300, 'd': 400},
... {'a': 1000, 'b': 2000, 'c': 3000, 'd': 4000 }]
>>> df = pd.DataFrame(mydict)
>>> df
a b c d
0 1 2 3 4
1 100 200 300 400
2 1000 2000 3000 4000
Indexing just the rows
With a scalar integer.
>>> type(df.iloc[0])
<class 'pandas.core.series.Series'>
>>> df.iloc[0]
a 1
b 2
c 3
d 4
Name: 0, dtype: int64
With a list of integers.
>>> df.iloc[[0]]
a b c d
0 1 2 3 4
>>> type(df.iloc[[0]])
<class 'pandas.core.frame.DataFrame'>
>>> df.iloc[[0, 1]]
a b c d
0 1 2 3 4
1 100 200 300 400
With a slice object.
>>> df.iloc[:3]
a b c d
0 1 2 3 4
1 100 200 300 400
2 1000 2000 3000 4000
With a boolean mask the same length as the index.
>>> df.iloc[[True, False, True]]
a b c d
0 1 2 3 4
2 1000 2000 3000 4000
With a callable, useful in method chains. The x passed
to the lambda is the DataFrame being sliced. This selects
the rows whose index label is even.
>>> df.iloc[lambda x: x.index % 2 == 0]
a b c d
0 1 2 3 4
2 1000 2000 3000 4000
Indexing both axes
You can mix the indexer types for the index and columns. Use : to
select the entire axis.
With scalar integers.
>>> df.iloc[0, 1]
2
With lists of integers.
>>> df.iloc[[0, 2], [1, 3]]
b d
0 2 4
2 2000 4000
With slice objects.
>>> df.iloc[1:3, 0:3]
a b c
1 100 200 300
2 1000 2000 3000
With a boolean array whose length matches the columns.
>>> df.iloc[:, [True, False, True, False]]
a c
0 1 3
1 100 300
2 1000 3000
With a callable function that expects the Series or DataFrame.
>>> df.iloc[:, lambda df: [0, 2]]
a c
0 1 3
1 100 300
2 1000 3000
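The out-of-bounds rules described above are not illustrated; a short sketch with the same df (slice indexers may run past the end, scalar indexers may not):
>>> df.iloc[1:10]
      a     b     c     d
1   100   200   300   400
2  1000  2000  3000  4000
>>> df.iloc[10]
Traceback (most recent call last):
...
IndexError: single positional indexer is out-of-bounds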
| reference/api/pandas.DataFrame.iloc.html |
pandas.tseries.offsets.BYearEnd.isAnchored | pandas.tseries.offsets.BYearEnd.isAnchored | BYearEnd.isAnchored()#
| reference/api/pandas.tseries.offsets.BYearEnd.isAnchored.html |
pandas.tseries.offsets.MonthBegin.is_month_end | `pandas.tseries.offsets.MonthBegin.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
``` | MonthBegin.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
| reference/api/pandas.tseries.offsets.MonthBegin.is_month_end.html |