title | summary | context | path
---|---|---|---|
pandas.io.formats.style.Styler.from_custom_template | `pandas.io.formats.style.Styler.from_custom_template`
Factory function for creating a subclass of Styler. | classmethod Styler.from_custom_template(searchpath, html_table=None, html_style=None)[source]#
Factory function for creating a subclass of Styler.
Uses custom templates and Jinja environment.
Changed in version 1.3.0.
Parameters
searchpath : str or list
    Path or paths of directories containing the templates.
html_table : str
    Name of your custom template to replace the html_table template.
    New in version 1.3.0.
html_style : str
    Name of your custom template to replace the html_style template.
    New in version 1.3.0.
Returns
MyStyler : subclass of Styler
    Has the correct env, template_html, template_html_table and
    template_html_style class attributes set.
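A minimal usage sketch (the search path "templates" and the template file "my_table.tpl" below are hypothetical placeholders; the file must exist on disk for Jinja to load it):
```
>>> from pandas.io.formats.style import Styler
>>> MyStyler = Styler.from_custom_template("templates", html_table="my_table.tpl")
>>> df = pd.DataFrame({"A": [1, 2]})
>>> html = MyStyler(df).to_html()
```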
| reference/api/pandas.io.formats.style.Styler.from_custom_template.html |
pandas.plotting.boxplot | `pandas.plotting.boxplot`
Make a box plot from DataFrame columns.
```
>>> np.random.seed(1234)
>>> df = pd.DataFrame(np.random.randn(10, 4),
... columns=['Col1', 'Col2', 'Col3', 'Col4'])
>>> boxplot = df.boxplot(column=['Col1', 'Col2', 'Col3'])
``` | pandas.plotting.boxplot(data, column=None, by=None, ax=None, fontsize=None, rot=0, grid=True, figsize=None, layout=None, return_type=None, **kwargs)[source]#
Make a box plot from DataFrame columns.
Make a box-and-whisker plot from DataFrame columns, optionally grouped
by some other columns. A box plot is a method for graphically depicting
groups of numerical data through their quartiles.
The box extends from the Q1 to Q3 quartile values of the data,
with a line at the median (Q2). The whiskers extend from the edges
of box to show the range of the data. By default, they extend no more than
1.5 * IQR (IQR = Q3 - Q1) from the edges of the box, ending at the farthest
data point within that interval. Outliers are plotted as separate dots.
For further details see
Wikipedia’s entry for boxplot.
Parameters
column : str or list of str, optional
    Column name or list of names, or vector.
    Can be any valid input to pandas.DataFrame.groupby().
by : str or array-like, optional
    Column in the DataFrame to pandas.DataFrame.groupby().
    One box-plot will be done per value of columns in by.
ax : object of class matplotlib.axes.Axes, optional
    The matplotlib axes to be used by boxplot.
fontsize : float or str
    Tick label font size in points or as a string (e.g., large).
rot : int or float, default 0
    The rotation angle of labels (in degrees)
    with respect to the screen coordinate system.
grid : bool, default True
    Setting this to True will show the grid.
figsize : tuple (width, height) in inches
    The size of the figure to create in matplotlib.
layout : tuple (rows, columns), optional
    For example, (3, 5) will display the subplots
    using 3 rows and 5 columns, starting from the top-left.
return_type : {'axes', 'dict', 'both'} or None, default 'axes'
    The kind of object to return. The default is axes.
    'axes' returns the matplotlib axes the boxplot is drawn on.
    'dict' returns a dictionary whose values are the matplotlib
    Lines of the boxplot.
    'both' returns a namedtuple with the axes and dict.
    When grouping with by, a Series mapping columns to
    return_type is returned.
    If return_type is None, a NumPy array
    of axes with the same shape as layout is returned.
**kwargs
    All other plotting keyword arguments to be passed to
    matplotlib.pyplot.boxplot().
Returns
result : See Notes.
See also
Series.plot.hist : Make a histogram.
matplotlib.pyplot.boxplot : Matplotlib equivalent plot.
Notes
The return type depends on the return_type parameter:
‘axes’ : object of class matplotlib.axes.Axes
‘dict’ : dict of matplotlib.lines.Line2D objects
‘both’ : a namedtuple with structure (ax, lines)
For data grouped with by, return a Series of the above or a numpy
array:
Series
array (for return_type = None)
Use return_type='dict' when you want to tweak the appearance
of the lines after plotting. In this case a dict containing the Lines
making up the boxes, caps, fliers, medians, and whiskers is returned.
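For instance, a minimal sketch of restyling the median lines after plotting (matplotlib required; column names match the examples below):
```
>>> np.random.seed(1234)
>>> df = pd.DataFrame(np.random.randn(10, 2), columns=['Col1', 'Col2'])
>>> lines = df.boxplot(column=['Col1', 'Col2'], return_type='dict')
>>> for median in lines['medians']:
...     median.set_color('red')
```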
Examples
Boxplots can be created for every column in the dataframe
by df.boxplot() or indicating the columns to be used:
>>> np.random.seed(1234)
>>> df = pd.DataFrame(np.random.randn(10, 4),
... columns=['Col1', 'Col2', 'Col3', 'Col4'])
>>> boxplot = df.boxplot(column=['Col1', 'Col2', 'Col3'])
Boxplots of variables distributions grouped by the values of a third
variable can be created using the option by. For instance:
>>> df = pd.DataFrame(np.random.randn(10, 2),
... columns=['Col1', 'Col2'])
>>> df['X'] = pd.Series(['A', 'A', 'A', 'A', 'A',
... 'B', 'B', 'B', 'B', 'B'])
>>> boxplot = df.boxplot(by='X')
A list of strings (i.e. ['X', 'Y']) can be passed to boxplot
in order to group the data by combination of the variables in the x-axis:
>>> df = pd.DataFrame(np.random.randn(10, 3),
... columns=['Col1', 'Col2', 'Col3'])
>>> df['X'] = pd.Series(['A', 'A', 'A', 'A', 'A',
... 'B', 'B', 'B', 'B', 'B'])
>>> df['Y'] = pd.Series(['A', 'B', 'A', 'B', 'A',
... 'B', 'A', 'B', 'A', 'B'])
>>> boxplot = df.boxplot(column=['Col1', 'Col2'], by=['X', 'Y'])
The layout of boxplot can be adjusted giving a tuple to layout:
>>> boxplot = df.boxplot(column=['Col1', 'Col2'], by='X',
... layout=(2, 1))
Additional formatting can be done to the boxplot, like suppressing the grid
(grid=False), rotating the labels in the x-axis (i.e. rot=45)
or changing the fontsize (i.e. fontsize=15):
>>> boxplot = df.boxplot(grid=False, rot=45, fontsize=15)
The parameter return_type can be used to select the type of element
returned by boxplot. When return_type='axes' is selected,
the matplotlib axes on which the boxplot is drawn are returned:
>>> boxplot = df.boxplot(column=['Col1', 'Col2'], return_type='axes')
>>> type(boxplot)
<class 'matplotlib.axes._subplots.AxesSubplot'>
When grouping with by, a Series mapping columns to return_type
is returned:
>>> boxplot = df.boxplot(column=['Col1', 'Col2'], by='X',
... return_type='axes')
>>> type(boxplot)
<class 'pandas.core.series.Series'>
If return_type is None, a NumPy array of axes with the same shape
as layout is returned:
>>> boxplot = df.boxplot(column=['Col1', 'Col2'], by='X',
... return_type=None)
>>> type(boxplot)
<class 'numpy.ndarray'>
| reference/api/pandas.plotting.boxplot.html |
pandas.io.formats.style.Styler.render | `pandas.io.formats.style.Styler.render`
Render the Styler including all applied styles to HTML.
Deprecated since version 1.4.0. | Styler.render(sparse_index=None, sparse_columns=None, **kwargs)[source]#
Render the Styler including all applied styles to HTML.
Deprecated since version 1.4.0.
Parameters
sparse_index : bool, optional
    Whether to sparsify the display of a hierarchical index. Setting to False
    will display each explicit level element in a hierarchical key for each row.
    Defaults to pandas.options.styler.sparse.index value.
sparse_columns : bool, optional
    Whether to sparsify the display of a hierarchical index. Setting to False
    will display each explicit level element in a hierarchical key for each row.
    Defaults to pandas.options.styler.sparse.columns value.
**kwargs
    Any additional keyword arguments are passed
    through to self.template.render.
    This is useful when you need to provide
    additional variables for a custom template.
Returns
rendered : str
    The rendered HTML.
Notes
This method is deprecated in favour of Styler.to_html.
Styler objects have defined the _repr_html_ method
which automatically calls self.to_html() when it’s the
last item in a Notebook cell.
When calling Styler.render() directly, wrap the result in
IPython.display.HTML to view the rendered HTML in the notebook.
Pandas uses the following keys in render. Arguments passed
in **kwargs take precedence, so think carefully if you want
to override them:
head
cellstyle
body
uuid
table_styles
caption
table_attributes
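A minimal sketch of the recommended replacement (Styler.to_html has been available since pandas 1.3):
```
>>> df = pd.DataFrame({"A": [1, 2]})
>>> html = df.style.to_html()  # use instead of the deprecated Styler.render()
```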
| reference/api/pandas.io.formats.style.Styler.render.html |
pandas.DataFrame.nlargest | `pandas.DataFrame.nlargest`
Return the first n rows ordered by columns in descending order.
```
>>> df = pd.DataFrame({'population': [59000000, 65000000, 434000,
... 434000, 434000, 337000, 11300,
... 11300, 11300],
... 'GDP': [1937894, 2583560 , 12011, 4520, 12128,
... 17036, 182, 38, 311],
... 'alpha-2': ["IT", "FR", "MT", "MV", "BN",
... "IS", "NR", "TV", "AI"]},
... index=["Italy", "France", "Malta",
... "Maldives", "Brunei", "Iceland",
... "Nauru", "Tuvalu", "Anguilla"])
>>> df
population GDP alpha-2
Italy 59000000 1937894 IT
France 65000000 2583560 FR
Malta 434000 12011 MT
Maldives 434000 4520 MV
Brunei 434000 12128 BN
Iceland 337000 17036 IS
Nauru 11300 182 NR
Tuvalu 11300 38 TV
Anguilla 11300 311 AI
``` | DataFrame.nlargest(n, columns, keep='first')[source]#
Return the first n rows ordered by columns in descending order.
Return the first n rows with the largest values in columns, in
descending order. The columns that are not specified are returned as
well, but not used for ordering.
This method is equivalent to
df.sort_values(columns, ascending=False).head(n), but more
performant.
Parameters
n : int
    Number of rows to return.
columns : label or list of labels
    Column label(s) to order by.
keep : {'first', 'last', 'all'}, default 'first'
    Where there are duplicate values:
    first : prioritize the first occurrence(s)
    last : prioritize the last occurrence(s)
    all : do not drop any duplicates, even if it means
    selecting more than n items.
Returns
DataFrame
    The first n rows ordered by the given columns in descending
    order.
See also
DataFrame.nsmallest : Return the first n rows ordered by columns in ascending order.
DataFrame.sort_values : Sort DataFrame by the values.
DataFrame.head : Return the first n rows without re-ordering.
Notes
This function cannot be used with all column types. For example, when
specifying columns with object or category dtypes, TypeError is
raised.
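A small illustrative sketch of that restriction (data values and column names are arbitrary):
```
>>> df_obj = pd.DataFrame({"name": ["x", "y"], "value": [10, 20]})
>>> df_obj.nlargest(1, "value")   # numeric column: works
>>> df_obj.nlargest(1, "name")    # object-dtype column: raises TypeError
```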
Examples
>>> df = pd.DataFrame({'population': [59000000, 65000000, 434000,
... 434000, 434000, 337000, 11300,
... 11300, 11300],
... 'GDP': [1937894, 2583560 , 12011, 4520, 12128,
... 17036, 182, 38, 311],
... 'alpha-2': ["IT", "FR", "MT", "MV", "BN",
... "IS", "NR", "TV", "AI"]},
... index=["Italy", "France", "Malta",
... "Maldives", "Brunei", "Iceland",
... "Nauru", "Tuvalu", "Anguilla"])
>>> df
population GDP alpha-2
Italy 59000000 1937894 IT
France 65000000 2583560 FR
Malta 434000 12011 MT
Maldives 434000 4520 MV
Brunei 434000 12128 BN
Iceland 337000 17036 IS
Nauru 11300 182 NR
Tuvalu 11300 38 TV
Anguilla 11300 311 AI
In the following example, we will use nlargest to select the three
rows having the largest values in column “population”.
>>> df.nlargest(3, 'population')
population GDP alpha-2
France 65000000 2583560 FR
Italy 59000000 1937894 IT
Malta 434000 12011 MT
When using keep='last', ties are resolved in reverse order:
>>> df.nlargest(3, 'population', keep='last')
population GDP alpha-2
France 65000000 2583560 FR
Italy 59000000 1937894 IT
Brunei 434000 12128 BN
When using keep='all', all duplicate items are maintained:
>>> df.nlargest(3, 'population', keep='all')
population GDP alpha-2
France 65000000 2583560 FR
Italy 59000000 1937894 IT
Malta 434000 12011 MT
Maldives 434000 4520 MV
Brunei 434000 12128 BN
To order by the largest values in column “population” and then “GDP”,
we can specify multiple columns like in the next example.
>>> df.nlargest(3, ['population', 'GDP'])
population GDP alpha-2
France 65000000 2583560 FR
Italy 59000000 1937894 IT
Brunei 434000 12128 BN
| reference/api/pandas.DataFrame.nlargest.html |
pandas.io.formats.style.Styler.text_gradient | `pandas.io.formats.style.Styler.text_gradient`
Color the text in a gradient style.
```
>>> df = pd.DataFrame(columns=["City", "Temp (c)", "Rain (mm)", "Wind (m/s)"],
... data=[["Stockholm", 21.6, 5.0, 3.2],
... ["Oslo", 22.4, 13.3, 3.1],
... ["Copenhagen", 24.5, 0.0, 6.7]])
``` | Styler.text_gradient(cmap='PuBu', low=0, high=0, axis=0, subset=None, vmin=None, vmax=None, gmap=None)[source]#
Color the text in a gradient style.
The text color is determined according
to the data in each column, row or frame, or by a given
gradient map. Requires matplotlib.
Parameters
cmap : str or colormap
    Matplotlib colormap.
low : float
    Compress the color range at the low end. This is a multiple of the data
    range to extend below the minimum; good values usually in [0, 1],
    defaults to 0.
high : float
    Compress the color range at the high end. This is a multiple of the data
    range to extend above the maximum; good values usually in [0, 1],
    defaults to 0.
axis : {0, 1, "index", "columns", None}, default 0
    Apply to each column (axis=0 or 'index'), to each row
    (axis=1 or 'columns'), or to the entire DataFrame at once
    with axis=None.
subset : label, array-like, IndexSlice, optional
    A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input
    or single key, to DataFrame.loc[:, <subset>] where the columns are
    prioritised, to limit data to before applying the function.
vmin : float, optional
    Minimum data value that corresponds to colormap minimum value.
    If not specified the minimum value of the data (or gmap) will be used.
    New in version 1.0.0.
vmax : float, optional
    Maximum data value that corresponds to colormap maximum value.
    If not specified the maximum value of the data (or gmap) will be used.
    New in version 1.0.0.
gmap : array-like, optional
    Gradient map for determining the text colors. If not supplied
    will use the underlying data from rows, columns or frame. If given as an
    ndarray or list-like must be an identical shape to the underlying data
    considering axis and subset. If given as DataFrame or Series must
    have same index and column labels considering axis and subset.
    If supplied, vmin and vmax should be given relative to this
    gradient map.
    New in version 1.3.0.
Returns
self : Styler
See also
Styler.background_gradient : Color the background in a gradient style.
Notes
When using low and high the range
of the gradient, given by the data if gmap is not given or by gmap,
is extended at the low end effectively by
map.min - low * map.range and at the high end by
map.max + high * map.range before the colors are normalized and determined.
If combining with vmin and vmax the map.min, map.max and
map.range are replaced by values according to the values derived from
vmin and vmax.
This method will preselect numeric columns and ignore non-numeric columns
unless a gmap is supplied in which case no preselection occurs.
Examples
>>> df = pd.DataFrame(columns=["City", "Temp (c)", "Rain (mm)", "Wind (m/s)"],
... data=[["Stockholm", 21.6, 5.0, 3.2],
... ["Oslo", 22.4, 13.3, 3.1],
... ["Copenhagen", 24.5, 0.0, 6.7]])
Shading the values column-wise, with axis=0, preselecting numeric columns
>>> df.style.text_gradient(axis=0)
Shading all values collectively using axis=None
>>> df.style.text_gradient(axis=None)
Compress the color map from the both low and high ends
>>> df.style.text_gradient(axis=None, low=0.75, high=1.0)
Manually setting vmin and vmax gradient thresholds
>>> df.style.text_gradient(axis=None, vmin=6.7, vmax=21.6)
Setting a gmap and applying to all columns with another cmap
>>> df.style.text_gradient(axis=0, gmap=df['Temp (c)'], cmap='YlOrRd')
...
Setting the gradient map for a dataframe (i.e. axis=None), we need to
explicitly state subset to match the gmap shape
>>> gmap = np.array([[1,2,3], [2,3,4], [3,4,5]])
>>> df.style.text_gradient(axis=None, gmap=gmap,
... cmap='YlOrRd', subset=['Temp (c)', 'Rain (mm)', 'Wind (m/s)']
... )
| reference/api/pandas.io.formats.style.Styler.text_gradient.html |
pandas.read_sql | `pandas.read_sql`
Read SQL query or database table into a DataFrame.
```
>>> from sqlite3 import connect
>>> conn = connect(':memory:')
>>> df = pd.DataFrame(data=[[0, '10/11/12'], [1, '12/11/10']],
... columns=['int_column', 'date_column'])
>>> df.to_sql('test_data', conn)
2
``` | pandas.read_sql(sql, con, index_col=None, coerce_float=True, params=None, parse_dates=None, columns=None, chunksize=None)[source]#
Read SQL query or database table into a DataFrame.
This function is a convenience wrapper around read_sql_table and
read_sql_query (for backward compatibility). It will delegate
to the specific function depending on the provided input. A SQL query
will be routed to read_sql_query, while a database table name will
be routed to read_sql_table. Note that the delegated function might
have more specific notes about their functionality not listed here.
Parameters
sql : str or SQLAlchemy Selectable (select or text object)
    SQL query to be executed or a table name.
con : SQLAlchemy connectable, str, or sqlite3 connection
    Using SQLAlchemy makes it possible to use any DB supported by that
    library. If a DBAPI2 object, only sqlite3 is supported. The user is responsible
    for engine disposal and connection closure for the SQLAlchemy connectable; str
    connections are closed automatically. See
    here.
index_col : str or list of str, optional, default: None
    Column(s) to set as index(MultiIndex).
coerce_float : bool, default True
    Attempts to convert values of non-string, non-numeric objects (like
    decimal.Decimal) to floating point, useful for SQL result sets.
params : list, tuple or dict, optional, default: None
    List of parameters to pass to execute method. The syntax used
    to pass parameters is database driver dependent. Check your
    database driver documentation for which of the five syntax styles,
    described in PEP 249's paramstyle, is supported.
    E.g. for psycopg2, uses %(name)s so use params={'name' : 'value'}.
parse_dates : list or dict, default: None
    List of column names to parse as dates.
    Dict of {column_name: format string} where format string is
    strftime compatible in case of parsing string times, or is one of
    (D, s, ns, ms, us) in case of parsing integer timestamps.
    Dict of {column_name: arg dict}, where the arg dict corresponds
    to the keyword arguments of pandas.to_datetime()
    Especially useful with databases without native Datetime support,
    such as SQLite.
columns : list, default: None
    List of column names to select from SQL table (only used when reading
    a table).
chunksize : int, default None
    If specified, return an iterator where chunksize is the
    number of rows to include in each chunk.
Returns
DataFrame or Iterator[DataFrame]
See also
read_sql_table : Read SQL database table into a DataFrame.
read_sql_query : Read SQL query into a DataFrame.
Examples
Read data from SQL via either a SQL query or a SQL tablename.
When using a SQLite database only SQL queries are accepted;
providing only the SQL tablename will result in an error.
>>> from sqlite3 import connect
>>> conn = connect(':memory:')
>>> df = pd.DataFrame(data=[[0, '10/11/12'], [1, '12/11/10']],
... columns=['int_column', 'date_column'])
>>> df.to_sql('test_data', conn)
2
>>> pd.read_sql('SELECT int_column, date_column FROM test_data', conn)
int_column date_column
0 0 10/11/12
1 1 12/11/10
>>> pd.read_sql('test_data', 'postgres:///db_name')
Apply date parsing to columns through the parse_dates argument
>>> pd.read_sql('SELECT int_column, date_column FROM test_data',
... conn,
... parse_dates=["date_column"])
int_column date_column
0 0 2012-10-11
1 1 2010-12-11
The parse_dates argument calls pd.to_datetime on the provided columns.
Custom argument values for applying pd.to_datetime on a column are specified
via a dictionary format:
1. Ignore errors while parsing the values of “date_column”
>>> pd.read_sql('SELECT int_column, date_column FROM test_data',
... conn,
... parse_dates={"date_column": {"errors": "ignore"}})
int_column date_column
0 0 2012-10-11
1 1 2010-12-11
2. Apply a dayfirst date parsing order on the values of “date_column”
>>> pd.read_sql('SELECT int_column, date_column FROM test_data',
... conn,
... parse_dates={"date_column": {"dayfirst": True}})
int_column date_column
0 0 2012-11-10
1 1 2010-11-12
3. Apply custom formatting when date parsing the values of “date_column”
>>> pd.read_sql('SELECT int_column, date_column FROM test_data',
... conn,
... parse_dates={"date_column": {"format": "%d/%m/%y"}})
int_column date_column
0 0 2012-11-10
1 1 2010-11-12
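As a further sketch, chunksize streams the result instead of loading everything at once (the chunk size of 1 here is arbitrary):
```
>>> for chunk in pd.read_sql('SELECT int_column, date_column FROM test_data',
...                          conn, chunksize=1):
...     print(chunk.shape)
(1, 2)
(1, 2)
```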
| reference/api/pandas.read_sql.html |
pandas.tseries.offsets.WeekOfMonth.is_on_offset | `pandas.tseries.offsets.WeekOfMonth.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
``` | WeekOfMonth.is_on_offset()#
Return boolean whether a timestamp intersects with this frequency.
Parameters
dt : datetime.datetime
    Timestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
| reference/api/pandas.tseries.offsets.WeekOfMonth.is_on_offset.html |
pandas.tseries.offsets.SemiMonthBegin.base | `pandas.tseries.offsets.SemiMonthBegin.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal. | SemiMonthBegin.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
| reference/api/pandas.tseries.offsets.SemiMonthBegin.base.html |
pandas.Index.size | `pandas.Index.size`
Return the number of elements in the underlying data. | property Index.size[source]#
Return the number of elements in the underlying data.
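A minimal sketch:
```
>>> idx = pd.Index(['a', 'b', 'c'])
>>> idx.size
3
```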
| reference/api/pandas.Index.size.html |
pandas.tseries.offsets.QuarterBegin.base | `pandas.tseries.offsets.QuarterBegin.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal. | QuarterBegin.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
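A minimal sketch (the n and startingMonth values are arbitrary):
```
>>> offset = pd.offsets.QuarterBegin(n=3, startingMonth=2)
>>> offset.n, offset.base.n, offset.base.startingMonth
(3, 1, 2)
```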
| reference/api/pandas.tseries.offsets.QuarterBegin.base.html |
pandas.tseries.offsets.BYearEnd.is_month_end | `pandas.tseries.offsets.BYearEnd.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
``` | BYearEnd.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
| reference/api/pandas.tseries.offsets.BYearEnd.is_month_end.html |
pandas.CategoricalIndex.add_categories | `pandas.CategoricalIndex.add_categories`
Add new categories.
```
>>> c = pd.Categorical(['c', 'b', 'c'])
>>> c
['c', 'b', 'c']
Categories (2, object): ['b', 'c']
``` | CategoricalIndex.add_categories(*args, **kwargs)[source]#
Add new categories.
new_categories will be included at the last/highest place in the
categories and will be unused directly after this call.
Parameters
new_categories : category or list-like of category
    The new categories to be included.
inplace : bool, default False
    Whether or not to add the categories inplace or return a copy of
    this categorical with added categories.
    Deprecated since version 1.3.0.
Returns
cat : Categorical or None
    Categorical with new categories added or None if inplace=True.
Raises
ValueError
    If the new categories include old categories or do not validate as
    categories.
See also
rename_categories : Rename categories.
reorder_categories : Reorder categories.
remove_categories : Remove the specified categories.
remove_unused_categories : Remove categories which are not used.
set_categories : Set the categories to the specified ones.
Examples
>>> c = pd.Categorical(['c', 'b', 'c'])
>>> c
['c', 'b', 'c']
Categories (2, object): ['b', 'c']
>>> c.add_categories(['d', 'a'])
['c', 'b', 'c']
Categories (4, object): ['b', 'c', 'd', 'a']
| reference/api/pandas.CategoricalIndex.add_categories.html |
pandas.tseries.offsets.BQuarterEnd.normalize | pandas.tseries.offsets.BQuarterEnd.normalize | BQuarterEnd.normalize#
| reference/api/pandas.tseries.offsets.BQuarterEnd.normalize.html |
pandas.tseries.offsets.CustomBusinessMonthEnd.is_year_end | `pandas.tseries.offsets.CustomBusinessMonthEnd.is_year_end`
Return boolean whether a timestamp occurs on the year end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
``` | CustomBusinessMonthEnd.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
| reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.is_year_end.html |
pandas.tseries.offsets.Nano.isAnchored | pandas.tseries.offsets.Nano.isAnchored | Nano.isAnchored()#
| reference/api/pandas.tseries.offsets.Nano.isAnchored.html |
pandas.tseries.offsets.Micro.is_month_start | `pandas.tseries.offsets.Micro.is_month_start`
Return boolean whether a timestamp occurs on the month start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
``` | Micro.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
| reference/api/pandas.tseries.offsets.Micro.is_month_start.html |
pandas.testing.assert_frame_equal | `pandas.testing.assert_frame_equal`
Check that left and right DataFrame are equal.
This function is intended to compare two DataFrames and output any
differences. It is mostly intended for use in unit tests.
Additional parameters allow varying the strictness of the
equality checks performed.
```
>>> from pandas.testing import assert_frame_equal
>>> df1 = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
>>> df2 = pd.DataFrame({'a': [1, 2], 'b': [3.0, 4.0]})
``` | pandas.testing.assert_frame_equal(left, right, check_dtype=True, check_index_type='equiv', check_column_type='equiv', check_frame_type=True, check_less_precise=_NoDefault.no_default, check_names=True, by_blocks=False, check_exact=False, check_datetimelike_compat=False, check_categorical=True, check_like=False, check_freq=True, check_flags=True, rtol=1e-05, atol=1e-08, obj='DataFrame')[source]#
Check that left and right DataFrame are equal.
This function is intended to compare two DataFrames and output any
differences. It is mostly intended for use in unit tests.
Additional parameters allow varying the strictness of the
equality checks performed.
Parameters
left : DataFrame
    First DataFrame to compare.
right : DataFrame
    Second DataFrame to compare.
check_dtype : bool, default True
    Whether to check the DataFrame dtype is identical.
check_index_type : bool or {'equiv'}, default 'equiv'
    Whether to check the Index class, dtype and inferred_type
    are identical.
check_column_type : bool or {'equiv'}, default 'equiv'
    Whether to check the columns class, dtype and inferred_type
    are identical. Is passed as the exact argument of
    assert_index_equal().
check_frame_type : bool, default True
    Whether to check the DataFrame class is identical.
check_less_precise : bool or int, default False
    Specify comparison precision. Only used when check_exact is False.
    5 digits (False) or 3 digits (True) after decimal points are compared.
    If int, then specify the digits to compare.
    When comparing two numbers, if the first number has magnitude less
    than 1e-5, we compare the two numbers directly and check whether
    they are equivalent within the specified precision. Otherwise, we
    compare the ratio of the second number to the first number and
    check whether it is equivalent to 1 within the specified precision.
    Deprecated since version 1.1.0: Use rtol and atol instead to define
    relative/absolute tolerance, respectively. Similar to math.isclose().
check_names : bool, default True
    Whether to check that the names attribute for both the index
    and column attributes of the DataFrame is identical.
by_blocks : bool, default False
    Specify how to compare internal data. If False, compare by columns.
    If True, compare by blocks.
check_exact : bool, default False
    Whether to compare number exactly.
check_datetimelike_compat : bool, default False
    Compare datetime-like which is comparable ignoring dtype.
check_categorical : bool, default True
    Whether to compare internal Categorical exactly.
check_like : bool, default False
    If True, ignore the order of index & columns.
    Note: index labels must match their respective rows
    (same as in columns) - same labels must be with the same data.
check_freq : bool, default True
    Whether to check the freq attribute on a DatetimeIndex or TimedeltaIndex.
    New in version 1.1.0.
check_flags : bool, default True
    Whether to check the flags attribute.
rtol : float, default 1e-5
    Relative tolerance. Only used when check_exact is False.
    New in version 1.1.0.
atol : float, default 1e-8
    Absolute tolerance. Only used when check_exact is False.
    New in version 1.1.0.
obj : str, default 'DataFrame'
    Specify object name being compared, internally used to show appropriate
    assertion message.
See also
assert_series_equal : Equivalent method for asserting Series equality.
DataFrame.equals : Check DataFrame equality.
Examples
This example shows comparing two DataFrames that are equal
but with columns of differing dtypes.
>>> from pandas.testing import assert_frame_equal
>>> df1 = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
>>> df2 = pd.DataFrame({'a': [1, 2], 'b': [3.0, 4.0]})
df1 equals itself.
>>> assert_frame_equal(df1, df1)
df1 differs from df2 as column ‘b’ is of a different type.
>>> assert_frame_equal(df1, df2)
Traceback (most recent call last):
...
AssertionError: Attributes of DataFrame.iloc[:, 1] (column name="b") are different
Attribute "dtype" are different
[left]: int64
[right]: float64
Ignore differing dtypes in columns with check_dtype.
>>> assert_frame_equal(df1, df2, check_dtype=False)
| reference/api/pandas.testing.assert_frame_equal.html |
pandas.Series.str.isdecimal | `pandas.Series.str.isdecimal`
Check whether all characters in each string are decimal.
```
>>> s1 = pd.Series(['one', 'one1', '1', ''])
``` | Series.str.isdecimal()[source]#
Check whether all characters in each string are decimal.
This is equivalent to running the Python string method
str.isdecimal() for each element of the Series/Index. If a string
has zero characters, False is returned for that check.
Returns
Series or Index of bool
    Series or Index of boolean values with the same length as the original
    Series/Index.
See also
Series.str.isalpha : Check whether all characters are alphabetic.
Series.str.isnumeric : Check whether all characters are numeric.
Series.str.isalnum : Check whether all characters are alphanumeric.
Series.str.isdigit : Check whether all characters are digits.
Series.str.isdecimal : Check whether all characters are decimal.
Series.str.isspace : Check whether all characters are whitespace.
Series.str.islower : Check whether all characters are lowercase.
Series.str.isupper : Check whether all characters are uppercase.
Series.str.istitle : Check whether all characters are titlecase.
Examples
Checks for Alphabetic and Numeric Characters
>>> s1 = pd.Series(['one', 'one1', '1', ''])
>>> s1.str.isalpha()
0 True
1 False
2 False
3 False
dtype: bool
>>> s1.str.isnumeric()
0 False
1 False
2 True
3 False
dtype: bool
>>> s1.str.isalnum()
0 True
1 True
2 True
3 False
dtype: bool
Note that checks against characters mixed with any additional punctuation
or whitespace will evaluate to false for an alphanumeric check.
>>> s2 = pd.Series(['A B', '1.5', '3,000'])
>>> s2.str.isalnum()
0 False
1 False
2 False
dtype: bool
More Detailed Checks for Numeric Characters
There are several different but overlapping sets of numeric characters that
can be checked for.
>>> s3 = pd.Series(['23', '³', '⅕', ''])
The s3.str.isdecimal method checks for characters used to form numbers
in base 10.
>>> s3.str.isdecimal()
0 True
1 False
2 False
3 False
dtype: bool
The s.str.isdigit method is the same as s3.str.isdecimal but also
includes special digits, like superscripted and subscripted digits in
unicode.
>>> s3.str.isdigit()
0 True
1 True
2 False
3 False
dtype: bool
The s.str.isnumeric method is the same as s3.str.isdigit but also
includes other characters that can represent quantities such as unicode
fractions.
>>> s3.str.isnumeric()
0 True
1 True
2 True
3 False
dtype: bool
Checks for Whitespace
>>> s4 = pd.Series([' ', '\t\r\n ', ''])
>>> s4.str.isspace()
0 True
1 True
2 False
dtype: bool
Checks for Character Case
>>> s5 = pd.Series(['leopard', 'Golden Eagle', 'SNAKE', ''])
>>> s5.str.islower()
0 True
1 False
2 False
3 False
dtype: bool
>>> s5.str.isupper()
0 False
1 False
2 True
3 False
dtype: bool
The s5.str.istitle method checks for whether all words are in title
case (whether only the first letter of each word is capitalized). Words are
assumed to be any sequence of non-numeric characters separated by
whitespace characters.
>>> s5.str.istitle()
0 False
1 True
2 False
3 False
dtype: bool
| reference/api/pandas.Series.str.isdecimal.html |
pandas.Series.mod | `pandas.Series.mod`
Return Modulo of series and other, element-wise (binary operator mod).
```
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.mod(b, fill_value=0)
a 0.0
b NaN
c NaN
d 0.0
e NaN
dtype: float64
``` | Series.mod(other, level=None, fill_value=None, axis=0)[source]#
Return Modulo of series and other, element-wise (binary operator mod).
Equivalent to series % other, but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
other : Series or scalar value
level : int or name
    Broadcast across a level, matching Index values on the
    passed MultiIndex level.
fill_value : None or float value, default None (NaN)
    Fill existing missing (NaN) values, and any new element needed for
    successful Series alignment, with this value before computation.
    If data in both corresponding Series locations is missing
    the result of filling (at that location) will be missing.
axis : {0 or 'index'}
    Unused. Parameter needed for compatibility with DataFrame.
Returns
Series
    The result of the operation.
See also
Series.rmod : Reverse of the Modulo operator, see Python documentation for more details.
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.mod(b, fill_value=0)
a 0.0
b NaN
c NaN
d 0.0
e NaN
dtype: float64
| reference/api/pandas.Series.mod.html |
GroupBy | GroupBy | GroupBy objects are returned by groupby calls: pandas.DataFrame.groupby(), pandas.Series.groupby(), etc.
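A minimal sketch of obtaining such an object (the data values are arbitrary):
```
>>> df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1, 2, 3]})
>>> grouped = df.groupby("key")
>>> type(grouped)
<class 'pandas.core.groupby.generic.DataFrameGroupBy'>
```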
Indexing, iteration#
GroupBy.__iter__()
Groupby iterator.
GroupBy.groups
Dict {group name -> group labels}.
GroupBy.indices
Dict {group name -> group indices}.
GroupBy.get_group(name[, obj])
Construct DataFrame from group with provided name.
Grouper(*args, **kwargs)
A Grouper allows the user to specify a groupby instruction for an object.
Function application#
GroupBy.apply(func, *args, **kwargs)
Apply function func group-wise and combine the results together.
GroupBy.agg(func, *args, **kwargs)
SeriesGroupBy.aggregate([func, engine, ...])
Aggregate using one or more operations over the specified axis.
DataFrameGroupBy.aggregate([func, engine, ...])
Aggregate using one or more operations over the specified axis.
SeriesGroupBy.transform(func, *args[, ...])
Call function producing a same-indexed Series on each group.
DataFrameGroupBy.transform(func, *args[, ...])
Call function producing a same-indexed DataFrame on each group.
GroupBy.pipe(func, *args, **kwargs)
Apply a func with arguments to this GroupBy object and return its result.
Computations / descriptive stats#
GroupBy.all([skipna])
Return True if all values in the group are truthful, else False.
GroupBy.any([skipna])
Return True if any value in the group is truthful, else False.
GroupBy.bfill([limit])
Backward fill the values.
GroupBy.backfill([limit])
(DEPRECATED) Backward fill the values.
GroupBy.count()
Compute count of group, excluding missing values.
GroupBy.cumcount([ascending])
Number each item in each group from 0 to the length of that group - 1.
GroupBy.cummax([axis, numeric_only])
Cumulative max for each group.
GroupBy.cummin([axis, numeric_only])
Cumulative min for each group.
GroupBy.cumprod([axis])
Cumulative product for each group.
GroupBy.cumsum([axis])
Cumulative sum for each group.
GroupBy.ffill([limit])
Forward fill the values.
GroupBy.first([numeric_only, min_count])
Compute the first non-null entry of each column.
GroupBy.head([n])
Return first n rows of each group.
GroupBy.last([numeric_only, min_count])
Compute the last non-null entry of each column.
GroupBy.max([numeric_only, min_count, ...])
Compute max of group values.
GroupBy.mean([numeric_only, engine, ...])
Compute mean of groups, excluding missing values.
GroupBy.median([numeric_only])
Compute median of groups, excluding missing values.
GroupBy.min([numeric_only, min_count, ...])
Compute min of group values.
GroupBy.ngroup([ascending])
Number each group from 0 to the number of groups - 1.
GroupBy.nth
Take the nth row from each group if n is an int, otherwise a subset of rows.
GroupBy.ohlc()
Compute open, high, low and close values of a group, excluding missing values.
GroupBy.pad([limit])
(DEPRECATED) Forward fill the values.
GroupBy.prod([numeric_only, min_count])
Compute prod of group values.
GroupBy.rank([method, ascending, na_option, ...])
Provide the rank of values within each group.
GroupBy.pct_change([periods, fill_method, ...])
Calculate pct_change of each value to previous entry in group.
GroupBy.size()
Compute group sizes.
GroupBy.sem([ddof, numeric_only])
Compute standard error of the mean of groups, excluding missing values.
GroupBy.std([ddof, engine, engine_kwargs, ...])
Compute standard deviation of groups, excluding missing values.
GroupBy.sum([numeric_only, min_count, ...])
Compute sum of group values.
GroupBy.var([ddof, engine, engine_kwargs, ...])
Compute variance of groups, excluding missing values.
GroupBy.tail([n])
Return last n rows of each group.
The following methods are available in both SeriesGroupBy and
DataFrameGroupBy objects, but may differ slightly, usually in that
the DataFrameGroupBy version permits the specification of an
axis argument, and often an argument indicating whether to restrict
application to columns of a specific data type.
DataFrameGroupBy.all([skipna])
Return True if all values in the group are truthful, else False.
DataFrameGroupBy.any([skipna])
Return True if any value in the group is truthful, else False.
DataFrameGroupBy.backfill([limit])
(DEPRECATED) Backward fill the values.
DataFrameGroupBy.bfill([limit])
Backward fill the values.
DataFrameGroupBy.corr
Compute pairwise correlation of columns, excluding NA/null values.
DataFrameGroupBy.count()
Compute count of group, excluding missing values.
DataFrameGroupBy.cov
Compute pairwise covariance of columns, excluding NA/null values.
DataFrameGroupBy.cumcount([ascending])
Number each item in each group from 0 to the length of that group - 1.
DataFrameGroupBy.cummax([axis, numeric_only])
Cumulative max for each group.
DataFrameGroupBy.cummin([axis, numeric_only])
Cumulative min for each group.
DataFrameGroupBy.cumprod([axis])
Cumulative product for each group.
DataFrameGroupBy.cumsum([axis])
Cumulative sum for each group.
DataFrameGroupBy.describe(**kwargs)
Generate descriptive statistics.
DataFrameGroupBy.diff([periods, axis])
First discrete difference of element.
DataFrameGroupBy.ffill([limit])
Forward fill the values.
DataFrameGroupBy.fillna
Fill NA/NaN values using the specified method.
DataFrameGroupBy.filter(func[, dropna])
Return a copy of a DataFrame excluding filtered elements.
DataFrameGroupBy.hist
Make a histogram of the DataFrame's columns.
DataFrameGroupBy.idxmax([axis, skipna, ...])
Return index of first occurrence of maximum over requested axis.
DataFrameGroupBy.idxmin([axis, skipna, ...])
Return index of first occurrence of minimum over requested axis.
DataFrameGroupBy.mad
(DEPRECATED) Return the mean absolute deviation of the values over the requested axis.
DataFrameGroupBy.nunique([dropna])
Return DataFrame with counts of unique elements in each position.
DataFrameGroupBy.pad([limit])
(DEPRECATED) Forward fill the values.
DataFrameGroupBy.pct_change([periods, ...])
Calculate pct_change of each value to previous entry in group.
DataFrameGroupBy.plot
Class implementing the .plot attribute for groupby objects.
DataFrameGroupBy.quantile([q, ...])
Return group values at the given quantile, a la numpy.percentile.
DataFrameGroupBy.rank([method, ascending, ...])
Provide the rank of values within each group.
DataFrameGroupBy.resample(rule, *args, **kwargs)
Provide resampling when using a TimeGrouper.
DataFrameGroupBy.sample([n, frac, replace, ...])
Return a random sample of items from each group.
DataFrameGroupBy.shift([periods, freq, ...])
Shift each group by periods observations.
DataFrameGroupBy.size()
Compute group sizes.
DataFrameGroupBy.skew
Return unbiased skew over requested axis.
DataFrameGroupBy.take
Return the elements in the given positional indices along an axis.
DataFrameGroupBy.tshift
(DEPRECATED) Shift the time index, using the index's frequency if available.
DataFrameGroupBy.value_counts([subset, ...])
Return a Series or DataFrame containing counts of unique rows.
The following methods are available only for SeriesGroupBy objects.
SeriesGroupBy.hist
Draw histogram of the input series using matplotlib.
SeriesGroupBy.nlargest([n, keep])
Return the largest n elements.
SeriesGroupBy.nsmallest([n, keep])
Return the smallest n elements.
SeriesGroupBy.unique
Return unique values of Series object.
SeriesGroupBy.is_monotonic_increasing
Return boolean if values in the object are monotonically increasing.
SeriesGroupBy.is_monotonic_decreasing
Return boolean if values in the object are monotonically decreasing.
The following methods are available only for DataFrameGroupBy objects.
DataFrameGroupBy.corrwith
Compute pairwise correlation.
DataFrameGroupBy.boxplot([subplots, column, ...])
Make box plots from DataFrameGroupBy data.
| reference/groupby.html |
pandas.Interval.closed_left | `pandas.Interval.closed_left`
Check if the interval is closed on the left side. | Interval.closed_left#
Check if the interval is closed on the left side.
For the meaning of closed and open see Interval.
Returns
bool
    True if the Interval is closed on the left-side.
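A minimal sketch:
```
>>> iv = pd.Interval(0, 5, closed='left')
>>> iv.closed_left
True
>>> pd.Interval(0, 5, closed='right').closed_left
False
```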
| reference/api/pandas.Interval.closed_left.html |
Contributing to pandas | Contributing to pandas | Table of contents:
Where to start?
Bug reports and enhancement requests
Working with the code
Version control, Git, and GitHub
Getting started with Git
Forking
Creating a branch
Contributing your changes to pandas
Committing your code
Pushing your changes
Review your code
Finally, make the pull request
Updating your pull request
Autofixing formatting errors
Delete your merged branch (optional)
Tips for a successful pull request
Where to start?#
All contributions, bug reports, bug fixes, documentation improvements,
enhancements, and ideas are welcome.
If you are brand new to pandas or open-source development, we recommend going
through the GitHub “issues” tab
to find issues that interest you. There are a number of issues listed under Docs
and good first issue
where you could start out. Once you’ve found an interesting issue, you can
return here to get your development environment setup.
When you start working on an issue, it’s a good idea to assign the issue to yourself,
so nobody else duplicates the work on it. GitHub restricts assigning issues to maintainers
of the project only. In most projects, and until recently in pandas, contributors added a
comment letting others know they are working on an issue. While this is ok, you need to
check each issue individually, and it’s not possible to find the unassigned ones.
For this reason, we implemented a workaround consisting of adding a comment with the exact
text take. When you do it, a GitHub action will automatically assign you the issue
(this will take seconds, and may require refreshing the page to see it).
By doing this, it’s possible to filter the list of issues and find only the unassigned ones.
So, a good way to find an issue to start contributing to pandas is to check the list of
unassigned good first issues
and assign yourself one you like by writing a comment with the exact text take.
If for whatever reason you are not able to continue working on the issue, please try to
unassign it, so other people know it's available again. You can check the list of
assigned issues, since people may not be working on them anymore. If you want to work on one
that is assigned, feel free to kindly ask the current assignee if you can take it
(please allow at least a week of inactivity before considering work on the issue discontinued).
We have several contributor community communication channels, which you are
welcome to join, and ask questions as you figure things out. Among them are regular meetings for
new contributors, dev meetings, a dev mailing list, and a slack for the contributor community.
All pandas contributors are welcome to these spaces, where they can connect with each other. Even
maintainers who have been with us for a long time felt just like you when they started out, and
are happy to welcome you and support you as you get to know how we work, and where things are.
Take a look at the next sections to learn more.
Bug reports and enhancement requests#
Bug reports are an important part of making pandas more stable. Having a complete bug report
will allow others to reproduce the bug and provide insight into fixing. See
this stackoverflow article and
this blogpost
for tips on writing a good bug report.
Trying the bug-producing code out on the main branch is often a worthwhile exercise
to confirm the bug still exists. It is also worth searching existing bug reports and pull requests
to see if the issue has already been reported and/or fixed.
Bug reports must:
Include a short, self-contained Python snippet reproducing the problem.
You can format the code nicely by using GitHub Flavored Markdown:
```python
>>> from pandas import DataFrame
>>> df = DataFrame(...)
...
```
Include the full version string of pandas and its dependencies. You can use the built-in function:
>>> import pandas as pd
>>> pd.show_versions()
Explain why the current behavior is wrong/not desired and what you expect instead.
The issue will then show up to the pandas community and be open to comments/ideas from others.
Working with the code#
Now that you have an issue you want to fix, enhancement to add, or documentation to improve,
you need to learn how to work with GitHub and the pandas code base.
Version control, Git, and GitHub#
To the new user, working with Git is one of the more daunting aspects of contributing to pandas.
It can very quickly become overwhelming, but sticking to the guidelines below will help keep the process
straightforward and mostly trouble free. As always, if you are having difficulties please
feel free to ask for help.
The code is hosted on GitHub. To
contribute you will need to sign up for a free GitHub account. We use Git for
version control to allow many people to work together on the project.
Some great resources for learning Git:
the GitHub help pages.
the NumPy documentation.
Matthew Brett’s Pydagogue.
Getting started with Git#
GitHub has instructions for installing git,
setting up your SSH key, and configuring git. All these steps need to be completed before
you can work seamlessly between your local repository and GitHub.
Forking#
You will need your own fork to work on the code. Go to the pandas project
page and hit the Fork button. You will
want to clone your fork to your machine:
git clone https://github.com/your-user-name/pandas.git pandas-yourname
cd pandas-yourname
git remote add upstream https://github.com/pandas-dev/pandas.git
This creates the directory pandas-yourname and connects your repository to
the upstream (main project) pandas repository.
Note that performing a shallow clone (with --depth=N, for some N greater
than or equal to 1) might break some tests and features such as pd.show_versions(),
as the version number cannot be computed anymore.
Creating a branch#
You want your main branch to reflect only production-ready code, so create a
feature branch for making your changes. For example:
git branch shiny-new-feature
git checkout shiny-new-feature
The above can be simplified to:
git checkout -b shiny-new-feature
This changes your working directory to the shiny-new-feature branch. Keep any
changes in this branch specific to one bug or feature so it is clear
what the branch brings to pandas. You can have many shiny-new-features
and switch in between them using the git checkout command.
When creating this branch, make sure your main branch is up to date with
the latest upstream main version. To update your local main branch, you
can do:
git checkout main
git pull upstream main --ff-only
When you want to update the feature branch with changes in main after
you created the branch, check the section on
updating a PR.
Contributing your changes to pandas#
Committing your code#
Keep style fixes to a separate commit to make your pull request more readable.
Once you’ve made changes, you can see them by typing:
git status
If you have created a new file, it is not being tracked by git. Add it by typing:
git add path/to/file-to-be-added.py
Doing ‘git status’ again should give something like:
# On branch shiny-new-feature
#
# modified: /relative/path/to/file-you-added.py
#
Finally, commit your changes to your local repository with an explanatory message. pandas
uses a convention for commit message prefixes and layout. Here are
some common prefixes along with general guidelines for when to use them:
ENH: Enhancement, new functionality
BUG: Bug fix
DOC: Additions/updates to documentation
TST: Additions/updates to tests
BLD: Updates to the build process/scripts
PERF: Performance improvement
TYP: Type annotations
CLN: Code cleanup
The following defines how a commit message should be structured. Please reference the
relevant GitHub issues in your commit message using GH1234 or #1234. Either style
is fine, but the former is generally preferred:
a subject line with < 80 chars.
One blank line.
Optionally, a commit message body.
Now you can commit your changes in your local repository:
git commit -m "your commit message"
Pushing your changes#
When you want your changes to appear publicly on your GitHub page, push your
forked feature branch’s commits:
git push origin shiny-new-feature
Here origin is the default name given to your remote repository on GitHub.
You can see the remote repositories:
git remote -v
If you added the upstream repository as described above you will see something
like:
origin git@github.com:yourname/pandas.git (fetch)
origin git@github.com:yourname/pandas.git (push)
upstream git://github.com/pandas-dev/pandas.git (fetch)
upstream git://github.com/pandas-dev/pandas.git (push)
Now your code is on GitHub, but it is not yet a part of the pandas project. For that to
happen, a pull request needs to be submitted on GitHub.
Review your code#
When you’re ready to ask for a code review, file a pull request. Before you do, once
again make sure that you have followed all the guidelines outlined in this document
regarding code style, tests, performance tests, and documentation. You should also
double check your branch changes against the branch it was based on:
Navigate to your repository on GitHub – https://github.com/your-user-name/pandas
Click on Branches
Click on the Compare button for your feature branch
Select the base and compare branches, if necessary. This will be main and
shiny-new-feature, respectively.
Finally, make the pull request#
If everything looks good, you are ready to make a pull request. A pull request is how
code from a local repository becomes available to the GitHub community and can be looked
at and eventually merged into the main version. This pull request and its associated
changes will eventually be committed to the main branch and available in the next
release. To submit a pull request:
Navigate to your repository on GitHub
Click on the Pull Request button
You can then click on Commits and Files Changed to make sure everything looks
okay one last time
Write a description of your changes in the Preview Discussion tab
Click Send Pull Request.
This request then goes to the repository maintainers, and they will review
the code.
Updating your pull request#
Based on the review you get on your pull request, you will probably need to make
some changes to the code. In that case, you can make them in your branch,
add a new commit to that branch, push it to GitHub, and the pull request will be
automatically updated. Pushing them to GitHub again is done by:
git push origin shiny-new-feature
This will automatically update your pull request with the latest code and restart the
Continuous Integration tests.
Another reason you might need to update your pull request is to solve conflicts
with changes that have been merged into the main branch since you opened your
pull request.
To do this, you need to “merge upstream main” in your branch:
git checkout shiny-new-feature
git fetch upstream
git merge upstream/main
If there are no conflicts (or they could be fixed automatically), a file with a
default commit message will open, and you can simply save and quit this file.
If there are merge conflicts, you need to solve those conflicts. See for
example at https://help.github.com/articles/resolving-a-merge-conflict-using-the-command-line/
for an explanation on how to do this.
Once the conflicts are merged and the files where the conflicts were solved are
added, you can run git commit to save those fixes.
If you have uncommitted changes at the moment you want to update the branch with
main, you will need to stash them prior to updating (see the
stash docs).
This will effectively store your changes and they can be reapplied after updating.
After the feature branch has been updated locally, you can now update your pull
request by pushing to the branch on GitHub:
git push origin shiny-new-feature
Autofixing formatting errors#
We use several styling checks (e.g. black, flake8, isort) which are run after
you make a pull request.
To automatically fix formatting errors on each commit you make, you can
set up pre-commit yourself. First, create a Python environment and then set up pre-commit.
Delete your merged branch (optional)#
Once your feature branch is accepted into upstream, you’ll probably want to get rid of
the branch. First, merge upstream main into your branch so git knows it is safe to
delete your branch:
git fetch upstream
git checkout main
git merge upstream/main
Then you can do:
git branch -d shiny-new-feature
Make sure you use a lower-case -d, or else git won’t warn you if your feature
branch has not actually been merged.
The branch will still exist on GitHub, so to delete it there do:
git push origin --delete shiny-new-feature
Tips for a successful pull request#
If you have made it to the Review your code phase, one of the core contributors may
take a look. Please note however that a handful of people are responsible for reviewing
all of the contributions, which can often lead to bottlenecks.
To improve the chances of your pull request being reviewed, you should:
Reference an open issue for non-trivial changes to clarify the PR’s purpose
Ensure you have appropriate tests. These should be the first part of any PR
Keep your pull requests as simple as possible. Larger PRs take longer to review
Ensure that CI is in a green state. Reviewers may not even look otherwise
Keep Updating your pull request, either by request or every few days
| development/contributing.html |
pandas.api.types.is_bool_dtype | `pandas.api.types.is_bool_dtype`
Check whether the provided array or dtype is of a boolean dtype.
```
>>> is_bool_dtype(str)
False
>>> is_bool_dtype(int)
False
>>> is_bool_dtype(bool)
True
>>> is_bool_dtype(np.bool_)
True
>>> is_bool_dtype(np.array(['a', 'b']))
False
>>> is_bool_dtype(pd.Series([1, 2]))
False
>>> is_bool_dtype(np.array([True, False]))
True
>>> is_bool_dtype(pd.Categorical([True, False]))
True
>>> is_bool_dtype(pd.arrays.SparseArray([True, False]))
True
``` | pandas.api.types.is_bool_dtype(arr_or_dtype)[source]#
Check whether the provided array or dtype is of a boolean dtype.
Parameters
arr_or_dtype : array-like or dtype
    The array or dtype to check.
Returns
boolean
    Whether or not the array or dtype is of a boolean dtype.
Notes
An ExtensionArray is considered boolean when the _is_boolean
attribute is set to True.
Examples
>>> is_bool_dtype(str)
False
>>> is_bool_dtype(int)
False
>>> is_bool_dtype(bool)
True
>>> is_bool_dtype(np.bool_)
True
>>> is_bool_dtype(np.array(['a', 'b']))
False
>>> is_bool_dtype(pd.Series([1, 2]))
False
>>> is_bool_dtype(np.array([True, False]))
True
>>> is_bool_dtype(pd.Categorical([True, False]))
True
>>> is_bool_dtype(pd.arrays.SparseArray([True, False]))
True
| reference/api/pandas.api.types.is_bool_dtype.html |
pandas.Series.keys | `pandas.Series.keys`
Return alias for index. | Series.keys()[source]#
Return alias for index.
Returns
Index
    Index of the Series.
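A minimal sketch:
```
>>> s = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
>>> s.keys()
Index(['a', 'b', 'c'], dtype='object')
```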
| reference/api/pandas.Series.keys.html |
pandas.Index.hasnans | `pandas.Index.hasnans`
Return True if there are any NaNs. | Index.hasnans[source]#
Return True if there are any NaNs.
Enables various performance speedups.
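A minimal sketch:
```
>>> pd.Index([1.0, 2.0, np.nan]).hasnans
True
>>> pd.Index([1.0, 2.0, 3.0]).hasnans
False
```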
| reference/api/pandas.Index.hasnans.html |
pandas.tseries.offsets.BQuarterEnd.base | `pandas.tseries.offsets.BQuarterEnd.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal. | BQuarterEnd.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
| reference/api/pandas.tseries.offsets.BQuarterEnd.base.html |
pandas.core.resample.Resampler.sum | `pandas.core.resample.Resampler.sum`
Compute sum of group values.
Include only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data. | Resampler.sum(numeric_only=_NoDefault.no_default, min_count=0, *args, **kwargs)[source]#
Compute sum of group values.
Parameters
numeric_only : bool, default True
    Include only float, int, boolean columns. If None, will attempt to use
    everything, then use only numeric data.
min_count : int, default 0
    The required number of valid values to perform the operation. If fewer
    than min_count non-NA values are present the result will be NA.
Returns
Series or DataFrame
    Computed sum of values within each group.
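A minimal sketch (the time range and frequencies are arbitrary):
```
>>> idx = pd.date_range('2023-01-01', periods=4, freq='H')
>>> s = pd.Series([1, 2, 3, 4], index=idx)
>>> s.resample('2H').sum()
2023-01-01 00:00:00    3
2023-01-01 02:00:00    7
Freq: 2H, dtype: int64
```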
| reference/api/pandas.core.resample.Resampler.sum.html |
pandas.to_numeric | `pandas.to_numeric`
Convert argument to a numeric type.
```
>>> s = pd.Series(['1.0', '2', -3])
>>> pd.to_numeric(s)
0 1.0
1 2.0
2 -3.0
dtype: float64
>>> pd.to_numeric(s, downcast='float')
0 1.0
1 2.0
2 -3.0
dtype: float32
>>> pd.to_numeric(s, downcast='signed')
0 1
1 2
2 -3
dtype: int8
>>> s = pd.Series(['apple', '1.0', '2', -3])
>>> pd.to_numeric(s, errors='ignore')
0 apple
1 1.0
2 2
3 -3
dtype: object
>>> pd.to_numeric(s, errors='coerce')
0 NaN
1 1.0
2 2.0
3 -3.0
dtype: float64
``` | pandas.to_numeric(arg, errors='raise', downcast=None)[source]#
Convert argument to a numeric type.
The default return dtype is float64 or int64
depending on the data supplied. Use the downcast parameter
to obtain other dtypes.
Please note that precision loss may occur if really large numbers
are passed in. Due to the internal limitations of ndarray, if
numbers smaller than -9223372036854775808 (np.iinfo(np.int64).min)
or larger than 18446744073709551615 (np.iinfo(np.uint64).max) are
passed in, it is very likely they will be converted to float so that
they can be stored in an ndarray. These warnings apply similarly to
Series since it internally leverages ndarray.
Parameters
arg : scalar, list, tuple, 1-d array, or Series
    Argument to be converted.
errors : {'ignore', 'raise', 'coerce'}, default 'raise'
    If 'raise', then invalid parsing will raise an exception.
    If 'coerce', then invalid parsing will be set as NaN.
    If 'ignore', then invalid parsing will return the input.
downcast : str, default None
    Can be 'integer', 'signed', 'unsigned', or 'float'.
If not None, and if the data has been successfully cast to a
numerical dtype (or if the data was numeric to begin with),
downcast that resulting data to the smallest numerical dtype
possible according to the following rules:
‘integer’ or ‘signed’: smallest signed int dtype (min.: np.int8)
‘unsigned’: smallest unsigned int dtype (min.: np.uint8)
‘float’: smallest float dtype (min.: np.float32)
As this behaviour is separate from the core conversion to
numeric values, any errors raised during the downcasting
will be surfaced regardless of the value of the ‘errors’ input.
In addition, downcasting will only occur if the size
of the resulting data’s dtype is strictly larger than
the dtype it is to be cast to, so if none of the dtypes
checked satisfy that specification, no downcasting will be
performed on the data.
Returns
ret
    Numeric if parsing succeeded.
    Return type depends on input. Series if Series, otherwise ndarray.
See also
DataFrame.astype : Cast argument to a specified dtype.
to_datetime : Convert argument to datetime.
to_timedelta : Convert argument to timedelta.
numpy.ndarray.astype : Cast a numpy array to a specified type.
DataFrame.convert_dtypes : Convert dtypes.
Examples
Take separate series and convert to numeric, coercing when told to
>>> s = pd.Series(['1.0', '2', -3])
>>> pd.to_numeric(s)
0 1.0
1 2.0
2 -3.0
dtype: float64
>>> pd.to_numeric(s, downcast='float')
0 1.0
1 2.0
2 -3.0
dtype: float32
>>> pd.to_numeric(s, downcast='signed')
0 1
1 2
2 -3
dtype: int8
>>> s = pd.Series(['apple', '1.0', '2', -3])
>>> pd.to_numeric(s, errors='ignore')
0 apple
1 1.0
2 2
3 -3
dtype: object
>>> pd.to_numeric(s, errors='coerce')
0 NaN
1 1.0
2 2.0
3 -3.0
dtype: float64
Downcasting of nullable integer and floating dtypes is supported:
>>> s = pd.Series([1, 2, 3], dtype="Int64")
>>> pd.to_numeric(s, downcast="integer")
0 1
1 2
2 3
dtype: Int8
>>> s = pd.Series([1.0, 2.1, 3.0], dtype="Float64")
>>> pd.to_numeric(s, downcast="float")
0 1.0
1 2.1
2 3.0
dtype: Float32
| reference/api/pandas.to_numeric.html |
pandas.api.types.is_sparse | `pandas.api.types.is_sparse`
Check whether an array-like is a 1-D pandas sparse array.
```
>>> is_sparse(pd.arrays.SparseArray([0, 0, 1, 0]))
True
>>> is_sparse(pd.Series(pd.arrays.SparseArray([0, 0, 1, 0])))
True
``` | pandas.api.types.is_sparse(arr)[source]#
Check whether an array-like is a 1-D pandas sparse array.
Check that the one-dimensional array-like is a pandas sparse array.
Returns True if it is a pandas sparse array, not another type of
sparse array.
Parameters
arr : array-like
    Array-like to check.
Returns
bool
    Whether or not the array-like is a pandas sparse array.
Examples
Returns True if the parameter is a 1-D pandas sparse array.
>>> is_sparse(pd.arrays.SparseArray([0, 0, 1, 0]))
True
>>> is_sparse(pd.Series(pd.arrays.SparseArray([0, 0, 1, 0])))
True
Returns False if the parameter is not sparse.
>>> is_sparse(np.array([0, 0, 1, 0]))
False
>>> is_sparse(pd.Series([0, 1, 0, 0]))
False
Returns False if the parameter is not a pandas sparse array.
>>> from scipy.sparse import bsr_matrix
>>> is_sparse(bsr_matrix([0, 1, 0, 0]))
False
Returns False if the parameter has more than one dimension.
| reference/api/pandas.api.types.is_sparse.html |