instance_id | text | repo | base_commit | problem_statement | hints_text | created_at | patch | test_patch | version | FAIL_TO_PASS | PASS_TO_PASS | environment_setup_commit
---|---|---|---|---|---|---|---|---|---|---|---|---|
pandas-dev__pandas-24759 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: DataFrame with tz-aware data and max(axis=1) returns NaN
I have a dataframe that looks like this, and its column 2 is missing:
![image](https://cloud.githubusercontent.com/assets/6269369/8245984/6ff1798c-1669-11e5-88b1-2c27a6b5f2fb.png)
When I try to select the max date in each row, I get all NaN in return:
![img2](https://cloud.githubusercontent.com/assets/6269369/8246090/eea64bbc-166a-11e5-9fb3-668a37a2cf3a.png)
However, if the dataframe's dtype is float64, the selection works as expected.
</issue>
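A minimal reproduction sketch of the behavior described above (constructed for illustration; the timestamps and column names are assumed, not taken from the report). On pandas releases that include the fix for this issue (0.24+), the row-wise reduction returns timestamps rather than all-NaN:

```python
import pandas as pd

# Two columns that share the same timezone-aware dtype (datetime64[ns, UTC]).
stamps = pd.to_datetime(
    ["2016-01-01 00:00", "2016-01-01 01:00", "2016-01-01 02:00"]
).tz_localize("UTC")
df = pd.DataFrame({"a": stamps})
df["b"] = df["a"] - pd.Timedelta(hours=1)  # subtracting a Timedelta keeps the tz

result = df.max(axis=1)

# The row-wise max is column "a" for every row; before the fix this reduction
# came back as a Series of NaN for homogeneous tz-aware frames.
assert result.notna().all()
assert (result == df["a"]).all()
```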
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td>
13 <a href="https://pypi.org/project/pandas/">
14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" />
15 </a>
16 </td>
17 </tr>
18 <td></td>
19 <td>
20 <a href="https://anaconda.org/anaconda/pandas/">
21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" />
22 </a>
23 </td>
24 </tr>
25 <tr>
26 <td>Package Status</td>
27 <td>
28 <a href="https://pypi.org/project/pandas/">
29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td>
30 </a>
31 </tr>
32 <tr>
33 <td>License</td>
34 <td>
35 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE">
36 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" />
37 </a>
38 </td>
39 </tr>
40 <tr>
41 <td>Build Status</td>
42 <td>
43 <a href="https://travis-ci.org/pandas-dev/pandas">
44 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
45 </a>
46 </td>
47 </tr>
48 <tr>
49 <td></td>
50 <td>
51 <a href="https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master">
52 <img src="https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master" alt="Azure Pipelines build status" />
53 </a>
54 </td>
55 </tr>
56 <tr>
57 <td>Coverage</td>
58 <td>
59 <a href="https://codecov.io/gh/pandas-dev/pandas">
60 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" />
61 </a>
62 </td>
63 </tr>
64 <tr>
65 <td>Downloads</td>
66 <td>
67 <a href="https://pandas.pydata.org">
68 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
69 </a>
70 </td>
71 </tr>
72 <tr>
73 <td>Gitter</td>
74 <td>
75 <a href="https://gitter.im/pydata/pandas">
76           <img src="https://badges.gitter.im/Join%20Chat.svg" />
77 </a>
78 </td>
79 </tr>
80 </table>
81
82
83
84 ## What is it?
85
86 **pandas** is a Python package providing fast, flexible, and expressive data
87 structures designed to make working with "relational" or "labeled" data both
88 easy and intuitive. It aims to be the fundamental high-level building block for
89 doing practical, **real world** data analysis in Python. Additionally, it has
90 the broader goal of becoming **the most powerful and flexible open source data
91 analysis / manipulation tool available in any language**. It is already well on
92 its way towards this goal.
93
94 ## Main Features
95 Here are just a few of the things that pandas does well:
96
97 - Easy handling of [**missing data**][missing-data] (represented as
98 `NaN`) in floating point as well as non-floating point data
99 - Size mutability: columns can be [**inserted and
100 deleted**][insertion-deletion] from DataFrame and higher dimensional
101 objects
102 - Automatic and explicit [**data alignment**][alignment]: objects can
103 be explicitly aligned to a set of labels, or the user can simply
104 ignore the labels and let `Series`, `DataFrame`, etc. automatically
105 align the data for you in computations
106 - Powerful, flexible [**group by**][groupby] functionality to perform
107 split-apply-combine operations on data sets, for both aggregating
108 and transforming data
109 - Make it [**easy to convert**][conversion] ragged,
110 differently-indexed data in other Python and NumPy data structures
111 into DataFrame objects
112 - Intelligent label-based [**slicing**][slicing], [**fancy
113 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
114 large data sets
115 - Intuitive [**merging**][merging] and [**joining**][joining] data
116 sets
117 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
118 data sets
119 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
120 labels per tick)
121 - Robust IO tools for loading data from [**flat files**][flat-files]
122 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
123 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
124 - [**Time series**][timeseries]-specific functionality: date range
125 generation and frequency conversion, moving window statistics,
126 moving window linear regressions, date shifting and lagging, etc.
127
128
129 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
130 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
131 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
132 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
133 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
134 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
135 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
136 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
137 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
138 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
139 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
140 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
141 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
142 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
143 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
144 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
145 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
146 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
147
148 ## Where to get it
149 The source code is currently hosted on GitHub at:
150 https://github.com/pandas-dev/pandas
151
152 Binary installers for the latest released version are available at the [Python
153 package index](https://pypi.org/project/pandas) and on conda.
154
155 ```sh
156 # conda
157 conda install pandas
158 ```
159
160 ```sh
161 # or PyPI
162 pip install pandas
163 ```
164
165 ## Dependencies
166 - [NumPy](https://www.numpy.org): 1.12.0 or higher
167 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher
168 - [pytz](https://pythonhosted.org/pytz): 2011k or higher
169
170 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
171 for recommended and optional dependencies.
172
173 ## Installation from sources
174 To install pandas from source you need Cython in addition to the normal
175 dependencies above. Cython can be installed from pypi:
176
177 ```sh
178 pip install cython
179 ```
180
181 In the `pandas` directory (same one where you found this file after
182 cloning the git repo), execute:
183
184 ```sh
185 python setup.py install
186 ```
187
188 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
189
190 ```sh
191 python setup.py develop
192 ```
193
194 Alternatively, you can use `pip` if you want all the dependencies pulled
195 in automatically (the `-e` option is for installing it in [development
196 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)):
197
198 ```sh
199 pip install -e .
200 ```
201
202 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
203
204 ## License
205 [BSD 3](LICENSE)
206
207 ## Documentation
208 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
209
210 ## Background
211 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
212 has been under active development since then.
213
214 ## Getting Help
215
216 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
217 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
218
219 ## Discussion and Development
220 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
221
222 ## Contributing to pandas [![Open Source Helpers](https://www.codetriage.com/pandas-dev/pandas/badges/users.svg)](https://www.codetriage.com/pandas-dev/pandas)
223
224 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
225
226 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas-docs.github.io/pandas-docs-travis/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
227
228 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
229
230 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
231
232 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
233
234 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
235
[end of README.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| pandas-dev/pandas | e2bf1ff491dbdabce1590506d2b1dafca34db82e | BUG: DataFrame with tz-aware data and max(axis=1) returns NaN
I have a dataframe that looks like this, and its column 2 is missing:
![image](https://cloud.githubusercontent.com/assets/6269369/8245984/6ff1798c-1669-11e5-88b1-2c27a6b5f2fb.png)
When I try to select the max date in each row, I get all NaN in return:
![img2](https://cloud.githubusercontent.com/assets/6269369/8246090/eea64bbc-166a-11e5-9fb3-668a37a2cf3a.png)
However, if the dataframe's dtype is float64, the selection works as expected.
| pls show pd.show_versions() and df_datetime64.info()
Hello! Hijacking this issue as I've also verified this behaviour (actually, it took a while to discover after upgrading to 0.19.0 and discovering some odd dropping of timezones - see #14524, which is a duplication of #13905). This behaviour was masked to my program previously as Pandas 0.18.1 was dropping the timezones from all relevant columns before I tried to perform this step. Once upgrading to 0.19.0 half the operations I was performing stopped dropping timezones, leading to mismatch between tz-aware and tz-naive timestamps which I've been chasing down the rabbit hole for a couple of days now.
I've verified that this is present in pandas 0.18.1 and 0.19.0.
From some stepping through of the code, this looks like a potential problem with the numpy implementations of `.max(axis=1)`, but I haven't yet found the culprit!
This issue has meant that I've been forced to roll back to 0.18.1 to use the drop timezone bug in order to make the `df.max(axis=1)` work, which is frustrating! I have also tried a `df.T.max()` to work around the issue, but this infuriatingly returns an empty series (see below).
#### A small, complete example of the issue
``` python
import pandas as pd
df = pd.DataFrame(pd.date_range(start=pd.Timestamp('2016-01-01 00:00:00+00'), end=pd.Timestamp('2016-01-01 23:59:59+00'), freq='H'))
df.columns = ['a']
df['b'] = df.a.subtract(pd.Timedelta(seconds=60*60)) # if using pandas 0.19.0 to test, ensure that this is a series of timedeltas instead of a single - we want b and c to be tz-naive.
df[['a', 'b']].max() # This is fine, produces two numbers
df[['a', 'b']].max(axis=1) # This is not fine, produces a correctly sized series of NaN
df['c'] = df.a.subtract(pd.Timedelta(seconds=60)) # if using pandas 0.19.0 to test, ensure that this is a series of timedeltas instead of a single - we want b and c to be tz-naive.
df[['b', 'c']].max(axis=1) # This is fine, produces correctly sized series of valid timestamps without timezone
df[['a', 'b']].T.max() # produces an empty series.
```
#### Expected Output
Calling `df.max(axis=1)` on a dataframe with timezone-aware timestamps should return valid timestamps, not NaN.
#### Output of `pd.show_versions()`
(I have tested in two virtualenvs, the only difference between the two being the pandas version)
<details>
# Paste the output here
## INSTALLED VERSIONS
commit: None
python: 2.7.10.final.0
python-bits: 64
OS: Darwin
OS-release: 14.5.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: None.None
pandas: 0.18.1
nose: 1.3.7
pip: 8.1.2
setuptools: 28.6.0
Cython: None
numpy: 1.11.2
scipy: None
statsmodels: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.5.3
pytz: 2016.7
blosc: None
bottleneck: None
tables: None
numexpr: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
boto: None
pandas_datareader: None
</details>
| 2019-01-14T00:28:56Z | <patch>
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1647,6 +1647,7 @@ Timezones
- Bug in :meth:`DataFrame.any` returns wrong value when ``axis=1`` and the data is of datetimelike type (:issue:`23070`)
- Bug in :meth:`DatetimeIndex.to_period` where a timezone aware index was converted to UTC first before creating :class:`PeriodIndex` (:issue:`22905`)
- Bug in :meth:`DataFrame.tz_localize`, :meth:`DataFrame.tz_convert`, :meth:`Series.tz_localize`, and :meth:`Series.tz_convert` where ``copy=False`` would mutate the original argument inplace (:issue:`6326`)
+- Bug in :meth:`DataFrame.max` and :meth:`DataFrame.min` with ``axis=1`` where a :class:`Series` with ``NaN`` would be returned when all columns contained the same timezone (:issue:`10390`)
Offsets
^^^^^^^
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -49,6 +49,7 @@
find_common_type)
from pandas.core.dtypes.common import (
is_dict_like,
+ is_datetime64tz_dtype,
is_object_dtype,
is_extension_type,
is_extension_array_dtype,
@@ -7390,7 +7391,9 @@ def f(x):
return op(x, axis=axis, skipna=skipna, **kwds)
# exclude timedelta/datetime unless we are uniform types
- if axis == 1 and self._is_mixed_type and self._is_datelike_mixed_type:
+ if (axis == 1 and self._is_datelike_mixed_type
+ and (not self._is_homogeneous_type
+ and not is_datetime64tz_dtype(self.dtypes[0]))):
numeric_only = True
if numeric_only is None:
</patch> | [] | [] | |||
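The patched guard above skips the numeric-only fallback when the columns are dtype-homogeneous or when the first column's dtype is timezone-aware. `_is_homogeneous_type` is a pandas internal, but the distinction it relies on can be sketched with public APIs (the frames below are illustrative, not from the original report):

```python
import pandas as pd

# All columns share one tz-aware dtype: the fallback no longer applies here.
tz_frame = pd.DataFrame(
    {
        "a": pd.to_datetime(["2016-01-01", "2016-01-02"]).tz_localize("UTC"),
        "b": pd.to_datetime(["2015-12-31", "2016-01-01"]).tz_localize("UTC"),
    }
)

# Mixed dtypes: the pre-existing numeric-only behavior is still reachable.
mixed_frame = pd.DataFrame(
    {
        "a": pd.to_datetime(["2016-01-01", "2016-01-02"]).tz_localize("UTC"),
        "b": [1.5, 2.5],
    }
)

assert tz_frame.dtypes.nunique() == 1
assert isinstance(tz_frame.dtypes.iloc[0], pd.DatetimeTZDtype)
assert mixed_frame.dtypes.nunique() == 2
```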
pandas-dev__pandas-37834 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
read_json with dtype=False infers Missing Values as None
Run against master:
```python
In [13]: pd.read_json("[null]", dtype=True)
Out[13]:
0
0 NaN
In [14]: pd.read_json("[null]", dtype=False)
Out[14]:
0
0 None
```
I think the second above is an issue - should probably return `np.nan` instead of `None`
</issue>
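A hedged check of the behavior described above (the fix for this issue shipped in pandas 1.2.0; `StringIO` is used here because passing literal JSON strings directly is deprecated in recent pandas versions):

```python
import io

import pandas as pd

df = pd.read_json(io.StringIO("[null]"), dtype=False)

# Post-fix, the JSON null is surfaced as a missing value, so isna() detects it
# regardless of whether the underlying scalar is NaN or None.
assert df.shape == (1, 1)
assert pd.isna(df.iloc[0, 0])
```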
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8 [![PyPI Latest Release](https://img.shields.io/pypi/v/pandas.svg)](https://pypi.org/project/pandas/)
9 [![Conda Latest Release](https://anaconda.org/conda-forge/pandas/badges/version.svg)](https://anaconda.org/anaconda/pandas/)
10 [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3509134.svg)](https://doi.org/10.5281/zenodo.3509134)
11 [![Package Status](https://img.shields.io/pypi/status/pandas.svg)](https://pypi.org/project/pandas/)
12 [![License](https://img.shields.io/pypi/l/pandas.svg)](https://github.com/pandas-dev/pandas/blob/master/LICENSE)
13 [![Travis Build Status](https://travis-ci.org/pandas-dev/pandas.svg?branch=master)](https://travis-ci.org/pandas-dev/pandas)
14 [![Azure Build Status](https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master)](https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master)
15 [![Coverage](https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master)](https://codecov.io/gh/pandas-dev/pandas)
16 [![Downloads](https://anaconda.org/conda-forge/pandas/badges/downloads.svg)](https://pandas.pydata.org)
17 [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/pydata/pandas)
18 [![Powered by NumFOCUS](https://img.shields.io/badge/powered%20by-NumFOCUS-orange.svg?style=flat&colorA=E1523D&colorB=007D8A)](https://numfocus.org)
19 [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
20
21 ## What is it?
22
23 **pandas** is a Python package that provides fast, flexible, and expressive data
24 structures designed to make working with "relational" or "labeled" data both
25 easy and intuitive. It aims to be the fundamental high-level building block for
26 doing practical, **real world** data analysis in Python. Additionally, it has
27 the broader goal of becoming **the most powerful and flexible open source data
28 analysis / manipulation tool available in any language**. It is already well on
29 its way towards this goal.
30
31 ## Main Features
32 Here are just a few of the things that pandas does well:
33
34 - Easy handling of [**missing data**][missing-data] (represented as
35 `NaN`, `NA`, or `NaT`) in floating point as well as non-floating point data
36 - Size mutability: columns can be [**inserted and
37 deleted**][insertion-deletion] from DataFrame and higher dimensional
38 objects
39 - Automatic and explicit [**data alignment**][alignment]: objects can
40 be explicitly aligned to a set of labels, or the user can simply
41 ignore the labels and let `Series`, `DataFrame`, etc. automatically
42 align the data for you in computations
43 - Powerful, flexible [**group by**][groupby] functionality to perform
44 split-apply-combine operations on data sets, for both aggregating
45 and transforming data
46 - Make it [**easy to convert**][conversion] ragged,
47 differently-indexed data in other Python and NumPy data structures
48 into DataFrame objects
49 - Intelligent label-based [**slicing**][slicing], [**fancy
50 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
51 large data sets
52 - Intuitive [**merging**][merging] and [**joining**][joining] data
53 sets
54 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
55 data sets
56 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
57 labels per tick)
58 - Robust IO tools for loading data from [**flat files**][flat-files]
59 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
60 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
61 - [**Time series**][timeseries]-specific functionality: date range
62 generation and frequency conversion, moving window statistics,
63 date shifting and lagging.
64
65
66 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
67 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
68 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
69 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
70 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
71 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
72 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
73 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
74 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
75 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
76 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
77 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
78 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
79 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
80 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
81 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
82 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
83 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
84
85 ## Where to get it
86 The source code is currently hosted on GitHub at:
87 https://github.com/pandas-dev/pandas
88
89 Binary installers for the latest released version are available at the [Python
90 package index](https://pypi.org/project/pandas) and on conda.
91
92 ```sh
93 # conda
94 conda install pandas
95 ```
96
97 ```sh
98 # or PyPI
99 pip install pandas
100 ```
101
102 ## Dependencies
103 - [NumPy](https://www.numpy.org)
104 - [python-dateutil](https://labix.org/python-dateutil)
105 - [pytz](https://pythonhosted.org/pytz)
106
107 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies.
108
109 ## Installation from sources
110 To install pandas from source you need Cython in addition to the normal
111 dependencies above. Cython can be installed from pypi:
112
113 ```sh
114 pip install cython
115 ```
116
117 In the `pandas` directory (same one where you found this file after
118 cloning the git repo), execute:
119
120 ```sh
121 python setup.py install
122 ```
123
124 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
125
126
127 ```sh
128 python -m pip install -e . --no-build-isolation --no-use-pep517
129 ```
130
131 If you have `make`, you can also use `make develop` to run the same command.
132
133 or alternatively
134
135 ```sh
136 python setup.py develop
137 ```
138
139 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
140
141 ## License
142 [BSD 3](LICENSE)
143
144 ## Documentation
145 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
146
147 ## Background
148 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
149 has been under active development since then.
150
151 ## Getting Help
152
153 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
154 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
155
156 ## Discussion and Development
157 Most development discussions take place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
158
159 ## Contributing to pandas [![Open Source Helpers](https://www.codetriage.com/pandas-dev/pandas/badges/users.svg)](https://www.codetriage.com/pandas-dev/pandas)
160
161 All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.
162
163 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas.pydata.org/docs/dev/development/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
164
165 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
166
167 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
168
169 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
170
171 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
172
173 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md)
174
[end of README.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| pandas-dev/pandas | 793b6351f59b42ec6a0dc42ccce679a5cf232a1a | read_json with dtype=False infers Missing Values as None
Run against master:
```python
In [13]: pd.read_json("[null]", dtype=True)
Out[13]:
0
0 NaN
In [14]: pd.read_json("[null]", dtype=False)
Out[14]:
0
0 None
```
I think the second above is an issue - should probably return `np.nan` instead of `None`
| I _think_ that this isn't a bug. NaN is a numerical value. If `dtype=False` we shouldn't infer it to be a numerical value.
But if it it is an issue I can do this one.
Yea there is certainly some ambiguity here and @jorisvandenbossche might have some thoughts but I don't necessarily think the JSON `null` would map better to `None` than it would to `np.nan`.
read_csv would also convert this to NaN:
```python
>>> pd.read_csv(io.StringIO("1\nnull"))
1
0 NaN
```
Complementing what @chrisstpierre said, the null type in json can be numerical, string, list, ..., as seen in https://stackoverflow.com/questions/21120999/representing-null-in-json.
So setting up as `np.nan` even though `dtype` is set explicitly to `False` may be beneficial when working with numerical data only but not with strings for example. I think a `np.nan` in a column of strings should not be desirable.
Hi all - Just curious whether stringifying `None` in the following example is expected behaviour?
```python
df = pd.read_json('[null]', dtype={0: str})
print(df.values)
# [['None']]
```
If yes, how can I avoid this while at the same time specifying the the column to be of `str` type?
Thanks for your help! | 2020-11-14T16:52:13Z | <patch>
diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -653,6 +653,7 @@ I/O
- Bug in :func:`read_html` was raising a ``TypeError`` when supplying a ``pathlib.Path`` argument to the ``io`` parameter (:issue:`37705`)
- :meth:`to_excel` and :meth:`to_markdown` support writing to fsspec URLs such as S3 and Google Cloud Storage (:issue:`33987`)
- Bug in :meth:`read_fw` was not skipping blank lines (even with ``skip_blank_lines=True``) (:issue:`37758`)
+- Parse missing values using :func:`read_json` with ``dtype=False`` to ``NaN`` instead of ``None`` (:issue:`28501`)
- :meth:`read_fwf` was inferring compression with ``compression=None`` which was not consistent with the other :meth:``read_*`` functions (:issue:`37909`)
Period
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -20,7 +20,7 @@
from pandas.core.dtypes.common import ensure_str, is_period_dtype
-from pandas import DataFrame, MultiIndex, Series, isna, to_datetime
+from pandas import DataFrame, MultiIndex, Series, isna, notna, to_datetime
from pandas.core import generic
from pandas.core.construction import create_series_with_explicit_dtype
from pandas.core.generic import NDFrame
@@ -858,7 +858,10 @@ def _try_convert_data(self, name, data, use_dtypes=True, convert_dates=True):
# don't try to coerce, unless a force conversion
if use_dtypes:
if not self.dtype:
- return data, False
+ if all(notna(data)):
+ return data, False
+ return data.fillna(np.nan), True
+
elif self.dtype is True:
pass
else:
</patch> | [] | [] | |||
pandas-dev__pandas-24725 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DataFrame creation incorrect error message
The problem was already mentioned as part of other issues, but still persists in 0.22
https://github.com/pandas-dev/pandas/issues/8020
https://github.com/blaze/blaze/issues/466
The reported expected shape and the reported input data shape are both transposed, which causes a lot of confusion. In my opinion, the reference value should be `DataFrame.shape`.
```python
my_arr = np.array([1, 2, 3])
print("my_arr.shape: {}".format(my_arr.shape))
df = pd.DataFrame(index=[0], columns=range(0, 4), data=my_arr)
```
```python
my_arr.shape: (3,)
Traceback (most recent call last):
...
ValueError: Shape of passed values is (1, 3), indices imply (4, 1)
```
Below are shapes which are expected to be reported:
```python
my_arr = np.array([[0, 1, 2, 3]])
print("my_arr.shape: {}".format(my_arr.shape))
df = pd.DataFrame(index=[0], columns=range(0, 4), data=my_arr)
print(df.shape)
```
```python
my_arr.shape: (1, 4)
(1, 4)
```
I'm not sure whether this is another issue, but in the first example the error is caused by 1-dimensional data where the constructor expects 2-dimensional data. The user gets no hint about this from the error message.
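For reference, a sketch of the shape the constructor actually wants here (assuming NumPy and pandas are importable): a 1-row, 4-column frame needs 2-dimensional data of shape `(1, 4)`, so the 1-D array has to be reshaped first.

```python
import numpy as np
import pandas as pd

my_arr = np.array([0, 1, 2, 3])  # 1-D, shape (4,)

# reshape(1, -1) turns it into the 2-D shape (1, 4) that the
# index/columns imply, so construction succeeds.
df = pd.DataFrame(data=my_arr.reshape(1, -1), index=[0], columns=range(4))
assert df.shape == (1, 4)
```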
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td>
13 <a href="https://pypi.org/project/pandas/">
14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" />
15 </a>
16 </td>
17 </tr>
18 <td></td>
19 <td>
20 <a href="https://anaconda.org/anaconda/pandas/">
21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" />
22 </a>
23 </td>
24 </tr>
25 <tr>
26 <td>Package Status</td>
27 <td>
28 <a href="https://pypi.org/project/pandas/">
29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td>
30 </a>
31 </tr>
32 <tr>
33 <td>License</td>
34 <td>
35 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE">
36 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" />
37 </a>
38 </td>
39 </tr>
40 <tr>
41 <td>Build Status</td>
42 <td>
43 <a href="https://travis-ci.org/pandas-dev/pandas">
44 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
45 </a>
46 </td>
47 </tr>
48 <tr>
49 <td></td>
50 <td>
51 <a href="https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master">
52 <img src="https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master" alt="Azure Pipelines build status" />
53 </a>
54 </td>
55 </tr>
56 <tr>
57 <td>Coverage</td>
58 <td>
59 <a href="https://codecov.io/gh/pandas-dev/pandas">
60 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" />
61 </a>
62 </td>
63 </tr>
64 <tr>
65 <td>Downloads</td>
66 <td>
67 <a href="https://pandas.pydata.org">
68 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
69 </a>
70 </td>
71 </tr>
72 <tr>
73 <td>Gitter</td>
74 <td>
75 <a href="https://gitter.im/pydata/pandas">
76 <img src="https://badges.gitter.im/Join%20Chat.svg"
77 </a>
78 </td>
79 </tr>
80 </table>
81
82
83
84 ## What is it?
85
86 **pandas** is a Python package providing fast, flexible, and expressive data
87 structures designed to make working with "relational" or "labeled" data both
88 easy and intuitive. It aims to be the fundamental high-level building block for
89 doing practical, **real world** data analysis in Python. Additionally, it has
90 the broader goal of becoming **the most powerful and flexible open source data
91 analysis / manipulation tool available in any language**. It is already well on
92 its way towards this goal.
93
94 ## Main Features
95 Here are just a few of the things that pandas does well:
96
97 - Easy handling of [**missing data**][missing-data] (represented as
98 `NaN`) in floating point as well as non-floating point data
99 - Size mutability: columns can be [**inserted and
100 deleted**][insertion-deletion] from DataFrame and higher dimensional
101 objects
102 - Automatic and explicit [**data alignment**][alignment]: objects can
103 be explicitly aligned to a set of labels, or the user can simply
104 ignore the labels and let `Series`, `DataFrame`, etc. automatically
105 align the data for you in computations
106 - Powerful, flexible [**group by**][groupby] functionality to perform
107 split-apply-combine operations on data sets, for both aggregating
108 and transforming data
109 - Make it [**easy to convert**][conversion] ragged,
110 differently-indexed data in other Python and NumPy data structures
111 into DataFrame objects
112 - Intelligent label-based [**slicing**][slicing], [**fancy
113 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
114 large data sets
115 - Intuitive [**merging**][merging] and [**joining**][joining] data
116 sets
117 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
118 data sets
119 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
120 labels per tick)
121 - Robust IO tools for loading data from [**flat files**][flat-files]
122 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
123 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
124 - [**Time series**][timeseries]-specific functionality: date range
125 generation and frequency conversion, moving window statistics,
126 moving window linear regressions, date shifting and lagging, etc.
127
128
129 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
130 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
131 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
132 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
133 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
134 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
135 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
136 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
137 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
138 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
139 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
140 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
141 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
142 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
143 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
144 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
145 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
146 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
147
148 ## Where to get it
149 The source code is currently hosted on GitHub at:
150 https://github.com/pandas-dev/pandas
151
152 Binary installers for the latest released version are available at the [Python
153 package index](https://pypi.org/project/pandas) and on conda.
154
155 ```sh
156 # conda
157 conda install pandas
158 ```
159
160 ```sh
161 # or PyPI
162 pip install pandas
163 ```
164
165 ## Dependencies
166 - [NumPy](https://www.numpy.org): 1.12.0 or higher
167 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher
168 - [pytz](https://pythonhosted.org/pytz): 2011k or higher
169
170 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
171 for recommended and optional dependencies.
172
173 ## Installation from sources
174 To install pandas from source you need Cython in addition to the normal
175 dependencies above. Cython can be installed from pypi:
176
177 ```sh
178 pip install cython
179 ```
180
181 In the `pandas` directory (same one where you found this file after
182 cloning the git repo), execute:
183
184 ```sh
185 python setup.py install
186 ```
187
188 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
189
190 ```sh
191 python setup.py develop
192 ```
193
194 Alternatively, you can use `pip` if you want all the dependencies pulled
195 in automatically (the `-e` option is for installing it in [development
196 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)):
197
198 ```sh
199 pip install -e .
200 ```
201
202 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
203
204 ## License
205 [BSD 3](LICENSE)
206
207 ## Documentation
208 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
209
210 ## Background
211 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
212 has been under active development since then.
213
214 ## Getting Help
215
216 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
217 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
218
219 ## Discussion and Development
220 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
221
222 ## Contributing to pandas [![Open Source Helpers](https://www.codetriage.com/pandas-dev/pandas/badges/users.svg)](https://www.codetriage.com/pandas-dev/pandas)
223
224 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
225
226 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas-docs.github.io/pandas-docs-travis/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
227
228 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
229
230 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
231
232 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
233
234 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
235
[end of README.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| pandas-dev/pandas | 17a6bc56e5ab6ad3dab12d3a8b20ed69a5830b6f | DataFrame creation incorrect error message
The problem was already mentioned as part of other issues, but still persists in 0.22
https://github.com/pandas-dev/pandas/issues/8020
https://github.com/blaze/blaze/issues/466
The reported expected shape and the reported input data shape are both transposed, which causes a lot of confusion. In my opinion, the reference value should be `DataFrame.shape`.
```python
my_arr = np.array([1, 2, 3])
print("my_arr.shape: {}".format(my_arr.shape))
df = pd.DataFrame(index=[0], columns=range(0, 4), data=my_arr)
```
```python
my_arr.shape: (3,)
Traceback (most recent call last):
...
ValueError: Shape of passed values is (1, 3), indices imply (4, 1)
```
Below are shapes which are expected to be reported:
```python
my_arr = np.array([[0, 1, 2, 3]])
print("my_arr.shape: {}".format(my_arr.shape))
df = pd.DataFrame(index=[0], columns=range(0, 4), data=my_arr)
print(df.shape)
```
```python
my_arr.shape: (1, 4)
(1, 4)
```
I'm not sure whether this is another issue, but in the first example the error is caused by 1-dimensional data where the constructor expects 2-dimensional data. The user gets no hint about this from the error message.
| the first example is wrong. The block manager reports this, but doesn't flip the dim (like we do for everything else), so would welcome a PR to correct that.
I don't see a problem with the 2nd. You gave a 1, 4 array. That's the same as the dim of the frame, so it constructs. | 2019-01-11T15:13:07Z | <patch>
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1816,6 +1816,7 @@ Reshaping
- Bug in :func:`DataFrame.unstack` where a ``ValueError`` was raised when unstacking timezone aware values (:issue:`18338`)
- Bug in :func:`DataFrame.stack` where timezone aware values were converted to timezone naive values (:issue:`19420`)
- Bug in :func:`merge_asof` where a ``TypeError`` was raised when ``by_col`` were timezone aware values (:issue:`21184`)
+- Bug showing an incorrect shape when throwing error during ``DataFrame`` construction. (:issue:`20742`)
.. _whatsnew_0240.bug_fixes.sparse:
@@ -1853,6 +1854,7 @@ Other
- Bug where C variables were declared with external linkage causing import errors if certain other C libraries were imported before Pandas. (:issue:`24113`)
+
.. _whatsnew_0.24.0.contributors:
Contributors
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -1674,7 +1674,15 @@ def create_block_manager_from_arrays(arrays, names, axes):
def construction_error(tot_items, block_shape, axes, e=None):
""" raise a helpful message about our construction """
passed = tuple(map(int, [tot_items] + list(block_shape)))
- implied = tuple(map(int, [len(ax) for ax in axes]))
+ # Correcting the user facing error message during dataframe construction
+ if len(passed) <= 2:
+ passed = passed[::-1]
+
+ implied = tuple(len(ax) for ax in axes)
+ # Correcting the user facing error message during dataframe construction
+ if len(implied) <= 2:
+ implied = implied[::-1]
+
if passed == implied and e is not None:
raise e
if block_shape[0] == 0:
</patch> | [] | [] | |||
pandas-dev__pandas-25419 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Series.str.casefold
`Series.str.lower` implements `str.lower`, as expected. There are also corresponding `Series` methods for the other Python 2 string casing methods. However, Python 3's `str.casefold` is missing. [Casefold](https://docs.python.org/3/library/stdtypes.html#str.casefold) improves string equality and other comparisons, because it handles a greater variety of characters, as per the Unicode Standard. It'd be nice to have `Series.str.casefold` in Pandas.
The current alternative is more verbose and is slower than `Series.str.lower`.
pd.Series(s.casefold() if isinstance(s, str) else s for s in series)
Further, this alternative encourages a frustrating mistake -- forgetting to keep the original Series' index, which causes trouble if the new Series needs to be inserted into the same DataFrame as the original.
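A minimal sketch of both points (plain Python plus pandas; `Series.str.casefold` itself is the method being requested, so it is not used here): `str.casefold` folds characters that `str.lower` leaves alone, and `Series.map` gives an index-preserving workaround in the meantime.

```python
import pandas as pd

# casefold handles cases that lower() does not, e.g. the German sharp s:
assert "groß".lower() == "groß"
assert "groß".casefold() == "gross"

# Index-preserving workaround until Series.str.casefold exists:
s = pd.Series(["Fuß", None], index=[10, 20])
folded = s.map(lambda x: x.casefold() if isinstance(x, str) else x)
assert list(folded.index) == [10, 20]  # original index is kept
assert folded.loc[10] == "fuss"
assert folded.isna().loc[20]  # missing values pass through untouched
```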
Apologies for double-posting. I used the wrong account for #25404 .
</issue>
<code>
[start of README.md]
1 <div align="center">
2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br>
3 </div>
4
5 -----------------
6
7 # pandas: powerful Python data analysis toolkit
8
9 <table>
10 <tr>
11 <td>Latest Release</td>
12 <td>
13 <a href="https://pypi.org/project/pandas/">
14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" />
15 </a>
16 </td>
17 </tr>
18 <td></td>
19 <td>
20 <a href="https://anaconda.org/anaconda/pandas/">
21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" />
22 </a>
23 </td>
24 </tr>
25 <tr>
26 <td>Package Status</td>
27 <td>
28 <a href="https://pypi.org/project/pandas/">
29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td>
30 </a>
31 </tr>
32 <tr>
33 <td>License</td>
34 <td>
35 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE">
36 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" />
37 </a>
38 </td>
39 </tr>
40 <tr>
41 <td>Build Status</td>
42 <td>
43 <a href="https://travis-ci.org/pandas-dev/pandas">
44 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" />
45 </a>
46 </td>
47 </tr>
48 <tr>
49 <td></td>
50 <td>
51 <a href="https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master">
52 <img src="https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master" alt="Azure Pipelines build status" />
53 </a>
54 </td>
55 </tr>
56 <tr>
57 <td>Coverage</td>
58 <td>
59 <a href="https://codecov.io/gh/pandas-dev/pandas">
60 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" />
61 </a>
62 </td>
63 </tr>
64 <tr>
65 <td>Downloads</td>
66 <td>
67 <a href="https://pandas.pydata.org">
68 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" />
69 </a>
70 </td>
71 </tr>
72 <tr>
73 <td>Gitter</td>
74 <td>
75 <a href="https://gitter.im/pydata/pandas">
76 <img src="https://badges.gitter.im/Join%20Chat.svg"
77 </a>
78 </td>
79 </tr>
80 </table>
81
82
83
84 ## What is it?
85
86 **pandas** is a Python package providing fast, flexible, and expressive data
87 structures designed to make working with "relational" or "labeled" data both
88 easy and intuitive. It aims to be the fundamental high-level building block for
89 doing practical, **real world** data analysis in Python. Additionally, it has
90 the broader goal of becoming **the most powerful and flexible open source data
91 analysis / manipulation tool available in any language**. It is already well on
92 its way towards this goal.
93
94 ## Main Features
95 Here are just a few of the things that pandas does well:
96
97 - Easy handling of [**missing data**][missing-data] (represented as
98 `NaN`) in floating point as well as non-floating point data
99 - Size mutability: columns can be [**inserted and
100 deleted**][insertion-deletion] from DataFrame and higher dimensional
101 objects
102 - Automatic and explicit [**data alignment**][alignment]: objects can
103 be explicitly aligned to a set of labels, or the user can simply
104 ignore the labels and let `Series`, `DataFrame`, etc. automatically
105 align the data for you in computations
106 - Powerful, flexible [**group by**][groupby] functionality to perform
107 split-apply-combine operations on data sets, for both aggregating
108 and transforming data
109 - Make it [**easy to convert**][conversion] ragged,
110 differently-indexed data in other Python and NumPy data structures
111 into DataFrame objects
112 - Intelligent label-based [**slicing**][slicing], [**fancy
113 indexing**][fancy-indexing], and [**subsetting**][subsetting] of
114 large data sets
115 - Intuitive [**merging**][merging] and [**joining**][joining] data
116 sets
117 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
118 data sets
119 - [**Hierarchical**][mi] labeling of axes (possible to have multiple
120 labels per tick)
121 - Robust IO tools for loading data from [**flat files**][flat-files]
122 (CSV and delimited), [**Excel files**][excel], [**databases**][db],
123 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
124 - [**Time series**][timeseries]-specific functionality: date range
125 generation and frequency conversion, moving window statistics,
126 moving window linear regressions, date shifting and lagging, etc.
127
128
129 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data
130 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion
131 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures
132 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine
133 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe
134 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
135 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix
136 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
137 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
138 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
139 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables
140 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations
141 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex
142 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files
143 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files
144 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries
145 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables
146 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality
147
148 ## Where to get it
149 The source code is currently hosted on GitHub at:
150 https://github.com/pandas-dev/pandas
151
152 Binary installers for the latest released version are available at the [Python
153 package index](https://pypi.org/project/pandas) and on conda.
154
155 ```sh
156 # conda
157 conda install pandas
158 ```
159
160 ```sh
161 # or PyPI
162 pip install pandas
163 ```
164
165 ## Dependencies
166 - [NumPy](https://www.numpy.org): 1.12.0 or higher
167 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher
168 - [pytz](https://pythonhosted.org/pytz): 2011k or higher
169
170 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies)
171 for recommended and optional dependencies.
172
173 ## Installation from sources
174 To install pandas from source you need Cython in addition to the normal
175 dependencies above. Cython can be installed from pypi:
176
177 ```sh
178 pip install cython
179 ```
180
181 In the `pandas` directory (same one where you found this file after
182 cloning the git repo), execute:
183
184 ```sh
185 python setup.py install
186 ```
187
188 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs):
189
190 ```sh
191 python setup.py develop
192 ```
193
194 Alternatively, you can use `pip` if you want all the dependencies pulled
195 in automatically (the `-e` option is for installing it in [development
196 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)):
197
198 ```sh
199 pip install -e .
200 ```
201
202 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source).
203
204 ## License
205 [BSD 3](LICENSE)
206
207 ## Documentation
208 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
209
210 ## Background
211 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and
212 has been under active development since then.
213
214 ## Getting Help
215
216 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
217 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
218
219 ## Discussion and Development
220 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions.
221
222 ## Contributing to pandas [![Open Source Helpers](https://www.codetriage.com/pandas-dev/pandas/badges/users.svg)](https://www.codetriage.com/pandas-dev/pandas)
223
224 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
225
226 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas-docs.github.io/pandas-docs-travis/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
227
228 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
229
230 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
231
232 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
233
234 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas).
235
[end of README.md]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
| pandas-dev/pandas | 85572de5e7bb188cfecc575ee56786406e79dc79 | Series.str.casefold
`Series.str.lower` implements `str.lower`, as expected. There are also corresponding `Series` methods for the other Python 2 string casing methods. However, Python 3's `str.casefold` is missing. [Casefold](https://docs.python.org/3/library/stdtypes.html#str.casefold) improves string equality and other comparisons, because it handles a greater variety of characters, as per the Unicode Standard. It'd be nice to have `Series.str.casefold` in Pandas.
The current alternative is more verbose and is slower than `Series.str.lower`.
pd.Series(s.casefold() if isinstance(s, str) else s for s in series)
Further, this alternative encourages a frustrating mistake -- forgetting to keep the original Series' index, which causes trouble if the new Series needs to be inserted into the same DataFrame as the original.
Apologies for double-posting. I used the wrong account for #25404 .
| can u update with the doc reference for casefold in python docs
certainly would take as an enhancement PR | 2019-02-23T19:41:19Z | <patch>
diff --git a/doc/source/reference/series.rst b/doc/source/reference/series.rst
--- a/doc/source/reference/series.rst
+++ b/doc/source/reference/series.rst
@@ -409,6 +409,7 @@ strings and apply several methods to it. These can be accessed like
:template: autosummary/accessor_method.rst
Series.str.capitalize
+ Series.str.casefold
Series.str.cat
Series.str.center
Series.str.contains
diff --git a/doc/source/user_guide/text.rst b/doc/source/user_guide/text.rst
--- a/doc/source/user_guide/text.rst
+++ b/doc/source/user_guide/text.rst
@@ -600,6 +600,7 @@ Method Summary
:meth:`~Series.str.partition`;Equivalent to ``str.partition``
:meth:`~Series.str.rpartition`;Equivalent to ``str.rpartition``
:meth:`~Series.str.lower`;Equivalent to ``str.lower``
+ :meth:`~Series.str.casefold`;Equivalent to ``str.casefold``
:meth:`~Series.str.upper`;Equivalent to ``str.upper``
:meth:`~Series.str.find`;Equivalent to ``str.find``
:meth:`~Series.str.rfind`;Equivalent to ``str.rfind``
diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -22,6 +22,7 @@ Other Enhancements
- Indexing of ``DataFrame`` and ``Series`` now accepts zerodim ``np.ndarray`` (:issue:`24919`)
- :meth:`Timestamp.replace` now supports the ``fold`` argument to disambiguate DST transition times (:issue:`25017`)
- :meth:`DataFrame.at_time` and :meth:`Series.at_time` now support :meth:`datetime.time` objects with timezones (:issue:`24043`)
+- ``Series.str`` has gained a :meth:`Series.str.casefold` method to remove all case distinctions present in a string (:issue:`25405`)
- :meth:`DataFrame.set_index` now works for instances of ``abc.Iterator``, provided their output is of the same length as the calling frame (:issue:`22484`, :issue:`24984`)
- :meth:`DatetimeIndex.union` now supports the ``sort`` argument. The behaviour of the sort parameter matches that of :meth:`Index.union` (:issue:`24994`)
-
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -2926,7 +2926,7 @@ def rindex(self, sub, start=0, end=None):
_shared_docs['casemethods'] = ("""
Convert strings in the Series/Index to %(type)s.
-
+ %(version)s
Equivalent to :meth:`str.%(method)s`.
Returns
@@ -2943,6 +2943,7 @@ def rindex(self, sub, start=0, end=None):
remaining to lowercase.
Series.str.swapcase : Converts uppercase to lowercase and lowercase to
uppercase.
+ Series.str.casefold: Removes all case distinctions in the string.
Examples
--------
@@ -2989,12 +2990,15 @@ def rindex(self, sub, start=0, end=None):
3 sWaPcAsE
dtype: object
""")
- _shared_docs['lower'] = dict(type='lowercase', method='lower')
- _shared_docs['upper'] = dict(type='uppercase', method='upper')
- _shared_docs['title'] = dict(type='titlecase', method='title')
+ _shared_docs['lower'] = dict(type='lowercase', method='lower', version='')
+ _shared_docs['upper'] = dict(type='uppercase', method='upper', version='')
+ _shared_docs['title'] = dict(type='titlecase', method='title', version='')
_shared_docs['capitalize'] = dict(type='be capitalized',
- method='capitalize')
- _shared_docs['swapcase'] = dict(type='be swapcased', method='swapcase')
+ method='capitalize', version='')
+ _shared_docs['swapcase'] = dict(type='be swapcased', method='swapcase',
+ version='')
+ _shared_docs['casefold'] = dict(type='be casefolded', method='casefold',
+ version='\n .. versionadded:: 0.25.0\n')
lower = _noarg_wrapper(lambda x: x.lower(),
docstring=_shared_docs['casemethods'] %
_shared_docs['lower'])
@@ -3010,6 +3014,9 @@ def rindex(self, sub, start=0, end=None):
swapcase = _noarg_wrapper(lambda x: x.swapcase(),
docstring=_shared_docs['casemethods'] %
_shared_docs['swapcase'])
+ casefold = _noarg_wrapper(lambda x: x.casefold(),
+ docstring=_shared_docs['casemethods'] %
+ _shared_docs['casefold'])
_shared_docs['ismethods'] = ("""
Check whether all characters in each string are %(type)s.
</patch> | [] | [] |
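The patch wires `casefold` through `_noarg_wrapper`, which applies the given lambda to each element of the Series. A rough pure-Python sketch of that element-wise behavior follows; the function name is illustrative only, and pandas' real implementation additionally handles NA values and dtypes:

```python
# Apply str.casefold element-wise, passing non-string values through
# unchanged -- roughly what _noarg_wrapper(lambda x: x.casefold())
# produces, minus pandas' NA/dtype handling.
def casefold_elementwise(values):
    return [v.casefold() if isinstance(v, str) else v for v in values]

print(casefold_elementwise(["STRASSE", "straße", None]))
# ['strasse', 'strasse', None]
```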