| repo (string, 32 distinct values) | instance_id (string, 13–37 chars) | base_commit (string, 40 chars) | patch (string, 1–1.89M chars) | test_patch (string, 1 distinct value) | problem_statement (string, 304–69k chars) | hints_text (string, 0–246k chars) | created_at (string, 20 chars) | version (string, 1 distinct value) | FAIL_TO_PASS (string, 1 distinct value) | PASS_TO_PASS (string, 1 distinct value) | environment_setup_commit (string, 1 distinct value) | traceback (string, 64–23.4k chars) | __index_level_0__ (int64, 29–19k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pandas-dev/pandas | pandas-dev__pandas-17364 | e8a1765edf91ec4d087b46b90d5e54530550029b | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -405,6 +405,7 @@ Reshaping
- Bug in :func:`crosstab` where passing two ``Series`` with the same name raised a ``KeyError`` (:issue:`13279`)
- :func:`Series.argmin`, :func:`Series.argmax`, and their counterparts on ``DataFrame`` and groupby objects work correctly with floating point data that contains infinite values (:issue:`13595`).
- Bug in :func:`unique` where checking a tuple of strings raised a ``TypeError`` (:issue:`17108`)
+- Bug in :func:`concat` where order of result index was unpredictable if it contained non-comparable elements (:issue:`17344`)
Numeric
^^^^^^^
diff --git a/pandas/core/common.py b/pandas/core/common.py
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -629,3 +629,17 @@ def _random_state(state=None):
else:
raise ValueError("random_state must be an integer, a numpy "
"RandomState, or None")
+
+
+def _get_distinct_objs(objs):
+ """
+ Return a list with distinct elements of "objs" (different ids).
+ Preserves order.
+ """
+ ids = set()
+ res = []
+ for obj in objs:
+ if not id(obj) in ids:
+ ids.add(id(obj))
+ res.append(obj)
+ return res
diff --git a/pandas/core/indexes/api.py b/pandas/core/indexes/api.py
--- a/pandas/core/indexes/api.py
+++ b/pandas/core/indexes/api.py
@@ -23,8 +23,7 @@
'PeriodIndex', 'DatetimeIndex',
'_new_Index', 'NaT',
'_ensure_index', '_get_na_value', '_get_combined_index',
- '_get_objs_combined_axis',
- '_get_distinct_indexes', '_union_indexes',
+ '_get_objs_combined_axis', '_union_indexes',
'_get_consensus_names',
'_all_indexes_same']
@@ -41,7 +40,7 @@ def _get_objs_combined_axis(objs, intersect=False, axis=0):
def _get_combined_index(indexes, intersect=False):
# TODO: handle index names!
- indexes = _get_distinct_indexes(indexes)
+ indexes = com._get_distinct_objs(indexes)
if len(indexes) == 0:
return Index([])
if len(indexes) == 1:
@@ -55,10 +54,6 @@ def _get_combined_index(indexes, intersect=False):
return _ensure_index(union)
-def _get_distinct_indexes(indexes):
- return list(dict((id(x), x) for x in indexes).values())
-
-
def _union_indexes(indexes):
if len(indexes) == 0:
raise AssertionError('Must have at least 1 Index to union')
| Inconsistent (random) behaviour of pd.concat with different indexes
#### Code Sample, a copy-pastable example if possible
The following code
```python
import pandas as pd
dfs_sq = []
dfs_sq.append(pd.DataFrame(index=[0, 'sess'], columns=range(1,3)))
for i in range(5):
dfs_sq.append(pd.DataFrame(index=[0, 1, 'sess'], columns=range(1,3)))
df_sq = pd.concat(dfs_sq, axis=1)
assert df_sq.index[1] == 'sess', df_sq.index
```
... saved as ``dependable.py``, succeeds if called as ``PYTHONHASHSEED=40 python3 dependable.py`` and fails if called as ``PYTHONHASHSEED=41 python3 dependable.py``:
```bash
Traceback (most recent call last):
File "dependable.py", line 11, in <module>
assert df_sq.index[1] == 'sess', df_sq.index
AssertionError: Index([0, 1, 'sess'], dtype='object')
```
#### Problem description
This is not very nice.
#### Expected Output
The same output on every run; which particular order is produced is not important.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.5.3.final.0
python-bits: 64
OS: Linux
OS-release: 4.9.0-3-amd64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: it_IT.UTF-8
LOCALE: it_IT.UTF-8
pandas: 0.21.0.dev+389.g276f3089a
pytest: 3.0.6
pip: 9.0.1
setuptools: None
Cython: 0.25.2
numpy: 1.12.1
scipy: 0.19.0
pyarrow: None
xarray: None
IPython: 5.1.0.dev
sphinx: 1.5.6
patsy: 0.4.1
dateutil: 2.6.0
pytz: 2017.2
blosc: None
bottleneck: 1.2.1
tables: 3.3.0
numexpr: 2.6.1
feather: 0.3.1
matplotlib: 2.0.2
openpyxl: None
xlrd: 1.0.0
xlwt: 1.1.2
xlsxwriter: 0.9.6
lxml: None
bs4: 4.5.3
html5lib: 0.999999999
sqlalchemy: 1.0.15
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: 0.2.1
</details>
| Presumably this is because your Index is un-sortable, so the hash union defines the resulting order.
Not sure there's anything that can be done?
```python
In [16]: idx = dfs_sq[0].index.union(dfs_sq[1].index)
In [17]: idx
Out[17]: Index([0, 'sess', 1], dtype='object')
In [18]: idx.sort_values()
TypeError: '>' not supported between instances of 'int' and 'str'
```
> Not sure there's anything that can be done?
OK, I'm replying without having looked at the code, but: if two indexes are unsortable, then I expect the resulting union to respect their original order (with e.g. priority given to the first index if orders don't coincide). Actually, I would have expected this to happen even when the two indexes, and the union too, are sortable...
I'm also talking without having looked at much code ... but I believe we're currently doing a hash-based unique on the entire set of values, that's what tosses out the order. I suppose we could do something more iterative like you're suggesting that would preserve it. | 2017-08-28T22:33:57Z | [] | [] |
Traceback (most recent call last):
File "dependable.py", line 11, in <module>
assert df_sq.index[1] == 'sess', df_sq.index
AssertionError: Index([0, 1, 'sess'], dtype='object')
| 11,343 |
|||
pandas-dev/pandas | pandas-dev__pandas-17507 | 21a38008e3cab7a0459cce4fab4ace11379c3148 | diff --git a/asv_bench/benchmarks/timestamp.py b/asv_bench/benchmarks/timestamp.py
--- a/asv_bench/benchmarks/timestamp.py
+++ b/asv_bench/benchmarks/timestamp.py
@@ -1,5 +1,7 @@
from .pandas_vb_common import *
from pandas import to_timedelta, Timestamp
+import pytz
+import datetime
class TimestampProperties(object):
@@ -58,3 +60,24 @@ def time_is_leap_year(self):
def time_microsecond(self):
self.ts.microsecond
+
+
+class TimestampOps(object):
+ goal_time = 0.2
+
+ def setup(self):
+ self.ts = Timestamp('2017-08-25 08:16:14')
+ self.ts_tz = Timestamp('2017-08-25 08:16:14', tz='US/Eastern')
+
+ dt = datetime.datetime(2016, 3, 27, 1)
+ self.tzinfo = pytz.timezone('CET').localize(dt, is_dst=False).tzinfo
+ self.ts2 = Timestamp(dt)
+
+ def time_replace_tz(self):
+ self.ts.replace(tzinfo=pytz.timezone('US/Eastern'))
+
+ def time_replace_across_dst(self):
+ self.ts2.replace(tzinfo=self.tzinfo)
+
+ def time_replace_None(self):
+ self.ts_tz.replace(tzinfo=None)
diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -487,6 +487,7 @@ Conversion
- Bug in ``IntervalIndex.is_non_overlapping_monotonic`` when intervals are closed on both sides and overlap at a point (:issue:`16560`)
- Bug in :func:`Series.fillna` returns frame when ``inplace=True`` and ``value`` is dict (:issue:`16156`)
- Bug in :attr:`Timestamp.weekday_name` returning a UTC-based weekday name when localized to a timezone (:issue:`17354`)
+- Bug in ``Timestamp.replace`` when replacing ``tzinfo`` around DST changes (:issue:`15683`)
Indexing
^^^^^^^^
diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -142,6 +142,7 @@ def ints_to_pydatetime(ndarray[int64_t] arr, tz=None, freq=None, box=False):
cdef:
Py_ssize_t i, n = len(arr)
+ ndarray[int64_t] trans, deltas
pandas_datetimestruct dts
object dt
int64_t value
@@ -417,8 +418,9 @@ class Timestamp(_Timestamp):
def _round(self, freq, rounder):
- cdef int64_t unit
- cdef object result, value
+ cdef:
+ int64_t unit, r, value, buff = 1000000
+ object result
from pandas.tseries.frequencies import to_offset
unit = to_offset(freq).nanos
@@ -429,16 +431,15 @@ class Timestamp(_Timestamp):
if unit < 1000 and unit % 1000 != 0:
# for nano rounding, work with the last 6 digits separately
# due to float precision
- buff = 1000000
- result = (buff * (value // buff) + unit *
- (rounder((value % buff) / float(unit))).astype('i8'))
+ r = (buff * (value // buff) + unit *
+ (rounder((value % buff) / float(unit))).astype('i8'))
elif unit >= 1000 and unit % 1000 != 0:
msg = 'Precision will be lost using frequency: {}'
warnings.warn(msg.format(freq))
- result = (unit * rounder(value / float(unit)).astype('i8'))
+ r = (unit * rounder(value / float(unit)).astype('i8'))
else:
- result = (unit * rounder(value / float(unit)).astype('i8'))
- result = Timestamp(result, unit='ns')
+ r = (unit * rounder(value / float(unit)).astype('i8'))
+ result = Timestamp(r, unit='ns')
if self.tz is not None:
result = result.tz_localize(self.tz)
return result
@@ -683,14 +684,16 @@ class Timestamp(_Timestamp):
cdef:
pandas_datetimestruct dts
- int64_t value
+ int64_t value, value_tz, offset
object _tzinfo, result, k, v
+ datetime ts_input
# set to naive if needed
_tzinfo = self.tzinfo
value = self.value
if _tzinfo is not None:
- value = tz_convert_single(value, 'UTC', _tzinfo)
+ value_tz = tz_convert_single(value, _tzinfo, 'UTC')
+ value += value - value_tz
# setup components
pandas_datetime_to_datetimestruct(value, PANDAS_FR_ns, &dts)
@@ -724,16 +727,14 @@ class Timestamp(_Timestamp):
_tzinfo = tzinfo
# reconstruct & check bounds
- value = pandas_datetimestruct_to_datetime(PANDAS_FR_ns, &dts)
+ ts_input = datetime(dts.year, dts.month, dts.day, dts.hour, dts.min,
+ dts.sec, dts.us, tzinfo=_tzinfo)
+ ts = convert_to_tsobject(ts_input, _tzinfo, None, 0, 0)
+ value = ts.value + (dts.ps // 1000)
if value != NPY_NAT:
_check_dts_bounds(&dts)
- # set tz if needed
- if _tzinfo is not None:
- value = tz_convert_single(value, _tzinfo, 'UTC')
-
- result = create_timestamp_from_ts(value, dts, _tzinfo, self.freq)
- return result
+ return create_timestamp_from_ts(value, dts, _tzinfo, self.freq)
def isoformat(self, sep='T'):
base = super(_Timestamp, self).isoformat(sep=sep)
@@ -1175,7 +1176,7 @@ cdef class _Timestamp(datetime):
return np.datetime64(self.value, 'ns')
def __add__(self, other):
- cdef int64_t other_int
+ cdef int64_t other_int, nanos
if is_timedelta64_object(other):
other_int = other.astype('timedelta64[ns]').view('i8')
@@ -1625,6 +1626,10 @@ cdef inline void _localize_tso(_TSObject obj, object tz):
"""
Take a TSObject in UTC and localizes to timezone tz.
"""
+ cdef:
+ ndarray[int64_t] trans, deltas
+ Py_ssize_t delta, posn
+
if is_utc(tz):
obj.tzinfo = tz
elif is_tzlocal(tz):
@@ -1676,7 +1681,7 @@ cdef inline void _localize_tso(_TSObject obj, object tz):
obj.tzinfo = tz
-def _localize_pydatetime(object dt, object tz):
+cpdef inline object _localize_pydatetime(object dt, object tz):
"""
Take a datetime/Timestamp in UTC and localizes to timezone tz.
"""
@@ -3892,7 +3897,7 @@ for _maybe_method_name in dir(NaTType):
# Conversion routines
-def _delta_to_nanoseconds(delta):
+cpdef int64_t _delta_to_nanoseconds(delta):
if isinstance(delta, np.ndarray):
return delta.astype('m8[ns]').astype('int64')
if hasattr(delta, 'nanos'):
@@ -4137,7 +4142,7 @@ def tz_convert(ndarray[int64_t] vals, object tz1, object tz2):
return result
-def tz_convert_single(int64_t val, object tz1, object tz2):
+cpdef int64_t tz_convert_single(int64_t val, object tz1, object tz2):
"""
Convert the val (in i8) from timezone1 to timezone2
@@ -5006,6 +5011,7 @@ cdef inline int64_t _normalized_stamp(pandas_datetimestruct *dts) nogil:
def dates_normalized(ndarray[int64_t] stamps, tz=None):
cdef:
Py_ssize_t i, n = len(stamps)
+ ndarray[int64_t] trans, deltas
pandas_datetimestruct dts
if tz is None or is_utc(tz):
| BUG: Timestamp.replace chaining not compat with datetime.replace
#### Code Sample, a copy-pastable example if possible
```python
import pytz
import pandas as pd
from datetime import datetime
pytz.timezone('CET').localize(datetime(2016, 3, 27, 1), is_dst=None)
pytz.timezone('CET').localize(pd.Timestamp(datetime(2016, 3, 27, 1)), is_dst=None)
```
#### Problem description
The above code runs with Pandas 0.18 but raises the following exception with Pandas 0.19:
```python
>>> pytz.timezone('CET').localize(pd.Timestamp(datetime(2016, 3, 27, 1)), is_dst=None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/localhome/stefan/emsconda/envs/popeye/lib/python3.6/site-packages/pytz/tzinfo.py", line 327, in localize
raise NonExistentTimeError(dt)
pytz.exceptions.NonExistentTimeError: 2016-03-27 01:00:00
```
Is this an intentional API breakage of 0.19 or a bug?
#### Expected Output
<no exception>
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
commit: None
python: 3.6.0.final.0
python-bits: 64
OS: Linux
OS-release: 4.9.12-100.fc24.x86_64+debug
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: de_DE.UTF-8
LOCALE: de_DE.UTF-8
pandas: 0.19.2
nose: None
pip: 9.0.1
setuptools: 34.3.0
Cython: None
numpy: 1.12.0
scipy: 0.18.1
statsmodels: None
xarray: None
IPython: 5.3.0
sphinx: None
patsy: None
dateutil: 2.6.0
pytz: 2016.10
blosc: 1.5.0
bottleneck: None
tables: None
numexpr: None
matplotlib: 2.0.0
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: 1.1.5
pymysql: None
psycopg2: None
jinja2: None
boto: None
pandas_datareader: None
</details>
| This is correct according to ``pytz`` doc-string.
```
In [8]: pytz.timezone('CET').localize(Timestamp(datetime(2016, 3, 27, 1)), is_dst=True)
Out[8]: Timestamp('2016-03-27 00:00:00+0100', tz='CET')
In [9]: pytz.timezone('CET').localize(Timestamp(datetime(2016, 3, 27, 1)), is_dst=False)
Out[9]: Timestamp('2016-03-27 01:00:00+0100', tz='CET')
In [10]: pytz.timezone('CET').localize(Timestamp(datetime(2016, 3, 27, 1)), is_dst=None)
---------------------------------------------------------------------------
NonExistentTimeError Traceback (most recent call last)
<ipython-input-10-6cbd34e0bbef> in <module>()
----> 1 pytz.timezone('CET').localize(Timestamp(datetime(2016, 3, 27, 1)), is_dst=None)
/Users/jreback/miniconda3/envs/pandas/lib/python3.5/site-packages/pytz/tzinfo.py in localize(self, dt, is_dst)
325 # If we refuse to guess, raise an exception.
326 if is_dst is None:
--> 327 raise NonExistentTimeError(dt)
328
329 # If we are forcing the pre-DST side of the DST transition, we
NonExistentTimeError: 2016-03-27 01:00:00
```
actually I find the ``pytz`` behavior of ``is_dst=None`` to be just odd. They are conflating too many things into a single argument I am afraid.
Okay, thx for the feedback.
We did some more research on this issue and found the following:
The problem occurs during DST-changes when we (de)normalize input dates from dates with tzinfo to UTC dates and back from tz-less UTC dates to dates with a tzinfo.
The [stdlib docs](https://docs.python.org/3/library/datetime.html#datetime.date.replace) states:
*“Return a date with the same value, except for those parameters given new values by whichever keyword arguments are specified.”*
Lets test this:
```
import pytz
import pandas as pd
from datetime import datetime
# Base datetime and a tzinfo object
dt = datetime(2016, 3, 27, 1)
tzinfo = pytz.timezone('CET').localize(dt, is_dst=False).tzinfo
# Expected: tzinfo replaced, actual date value unchanged:
print('Datetimes:')
print(dt.replace(tzinfo=tzinfo))
print(dt.replace(tzinfo=tzinfo).replace(tzinfo=None))
# Unexpected behaviour in pandas 0.19.x:
# Other values than tzinfo were changed:
print('Pandas Timestamp:')
print(pd.Timestamp(dt).replace(tzinfo=tzinfo))
print(pd.Timestamp(dt).replace(tzinfo=tzinfo).replace(tzinfo=None))
```
Pandas 0.18.1:
```
Datetimes:
2016-03-27 01:00:00+01:00
2016-03-27 01:00:00
Pandas Timestamp:
2016-03-27 01:00:00+01:00
2016-03-27 01:00:00 # ok
```
Pandas 0.19.2:
```
Datetimes:
2016-03-27 01:00:00+01:00
2016-03-27 01:00:00
Pandas Timestamp:
2016-03-27 01:00:00+01:00
2016-03-27 00:00:00 # unexpected
```
The datetime in the last row of the Pandas 0.19.2 output is incorrect.
This readily occurs in the context of pytz as `localize()` and `normalize()` do that all the time (here: pytz 2016.10) in `pytz.tzinfo.DstTzInfo.localize`, line 314, `loc_dt = tzinfo.normalize(dt.replace(tzinfo=tzinfo))` is executed and in `pytz.tzinfo.DstTzInfo.normalize`, line 239, `dt = dt.replace(tzinfo=None)` is executed.
Interestingly, the issue only occurs if we do the “double replace” but not if we directly initialize a datetime with a tzinfo:
```
print(pd.Timestamp(datetime(2016, 3, 27, 1, tzinfo=tzinfo)))
print(pd.Timestamp(datetime(2016, 3, 27, 1, tzinfo=tzinfo).replace(tzinfo=None)))
```
Pandas 0.18.1:
```
2016-03-27 01:00:00+01:00
2016-03-27 01:00:00
```
Pandas 0.19.2:
```
2016-03-27 01:00:00+01:00
2016-03-27 01:00:00
```
I looked at the change logs and couldn't find anything related to this issue.
I guess this is a bug; ``Timestamp.replace`` should act exactly like ``datetime.replace``. It is overridden because it needs to handle parameter validation and nanoseconds. So [21] should match [19]
```
In [18]: dt.replace(tzinfo=tzinfo)
Out[18]: datetime.datetime(2016, 3, 27, 1, 0, tzinfo=<DstTzInfo 'CET' CET+1:00:00 STD>)
In [19]: dt.replace(tzinfo=tzinfo).replace(tzinfo=None)
Out[19]: datetime.datetime(2016, 3, 27, 1, 0)
In [20]: pd.Timestamp(dt).replace(tzinfo=tzinfo)
Out[20]: Timestamp('2016-03-27 01:00:00+0100', tz='CET')
In [21]: pd.Timestamp(dt).replace(tzinfo=tzinfo).replace(tzinfo=None)
Out[21]: Timestamp('2016-03-27 00:00:00')
```
All that said, I would *never* use ``.replace`` directly, and would more naturally simply use ``tz_localize`` and ``tz_convert`` (including handling of ambiguity over transitions and such).
```
In [25]: pd.Timestamp(dt).tz_localize(tzinfo)
Out[25]: Timestamp('2016-03-27 01:00:00+0100', tz='CET')
In [26]: pd.Timestamp(dt).tz_localize(tzinfo).tz_localize(None)
Out[26]: Timestamp('2016-03-27 01:00:00')
```
you are welcome to submit a PR to fix.
https://github.com/pandas-dev/pandas/commit/f8bd08e9c2fc6365980f41b846bbae4b40f08b83 is the change (has been modified slightly since then).
Unfortunately, we currently don't have the time to get familiar with the pandas internals and fix this issue ourselves.
I added a regression test for this issue as follows:
```diff
diff --git a/pandas/tests/tseries/test_timezones.py b/pandas/tests/tseries/test_timezones.py
index 1fc0e1b..75d4872 100644
--- a/pandas/tests/tseries/test_timezones.py
+++ b/pandas/tests/tseries/test_timezones.py
@@ -1233,6 +1233,18 @@ class TestTimeZones(tm.TestCase):
self.assertEqual(result_pytz.to_pydatetime().tzname(),
result_dateutil.to_pydatetime().tzname())
+ # issue 15683
+ dt = datetime(2016, 3, 27, 1)
+ tzinfo = pytz.timezone('CET').localize(dt, is_dst=False).tzinfo
+ # This should work:
+ result_dt = dt.replace(tzinfo=tzinfo)
+ result_pd = Timestamp(dt).replace(tzinfo=tzinfo)
+ self.assertEqual(result_dt.timestamp(), result_pd.timestamp())
+ # self.assertEqual(result_dt, result_pd.to_datetime()) # This fails!!!
+ # This should fail:
+ result_dt = dt.replace(tzinfo=tzinfo).replace(tzinfo=None)
+ result_pd = Timestamp(dt).replace(tzinfo=tzinfo).replace(tzinfo=None)
+ self.assertEqual(result_dt.timestamp(), result_pd.timestamp())
+ # self.assertEqual(result_dt, result_pd.to_datetime())
+
def test_index_equals_with_tz(self):
left = date_range('1/1/2011', periods=100, freq='H', tz='utc')
right = date_range('1/1/2011', periods=100, freq='H', tz='US/Eastern')
```
Surprisingly, the `assertEqual()` using `to_datetime()` fails. I don't know if this is another issue or not:
```
> self.assertEqual(result_dt, result_pd.to_datetime())
E AssertionError: datetime.datetime(2016, 3, 27, 1, 0, tzinfo=<DstTzInfo 'CET' CET+1:00:00 STD>) != datetime.datetime(2016, 3, 27, 0, 0, tzinfo=<DstTzInfo 'CET' CET+1:00:00 STD>)
```
I still have no idea how to fix this.
I played around with it a little bit more and something looks very broken:
```python
>>> import datetime, pandas, pytz
>>>
>>> # Two equal datetimes:
>>> dt = datetime.datetime(2016, 3, 27, 1)
>>> pd = pandas.Timestamp(dt)
>>> dt == pd
True
>>> dt == pd.to_pydatetime()
True
>>> dt.timestamp() == pd.timestamp()
True
>>>
>>> # Let's introduce timezones and stuff breaks:
>>>
>>> tzinfo = pytz.timezone('CET')
>>> rdt = dt.replace(tzinfo=tzinfo)
>>> rpd = pd.replace(tzinfo=tzinfo)
>>>
>>> rdt == rpd # What?
False
>>> rdt == rpd.to_pydatetime() # Really?
False
>>> rdt.timestamp() == rpd.timestamp() # Why is this True now?
True
>>> # What do we have?
>>> rdt
datetime.datetime(2016, 3, 27, 1, 0, tzinfo=<DstTzInfo 'CET' CET+1:00:00 STD>)
>>> rpd # This *looks* like rdt but is *not equal* to it.
Timestamp('2016-03-27 01:00:00+0100', tz='CET')
>>> rpd.to_pydatetime() # This is cleary not wanted:
datetime.datetime(2016, 3, 27, 0, 0, tzinfo=<DstTzInfo 'CET' CET+1:00:00 STD>)
>>>
>>> # This seems to be the logical result of the above bug:
>>> ndt = rdt.replace(tzinfo=None)
>>> npd = rpd.replace(tzinfo=None)
>>> ndt
datetime.datetime(2016, 3, 27, 1, 0)
>>> npd
Timestamp('2016-03-27 00:00:00')
>>> npd.to_pydatetime()
datetime.datetime(2016, 3, 27, 0, 0)
>>> ndt == dt
True
>>> npd == pd
False
```
The `Timestamp` constructor already seems to be broken:
```python
>>> dttz = datetime.datetime(2016, 3, 27, 1, tzinfo=tzinfo)
>>> pdtz = pandas.Timestamp(2016, 3, 27, 1, tzinfo=tzinfo)
>>> dttz
datetime.datetime(2016, 3, 27, 1, 0, tzinfo=<DstTzInfo 'CET' CET+1:00:00 STD>)
>>> pdtz # Where is the tzinfo?
Timestamp('2016-03-27 01:00:00')
>>> dttz.timestamp() == pdtz.timestamp() # Expected
True
>>> dttz == pdtz # Unexpected
False
>>> dttz == pdtz.to_pydatetime() # Unexpected
False
```
@sscherfke ``datetime.datetime`` has a different underlying representation
```
In [1]: dt = pd.Timestamp('2016-03-27 01:00:00', tz='CET')
In [2]: dt
Out[2]: Timestamp('2016-03-27 00:00:00+0100', tz='CET')
In [3]: dt.tz_convert('UTC')
Out[3]: Timestamp('2016-03-26 23:00:00+0000', tz='UTC')
In [4]: dt.tz_convert('UTC').value
Out[4]: 1459033200000000000
In [5]: dt.value
Out[5]: 1459033200000000000
In [6]: dt.tz
Out[6]: <DstTzInfo 'CET' CET+1:00:00 STD>
In [7]: dt.tz_convert('UTC').tz
Out[7]: <UTC>
```
``Timestamp`` keeps UTC time *always* and the tz as a parameter. This allows efficient manipulation.
You are encouraged to use ``tz_localize``/``tz_convert``, as these correctly handle DST and timezone transitions and work across different tz vendors.
the construction has a small issue, xref #15777
Okay, maybe my last example might then not be related to this issue. But the problem with `replace(tzinfo)` (it does not only replace the tzinfo but also alter the the actual date/time) remains.
I'd really like to help fix this issue, but pandas has a very large code base and I'm a very new pandas user... :-/
@sscherfke well ``.replace`` is actually a very straightforward method, though it's in Cython, and it *does* call other things.
Yes, the *other things* are the problem: finding out what they are supposed to do, what they actually do, and which *other thing* is actually the culprit for this issue. :)
now that #15934 is merged the construction issues should be fixed, FYI.
Yes, the construction issues are fixed now.
I hoped that this might (accidentally) fix this issue, but it doesn't:
```pycon
>>> import datetime, pandas, pytz
>>> tzinfo = pytz.timezone('CET')
>>> dt = datetime.datetime(2016, 3, 27, 1)
>>> pd = pandas.Timestamp(dt)
>>> dttz = dt.replace(tzinfo=tzinfo)
>>> pdtz1 = pd.replace(tzinfo=tzinfo)
>>> pdtz2 = pandas.Timestamp('2016-03-27 01:00', tz='CET')
>>> dttz == pdtz1
False
>>> dttz == pdtz2
True
>>> for x in [pdtz1, pdtz2]:
... print(x, x.tzinfo, x.timestamp(), x.value, x.to_pydatetime())
...
2016-03-27 01:00:00+01:00 CET 1459036800.0 1459033200000000000 2016-03-27 00:00:00+01:00
2016-03-27 01:00:00+01:00 CET 1459036800.0 1459036800000000000 2016-03-27 01:00:00+01:00
```
As you can see, the `value` of the two Timestamps differs, so I guess `replace()` breaks it somehow.
`replace()` (when called on a non-timezoned TS) calls the following four methods in this order:
- `pandas_datetime_to_datetimestruct()`
- `pandas_datetimestruct_to_datetime()`
- `tz_convert_single()`
- `create_timestamp_from_ts()`
The `pandas_a_to_b()` methods are neither defined nor imported in the module (??), so I took a closer look at the remaining two.
`create_timestamp_from_ts()` does not appear to do any calculations on the `value`.
So I think `tz_convert_single()` remains as the most probable culprit.
Yes, `tz_convert_single()` is the culprit. I added a few prints in `replace()`. Before that method is called at the end, the value is `1459040400000000000` and afterwards it is `1459033200000000000`. The difference is 2h (which is wrong – it should be 1h).
yeah this should not be converting, instead it should be localizing.
```
diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
index c471d46..6356073 100644
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -732,7 +732,9 @@ class Timestamp(_Timestamp):
# set tz if needed
if _tzinfo is not None:
- value = tz_convert_single(value, _tzinfo, 'UTC')
+ value = tz_localize_to_utc(np.array([value], dtype='i8'), _tzinfo,
+ ambiguous='raise',
+ errors='raise')[0]
result = create_timestamp_from_ts(value, dts, _tzinfo, self.freq)
return result
```
this breaks another test, but the example below passes (so you would turn this into an actual test)
```
In [1]: import datetime, pandas, pytz
...: tzinfo = pytz.timezone('CET')
...: dt = datetime.datetime(2016, 3, 27, 1)
...: pd = pandas.Timestamp(dt)
...: dttz = dt.replace(tzinfo=tzinfo)
...: pdtz1 = pd.replace(tzinfo=tzinfo)
...: pdtz2 = pandas.Timestamp('2016-03-27 01:00', tz='CET')
...:
In [2]: dttz == pdtz1
Out[2]: True
In [3]: dttz == pdtz2
Out[3]: True
```
I am very confused about what's happening inside Pandas:
```pycon
>>> dt = datetime.datetime(2016, 3, 27, 1)
>>> datetime.datetime.fromtimestamp(pandas.Timestamp(dt).timestamp())
datetime.datetime(2016, 3, 27, 1, 0)
>>> datetime.datetime.fromtimestamp(pandas.Timestamp(dt).value / 1000000000)
datetime.datetime(2016, 3, 27, 3, 0)
```
I thought `value` would be a high-res UTC timestamp but it is actually two hours ahead of `timestamp()` (at least in this case).
When `value` is converted from `CET` to `UTC` at the end of `replace()`, `tz_convert_single()` detects that `value` is summer time (CEST) (because 2016-03-27 03:00 *is* CEST), it calculates an offset of 2h.
*edit: Saw your comments only after I wrote this comment.
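For reference, the transition involved can be shown with pytz alone; a small sketch (2016-03-27 is the CET to CEST switch, so 02:00-02:59 does not exist on the wall clock):
```python
import pytz
from datetime import datetime

cet = pytz.timezone('CET')
print(cet.localize(datetime(2016, 3, 27, 1)))  # 2016-03-27 01:00:00+01:00 (CET)
print(cet.localize(datetime(2016, 3, 27, 3)))  # 2016-03-27 03:00:00+02:00 (CEST)
```
A wall time of 03:00 is therefore interpreted as CEST (UTC+2), which is how `tz_convert_single()` ends up applying a 2h offset to a value that should only be shifted by 1h.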
A test case like this will show the issue and pass when your proposed fix is applied:
```python
def test_issue_15683(self):
# issue 15683
dt = datetime(2016, 3, 27, 1)
tzinfo = pytz.timezone('CET').localize(dt, is_dst=False).tzinfo
result_dt = dt.replace(tzinfo=tzinfo)
result_pd = Timestamp(dt).replace(tzinfo=tzinfo)
self.assertEqual(result_dt.timestamp(), result_pd.timestamp())
self.assertEqual(result_dt, result_pd.to_pydatetime())
self.assertEqual(result_dt, result_pd)
result_dt = dt.replace(tzinfo=tzinfo).replace(tzinfo=None)
result_pd = Timestamp(dt).replace(tzinfo=tzinfo).replace(tzinfo=None)
self.assertEqual(result_dt.timestamp(), result_pd.timestamp())
self.assertEqual(result_dt, result_pd.to_pydatetime())
self.assertEqual(result_dt, result_pd)
```
happy to take a PR to fix as I said.
What about the breaking test?
if you'd like to delve into that, it would be helpful
More problems (with our fix):
```pycon
>>> import datetime, pandas, pytz
>>> tzinfo = pytz.timezone('CET')
>>>
>>> # Reference case with datetime.datetime object
>>> pd = pandas.Timestamp('2016-10-30 01:15').to_pydatetime()
>>> pd
datetime.datetime(2016, 10, 30, 1, 15)
>>> tzinfo.localize(pd, is_dst=True)
datetime.datetime(2016, 10, 30, 1, 15, tzinfo=<DstTzInfo 'CET' CEST+2:00:00 DST>)
>>>
>>> # Error in Pandas
>>> pd = pandas.Timestamp('2016-10-30 01:15')
>>> pd
Timestamp('2016-10-30 01:15:00')
>>> tzinfo.localize(pd, is_dst=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../envs/pandas/lib/python3.6/site-packages/pytz/tzinfo.py", line 314, in localize
loc_dt = tzinfo.normalize(dt.replace(tzinfo=tzinfo))
File ".../envs/pandas/lib/python3.6/site-packages/pytz/tzinfo.py", line 242, in normalize
return self.fromutc(dt)
File ".../envs/pandas/lib/python3.6/site-packages/pytz/tzinfo.py", line 187, in fromutc
return (dt + inf[0]).replace(tzinfo=self._tzinfos[inf])
File "pandas/_libs/tslib.pyx", line 735, in pandas._libs.tslib.Timestamp.replace (pandas/_libs/tslib.c:14931)
value = tz_localize_to_utc(np.array([value], dtype='i8'), _tzinfo,
File "pandas/_libs/tslib.pyx", line 4582, in pandas._libs.tslib.tz_localize_to_utc (pandas/_libs/tslib.c:77718)
raise pytz.AmbiguousTimeError(
pytz.exceptions.AmbiguousTimeError: Cannot infer dst time from Timestamp('2016-10-30 02:15:00'), try using the 'ambiguous' argument
```
This is weird, as 01:15 is actually not ambiguous (02:15 would be). I guess the problem arises because we convert *from* our destination tz *to* UTC in `replace()`.
In the old version (without the fix) there would be no error but a wrong result (1h offset).
If a Timestamp has a tzinfo (e.g., UTC or CET), `(Timestamp.value / 1_000_000_000) == pd.timestamp()`.
If a Timestamp does *not* have a tzinfo, `(Timestamp.value / 1_000_000_000) - pd.timestamp()` is the offset of my local timezone to UTC. Is this just by chance? What is this offset and why is it there?
@sscherfke not sure what you are asking.
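For what it's worth, the asymmetry in the question can be reproduced directly (this sketch assumes a machine whose local timezone is not UTC):
```python
import pandas as pd

ts = pd.Timestamp('2016-03-27 01:00')  # naive
# .value is the wall time taken as UTC nanoseconds since the epoch, while
# the inherited datetime.timestamp() interprets a naive object in the
# *local* timezone; hence the difference equals the local UTC offset.
print(ts.value / 1_000_000_000)
print(ts.timestamp())
```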
I finally found a solution that works. All tests in `test_timezones` are passing and our own code seems to work as well. :)
```diff
diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
index c471d46..c418059 100644
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -685,14 +685,16 @@ class Timestamp(_Timestamp):
cdef:
pandas_datetimestruct dts
int64_t value
- object _tzinfo, result, k, v
+ object _tzinfo, result, k, v, ts_input
_TSObject ts
# set to naive if needed
_tzinfo = self.tzinfo
value = self.value
if _tzinfo is not None:
- value = tz_convert_single(value, 'UTC', _tzinfo)
+ value_tz = tz_convert_single(value, _tzinfo, 'UTC')
+ offset = value - value_tz
+ value += offset
# setup components
pandas_datetime_to_datetimestruct(value, PANDAS_FR_ns, &dts)
@@ -726,16 +728,14 @@ class Timestamp(_Timestamp):
_tzinfo = tzinfo
# reconstruct & check bounds
- value = pandas_datetimestruct_to_datetime(PANDAS_FR_ns, &dts)
+ ts_input = datetime(dts.year, dts.month, dts.day, dts.hour, dts.min,
+ dts.sec, dts.us, tzinfo=_tzinfo)
+ ts = convert_to_tsobject(ts_input, _tzinfo, None, 0, 0)
+ value = ts.value + (dts.ps // 1000)
if value != NPY_NAT:
_check_dts_bounds(&dts)
- # set tz if needed
- if _tzinfo is not None:
- value = tz_convert_single(value, _tzinfo, 'UTC')
-
- result = create_timestamp_from_ts(value, dts, _tzinfo, self.freq)
- return result
+ return create_timestamp_from_ts(value, dts, _tzinfo, self.freq)
def isoformat(self, sep='T'):
base = super(_Timestamp, self).isoformat(sep=sep)
diff --git a/pandas/tests/tseries/test_timezones.py b/pandas/tests/tseries/test_timezones.py
index 06b6bbb..08b8040 100644
--- a/pandas/tests/tseries/test_timezones.py
+++ b/pandas/tests/tseries/test_timezones.py
@@ -1280,6 +1280,25 @@ class TestTimeZones(tm.TestCase):
self.assertEqual(result_pytz.to_pydatetime().tzname(),
result_dateutil.to_pydatetime().tzname())
+ def test_tzreplace_issue_15683(self):
+ """Regression test for issue 15683."""
+ dt = datetime(2016, 3, 27, 1)
+ tzinfo = pytz.timezone('CET').localize(dt, is_dst=False).tzinfo
+
+ result_dt = dt.replace(tzinfo=tzinfo)
+ result_pd = Timestamp(dt).replace(tzinfo=tzinfo)
+
+ self.assertEqual(result_dt.timestamp(), result_pd.timestamp())
+ self.assertEqual(result_dt, result_pd)
+ self.assertEqual(result_dt, result_pd.to_pydatetime())
+
+ result_dt = dt.replace(tzinfo=tzinfo).replace(tzinfo=None)
+ result_pd = Timestamp(dt).replace(tzinfo=tzinfo).replace(tzinfo=None)
+
+ self.assertEqual(result_dt.timestamp(), result_pd.timestamp())
+ self.assertEqual(result_dt, result_pd)
+ self.assertEqual(result_dt, result_pd.to_pydatetime())
+
def test_index_equals_with_tz(self):
left = date_range('1/1/2011', periods=100, freq='H', tz='utc')
right = date_range('1/1/2011', periods=100, freq='H', tz='US/Eastern')
```
OK if you want to put up a PR
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/localhome/stefan/emsconda/envs/popeye/lib/python3.6/site-packages/pytz/tzinfo.py", line 327, in localize
raise NonExistentTimeError(dt)
pytz.exceptions.NonExistentTimeError: 2016-03-27 01:00:00
| 11,368 |
|||
pandas-dev/pandas | pandas-dev__pandas-17846 | 674fb96b33c07c680844f674fcdf0767b6e3c2f9 | diff --git a/doc/source/whatsnew/v0.21.1.txt b/doc/source/whatsnew/v0.21.1.txt
--- a/doc/source/whatsnew/v0.21.1.txt
+++ b/doc/source/whatsnew/v0.21.1.txt
@@ -120,7 +120,7 @@ Reshaping
- Error message in ``pd.merge_asof()`` for key datatype mismatch now includes datatype of left and right key (:issue:`18068`)
- Bug in ``pd.concat`` when empty and non-empty DataFrames or Series are concatenated (:issue:`18178` :issue:`18187`)
- Bug in ``DataFrame.filter(...)`` when :class:`unicode` is passed as a condition in Python 2 (:issue:`13101`)
--
+- Bug when merging empty DataFrames when ``np.seterr(divide='raise')`` is set (:issue:`17776`)
Numeric
^^^^^^^
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -1529,7 +1529,8 @@ def _get_join_keys(llab, rlab, shape, sort):
rkey = stride * rlab[0].astype('i8', subok=False, copy=False)
for i in range(1, nlev):
- stride //= shape[i]
+ with np.errstate(divide='ignore'):
+ stride //= shape[i]
lkey += llab[i] * stride
rkey += rlab[i] * stride
| merging two empty dataframes can incur a division by zero
#### Code Sample, a copy-pastable example if possible
```python
import numpy
import pandas
pandas.show_versions()
a = pandas.DataFrame({'a':[],'b':[],'c':[]})
numpy.seterr(divide='raise')
pandas.merge(a,a,on=('a','b')) # no problem if we only merge on 'a'.
```
#### Problem description
The call to merge triggers a division by zero.
<details>
Traceback (most recent call last):
File "/homes/mickyl/work/bugs/pandas_merge_div_by_0.py", line 8, in <module>
pandas.merge(a,a,on=('a','b'))
File "/homes/mickyl/venvs/debian8/local/lib/python2.7/site-packages/pandas/core/reshape/merge.py", line 54, in merge
return op.get_result()
File "/homes/mickyl/venvs/debian8/local/lib/python2.7/site-packages/pandas/core/reshape/merge.py", line 569, in get_result
join_index, left_indexer, right_indexer = self._get_join_info()
File "/homes/mickyl/venvs/debian8/local/lib/python2.7/site-packages/pandas/core/reshape/merge.py", line 734, in _get_join_info
right_indexer) = self._get_join_indexers()
File "/homes/mickyl/venvs/debian8/local/lib/python2.7/site-packages/pandas/core/reshape/merge.py", line 713, in _get_join_indexers
how=self.how)
File "/homes/mickyl/venvs/debian8/local/lib/python2.7/site-packages/pandas/core/reshape/merge.py", line 985, in _get_join_indexers
lkey, rkey = _get_join_keys(llab, rlab, shape, sort)
File "/homes/mickyl/venvs/debian8/local/lib/python2.7/site-packages/pandas/core/reshape/merge.py", line 1457, in _get_join_keys
stride //= shape[i]
FloatingPointError: divide by zero encountered in long_scalars
</details>
The expected behaviour is for merge to return an empty dataframe without causing division by 0.
#### Expected Output
just a print-out of all the version numbers with no exception.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.9.final.0
python-bits: 64
OS: Linux
OS-release: 4.7.0-0.bpo.1-amd64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: None.None
pandas: 0.20.3
pytest: 2.6.3
pip: 9.0.1
setuptools: 5.5.1
Cython: 0.25.2
numpy: 1.13.3
scipy: 0.19.1
xarray: None
IPython: 5.4.1
sphinx: 1.2.3
patsy: None
dateutil: 2.6.1
pytz: 2017.2
blosc: None
bottleneck: 1.2.1
tables: 3.1.1
numexpr: 2.6.4
feather: None
matplotlib: 2.0.0
openpyxl: 2.4.8
xlrd: 0.9.2
xlwt: 0.7.5
xlsxwriter: 0.5.2
lxml: 3.4.0
bs4: None
html5lib: 0.999999999
sqlalchemy: 0.9.8
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
pandas_gbq: None
pandas_datareader: None
</details>
| you could make a patch to fix this. note that we surround pandas operations with ``np.seterr(divide='ignore')`` as a matter of course; we DO want to propagate. In this case it would be checked directly though.
I'm sorry, I don't know what the right fix would be. Please remember that I don't understand this codebase like you do.
well you would have to debug this and see where the error is occurring. then surround that with a ``np.seterr(divide='ignore')``
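A sketch of the scoped variant the merged patch above uses around the `stride //= shape[i]` line; `np.errstate` restores the caller's settings on exit, unlike a bare `np.seterr` call:
```python
import numpy as np

np.seterr(divide='raise')            # simulate the caller's strict settings
stride = np.int64(6)
with np.errstate(divide='ignore'):   # scoped override, restored afterwards
    stride //= np.int64(0)           # yields 0 instead of FloatingPointError
print(stride)                        # 0
```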
Hi, does "make a patch" mean edit the code directly? Sorry, I'm new to this and thought I'd try it, but now am not sure if I did the right thing. | 2017-10-11T18:57:43Z | [] | [] |
Traceback (most recent call last):
File "/homes/mickyl/work/bugs/pandas_merge_div_by_0.py", line 8, in <module>
pandas.merge(a,a,on=('a','b'))
File "/homes/mickyl/venvs/debian8/local/lib/python2.7/site-packages/pandas/core/reshape/merge.py", line 54, in merge
return op.get_result()
File "/homes/mickyl/venvs/debian8/local/lib/python2.7/site-packages/pandas/core/reshape/merge.py", line 569, in get_result
join_index, left_indexer, right_indexer = self._get_join_info()
File "/homes/mickyl/venvs/debian8/local/lib/python2.7/site-packages/pandas/core/reshape/merge.py", line 734, in _get_join_info
right_indexer) = self._get_join_indexers()
File "/homes/mickyl/venvs/debian8/local/lib/python2.7/site-packages/pandas/core/reshape/merge.py", line 713, in _get_join_indexers
how=self.how)
File "/homes/mickyl/venvs/debian8/local/lib/python2.7/site-packages/pandas/core/reshape/merge.py", line 985, in _get_join_indexers
lkey, rkey = _get_join_keys(llab, rlab, shape, sort)
File "/homes/mickyl/venvs/debian8/local/lib/python2.7/site-packages/pandas/core/reshape/merge.py", line 1457, in _get_join_keys
stride //= shape[i]
FloatingPointError: divide by zero encountered in long_scalars
| 11,430 |
|||
pandas-dev/pandas | pandas-dev__pandas-17857 | 3c964a47d626a06a3f9c2d0795ee7d744dc72363 | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -956,6 +956,7 @@ I/O
- Bug in :meth:`DataFrame.to_html` with ``notebook=True`` where DataFrames with named indices or non-MultiIndex indices had undesired horizontal or vertical alignment for column or row labels, respectively (:issue:`16792`)
- Bug in :meth:`DataFrame.to_html` in which there was no validation of the ``justify`` parameter (:issue:`17527`)
- Bug in :func:`HDFStore.select` when reading a contiguous mixed-data table featuring VLArray (:issue:`17021`)
+- Bug in :func:`to_json` where several conditions (including objects with unprintable symbols, objects with deep recursion, overlong labels) caused segfaults instead of raising the appropriate exception (:issue:`14256`)
Plotting
^^^^^^^^
@@ -1033,3 +1034,4 @@ Other
^^^^^
- Bug where some inplace operators were not being wrapped and produced a copy when invoked (:issue:`12962`)
- Bug in :func:`eval` where the ``inplace`` parameter was being incorrectly handled (:issue:`16732`)
+
diff --git a/pandas/_libs/src/ujson/lib/ultrajson.h b/pandas/_libs/src/ujson/lib/ultrajson.h
--- a/pandas/_libs/src/ujson/lib/ultrajson.h
+++ b/pandas/_libs/src/ujson/lib/ultrajson.h
@@ -307,4 +307,11 @@ EXPORTFUNCTION JSOBJ JSON_DecodeObject(JSONObjectDecoder *dec,
const char *buffer, size_t cbBuffer);
EXPORTFUNCTION void encode(JSOBJ, JSONObjectEncoder *, const char *, size_t);
+#define Buffer_Reserve(__enc, __len) \
+ if ((size_t)((__enc)->end - (__enc)->offset) < (size_t)(__len)) { \
+ Buffer_Realloc((__enc), (__len)); \
+ }
+
+void Buffer_Realloc(JSONObjectEncoder *enc, size_t cbNeeded);
+
#endif // PANDAS__LIBS_SRC_UJSON_LIB_ULTRAJSON_H_
diff --git a/pandas/_libs/src/ujson/lib/ultrajsonenc.c b/pandas/_libs/src/ujson/lib/ultrajsonenc.c
--- a/pandas/_libs/src/ujson/lib/ultrajsonenc.c
+++ b/pandas/_libs/src/ujson/lib/ultrajsonenc.c
@@ -714,11 +714,6 @@ int Buffer_EscapeStringValidated(JSOBJ obj, JSONObjectEncoder *enc,
}
}
-#define Buffer_Reserve(__enc, __len) \
- if ((size_t)((__enc)->end - (__enc)->offset) < (size_t)(__len)) { \
- Buffer_Realloc((__enc), (__len)); \
- }
-
#define Buffer_AppendCharUnchecked(__enc, __chr) *((__enc)->offset++) = __chr;
FASTCALL_ATTR INLINE_PREFIX void FASTCALL_MSVC strreverse(char *begin,
@@ -976,6 +971,7 @@ void encode(JSOBJ obj, JSONObjectEncoder *enc, const char *name,
}
enc->iterEnd(obj, &tc);
+ Buffer_Reserve(enc, 2);
Buffer_AppendCharUnchecked(enc, ']');
break;
}
@@ -1003,6 +999,7 @@ void encode(JSOBJ obj, JSONObjectEncoder *enc, const char *name,
}
enc->iterEnd(obj, &tc);
+ Buffer_Reserve(enc, 2);
Buffer_AppendCharUnchecked(enc, '}');
break;
}
diff --git a/pandas/_libs/src/ujson/python/objToJSON.c b/pandas/_libs/src/ujson/python/objToJSON.c
--- a/pandas/_libs/src/ujson/python/objToJSON.c
+++ b/pandas/_libs/src/ujson/python/objToJSON.c
@@ -783,6 +783,7 @@ static void NpyArr_getLabel(JSOBJ obj, JSONTypeContext *tc, size_t *outLen,
JSONObjectEncoder *enc = (JSONObjectEncoder *)tc->encoder;
PRINTMARK();
*outLen = strlen(labels[idx]);
+ Buffer_Reserve(enc, *outLen);
memcpy(enc->offset, labels[idx], sizeof(char) * (*outLen));
enc->offset += *outLen;
*outLen = 0;
@@ -879,7 +880,7 @@ int PdBlock_iterNext(JSOBJ obj, JSONTypeContext *tc) {
NpyArrContext *npyarr;
PRINTMARK();
- if (PyErr_Occurred()) {
+ if (PyErr_Occurred() || ((JSONObjectEncoder *)tc->encoder)->errorMsg) {
return 0;
}
@@ -1224,6 +1225,10 @@ int Dir_iterNext(JSOBJ _obj, JSONTypeContext *tc) {
PyObject *attrName;
char *attrStr;
+ if (PyErr_Occurred() || ((JSONObjectEncoder *)tc->encoder)->errorMsg) {
+ return 0;
+ }
+
if (itemValue) {
Py_DECREF(GET_TC(tc)->itemValue);
GET_TC(tc)->itemValue = itemValue = NULL;
| BUG: to_json with objects causing segfault
#### Code Sample, a copy-pastable example if possible
Creating a bson ObjectId without giving an ID explicitly is OK.
``` python
>>> import bson
>>> import pandas as pd
>>> pd.DataFrame({'A': [bson.objectid.ObjectId()]}).to_json()
Out[4]: '{"A":{"0":{"binary":"W\\u0e32\\u224cug\\u00fcR","generation_time":1474361586000}}}'
>>> pd.DataFrame({'A': [bson.objectid.ObjectId()], 'B': [1]}).to_json()
Out[5]: '{"A":{"0":{"binary":"W\\u0e4e\\u224cug\\u00fcS","generation_time":1474361614000}},"B":{"0":1}}'
```
However, if you provide an ID explicitly, an exception is raised
``` python
>>> pd.DataFrame({'A': [bson.objectid.ObjectId('574b4454ba8c5eb4f98a8f45')]}).to_json()
Traceback (most recent call last):
File "/auto/energymdl2/anaconda/envs/commod_20160831/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2885, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-7-c9a20090d481>", line 1, in <module>
pd.DataFrame({'A': [bson.objectid.ObjectId('574b4454ba8c5eb4f98a8f45')]}).to_json()
File "/auto/energymdl2/anaconda/envs/commod_20160831/lib/python2.7/site-packages/pandas/core/generic.py", line 1056, in to_json
default_handler=default_handler)
File "/auto/energymdl2/anaconda/envs/commod_20160831/lib/python2.7/site-packages/pandas/io/json.py", line 36, in to_json
date_unit=date_unit, default_handler=default_handler).write()
File "/auto/energymdl2/anaconda/envs/commod_20160831/lib/python2.7/site-packages/pandas/io/json.py", line 79, in write
default_handler=self.default_handler)
OverflowError: Unsupported UTF-8 sequence length when encoding string
```
And worse, if the column is not the only column, the entire process dies.
``` python
>>> pd.DataFrame({'A': [bson.objectid.ObjectId('574b4454ba8c5eb4f98a8f45')], 'B': [1]}).to_json()
Process finished with exit code 139
```
#### Expected Output
#### output of `pd.show_versions()`
```
pandas: 0.18.1
nose: 1.3.7
pip: 8.1.2
setuptools: 26.1.1
Cython: 0.24
numpy: 1.10.4
scipy: 0.17.0
statsmodels: 0.6.1
xarray: 0.7.2
IPython: 4.1.2
sphinx: 1.3.5
patsy: 0.4.1
dateutil: 2.5.2
pytz: 2016.6.1
blosc: None
bottleneck: 1.0.0
tables: 3.2.2
numexpr: 2.5.2
matplotlib: 1.5.1
openpyxl: 2.3.2
xlrd: 0.9.4
xlwt: 1.0.0
xlsxwriter: 0.8.4
lxml: 3.6.0
bs4: 4.3.2
html5lib: 0.999
httplib2: 0.9.2
apiclient: 1.5.0
sqlalchemy: 1.0.13
pymysql: None
psycopg2: None
jinja2: 2.8
boto: 2.39.0
pandas_datareader: None
```
pymongo version is 3.3.0
| When passing object dtypes which don't actually contain strings (though they could also contain objects which have a good enough response to special methods to work), you must supply a `default_handler`.
So the first 2 cases above are expected.
The 3rd is handled this way.
```
In [6]: pd.DataFrame({'A': [bson.objectid.ObjectId('574b4454ba8c5eb4f98a8f45')]}).to_json(default_handler=str)
Out[6]: '{"A":{"0":"574b4454ba8c5eb4f98a8f45"}}'
```
seg faulting shouldn't happen though; we should get an exception that a `default_handler` is not supplied.
http://pandas.pydata.org/pandas-docs/stable/io.html#fallback-behavior
cc @kawochen
cc @Komnomnomnom
I suppose the 2nd path is also not reporting that a `default_handler` is missing
```
In [10]: pd.DataFrame({'A': [bson.objectid.ObjectId('574b4454ba8c5eb4f98a8f45')]}).to_json(default_handler=str)
Out[10]: '{"A":{"0":"574b4454ba8c5eb4f98a8f45"}}'
```
This impacted us this weekend as well. Our default_handler was only handling specific objects that we wanted to control the json serialization for, but would otherwise return the object. We have since changed the logic of the default_handler to serialize everything, but just raising an error if a default_handler is not present does not prevent the to_json method from causing a segfault for different objects.
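A hedged sketch of such a catch-all handler (the `{'$oid': ...}` shape is just an illustrative choice, not anything pandas mandates):
```python
import bson
import pandas as pd

def handler(obj):
    # special-case the types we care about, stringify anything else so no
    # unhandled object ever reaches the C encoder
    if isinstance(obj, bson.objectid.ObjectId):
        return {'$oid': str(obj)}
    return str(obj)

df = pd.DataFrame({'A': [bson.objectid.ObjectId()], 'B': [1]})
df.to_json(default_handler=handler)
```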
@jreback I should have some time this weekend or early next week to dig into these segfaults (if nobody gets to it first)
This also comes up if you have shapely geometries in a column (came up by accident when a geopandas GeoDataFrame got converted to a regular DataFrame).
If you have a small enough sample the json encoder hits the recursion limit and you get an error.
```python
>>> import pandas as pd
>>> from shapely.geometry import Polygon
>>> geom = Polygon([(0, 0), (1, 1), (1, 0)])
>>> df = pd.DataFrame([('testval {}'.format(i), geom) for i in range(5)], columns=['value', 'geometry'])
>>> df.to_json()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/david/miniconda3/envs/geotesting/lib/python3.6/site-packages/pandas/core/generic.py", line 1089, in to_json
lines=lines)
File "/home/david/miniconda3/envs/geotesting/lib/python3.6/site-packages/pandas/io/json.py", line 39, in to_json
date_unit=date_unit, default_handler=default_handler).write()
File "/home/david/miniconda3/envs/geotesting/lib/python3.6/site-packages/pandas/io/json.py", line 85, in write
default_handler=self.default_handler)
OverflowError: Maximum recursion level reached
```
Add more rows to the DataFrame and you can get a segfault (doesn't appear to be guaranteed - sometimes you get the OverflowError).
```python
>>> df = pd.DataFrame([('testval {}'.format(i), geom) for i in range(5000)], columns=['value', 'geometry'])
>>> df.to_json()
Segmentation fault (core dumped)
```
@DavidCEllis you need to supply a ``default_handler``
@jreback Sorry, I wasn't quite clear - this was a simple way to reproduce the segfault. It's not how I ran into the issue. I know how to make it work, I just wouldn't expect a segfault.
The issue was I expected the object to be a GeoPandas GeoDataFrame and it had converted to a regular DataFrame through some operation. On a GeoDataFrame the method works without needing to specify a default_handler.
On a regular DataFrame I would expect an exception like the overflow error but got a segfault.
```python
>>> import pandas as pd
>>> import geopandas as gpd
>>> from shapely.geometry import Polygon
>>> geom = Polygon([(0, 0), (1, 1), (1, 0)])
>>> gdf = gpd.GeoDataFrame([('testval {}'.format(i), geom) for i in range(5000)], columns=['value', 'geometry'])
>>> gdf.to_json()
'Really long GeoJSON string output'
>>> df = pd.DataFrame(gdf) # GeoDataFrame is a subclass
>>> df.to_json()
Segmentation fault (core dumped)
```
@DavidCEllis as you can see from above this is an open bug, pull-requests are welcome to fix. This should raise as ``default_handler`` not supplied. you cannot serialize something that is not a standard object or a pandas object (w/o special support of course). but it shouldn't segfault either.
Fair point. Unfortunately I got to the point where the json export methods send the entire dataframe into a C function and I'm not a C programmer.
[Based on the docs you linked earlier](http://pandas.pydata.org/pandas-docs/stable/io.html#fallback-behavior) I think the 'default_handler not supplied' error will only come up if you supply an unsupported numpy dtype? It looks like it's falling back on the unsupported object behaviour, which finishes with:
> convert the object to a dict by traversing its contents. However this will often fail with an OverflowError or give unexpected results
It seems that sometimes it ends up segfaulting instead of raising the OverflowError. On testing it seemed more likely to segfault the larger the array, sometimes the same sized array would segfault and sometimes it would raise the OverflowError. Not sure if this is useful but it seemed to be additional information on how it was being triggered.
When dealing with dataframes that contain exotic datatypes, you need a default handler for ``to_json``. This has bitten my team a couple of times since we first posted the linked issue above. For now, always include a ``default_handler``.
| 2017-10-12T15:59:09Z | [] | [] |
Traceback (most recent call last):
File "/auto/energymdl2/anaconda/envs/commod_20160831/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2885, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-7-c9a20090d481>", line 1, in <module>
pd.DataFrame({'A': [bson.objectid.ObjectId('574b4454ba8c5eb4f98a8f45')]}).to_json()
File "/auto/energymdl2/anaconda/envs/commod_20160831/lib/python2.7/site-packages/pandas/core/generic.py", line 1056, in to_json
default_handler=default_handler)
File "/auto/energymdl2/anaconda/envs/commod_20160831/lib/python2.7/site-packages/pandas/io/json.py", line 36, in to_json
date_unit=date_unit, default_handler=default_handler).write()
File "/auto/energymdl2/anaconda/envs/commod_20160831/lib/python2.7/site-packages/pandas/io/json.py", line 79, in write
default_handler=self.default_handler)
OverflowError: Unsupported UTF-8 sequence length when encoding string
| 11,431 |
|||
pandas-dev/pandas | pandas-dev__pandas-18017 | 5959ee3e133723136d4862864988a63ef3cc2a2f | diff --git a/doc/source/whatsnew/v0.22.0.txt b/doc/source/whatsnew/v0.22.0.txt
--- a/doc/source/whatsnew/v0.22.0.txt
+++ b/doc/source/whatsnew/v0.22.0.txt
@@ -103,7 +103,7 @@ Indexing
I/O
^^^
--
+- :func:`read_html` now rewinds seekable IO objects after parse failure, before attempting to parse with a new parser. If a parser errors and the object is non-seekable, an informative error is raised suggesting the use of a different parser (:issue:`17975`)
-
-
diff --git a/pandas/io/html.py b/pandas/io/html.py
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -742,6 +742,18 @@ def _parse(flavor, io, match, attrs, encoding, **kwargs):
try:
tables = p.parse_tables()
except Exception as caught:
+ # if `io` is an io-like object, check if it's seekable
+ # and try to rewind it before trying the next parser
+ if hasattr(io, 'seekable') and io.seekable():
+ io.seek(0)
+ elif hasattr(io, 'seekable') and not io.seekable():
+ # if we couldn't rewind it, let the user know
+ raise ValueError('The flavor {} failed to parse your input. '
+ 'Since you passed a non-rewindable file '
+ 'object, we can\'t rewind it to try '
+ 'another parser. Try read_html() with a '
+ 'different flavor.'.format(flav))
+
retained = caught
else:
break
| BUG: error in read_html when parsing badly-escaped HTML from an io object
#### Code Sample, a copy-pastable example if possible
Create `test.html`, with the contents:
```html
<!doctype html>
<html>
<body>
<table>
<tr><td>poorly-escaped cell with an & oh noes</td></tr>
</table>
</body>
</html>
```
```py
>>> import pandas as pd
>>> pd.__version__
'0.20.3'
>>> f = open('./test.html')
>>> pd.read_html(f)
Traceback (most recent call last):
File "<input>", line 1, in <module>
pd.read_html(f)
File "/usr/lib/python3.6/site-packages/pandas/io/html.py", line 906, in read_html
keep_default_na=keep_default_na)
File "/usr/lib/python3.6/site-packages/pandas/io/html.py", line 743, in _parse
raise_with_traceback(retained)
File "/usr/lib/python3.6/site-packages/pandas/compat/__init__.py", line 344, in raise_with_traceback
raise exc.with_traceback(traceback)
ValueError: No text parsed from document: <_io.TextIOWrapper name='/home/liam/test.html' mode='r' encoding='UTF-8'>
```
#### Problem description
Pandas attempts to invoke a series of parsers on HTML documents, returning when one produces a result, and continuing to the next on error. This works fine when passing a path or entire document to `read_html()`, but when an IO object is passed, the subsequent parsers will be reading from a file whose read cursor is at EOF, producing an inscrutable 'no text parsed from document' error.
This can easily be fixed by rewinding the file with `seek(0)` before continuing to the next parser (will add PR shortly).
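A minimal illustration of the stale cursor, using the `test.html` from above:
```python
f = open('./test.html')
first = f.read()   # the first parser consumes the whole file
second = f.read()  # the next parser sees '' and reports "No text parsed"
assert second == ''
f.seek(0)          # rewinding makes the content available again
assert f.read() == first
```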
#### Expected Output
```
[ 0
0 poorly-escaped cell with an & oh noes]
```
#### Output of ``pd.show_versions()``
<details>
```
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit: e1dabf37645f0fcabeed1d845a0ada7b32415606
python: 3.6.2.final.0
python-bits: 64
OS: Linux
OS-release: 4.13.6-1-ARCH
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.21.0rc1+36.ge1dabf376.dirty
pytest: 3.2.3
pip: 9.0.1
setuptools: 36.6.0
Cython: 0.27.2
numpy: 1.13.3
scipy: None
pyarrow: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2017.2
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: 4.1.0
bs4: 4.6.0
html5lib: 0.999999999
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
```
</details>
| Interestingly, when adding a test for the patch, I noticed that stuffing that test document into a `StringIO` seems to work? lxml still fails, but the fallback on html5lib/bs4 rewinds properly.
I'll investigate that further tomorrow.
Same issue: Cannot `read_html()` a webpage directly from an `urlopen()` result when `lxml` does not like it:
```python
>>> from urllib.request import urlopen
>>> import pandas as pd
>>> url = 'http://en.wikipedia.org/wiki/Matplotlib'
>>> assert pd.read_html(urlopen(url), 'Advantages', 'bs4') # works with bs4 alone
>>> pd.read_html(urlopen(url), 'Advantages')
Traceback (most recent call last):
File "<pyshell#7>", line 1, in <module>
pd.read_html(urlopen(url), 'Advantages')
File "C:\Program Files\Python36\lib\site-packages\pandas\io\html.py", line 915, in read_html
keep_default_na=keep_default_na)
File "C:\Program Files\Python36\lib\site-packages\pandas\io\html.py", line 749, in _parse
raise_with_traceback(retained)
File "C:\Program Files\Python36\lib\site-packages\pandas\compat\__init__.py", line 367, in raise_with_traceback
raise exc.with_traceback(traceback)
ValueError: No text parsed from document: <http.client.HTTPResponse object at 0x0000000005621358>
```
Note that one cannot do `.seek(0)` on the `urlopen` return value (so the fix needs to be more complex).
I think `lxml` does something slightly different with `StringIO`s. So here is a self-contained test case:
```python
>>> import pandas as pd
>>> from mock import Mock
>>> def mock_urlopen(data, url='http://spam'):
return Mock(**{'geturl.return_value': url, 'read.side_effect': [data, '', '']})
>>> good = mock_urlopen('<table><tr><td>spam<br />eggs</td></tr></table>')
>>> bad = mock_urlopen('<table><tr><td>spam<wbr />eggs</td></tr></table>')
>>> assert pd.read_html(good)
>>> assert pd.read_html(bad, flavor='bs4')
>>> bad.reset_mock()
>>> pd.read_html(bad)
Traceback (most recent call last):
...
ValueError: No text parsed from document: <Mock id='85948960'>
>>> bad.mock_calls
[call.geturl(),
call.tell(),
call.read(4000),
call.decode('ascii', 'strict'),
call.decode().decode('ascii', 'strict'),
call.decode().decode().find(':'),
call.read()]
```
The second `.read()`-call is the one where `bs4` takes over and fails parsing the empty string.
Minimal amendment: `reset_mock()` does not rewind `read.side_effect` so here is the same with a fresh mock:
```python
>>> bad = mock_urlopen('<table><tr><td>spam<wbr />eggs</td></tr></table>')
>>> pd.read_html(bad)
Traceback (most recent call last):
...
ValueError: No text parsed from document: <Mock id='50837656'>
>>> bad.mock_calls
[call.geturl(),
call.tell(),
call.read(4000),
call.read(3952),
call.decode('ascii', 'strict'),
call.decode().decode('ascii', 'strict'),
call.decode().decode().find(':'),
call.read()]
```
Again, the last `.read()`-call is from `bs4`
The only way to rewind a urlopen is re-requesting it or buffering it, unfortunately. This becomes a _much_ more complex patch, then :frowning:
So I suppose that the try-next-parser step should raise if we only have a file handle (and not a path). Would take that as a PR.
We can seek for some IO handles, though. I don't see any reason not to add something like
```py
if hasattr(io, 'seek'):
io.seek(0)
```
and raise a warning if
```py
hasattr(io, 'read') and not hasattr(io, 'seek')
```
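A sketch of those two checks combined (hypothetical helper, not the merged patch):
```python
import warnings

def _try_rewind(io):
    # Rewind seekable handles between parser attempts; warn for
    # read-only handles such as urlopen() responses, which cannot
    # be re-read for another flavor.
    if hasattr(io, 'seek'):
        io.seek(0)
        return True
    if hasattr(io, 'read'):
        warnings.warn("file-like object is not seekable; cannot retry "
                      "parsing with another flavor")
    return False
```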
Sounds good to me. I think @jreback means that the `raise` (possibly after checking for `seek`) should occur in the branch after the first parser fails, so it makes the current behaviour more official/transparent (gives a better error message). The user can then select/try a different `flavor` (maybe the error message can hint at that).
ah, you're talking about ditching the fallthrough to the next parser entirely?
I thought for io handles (possibly only non-seekable ones). Does not occur with file names, right?
Yep, since _read() reopens the file for each parser if you're passing in filenames.
| 2017-10-28T22:44:49Z | [] | [] |
Traceback (most recent call last):
File "<input>", line 1, in <module>
pd.read_html(f)
File "/usr/lib/python3.6/site-packages/pandas/io/html.py", line 906, in read_html
keep_default_na=keep_default_na)
File "/usr/lib/python3.6/site-packages/pandas/io/html.py", line 743, in _parse
raise_with_traceback(retained)
File "/usr/lib/python3.6/site-packages/pandas/compat/__init__.py", line 344, in raise_with_traceback
raise exc.with_traceback(traceback)
ValueError: No text parsed from document: <_io.TextIOWrapper name='/home/liam/test.html' mode='r' encoding='UTF-8'>
| 11,461 |
|||
pandas-dev/pandas | pandas-dev__pandas-18248 | 9e3ad63cdb030c6b369d9d822469bb968e2d1804 | diff --git a/doc/source/whatsnew/v0.22.0.txt b/doc/source/whatsnew/v0.22.0.txt
--- a/doc/source/whatsnew/v0.22.0.txt
+++ b/doc/source/whatsnew/v0.22.0.txt
@@ -158,6 +158,6 @@ Categorical
Other
^^^^^
--
+- Improved error message when attempting to use a Python keyword as an identifier in a numexpr query (:issue:`18221`)
-
-
diff --git a/pandas/core/computation/expr.py b/pandas/core/computation/expr.py
--- a/pandas/core/computation/expr.py
+++ b/pandas/core/computation/expr.py
@@ -307,7 +307,14 @@ def __init__(self, env, engine, parser, preparser=_preparse):
def visit(self, node, **kwargs):
if isinstance(node, string_types):
clean = self.preparser(node)
- node = ast.fix_missing_locations(ast.parse(clean))
+ try:
+ node = ast.fix_missing_locations(ast.parse(clean))
+ except SyntaxError as e:
+ from keyword import iskeyword
+ if any(iskeyword(x) for x in clean.split()):
+ e.msg = ("Python keyword not valid identifier"
+ " in numexpr query")
+ raise e
method = 'visit_' + node.__class__.__name__
visitor = getattr(self, method)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2267,7 +2267,8 @@ def query(self, expr, inplace=False, **kwargs):
by default, which allows you to treat both the index and columns of the
frame as a column in the frame.
The identifier ``index`` is used for the frame index; you can also
- use the name of the index to identify it in a query.
+ use the name of the index to identify it in a query. Please note that
+ Python keywords may not be used as identifiers.
For further details and examples see the ``query`` documentation in
:ref:`indexing <indexing.query>`.
| df.query() does not support column name 'class'
#### Code Sample, a copy-pastable example if possible
```python
indices_to_plot = df.query('class>0')
```
#### Problem description
Above code results in this error traceback:
```python
Traceback (most recent call last):
File "/Users/klay6683/miniconda3/envs/stable/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2910, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-33-6e077c50ac68>", line 2, in <module>
indices_to_plot = df.query('class>0')
File "/Users/klay6683/miniconda3/envs/stable/lib/python3.6/site-packages/pandas/core/frame.py", line 2297, in query
res = self.eval(expr, **kwargs)
File "/Users/klay6683/miniconda3/envs/stable/lib/python3.6/site-packages/pandas/core/frame.py", line 2366, in eval
return _eval(expr, inplace=inplace, **kwargs)
File "/Users/klay6683/miniconda3/envs/stable/lib/python3.6/site-packages/pandas/core/computation/eval.py", line 290, in eval
truediv=truediv)
File "/Users/klay6683/miniconda3/envs/stable/lib/python3.6/site-packages/pandas/core/computation/expr.py", line 732, in __init__
self.terms = self.parse()
File "/Users/klay6683/miniconda3/envs/stable/lib/python3.6/site-packages/pandas/core/computation/expr.py", line 749, in parse
return self._visitor.visit(self.expr)
File "/Users/klay6683/miniconda3/envs/stable/lib/python3.6/site-packages/pandas/core/computation/expr.py", line 310, in visit
node = ast.fix_missing_locations(ast.parse(clean))
File "/Users/klay6683/miniconda3/envs/stable/lib/python3.6/ast.py", line 35, in parse
return compile(source, filename, mode, PyCF_ONLY_AST)
File "<unknown>", line 1
class >0
^
SyntaxError: invalid syntax
```
My column names are "occ_id, class, et, radius, lon, width, type" and if I execute this query on another column, it works fine:
```python
indices_to_plot = df.query('et>0')
```
Only the column named 'class' seems to fail.
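Until the parser handles this more gracefully, a hedged workaround is to bypass `query` for that column entirely (column names taken from the report):
```python
import pandas as pd

df = pd.DataFrame({'class': [0, 1, 2], 'et': [1.0, -1.0, 2.0]})
indices_to_plot = df[df['class'] > 0]   # same selection, no query parser
```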
#### Expected Output
Sub selection of the dataframe according to the query.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.3.final.0
python-bits: 64
OS: Darwin
OS-release: 16.7.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.21.0
pytest: 3.2.3
pip: 9.0.1
setuptools: 36.6.0
Cython: 0.27.3
numpy: 1.13.3
scipy: 0.19.1
pyarrow: None
xarray: 0.9.6
IPython: 6.2.1
sphinx: 1.6.5
patsy: 0.4.1
dateutil: 2.6.1
pytz: 2017.3
blosc: None
bottleneck: 1.2.1
tables: 3.4.2
numexpr: 2.6.4
feather: None
matplotlib: 2.1.0
openpyxl: None
xlrd: 1.1.0
xlwt: 1.3.0
xlsxwriter: None
lxml: None
bs4: 4.6.0
html5lib: 0.999999999
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| While the error message could be better, I'm not sure this is something we can support (easily) - pandas and numexpr use the python parser to evaluate these expressions, and `class` is of course a reserved word in python.
Understood, though this paragraph from the docstring made me believe it should work:
> The DataFrame.index and DataFrame.columns attributes of the DataFrame instance are placed in the query namespace by default, which allows you to treat both the index and columns of the frame as a column in the frame. The identifier index is used for the frame index; you can also use the name of the index to identify it in a query.
If it's impossible to use any reserved keywords as column names for `query` it should be explicitly called out in the docstring, I think.
> If it's impossible to use any reserved keywords as column names for query it should be explicitly called out in the docstring, I think.
Yes agreed, we may also want to wrap the parsing in a try/catch to bubble up a more directed error. PR welcome!
| 2017-11-12T21:11:07Z | [] | [] |
Traceback (most recent call last):
File "/Users/klay6683/miniconda3/envs/stable/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2910, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-33-6e077c50ac68>", line 2, in <module>
indices_to_plot = df.query('class>0')
File "/Users/klay6683/miniconda3/envs/stable/lib/python3.6/site-packages/pandas/core/frame.py", line 2297, in query
res = self.eval(expr, **kwargs)
File "/Users/klay6683/miniconda3/envs/stable/lib/python3.6/site-packages/pandas/core/frame.py", line 2366, in eval
return _eval(expr, inplace=inplace, **kwargs)
File "/Users/klay6683/miniconda3/envs/stable/lib/python3.6/site-packages/pandas/core/computation/eval.py", line 290, in eval
truediv=truediv)
File "/Users/klay6683/miniconda3/envs/stable/lib/python3.6/site-packages/pandas/core/computation/expr.py", line 732, in __init__
self.terms = self.parse()
File "/Users/klay6683/miniconda3/envs/stable/lib/python3.6/site-packages/pandas/core/computation/expr.py", line 749, in parse
return self._visitor.visit(self.expr)
File "/Users/klay6683/miniconda3/envs/stable/lib/python3.6/site-packages/pandas/core/computation/expr.py", line 310, in visit
node = ast.fix_missing_locations(ast.parse(clean))
File "/Users/klay6683/miniconda3/envs/stable/lib/python3.6/ast.py", line 35, in parse
return compile(source, filename, mode, PyCF_ONLY_AST)
File "<unknown>", line 1
class >0
^
SyntaxError: invalid syntax
| 11,503 |
|||
pandas-dev/pandas | pandas-dev__pandas-18309 | dbec3c92e08063d247e0d28937c8695bbd66fe94 | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -301,6 +301,7 @@ Indexing
- Bug in :func:`MultiIndex.remove_unused_levels` which would fill nan values (:issue:`18417`)
- Bug in :func:`MultiIndex.from_tuples`` which would fail to take zipped tuples in python3 (:issue:`18434`)
- Bug in :class:`Index` construction from list of mixed type tuples (:issue:`18505`)
+- Bug in :func:`Index.drop` when passing a list of both tuples and non-tuples (:issue:`18304`)
- Bug in :class:`IntervalIndex` where empty and purely NA data was constructed inconsistently depending on the construction method (:issue:`18421`)
- Bug in :func:`IntervalIndex.symmetric_difference` where the symmetric difference with a non-``IntervalIndex`` did not raise (:issue:`18475`)
- Bug in indexing a datetimelike ``Index`` that raised ``ValueError`` instead of ``IndexError`` (:issue:`18386`).
diff --git a/pandas/core/common.py b/pandas/core/common.py
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -398,7 +398,19 @@ def _asarray_tuplesafe(values, dtype=None):
return result
-def _index_labels_to_array(labels):
+def _index_labels_to_array(labels, dtype=None):
+ """
+ Transform label or iterable of labels to array, for use in Index.
+
+ Parameters
+ ----------
+ dtype : dtype
+ If specified, use as dtype of the resulting array, otherwise infer.
+
+ Returns
+ -------
+ array
+ """
if isinstance(labels, (compat.string_types, tuple)):
labels = [labels]
@@ -408,7 +420,7 @@ def _index_labels_to_array(labels):
except TypeError: # non-iterable
labels = [labels]
- labels = _asarray_tuplesafe(labels)
+ labels = _asarray_tuplesafe(labels, dtype=dtype)
return labels
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -3761,7 +3761,8 @@ def drop(self, labels, errors='raise'):
-------
dropped : Index
"""
- labels = _index_labels_to_array(labels)
+ arr_dtype = 'object' if self.dtype == 'object' else None
+ labels = _index_labels_to_array(labels, dtype=arr_dtype)
indexer = self.get_indexer(labels)
mask = indexer == -1
if mask.any():
| pd.Index([ ('b', 'c'), 'a']).drop(['a', ('b', 'c')]) raises ValueError
#### Code Sample, a copy-pastable example if possible
```bash
pietro@debiousci:~$ PYTHONHASHSEED=5 python3 -c "import pandas as pd; s1 = pd.Series([0,1], name='a'); s2 = pd.Series([2,3], name=('b', 'c')); print(pd.crosstab(s1, s2))"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/pietro/nobackup/repo/pandas/pandas/core/reshape/pivot.py", line 466, in crosstab
dropna=dropna, **kwargs)
File "/home/pietro/nobackup/repo/pandas/pandas/core/frame.py", line 4462, in pivot_table
margins_name=margins_name)
File "/home/pietro/nobackup/repo/pandas/pandas/core/reshape/pivot.py", line 82, in pivot_table
agged = grouped.agg(aggfunc)
File "/home/pietro/nobackup/repo/pandas/pandas/core/groupby.py", line 4191, in aggregate
return super(DataFrameGroupBy, self).aggregate(arg, *args, **kwargs)
File "/home/pietro/nobackup/repo/pandas/pandas/core/groupby.py", line 3632, in aggregate
return self._python_agg_general(arg, *args, **kwargs)
File "/home/pietro/nobackup/repo/pandas/pandas/core/groupby.py", line 873, in _python_agg_general
return self._wrap_aggregated_output(output)
File "/home/pietro/nobackup/repo/pandas/pandas/core/groupby.py", line 4254, in _wrap_aggregated_output
agg_labels = self._obj_with_exclusions._get_axis(agg_axis)
File "pandas/_libs/properties.pyx", line 39, in pandas._libs.properties.cache_readonly.__get__ (pandas/_libs/properties.c:1604)
File "/home/pietro/nobackup/repo/pandas/pandas/core/base.py", line 235, in _obj_with_exclusions
return self.obj.drop(self.exclusions, axis=1)
File "/home/pietro/nobackup/repo/pandas/pandas/core/generic.py", line 2517, in drop
obj = obj._drop_axis(labels, axis, level=level, errors=errors)
File "/home/pietro/nobackup/repo/pandas/pandas/core/generic.py", line 2549, in _drop_axis
new_axis = axis.drop(labels, errors=errors)
File "/home/pietro/nobackup/repo/pandas/pandas/core/indexes/base.py", line 3750, in drop
labels = _index_labels_to_array(labels)
File "/home/pietro/nobackup/repo/pandas/pandas/core/common.py", line 417, in _index_labels_to_array
labels = _asarray_tuplesafe(labels)
File "/home/pietro/nobackup/repo/pandas/pandas/core/common.py", line 386, in _asarray_tuplesafe
result = np.asarray(values, dtype=dtype)
File "/home/pietro/.local/lib/python3.5/site-packages/numpy/core/numeric.py", line 531, in asarray
return array(a, dtype, copy=False, order=order)
ValueError: setting an array element with a sequence
```
Compare to:
```
pietro@debiousci:~$ PYTHONHASHSEED=6 python3 -c "import pandas as pd; s1 = pd.Series([0,1], name='a'); s2 = pd.Series([2,3], name=('b', 'c')); print(pd.crosstab(s1, s2))"
('b', 'c') 2 3
a
0 1 0
1 0 1
```
#### Problem description
The above happens (pseudo-)randomly with python 3 and, it seems, always with python 2.
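The failure can be reproduced outside of ``crosstab``: mixing a plain label with a tuple label trips numpy's shape inference unless an object array is built element-wise (a minimal demonstration, not the patch itself; exact numpy behaviour varies by version):
```python
import numpy as np

labels = ['a', ('b', 'c')]
try:
    print(np.asarray(labels))       # ragged input: ValueError here on the
                                    # numpy from the report (other versions
                                    # may warn and build an object array)
except ValueError as exc:
    print(exc)

arr = np.empty(len(labels), dtype=object)
for i, lab in enumerate(labels):    # an object array keeps each tuple
    arr[i] = lab                    # intact as a single label
print(arr)
```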
#### Expected Output
The case `` PYTHONHASHSEED=6``.
#### Output of ``pd.show_versions()``
<details>
In [2]: pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.5.3.final.0
python-bits: 64
OS: Linux
OS-release: 4.9.0-3-amd64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: it_IT.UTF-8
LOCALE: it_IT.UTF-8
pandas: 0.22.0.dev0+131.g63e8527d3
pytest: 3.2.3
pip: 9.0.1
setuptools: 36.7.0
Cython: 0.25.2
numpy: 1.12.1
scipy: 0.19.0
pyarrow: None
xarray: None
IPython: 6.2.1
sphinx: 1.5.6
patsy: 0.4.1
dateutil: 2.6.1
pytz: 2017.2
blosc: None
bottleneck: 1.2.0dev
tables: 3.3.0
numexpr: 2.6.1
feather: 0.3.1
matplotlib: 2.0.0
openpyxl: None
xlrd: 1.0.0
xlwt: 1.1.2
xlsxwriter: 0.9.6
lxml: None
bs4: 4.5.3
html5lib: 0.999999999
sqlalchemy: 1.0.15
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: 0.2.1
</details>
| > The above happens (pseudo-)randomly with python 3 and, it seems, always with python 2.
It sometimes works in Python 2 as well. | 2017-11-15T15:53:00Z | [] | [] |
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/pietro/nobackup/repo/pandas/pandas/core/reshape/pivot.py", line 466, in crosstab
dropna=dropna, **kwargs)
File "/home/pietro/nobackup/repo/pandas/pandas/core/frame.py", line 4462, in pivot_table
margins_name=margins_name)
File "/home/pietro/nobackup/repo/pandas/pandas/core/reshape/pivot.py", line 82, in pivot_table
agged = grouped.agg(aggfunc)
File "/home/pietro/nobackup/repo/pandas/pandas/core/groupby.py", line 4191, in aggregate
return super(DataFrameGroupBy, self).aggregate(arg, *args, **kwargs)
File "/home/pietro/nobackup/repo/pandas/pandas/core/groupby.py", line 3632, in aggregate
return self._python_agg_general(arg, *args, **kwargs)
File "/home/pietro/nobackup/repo/pandas/pandas/core/groupby.py", line 873, in _python_agg_general
return self._wrap_aggregated_output(output)
File "/home/pietro/nobackup/repo/pandas/pandas/core/groupby.py", line 4254, in _wrap_aggregated_output
agg_labels = self._obj_with_exclusions._get_axis(agg_axis)
File "pandas/_libs/properties.pyx", line 39, in pandas._libs.properties.cache_readonly.__get__ (pandas/_libs/properties.c:1604)
File "/home/pietro/nobackup/repo/pandas/pandas/core/base.py", line 235, in _obj_with_exclusions
return self.obj.drop(self.exclusions, axis=1)
File "/home/pietro/nobackup/repo/pandas/pandas/core/generic.py", line 2517, in drop
obj = obj._drop_axis(labels, axis, level=level, errors=errors)
File "/home/pietro/nobackup/repo/pandas/pandas/core/generic.py", line 2549, in _drop_axis
new_axis = axis.drop(labels, errors=errors)
File "/home/pietro/nobackup/repo/pandas/pandas/core/indexes/base.py", line 3750, in drop
labels = _index_labels_to_array(labels)
File "/home/pietro/nobackup/repo/pandas/pandas/core/common.py", line 417, in _index_labels_to_array
labels = _asarray_tuplesafe(labels)
File "/home/pietro/nobackup/repo/pandas/pandas/core/common.py", line 386, in _asarray_tuplesafe
result = np.asarray(values, dtype=dtype)
File "/home/pietro/.local/lib/python3.5/site-packages/numpy/core/numeric.py", line 531, in asarray
return array(a, dtype, copy=False, order=order)
ValueError: setting an array element with a sequence
| 11,511 |
|||
pandas-dev/pandas | pandas-dev__pandas-18376 | f2d8db1acccd73340988af9ad5874252fd5c3967 | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -342,6 +342,7 @@ Conversion
- Bug in :class:`Series`` with ``dtype='timedelta64[ns]`` where addition or subtraction of ``TimedeltaIndex`` had results cast to ``dtype='int64'`` (:issue:`17250`)
- Bug in :class:`TimedeltaIndex` where division by a ``Series`` would return a ``TimedeltaIndex`` instead of a ``Series`` (issue:`19042`)
- Bug in :class:`Series` with ``dtype='timedelta64[ns]`` where addition or subtraction of ``TimedeltaIndex`` could return a ``Series`` with an incorrect name (issue:`19043`)
+- Fixed bug where comparing :class:`DatetimeIndex` failed to raise ``TypeError`` when attempting to compare timezone-aware and timezone-naive datetimelike objects (:issue:`18162`)
-
Indexing
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -13,14 +13,14 @@
_INT64_DTYPE,
_NS_DTYPE,
is_object_dtype,
- is_datetime64_dtype,
+ is_datetime64_dtype, is_datetime64tz_dtype,
is_datetimetz,
is_dtype_equal,
is_timedelta64_dtype,
is_integer,
is_float,
is_integer_dtype,
- is_datetime64_ns_dtype,
+ is_datetime64_ns_dtype, is_datetimelike,
is_period_dtype,
is_bool_dtype,
is_string_like,
@@ -106,8 +106,12 @@ def _dt_index_cmp(opname, cls, nat_result=False):
def wrapper(self, other):
func = getattr(super(DatetimeIndex, self), opname)
- if (isinstance(other, datetime) or
- isinstance(other, compat.string_types)):
+
+ if isinstance(other, (datetime, compat.string_types)):
+ if isinstance(other, datetime):
+ # GH#18435 strings get a pass from tzawareness compat
+ self._assert_tzawareness_compat(other)
+
other = _to_m8(other, tz=self.tz)
result = func(other)
if isna(other):
@@ -117,6 +121,10 @@ def wrapper(self, other):
other = DatetimeIndex(other)
elif not isinstance(other, (np.ndarray, Index, ABCSeries)):
other = _ensure_datetime64(other)
+
+ if is_datetimelike(other):
+ self._assert_tzawareness_compat(other)
+
result = func(np.asarray(other))
result = _values_from_object(result)
@@ -652,6 +660,20 @@ def _simple_new(cls, values, name=None, freq=None, tz=None,
result._reset_identity()
return result
+ def _assert_tzawareness_compat(self, other):
+ # adapted from _Timestamp._assert_tzawareness_compat
+ other_tz = getattr(other, 'tzinfo', None)
+ if is_datetime64tz_dtype(other):
+ # Get tzinfo from Series dtype
+ other_tz = other.dtype.tz
+ if self.tz is None:
+ if other_tz is not None:
+ raise TypeError('Cannot compare tz-naive and tz-aware '
+ 'datetime-like objects.')
+ elif other_tz is None:
+ raise TypeError('Cannot compare tz-naive and tz-aware '
+ 'datetime-like objects')
+
@property
def tzinfo(self):
"""
| DatetimeIndex comparison tzaware vs naive should raise
```
>>> dr = pd.date_range('2016-01-01', periods=6)
>>> dz = dr.tz_localize('US/Pacific')
>>> dr < dz
array([ True, True, True, True, True, True], dtype=bool)
>>> dr[0] < dz[0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/_libs/tslib.pyx", line 1169, in pandas._libs.tslib._Timestamp.__richcmp__
File "pandas/_libs/tslib.pyx", line 1230, in pandas._libs.tslib._Timestamp._assert_tzawareness_compat
TypeError: Cannot compare tz-naive and tz-aware timestamps
```
The vectorized comparison should raise too right?
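For reference, a sketch of the behaviour one would expect once this is fixed (mirroring the scalar case):
```python
import pandas as pd

dr = pd.date_range('2016-01-01', periods=3)
dz = dr.tz_localize('US/Pacific')
try:
    dr < dz                 # should raise, just like dr[0] < dz[0]
except TypeError as exc:
    print(exc)              # Cannot compare tz-naive and tz-aware ...
```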
| yes this should raise | 2017-11-20T00:21:05Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/_libs/tslib.pyx", line 1169, in pandas._libs.tslib._Timestamp.__richcmp__
File "pandas/_libs/tslib.pyx", line 1230, in pandas._libs.tslib._Timestamp._assert_tzawareness_compat
TypeError: Cannot compare tz-naive and tz-aware timestamps
| 11,522 |
|||
pandas-dev/pandas | pandas-dev__pandas-18380 | 1915ffc53ea60494f24d83844bbff00efa392c82 | diff --git a/doc/source/whatsnew/v0.22.0.txt b/doc/source/whatsnew/v0.22.0.txt
--- a/doc/source/whatsnew/v0.22.0.txt
+++ b/doc/source/whatsnew/v0.22.0.txt
@@ -26,6 +26,7 @@ Other Enhancements
- :func:`pandas.tseries.frequencies.to_offset` now accepts leading '+' signs e.g. '+1h'. (:issue:`18171`)
- :class:`pandas.io.formats.style.Styler` now has method ``hide_index()`` to determine whether the index will be rendered in ouptut (:issue:`14194`)
- :class:`pandas.io.formats.style.Styler` now has method ``hide_columns()`` to determine whether columns will be hidden in output (:issue:`14194`)
+- Improved wording of ValueError raised in :func:`Timestamp.tz_localize` function
.. _whatsnew_0220.api_breaking:
diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -1445,10 +1445,8 @@ cpdef array_with_unit_to_datetime(ndarray values, unit, errors='coerce'):
else:
if is_raise:
- raise ValueError("non convertible value {0}"
- "with the unit '{1}'".format(
- val,
- unit))
+ raise ValueError("unit='{0}' not valid with non-numerical "
+ "val='{1}'".format(unit, val))
if is_ignore:
raise AssertionError
| DOC/ERR: update error message / doc-string for to_datetime with non-convertible object and unit kw
#### A small, complete example of the issue
```
import datetime
import pandas as pd
pd.to_datetime(datetime.datetime(2016,1,1), unit='s')
Traceback (most recent call last):
File "/home/julienv/.pycharm_helpers/pydev/pydevd_exec2.py", line 3, in Exec
exec(exp, global_vars, local_vars)
File "<input>", line 1, in <module>
File "/usr/local/lib/python3.4/dist-packages/pandas/util/decorators.py", line 91, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/pandas/tseries/tools.py", line 424, in to_datetime
return _convert_listlike(np.array([arg]), box, format)[0]
File "/usr/local/lib/python3.4/dist-packages/pandas/tseries/tools.py", line 330, in _convert_listlike
errors=errors)
File "pandas/tslib.pyx", line 2144, in pandas.tslib.array_with_unit_to_datetime (pandas/tslib.c:39248)
File "pandas/tslib.pyx", line 2255, in pandas.tslib.array_with_unit_to_datetime (pandas/tslib.c:38492)
ValueError: non convertible value 2016-01-01 00:00:00with the unit 's'
```
#### Expected Output
Timestamp('2016-01-01 00:00:00')
#### Output of `pd.show_versions()`
<details>
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.4.3.final.0
python-bits: 64
OS: Linux
OS-release: 3.13.0-96-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: fr_FR.UTF-8
LOCALE: fr_FR.UTF-8
pandas: 0.19.0
nose: None
pip: 1.5.4
setuptools: 3.3
Cython: 0.20.1post0
numpy: 1.11.2
scipy: 0.18.1
statsmodels: None
xarray: None
IPython: None
sphinx: 1.2.2
patsy: None
dateutil: 2.5.3
pytz: 2016.7
blosc: None
bottleneck: None
tables: None
numexpr: None
matplotlib: 1.5.1
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: 0.999
httplib2: None
apiclient: None
sqlalchemy: None
pymysql: None
psycopg2: 2.5.3 (dt dec pq3 ext)
jinja2: 2.7.3
boto: None
pandas_datareader: None
</details>
| This may have changed, but it is actually in some way correct. As stated in the docstring (though maybe not clearly enough), the `unit` keyword is for correctly interpreting an integer or float, e.g.:
```
In [17]: pd.to_datetime(1000000000, unit='s')
Out[17]: Timestamp('2001-09-09 01:46:40')
```
It is not meant to set the precision of the resulting datetime, as this is always 'ns' whatever the input. So since the keyword would not have any effect when parsing a datetime object, I think it is correct to raise an error.
If you'd like to submit a docstring update, that would be OK. Furthermore, the error message is missing a space (after the timestamp), and I think it could be more informative, e.g. print the type of the object (as well, or maybe in lieu of the value).
Yes, an error message that says something like "unit='s' is only valid with numerical input" would be a lot more informative
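For reference, a short sketch of the intended usage (the epoch value is chosen to match the example date):
```python
import datetime
import pandas as pd

pd.to_datetime(1451606400, unit='s')           # Timestamp('2016-01-01 00:00:00')
pd.to_datetime(datetime.datetime(2016, 1, 1))  # datetime objects need no unit
```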
| 2017-11-20T04:00:49Z | [] | [] |
Traceback (most recent call last):
File "/home/julienv/.pycharm_helpers/pydev/pydevd_exec2.py", line 3, in Exec
exec(exp, global_vars, local_vars)
File "<input>", line 1, in <module>
File "/usr/local/lib/python3.4/dist-packages/pandas/util/decorators.py", line 91, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/pandas/tseries/tools.py", line 424, in to_datetime
return _convert_listlike(np.array([arg]), box, format)[0]
File "/usr/local/lib/python3.4/dist-packages/pandas/tseries/tools.py", line 330, in _convert_listlike
errors=errors)
File "pandas/tslib.pyx", line 2144, in pandas.tslib.array_with_unit_to_datetime (pandas/tslib.c:39248)
File "pandas/tslib.pyx", line 2255, in pandas.tslib.array_with_unit_to_datetime (pandas/tslib.c:38492)
ValueError: non convertible value 2016-01-01 00:00:00with the unit 's'
| 11,523 |
|||
pandas-dev/pandas | pandas-dev__pandas-18637 | 13f6267207dd1f140b12d7718277508eff5c1efb | diff --git a/pandas/_libs/sparse.pyx b/pandas/_libs/sparse.pyx
--- a/pandas/_libs/sparse.pyx
+++ b/pandas/_libs/sparse.pyx
@@ -12,8 +12,8 @@ from distutils.version import LooseVersion
# numpy versioning
_np_version = np.version.short_version
-_np_version_under1p10 = LooseVersion(_np_version) < '1.10'
-_np_version_under1p11 = LooseVersion(_np_version) < '1.11'
+_np_version_under1p10 = LooseVersion(_np_version) < LooseVersion('1.10')
+_np_version_under1p11 = LooseVersion(_np_version) < LooseVersion('1.11')
np.import_array()
np.import_ufunc()
diff --git a/pandas/compat/__init__.py b/pandas/compat/__init__.py
--- a/pandas/compat/__init__.py
+++ b/pandas/compat/__init__.py
@@ -399,7 +399,7 @@ def raise_with_traceback(exc, traceback=Ellipsis):
# dateutil minimum version
import dateutil
-if LooseVersion(dateutil.__version__) < '2.5':
+if LooseVersion(dateutil.__version__) < LooseVersion('2.5'):
raise ImportError('dateutil 2.5.0 is the minimum required version')
from dateutil import parser as _date_parser
parse_date = _date_parser.parse
diff --git a/pandas/compat/numpy/__init__.py b/pandas/compat/numpy/__init__.py
--- a/pandas/compat/numpy/__init__.py
+++ b/pandas/compat/numpy/__init__.py
@@ -9,12 +9,12 @@
# numpy versioning
_np_version = np.__version__
_nlv = LooseVersion(_np_version)
-_np_version_under1p10 = _nlv < '1.10'
-_np_version_under1p11 = _nlv < '1.11'
-_np_version_under1p12 = _nlv < '1.12'
-_np_version_under1p13 = _nlv < '1.13'
-_np_version_under1p14 = _nlv < '1.14'
-_np_version_under1p15 = _nlv < '1.15'
+_np_version_under1p10 = _nlv < LooseVersion('1.10')
+_np_version_under1p11 = _nlv < LooseVersion('1.11')
+_np_version_under1p12 = _nlv < LooseVersion('1.12')
+_np_version_under1p13 = _nlv < LooseVersion('1.13')
+_np_version_under1p14 = _nlv < LooseVersion('1.14')
+_np_version_under1p15 = _nlv < LooseVersion('1.15')
if _nlv < '1.9':
raise ImportError('this version of pandas is incompatible with '
diff --git a/pandas/conftest.py b/pandas/conftest.py
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -70,8 +70,8 @@ def ip():
is_dateutil_le_261 = pytest.mark.skipif(
- LooseVersion(dateutil.__version__) > '2.6.1',
+ LooseVersion(dateutil.__version__) > LooseVersion('2.6.1'),
reason="dateutil api change version")
is_dateutil_gt_261 = pytest.mark.skipif(
- LooseVersion(dateutil.__version__) <= '2.6.1',
+ LooseVersion(dateutil.__version__) <= LooseVersion('2.6.1'),
reason="dateutil stable version")
diff --git a/pandas/core/computation/check.py b/pandas/core/computation/check.py
--- a/pandas/core/computation/check.py
+++ b/pandas/core/computation/check.py
@@ -6,7 +6,7 @@
try:
import numexpr as ne
- ver = ne.__version__
+ ver = LooseVersion(ne.__version__)
_NUMEXPR_INSTALLED = ver >= LooseVersion(_MIN_NUMEXPR_VERSION)
if not _NUMEXPR_INSTALLED:
diff --git a/pandas/core/missing.py b/pandas/core/missing.py
--- a/pandas/core/missing.py
+++ b/pandas/core/missing.py
@@ -347,7 +347,7 @@ def _from_derivatives(xi, yi, x, order=None, der=0, extrapolate=False):
import scipy
from scipy import interpolate
- if LooseVersion(scipy.__version__) < '0.18.0':
+ if LooseVersion(scipy.__version__) < LooseVersion('0.18.0'):
try:
method = interpolate.piecewise_polynomial_interpolate
return method(xi, yi.reshape(-1, 1), x,
diff --git a/pandas/io/feather_format.py b/pandas/io/feather_format.py
--- a/pandas/io/feather_format.py
+++ b/pandas/io/feather_format.py
@@ -22,7 +22,7 @@ def _try_import():
"pip install -U feather-format\n")
try:
- feather.__version__ >= LooseVersion('0.3.1')
+ LooseVersion(feather.__version__) >= LooseVersion('0.3.1')
except AttributeError:
raise ImportError("the feather-format library must be >= "
"version 0.3.1\n"
@@ -106,7 +106,7 @@ def read_feather(path, nthreads=1):
feather = _try_import()
path = _stringify_path(path)
- if feather.__version__ < LooseVersion('0.4.0'):
+ if LooseVersion(feather.__version__) < LooseVersion('0.4.0'):
return feather.read_dataframe(path)
return feather.read_dataframe(path, nthreads=nthreads)
diff --git a/pandas/io/html.py b/pandas/io/html.py
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -684,7 +684,7 @@ def _parser_dispatch(flavor):
raise ImportError(
"BeautifulSoup4 (bs4) not found, please install it")
import bs4
- if bs4.__version__ == LooseVersion('4.2.0'):
+ if LooseVersion(bs4.__version__) == LooseVersion('4.2.0'):
raise ValueError("You're using a version"
" of BeautifulSoup4 (4.2.0) that has been"
" known to cause problems on certain"
diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py
--- a/pandas/io/parquet.py
+++ b/pandas/io/parquet.py
@@ -50,7 +50,7 @@ def __init__(self):
"\nor via pip\n"
"pip install -U pyarrow\n")
- if LooseVersion(pyarrow.__version__) < '0.4.1':
+ if LooseVersion(pyarrow.__version__) < LooseVersion('0.4.1'):
raise ImportError("pyarrow >= 0.4.1 is required for parquet"
"support\n\n"
"you can install via conda\n"
@@ -58,8 +58,10 @@ def __init__(self):
"\nor via pip\n"
"pip install -U pyarrow\n")
- self._pyarrow_lt_050 = LooseVersion(pyarrow.__version__) < '0.5.0'
- self._pyarrow_lt_060 = LooseVersion(pyarrow.__version__) < '0.6.0'
+ self._pyarrow_lt_050 = (LooseVersion(pyarrow.__version__) <
+ LooseVersion('0.5.0'))
+ self._pyarrow_lt_060 = (LooseVersion(pyarrow.__version__) <
+ LooseVersion('0.6.0'))
self.api = pyarrow
def write(self, df, path, compression='snappy',
@@ -97,7 +99,7 @@ def __init__(self):
"\nor via pip\n"
"pip install -U fastparquet")
- if LooseVersion(fastparquet.__version__) < '0.1.0':
+ if LooseVersion(fastparquet.__version__) < LooseVersion('0.1.0'):
raise ImportError("fastparquet >= 0.1.0 is required for parquet "
"support\n\n"
"you can install via conda\n"
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -248,7 +248,7 @@ def _tables():
_table_mod = tables
# version requirements
- if LooseVersion(tables.__version__) < '3.0.0':
+ if LooseVersion(tables.__version__) < LooseVersion('3.0.0'):
raise ImportError("PyTables version >= 3.0.0 is required")
# set the file open policy
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -67,11 +67,11 @@ def _is_sqlalchemy_connectable(con):
_SQLALCHEMY_INSTALLED = True
from distutils.version import LooseVersion
- ver = LooseVersion(sqlalchemy.__version__)
+ ver = sqlalchemy.__version__
# For sqlalchemy versions < 0.8.2, the BIGINT type is recognized
# for a sqlite engine, which results in a warning when trying to
# read/write a DataFrame with int64 values. (GH7433)
- if ver < '0.8.2':
+ if LooseVersion(ver) < LooseVersion('0.8.2'):
from sqlalchemy import BigInteger
from sqlalchemy.ext.compiler import compiles
diff --git a/pandas/plotting/_compat.py b/pandas/plotting/_compat.py
--- a/pandas/plotting/_compat.py
+++ b/pandas/plotting/_compat.py
@@ -8,7 +8,7 @@
def _mpl_le_1_2_1():
try:
import matplotlib as mpl
- return (str(mpl.__version__) <= LooseVersion('1.2.1') and
+ return (LooseVersion(mpl.__version__) <= LooseVersion('1.2.1') and
str(mpl.__version__)[0] != '0')
except ImportError:
return False
@@ -19,8 +19,9 @@ def _mpl_ge_1_3_1():
import matplotlib
# The or v[0] == '0' is because their versioneer is
# messed up on dev
- return (matplotlib.__version__ >= LooseVersion('1.3.1') or
- matplotlib.__version__[0] == '0')
+ return (LooseVersion(matplotlib.__version__) >=
+ LooseVersion('1.3.1') or
+ str(matplotlib.__version__)[0] == '0')
except ImportError:
return False
@@ -28,8 +29,8 @@ def _mpl_ge_1_3_1():
def _mpl_ge_1_4_0():
try:
import matplotlib
- return (matplotlib.__version__ >= LooseVersion('1.4') or
- matplotlib.__version__[0] == '0')
+ return (LooseVersion(matplotlib.__version__) >= LooseVersion('1.4') or
+ str(matplotlib.__version__)[0] == '0')
except ImportError:
return False
@@ -37,8 +38,8 @@ def _mpl_ge_1_4_0():
def _mpl_ge_1_5_0():
try:
import matplotlib
- return (matplotlib.__version__ >= LooseVersion('1.5') or
- matplotlib.__version__[0] == '0')
+ return (LooseVersion(matplotlib.__version__) >= LooseVersion('1.5') or
+ str(matplotlib.__version__)[0] == '0')
except ImportError:
return False
@@ -46,7 +47,7 @@ def _mpl_ge_1_5_0():
def _mpl_ge_2_0_0():
try:
import matplotlib
- return matplotlib.__version__ >= LooseVersion('2.0')
+ return LooseVersion(matplotlib.__version__) >= LooseVersion('2.0')
except ImportError:
return False
@@ -62,7 +63,7 @@ def _mpl_le_2_0_0():
def _mpl_ge_2_0_1():
try:
import matplotlib
- return matplotlib.__version__ >= LooseVersion('2.0.1')
+ return LooseVersion(matplotlib.__version__) >= LooseVersion('2.0.1')
except ImportError:
return False
@@ -70,6 +71,6 @@ def _mpl_ge_2_0_1():
def _mpl_ge_2_1_0():
try:
import matplotlib
- return matplotlib.__version__ >= LooseVersion('2.1')
+ return LooseVersion(matplotlib.__version__) >= LooseVersion('2.1')
except ImportError:
return False
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -329,7 +329,7 @@ def _skip_if_mpl_1_5():
import matplotlib as mpl
v = mpl.__version__
- if v > LooseVersion('1.4.3') or v[0] == '0':
+ if LooseVersion(v) > LooseVersion('1.4.3') or str(v)[0] == '0':
import pytest
pytest.skip("matplotlib 1.5")
else:
@@ -362,7 +362,7 @@ def _skip_if_no_xarray():
xarray = pytest.importorskip("xarray")
v = xarray.__version__
- if v < LooseVersion('0.7.0'):
+ if LooseVersion(v) < LooseVersion('0.7.0'):
import pytest
pytest.skip("xarray version is too low: {version}".format(version=v))
| plotting/_compat Version Comparisons Not Working in Py27
I noticed this while setting up a virtual environment using matplotlib 1.4.0 to test #18190. ``__version__`` is sometimes a string and other times a unicode object in Python 2.7. When it is unicode, the comparisons that occur in ``plotting/_compat.py`` will raise
```python
>>> from distutils.version import LooseVersion
>>> import matplotlib
>>> matplotlib.__version__
u'1.4.0'
>>> matplotlib.__version__ < LooseVersion("1.5")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "~/miniconda3/envs/mpl1_4/lib/python2.7/distutils/version.py", line 296, in __cmp__
return cmp(self.version, other.version)
AttributeError: 'unicode' object has no attribute 'version'
```
In some cases, the versions are converted to ``str`` objects in ``plotting/_compat.py`` to presumably avoid this error, but it is not done consistently. As perhaps a better approach, we could create ``LooseVersion`` objects from all the ``__version__`` properties so that the comparison is done between equal types across the board.
```python
>>> LooseVersion(matplotlib.__version__) < LooseVersion("1.5")
True
```
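A minimal sketch of that, as a hypothetical helper:
```python
from distutils.version import LooseVersion

def _version_lt(module, bound):
    # Wrap both sides in LooseVersion so a unicode __version__ under
    # Python 2.7 is compared as a version, not as a bare string.
    return LooseVersion(str(module.__version__)) < LooseVersion(bound)
```
Usage would then be, e.g., `_version_lt(matplotlib, '1.5')`.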
#### Output of ``pd.show_versions()``
<details>
Python 2.7.14 | packaged by conda-forge | (default, Nov 4 2017, 10:22:41)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit: 2c903d594299b2441d4742e777a10e8c76557386
python: 2.7.14.final.0
python-bits: 64
OS: Darwin
OS-release: 17.2.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: None.None
pandas: 0.22.0.dev0+283.g2c903d594.dirty
pytest: 3.3.0
pip: 9.0.1
setuptools: 38.2.3
Cython: 0.27.3
numpy: 1.9.3
scipy: None
pyarrow: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2017.3
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 1.4.0
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| I've seen this error when a library imports `unicode_literals`.
> As perhaps a better approach, we could create LooseVersion objects from all the __version__ properties so that the comparison is done between equal types across the board.
That's probably best anyway. | 2017-12-04T21:20:47Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "~/miniconda3/envs/mpl1_4/lib/python2.7/distutils/version.py", line 296, in __cmp__
return cmp(self.version, other.version)
AttributeError: 'unicode' object has no attribute 'version'
| 11,569 |
|||
pandas-dev/pandas | pandas-dev__pandas-19013 | 0e3c797c4c12fa04fd745e595e822886e917b316 | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -368,7 +368,7 @@ Numeric
^^^^^^^
- Bug in :func:`Series.__sub__` subtracting a non-nanosecond ``np.datetime64`` object from a ``Series`` gave incorrect results (:issue:`7996`)
--
+- Bug in :class:`DatetimeIndex`, :class:`TimedeltaIndex` addition and subtraction of zero-dimensional integer arrays gave incorrect results (:issue:`19012`)
-
Categorical
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -669,6 +669,8 @@ def __add__(self, other):
from pandas.core.index import Index
from pandas.core.indexes.timedeltas import TimedeltaIndex
from pandas.tseries.offsets import DateOffset
+
+ other = lib.item_from_zerodim(other)
if is_timedelta64_dtype(other):
return self._add_delta(other)
elif isinstance(self, TimedeltaIndex) and isinstance(other, Index):
@@ -689,6 +691,7 @@ def __add__(self, other):
return self._add_datelike(other)
else: # pragma: no cover
return NotImplemented
+
cls.__add__ = __add__
cls.__radd__ = __add__
@@ -697,6 +700,8 @@ def __sub__(self, other):
from pandas.core.indexes.datetimes import DatetimeIndex
from pandas.core.indexes.timedeltas import TimedeltaIndex
from pandas.tseries.offsets import DateOffset
+
+ other = lib.item_from_zerodim(other)
if is_timedelta64_dtype(other):
return self._add_delta(-other)
elif isinstance(self, TimedeltaIndex) and isinstance(other, Index):
@@ -724,6 +729,7 @@ def __sub__(self, other):
else: # pragma: no cover
return NotImplemented
+
cls.__sub__ = __sub__
def __rsub__(self, other):
@@ -737,8 +743,10 @@ def _add_delta(self, other):
return NotImplemented
def _add_delta_td(self, other):
- # add a delta of a timedeltalike
- # return the i8 result view
+ """
+ Add a delta of a timedeltalike
+ return the i8 result view
+ """
inc = delta_to_nanoseconds(other)
new_values = checked_add_with_arr(self.asi8, inc,
@@ -748,8 +756,10 @@ def _add_delta_td(self, other):
return new_values.view('i8')
def _add_delta_tdi(self, other):
- # add a delta of a TimedeltaIndex
- # return the i8 result view
+ """
+ Add a delta of a TimedeltaIndex
+ return the i8 result view
+ """
# delta operation
if not len(self) == len(other):
| DatetimeIndex/TimedeltaIndex add/sub zero-dim arrays incorrect
Opening this mainly to get the appropriate reference for the upcoming PR.
Setup:
```
dti = pd.date_range('2016-01-01', periods=3, freq='H')
one = np.array(1)
```
0.21.1:
```
>>> dti + one
DatetimeIndex(['2016-01-01 00:00:00.000000001',
'2016-01-01 01:00:00.000000001',
'2016-01-01 02:00:00.000000001'],
dtype='datetime64[ns]', freq='H')
>>> dti.freq = None
>>> dti + one
DatetimeIndex(['2016-01-01 00:00:00.000000001',
'2016-01-01 01:00:00.000000001',
'2016-01-01 02:00:00.000000001'],
dtype='datetime64[ns]', freq=None)
```
Master (see #19011)
```
>>> dti + one
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/core/indexes/datetimelike.py", line 685, in __add__
elif is_offsetlike(other):
File "pandas/core/dtypes/common.py", line 294, in is_offsetlike
elif (is_list_like(arr_or_obj) and len(arr_or_obj) and
TypeError: len() of unsized object
```
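The eventual fix routes such inputs through ``lib.item_from_zerodim``; a minimal demonstration of what that unwrapping effectively does here:
```python
import numpy as np

one = np.array(1)
print(one.ndim)    # 0 -- a zero-dimensional array, not a Python int
print(one.item())  # 1 -- the plain scalar the arithmetic paths expect
```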
```
| 2017-12-31T01:18:48Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/core/indexes/datetimelike.py", line 685, in __add__
elif is_offsetlike(other):
File "pandas/core/dtypes/common.py", line 294, in is_offsetlike
elif (is_list_like(arr_or_obj) and len(arr_or_obj) and
TypeError: len() of unsized object
| 11,635 |
||||
pandas-dev/pandas | pandas-dev__pandas-19338 | 5fdb9c0edef57da4b29a437eca84bad5f20719b7 | diff --git a/pandas/core/internals.py b/pandas/core/internals.py
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -4407,42 +4407,6 @@ def _blklocs(self):
""" compat with BlockManager """
return None
- def reindex(self, new_axis, indexer=None, method=None, fill_value=None,
- limit=None, copy=True):
- # if we are the same and don't copy, just return
- if self.index.equals(new_axis):
- if copy:
- return self.copy(deep=True)
- else:
- return self
-
- values = self._block.get_values()
-
- if indexer is None:
- indexer = self.items.get_indexer_for(new_axis)
-
- if fill_value is None:
- fill_value = np.nan
-
- new_values = algos.take_1d(values, indexer, fill_value=fill_value)
-
- # fill if needed
- if method is not None or limit is not None:
- new_values = missing.interpolate_2d(new_values,
- method=method,
- limit=limit,
- fill_value=fill_value)
-
- if self._block.is_sparse:
- make_block = self._block.make_block_same_class
-
- block = make_block(new_values, copy=copy,
- placement=slice(0, len(new_axis)))
-
- mgr = SingleBlockManager(block, new_axis)
- mgr._consolidate_inplace()
- return mgr
-
def get_slice(self, slobj, axis=0):
if axis >= self.ndim:
raise IndexError("Requested axis not found in manager")
diff --git a/pandas/core/series.py b/pandas/core/series.py
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -197,8 +197,13 @@ def __init__(self, data=None, index=None, dtype=None, name=None,
elif isinstance(data, SingleBlockManager):
if index is None:
index = data.index
- else:
- data = data.reindex(index, copy=copy)
+ elif not data.index.equals(index) or copy:
+ # GH#19275 SingleBlockManager input should only be called
+ # internally
+ raise AssertionError('Cannot pass both SingleBlockManager '
+ '`data` argument and a different '
+ '`index` argument. `copy` must '
+ 'be False.')
elif isinstance(data, Categorical):
# GH12574: Allow dtype=category only, otherwise error
if ((dtype is not None) and
diff --git a/pandas/core/sparse/series.py b/pandas/core/sparse/series.py
--- a/pandas/core/sparse/series.py
+++ b/pandas/core/sparse/series.py
@@ -166,9 +166,13 @@ def __init__(self, data=None, index=None, sparse_index=None, kind='block',
data = data.astype(dtype)
if index is None:
index = data.index.view()
- else:
-
- data = data.reindex(index, copy=False)
+ elif not data.index.equals(index) or copy: # pragma: no cover
+ # GH#19275 SingleBlockManager input should only be called
+ # internally
+ raise AssertionError('Cannot pass both SingleBlockManager '
+ '`data` argument and a different '
+ '`index` argument. `copy` must '
+ 'be False.')
else:
length = len(index)
| SingleBlockManager.reindex almost unused, unusable
https://github.com/pandas-dev/pandas/blob/master/pandas/core/internals.py#L4447
SingleBlockManager.reindex has only one code branch hit in test coverage, which immediately returns `self`. This is good because I think if it went further than that it would likely raise one of several errors.
The end of the method:
```
if self._block.is_sparse:
make_block = self._block.make_block_same_class
block = make_block(new_values, copy=copy,
placement=slice(0, len(new_axis)))
mgr = SingleBlockManager(block, new_axis)
mgr._consolidate_inplace()
return mgr
```
In the case where `not self._block.is_sparse`, then I'm pretty sure the author intended for `make_block` to point at the module-level `make_block` function. But instead it would raise an `UnboundLocalError`:
```
ser = pd.Series([1, 2, 3])
idx = ser.index * 2
>>> ser._data.reindex(idx)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/core/internals.py", line 4518, in reindex
block = make_block(new_values, copy=copy,
UnboundLocalError: local variable 'make_block' referenced before assignment
```
Moreover, the call to `make_block` passes a `copy` kwarg, which is not accepted by the module-level function.
So my hope is that it can be confirmed that this method is no longer needed and can be removed.
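For reference, a minimal demonstration of the one branch that is exercised (``._data`` is internal API):
```python
import pandas as pd

ser = pd.Series([1, 2, 3])
mgr = ser._data                            # SingleBlockManager
same = mgr.reindex(ser.index, copy=False)  # index matches -> returns self
print(same is mgr)                         # True: the only covered path
```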
| well if you remove it and things don't break that would confirm it no?
> well if you remove it and things don't break that would confirm it no?
Well it definitely wouldn't break any of the existing tests; is `core.internals` solidly enough tested that any un-covered code can be removed?
If you look at the test coverage the only path that gets hit is https://github.com/pandas-dev/pandas/blob/master/pandas/core/internals.py#L4464 that returns `self` almost immediately:
```
def reindex(self, new_axis, indexer=None, method=None, fill_value=None,
limit=None, copy=True):
# if we are the same and don't copy, just return
if self.index.equals(new_axis):
if copy:
return self.copy(deep=True)
else:
return self # <-- only relevant path
[... 21 more lines that are never hit ...]
block = make_block(new_values, copy=copy,
placement=slice(0, len(new_axis)))
mgr = SingleBlockManager(block, new_axis)
mgr._consolidate_inplace()
return mgr
```
It looks like the only places where BlockManager.reindex gets called are in `Series.__init__` and `SparseSeries.__init__`
Series:
```
elif isinstance(data, SingleBlockManager):
if index is None:
index = data.index
else:
data = data.reindex(index, copy=copy)
```
SparseSeries
```
elif isinstance(data, SingleBlockManager):
if dtype is not None:
data = data.astype(dtype)
if index is None:
index = data.index.view()
else:
data = data.reindex(index, copy=False)
```
In each case, an extra case after the `if index is None` clause `elif data.index.equals(index): pass` catches _all_ remaining cases that currently pass into the `else:` block. Given that users shouldn't be passing `SingleBlockManager` manually, I think it'd be OK to require `data.index.equals(index)` in this case so we can remove `BlockManager.reindex` | 2018-01-22T03:32:00Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/core/internals.py", line 4518, in reindex
block = make_block(new_values, copy=copy,
UnboundLocalError: local variable 'make_block' referenced before assignment
| 11,689 |
|||
pandas-dev/pandas | pandas-dev__pandas-19554 | d24a9507ba539f455d9b90885edf098c7dc93e99 | diff --git a/pandas/util/testing.py b/pandas/util/testing.py
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -1304,7 +1304,12 @@ def assert_frame_equal(left, right, check_dtype=True,
5 digits (False) or 3 digits (True) after decimal points are compared.
If int, then specify the digits to compare
check_names : bool, default True
- Whether to check the Index names attribute.
+ Whether to check that the `names` attribute for both the `index`
+ and `column` attributes of the DataFrame is identical, i.e.
+
+ * left.index.names == right.index.names
+ * left.columns.names == right.columns.names
+
by_blocks : bool, default False
Specify how to compare internal data. If False, compare by columns.
If True, compare by blocks.
| check_names=False parameter for pandas.util.testing.assert_frame_equal applies to index.names but not columns.names
*edit by @TomAugspurger*
The `check_names` docstring for `pandas.util.testing.assert_frame_equal` is unclear:
```
check_names : bool, default True
Whether to check the Index names attribute.
```
This should clarify that both the index and columns names attribute are checked.
---
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
from pandas.util.testing import assert_frame_equal
df1 = pd.DataFrame({'A':[1.0]})
df2 = pd.DataFrame({'B':[1.0]})
assert_frame_equal(df1, df2, check_names=False)
""" will return:
In [7]: assert_frame_equal(df1, df2, check_names=False)
Traceback (most recent call last):
File "<ipython-input-7-d273edeeb6af>", line 1, in <module>
assert_frame_equal(df1, df2, check_names=False)
File "<snipped>/lib/python3.5/site-packages/pandas/util/testing.py", line 1372, in assert_frame_equal
obj='{obj}.columns'.format(obj=obj))
File "<snipped>/lib/python3.5/site-packages/pandas/util/testing.py", line 927, in assert_index_equal
obj=obj, lobj=left, robj=right)
File "pandas/_libs/testing.pyx", line 59, in pandas._libs.testing.assert_almost_equal
File "pandas/_libs/testing.pyx", line 173, in pandas._libs.testing.assert_almost_equal
File "<snipped>/lib/python3.5/site-packages/pandas/util/testing.py", line 1093, in raise_assert_detail
raise AssertionError(msg)
AssertionError: DataFrame.columns are different
DataFrame.columns values are different (100.0 %)
[left]: Index(['A'], dtype='object')
[right]: Index(['B'], dtype='object')
"""
```
#### Problem description
When the parameter `check_names=False` is set for `assert_frame_equal`, the index and column names should be ignored in the comparison, but an assertion error is still raised if the index or column names are different. This is the same behaviour as when `check_names=True` (the default) is set, and the opposite of what I believe is intended.
#### Expected Output
The expected output for the case above should be nothing - a valid assertion.
#### Output of ``pd.show_versions()``
<details>
In [8]: pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.5.4.final.0
python-bits: 64
OS: Linux
OS-release: 3.10.0-327.13.1.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
pandas: 0.22.0
pytest: 3.2.1
pip: 9.0.1
setuptools: 27.2.0
Cython: None
numpy: 1.13.1
scipy: 1.0.0
pyarrow: None
xarray: None
IPython: 5.1.0
sphinx: 1.4.8
patsy: None
dateutil: 2.6.1
pytz: 2017.2
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 2.0.2
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.8
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| From the docstring:
```
check_names : bool, default True
Whether to check the Index names attribute.
```
So it checks if `left.index.names == right.index.names`. Same for `.columns`, so everything is correct I think.
Do you have a usecase for ignoring the actual column labels themselves?
Ah ok, I guess I'd assumed that as both:
`AssertionError: DataFrame.index are different`
and
`AssertionError: DataFrame.columns are different`
are possible returns from `assert_frame_equal`, either could differ if `check_names=False` was set.
`check_names` to me implies both index names and column names are being checked, as they are, but also either can differ. Only index name can actually differ. Perhaps the docstring should clarify that column names are also checked, but cannot differ regardless of this parameter setting.
I can't speak for it being a common use case, but yes - I test whether various data processing functions can handle DataFrames populated by reading from different source file formats. The DataFrames get assigned column names using whatever column names are provided by the file or are assigned by user input. My tests only know that the column order for the parts of the DataFrame I'm interested in checking should be the same in every case. So the data values, index values and column order of the DataFrames should match, but column names and index names don't have to. I could simply assign a temporary set of column names internally of course.
Well, the docstring could be improved. Both `df.index.names` and `df.columns.names` are checked. That's still different from your original issue though, which was about the values.
> I could simply assign a temporary set of column names internally of course.
I'd recommend doing that. I don't think changing `assert_frame_equal` to ignore index / column labels is generally useful enough to warrant a parameter.
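A sketch of that workaround, normalizing the labels before comparing (frames taken from the report):
```python
import pandas as pd
from pandas.util.testing import assert_frame_equal

df1 = pd.DataFrame({'A': [1.0]})
df2 = pd.DataFrame({'B': [1.0]})

left, right = df1.copy(), df2.copy()
left.columns = range(len(left.columns))    # throw away the labels,
right.columns = range(len(right.columns))  # keep order and values
assert_frame_equal(left, right)            # now passes
```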
I think I've still been a little unclear here.
> I don't think changing `assert_frame_equal` to ignore index / column labels is generally useful enough to warrant a parameter
This is exactly what `check_names` is for and is doing, but only for `df.index.names`. `check_names=False` allows for the case of `left.index.names != right.index.names` to pass `assert_frame_equal`. My issue was that I assumed `check_names=False` would also pass `left.columns.names != right.columns.names`, as I was intending to use it.
If the latter isn't a common use, then I agree it's not worth changing. In that case, my vote would be to simply rename `check_names` to something like `check_index_names`, as that is exactly what it does and all that the bool setting applies to.
@willjbrown88
```python
In [3]: pd.util.testing.assert_frame_equal(
...: pd.DataFrame(columns=pd.Index(['a', 'b'], name='c1')),
...: pd.DataFrame(columns=pd.Index(['a', 'b'], name='c2'))
...: )
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-3-653d160e6f54> in <module>()
1 pd.util.testing.assert_frame_equal(
2 pd.DataFrame(columns=pd.Index(['a', 'b'], name='c1')),
----> 3 pd.DataFrame(columns=pd.Index(['a', 'b'], name='c2'))
4 )
~/sandbox/pandas-ip/pandas/pandas/util/testing.py in assert_frame_equal(left, right, check_dtype, check_index_type, check_column_type, check_frame_type, check_less_precise, check_names, by_blocks, check_exact, check_datetimelike_compat, check_categorical, check_like, obj)
1285 check_exact=check_exact,
1286 check_categorical=check_categorical,
-> 1287 obj='{obj}.columns'.format(obj=obj))
1288
1289 # compare by blocks
~/sandbox/pandas-ip/pandas/pandas/util/testing.py in assert_index_equal(left, right, exact, check_names, check_less_precise, check_exact, check_categorical, obj)
844 # metadata comparison
845 if check_names:
--> 846 assert_attr_equal('names', left, right, obj=obj)
847 if isinstance(left, pd.PeriodIndex) or isinstance(right, pd.PeriodIndex):
848 assert_attr_equal('freq', left, right, obj=obj)
~/sandbox/pandas-ip/pandas/pandas/util/testing.py in assert_attr_equal(attr, left, right, obj)
921 else:
922 msg = 'Attribute "{attr}" are different'.format(attr=attr)
--> 923 raise_assert_detail(obj, msg, left_attr, right_attr)
924
925
~/sandbox/pandas-ip/pandas/pandas/util/testing.py in raise_assert_detail(obj, message, left, right, diff)
1006 msg += "\n[diff]: {diff}".format(diff=diff)
1007
-> 1008 raise AssertionError(msg)
1009
1010
AssertionError: DataFrame.columns are different
Attribute "names" are different
[left]: ['c1']
[right]: ['c2']
In [4]: pd.util.testing.assert_frame_equal(
...: pd.DataFrame(columns=pd.Index(['a', 'b'], name='c1')),
...: pd.DataFrame(columns=pd.Index(['a', 'b'], name='c2')),
...: check_names=False
...: )
``` | 2018-02-06T17:51:59Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-7-d273edeeb6af>", line 1, in <module>
assert_frame_equal(df1, df2, check_names=False)
File "<snipped>/lib/python3.5/site-packages/pandas/util/testing.py", line 1372, in assert_frame_equal
obj='{obj}.columns'.format(obj=obj))
File "<snipped>/lib/python3.5/site-packages/pandas/util/testing.py", line 927, in assert_index_equal
obj=obj, lobj=left, robj=right)
File "pandas/_libs/testing.pyx", line 59, in pandas._libs.testing.assert_almost_equal
File "pandas/_libs/testing.pyx", line 173, in pandas._libs.testing.assert_almost_equal
File "<snipped>/lib/python3.5/site-packages/pandas/util/testing.py", line 1093, in raise_assert_detail
raise AssertionError(msg)
AssertionError: DataFrame.columns are different
| 11,713 |
|||
pandas-dev/pandas | pandas-dev__pandas-19818 | aa59954a217c8f856bb0980265520d37b85a80af | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1612,7 +1612,7 @@ def to_stata(self, fname, convert_dates=None, write_index=True,
time_stamp : datetime
A datetime to use as file creation date. Default is the current
time.
- dataset_label : str
+ data_label : str
A label for the data set. Must be 80 characters or smaller.
variable_labels : dict
Dictionary containing columns as keys and variable labels as
@@ -1635,10 +1635,18 @@ def to_stata(self, fname, convert_dates=None, write_index=True,
Examples
--------
+ >>> data.to_stata('./data_file.dta')
+
+ Or with dates
+
+ >>> data.to_stata('./date_data_file.dta', {2 : 'tw'})
+
+ Alternatively you can create an instance of the StataWriter class
+
>>> writer = StataWriter('./data_file.dta', data)
>>> writer.write_file()
- Or with dates
+ With dates:
>>> writer = StataWriter('./date_data_file.dta', data, {2 : 'tw'})
>>> writer.write_file()
| Wrong parameter name in to_stata() method
In the [API documentation for `df.to_stata()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_stata.html), a parameter `dataset_label` is listed to give the Stata file a label; however, the pandas API [actually uses the parameter `data_label`](https://github.com/pandas-dev/pandas/blob/master/pandas/core/frame.py#L1591).
From `core/frame.py`:
> ```python
> def to_stata(self, fname, convert_dates=None, write_index=True,
> encoding="latin-1", byteorder=None, time_stamp=None,
> data_label=None, variable_labels=None):
> ```
Thus using `dataset_label` doesn't work, but `data_label` does.
```python
>>> import pandas as pd
>>> df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
>>> df.to_stata('test.dta', dataset_label='data label')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: to_stata() got an unexpected keyword argument 'dataset_label'
>>> df.to_stata('test.dta', data_label='data label')
```
I'll submit a PR shortly changing that docstring.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.4.final.0
python-bits: 64
OS: Linux
OS-release: 4.13.0-32-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.22.0
pytest: 3.3.0
pip: 9.0.1
setuptools: 38.5.1
Cython: 0.27.3
numpy: 1.14.0
scipy: 1.0.0
pyarrow: 0.8.0
xarray: None
IPython: 6.2.1
sphinx: 1.6.3
patsy: 0.4.1
dateutil: 2.6.1
pytz: 2017.3
blosc: None
bottleneck: 1.2.1
tables: 3.4.2
numexpr: 2.6.4
feather: 0.4.0
matplotlib: 2.1.1
openpyxl: 2.4.9
xlrd: 1.1.0
xlwt: 1.3.0
xlsxwriter: 1.0.2
lxml: 4.1.1
bs4: 4.6.0
html5lib: 1.0.1
sqlalchemy: 1.1.13
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| 2018-02-21T17:04:17Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: to_stata() got an unexpected keyword argument 'dataset_label'
| 11,746 |
||||
pandas-dev/pandas | pandas-dev__pandas-19833 | 3b135c3c4424cfa10b955a0d505189f0a06e9122 | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -896,6 +896,7 @@ Reshaping
- Bug in :func:`DataFrame.join` which does an ``outer`` instead of a ``left`` join when being called with multiple DataFrames and some have non-unique indices (:issue:`19624`)
- :func:`Series.rename` now accepts ``axis`` as a kwarg (:issue:`18589`)
- Comparisons between :class:`Series` and :class:`Index` would return a ``Series`` with an incorrect name, ignoring the ``Index``'s name attribute (:issue:`19582`)
+- Bug in :func:`qcut` where datetime and timedelta data with ``NaT`` present raised a ``ValueError`` (:issue:`19768`)
Other
^^^^^
diff --git a/pandas/core/reshape/tile.py b/pandas/core/reshape/tile.py
--- a/pandas/core/reshape/tile.py
+++ b/pandas/core/reshape/tile.py
@@ -279,18 +279,22 @@ def _trim_zeros(x):
def _coerce_to_type(x):
"""
if the passed data is of datetime/timedelta type,
- this method converts it to integer so that cut method can
+ this method converts it to numeric so that cut method can
handle it
"""
dtype = None
if is_timedelta64_dtype(x):
- x = to_timedelta(x).view(np.int64)
+ x = to_timedelta(x)
dtype = np.timedelta64
elif is_datetime64_dtype(x):
- x = to_datetime(x).view(np.int64)
+ x = to_datetime(x)
dtype = np.datetime64
+ if dtype is not None:
+ # GH 19768: force NaT to NaN during integer conversion
+ x = np.where(x.notna(), x.view(np.int64), np.nan)
+
return x, dtype
| qcut raising ValueError if NaT present
#### Code Sample, a copy-pastable example if possible
```python
from io import StringIO
import pandas as pd
csv = 'Index,Date\n1,2013-01-01 23:00:00\n2,\n3,2013-01-01 23:00:01'
df = pd.read_csv(StringIO(csv), index_col=0, parse_dates=[1])
pd.qcut(df["Date"], 2)
```
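For reference, an equivalent construction without `read_csv` (a sketch; the same `NaT` appears in the middle):
```python
import pandas as pd

df = pd.DataFrame(
    {'Date': pd.to_datetime(['2013-01-01 23:00:00', None,
                             '2013-01-01 23:00:01'])},
    index=pd.Index([1, 2, 3], name='Index'))
pd.qcut(df['Date'], 2)  # raises the same ValueError
```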
#### Problem description
`qcut` raises a `ValueError`:
```
Traceback (most recent call last):
File "mve.py", line 26, in <module>
pd.qcut(df["Date"], 2)
File "/tmp/test/env/lib/python3.5/site-packages/pandas/core/reshape/tile.py", line 208, in qcut
dtype=dtype, duplicates=duplicates)
File "/tmp/test/env/lib/python3.5/site-packages/pandas/core/reshape/tile.py", line 251, in _bins_to_cuts
dtype=dtype)
File "/tmp/test/env/lib/python3.5/site-packages/pandas/core/reshape/tile.py", line 344, in _format_labels
labels = IntervalIndex.from_breaks(breaks, closed=closed)
File "/tmp/test/env/lib/python3.5/site-packages/pandas/core/indexes/interval.py", line 370, in from_breaks
name=name, copy=copy)
File "/tmp/test/env/lib/python3.5/site-packages/pandas/core/indexes/interval.py", line 411, in from_arrays
copy=copy, verify_integrity=True)
File "/tmp/test/env/lib/python3.5/site-packages/pandas/core/indexes/interval.py", line 225, in _simple_new
result._validate()
File "/tmp/test/env/lib/python3.5/site-packages/pandas/core/indexes/interval.py", line 265, in _validate
raise ValueError('missing values must be missing in the same '
ValueError: missing values must be missing in the same location both left and right sides
```
#### Expected Output
`qcut` returning something like
```
Index
1 (2013-01-01 22:59:59.999999999, 2013-01-01 23:00:01.0
2 NaT
3 (2013-01-01 22:59:59.999999999, 2013-01-01 23:00:01.0
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.5.2.final.0
python-bits: 64
OS: Linux
OS-release: 4.13.0-32-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: de_DE.UTF-8
LOCALE: de_DE.UTF-8
pandas: 0.22.0
pytest: None
pip: 9.0.1
setuptools: 38.5.1
Cython: None
numpy: 1.14.0
scipy: None
pyarrow: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2018.3
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| 1) Could you rewrite your example to not use `read_csv` (i.e. just construct the `DataFrame` from scratch).
2) Yeah...that does look weird indeed. Even the error message is confusing. PR to patch is welcome! | 2018-02-22T01:05:37Z | [] | [] |
Traceback (most recent call last):
File "mve.py", line 26, in <module>
pd.qcut(df["Date"], 2)
File "/tmp/test/env/lib/python3.5/site-packages/pandas/core/reshape/tile.py", line 208, in qcut
dtype=dtype, duplicates=duplicates)
File "/tmp/test/env/lib/python3.5/site-packages/pandas/core/reshape/tile.py", line 251, in _bins_to_cuts
dtype=dtype)
File "/tmp/test/env/lib/python3.5/site-packages/pandas/core/reshape/tile.py", line 344, in _format_labels
labels = IntervalIndex.from_breaks(breaks, closed=closed)
File "/tmp/test/env/lib/python3.5/site-packages/pandas/core/indexes/interval.py", line 370, in from_breaks
name=name, copy=copy)
File "/tmp/test/env/lib/python3.5/site-packages/pandas/core/indexes/interval.py", line 411, in from_arrays
copy=copy, verify_integrity=True)
File "/tmp/test/env/lib/python3.5/site-packages/pandas/core/indexes/interval.py", line 225, in _simple_new
result._validate()
File "/tmp/test/env/lib/python3.5/site-packages/pandas/core/indexes/interval.py", line 265, in _validate
raise ValueError('missing values must be missing in the same '
ValueError: missing values must be missing in the same location both left and right sides
| 11,751 |
|||
pandas-dev/pandas | pandas-dev__pandas-20005 | aedbd948938f7e9230a321eb49f6c789867ab2b6 | diff --git a/doc/make.py b/doc/make.py
--- a/doc/make.py
+++ b/doc/make.py
@@ -11,6 +11,7 @@
$ python make.py html
$ python make.py latex
"""
+import importlib
import sys
import os
import shutil
@@ -20,8 +21,6 @@
import webbrowser
import jinja2
-import pandas
-
DOC_PATH = os.path.dirname(os.path.abspath(__file__))
SOURCE_PATH = os.path.join(DOC_PATH, 'source')
@@ -134,7 +133,7 @@ def _process_single_doc(self, single_doc):
self.single_doc = single_doc
elif single_doc is not None:
try:
- obj = pandas
+ obj = pandas # noqa: F821
for name in single_doc.split('.'):
obj = getattr(obj, name)
except AttributeError:
@@ -332,7 +331,7 @@ def main():
'compile, e.g. "indexing", "DataFrame.join"'))
argparser.add_argument('--python-path',
type=str,
- default=os.path.join(DOC_PATH, '..'),
+ default=os.path.dirname(DOC_PATH),
help='path')
argparser.add_argument('-v', action='count', dest='verbosity', default=0,
help=('increase verbosity (can be repeated), '
@@ -343,7 +342,13 @@ def main():
raise ValueError('Unknown command {}. Available options: {}'.format(
args.command, ', '.join(cmds)))
+ # Below we update both os.environ and sys.path. The former is used by
+ # external libraries (namely Sphinx) to compile this module and resolve
+ # the import of `python_path` correctly. The latter is used to resolve
+ # the import within the module, injecting it into the global namespace
os.environ['PYTHONPATH'] = args.python_path
+ sys.path.append(args.python_path)
+ globals()['pandas'] = importlib.import_module('pandas')
builder = DocBuilder(args.num_jobs, not args.no_api, args.single,
args.verbosity)
| Unable to Build Documentation
```bash
python make.py html
Traceback (most recent call last):
File "doc/make.py", line 23, in <module>
import pandas
ModuleNotFoundError: No module named 'pandas'
```
I believe a slight bug was introduced by c8859b57b891701f250fb05f2cc60d2e6cae2d6b by including `import pandas` at the top of the script. Unless the user has pandas installed as a package, I don't think this module would know where to find it without modifying the path or being more explicit about the import.
@jorisvandenbossche
| The path is being modified somewhere else in the file make.py. If you inline the import inside the `_process_single_doc` method, does that fix the error?
I don't think that makes a difference. I looked but didn't see anywhere else in the file that added the project root to the import path - perhaps I'm overlooking it?
Otherwise adding something like the below fixes the issue:
```python
DOC_PATH = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.dirname(DOC_PATH))
import pandas
```
https://github.com/pandas-dev/pandas/blob/master/doc/make.py#L346
and the default is '..'
Can you actually try it? I hope it would work because it then comes after `os.environ['PYTHONPATH'] = args.python_path`, but I'm not fully sure imports work that way.
Hmm OK I see what we are trying to do. I think setting `os.environ['PYTHONPATH']` gives Sphinx access to the import path when it compiles the module externally (as a child process), but it doesn't help the import mechanism within the current interpreter, which consults `sys.path`.
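A sketch of the distinction (hypothetical path; this is essentially what the patch above does):
```python
import importlib
import os
import sys

path = '/path/to/pandas-checkout'  # hypothetical checkout location
os.environ['PYTHONPATH'] = path    # seen by child processes (e.g. Sphinx)
sys.path.append(path)              # seen by *this* interpreter
pandas = importlib.import_module('pandas')
```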
I'll push a PR in a few that I think should serve both purposes | 2018-03-05T22:17:02Z | [] | [] |
Traceback (most recent call last):
File "doc/make.py", line 23, in <module>
import pandas
ModuleNotFoundError: No module named 'pandas'
| 11,780 |
|||
pandas-dev/pandas | pandas-dev__pandas-20292 | 31afaf858604ab85665b54b92f40cef19d69a28d | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -896,6 +896,7 @@ Timezones
- Bug in :func:`Timestamp.tz_localize` where localizing a timestamp near the minimum or maximum valid values could overflow and return a timestamp with an incorrect nanosecond value (:issue:`12677`)
- Bug when iterating over :class:`DatetimeIndex` that was localized with fixed timezone offset that rounded nanosecond precision to microseconds (:issue:`19603`)
- Bug in :func:`DataFrame.diff` that raised an ``IndexError`` with tz-aware values (:issue:`18578`)
+- Bug in :func:`melt` that converted tz-aware dtypes to tz-naive (:issue:`15785`)
Offsets
^^^^^^^
diff --git a/pandas/core/reshape/melt.py b/pandas/core/reshape/melt.py
--- a/pandas/core/reshape/melt.py
+++ b/pandas/core/reshape/melt.py
@@ -13,7 +13,9 @@
import re
from pandas.core.dtypes.missing import notna
+from pandas.core.dtypes.common import is_extension_type
from pandas.core.tools.numeric import to_numeric
+from pandas.core.reshape.concat import concat
@Appender(_shared_docs['melt'] %
@@ -70,7 +72,12 @@ def melt(frame, id_vars=None, value_vars=None, var_name=None,
mdata = {}
for col in id_vars:
- mdata[col] = np.tile(frame.pop(col).values, K)
+ id_data = frame.pop(col)
+ if is_extension_type(id_data):
+ id_data = concat([id_data] * K, ignore_index=True)
+ else:
+ id_data = np.tile(id_data.values, K)
+ mdata[col] = id_data
mcolumns = id_vars + var_name + [value_name]
| BUG: melt changes type of tz-aware columns
#### Code Samples
```python
import pandas as pd
frame = pd.DataFrame({'klass':range(5), 'ts': [pd.Timestamp('2017-03-23 08:22:42.173378+01'), pd.Timestamp('2017-03-23 08:22:42.178578+01'), pd.Timestamp('2017-03-23 08:22:42.173578+01'), pd.Timestamp('2017-03-23 08:22:42.178378+01'), pd.Timestamp('2017-03-23 08:22:42.163378+01')], 'attribute':['att1', 'att2', 'att3', 'att4', 'att5'], 'value': ['a', 'b', 'c', 'd', 'd']})
# At this point, frame.ts is of dtype datetime64[ns, pytz.FixedOffset(60)]
frame.set_index(['ts', 'klass'], inplace=True)
queried_index = frame.query('value=="d"').index
pivoted_frame = frame.reset_index().pivot_table(index=['klass', 'ts'], columns='attribute', values='value', aggfunc='first')
melted_frame = pd.melt(pivoted_frame.reset_index(), id_vars=['klass', 'ts'], var_name='attribute', value_name='value')
# At this point, melted_frame.ts is of dtype datetime64[ns]
queried_after_melted_index = melted_frame.query('value=="d"').set_index(['ts', 'klass']).index
frame.loc[queried_index] # Works
frame.loc[queried_index] = 'test' # Works
frame.loc[queried_after_melted_index] # Works
frame.loc[queried_after_melted_index] = 'test' # Breaks
```
The last statement gives:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py", line 140, in __setitem__
indexer = self._get_setitem_indexer(key)
File "/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py", line 127, in _get_setitem_indexer
return self._convert_to_indexer(key, is_setter=True)
File "/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py", line 1230, in _convert_to_indexer
raise KeyError('%s not in index' % objarr[mask])
KeyError: "MultiIndex(levels=[[2017-03-23 07:22:42.163378, 2017-03-23 07:22:42.173378, 2017-03-23 07:22:42.173578, 2017-03-23 07:22:42.178378, 2017-03-23 07:22:42.178578], [0, 1, 2, 3, 4]],\n labels=[[3, 0], [3, 4]],\n names=['ts', 'klass']) not in index"
```
#### Problem description
- It is counter-intuitive that an operation alters the dtype of a column when its docs do not explicitly mention that it does.
- Also counter-intuitive is that ```frame.loc``` behaves differently in a plain lookup than it does in an assignment.
#### Expected Output
- ```melted_frame.ts``` and ```frame.ts``` have the same dtype.
- ```DataFrame.loc``` fails in both cases, not just in an assignment, or succeeds in both.
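In the meantime, a possible stop-gap is to re-localize after melting. This is only a sketch; it assumes the naive values left by `melt` are UTC, as the `KeyError` message suggests:
```python
import pytz

# assumes the naive timestamps are UTC; the original data used a +01:00 offset
melted_frame['ts'] = (melted_frame['ts']
                      .dt.tz_localize('UTC')
                      .dt.tz_convert(pytz.FixedOffset(60)))
```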
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.5.2.final.0
python-bits: 64
OS: Linux
OS-release: 4.4.0-66-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.19.2
nose: None
pip: 9.0.1
setuptools: 20.7.0
Cython: None
numpy: 1.12.0
scipy: None
statsmodels: None
xarray: None
IPython: 5.3.0
sphinx: None
patsy: None
dateutil: 2.6.0
pytz: 2016.10
blosc: None
bottleneck: None
tables: None
numexpr: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: 0.7.3
lxml: 3.5.0
bs4: 4.4.1
html5lib: 0.999
httplib2: 0.9.1
apiclient: None
sqlalchemy: None
pymysql: None
psycopg2: 2.6.1 (dt dec pq3 ext lo64)
jinja2: 2.8
boto: None
pandas_datareader: None
</details>
| @stigviaene ``.melt`` doesn't have the battery of tests that most other things have, so it's not surprising that this doesn't convert correctly. You're welcome to submit a patch to fix it, or at least see if you can locate the problem.
Your comments on indexing are orthogonal. If you have a specific bug/comment you can raise it in another issue. | 2018-03-12T02:00:24Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py", line 140, in __setitem__
indexer = self._get_setitem_indexer(key)
File "/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py", line 127, in _get_setitem_indexer
return self._convert_to_indexer(key, is_setter=True)
File "/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py", line 1230, in _convert_to_indexer
raise KeyError('%s not in index' % objarr[mask])
KeyError: "MultiIndex(levels=[[2017-03-23 07:22:42.163378, 2017-03-23 07:22:42.173378, 2017-03-23 07:22:42.173578, 2017-03-23 07:22:42.178378, 2017-03-23 07:22:42.178578], [0, 1, 2, 3, 4]],\n labels=[[3, 0], [3, 4]],\n names=['ts', 'klass']) not in index"
| 11,798 |
|||
pandas-dev/pandas | pandas-dev__pandas-20401 | cdfce2b0ad99f7faad57cc5247cf33aab5725bed | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -1035,6 +1035,7 @@ Reshaping
- Bug in :class:`Series` constructor with ``Categorical`` where a ```ValueError`` is not raised when an index of different length is given (:issue:`19342`)
- Bug in :meth:`DataFrame.astype` where column metadata is lost when converting to categorical or a dictionary of dtypes (:issue:`19920`)
- Bug in :func:`cut` and :func:`qcut` where timezone information was dropped (:issue:`19872`)
+- Bug in :class:`Series` constructor with a ``dtype=str``, previously raised in some cases (:issue:`19853`)
Other
^^^^^
diff --git a/pandas/core/series.py b/pandas/core/series.py
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -4059,9 +4059,10 @@ def _try_cast(arr, take_fast_path):
if issubclass(subarr.dtype.type, compat.string_types):
# GH 16605
# If not empty convert the data to dtype
- if not isna(data).all():
- data = np.array(data, dtype=dtype, copy=False)
-
- subarr = np.array(data, dtype=object, copy=copy)
+ # GH 19853: If data is a scalar, subarr has already the result
+ if not is_scalar(data):
+ if not np.all(isna(data)):
+ data = np.array(data, dtype=dtype, copy=False)
+ subarr = np.array(data, dtype=object, copy=copy)
return subarr
| BUG: invalid construction of a Series with dtype=str
```python
pd.Series('', dtype=str, index=range(1000))
```
throws a `ValueError` with the following message:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\james\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pandas\core\series.py", line 266, in __init__
data = SingleBlockManager(data, index, fastpath=True)
File "C:\Users\james\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pandas\core\internals.py", line 4402, in __init__
fastpath=True)
File "C:\Users\james\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pandas\core\internals.py", line 2957, in make_block
return klass(values, ndim=ndim, fastpath=fastpath, placement=placement)
File "C:\Users\james\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pandas\core\internals.py", line 2082, in __init__
placement=placement, **kwargs)
File "C:\Users\james\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pandas\core\internals.py", line 111, in __init__
raise ValueError('Wrong number of dimensions')
ValueError: Wrong number of dimensions
```
Would it be possible to fix the behavior to initialize the series to `''` (or at least provide a clearer message)?
| ``pd.Series('', dtype=object, index=range(1000))``
That's ok. Strings use the 'object' dtype.
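which indeed works, broadcasting the scalar:
```python
s = pd.Series('', dtype=object, index=range(1000))
s.dtype  # dtype('O')
len(s)   # 1000
```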
this takes a different path in master. We pretty much treat str as object. So this is a construction bug.
```
In [4]: pd.Series('', index=range(1000), dtype=str)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-4-3bd08f17c610> in <module>()
----> 1 pd.Series('', index=range(1000), dtype=str)
~/pandas/pandas/core/series.py in __init__(self, data, index, dtype, name, copy, fastpath)
237 else:
238 data = _sanitize_array(data, index, dtype, copy,
--> 239 raise_cast_failure=True)
240
241 data = SingleBlockManager(data, index, fastpath=True)
~/pandas/pandas/core/series.py in _sanitize_array(data, index, dtype, copy, raise_cast_failure)
3260 # GH 16605
3261 # If not empty convert the data to dtype
-> 3262 if not isna(data).all():
3263 data = np.array(data, dtype=dtype, copy=False)
3264
AttributeError: 'bool' object has no attribute 'all'
```
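The failing check boils down to calling `.all()` on a scalar mask (a sketch of the root cause):
```python
import pandas as pd

pd.isna('')        # False -- a plain bool for scalar input
pd.isna('').all()  # AttributeError: 'bool' object has no attribute 'all'
```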
@jamesqo there is no reason to specify a dtype here as this will be inferred to ``object`` dtype anyhow (``str`` as I said above is pretty much an alias for ``object`` dtype).
a PR to fix is welcome.
@jamesqo note that setting the string ``''`` like this doesn't have much utility. pandas has a full suite of string operations that are all NaN aware. | 2018-03-18T16:20:47Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\james\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pandas\core\series.py", line 266, in __init__
data = SingleBlockManager(data, index, fastpath=True)
File "C:\Users\james\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pandas\core\internals.py", line 4402, in __init__
fastpath=True)
File "C:\Users\james\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pandas\core\internals.py", line 2957, in make_block
return klass(values, ndim=ndim, fastpath=fastpath, placement=placement)
File "C:\Users\james\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pandas\core\internals.py", line 2082, in __init__
placement=placement, **kwargs)
File "C:\Users\james\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pandas\core\internals.py", line 111, in __init__
raise ValueError('Wrong number of dimensions')
ValueError: Wrong number of dimensions
| 11,806 |
|||
pandas-dev/pandas | pandas-dev__pandas-20537 | fac2ef1b2095c7785006c901e941e2657571d935 | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -1098,6 +1098,7 @@ I/O
- Bug in :func:`read_pickle` when unpickling objects with :class:`TimedeltaIndex` or :class:`Float64Index` created with pandas prior to version 0.20 (:issue:`19939`)
- Bug in :meth:`pandas.io.json.json_normalize` where subrecords are not properly normalized if any subrecords values are NoneType (:issue:`20030`)
- Bug in ``usecols`` parameter in :func:`pandas.io.read_csv` and :func:`pandas.io.read_table` where error is not raised correctly when passing a string. (:issue:`20529`)
+- Bug in :func:`HDFStore.keys` when reading a file with a softlink causes exception (:issue:`20523`)
Plotting
^^^^^^^^
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -1073,10 +1073,11 @@ def groups(self):
self._check_if_open()
return [
g for g in self._handle.walk_nodes()
- if (getattr(g._v_attrs, 'pandas_type', None) or
- getattr(g, 'table', None) or
+ if (not isinstance(g, _table_mod.link.Link) and
+ (getattr(g._v_attrs, 'pandas_type', None) or
+ getattr(g, 'table', None) or
(isinstance(g, _table_mod.table.Table) and
- g._v_name != u('table')))
+ g._v_name != u('table'))))
]
def get_node(self, key):
| Presence of softlink in HDF5 file breaks HDFStore.keys()
#### Code Sample, a copy-pastable example if possible
```python
#! /path/to/python3.6
import pandas as pd
df = pd.DataFrame({ "a": [1], "b": [2] })
print(df.to_string())
hdf = pd.HDFStore("/tmp/test.hdf", mode="w")
hdf.put("/test/key", df)
#Brittle
hdf._handle.create_soft_link(hdf._handle.root.test, "symlink", "/test/key")
hdf.close()
print("Successful write")
hdf = pd.HDFStore("/tmp/test.hdf", mode="r")
'''
Traceback (most recent call last):
File "snippet.py", line 31, in <module>
print(hdf.keys())
File "python3.6.3/lib/python3.6/site-packages/pandas/io/pytables.py", line 529, in keys
return [n._v_pathname for n in self.groups()]
File "python3.6.3/lib/python3.6/site-packages/pandas/io/pytables.py", line 1077, in groups
g for g in self._handle.walk_nodes()
File "python3.6.3/lib/python3.6/site-packages/pandas/io/pytables.py", line 1078, in <listcomp>
if (getattr(g._v_attrs, 'pandas_type', None) or
File "python3.6.3/lib/python3.6/site-packages/tables/link.py", line 79, in __getattr__
"`%s` instance" % self.__class__.__name__)
KeyError: 'you cannot get attributes from this `NoAttrs` instance'
'''
print(hdf.keys()) #causes exception
hdf.close()
print("Successful read")
```
#### Problem description
I know I have an esoteric problem, but I'm building an HDF5 file using pandas and then using PyTables to softlink to the pandas DataFrame. I understand this is unsupported and brittle, but for my use case I haven't been able to come up with a better/simpler solution.
This issue is similar to: https://github.com/pandas-dev/pandas/issues/6019
The root cause is that when we call HDFStore.keys(), it calls HDFStore.groups() and eventually accesses g._v_attrs on the nodes of the PyTables file.
https://github.com/pandas-dev/pandas/blob/master/pandas/io/pytables.py#L1076
But calling g._v_attrs on a tables.link.SoftLink causes a KeyError due to:
https://github.com/PyTables/PyTables/blob/develop/tables/link.py#L76
And there doesn't seem to be a way to guard against an instance of NoAttrs, since that class is defined within the method. One solution may be to check whether g is an instance of Link:
```
return [
g for g in self._handle.walk_nodes()
if (not isinstance(g, _table_mod.link.Link) and
(getattr(g._v_attrs, 'pandas_type', None) or
getattr(g, 'table', None) or
(isinstance(g, _table_mod.table.Table) and
g._v_name != u('table'))))
]
```
I'd be happy to write a PR and tests if you find this change acceptable.
#### Expected Output
```
a b
0 1 2
Successful write
['/test/key']
Successful read
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.3.final.0
python-bits: 64
OS: Linux
OS-release: 3.10.0-514.21.1.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: en_US.utf-8
LANG: en_US.utf-8
LOCALE: en_US.UTF-8
pandas: 0.22.0
pytest: None
pip: 9.0.1
setuptools: 38.5.1
Cython: None
numpy: 1.14.0
scipy: 1.0.0
pyarrow: None
xarray: None
IPython: 6.2.1
sphinx: None
patsy: 0.4.1
dateutil: 2.6.1
pytz: 2018.3
blosc: None
bottleneck: None
tables: 3.4.2
numexpr: 2.6.4
feather: None
matplotlib: 2.1.0
openpyxl: None
xlrd: 1.1.0
xlwt: None
xlsxwriter: None
lxml: None
bs4: 4.6.0
html5lib: 1.0.1
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| Sure, we would take a patch to avoid an error on this. | 2018-03-29T13:43:21Z | [] | [] |
Traceback (most recent call last):
File "snippet.py", line 31, in <module>
print(hdf.keys())
File "python3.6.3/lib/python3.6/site-packages/pandas/io/pytables.py", line 529, in keys
return [n._v_pathname for n in self.groups()]
File "python3.6.3/lib/python3.6/site-packages/pandas/io/pytables.py", line 1077, in groups
g for g in self._handle.walk_nodes()
File "python3.6.3/lib/python3.6/site-packages/pandas/io/pytables.py", line 1078, in <listcomp>
if (getattr(g._v_attrs, 'pandas_type', None) or
File "python3.6.3/lib/python3.6/site-packages/tables/link.py", line 79, in __getattr__
"`%s` instance" % self.__class__.__name__)
KeyError: 'you cannot get attributes from this `NoAttrs` instance'
| 11,821 |
|||
pandas-dev/pandas | pandas-dev__pandas-20549 | 336fba7c0191444c3328009e6d4f9f5d00ee224b | diff --git a/doc/source/api.rst b/doc/source/api.rst
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -2106,6 +2106,7 @@ Standard moving window functions
Rolling.skew
Rolling.kurt
Rolling.apply
+ Rolling.aggregate
Rolling.quantile
Window.mean
Window.sum
@@ -2133,6 +2134,7 @@ Standard expanding window functions
Expanding.skew
Expanding.kurt
Expanding.apply
+ Expanding.aggregate
Expanding.quantile
Exponentially-weighted moving window functions
diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -438,6 +438,7 @@ Other Enhancements
``SQLAlchemy`` dialects supporting multivalue inserts include: ``mysql``, ``postgresql``, ``sqlite`` and any dialect with ``supports_multivalues_insert``. (:issue:`14315`, :issue:`8953`)
- :func:`read_html` now accepts a ``displayed_only`` keyword argument to controls whether or not hidden elements are parsed (``True`` by default) (:issue:`20027`)
- zip compression is supported via ``compression=zip`` in :func:`DataFrame.to_pickle`, :func:`Series.to_pickle`, :func:`DataFrame.to_csv`, :func:`Series.to_csv`, :func:`DataFrame.to_json`, :func:`Series.to_json`. (:issue:`17778`)
+- :class:`WeekOfMonth` constructor now supports ``n=0`` (:issue:`20517`).
- :class:`DataFrame` and :class:`Series` now support matrix multiplication (```@```) operator (:issue:`10259`) for Python>=3.5
- Updated ``to_gbq`` and ``read_gbq`` signature and documentation to reflect changes from
the Pandas-GBQ library version 0.4.0. Adds intersphinx mapping to Pandas-GBQ
@@ -847,7 +848,7 @@ Other API Changes
- :func:`DatetimeIndex.strftime` and :func:`PeriodIndex.strftime` now return an ``Index`` instead of a numpy array to be consistent with similar accessors (:issue:`20127`)
- Constructing a Series from a list of length 1 no longer broadcasts this list when a longer index is specified (:issue:`19714`, :issue:`20391`).
- :func:`DataFrame.to_dict` with ``orient='index'`` no longer casts int columns to float for a DataFrame with only int and float columns (:issue:`18580`)
-- A user-defined-function that is passed to :func:`Series.rolling().aggregate() <pandas.core.window.Rolling.aggregate>`, :func:`DataFrame.rolling().aggregate() <pandas.core.window.Rolling.aggregate>`, or its expanding cousins, will now *always* be passed a ``Series``, rather than an ``np.array``; ``.apply()`` only has the ``raw`` keyword, see :ref:`here <whatsnew_0230.enhancements.window_raw>`. This is consistent with the signatures of ``.aggregate()`` across pandas (:issue:`20584`)
+- A user-defined-function that is passed to :func:`Series.rolling().aggregate() <pandas.core.window.Rolling.aggregate>`, :func:`DataFrame.rolling().aggregate() <pandas.core.window.Rolling.aggregate>`, or its expanding cousins, will now *always* be passed a ``Series``, rather than a ``np.array``; ``.apply()`` only has the ``raw`` keyword, see :ref:`here <whatsnew_0230.enhancements.window_raw>`. This is consistent with the signatures of ``.aggregate()`` across pandas (:issue:`20584`)
.. _whatsnew_0230.deprecations:
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -1461,9 +1461,6 @@ def __init__(self, n=1, normalize=False, week=0, weekday=0):
self.weekday = weekday
self.week = week
- if self.n == 0:
- raise ValueError('N cannot be 0')
-
if self.weekday < 0 or self.weekday > 6:
raise ValueError('Day must be 0<=day<=6, got {day}'
.format(day=self.weekday))
| date_range fails when I try to generate a range with 1 period and freq equal to WOM-1MON
#### Code Sample
```python
import pandas as pd
pd.date_range('20100104', periods=2, freq='WOM-1MON') # works
pd.date_range('20100104', periods=1, freq='WOM-1MON') # fails
```
```python-traceback
Traceback (most recent call last):
File "/Users/mmngreco/miniconda3/envs/pandas-dev/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2910, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-31-ec0b4bad59c9>", line 1, in <module>
pd.date_range('20100104', periods=1, freq='WOM-1MON')
File "/Users/mmngreco/miniconda3/envs/pandas-dev/lib/python3.6/site-packages/pandas/core/indexes/datetimes.py", line 2057, in date_range
closed=closed, **kwargs)
File "/Users/mmngreco/miniconda3/envs/pandas-dev/lib/python3.6/site-packages/pandas/util/_decorators.py", line 118, in wrapper
return func(*args, **kwargs)
File "/Users/mmngreco/miniconda3/envs/pandas-dev/lib/python3.6/site-packages/pandas/core/indexes/datetimes.py", line 324, in __new__
ambiguous=ambiguous)
File "/Users/mmngreco/miniconda3/envs/pandas-dev/lib/python3.6/site-packages/pandas/core/indexes/datetimes.py", line 531, in _generate
index = _generate_regular_range(start, end, periods, offset)
File "/Users/mmngreco/miniconda3/envs/pandas-dev/lib/python3.6/site-packages/pandas/core/indexes/datetimes.py", line 2009, in _generate_regular_range
dates = list(xdr)
File "/Users/mmngreco/miniconda3/envs/pandas-dev/lib/python3.6/site-packages/pandas/tseries/offsets.py", line 2960, in generate_range
end = start + (periods - 1) * offset
File "/Users/mmngreco/miniconda3/envs/pandas-dev/lib/python3.6/site-packages/pandas/tseries/offsets.py", line 425, in __rmul__
return self.__mul__(someInt)
File "/Users/mmngreco/miniconda3/envs/pandas-dev/lib/python3.6/site-packages/pandas/tseries/offsets.py", line 422, in __mul__
**self.kwds)
File "/Users/mmngreco/miniconda3/envs/pandas-dev/lib/python3.6/site-packages/pandas/tseries/offsets.py", line 1671, in __init__
raise ValueError('N cannot be 0')
ValueError: N cannot be 0
```
#### Problem description
If N is equal to periods then it is not 0, as we can see; that makes me think that there is probably something wrong in the code.
#### Expected Output
```python
Out[33]: DatetimeIndex(['2010-01-04'], dtype='datetime64[ns]', freq='WOM-1MON')
```
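Until the assertion is relaxed, a possible workaround is to over-generate and slice:
```python
pd.date_range('20100104', periods=2, freq='WOM-1MON')[:1]
# DatetimeIndex(['2010-01-04'], dtype='datetime64[ns]', freq='WOM-1MON')
```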
#### Output of ``pd.show_versions()``
<details>
pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.4.final.0
python-bits: 64
OS: Darwin
OS-release: 17.4.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: None
LOCALE: es_ES.UTF-8
pandas: 0.22.0
pytest: 3.4.2
pip: 9.0.1
setuptools: 38.5.1
Cython: 0.27.3
numpy: 1.14.2
scipy: 1.0.0
pyarrow: 0.9.0
xarray: 0.10.2
IPython: 6.2.1
sphinx: 1.7.1
patsy: 0.5.0
dateutil: 2.7.0
pytz: 2018.3
blosc: None
bottleneck: 1.2.1
tables: 3.4.2
numexpr: 2.6.4
feather: 0.4.0
matplotlib: 2.2.2
openpyxl: 2.5.1
xlrd: 1.1.0
xlwt: 1.2.0
xlsxwriter: 1.0.2
lxml: 4.2.0
bs4: 4.6.0
html5lib: 1.0.1
sqlalchemy: 1.2.5
pymysql: 0.8.0
psycopg2: None
jinja2: 2.10
s3fs: 0.1.3
fastparquet: 0.1.4
pandas_gbq: None
pandas_datareader: None
</details>
| Well, the traceback here points exactly to the offending line of code: within the constructor there is an explicit check that `n` is not 0, which I've linked below for reference (`generate_range`, one level up in the stack, subtracts one from `periods`)
https://github.com/pandas-dev/pandas/blob/c4b4a81f56205082ec7f12bf77766e3b74d27c37/pandas/tseries/offsets.py#L1464
If you remove that assertion you get the value you are expecting. I'm not overly familiar with offsets though - @jbrockmendel any chance you know whether it makes sense to relax that assertion or not?
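Concretely (a sketch against 0.22, matching the traceback above):
```python
from pandas.tseries.offsets import WeekOfMonth

offset = WeekOfMonth(week=0, weekday=0)  # WOM-1MON
# generate_range computes end = start + (periods - 1) * offset,
# so periods=1 multiplies the offset by zero ...
0 * offset  # ... which raised ValueError: N cannot be 0 before the fix
```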
This is one of those "it was like that when I got here" things. I'd guess that n==0 would cause trouble with incrementing, but not really sure.
@mmngreco do you want to try a PR for this? Would need test cases to cover this and any other edge case you can think of
Ok, I would like to try. | 2018-03-30T14:10:52Z | [] | [] |
Traceback (most recent call last):
File "/Users/mmngreco/miniconda3/envs/pandas-dev/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2910, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-31-ec0b4bad59c9>", line 1, in <module>
pd.date_range('20100104', periods=1, freq='WOM-1MON')
File "/Users/mmngreco/miniconda3/envs/pandas-dev/lib/python3.6/site-packages/pandas/core/indexes/datetimes.py", line 2057, in date_range
closed=closed, **kwargs)
File "/Users/mmngreco/miniconda3/envs/pandas-dev/lib/python3.6/site-packages/pandas/util/_decorators.py", line 118, in wrapper
return func(*args, **kwargs)
File "/Users/mmngreco/miniconda3/envs/pandas-dev/lib/python3.6/site-packages/pandas/core/indexes/datetimes.py", line 324, in __new__
ambiguous=ambiguous)
File "/Users/mmngreco/miniconda3/envs/pandas-dev/lib/python3.6/site-packages/pandas/core/indexes/datetimes.py", line 531, in _generate
index = _generate_regular_range(start, end, periods, offset)
File "/Users/mmngreco/miniconda3/envs/pandas-dev/lib/python3.6/site-packages/pandas/core/indexes/datetimes.py", line 2009, in _generate_regular_range
dates = list(xdr)
File "/Users/mmngreco/miniconda3/envs/pandas-dev/lib/python3.6/site-packages/pandas/tseries/offsets.py", line 2960, in generate_range
end = start + (periods - 1) * offset
File "/Users/mmngreco/miniconda3/envs/pandas-dev/lib/python3.6/site-packages/pandas/tseries/offsets.py", line 425, in __rmul__
return self.__mul__(someInt)
File "/Users/mmngreco/miniconda3/envs/pandas-dev/lib/python3.6/site-packages/pandas/tseries/offsets.py", line 422, in __mul__
**self.kwds)
File "/Users/mmngreco/miniconda3/envs/pandas-dev/lib/python3.6/site-packages/pandas/tseries/offsets.py", line 1671, in __init__
raise ValueError('N cannot be 0')
ValueError: N cannot be 0
| 11,824 |
|||
pandas-dev/pandas | pandas-dev__pandas-20672 | d5d5a718254f45b4bdc386c360c830df395ec02a | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -1053,6 +1053,10 @@ Numeric
- Bug in :meth:`Series.rank` and :meth:`DataFrame.rank` when ``ascending='False'`` failed to return correct ranks for infinity if ``NaN`` were present (:issue:`19538`)
- Bug where ``NaN`` was returned instead of 0 by :func:`Series.pct_change` and :func:`DataFrame.pct_change` when ``fill_method`` is not ``None`` (:issue:`19873`)
+Strings
+^^^^^^^
+- Bug in :func:`Series.str.get` with a dictionary in the values and the index not in the keys, raising `KeyError` (:issue:`20671`)
+
Indexing
^^^^^^^^
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -1663,7 +1663,12 @@ def str_get(arr, i):
-------
items : Series/Index of objects
"""
- f = lambda x: x[i] if len(x) > i >= -len(x) else np.nan
+ def f(x):
+ if isinstance(x, dict):
+ return x.get(i)
+ elif len(x) > i >= -len(x):
+ return x[i]
+ return np.nan
return _na_map(f, arr)
| str.get fails if Series contains dict
#### Code Sample, a copy-pastable example if possible
```python
>>> s = pandas.Series([{0: 'a', 1: 'b'}])
>>> s
0 {0: 'a', 1: 'b'}
dtype: object
>>> s.str.get(-1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/mgarcia/.anaconda3/lib/python3.6/site-packages/pandas/core/strings.py", line 1556, in get
result = str_get(self._data, i)
File "/home/mgarcia/.anaconda3/lib/python3.6/site-packages/pandas/core/strings.py", line 1264, in str_get
return _na_map(f, arr)
File "/home/mgarcia/.anaconda3/lib/python3.6/site-packages/pandas/core/strings.py", line 156, in _na_map
return _map(f, arr, na_mask=True, na_value=na_result, dtype=dtype)
File "/home/mgarcia/.anaconda3/lib/python3.6/site-packages/pandas/core/strings.py", line 171, in _map
result = lib.map_infer_mask(arr, f, mask.view(np.uint8), convert)
File "pandas/_libs/src/inference.pyx", line 1482, in pandas._libs.lib.map_infer_mask
File "/home/mgarcia/.anaconda3/lib/python3.6/site-packages/pandas/core/strings.py", line 1263, in <lambda>
f = lambda x: x[i] if len(x) > i >= -len(x) else np.nan
KeyError: -1
```
#### Problem description
`str.get` is designed for strings, but it is also useful with other structures like lists, for which it works fine. When the values of the Series contain a dict, `str.get` tries to look up the provided position as a key in the dictionary and fails with a `KeyError`.
I think it's more consistent with the rest of pandas to simply return `numpy.nan` when this happens.
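For comparison (a list element works; a dict raised before the fix):
```python
import pandas as pd

pd.Series([['a', 'b']]).str.get(-1)        # lists are fine: returns 'b'
pd.Series([{0: 'a', 1: 'b'}]).str.get(-1)  # raised KeyError: -1
```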
#### Expected Output
```
>>> s = pandas.Series([{0: 'a', 1: 'b'}])
>>> s
0 {0: 'a', 1: 'b'}
dtype: object
>>> s.str.get(-1)
0 NaN
```
#### Output of ``pd.show_versions()``
<details>
>>> pandas.show_versions()
INSTALLED VERSIONS
------------------
commit: fa231e8766e02610ae5a45e4b2bc90b6c7e9ee6f
python: 3.6.4.final.0
python-bits: 64
OS: Linux
OS-release: 4.8.13-100.fc23.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.utf8
LOCALE: en_GB.UTF-8
pandas: 0.23.0.dev0+740.gfa231e8.dirty
pytest: 3.1.3
pip: 9.0.1
setuptools: 38.5.1
Cython: 0.27.3
numpy: 1.14.0
scipy: 1.0.0
pyarrow: 0.8.0
xarray: 0.10.0
IPython: 6.2.1
sphinx: 1.5
patsy: 0.5.0
dateutil: 2.6.1
pytz: 2018.3
blosc: None
bottleneck: 1.2.1
tables: 3.4.2
numexpr: 2.6.4
feather: 0.4.0
matplotlib: 2.1.2
openpyxl: 2.5.0
xlrd: 1.1.0
xlwt: 1.3.0
xlsxwriter: 1.0.2
lxml: 4.1.1
bs4: 4.6.0
html5lib: 1.0.1
sqlalchemy: 1.2.1
pymysql: 0.8.0
psycopg2: None
jinja2: 2.10
s3fs: 0.1.3
fastparquet: 0.1.4
pandas_gbq: None
pandas_datareader: None
</details>
| 2018-04-12T22:50:28Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/mgarcia/.anaconda3/lib/python3.6/site-packages/pandas/core/strings.py", line 1556, in get
result = str_get(self._data, i)
File "/home/mgarcia/.anaconda3/lib/python3.6/site-packages/pandas/core/strings.py", line 1264, in str_get
return _na_map(f, arr)
File "/home/mgarcia/.anaconda3/lib/python3.6/site-packages/pandas/core/strings.py", line 156, in _na_map
return _map(f, arr, na_mask=True, na_value=na_result, dtype=dtype)
File "/home/mgarcia/.anaconda3/lib/python3.6/site-packages/pandas/core/strings.py", line 171, in _map
result = lib.map_infer_mask(arr, f, mask.view(np.uint8), convert)
File "pandas/_libs/src/inference.pyx", line 1482, in pandas._libs.lib.map_infer_mask
File "/home/mgarcia/.anaconda3/lib/python3.6/site-packages/pandas/core/strings.py", line 1263, in <lambda>
f = lambda x: x[i] if len(x) > i >= -len(x) else np.nan
KeyError: -1
| 11,838 |
||||
pandas-dev/pandas | pandas-dev__pandas-20705 | d04b7464dcc20051ef38ac2acda580de854d3e01 | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -1111,6 +1111,7 @@ I/O
- Bug in :meth:`pandas.io.json.json_normalize` where subrecords are not properly normalized if any subrecords values are NoneType (:issue:`20030`)
- Bug in ``usecols`` parameter in :func:`pandas.io.read_csv` and :func:`pandas.io.read_table` where error is not raised correctly when passing a string. (:issue:`20529`)
- Bug in :func:`HDFStore.keys` when reading a file with a softlink causes exception (:issue:`20523`)
+- Bug in :func:`HDFStore.select_column` where a key which is not a valid store raised an ``AttributeError`` instead of a ``KeyError`` (:issue:`17912`)
Plotting
^^^^^^^^
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -887,7 +887,10 @@ def remove(self, key, where=None, start=None, stop=None):
where = _ensure_term(where, scope_level=1)
try:
s = self.get_storer(key)
- except:
+ except KeyError:
+ # the key is not a valid store, re-raising KeyError
+ raise
+ except Exception:
if where is not None:
raise ValueError(
@@ -899,9 +902,6 @@ def remove(self, key, where=None, start=None, stop=None):
s._f_remove(recursive=True)
return None
- if s is None:
- raise KeyError('No object named %s in the file' % key)
-
# remove the node
if com._all_none(where, start, stop):
s.group._f_remove(recursive=True)
@@ -1094,7 +1094,8 @@ def get_storer(self, key):
""" return the storer object for a key, raise if not in the file """
group = self.get_node(key)
if group is None:
- return None
+ raise KeyError('No object named {} in the file'.format(key))
+
s = self._create_storer(group)
s.infer_axes()
return s
| HDFStore.select_column
#### Code Sample, a copy-pastable example if possible
Let's select a column from a non-existing DataFrame in an HDFStore:
```python
import pandas as pd
store = pd.HDFStore('test.hdf5', mode='w')
store.select_column('dummy', 'index')
```
#### Problem description
We get an `AttributeError` because `get_storer` returns `None`:
```
Traceback (most recent call last):
File "pandas_hdf5.py", line 4, in <module>
store.select_column('dummy', 'index')
File "[...]/site-packages/pandas/io/pytables.py", line 778, in select_column
return self.get_storer(key).read_column(column=column, **kwargs)
AttributeError: 'NoneType' object has no attribute 'read_column'
```
Is this intended?
#### Expected Output
The docstring says:
```python
"""
Exceptions
----------
raises KeyError if the column is not found (or key is not a valid
store)
raises ValueError if the column can not be extracted individually (it
is part of a data block)
"""
```
Shouldn't I expect a `KeyError`, then?
It could be just this simple patch:
```
- return self.get_storer(key).read_column(column=column, **kwargs)
+ storer = self.get_storer(key)
+ if storer is None:
+ raise KeyError('{} not in {}'.format(key, self))
+ return storer.read_column(column=column, **kwargs)
```
or should `get_storer` raise an exception in the first place?
I'm new to Pandas/PyTables so I don't have the big picture.
From a caller perspective, I could check first that the DataFrame is in the store:
```python
store = pd.HDFStore('test.hdf5', mode='w')
if 'dummy' in store:
store.select_column('dummy', 'index')
```
but I'd rather "ask forgiveness not permission",
```python
store = pd.HDFStore('test.hdf5', mode='w')
try:
store.select_column('dummy', 'index')
except AttributeError:
[...]
```
so I should catch `AttributeError`, but I'm not sure whether this exception being thrown is a design choice.
I hope I'm being constructive and I don't sound like I'm nitpicking.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.5.3.final.0
python-bits: 64
OS: Linux
OS-release: 4.9.0-4-amd64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: fr_FR.UTF-8
LOCALE: fr_FR.UTF-8
pandas: 0.20.3
pytest: 3.2.3
pip: 9.0.1
setuptools: 36.6.0
Cython: None
numpy: 1.13.3
scipy: 0.19.1
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2017.2
blosc: None
bottleneck: None
tables: 3.4.2
numexpr: 2.6.4
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: 0.999999999
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
pandas_gbq: None
pandas_datareader: None
</details>
| Agree this looks buggy, I would probably push the exception down to `get_storer` and see if any tests break, docstring _says_ it raises. Want to do a PR?
https://github.com/pandas-dev/pandas/blob/5687f9e8f63c325249caabf0c8b7f0bee0a12f09/pandas/io/pytables.py#L1095
I might try to give it a shot.
The error in get_storer would be a `KeyError`, right? Would this be fine?
```python
def get_storer(self, key):
""" return the storer object for a key, raise if not in the file """
group = self.get_node(key)
if group is None:
- return None
+ raise KeyError('No {} node in {}'.format(key, self))
s = self._create_storer(group)
s.infer_axes()
return s
``` | 2018-04-15T15:01:41Z | [] | [] |
Traceback (most recent call last):
File "pandas_hdf5.py", line 4, in <module>
store.select_column('dummy', 'index')
File "[...]/site-packages/pandas/io/pytables.py", line 778, in select_column
return self.get_storer(key).read_column(column=column, **kwargs)
AttributeError: 'NoneType' object has no attribute 'read_column'
| 11,846 |
|||
pandas-dev/pandas | pandas-dev__pandas-20846 | b02c69ac7309ccf63a17471b25475bf0c0ebe3c3 | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -523,6 +523,7 @@ Other Enhancements
library. (:issue:`20564`)
- Added new writer for exporting Stata dta files in version 117, ``StataWriter117``. This format supports exporting strings with lengths up to 2,000,000 characters (:issue:`16450`)
- :func:`to_hdf` and :func:`read_hdf` now accept an ``errors`` keyword argument to control encoding error handling (:issue:`20835`)
+- :func:`date_range` now returns a linearly spaced ``DatetimeIndex`` if ``start``, ``stop``, and ``periods`` are specified, but ``freq`` is not. (:issue:`20808`)
.. _whatsnew_0230.api_breaking:
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -358,7 +358,8 @@ def __new__(cls, data=None,
msg = 'periods must be a number, got {periods}'
raise TypeError(msg.format(periods=periods))
- if data is None and freq is None:
+ if data is None and freq is None \
+ and com._any_none(periods, start, end):
raise ValueError("Must provide freq argument if no data is "
"supplied")
@@ -466,9 +467,9 @@ def __new__(cls, data=None,
@classmethod
def _generate(cls, start, end, periods, name, freq,
tz=None, normalize=False, ambiguous='raise', closed=None):
- if com._count_not_none(start, end, periods) != 2:
- raise ValueError('Of the three parameters: start, end, and '
- 'periods, exactly two must be specified')
+ if com._count_not_none(start, end, periods, freq) != 3:
+ raise ValueError('Of the four parameters: start, end, periods, '
+ 'and freq, exactly three must be specified')
_normalized = True
@@ -566,23 +567,30 @@ def _generate(cls, start, end, periods, name, freq,
if end.tz is None and start.tz is not None:
start = start.replace(tzinfo=None)
- if _use_cached_range(freq, _normalized, start, end):
- index = cls._cached_range(start, end, periods=periods,
- freq=freq, name=name)
+ if freq is not None:
+ if _use_cached_range(freq, _normalized, start, end):
+ index = cls._cached_range(start, end, periods=periods,
+ freq=freq, name=name)
+ else:
+ index = _generate_regular_range(start, end, periods, freq)
+
+ if tz is not None and getattr(index, 'tz', None) is None:
+ index = conversion.tz_localize_to_utc(_ensure_int64(index),
+ tz,
+ ambiguous=ambiguous)
+ index = index.view(_NS_DTYPE)
+
+ # index is localized datetime64 array -> have to convert
+ # start/end as well to compare
+ if start is not None:
+ start = start.tz_localize(tz).asm8
+ if end is not None:
+ end = end.tz_localize(tz).asm8
else:
- index = _generate_regular_range(start, end, periods, freq)
-
- if tz is not None and getattr(index, 'tz', None) is None:
- index = conversion.tz_localize_to_utc(_ensure_int64(index), tz,
- ambiguous=ambiguous)
- index = index.view(_NS_DTYPE)
-
- # index is localized datetime64 array -> have to convert
- # start/end as well to compare
- if start is not None:
- start = start.tz_localize(tz).asm8
- if end is not None:
- end = end.tz_localize(tz).asm8
+ index = tools.to_datetime(np.linspace(start.value,
+ end.value, periods))
+ if tz is not None:
+ index = index.tz_localize('UTC').tz_convert(tz)
if not left_closed and len(index) and index[0] == start:
index = index[1:]
@@ -2565,13 +2573,15 @@ def _generate_regular_range(start, end, periods, freq):
return data
-def date_range(start=None, end=None, periods=None, freq='D', tz=None,
+def date_range(start=None, end=None, periods=None, freq=None, tz=None,
normalize=False, name=None, closed=None, **kwargs):
"""
Return a fixed frequency DatetimeIndex.
- Exactly two of the three parameters `start`, `end` and `periods`
- must be specified.
+ Of the three parameters `start`, `end`, `periods`, and `freq` exactly
+ three must be specified. If `freq` is omitted, the resulting DatetimeIndex
+ will have `periods` linearly spaced elements between `start` and `end`
+ (closed on both sides).
Parameters
----------
@@ -2613,7 +2623,7 @@ def date_range(start=None, end=None, periods=None, freq='D', tz=None,
--------
**Specifying the values**
- The next three examples generate the same `DatetimeIndex`, but vary
+ The next four examples generate the same `DatetimeIndex`, but vary
the combination of `start`, `end` and `periods`.
Specify `start` and `end`, with the default daily frequency.
@@ -2637,6 +2647,13 @@ def date_range(start=None, end=None, periods=None, freq='D', tz=None,
'2017-12-29', '2017-12-30', '2017-12-31', '2018-01-01'],
dtype='datetime64[ns]', freq='D')
+ Specify `start`, `end`, and `periods`; the frequency is generated
+ automatically (linearly spaced).
+
+ >>> pd.date_range(start='2018-04-24', end='2018-04-27', periods=3)
+ DatetimeIndex(['2018-04-24 00:00:00', '2018-04-25 12:00:00',
+ '2018-04-27 00:00:00'], freq=None)
+
**Other Parameters**
Changed the `freq` (frequency) to ``'M'`` (month end frequency).
@@ -2687,6 +2704,10 @@ def date_range(start=None, end=None, periods=None, freq='D', tz=None,
DatetimeIndex(['2017-01-02', '2017-01-03', '2017-01-04'],
dtype='datetime64[ns]', freq='D')
"""
+
+ if freq is None and com._any_none(periods, start, end):
+ freq = 'D'
+
return DatetimeIndex(start=start, end=end, periods=periods,
freq=freq, tz=tz, normalize=normalize, name=name,
closed=closed, **kwargs)
| ENH: date_range not working as it intuitively should when specifying start, end, and periods
#### Code Sample, a copy-pastable example if possible
```python
>>> start = pd.Timestamp('2008-01-02 07:51:37.999477')
>>> end = start + pd.Timedelta('2 hours')
>>> pd.date_range(start, end, periods=1000) # Intuitively a linearly spaced time series
Traceback (most recent call last):
File "<ipython-input-69-2304a28824c6>", line 1, in <module>
pd.date_range(start, end, periods=1000)
File "E:\Anaconda3\lib\site-packages\pandas\core\indexes\datetimes.py", line 2057, in date_range
closed=closed, **kwargs)
File "E:\Anaconda3\lib\site-packages\pandas\util\_decorators.py", line 118, in wrapper
return func(*args, **kwargs)
File "E:\Anaconda3\lib\site-packages\pandas\core\indexes\datetimes.py", line 324, in __new__
ambiguous=ambiguous)
File "E:\Anaconda3\lib\site-packages\pandas\core\indexes\datetimes.py", line 421, in _generate
raise ValueError('Of the three parameters: start, end, and '
ValueError: Of the three parameters: start, end, and periods, exactly two must be specified
```
#### Problem description
I need a DatetimeIndex object to later use as the index of a Series. The DatetimeIndex should start at `start`, end at `end`, and have a fixed number of elements (1000). Intuitively, this should work with `pd.date_range`, but it doesn't, and I haven't found a good explanation of why this is the case.
I have found a workaround on Stackoverflow (https://stackoverflow.com/questions/25796030/how-can-i-use-pandas-date-range-to-obtain-a-time-series-with-n-specified-perio) that does work:
```python
>>> start = pd.Timestamp('2008-01-02 07:51:37.999477')
>>> end = start + pd.Timedelta('2 hours')
>>> pd.to_datetime(np.linspace(start.value, end.value, 1000))
DatetimeIndex(['2008-01-02 07:51:37.999476992',
'2008-01-02 07:51:45.206684160',
'2008-01-02 07:51:52.413891328',
'2008-01-02 07:51:59.621098496',
'2008-01-02 07:52:06.828305920',
'2008-01-02 07:52:14.035513088',
'2008-01-02 07:52:21.242720256',
'2008-01-02 07:52:28.449927424',
'2008-01-02 07:52:35.657134592',
'2008-01-02 07:52:42.864341760',
...
'2008-01-02 09:50:33.134612224',
'2008-01-02 09:50:40.341819392',
'2008-01-02 09:50:47.549026560',
'2008-01-02 09:50:54.756233728',
'2008-01-02 09:51:01.963440896',
'2008-01-02 09:51:09.170648064',
'2008-01-02 09:51:16.377855488',
'2008-01-02 09:51:23.585062656',
'2008-01-02 09:51:30.792269824',
'2008-01-02 09:51:37.999476992'],
dtype='datetime64[ns]', length=1000, freq=None)
```
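Wrapped up as a small reusable helper, the same idea looks like this (just a sketch; `linspace_datetimes` is a name I made up, not a pandas API):

```python
import numpy as np
import pandas as pd

def linspace_datetimes(start, end, periods, name=None):
    # Interpolate over the integer nanosecond representations of the two
    # endpoints, then convert the floats back into a DatetimeIndex.
    start, end = pd.Timestamp(start), pd.Timestamp(end)
    values = np.linspace(start.value, end.value, periods)
    return pd.DatetimeIndex(pd.to_datetime(values), name=name)
```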
#### Expected Output
```python
>>> start = pd.Timestamp('2008-01-02 07:51:37.999477')
>>> end = start + pd.Timedelta('2 hours')
>>> pd.date_range(start, end, periods=1000)
DatetimeIndex(['2008-01-02 07:51:37.999476992',
'2008-01-02 07:51:45.206684160',
'2008-01-02 07:51:52.413891328',
'2008-01-02 07:51:59.621098496',
'2008-01-02 07:52:06.828305920',
'2008-01-02 07:52:14.035513088',
'2008-01-02 07:52:21.242720256',
'2008-01-02 07:52:28.449927424',
'2008-01-02 07:52:35.657134592',
'2008-01-02 07:52:42.864341760',
...
'2008-01-02 09:50:33.134612224',
'2008-01-02 09:50:40.341819392',
'2008-01-02 09:50:47.549026560',
'2008-01-02 09:50:54.756233728',
'2008-01-02 09:51:01.963440896',
'2008-01-02 09:51:09.170648064',
'2008-01-02 09:51:16.377855488',
'2008-01-02 09:51:23.585062656',
'2008-01-02 09:51:30.792269824',
'2008-01-02 09:51:37.999476992'],
dtype='datetime64[ns]', length=1000, freq=None)
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.4.final.0
python-bits: 64
OS: Windows
OS-release: 7
machine: AMD64
processor: Intel64 Family 6 Model 44 Stepping 2, GenuineIntel
byteorder: little
LC_ALL: None
LANG: en
LOCALE: None.None
pandas: 0.22.0
pytest: 3.3.2
pip: 9.0.1
setuptools: 38.4.0
Cython: 0.27.3
numpy: 1.14.2
scipy: 1.0.1
pyarrow: None
xarray: None
IPython: 6.2.1
sphinx: 1.6.6
patsy: 0.5.0
dateutil: 2.6.1
pytz: 2017.3
blosc: None
bottleneck: 1.2.1
tables: 3.4.2
numexpr: 2.6.4
feather: None
matplotlib: 2.1.2
openpyxl: 2.4.10
xlrd: 1.1.0
xlwt: 1.3.0
xlsxwriter: 1.0.2
lxml: 4.1.1
bs4: 4.6.0
html5lib: 0.9999999
sqlalchemy: 1.2.1
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| I suppose if freq is NOT specified then you could accept all three and give a linspace repr. What breaks if you do that?
If freq is specified then this should for sure raise. The idea of the error is simply to be helpful in telling you that you may have overspecified the args.
Your workaround from SO is a pretty reasonable solution, not sure we should support this. I definitely would not want to change the default behavior, but suppose we could with something like `freq=None` or `freq='interpolate'`
I agree that we definitely shouldn't change the default behaviour. As I see it, the default `freq` is 'D', so this:
```python
pd.date_range(start, end, periods=1000)
```
would not work, because it is the same as
```python
pd.date_range(start, end, periods=1000, freq='D')
```
which really should not work. However, if the user explicitly sets `freq=None`, the linspace behaviour would be practical:
```python
pd.date_range(start, end, periods=1000, freq=None)
```
I suggest a simple change in the `date_range` function like so:
```python
def date_range(start=None, end=None, periods=None, freq='D', tz=None,
normalize=False, name=None, closed=None, **kwargs):
"""
...
"""
    # Return a linearly spaced DatetimeIndex if `freq` is explicitly None
    # while `start`, `end`, and `periods` are all given
    if start is not None and end is not None and periods is not None and freq is None:
        # np.linspace needs integer nanosecond values, so coerce the
        # endpoints first (Timestamp is already imported in this module)
        start, end = Timestamp(start), Timestamp(end)
        di = tools.to_datetime(np.linspace(start.value, end.value, periods),
                               **kwargs)
        if name:
            di.name = name
        return di
return DatetimeIndex(start=start, end=end, periods=periods,
freq=freq, tz=tz, normalize=normalize, name=name,
closed=closed, **kwargs)
```
together with appropriate changes in the docstring and test functions. This would provide the desired behaviour, while not changing anything else.
I am not yet a contributor, so I cannot implement this myself (unless I am made one :upside_down_face:)
Because the change is really just a convenience feature of the `date_range` function, I don't think it would be wise to implement this directly in the DatetimeIndex constructor, or in another function, like `bdate_range`.
FWIW, we could change the default `freq` to `None`, and document that it's `'D'` when only two of `start`, `end`, and `periods` are specified. That way `pd.date_range(start, end, periods=100)` will work.
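A sketch of that resolution logic (`com._any_none` is the private helper the patch at the top of this record actually uses; the function name here is hypothetical):

```python
import pandas.core.common as com

def _resolve_default_freq(start, end, periods, freq):
    # Keep 'D' as the effective default unless start, end, and periods are
    # all supplied; only then does freq stay None and trigger the linearly
    # spaced (np.linspace-based) branch of date_range.
    if freq is None and com._any_none(periods, start, end):
        freq = 'D'
    return freq
```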
> I am not yet a contributor, so I cannot implement this myself (unless I am made one 🙃)
Anyone can make a pull request: http://pandas-docs.github.io/pandas-docs-travis/contributing.html
That's a good idea, I like it much better.
> Anyone can make a pull request
Oh okay I didn't know that (never done this before), then I am going to try to implement it :) | 2018-04-27T15:14:38Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-69-2304a28824c6>", line 1, in <module>
pd.date_range(start, end, periods=1000)
File "E:\Anaconda3\lib\site-packages\pandas\core\indexes\datetimes.py", line 2057, in date_range
closed=closed, **kwargs)
File "E:\Anaconda3\lib\site-packages\pandas\util\_decorators.py", line 118, in wrapper
return func(*args, **kwargs)
File "E:\Anaconda3\lib\site-packages\pandas\core\indexes\datetimes.py", line 324, in __new__
ambiguous=ambiguous)
File "E:\Anaconda3\lib\site-packages\pandas\core\indexes\datetimes.py", line 421, in _generate
raise ValueError('Of the three parameters: start, end, and '
ValueError: Of the three parameters: start, end, and periods, exactly two must be specified
| 11,870 |
|||
pandas-dev/pandas | pandas-dev__pandas-20933 | d15c104d0596454c289ba48906a397be45dda959 | diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -96,7 +96,7 @@ of multi-axis indexing.
.. versionadded:: 0.18.1
- See more at :ref:`Selection by Position <indexing.integer>`,
+ See more at :ref:`Selection by Position <indexing.integer>`,
:ref:`Advanced Indexing <advanced>` and :ref:`Advanced
Hierarchical <advanced.advanced_hierarchical>`.
@@ -125,7 +125,7 @@ Basics
As mentioned when introducing the data structures in the :ref:`last section
<basics>`, the primary function of indexing with ``[]`` (a.k.a. ``__getitem__``
for those familiar with implementing class behavior in Python) is selecting out
-lower-dimensional slices. The following table shows return type values when
+lower-dimensional slices. The following table shows return type values when
indexing pandas objects with ``[]``:
.. csv-table::
@@ -235,7 +235,7 @@ as an attribute:
- The attribute will not be available if it conflicts with an existing method name, e.g. ``s.min`` is not allowed.
- Similarly, the attribute will not be available if it conflicts with any of the following list: ``index``,
- ``major_axis``, ``minor_axis``, ``items``, ``labels``.
+ ``major_axis``, ``minor_axis``, ``items``.
- In any of these cases, standard indexing will still work, e.g. ``s['1']``, ``s['min']``, and ``s['index']`` will
access the corresponding element or column.
@@ -888,10 +888,10 @@ Boolean indexing
.. _indexing.boolean:
Another common operation is the use of boolean vectors to filter the data.
-The operators are: ``|`` for ``or``, ``&`` for ``and``, and ``~`` for ``not``.
+The operators are: ``|`` for ``or``, ``&`` for ``and``, and ``~`` for ``not``.
These **must** be grouped by using parentheses, since by default Python will
-evaluate an expression such as ``df.A > 2 & df.B < 3`` as
-``df.A > (2 & df.B) < 3``, while the desired evaluation order is
+evaluate an expression such as ``df.A > 2 & df.B < 3`` as
+``df.A > (2 & df.B) < 3``, while the desired evaluation order is
``(df.A > 2) & (df.B < 3)``.
Using a boolean vector to index a Series works exactly as in a NumPy ndarray:
@@ -944,8 +944,8 @@ and :ref:`Advanced Indexing <advanced>` you may select along more than one axis
Indexing with isin
------------------
-Consider the :meth:`~Series.isin` method of ``Series``, which returns a boolean
-vector that is true wherever the ``Series`` elements exist in the passed list.
+Consider the :meth:`~Series.isin` method of ``Series``, which returns a boolean
+vector that is true wherever the ``Series`` elements exist in the passed list.
This allows you to select rows where one or more columns have values you want:
.. ipython:: python
@@ -1666,7 +1666,7 @@ Set an index
.. _indexing.set_index:
-DataFrame has a :meth:`~DataFrame.set_index` method which takes a column name
+DataFrame has a :meth:`~DataFrame.set_index` method which takes a column name
(for a regular ``Index``) or a list of column names (for a ``MultiIndex``).
To create a new, re-indexed DataFrame:
@@ -1707,9 +1707,9 @@ the index in-place (without creating a new object):
Reset the index
~~~~~~~~~~~~~~~
-As a convenience, there is a new function on DataFrame called
-:meth:`~DataFrame.reset_index` which transfers the index values into the
-DataFrame's columns and sets a simple integer index.
+As a convenience, there is a new function on DataFrame called
+:meth:`~DataFrame.reset_index` which transfers the index values into the
+DataFrame's columns and sets a simple integer index.
This is the inverse operation of :meth:`~DataFrame.set_index`.
| Update docs on reserved attributes
#### Code Sample, a copy-pastable example if possible
```python
>>> t = pd.DataFrame({'foo': [1,2], 'labels': [3,4], 'bar': [5,6]})
>>> t.foo
0 1
1 2
Name: foo, dtype: int64
>>> t.bar
0 5
1 6
Name: bar, dtype: int64
>>> t.labels
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/mrg/git/xdash/env/lib/python2.7/site-packages/pandas/core/generic.py", line 3077, in __getattr__
return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'labels'
```
#### Problem description
I would expect the `labels` column to be accessible as an attribute on the DataFrame, like `t.foo` and `t.bar`.
Instead, `t.labels` gives an AttributeError.
I eventually found the relevant section of the docs, which notes that `index`, `major_axis`, `minor_axis`, `items`, and `labels` are reserved.
http://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute-access
#### Expected Output
Given the limitation, I would expect a warning at DataFrame creation time that the `labels` column will not be accessible as an attribute.
I see that this issue has been raised before (#8082, #8100) and closed, but I saw no warning. It looks like the only change was to expand the documentation (9b12ccbcf2bc5893dcca262c81ac5dc28096c682).
The suggestion from @jtratner [on 7 Sep 2014](https://github.com/pandas-dev/pandas/pull/8100#issuecomment-54756388) looked good to me:
```
UserWarning: Using reserved column name `labels` will be inaccessible by `getattr` calls - you must use `[]` instead.
```
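Purely as an illustration of what such a check could look like (`warn_on_shadowed_columns` is a hypothetical helper, not a pandas API, and as the discussion below shows the maintainers decided against emitting warnings):

```python
import warnings
import pandas as pd

def warn_on_shadowed_columns(df):
    # A column is reachable via attribute access only if getattr returns the
    # column itself; otherwise an existing attribute or method (e.g. 'min',
    # 'index', or, in older versions, 'labels') shadows it.
    for col in df.columns:
        if isinstance(col, str) and not isinstance(getattr(df, col, None),
                                                   pd.Series):
            warnings.warn("column {!r} is not accessible via attribute "
                          "access; use df[{!r}] instead".format(col, col),
                          UserWarning)
```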
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.10.final.0
python-bits: 64
OS: Darwin
OS-release: 17.4.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: None.None
pandas: 0.20.3
pytest: None
pip: 9.0.1
setuptools: 36.4.0
Cython: None
numpy: 1.13.1
scipy: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2017.2
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: 4.6.0
html5lib: 0.999999999
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
pandas_gbq: None
pandas_datareader: 0.5.0
</details>
| On master, `labels` goes through for me.
```
In [1]: import pandas as pd
In [2]: t = pd.DataFrame({'foo': [1,2], 'labels': [3,4], 'bar': [5,6]})
In [3]: t.labels
Out[3]:
0 3
1 4
Name: labels, dtype: int64
```
Did this change recently? If so, we should update the docs.
I don't think we would do a warning for this (e.g. for `index`). The `.` access is a convenience for interactive use. We wouldn't want to force people using these column names to catch warnings every time they make a Series / DataFrame.
Fair enough. I can see that not everyone will use the `.` access, and for them the warning would be an annoyance. Now that I'm aware of the issue, it shouldn't catch me out again.
Interesting that `t.labels` works on master. I was initially running 0.20.3, but I get the same behaviour (AttributeError) from 0.22.0.
Added this as a docs issue. We should update the reserved attr names to be the ones that we're using.
I can take a stab at it.
👍 thanks @sharad-vm. Let us know if you get stuck. | 2018-05-02T21:14:13Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/mrg/git/xdash/env/lib/python2.7/site-packages/pandas/core/generic.py", line 3077, in __getattr__
return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'labels'
| 11,881 |
|||
pandas-dev/pandas | pandas-dev__pandas-20938 | ce4ab828d882a0c50f2f63921621ccae0d14b5ae | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -956,6 +956,7 @@ Deprecations
retain the previous behavior, use a list instead of a tuple (:issue:`18314`)
- ``Series.valid`` is deprecated. Use :meth:`Series.dropna` instead (:issue:`18800`).
- :func:`read_excel` has deprecated the ``skip_footer`` parameter. Use ``skipfooter`` instead (:issue:`18836`)
+- :meth:`ExcelFile.parse` has deprecated ``sheetname`` in favor of ``sheet_name`` for consistency with :func:`read_excel` (:issue:`20920`).
- The ``is_copy`` attribute is deprecated and will be removed in a future version (:issue:`18801`).
- ``IntervalIndex.from_intervals`` is deprecated in favor of the :class:`IntervalIndex` constructor (:issue:`19263`)
- ``DataFrame.from_items`` is deprecated. Use :func:`DataFrame.from_dict` instead, or ``DataFrame.from_dict(OrderedDict())`` if you wish to preserve the key order (:issue:`17320`, :issue:`17312`)
diff --git a/pandas/io/excel.py b/pandas/io/excel.py
--- a/pandas/io/excel.py
+++ b/pandas/io/excel.py
@@ -303,20 +303,11 @@ def read_excel(io,
convert_float=True,
**kwds):
- # Can't use _deprecate_kwarg since sheetname=None has a special meaning
- if is_integer(sheet_name) and sheet_name == 0 and 'sheetname' in kwds:
- warnings.warn("The `sheetname` keyword is deprecated, use "
- "`sheet_name` instead", FutureWarning, stacklevel=2)
- sheet_name = kwds.pop("sheetname")
- elif 'sheetname' in kwds:
- raise TypeError("Cannot specify both `sheet_name` and `sheetname`. "
- "Use just `sheet_name`")
-
if not isinstance(io, ExcelFile):
io = ExcelFile(io, engine=engine)
- return io._parse_excel(
- sheetname=sheet_name,
+ return io.parse(
+ sheet_name=sheet_name,
header=header,
names=names,
index_col=index_col,
@@ -435,7 +426,16 @@ def parse(self,
docstring for more info on accepted parameters
"""
- return self._parse_excel(sheetname=sheet_name,
+ # Can't use _deprecate_kwarg since sheetname=None has a special meaning
+ if is_integer(sheet_name) and sheet_name == 0 and 'sheetname' in kwds:
+ warnings.warn("The `sheetname` keyword is deprecated, use "
+ "`sheet_name` instead", FutureWarning, stacklevel=2)
+ sheet_name = kwds.pop("sheetname")
+ elif 'sheetname' in kwds:
+ raise TypeError("Cannot specify both `sheet_name` "
+ "and `sheetname`. Use just `sheet_name`")
+
+ return self._parse_excel(sheet_name=sheet_name,
header=header,
names=names,
index_col=index_col,
@@ -489,7 +489,7 @@ def _excel2num(x):
return i in usecols
def _parse_excel(self,
- sheetname=0,
+ sheet_name=0,
header=0,
names=None,
index_col=None,
@@ -585,14 +585,14 @@ def _parse_cell(cell_contents, cell_typ):
ret_dict = False
# Keep sheetname to maintain backwards compatibility.
- if isinstance(sheetname, list):
- sheets = sheetname
+ if isinstance(sheet_name, list):
+ sheets = sheet_name
ret_dict = True
- elif sheetname is None:
+ elif sheet_name is None:
sheets = self.sheet_names
ret_dict = True
else:
- sheets = [sheetname]
+ sheets = [sheet_name]
# handle same-type duplicates.
sheets = list(OrderedDict.fromkeys(sheets).keys())
| ExcelFile.parse() and pd.read_excel() have different behavior for the "sheetname" argument
#### Code Sample, a copy-pastable example if possible
* `pd.read_excel()`
```python
>>> import pandas as pd
>>> pd.read_excel('sampledata.xlsx', sheet_name='Sheet2')
a b c
0 this is sheet2
>>> pd.read_excel('sampledata.xlsx', sheetname='Sheet2')
/Users/<myname>/.pyenv/versions/miniconda3-latest/envs/py36/envs/py36/lib/python3.6/site-packages/pandas/util/_decorators.py:118: FutureWarning: The `sheetname` keyword is deprecated, use `sheet_name` instead
return func(*args, **kwargs)
a b c
0 this is sheet2
```
* `ExcelFile.parse()`
```python
>>> import pandas as pd
>>> xlsx_file=pd.ExcelFile('sampledata.xlsx')
>>> xlsx_file.sheet_names
['Sheet1', 'Sheet2', 'Sheet3']
>>> xlsx_file.parse(sheet_name='Sheet2')
a b c
0 this is sheet2
>>> xlsx_file.parse(sheetname='Sheet2')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/<myname>/.pyenv/versions/miniconda3-latest/envs/py36/envs/py36/lib/python3.6/site-packages/pandas/io/excel.py", line 327, in parse
**kwds)
TypeError: _parse_excel() got multiple values for keyword argument 'sheetname'
```
#### Problem description
* The documentation says ExcelFile.parse() is "Equivalent to read_excel(ExcelFile, ...)", but when using the argument `sheetname`, which is deprecated, these two give different results.
* pd.read_excel() works with `FutureWarning`, but ExcelFile.parse() gives `TypeError` instead.
#### Expected Output
ExcelFile.parse() should raise `FutureWarning` and use the value of `sheetname` as that of `sheet_name`
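A sketch of the kind of shim that would do this (`_handle_deprecated_sheetname` is a made-up name; the actual fix, shown in the patch at the top of this record, moves the equivalent logic from `read_excel` into `ExcelFile.parse`):

```python
import warnings

def _handle_deprecated_sheetname(kwds, sheet_name=0):
    # Accept the deprecated ``sheetname`` keyword, warn, and map it onto
    # ``sheet_name``; raise if the caller supplied both keywords.
    if 'sheetname' in kwds:
        if sheet_name == 0:  # sheet_name was left at its default
            warnings.warn("The `sheetname` keyword is deprecated, use "
                          "`sheet_name` instead", FutureWarning, stacklevel=3)
            return kwds.pop('sheetname')
        raise TypeError("Cannot specify both `sheet_name` and `sheetname`. "
                        "Use just `sheet_name`")
    return sheet_name
```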
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.4.final.0
python-bits: 64
OS: Darwin
OS-release: 17.5.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: ja_JP.UTF-8
LOCALE: ja_JP.UTF-8
pandas: 0.22.0
pytest: 3.3.2
pip: 9.0.1
setuptools: 38.4.0
Cython: 0.27.3
numpy: 1.14.0
scipy: 1.0.0
pyarrow: None
xarray: None
IPython: 6.2.1
sphinx: 1.6.6
patsy: 0.5.0
dateutil: 2.6.1
pytz: 2017.3
blosc: None
bottleneck: 1.2.1
tables: 3.4.2
numexpr: 2.6.4
feather: None
matplotlib: 2.1.2
openpyxl: 2.4.10
xlrd: 1.1.0
xlwt: 1.2.0
xlsxwriter: 1.0.2
lxml: 4.1.1
bs4: 4.6.0
html5lib: 1.0.1
sqlalchemy: 1.2.1
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| you are welcome to submit a fix to this, though ``sheetname`` is going to be removed in the next version. | 2018-05-03T01:02:09Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/<myname>/.pyenv/versions/miniconda3-latest/envs/py36/envs/py36/lib/python3.6/site-packages/pandas/io/excel.py", line 327, in parse
**kwds)
TypeError: _parse_excel() got multiple values for keyword argument 'sheetname'
| 11,882 |
|||
pandas-dev/pandas | pandas-dev__pandas-20946 | 28dbae9f306ade549eb1edd5484b3e1da758bcdb | diff --git a/doc/source/api.rst b/doc/source/api.rst
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -1632,6 +1632,8 @@ IntervalIndex Components
IntervalIndex.length
IntervalIndex.values
IntervalIndex.is_non_overlapping_monotonic
+ IntervalIndex.get_loc
+ IntervalIndex.get_indexer
.. _api.multiindex:
diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -1245,6 +1245,7 @@ Indexing
- Bug in ``Series.is_unique`` where extraneous output in stderr is shown if Series contains objects with ``__ne__`` defined (:issue:`20661`)
- Bug in ``.loc`` assignment with a single-element list-like incorrectly assigns as a list (:issue:`19474`)
- Bug in partial string indexing on a ``Series/DataFrame`` with a monotonic decreasing ``DatetimeIndex`` (:issue:`19362`)
+- Bug in :meth:`IntervalIndex.get_loc` and :meth:`IntervalIndex.get_indexer` when used with an :class:`IntervalIndex` containing a single interval (:issue:`17284`, :issue:`20921`)
MultiIndex
^^^^^^^^^^
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -159,20 +159,22 @@ class IntervalIndex(IntervalMixin, Index):
Attributes
----------
- left
- right
closed
- mid
+ is_non_overlapping_monotonic
+ left
length
+ mid
+ right
values
- is_non_overlapping_monotonic
Methods
-------
+ contains
from_arrays
- from_tuples
from_breaks
- contains
+ from_tuples
+ get_indexer
+ get_loc
Examples
---------
@@ -938,8 +940,11 @@ def _searchsorted_monotonic(self, label, side, exclude_label=False):
if isinstance(label, IntervalMixin):
raise NotImplementedError
+ # GH 20921: "not is_monotonic_increasing" for the second condition
+ # instead of "is_monotonic_decreasing" to account for single element
+ # indexes being both increasing and decreasing
if ((side == 'left' and self.left.is_monotonic_increasing) or
- (side == 'right' and self.left.is_monotonic_decreasing)):
+ (side == 'right' and not self.left.is_monotonic_increasing)):
sub_idx = self.right
if self.open_right or exclude_label:
label = _get_next_label(label)
| BUG: IntervalIndex.get_loc fails when there is only one entry
#### Code Sample, a copy-pastable example if possible
```python
>>> import pandas as pd
>>> pd.IntervalIndex.from_tuples([(1,100)]).get_loc(50)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/imre/code/pandas/pandas/core/indexes/interval.py", line 1049, in get_loc
raise KeyError(original_key)
KeyError: 50
```
#### Problem description
50 is contained in the interval (1, 100), so this should not raise `KeyError`.
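Per the comment added in the patch above, the root cause is a monotonicity corner case: a single-element index reports as both monotonic increasing and monotonic decreasing, so a check written as `is_monotonic_decreasing` sends a one-interval index down the wrong branch. A minimal demonstration:

```python
import pandas as pd

idx = pd.IntervalIndex.from_tuples([(1, 100)])
# Both are True for a length-1 index, which is why the original
# "side == 'right' and ...is_monotonic_decreasing" test misfired.
print(idx.left.is_monotonic_increasing)  # True
print(idx.left.is_monotonic_decreasing)  # True
```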
#### Expected Output
0
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: c4da79b5b322c73d8e61d1cb98ac4ab1e2438b40
python: 3.6.3.final.0
python-bits: 64
OS: Linux
OS-release: 4.13.0-39-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.23.0.dev0+824.gc4da79b5b
pytest: None
pip: 10.0.1
setuptools: 39.1.0
Cython: 0.28.2
numpy: 1.14.3
scipy: None
pyarrow: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.7.2
pytz: 2018.4
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| xref https://github.com/pandas-dev/pandas/issues/17284#issuecomment-325890615
Not exactly a dupe of that issue, but the same fix will probably resolve both of these. | 2018-05-04T00:25:09Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/imre/code/pandas/pandas/core/indexes/interval.py", line 1049, in get_loc
raise KeyError(original_key)
KeyError: 50
| 11,885 |
|||
pandas-dev/pandas | pandas-dev__pandas-20959 | bd4332f4bff135d4119291f66e98f76cc5f9a80e | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -1320,6 +1320,7 @@ Groupby/Resample/Rolling
- Bug in :func:`DataFrame.resample` that dropped timezone information (:issue:`13238`)
- Bug in :func:`DataFrame.groupby` where transformations using ``np.all`` and ``np.any`` were raising a ``ValueError`` (:issue:`20653`)
- Bug in :func:`DataFrame.resample` where ``ffill``, ``bfill``, ``pad``, ``backfill``, ``fillna``, ``interpolate``, and ``asfreq`` were ignoring ``loffset``. (:issue:`20744`)
+- Bug in :func:`DataFrame.groupby` when applying a function that has mixed data types and the user supplied function can fail on the grouping column (:issue:`20949`)
Sparse
^^^^^^
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -6,6 +6,7 @@
import warnings
import copy
from textwrap import dedent
+from contextlib import contextmanager
from pandas.compat import (
zip, range, lzip,
@@ -549,6 +550,16 @@ def f(self):
return attr
+@contextmanager
+def _group_selection_context(groupby):
+ """
+ set / reset the _group_selection_context
+ """
+ groupby._set_group_selection()
+ yield groupby
+ groupby._reset_group_selection()
+
+
class _GroupBy(PandasObject, SelectionMixin):
_group_selection = None
_apply_whitelist = frozenset([])
@@ -696,26 +707,32 @@ def _reset_group_selection(self):
each group regardless of whether a group selection was previously set.
"""
if self._group_selection is not None:
- self._group_selection = None
# GH12839 clear cached selection too when changing group selection
+ self._group_selection = None
self._reset_cache('_selected_obj')
def _set_group_selection(self):
"""
Create group based selection. Used when selection is not passed
directly but instead via a grouper.
+
+ NOTE: this should be paired with a call to _reset_group_selection
"""
grp = self.grouper
- if self.as_index and getattr(grp, 'groupings', None) is not None and \
- self.obj.ndim > 1:
- ax = self.obj._info_axis
- groupers = [g.name for g in grp.groupings
- if g.level is None and g.in_axis]
+ if not (self.as_index and
+ getattr(grp, 'groupings', None) is not None and
+ self.obj.ndim > 1 and
+ self._group_selection is None):
+ return
+
+ ax = self.obj._info_axis
+ groupers = [g.name for g in grp.groupings
+ if g.level is None and g.in_axis]
- if len(groupers):
- self._group_selection = ax.difference(Index(groupers)).tolist()
- # GH12839 clear selected obj cache when group selection changes
- self._reset_cache('_selected_obj')
+ if len(groupers):
+ # GH12839 clear selected obj cache when group selection changes
+ self._group_selection = ax.difference(Index(groupers)).tolist()
+ self._reset_cache('_selected_obj')
def _set_result_index_ordered(self, result):
# set the result index on the passed values object and
@@ -781,10 +798,10 @@ def _make_wrapper(self, name):
type(self).__name__))
raise AttributeError(msg)
- # need to setup the selection
- # as are not passed directly but in the grouper
self._set_group_selection()
+ # need to setup the selection
+ # as are not passed directly but in the grouper
f = getattr(self._selected_obj, name)
if not isinstance(f, types.MethodType):
return self.apply(lambda self: getattr(self, name))
@@ -897,7 +914,22 @@ def f(g):
# ignore SettingWithCopy here in case the user mutates
with option_context('mode.chained_assignment', None):
- return self._python_apply_general(f)
+ try:
+ result = self._python_apply_general(f)
+ except Exception:
+
+ # gh-20949
+ # try again, with .apply acting as a filtering
+ # operation, by excluding the grouping column
+ # This would normally not be triggered
+ # except if the udf is trying an operation that
+ # fails on *some* columns, e.g. a numeric operation
+ # on a string grouper column
+
+ with _group_selection_context(self):
+ return self._python_apply_general(f)
+
+ return result
def _python_apply_general(self, f):
keys, values, mutated = self.grouper.apply(f, self._selected_obj,
@@ -1275,9 +1307,9 @@ def mean(self, *args, **kwargs):
except GroupByError:
raise
except Exception: # pragma: no cover
- self._set_group_selection()
- f = lambda x: x.mean(axis=self.axis, **kwargs)
- return self._python_agg_general(f)
+ with _group_selection_context(self):
+ f = lambda x: x.mean(axis=self.axis, **kwargs)
+ return self._python_agg_general(f)
@Substitution(name='groupby')
@Appender(_doc_template)
@@ -1293,13 +1325,12 @@ def median(self, **kwargs):
raise
except Exception: # pragma: no cover
- self._set_group_selection()
-
def f(x):
if isinstance(x, np.ndarray):
x = Series(x)
return x.median(axis=self.axis, **kwargs)
- return self._python_agg_general(f)
+ with _group_selection_context(self):
+ return self._python_agg_general(f)
@Substitution(name='groupby')
@Appender(_doc_template)
@@ -1336,9 +1367,9 @@ def var(self, ddof=1, *args, **kwargs):
if ddof == 1:
return self._cython_agg_general('var', **kwargs)
else:
- self._set_group_selection()
f = lambda x: x.var(ddof=ddof, **kwargs)
- return self._python_agg_general(f)
+ with _group_selection_context(self):
+ return self._python_agg_general(f)
@Substitution(name='groupby')
@Appender(_doc_template)
@@ -1384,6 +1415,7 @@ def f(self, **kwargs):
kwargs['numeric_only'] = numeric_only
if 'min_count' not in kwargs:
kwargs['min_count'] = min_count
+
self._set_group_selection()
try:
return self._cython_agg_general(
@@ -1453,11 +1485,11 @@ def ohlc(self):
@Appender(DataFrame.describe.__doc__)
def describe(self, **kwargs):
- self._set_group_selection()
- result = self.apply(lambda x: x.describe(**kwargs))
- if self.axis == 1:
- return result.T
- return result.unstack()
+ with _group_selection_context(self):
+ result = self.apply(lambda x: x.describe(**kwargs))
+ if self.axis == 1:
+ return result.T
+ return result.unstack()
@Substitution(name='groupby')
@Appender(_doc_template)
@@ -1778,13 +1810,12 @@ def ngroup(self, ascending=True):
.cumcount : Number the rows in each group.
"""
- self._set_group_selection()
-
- index = self._selected_obj.index
- result = Series(self.grouper.group_info[0], index)
- if not ascending:
- result = self.ngroups - 1 - result
- return result
+ with _group_selection_context(self):
+ index = self._selected_obj.index
+ result = Series(self.grouper.group_info[0], index)
+ if not ascending:
+ result = self.ngroups - 1 - result
+ return result
@Substitution(name='groupby')
def cumcount(self, ascending=True):
@@ -1835,11 +1866,10 @@ def cumcount(self, ascending=True):
.ngroup : Number the groups themselves.
"""
- self._set_group_selection()
-
- index = self._selected_obj.index
- cumcounts = self._cumcount_array(ascending=ascending)
- return Series(cumcounts, index)
+ with _group_selection_context(self):
+ index = self._selected_obj.index
+ cumcounts = self._cumcount_array(ascending=ascending)
+ return Series(cumcounts, index)
@Substitution(name='groupby')
@Appender(_doc_template)
@@ -3768,7 +3798,6 @@ def nunique(self, dropna=True):
@Appender(Series.describe.__doc__)
def describe(self, **kwargs):
- self._set_group_selection()
result = self.apply(lambda x: x.describe(**kwargs))
if self.axis == 1:
return result.T
@@ -4411,6 +4440,7 @@ def transform(self, func, *args, **kwargs):
return self._transform_general(func, *args, **kwargs)
obj = self._obj_with_exclusions
+
# nuisance columns
if not result.columns.equals(obj.columns):
return self._transform_general(func, *args, **kwargs)
| pandas.core.groupby.GroupBy.apply fails
#### Code Sample:
```python
>>> df = pd.DataFrame({'A': 'a a b'.split(), 'B': [1,2,3], 'C': [4,6, 5]})
>>> g = df.groupby('A')
>>> g.apply(lambda x: x / x.sum())
```
#### Problem description
Applying a function to a grouped data frame fails. The code above is the example code from the official pandas documentation: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.apply.html
Output to the above code:
```python
/usr/local/lib/python2.7/dist-packages/pandas/core/computation/check.py:17: UserWarning: The installed version of numexpr 2.4.3 is not supported in pandas and will be not be used
The minimum supported version is 2.4.6
ver=ver, min_ver=_MIN_NUMEXPR_VERSION), UserWarning)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/pandas/core/groupby.py", line 805, in apply
return self._python_apply_general(f)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/groupby.py", line 809, in _python_apply_general
self.axis)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/groupby.py", line 1969, in apply
res = f(group)
File "<stdin>", line 1, in <lambda>
File "/usr/local/lib/python2.7/dist-packages/pandas/core/ops.py", line 1262, in f
return self._combine_series(other, na_op, fill_value, axis, level)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 3944, in _combine_series
try_cast=try_cast)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 3958, in _combine_series_infer
try_cast=try_cast)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 3981, in _combine_match_columns
try_cast=try_cast)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 3435, in eval
return self.apply('eval', **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 3329, in apply
applied = getattr(b, f)(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 1377, in eval
result = get_result(other)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 1346, in get_result
result = func(values, other)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/ops.py", line 1216, in na_op
yrav.fill(yrav.item())
ValueError: can only convert an array of size 1 to a Python scalar
```
The error can be 'fixed' by applying another command to the grouped object first:
```python
>>> g.sum()
B C
A
a 3 10
b 3 5
>>> g.apply(lambda x: x / x.sum())
B C
0 0.333333 0.4
1 0.666667 0.6
2 1.000000 1.0
```
#### Expected Output
```python
>>> g.apply(lambda x: x / x.sum())
B C
0 0.333333 0.4
1 0.666667 0.6
2 1.000000 1.0
```
#### Output of ``pd.show_versions()``
<details>
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.12.final.0
python-bits: 64
OS: Linux
OS-release: 4.4.0-122-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: en_US.utf8
LANG: en_US.UTF-8
LOCALE: None.None
pandas: 0.22.0
pytest: 2.8.7
pip: 9.0.1
setuptools: 20.7.0
Cython: 0.23.4
numpy: 1.11.0
scipy: 0.17.0
pyarrow: None
xarray: None
IPython: 5.5.0
sphinx: None
patsy: 0.4.1
dateutil: 2.4.2
pytz: 2014.10
blosc: None
bottleneck: None
tables: 3.2.2
numexpr: 2.4.3
feather: None
matplotlib: 1.5.1
openpyxl: 2.3.0
xlrd: 0.9.4
xlwt: 0.7.5
xlsxwriter: None
lxml: 3.5.0
bs4: None
html5lib: 1.0.1
sqlalchemy: 1.0.11
pymysql: 0.7.2.None
psycopg2: 2.6.1 (dt dec mx pq3 ext lo64)
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
>>>
</details>
| Thanks for the bug report.
Hmm interesting. FWIW when I remove numexpr I can't get this to run at all, regardless of whether or not I run another agg function first.
Numexpr may be a red herring. From what I can tell the problem occurs at the following line of code:
https://github.com/pandas-dev/pandas/blob/ef019faa06f762c8c203985a11108731384b2dae/pandas/core/groupby/groupby.py#L5063
`sdata` when run without another agg function first includes the Grouping as part of the data and throws here, causing it to go down another path. `sdata` comes from `_selected_obj`.
For agg functions like `sum`, `mean`, etc., there is a call to `_set_group_selection` which takes care of setting the appropriate cached value for `_selected_obj`. I suppose a quick fix is to add a call to that at the beginning of `apply`, though I can't tell from the code alone why that isn't done across the board.
cc @jreback for any insight
Here's another example that fails with 0.23rc2 (and in 0.22.0 as well), based on code from `pandas\core\indexes\datetimes.py` in `test_agg_timezone_round_trip`:
```
In [1]: import pandas as pd
In [2]: pd.__version__
Out[2]: '0.23.0rc2'
In [3]: dates = [pd.Timestamp("2016-01-0%d 12:00:00" % i, tz='US/Pacific')
...: for i in range(1, 5)]
...: df = pd.DataFrame({'A': ['a', 'b'] * 2, 'B': dates})
...: grouped = df.groupby('A')
...:
In [4]: df
Out[4]:
A B
0 a 2016-01-01 12:00:00-08:00
1 b 2016-01-02 12:00:00-08:00
2 a 2016-01-03 12:00:00-08:00
3 b 2016-01-04 12:00:00-08:00
In [5]: grouped.apply(lambda x: x.iloc[0])[0]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
C:\EclipseWorkspaces\LiClipseWorkspace\pandas-dev\pandas36\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
3062 try:
-> 3063 return self._engine.get_loc(key)
3064 except KeyError:
C:\EclipseWorkspaces\LiClipseWorkspace\pandas-dev\pandas36\pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc (pandas\_libs\index.c:5720)()
138
--> 139 cpdef get_loc(self, object val):
140 if is_definitely_invalid_key(val):
C:\EclipseWorkspaces\LiClipseWorkspace\pandas-dev\pandas36\pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc (pandas\_libs\index.c:5566)()
160 try:
--> 161 return self.mapping.get_item(val)
162 except (TypeError, ValueError):
C:\EclipseWorkspaces\LiClipseWorkspace\pandas-dev\pandas36\pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item (pandas\_libs\hashtable.c:22442)()
1491
-> 1492 cpdef get_item(self, object val):
1493 cdef khiter_t k
C:\EclipseWorkspaces\LiClipseWorkspace\pandas-dev\pandas36\pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item (pandas\_libs\hashtable.c:22396)()
1499 else:
-> 1500 raise KeyError(val)
1501
KeyError: 0
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
<ipython-input-5-2b16555d6e05> in <module>()
----> 1 grouped.apply(lambda x: x.iloc[0])[0]
C:\EclipseWorkspaces\LiClipseWorkspace\pandas-dev\pandas36\pandas\core\frame.py in __getitem__(self, key)
2685 return self._getitem_multilevel(key)
2686 else:
-> 2687 return self._getitem_column(key)
2688
2689 def _getitem_column(self, key):
C:\EclipseWorkspaces\LiClipseWorkspace\pandas-dev\pandas36\pandas\core\frame.py in _getitem_column(self, key)
2692 # get column
2693 if self.columns.is_unique:
-> 2694 return self._get_item_cache(key)
2695
2696 # duplicate columns & possible reduce dimensionality
C:\EclipseWorkspaces\LiClipseWorkspace\pandas-dev\pandas36\pandas\core\generic.py in _get_item_cache(self, item)
2485 res = cache.get(item)
2486 if res is None:
-> 2487 values = self._data.get(item)
2488 res = self._box_item_values(item, values)
2489 cache[item] = res
C:\EclipseWorkspaces\LiClipseWorkspace\pandas-dev\pandas36\pandas\core\internals.py in get(self, item, fastpath)
4113
4114 if not isna(item):
-> 4115 loc = self.items.get_loc(item)
4116 else:
4117 indexer = np.arange(len(self.items))[isna(self.items)]
C:\EclipseWorkspaces\LiClipseWorkspace\pandas-dev\pandas36\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
3063 return self._engine.get_loc(key)
3064 except KeyError:
-> 3065 return self._engine.get_loc(self._maybe_cast_indexer(key))
3066
3067 indexer = self.get_indexer([key], method=method, tolerance=tolerance)
C:\EclipseWorkspaces\LiClipseWorkspace\pandas-dev\pandas36\pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc (pandas\_libs\index.c:5720)()
137 util.set_value_at(arr, loc, value)
138
--> 139 cpdef get_loc(self, object val):
140 if is_definitely_invalid_key(val):
141 raise TypeError("'{val}' is an invalid key".format(val=val))
C:\EclipseWorkspaces\LiClipseWorkspace\pandas-dev\pandas36\pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc (pandas\_libs\index.c:5566)()
159
160 try:
--> 161 return self.mapping.get_item(val)
162 except (TypeError, ValueError):
163 raise KeyError(val)
C:\EclipseWorkspaces\LiClipseWorkspace\pandas-dev\pandas36\pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item (pandas\_libs\hashtable.c:22442)()
1490 sizeof(uint32_t)) # flags
1491
-> 1492 cpdef get_item(self, object val):
1493 cdef khiter_t k
1494 if val != val or val is None:
C:\EclipseWorkspaces\LiClipseWorkspace\pandas-dev\pandas36\pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item (pandas\_libs\hashtable.c:22396)()
1498 return self.table.vals[k]
1499 else:
-> 1500 raise KeyError(val)
1501
1502 cpdef set_item(self, object key, Py_ssize_t val):
KeyError: 0
```
However, if you do the following, it works:
```
In [6]: grouped.nth(0)['B'].iloc[0]
Out[6]: Timestamp('2016-01-01 12:00:00-0800', tz='US/Pacific')
In [7]: grouped.apply(lambda x: x.iloc[0])[0]
Out[7]: Timestamp('2016-01-01 12:00:00-0800', tz='US/Pacific')
```
So doing one operation (in this case `nth`) prior to the `apply` then makes the `apply` work.
@Dr-Irv seems related. Some code below illustrating what I think is going on:
```python
>>> grouped.apply(lambda x: x.iloc[0])[0] # KeyError as indicator
KeyError
>>> grouped._set_group_selection()
>>> grouped.apply(lambda x: x.iloc[0])[0] # Works now, as 'A' was not part of data
Timestamp('2016-01-01 12:00:00-0800', tz='US/Pacific')
>>> grouped._reset_group_selection() # Clear out the group selection
>>> grouped.apply(lambda x: x.iloc[0])[0] # Back to failing
KeyError
```
Unfortunately, just adding this call before `_python_apply_general` broke other tests where the grouping was supposed to be part of the returned object (at least according to the tests). I'm reviewing in more detail and hope to have a PR soon.
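One way to keep the set/reset calls reliably paired is a context manager, which is essentially what the patch at the top of this record does (the `try`/`finally` below is my defensive addition; the patch simply yields and then resets):

```python
from contextlib import contextmanager

@contextmanager
def _group_selection_context(groupby):
    # Temporarily apply the group selection (dropping the grouping columns
    # from the cached _selected_obj), then restore the previous state so a
    # single operation cannot leave a stale cache behind.
    groupby._set_group_selection()
    try:
        yield groupby
    finally:
        groupby._reset_group_selection()
```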
this didn't work even in 0.20.3. not sure how we don't have a test for it though.
@Dr-Irv your example is a separate issue. pls make a new report for that one. | 2018-05-05T14:29:27Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/pandas/core/groupby.py", line 805, in apply
return self._python_apply_general(f)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/groupby.py", line 809, in _python_apply_general
self.axis)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/groupby.py", line 1969, in apply
res = f(group)
File "<stdin>", line 1, in <lambda>
File "/usr/local/lib/python2.7/dist-packages/pandas/core/ops.py", line 1262, in f
return self._combine_series(other, na_op, fill_value, axis, level)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 3944, in _combine_series
try_cast=try_cast)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 3958, in _combine_series_infer
try_cast=try_cast)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 3981, in _combine_match_columns
try_cast=try_cast)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 3435, in eval
return self.apply('eval', **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 3329, in apply
applied = getattr(b, f)(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 1377, in eval
result = get_result(other)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 1346, in get_result
result = func(values, other)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/ops.py", line 1216, in na_op
yrav.fill(yrav.item())
ValueError: can only convert an array of size 1 to a Python scalar
| 11,887 |
|||
pandas-dev/pandas | pandas-dev__pandas-21093 | c85ab083919b59ce84c220d5baf7d34ff4a0bcf2 | diff --git a/doc/source/whatsnew/v0.23.1.txt b/doc/source/whatsnew/v0.23.1.txt
--- a/doc/source/whatsnew/v0.23.1.txt
+++ b/doc/source/whatsnew/v0.23.1.txt
@@ -46,8 +46,6 @@ Documentation Changes
Bug Fixes
~~~~~~~~~
-- tab completion on :class:`Index` in IPython no longer outputs deprecation warnings (:issue:`21125`)
-
Groupby/Resample/Rolling
^^^^^^^^^^^^^^^^^^^^^^^^
@@ -101,3 +99,9 @@ Reshaping
- Bug in :func:`concat` where error was raised in concatenating :class:`Series` with numpy scalar and tuple names (:issue:`21015`)
-
+
+Other
+^^^^^
+
+- Tab completion on :class:`Index` in IPython no longer outputs deprecation warnings (:issue:`21125`)
+- Bug preventing pandas from being importable with -OO optimization (:issue:`21071`)
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -1090,12 +1090,17 @@ def apply(self, other):
class CustomBusinessMonthEnd(_CustomBusinessMonth):
- __doc__ = _CustomBusinessMonth.__doc__.replace('[BEGIN/END]', 'end')
+ # TODO(py27): Replace condition with Subsitution after dropping Py27
+ if _CustomBusinessMonth.__doc__:
+ __doc__ = _CustomBusinessMonth.__doc__.replace('[BEGIN/END]', 'end')
_prefix = 'CBM'
class CustomBusinessMonthBegin(_CustomBusinessMonth):
- __doc__ = _CustomBusinessMonth.__doc__.replace('[BEGIN/END]', 'beginning')
+ # TODO(py27): Replace condition with Subsitution after dropping Py27
+ if _CustomBusinessMonth.__doc__:
+ __doc__ = _CustomBusinessMonth.__doc__.replace('[BEGIN/END]',
+ 'beginning')
_prefix = 'CBMS'
diff --git a/pandas/util/_decorators.py b/pandas/util/_decorators.py
--- a/pandas/util/_decorators.py
+++ b/pandas/util/_decorators.py
@@ -4,7 +4,7 @@
import types
import warnings
from textwrap import dedent, wrap
-from functools import wraps, update_wrapper
+from functools import wraps, update_wrapper, WRAPPER_ASSIGNMENTS
def deprecate(name, alternative, version, alt_name=None,
@@ -20,18 +20,18 @@ def deprecate(name, alternative, version, alt_name=None,
Parameters
----------
name : str
- Name of function to deprecate
- alternative : str
- Name of function to use instead
+ Name of function to deprecate.
+ alternative : func
+ Function to use instead.
version : str
- Version of pandas in which the method has been deprecated
+ Version of pandas in which the method has been deprecated.
alt_name : str, optional
- Name to use in preference of alternative.__name__
+ Name to use in preference of alternative.__name__.
klass : Warning, default FutureWarning
stacklevel : int, default 2
msg : str
- The message to display in the warning.
- Default is '{name} is deprecated. Use {alt_name} instead.'
+ The message to display in the warning.
+ Default is '{name} is deprecated. Use {alt_name} instead.'
"""
alt_name = alt_name or alternative.__name__
@@ -39,25 +39,26 @@ def deprecate(name, alternative, version, alt_name=None,
warning_msg = msg or '{} is deprecated, use {} instead'.format(name,
alt_name)
- @wraps(alternative)
+ # adding deprecated directive to the docstring
+ msg = msg or 'Use `{alt_name}` instead.'.format(alt_name=alt_name)
+ msg = '\n '.join(wrap(msg, 70))
+
+ @Substitution(version=version, msg=msg)
+ @Appender(alternative.__doc__)
def wrapper(*args, **kwargs):
+ """
+ .. deprecated:: %(version)s
+
+ %(msg)s
+
+ """
warnings.warn(warning_msg, klass, stacklevel=stacklevel)
return alternative(*args, **kwargs)
- # adding deprecated directive to the docstring
- msg = msg or 'Use `{alt_name}` instead.'.format(alt_name=alt_name)
- tpl = dedent("""
- .. deprecated:: {version}
-
- {msg}
-
- {rest}
- """)
- rest = getattr(wrapper, '__doc__', '')
- docstring = tpl.format(version=version,
- msg='\n '.join(wrap(msg, 70)),
- rest=dedent(rest))
- wrapper.__doc__ = docstring
+ # Since we are using Substitution to create the required docstring,
+ # remove that from the attributes that should be assigned to the wrapper
+ assignments = tuple(x for x in WRAPPER_ASSIGNMENTS if x != '__doc__')
+ update_wrapper(wrapper, alternative, assigned=assignments)
return wrapper
| pandas is no longer importable with -OO optimization
#### Code Sample, a copy-pastable example if possible
In your shell:
```
$ python -OO -c 'import pandas'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/shoyer/miniconda3/envs/xarray-py36/lib/python3.6/site-packages/pandas/__init__.py", line 42, in <module>
from pandas.core.api import *
File "/Users/shoyer/miniconda3/envs/xarray-py36/lib/python3.6/site-packages/pandas/core/api.py", line 10, in <module>
from pandas.core.groupby.groupby import Grouper
File "/Users/shoyer/miniconda3/envs/xarray-py36/lib/python3.6/site-packages/pandas/core/groupby/__init__.py", line 2, in <module>
from pandas.core.groupby.groupby import (
File "/Users/shoyer/miniconda3/envs/xarray-py36/lib/python3.6/site-packages/pandas/core/groupby/groupby.py", line 46, in <module>
from pandas.core.index import (Index, MultiIndex,
File "/Users/shoyer/miniconda3/envs/xarray-py36/lib/python3.6/site-packages/pandas/core/index.py", line 2, in <module>
from pandas.core.indexes.api import *
File "/Users/shoyer/miniconda3/envs/xarray-py36/lib/python3.6/site-packages/pandas/core/indexes/api.py", line 4, in <module>
from pandas.core.indexes.base import (Index,
File "/Users/shoyer/miniconda3/envs/xarray-py36/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 7, in <module>
from pandas._libs import (lib, index as libindex, tslib as libts,
File "pandas/_libs/index.pyx", line 28, in init pandas._libs.index
File "pandas/_libs/tslibs/period.pyx", line 59, in init pandas._libs.tslibs.period
File "/Users/shoyer/miniconda3/envs/xarray-py36/lib/python3.6/site-packages/pandas/tseries/offsets.py", line 1092, in <module>
class CustomBusinessMonthEnd(_CustomBusinessMonth):
File "/Users/shoyer/miniconda3/envs/xarray-py36/lib/python3.6/site-packages/pandas/tseries/offsets.py", line 1093, in CustomBusinessMonthEnd
__doc__ = _CustomBusinessMonth.__doc__.replace('[BEGIN/END]', 'end')
AttributeError: 'NoneType' object has no attribute 'replace'
```
#### Problem description
`-OO` optimization strips out docstrings, which may give a minor performance boost (I honestly don't know). Nonetheless, users requested that xarray import properly with the `-OO` flag (https://github.com/pydata/xarray/issues/1706), so we added a regression test that caught this in the latest pandas release (https://github.com/pydata/xarray/pull/1708).
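A minimal sketch of the failure and of the guard that avoids it (mirroring the `if _CustomBusinessMonth.__doc__:` condition in the patch at the top of this record; the class names here are made up):

```python
class _Base:
    """Shared docstring with a [PLACEHOLDER] token."""

class Derived(_Base):
    # Under ``python -OO`` every __doc__ is None, so unconditionally calling
    # .replace() on it raises AttributeError at import time; guard it first.
    if _Base.__doc__:
        __doc__ = _Base.__doc__.replace('[PLACEHOLDER]', 'end')
```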
#### Expected Output
Pandas should be imported without any errors.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.4.final.0
python-bits: 64
OS: Darwin
OS-release: 17.4.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.22.0
pytest: 3.5.0
pip: 9.0.1
setuptools: 39.0.1
Cython: None
numpy: 1.14.2
scipy: None
pyarrow: None
xarray: None
IPython: 6.3.0
sphinx: None
patsy: None
dateutil: 2.7.2
pytz: 2018.3
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
Hmm OK. Just running locally, it looks like it could be fixed by adding a condition in three places that assume we have a docstring but don't if you run with those flags.
Just out of curiosity, do you only have this in your Travis configuration, or have you placed it somewhere in your unit testing? I think the latter would be easier to track, but off the top of my head I'm not sure how to do that - curious if you have any insight.
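One in-suite option would be a subprocess-based check along these lines (a sketch; the test name is made up):

```python
import subprocess
import sys

def test_import_with_oo_optimization():
    # Spawn a fresh interpreter with -OO so docstrings are stripped; the
    # call raises CalledProcessError if ``import pandas`` fails.
    subprocess.check_call([sys.executable, '-OO', '-c', 'import pandas'])
```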
You could probably test this with a Python subprocess, but I didn’t bother for xarray so it’s just in our Travis config. | 2018-05-16T21:26:01Z | [] | [] |
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/shoyer/miniconda3/envs/xarray-py36/lib/python3.6/site-packages/pandas/__init__.py", line 42, in <module>
from pandas.core.api import *
File "/Users/shoyer/miniconda3/envs/xarray-py36/lib/python3.6/site-packages/pandas/core/api.py", line 10, in <module>
from pandas.core.groupby.groupby import Grouper
File "/Users/shoyer/miniconda3/envs/xarray-py36/lib/python3.6/site-packages/pandas/core/groupby/__init__.py", line 2, in <module>
from pandas.core.groupby.groupby import (
File "/Users/shoyer/miniconda3/envs/xarray-py36/lib/python3.6/site-packages/pandas/core/groupby/groupby.py", line 46, in <module>
from pandas.core.index import (Index, MultiIndex,
File "/Users/shoyer/miniconda3/envs/xarray-py36/lib/python3.6/site-packages/pandas/core/index.py", line 2, in <module>
from pandas.core.indexes.api import *
File "/Users/shoyer/miniconda3/envs/xarray-py36/lib/python3.6/site-packages/pandas/core/indexes/api.py", line 4, in <module>
from pandas.core.indexes.base import (Index,
File "/Users/shoyer/miniconda3/envs/xarray-py36/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 7, in <module>
from pandas._libs import (lib, index as libindex, tslib as libts,
File "pandas/_libs/index.pyx", line 28, in init pandas._libs.index
File "pandas/_libs/tslibs/period.pyx", line 59, in init pandas._libs.tslibs.period
File "/Users/shoyer/miniconda3/envs/xarray-py36/lib/python3.6/site-packages/pandas/tseries/offsets.py", line 1092, in <module>
class CustomBusinessMonthEnd(_CustomBusinessMonth):
File "/Users/shoyer/miniconda3/envs/xarray-py36/lib/python3.6/site-packages/pandas/tseries/offsets.py", line 1093, in CustomBusinessMonthEnd
__doc__ = _CustomBusinessMonth.__doc__.replace('[BEGIN/END]', 'end')
AttributeError: 'NoneType' object has no attribute 'replace'
| 11,910 |
|||
pandas-dev/pandas | pandas-dev__pandas-21164 | cea0a81b3d1ade61a5c662458dd8edc135dc94f6 | diff --git a/doc/source/whatsnew/v0.23.1.txt b/doc/source/whatsnew/v0.23.1.txt
--- a/doc/source/whatsnew/v0.23.1.txt
+++ b/doc/source/whatsnew/v0.23.1.txt
@@ -97,6 +97,7 @@ I/O
- Bug in IO methods specifying ``compression='zip'`` which produced uncompressed zip archives (:issue:`17778`, :issue:`21144`)
- Bug in :meth:`DataFrame.to_stata` which prevented exporting DataFrames to buffers and most file-like objects (:issue:`21041`)
+- Bug when :meth:`pandas.io.json.json_normalize` was called with ``None`` values in nested levels in JSON (:issue:`21158`)
- Bug in :meth:`DataFrame.to_csv` and :meth:`Series.to_csv` causes encoding error when compression and encoding are specified (:issue:`21241`, :issue:`21118`)
- Bug in :meth:`read_stata` and :class:`StataReader` which did not correctly decode utf-8 strings on Python 3 from Stata 14 files (dta version 118) (:issue:`21244`)
-
diff --git a/pandas/io/json/normalize.py b/pandas/io/json/normalize.py
--- a/pandas/io/json/normalize.py
+++ b/pandas/io/json/normalize.py
@@ -80,7 +80,7 @@ def nested_to_record(ds, prefix="", sep=".", level=0):
if level != 0: # so we skip copying for top level, common case
v = new_d.pop(k)
new_d[newkey] = v
- if v is None: # pop the key if the value is None
+ elif v is None: # pop the key if the value is None
new_d.pop(k)
continue
else:
| json_normalize gives KeyError in 0.23
#### Code Sample, a copy-pastable example if possible
```python
import json
from pandas import show_versions
from pandas.io.json import json_normalize
print(show_versions())
with open('test.json', 'r') as infile:
d = json.load(infile)
normed = json_normalize(d)
```
The `test.json` file is rather lengthy, with a structure similar to:
```json
{
"subject": {
"pairs": {
"A1-A2": {
"atlases": {
"avg.corrected": {
"region": null,
"x": 49.151580810546875,
"y": -33.148521423339844,
"z": 27.572303771972656
}
}
}
}
}
}
```
This minimal version is enough to show the error below.
#### Problem description
This problem is *new* in pandas 0.23. I get the following traceback:
```pytb
Traceback (most recent call last):
File "test.py", line 10, in <module>
normed = json_normalize(d)
File "/Users/depalati/miniconda3/lib/python3.6/site-packages/pandas/io/json/normalize.py", line 203, in json_normalize
data = nested_to_record(data, sep=sep)
File "/Users/depalati/miniconda3/lib/python3.6/site-packages/pandas/io/json/normalize.py", line 88, in nested_to_record
new_d.update(nested_to_record(v, newkey, sep, level + 1))
File "/Users/depalati/miniconda3/lib/python3.6/site-packages/pandas/io/json/normalize.py", line 88, in nested_to_record
new_d.update(nested_to_record(v, newkey, sep, level + 1))
File "/Users/depalati/miniconda3/lib/python3.6/site-packages/pandas/io/json/normalize.py", line 88, in nested_to_record
new_d.update(nested_to_record(v, newkey, sep, level + 1))
File "/Users/depalati/miniconda3/lib/python3.6/site-packages/pandas/io/json/normalize.py", line 84, in nested_to_record
new_d.pop(k)
KeyError: 'region'
```
Note that running the same code on pandas 0.22 does not result in any errors. I suspect this could be related to #20399.
#### Expected Output
Expected output is a flattened `DataFrame` without any errors.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.4.final.0
python-bits: 64
OS: Darwin
OS-release: 16.7.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.23.0
pytest: 3.5.1
pip: 9.0.1
setuptools: 38.4.0
Cython: None
numpy: 1.14.2
scipy: None
pyarrow: None
xarray: 0.10.3
IPython: 6.3.1
sphinx: 1.7.2
patsy: None
dateutil: 2.7.2
pytz: 2018.3
blosc: None
bottleneck: None
tables: 3.4.2
numexpr: 2.6.4
feather: None
matplotlib: 2.2.2
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: 4.6.0
html5lib: 1.0.1
sqlalchemy: 1.2.7
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
I couldn't reproduce your error with the information provided (I was getting others) - can you please update it so the example can be fully copy/pasted to reproduce?
I'm not sure what errors you are getting. Here's a version with the JSON contents directly in the Python file as a dict:
```python
from pandas import show_versions
from pandas.io.json import json_normalize
print(show_versions())
d = {
"subject": {
"pairs": {
"A1-A2": {
"atlases": {
"avg.corrected": {
"region": None,
"x": 49.151580810546875,
"y": -33.148521423339844,
"z": 27.572303771972656
}
}
}
}
}
}
normed = json_normalize(d)
print(normed)
```
This results in the same error.
Running his code, I get the same error as well.
https://github.com/pandas-dev/pandas/blob/master/pandas/io/json/normalize.py#L79
I think the problem is here.
If I add two print statements:
```
if not isinstance(v, dict):
    print("cond1: %s" % (level != 0))
    print("cond2: %s" % (v is None))
    if level != 0:  # so we skip copying for top level, common case
        print(new_d)
        v = new_d.pop(k)
        new_d[newkey] = v
    if v is None:  # pop the key if the value is None
        print(new_d)
        new_d.pop(k)
        continue
```
I get the following printout:
```
cond1: True
cond2: True
{'region': None, 'x': 49.151580810546875, 'y': -33.148521423339844, 'z': 27.572303771972656}
{'x': 49.151580810546875, 'y': -33.148521423339844, 'z': 27.572303771972656, 'subject.pairs.A1-A2.atlases.avg.corrected.region': None}
```
So `new_d` is getting popped twice, which causes an error the second time?
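A minimal standalone sketch (hypothetical dict, not the pandas source verbatim) of that double pop:

```python
# The first pop renames the key; the second pop then fails because the old
# key no longer exists -- exactly the KeyError in the traceback above.
new_d = {'region': None}
new_d['prefix.region'] = new_d.pop('region')  # rename via pop/insert
new_d.pop('region')  # KeyError: 'region'
```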
I added a continue in the code:

```python
if level != 0:  # so we skip copying for top level, common case
    v = new_d.pop(k)
    new_d[newkey] = v
    continue
```
and the code looks to run fine.
Also, as mivade pointed out, these lines of code were changed in #20399
https://github.com/pandas-dev/pandas/pull/20399/files#diff-9c654764f5f21c8e9d58d9ebf14de86dR83 | 2018-05-22T04:39:53Z | [] | [] |
Traceback (most recent call last):
File "test.py", line 10, in <module>
normed = json_normalize(d)
File "/Users/depalati/miniconda3/lib/python3.6/site-packages/pandas/io/json/normalize.py", line 203, in json_normalize
data = nested_to_record(data, sep=sep)
File "/Users/depalati/miniconda3/lib/python3.6/site-packages/pandas/io/json/normalize.py", line 88, in nested_to_record
new_d.update(nested_to_record(v, newkey, sep, level + 1))
File "/Users/depalati/miniconda3/lib/python3.6/site-packages/pandas/io/json/normalize.py", line 88, in nested_to_record
new_d.update(nested_to_record(v, newkey, sep, level + 1))
File "/Users/depalati/miniconda3/lib/python3.6/site-packages/pandas/io/json/normalize.py", line 88, in nested_to_record
new_d.update(nested_to_record(v, newkey, sep, level + 1))
File "/Users/depalati/miniconda3/lib/python3.6/site-packages/pandas/io/json/normalize.py", line 84, in nested_to_record
new_d.pop(k)
KeyError: 'region'
| 11,918 |
|||
pandas-dev/pandas | pandas-dev__pandas-21187 | b36b451a74bc16d7ea64c158a3cd33fbfb504068 | diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt
--- a/doc/source/whatsnew/v0.23.2.txt
+++ b/doc/source/whatsnew/v0.23.2.txt
@@ -81,6 +81,7 @@ Bug Fixes
**Categorical**
+- Bug in rendering :class:`Series` with ``Categorical`` dtype in rare conditions under Python 2.7 (:issue:`21002`)
-
**Timezones**
diff --git a/pandas/_libs/hashing.pyx b/pandas/_libs/hashing.pyx
--- a/pandas/_libs/hashing.pyx
+++ b/pandas/_libs/hashing.pyx
@@ -8,8 +8,7 @@ import numpy as np
from numpy cimport ndarray, uint8_t, uint32_t, uint64_t
from util cimport _checknull
-from cpython cimport (PyString_Check,
- PyBytes_Check,
+from cpython cimport (PyBytes_Check,
PyUnicode_Check)
from libc.stdlib cimport malloc, free
@@ -62,9 +61,7 @@ def hash_object_array(ndarray[object] arr, object key, object encoding='utf8'):
cdef list datas = []
for i in range(n):
val = arr[i]
- if PyString_Check(val):
- data = <bytes>val.encode(encoding)
- elif PyBytes_Check(val):
+ if PyBytes_Check(val):
data = <bytes>val
elif PyUnicode_Check(val):
data = <bytes>val.encode(encoding)
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -553,6 +553,28 @@ def _valid_locales(locales, normalize):
# Stdout / stderr decorators
+@contextmanager
+def set_defaultencoding(encoding):
+ """
+ Set default encoding (as given by sys.getdefaultencoding()) to the given
+ encoding; restore on exit.
+
+ Parameters
+ ----------
+ encoding : str
+ """
+ if not PY2:
+ raise ValueError("set_defaultencoding context is only available "
+ "in Python 2.")
+ orig = sys.getdefaultencoding()
+ reload(sys) # noqa:F821
+ sys.setdefaultencoding(encoding)
+ try:
+ yield
+ finally:
+ sys.setdefaultencoding(orig)
+
+
def capture_stdout(f):
"""
Decorator to capture stdout in a buffer so that it can be checked
| Rendering Series[Categorical] raises UnicodeDecodeError
Calling repr() on a Series with categorical dtype can raise UnicodeDecodeError under certain conditions. These conditions appear to include:
- The series must have length at least 61 (Note: `pd.get_option('max_rows') == 60`)
- python2
- sys.getdefaultencoding() == 'ascii'
Reproduce with:
```
from pandas.core.base import StringMixin
class County(StringMixin):
name = u'San Sebastián'
state = u'PR'
def __unicode__(self):
return self.name + u', ' + self.state
cat = pd.Categorical([County() for n in range(61)])
idx = pd.Index(cat)
ser = idx.to_series()
>>> ser
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/core/base.py", line 82, in __repr__
return str(self)
File "pandas/core/base.py", line 62, in __str__
return self.__bytes__()
File "pandas/core/base.py", line 74, in __bytes__
return self.__unicode__().encode(encoding, 'replace')
File "pandas/core/series.py", line 1233, in __unicode__
max_rows=max_rows, length=show_dimensions)
File "pandas/core/series.py", line 1276, in to_string
max_rows=max_rows)
File "pandas/io/formats/format.py", line 187, in __init__
self._chk_truncate()
File "pandas/io/formats/format.py", line 201, in _chk_truncate
series.iloc[-row_num:]))
File "pandas/core/reshape/concat.py", line 225, in concat
copy=copy, sort=sort)
File "pandas/core/reshape/concat.py", line 378, in __init__
self.new_axes = self._get_new_axes()
File "pandas/core/reshape/concat.py", line 458, in _get_new_axes
new_axes[self.axis] = self._get_concat_axis()
File "pandas/core/reshape/concat.py", line 511, in _get_concat_axis
concat_axis = _concat_indexes(indexes)
File "pandas/core/reshape/concat.py", line 529, in _concat_indexes
return indexes[0].append(indexes[1:])
File "pandas/core/indexes/base.py", line 2126, in append
return self._concat(to_concat, name)
File "pandas/core/indexes/category.py", line 771, in _concat
return CategoricalIndex._concat_same_dtype(self, to_concat, name)
File "pandas/core/indexes/category.py", line 778, in _concat_same_dtype
to_concat = [self._is_dtype_compat(c) for c in to_concat]
File "pandas/core/indexes/category.py", line 232, in _is_dtype_compat
if not other.is_dtype_equal(self):
File "pandas/core/arrays/categorical.py", line 2242, in is_dtype_equal
return hash(self.dtype) == hash(other.dtype)
File "pandas/core/dtypes/dtypes.py", line 181, in __hash__
return int(self._hash_categories(self.categories, self.ordered))
File "pandas/core/dtypes/dtypes.py", line 250, in _hash_categories
cat_array = hash_array(np.asarray(categories), categorize=False)
File "pandas/core/util/hashing.py", line 296, in hash_array
hash_key, encoding)
File "pandas/_libs/hashing.pyx", line 66, in pandas._libs.hashing.hash_object_array
data = <bytes>val.encode(encoding)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 11: ordinal not in range(128)
```
It tentatively looks like the issue is in `_libs.hashing.hash_object_array`:
```
if PyString_Check(val):
data = <bytes>val.encode(encoding)
elif PyBytes_Check(val):
data = <bytes>val
elif PyUnicode_Check(val):
data = <bytes>val.encode(encoding)
```
When we get here, `val` is already a `str` in _both py2 and py3_, so we go down the `if PyString_Check(val):` branch. But when py2 tries to `encode` a `str`, it first implicitly decodes it with `sys.getdefaultencoding()`, which raises.
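A minimal Python 2 sketch of that implicit decode (an assumed session mirroring the traceback above):

```python
# Python 2 only: .encode() on a byte string first decodes it with
# sys.getdefaultencoding() ('ascii'), which fails on non-ASCII bytes.
val = u'San Sebasti\xe1n'.encode('utf8')  # a py2 str holding UTF-8 bytes
val.encode('utf8')
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 11
```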
So my best guess is that the `PyString_Check` branch just doesn't belong.
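And a quick Python 2 check of why dropping that branch should be safe (my assumption, verifiable in any py2 shell):

```python
# On Python 2, str and bytes are the same type, so py2 str values still
# satisfy PyBytes_Check once the PyString_Check branch is removed.
assert str is bytes             # True on Python 2 only
assert isinstance('abc', bytes)
```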
I'll take a look for related issues.
| 2018-05-24T02:25:08Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/core/base.py", line 82, in __repr__
return str(self)
File "pandas/core/base.py", line 62, in __str__
return self.__bytes__()
File "pandas/core/base.py", line 74, in __bytes__
return self.__unicode__().encode(encoding, 'replace')
File "pandas/core/series.py", line 1233, in __unicode__
max_rows=max_rows, length=show_dimensions)
File "pandas/core/series.py", line 1276, in to_string
max_rows=max_rows)
File "pandas/io/formats/format.py", line 187, in __init__
self._chk_truncate()
File "pandas/io/formats/format.py", line 201, in _chk_truncate
series.iloc[-row_num:]))
File "pandas/core/reshape/concat.py", line 225, in concat
copy=copy, sort=sort)
File "pandas/core/reshape/concat.py", line 378, in __init__
self.new_axes = self._get_new_axes()
File "pandas/core/reshape/concat.py", line 458, in _get_new_axes
new_axes[self.axis] = self._get_concat_axis()
File "pandas/core/reshape/concat.py", line 511, in _get_concat_axis
concat_axis = _concat_indexes(indexes)
File "pandas/core/reshape/concat.py", line 529, in _concat_indexes
return indexes[0].append(indexes[1:])
File "pandas/core/indexes/base.py", line 2126, in append
return self._concat(to_concat, name)
File "pandas/core/indexes/category.py", line 771, in _concat
return CategoricalIndex._concat_same_dtype(self, to_concat, name)
File "pandas/core/indexes/category.py", line 778, in _concat_same_dtype
to_concat = [self._is_dtype_compat(c) for c in to_concat]
File "pandas/core/indexes/category.py", line 232, in _is_dtype_compat
if not other.is_dtype_equal(self):
File "pandas/core/arrays/categorical.py", line 2242, in is_dtype_equal
return hash(self.dtype) == hash(other.dtype)
File "pandas/core/dtypes/dtypes.py", line 181, in __hash__
return int(self._hash_categories(self.categories, self.ordered))
File "pandas/core/dtypes/dtypes.py", line 250, in _hash_categories
cat_array = hash_array(np.asarray(categories), categorize=False)
File "pandas/core/util/hashing.py", line 296, in hash_array
hash_key, encoding)
File "pandas/_libs/hashing.pyx", line 66, in pandas._libs.hashing.hash_object_array
data = <bytes>val.encode(encoding)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 11: ordinal not in range(128)
| 11,923 |
||||
pandas-dev/pandas | pandas-dev__pandas-21321 | d79203af0552e73933e6f80f4284ac2697372eaa | diff --git a/doc/source/whatsnew/v0.23.1.txt b/doc/source/whatsnew/v0.23.1.txt
--- a/doc/source/whatsnew/v0.23.1.txt
+++ b/doc/source/whatsnew/v0.23.1.txt
@@ -132,3 +132,4 @@ Bug Fixes
**Other**
- Tab completion on :class:`Index` in IPython no longer outputs deprecation warnings (:issue:`21125`)
+- Bug preventing pandas being used on Windows without C++ redistributable installed (:issue:`21106`)
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -453,10 +453,10 @@ def pxd(name):
return pjoin('pandas', name + '.pxd')
-# args to ignore warnings
if is_platform_windows():
extra_compile_args = []
else:
+ # args to ignore warnings
extra_compile_args = ['-Wno-unused-function']
lib_depends = lib_depends + ['pandas/_libs/src/numpy_helper.h',
@@ -733,7 +733,7 @@ def pxd(name):
maintainer=AUTHOR,
version=versioneer.get_version(),
packages=find_packages(include=['pandas', 'pandas.*']),
- package_data={'': ['data/*', 'templates/*'],
+ package_data={'': ['data/*', 'templates/*', '_libs/*.dll'],
'pandas.tests.io': ['data/legacy_hdf/*.h5',
'data/legacy_pickle/*/*.pickle',
'data/legacy_msgpack/*/*.msgpack',
| Pandas 0.23.0 gives ImportError: DLL load failed
Installed pandas not able to import with:
```
ImportError: DLL load failed: The specified module could not be found.
```
As far as we know, this happens if you install with pip on Windows 32-bit machines (if you have another case, please comment below specifying your OS, Python version, and how you installed pandas, ...).
**Workaround for now is to keep your version at pandas 0.22.0.** (or to install using conda, or to install VS tools for C++, see https://github.com/pandas-dev/pandas/issues/21106#issuecomment-391459521)
We will fix this problem for 0.23.1.
---
original post:
#### Code Sample, a copy-pastable example if possible
```python
import pandas
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\lfletcher\AppData\Local\Programs\Python\Python36-32\lib\site-pa
ckages\pandas\__init__.py", line 42, in <module>
from pandas.core.api import *
File "C:\Users\lfletcher\AppData\Local\Programs\Python\Python36-32\lib\site-pa
ckages\pandas\core\api.py", line 10, in <module>
from pandas.core.groupby.groupby import Grouper
File "C:\Users\lfletcher\AppData\Local\Programs\Python\Python36-32\lib\site-pa
ckages\pandas\core\groupby\__init__.py", line 2, in <module>
from pandas.core.groupby.groupby import (
File "C:\Users\lfletcher\AppData\Local\Programs\Python\Python36-32\lib\site-pa
ckages\pandas\core\groupby\groupby.py", line 49, in <module>
from pandas.core.frame import DataFrame
File "C:\Users\lfletcher\AppData\Local\Programs\Python\Python36-32\lib\site-pa
ckages\pandas\core\frame.py", line 74, in <module>
from pandas.core.series import Series
File "C:\Users\lfletcher\AppData\Local\Programs\Python\Python36-32\lib\site-pa
ckages\pandas\core\series.py", line 3978, in <module>
Series._add_series_or_dataframe_operations()
File "C:\Users\lfletcher\AppData\Local\Programs\Python\Python36-32\lib\site-pa
ckages\pandas\core\generic.py", line 8891, in _add_series_or_dataframe_operation
s
from pandas.core import window as rwindow
File "C:\Users\lfletcher\AppData\Local\Programs\Python\Python36-32\lib\site-pa
ckages\pandas\core\window.py", line 36, in <module>
import pandas._libs.window as _window
ImportError: DLL load failed: The specified module could not be found.
| How'd you install pandas?
I used pip install pandas
the version is 0.23.0
Can you paste the output from your pip install?
When I installed from pip it installed successfully. I didn't see any errors.
And what was the output? You can uninstall and reinstall to get the log, if
you don't have it anymore.
```
Collecting pandas
  Using cached https://files.pythonhosted.org/packages/a2/f1/9c90efc7a128c3336bca8ceb38374c2ba97b90d590e3bb9a2cca1c87fda9/pandas-0.23.0-cp36-cp36m-win32.whl
Requirement already satisfied: pytz>=2011k in c:\users\lfletcher\appdata\local\programs\python\python36-32\lib\site-packages (from pandas)
Requirement already satisfied: numpy>=1.9.0 in c:\users\lfletcher\appdata\local\programs\python\python36-32\lib\site-packages (from pandas)
Requirement already satisfied: python-dateutil>=2.5.0 in c:\users\lfletcher\appdata\local\programs\python\python36-32\lib\site-packages (from pandas)
Requirement already satisfied: six>=1.5 in c:\users\lfletcher\appdata\local\programs\python\python36-32\lib\site-packages (from python-dateutil>=2.5.0->pandas)
Installing collected packages: pandas
Successfully installed pandas-0.23.0
```
How did you install python? You seem to be on 32 bit windows which is less tested, but I just tried with a clean conda environment and it worked fine
```
set CONDA_FORCE_32BIT=1
conda create -n py36_32 python=3.6 numpy -y
activate py36_32
pip install pandas
python -c "import pandas"
```
You might also provide your versions of Pip, setuptools and NumPy.
I have the same problem
> pip 10.0.1
> python 3.6
> NumPy 1.14.3
What platform? How did you install python?
Installation of pandas 0.22.0 seemed to help some of my students
Yes, this issue is apparently with 0.23 wheels.
If people could post their Python (installer, 32 or 64 bit), pip, & NumPy info, we may be able to track this down.
I can confirm this issue is due to 0.23
Uninstall then reinstall 0.22
```
pip uninstall pandas
pip install pandas==0.22
```
@asangansi can you please give the additional information as mentioned here https://github.com/pandas-dev/pandas/issues/21106#issuecomment-391013339. That would be helpful.
As far as I can tell most of my students have libraries similar to this (copied from PyCharm settings):
```
Package          Installed  Latest
et-xmlfile       1.0.1
jdcal            1.4        1.4
numpy            1.14.3     1.14.3
openpyxl         2.5.3      2.5.3
pandas           0.23.0     0.23.0
pip              9.0.1      10.0.1
python-dateutil  2.7.3      2.7.3
pytz             2018.4     2018.4
setuptools       28.8.0     39.2.0
six              1.11.0     1.11.0
xlrd             1.1.0      1.1.0
```
Project in PyCharm 2018.1.1
Not a whole lot to go on here, but @cgohlke do you have any guesses?
FWIW, `_window.pyx` is the first C++ pyx file in https://github.com/pandas-dev/pandas/blob/1abfd1bfdb26e9f444b4f44ffbcd2e37026e6497/setup.py#L334
Here's another failed attempt to repro, using a python.org binary.
```cmd
# download, unzip, cd to root of https://www.python.org/ftp/python/3.6.2/python-3.6.2-embed-win32.zip
rm python36._pth
curl https://bootstrap.pypa.io/get-pip.py > get-pip.py
python get-pip.py
python -m pip install pandas
python
>>> import pandas
>>>
```
C++ is probably the issue; guessing it's a missing runtime DLL, though I'm not sure of the best fix. From what I recall C++ wasn't particularly necessary for that change, so we could revert back to `c` for `0.23.1`
Could someone one this issue try install the VS 2015 Redistributable and see if that fixes it for you?
https://www.microsoft.com/en-us/download/details.aspx?id=48145
PR was #19549
The missing DLL is most probably `MSVCP140.DLL`, the MSVC C++ runtime library.
It is part of the [Microsoft Visual C++ Redistributable for Visual Studio 2015/2017](https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads).
Some projects, e.g. matplotlib, include this DLL in the binary wheels.
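As a diagnostic, one can probe for that runtime from Python; this is just a sketch, not an official check (`msvcp140` is the DLL named above):

```python
# Windows-only sketch: ctypes raises OSError when the MSVC C++ runtime DLL
# cannot be found, the same root cause as the pandas import failure above.
import ctypes

try:
    ctypes.WinDLL('msvcp140')
    print('msvcp140.dll found')
except OSError:
    print('msvcp140.dll missing - install the VC++ 2015 redistributable')
```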
we have used c++ for quite some time
But we were not using libcpp in cython code before. @jreback might that be a difference with previous c++ code of msgpack?
That's right, msgpack only depends on libc, whereas the window extension is utilizing the C++ std library.
When I downgraded my pandas version to 0.22 it was solved.
@chris-b1 can you see what mpl is doing? maybe need a directive in setup.py? or the wheel building step
@cgohlke is this something you want to do? (including the binaries? similar as matplotlib) (since we are using your wheels to upload to pypi)
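For reference, the patch at the top of this record takes the wheel-side route; a sketch of the relevant `setup.py` excerpt:

```python
# Excerpt-style sketch mirroring the patch: list the built DLLs in
# package_data so they ship inside the wheel and no separately installed
# redistributable is needed at import time.
from setuptools import setup, find_packages

setup(
    name='pandas',
    packages=find_packages(include=['pandas', 'pandas.*']),
    package_data={'': ['data/*', 'templates/*', '_libs/*.dll']},
)
```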
same here:
````
Traceback (most recent call last):
File "C:\Program Files (x86)\Python36-32\lib\site-packages\pandas\__init__.py", line 42, in <module>
from pandas.core.api import *
File "C:\Program Files (x86)\Python36-32\lib\site-packages\pandas\core\api.py", line 10, in <module>
from pandas.core.groupby.groupby import Grouper
File "C:\Program Files (x86)\Python36-32\lib\site-packages\pandas\core\groupby\__init__.py", line 2, in <module>
from pandas.core.groupby.groupby import (
File "C:\Program Files (x86)\Python36-32\lib\site-packages\pandas\core\groupby\groupby.py", line 49, in <module>
from pandas.core.frame import DataFrame
File "C:\Program Files (x86)\Python36-32\lib\site-packages\pandas\core\frame.py", line 74, in <module>
from pandas.core.series import Series
File "C:\Program Files (x86)\Python36-32\lib\site-packages\pandas\core\series.py", line 3978, in <module>
Series._add_series_or_dataframe_operations()
File "C:\Program Files (x86)\Python36-32\lib\site-packages\pandas\core\generic.py", line 8891, in _add_series_or_dataframe_operations
from pandas.core import window as rwindow
File "C:\Program Files (x86)\Python36-32\lib\site-packages\pandas\core\window.py", line 36, in <module>
import pandas._libs.window as _window
ImportError: DLL load failed: The specified module could not be found.
PS C:\Users\xxx\Dropbox\xxx>
````
Version 0.23 fails.
Version 0.22 works.
When I installed pandas 0.23 I got the same error, but when I downgraded to 0.22 it worked. Just saying, try that option; it might work.
Any updates on this? (somebody who can look at fixing the wheel building?)
Otherwise we can also revert the PR (it was only a performance improvement) *for 0.23.1*, but keep it in master so we have more time to fix the wheel building for 0.24.0.
Could someone reporting on this issue (@manish59, @sionking, @asangansi, @abador, @mezitax ) please confirm that installing the redistributable fixes this for 0.23? Every windows machine I have access to already has it installed.
I'll look at what matplotlib does later today. | 2018-06-05T01:03:57Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\lfletcher\AppData\Local\Programs\Python\Python36-32\lib\site-pa
ckages\pandas\__init__.py", line 42, in <module>
| 11,953 |
|||
pandas-dev/pandas | pandas-dev__pandas-21540 | 5fbb683712ce0312e35e06152cf8410c33cee330 | diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt
--- a/doc/source/whatsnew/v0.23.2.txt
+++ b/doc/source/whatsnew/v0.23.2.txt
@@ -65,7 +65,7 @@ Bug Fixes
**I/O**
- Bug in :func:`read_csv` that caused it to incorrectly raise an error when ``nrows=0``, ``low_memory=True``, and ``index_col`` was not ``None`` (:issue:`21141`)
--
+- Bug in :func:`json_normalize` when formatting the ``record_prefix`` with integer columns (:issue:`21536`)
-
**Plotting**
diff --git a/pandas/io/json/normalize.py b/pandas/io/json/normalize.py
--- a/pandas/io/json/normalize.py
+++ b/pandas/io/json/normalize.py
@@ -170,6 +170,11 @@ def json_normalize(data, record_path=None, meta=None,
3 Summit 1234 John Kasich Ohio OH
4 Cuyahoga 1337 John Kasich Ohio OH
+ >>> data = {'A': [1, 2]}
+ >>> json_normalize(data, 'A', record_prefix='Prefix.')
+ Prefix.0
+ 0 1
+ 1 2
"""
def _pull_field(js, spec):
result = js
@@ -259,7 +264,8 @@ def _recursive_extract(data, path, seen_meta, level=0):
result = DataFrame(records)
if record_prefix is not None:
- result.rename(columns=lambda x: record_prefix + x, inplace=True)
+ result = result.rename(
+ columns=lambda x: "{p}{c}".format(p=record_prefix, c=x))
# Data types, a problem
for k, v in compat.iteritems(meta_vals):
| json_normalize throws `TypeError` with array of values and `record_prefix`
#### Code Sample, a copy-pastable example if possible
```python
from pandas.io.json import json_normalize
df = json_normalize({'A': [1, 2]}, 'A', record_prefix='Prefix.')
print(df)
```
#### Problem description
The above code throws a `TypeError`:
```
Traceback (most recent call last):
File "c:\Users\levu\Desktop\tmp\json_normalize\main.py", line 3, in <module>
df = json_normalize({'A': [1, 2]}, 'A', record_prefix='Prefix.')
File "C:\Python36\lib\site-packages\pandas\io\json\normalize.py", line 262, in json_normalize
result.rename(columns=lambda x: record_prefix + x, inplace=True)
File "C:\Python36\lib\site-packages\pandas\util\_decorators.py", line 187, in wrapper
return func(*args, **kwargs)
File "C:\Python36\lib\site-packages\pandas\core\frame.py", line 3781, in rename
return super(DataFrame, self).rename(**kwargs)
File "C:\Python36\lib\site-packages\pandas\core\generic.py", line 973, in rename
level=level)
File "C:\Python36\lib\site-packages\pandas\core\internals.py", line 3340, in rename_axis
obj.set_axis(axis, _transform_index(self.axes[axis], mapper, level))
File "C:\Python36\lib\site-packages\pandas\core\internals.py", line 5298, in _transform_index
items = [func(x) for x in index]
File "C:\Python36\lib\site-packages\pandas\core\internals.py", line 5298, in <listcomp>
items = [func(x) for x in index]
File "C:\Python36\lib\site-packages\pandas\io\json\normalize.py", line 262, in <lambda>
result.rename(columns=lambda x: record_prefix + x, inplace=True)
TypeError: must be str, not int
```
I think line 262 in `normalize.py` should be:
```
result.rename(columns=lambda x: "{p}{c}".format(p=record_prefix,c=x), inplace=True)
```
because `x` can be an integer.
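A short sketch of the type mismatch and why the `format`-based rename avoids it:

```python
# Column labels from a plain list of records are integers, so the old
# 'record_prefix + x' concatenation fails while str.format accepts any type.
record_prefix, x = 'Prefix.', 0
# record_prefix + x                          # TypeError: must be str, not int
print('{p}{c}'.format(p=record_prefix, c=x))  # Prefix.0
```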
#### Expected Output
| |Prefix.0|
|-|-|
|0|1|
|1|2|
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.4.final.0
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 62 Stepping 4, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
pandas: 0.23.1
pytest: 3.6.1
pip: 10.0.1
setuptools: 28.8.0
Cython: None
numpy: 1.14.2
scipy: None
pyarrow: None
xarray: None
IPython: 6.3.1
sphinx: None
patsy: None
dateutil: 2.7.2
pytz: 2018.4
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: 4.6.0
html5lib: 1.0.1
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| That indeed looks suspicious! PR to patch is welcome!
cc @WillAyd
@vuminhle : Marking this for `0.23.2`, as @vuminhle has already identified a potential fix, which we can easily check and patch if this actually works if no one picks this up. | 2018-06-19T09:09:28Z | [] | [] |
Traceback (most recent call last):
File "c:\Users\levu\Desktop\tmp\json_normalize\main.py", line 3, in <module>
df = json_normalize({'A': [1, 2]}, 'A', record_prefix='Prefix.')
File "C:\Python36\lib\site-packages\pandas\io\json\normalize.py", line 262, in json_normalize
result.rename(columns=lambda x: record_prefix + x, inplace=True)
File "C:\Python36\lib\site-packages\pandas\util\_decorators.py", line 187, in wrapper
return func(*args, **kwargs)
File "C:\Python36\lib\site-packages\pandas\core\frame.py", line 3781, in rename
return super(DataFrame, self).rename(**kwargs)
File "C:\Python36\lib\site-packages\pandas\core\generic.py", line 973, in rename
level=level)
File "C:\Python36\lib\site-packages\pandas\core\internals.py", line 3340, in rename_axis
obj.set_axis(axis, _transform_index(self.axes[axis], mapper, level))
File "C:\Python36\lib\site-packages\pandas\core\internals.py", line 5298, in _transform_index
items = [func(x) for x in index]
File "C:\Python36\lib\site-packages\pandas\core\internals.py", line 5298, in <listcomp>
items = [func(x) for x in index]
File "C:\Python36\lib\site-packages\pandas\io\json\normalize.py", line 262, in <lambda>
result.rename(columns=lambda x: record_prefix + x, inplace=True)
TypeError: must be str, not int
| 11,989 |
|||
pandas-dev/pandas | pandas-dev__pandas-21541 | 2625759abeb78655558067d55a23c293628c3165 | diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt
--- a/doc/source/whatsnew/v0.23.2.txt
+++ b/doc/source/whatsnew/v0.23.2.txt
@@ -60,6 +60,7 @@ Bug Fixes
- Bug in :meth:`Index.get_indexer_non_unique` with categorical key (:issue:`21448`)
- Bug in comparison operations for :class:`MultiIndex` where error was raised on equality / inequality comparison involving a MultiIndex with ``nlevels == 1`` (:issue:`21149`)
+- Bug in :func:`DataFrame.duplicated` with a large number of columns causing a 'maximum recursion depth exceeded' (:issue:`21524`).
-
**I/O**
diff --git a/pandas/core/sorting.py b/pandas/core/sorting.py
--- a/pandas/core/sorting.py
+++ b/pandas/core/sorting.py
@@ -52,7 +52,21 @@ def _int64_cut_off(shape):
return i
return len(shape)
- def loop(labels, shape):
+ def maybe_lift(lab, size):
+ # promote nan values (assigned -1 label in lab array)
+ # so that all output values are non-negative
+ return (lab + 1, size + 1) if (lab == -1).any() else (lab, size)
+
+ labels = map(_ensure_int64, labels)
+ if not xnull:
+ labels, shape = map(list, zip(*map(maybe_lift, labels, shape)))
+
+ labels = list(labels)
+ shape = list(shape)
+
+ # Iteratively process all the labels in chunks sized so less
+ # than _INT64_MAX unique int ids will be required for each chunk
+ while True:
# how many levels can be done without overflow:
nlev = _int64_cut_off(shape)
@@ -74,7 +88,7 @@ def loop(labels, shape):
out[mask] = -1
if nlev == len(shape): # all levels done!
- return out
+ break
# compress what has been done so far in order to avoid overflow
# to retain lexical ranks, obs_ids should be sorted
@@ -83,16 +97,7 @@ def loop(labels, shape):
labels = [comp_ids] + labels[nlev:]
shape = [len(obs_ids)] + shape[nlev:]
- return loop(labels, shape)
-
- def maybe_lift(lab, size): # pormote nan values
- return (lab + 1, size + 1) if (lab == -1).any() else (lab, size)
-
- labels = map(_ensure_int64, labels)
- if not xnull:
- labels, shape = map(list, zip(*map(maybe_lift, labels, shape)))
-
- return loop(list(labels), list(shape))
+ return out
def get_compressed_ids(labels, sizes):
| "maximum recursion depth exceeded" when calculating duplicates in big DataFrame (regression comparing to the old version)
#### Code Sample, a copy-pastable example if possible
I'm currently in the middle of upgrading an old system from old pandas (0.12) to the new version (0.23.0). One of the parts of the system is duplicate-column detection in medium-sized DataFrames (~100 columns, a few thousand rows). We were detecting it like this, `dupes = df.T.duplicated()`, and previously it worked, but after the upgrade it started failing. The simplest snippet to reproduce this locally:
```python
import numpy as np
import pandas as pd
data = {}
for i in range(70):
data['col_{0:02d}'.format(i)] = np.random.randint(0, 1000, 20000)
df = pd.DataFrame(data)
dupes = df.T.duplicated()
print dupes
```
#### Problem description
In contrast to the note below, this issue isn't resolved by upgrading to the newest pandas; on the contrary, it is caused by the upgrade :) The old implementation I've copied below from 0.12 works on the snippet above:
```python
def old_duplicated(self, cols=None, take_last=False):
"""
Return boolean Series denoting duplicate rows, optionally only
considering certain columns
Parameters
----------
cols : column label or sequence of labels, optional
Only consider certain columns for identifying duplicates, by
default use all of the columns
take_last : boolean, default False
Take the last observed row in a row. Defaults to the first row
Returns
-------
duplicated : Series
"""
# kludge for #1833
def _m8_to_i8(x):
if issubclass(x.dtype.type, np.datetime64):
return x.view(np.int64)
return x
if cols is None:
values = list(_m8_to_i8(self.values.T))
else:
if np.iterable(cols) and not isinstance(cols, basestring):
if isinstance(cols, tuple):
if cols in self.columns:
values = [self[cols]]
else:
values = [_m8_to_i8(self[x].values) for x in cols]
else:
values = [_m8_to_i8(self[x].values) for x in cols]
else:
values = [self[cols]]
keys = lib.fast_zip_fillna(values)
duplicated = lib.duplicated(keys, take_last=take_last)
return pd.Series(duplicated, index=self.index)
```
but the new one now fails with
```
Traceback (most recent call last):
File "/home/modintsov/workspace/DataRobot/playground.py", line 56, in <module>
dupes = df.T.duplicated()
File "/home/modintsov/.virtualenvs/dev/local/lib/python2.7/site-packages/pandas/core/frame.py", line 4384, in duplicated
ids = get_group_index(labels, shape, sort=False, xnull=False)
File "/home/modintsov/.virtualenvs/dev/local/lib/python2.7/site-packages/pandas/core/sorting.py", line 95, in get_group_index
return loop(list(labels), list(shape))
File "/home/modintsov/.virtualenvs/dev/local/lib/python2.7/site-packages/pandas/core/sorting.py", line 86, in loop
return loop(labels, shape)
... many-many lines of the same...
File "/home/modintsov/.virtualenvs/dev/local/lib/python2.7/site-packages/pandas/core/sorting.py", line 60, in loop
stride = np.prod(shape[1:nlev], dtype='i8')
File "/home/modintsov/.virtualenvs/dev/local/lib/python2.7/site-packages/numpy/core/fromnumeric.py", line 2566, in prod
out=out, **kwargs)
RuntimeError: maximum recursion depth exceeded
```
Which is obviously a regression.
#### Expected Output
I expect no exception and a bool Series to be returned. The example above, in old pandas, outputs this:
```
col_00 False
col_01 False
col_02 False
col_03 False
col_04 False
col_05 False
col_06 False
col_07 False
col_08 False
col_09 False
col_10 False
col_11 False
col_12 False
col_13 False
col_14 False
...
col_55 False
col_56 False
col_57 False
col_58 False
col_59 False
col_60 False
col_61 False
col_62 False
col_63 False
col_64 False
col_65 False
col_66 False
col_67 False
col_68 False
col_69 False
Length: 70, dtype: bool
```
#### Output of ``pd.show_versions()``
<details>
pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.12.final.0
python-bits: 64
OS: Linux
OS-release: 4.4.0-128-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: None.None
pandas: 0.23.0
pytest: 3.5.1
pip: 9.0.1
setuptools: 39.2.0
Cython: 0.21
numpy: 1.14.3
scipy: 1.1.0
pyarrow: None
xarray: None
IPython: 5.5.0
sphinx: 1.5.5
patsy: 0.2.1
dateutil: 2.7.3
pytz: 2015.7
blosc: None
bottleneck: None
tables: None
numexpr: 2.6.5
feather: None
matplotlib: None
openpyxl: None
xlrd: 0.9.2
xlwt: 0.7.5
xlsxwriter: None
lxml: None
bs4: 4.6.0
html5lib: None
sqlalchemy: 1.2.7
pymysql: None
psycopg2: 2.7.3.2.dr2 (dt dec pq3 ext lo64)
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| I was unable to reproduce this on master - can you check there and see if that resolves your issue?
@WillAyd Master works for me on Python 3.6 but I can reproduce issue under Python 2.7.15 (OP is using 2.7.12).
Thanks @Liam3851 . Well in that case this is a recursive function call and I think the compatibility difference is that `sys.getrecursionlimit()` is at 3000 for Python3 and only 1000 for Python2. Tracing this call with this size DataFrame requires 2,223 recursive calls on my end, hence the failure.
Can you see if increasing that limit in Python2 resolves the issue?
@WillAyd OP's test case works on Python 2 with the system recursion limit increased to 3,000 as written.
If I increase the number of columns in OP's test case from 20,000 to 30,000, that re-breaks Python 2 with the maximum recursion depth (perhaps as expected).
However, at least on my box (Win7, 40 GB RAM free for process) increasing the number of columns from 20,000 to 30,000 causes Python 3 to crash completely (I think this was not expected, or at least I didn't expect it).
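For anyone hitting this before a fix lands, the recursion-limit workaround above in script form (a stopgap sketch, not a fix; 3,000 is the value reported in this thread for Python 3):

```python
# Stopgap sketch: raise the interpreter recursion limit before calling
# .duplicated() on very wide transposed frames under Python 2.
import sys

sys.setrecursionlimit(3000)
```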
Mostly I was wondering why it's written this way. Tail recursion is a beautiful thing, but in a language without tail-call optimization it can (imho, should...) be replaced with a simple loop. That will be just as fast, consume fewer resources and will not depend on the recursion limit.
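A generic sketch of that rewrite (hypothetical `step`/`done` helpers, not the pandas `loop` itself):

```python
def step(state):   # hypothetical stand-in for one compression pass
    return state - 1

def done(state):   # hypothetical termination test
    return state <= 0

def process_recursive(state):
    state = step(state)
    if done(state):
        return state
    return process_recursive(state)  # tail call: bounded by the sys limit

def process_iterative(state):
    while True:                      # same logic, no recursion-depth limit
        state = step(state)
        if done(state):
            return state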
Sorry for multiple edits of my previous comment, phone autocorrect decided it knows best what I'm trying to say :)
I can't speak to the history of that code but if you have a way to optimize it to perform better, use less resources, avoid the recursion limit, etc... then PRs are always welcome!
@WillAyd Well, I've never done a PR to pandas before (and only one one-liner to numpy...) so I thought someone with more pandas-specific experience would be better here :) but I can certainly try
Traceback (most recent call last):
File "/home/modintsov/workspace/DataRobot/playground.py", line 56, in <module>
dupes = df.T.duplicated()
File "/home/modintsov/.virtualenvs/dev/local/lib/python2.7/site-packages/pandas/core/frame.py", line 4384, in duplicated
ids = get_group_index(labels, shape, sort=False, xnull=False)
File "/home/modintsov/.virtualenvs/dev/local/lib/python2.7/site-packages/pandas/core/sorting.py", line 95, in get_group_index
return loop(list(labels), list(shape))
File "/home/modintsov/.virtualenvs/dev/local/lib/python2.7/site-packages/pandas/core/sorting.py", line 86, in loop
return loop(labels, shape)
... many-many lines of the same...
File "/home/modintsov/.virtualenvs/dev/local/lib/python2.7/site-packages/pandas/core/sorting.py", line 60, in loop
stride = np.prod(shape[1:nlev], dtype='i8')
File "/home/modintsov/.virtualenvs/dev/local/lib/python2.7/site-packages/numpy/core/fromnumeric.py", line 2566, in prod
out=out, **kwargs)
RuntimeError: maximum recursion depth exceeded
| 11,990 |
|||
pandas-dev/pandas | pandas-dev__pandas-21590 | c45bb0b5ae3b1d1671e78efce68a5ee6db034ea3 | diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt
--- a/doc/source/whatsnew/v0.23.2.txt
+++ b/doc/source/whatsnew/v0.23.2.txt
@@ -54,6 +54,7 @@ Fixed Regressions
- Fixed regression in :meth:`to_csv` when handling file-like object incorrectly (:issue:`21471`)
- Bug in both :meth:`DataFrame.first_valid_index` and :meth:`Series.first_valid_index` raised for a row index having duplicate values (:issue:`21441`)
+- Fixed regression in unary negative operations with object dtype (:issue:`21380`)
- Bug in :meth:`Timestamp.ceil` and :meth:`Timestamp.floor` when timestamp is a multiple of the rounding frequency (:issue:`21262`)
.. _whatsnew_0232.performance:
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -27,6 +27,7 @@
is_dict_like,
is_re_compilable,
is_period_arraylike,
+ is_object_dtype,
pandas_dtype)
from pandas.core.dtypes.cast import maybe_promote, maybe_upcast_putmask
from pandas.core.dtypes.inference import is_hashable
@@ -1117,7 +1118,8 @@ def __neg__(self):
values = com._values_from_object(self)
if is_bool_dtype(values):
arr = operator.inv(values)
- elif (is_numeric_dtype(values) or is_timedelta64_dtype(values)):
+ elif (is_numeric_dtype(values) or is_timedelta64_dtype(values)
+ or is_object_dtype(values)):
arr = operator.neg(values)
else:
raise TypeError("Unary negative expects numeric dtype, not {}"
@@ -1128,7 +1130,8 @@ def __pos__(self):
values = com._values_from_object(self)
if (is_bool_dtype(values) or is_period_arraylike(values)):
arr = values
- elif (is_numeric_dtype(values) or is_timedelta64_dtype(values)):
+ elif (is_numeric_dtype(values) or is_timedelta64_dtype(values)
+ or is_object_dtype(values)):
arr = operator.pos(values)
else:
raise TypeError("Unary plus expects numeric dtype, not {}"
| pandas 0.23 broke unary negative expression on Decimal data type
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
from decimal import Decimal as D
series = pd.Series([D(1)])
print(series)
print(-(series))
```
#### Problem description
I'm dealing with decimal data where exact representation is required, thus I use Python's Decimal type with pandas. With the update from 0.22 to 0.23, the unary negative expression broke.
#### Expected Output (from 0.22)
```
>>> import pandas as pd
>>> from decimal import Decimal as D
>>> series = pd.Series([D(1)])
>>> print(series)
0 1
dtype: object
>>> print(-(series))
0 -1
dtype: object
```
#### Actual Output (from 0.23)
```
>>> import pandas as pd
>>> from decimal import Decimal as D
>>> series = pd.Series([D(1)])
>>> print(series)
0 1
dtype: object
>>> print(-(series))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "python3.6/site-packages/pandas/core/generic.py", line 1124, in __neg__
.format(values.dtype))
TypeError: Unary negative expects numeric dtype, not object
```
#### Workaround (in 0.23)
Broadcasting against 0 has the expected effect:
```
>>> 0-series
0 -1
dtype: object
>>> (0-series).iloc[0]
Decimal('-1')
```
#### Output of ``pd.show_versions()``
<details>
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.5.final.0
python-bits: 64
OS: Linux
OS-release: 4.16.13-300.fc28.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.23.0
pytest: None
pip: 9.0.3
setuptools: 38.5.1
Cython: None
numpy: 1.14.4
scipy: 1.0.0
pyarrow: None
xarray: None
IPython: 6.2.1
sphinx: None
patsy: 0.4.1
dateutil: 2.7.3
pytz: 2018.4
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 2.1.0
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: 0.999999999
sqlalchemy: 1.1.15
pymysql: None
psycopg2: 2.7.3.2 (dt dec pq3 ext lo64)
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| @rbu Thanks for the report. I tagged it as a regression for now, we should further look into the reason for the change. | 2018-06-22T07:54:40Z | [] | [] |
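For what it's worth, numpy itself negates object-dtype arrays element-wise, which is consistent with the `is_object_dtype` allowance in the patch above (a quick sketch):

```python
# Object-dtype negation delegates to each element's __neg__, so Decimal
# values round-trip exactly.
import operator
from decimal import Decimal
import numpy as np

arr = np.array([Decimal(1), Decimal(2)], dtype=object)
print(operator.neg(arr))  # [Decimal('-1') Decimal('-2')]
```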
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "python3.6/site-packages/pandas/core/generic.py", line 1124, in __neg__
.format(values.dtype))
TypeError: Unary negative expects numeric dtype, not object
| 12,001 |
|||
pandas-dev/pandas | pandas-dev__pandas-21655 | 1cc547185b92073a3465ea105055d7791e9e6c48 | diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt
--- a/doc/source/whatsnew/v0.23.2.txt
+++ b/doc/source/whatsnew/v0.23.2.txt
@@ -55,6 +55,7 @@ Fixed Regressions
- Fixed regression in :meth:`to_csv` when handling file-like object incorrectly (:issue:`21471`)
- Re-allowed duplicate level names of a ``MultiIndex``. Accessing a level that has a duplicate name by name still raises an error (:issue:`19029`).
- Bug in both :meth:`DataFrame.first_valid_index` and :meth:`Series.first_valid_index` raised for a row index having duplicate values (:issue:`21441`)
+- Fixed printing of DataFrames with hierarchical columns with long names (:issue:`21180`)
- Fixed regression in :meth:`~DataFrame.reindex` and :meth:`~DataFrame.groupby`
with a MultiIndex or multiple keys that contains categorical datetime-like values (:issue:`21390`).
- Fixed regression in unary negative operations with object dtype (:issue:`21380`)
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -636,10 +636,14 @@ def to_string(self):
mid = int(round(n_cols / 2.))
mid_ix = col_lens.index[mid]
col_len = col_lens[mid_ix]
- adj_dif -= (col_len + 1) # adjoin adds one
+ # adjoin adds one
+ adj_dif -= (col_len + 1)
col_lens = col_lens.drop(mid_ix)
n_cols = len(col_lens)
- max_cols_adj = n_cols - self.index # subtract index column
+ # subtract index column
+ max_cols_adj = n_cols - self.index
+ # GH-21180. Ensure that we print at least two.
+ max_cols_adj = max(max_cols_adj, 2)
self.max_cols_adj = max_cols_adj
# Call again _chk_truncate to cut frame appropriately
@@ -778,7 +782,7 @@ def space_format(x, y):
str_columns = list(zip(*[[space_format(x, y) for y in x]
for x in fmt_columns]))
- if self.sparsify:
+ if self.sparsify and len(str_columns):
str_columns = _sparsify(str_columns)
str_columns = [list(x) for x in zip(*str_columns)]
| MultiIndex `to_string` edge case Error after 0.23.0 upgrade
#### Code example
```python
import pandas as pd
import numpy as np
index = pd.date_range('1970', '2018', freq='A')
data = np.random.randn(len(index))
columns1 = [
['This is a long title with > 37 chars.'],
['cat'],
]
columns2 = [
['This is a loooooonger title with > 43 chars.'],
['dog'],
]
df1 = pd.DataFrame(data=data, index=index, columns=columns1)
df2 = pd.DataFrame(data=data, index=index, columns=columns2)
df = pd.concat([df1, df2], axis=1)
df.head()
```
#### Output (using pandas 0.23.0)
```
>>> df.head()
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/david/.virtualenvs/thegrid-py3-venv/lib/python3.5/site-packages/pandas/core/base.py", line 82, in __repr__
return str(self)
File "/home/david/.virtualenvs/thegrid-py3-venv/lib/python3.5/site-packages/pandas/core/base.py", line 61, in __str__
return self.__unicode__()
File "/home/david/.virtualenvs/thegrid-py3-venv/lib/python3.5/site-packages/pandas/core/frame.py", line 663, in __unicode__
line_width=width, show_dimensions=show_dimensions)
File "/home/david/.virtualenvs/thegrid-py3-venv/lib/python3.5/site-packages/pandas/core/frame.py", line 1968, in to_string
formatter.to_string()
File "/home/david/.virtualenvs/thegrid-py3-venv/lib/python3.5/site-packages/pandas/io/formats/format.py", line 648, in to_string
strcols = self._to_str_columns()
File "/home/david/.virtualenvs/thegrid-py3-venv/lib/python3.5/site-packages/pandas/io/formats/format.py", line 539, in _to_str_columns
str_columns = self._get_formatted_column_labels(frame)
File "/home/david/.virtualenvs/thegrid-py3-venv/lib/python3.5/site-packages/pandas/io/formats/format.py", line 782, in _get_formatted_column_labels
str_columns = _sparsify(str_columns)
File "/home/david/.virtualenvs/thegrid-py3-venv/lib/python3.5/site-packages/pandas/core/indexes/multi.py", line 2962, in _sparsify
prev = pivoted[start]
IndexError: list index out of range
```
#### Problem description
After upgrading Pandas 0.22.0 to 0.23.0 I have experienced the above error. I have noticed that it is the length of the column values, `This is a long title with > 37 chars.` and `This is a loooooonger title with > 43 chars.`, that makes the difference. If I tweak the combined length of these to be <= 80 characters, there is no error, and output is as expected.
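A minimal sketch of the failing edge case, inferred from the traceback above (not the pandas internals verbatim): when truncation leaves no column labels, `_sparsify` ends up indexing an empty list.

```python
# zip(*str_columns) yields nothing for an empty label list, so the first
# lookup raises the IndexError seen above.
pivoted = list(zip(*[]))  # []
start = 0
pivoted[start]  # IndexError: list index out of range
```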
#### Expected Output (using pandas 0.22.0)
```
>>> df.head()
This is a long title with > 37 chars. \
cat
1970-12-31 -1.448415
1971-12-31 0.081324
1972-12-31 -0.018105
1973-12-31 0.902790
1974-12-31 0.668474
This is a loooooonger title with > 43 chars.
dog
1970-12-31 -1.448415
1971-12-31 0.081324
1972-12-31 -0.018105
1973-12-31 0.902790
1974-12-31 0.668474
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.5.2.final.0
python-bits: 64
OS: Linux
OS-release: 4.4.0-124-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_ZA.UTF-8
LOCALE: en_ZA.UTF-8
pandas: 0.23.0
pytest: None
pip: 10.0.1
setuptools: 32.3.1
Cython: None
numpy: 1.14.0
scipy: None
pyarrow: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2018.3
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: 2.5.3
xlrd: None
xlwt: None
xlsxwriter: 1.0.4
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: 2.7.4 (dt dec pq3 ext lo64)
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| This doesn't raise for me (py36, and pandas master).
What is `pd.options.display.max_colwidth`, `pd.options.display.width`, and `pd.options.display.max_columns`?
@TomAugspurger Here is my system's pandas 0.23.0 output:
```
>>> import pandas as pd
>>> pd.options.display.max_colwidth
50
>>> pd.options.display.width
80
>>> pd.options.display.max_columns
0
```
0.22.0 output:
```
>>> import pandas as pd
>>> pd.options.display.max_colwidth
50
>>> pd.options.display.width
80
>>> pd.options.display.max_columns
20
```
If I do the following it works in 0.23.0!
```
pd.set_option("max_columns", 20)
```
Did the default setting change in 0.23.0?
Reading the docs shows how [0.22](http://pandas.pydata.org/pandas-docs/version/0.22/options.html#available-options):
> In case python/IPython is running in a terminal this can be set to 0
has been updated in [0.23](https://pandas.pydata.org/pandas-docs/stable/options.html#available-options) to:
> In case Python/IPython is running in a terminal this is set to 0 by default.
However, when switching back to 0.22.0 and manually changing the `max_columns` option to `0` doesn't result in raising the exception.
:thinking: So it still doesn't explain why there would be an error raised when `max_columns` is set to `0`?
cc @cbrnr if you have any ideas.
I get an `AttributeError: module 'pandas._libs.tslibs.timezones' has no attribute 'tz_standardize'` when I test this with the latest master branch revision. Any ideas how to fix this? Using 0.23, I can reproduce the issue.
You need to recompile the extension modules. Commands for your platform should be in the contributing docs.
Thanks, I forgot about that. Thankfully, it's not the [add one business](https://github.com/pandas-dev/pandas/commit/c9e8f59668b63738cccb913f837c529887097da1#diff-425b4da47d01dc33d86c5c697e196b70R629) (I get the same error when I revert this change). This will take a bit of work, since everything works in PyCharm but not in IPython (so debugging will be much slower for me since I'm not used to pdb at all)...
Apparently, setting `pd.options.display.max_columns = 0` in 0.22 also results in this error. So the issue was not introduced by my change, which merely changed the default to 0.
Hi @cbrnr
> also results in this error.
Probably you meant to say `does not`? I do agree that merely changing the default to 0 should not result in the unexpected error.
No, I get the same error with pandas 0.22 if I first set `pd.options.display.max_columns = 0`. This means that this bug has been there for a while (I haven't tried older versions, but I suspect that they will behave similarly).
I do not get the exception on pandas 0.22.0 with
```python
import pandas as pd
pd.options.display.max_columns = 0
import numpy as np
index = pd.date_range('1970', '2018', freq='A')
data = np.random.randn(len(index))
columns1 = [
['This is a long title with > 37 chars.'],
['cat'],
]
columns2 = [
['This is a loooooonger title with > 43 chars.'],
['dog'],
]
df1 = pd.DataFrame(data=data, index=index, columns=columns1)
df2 = pd.DataFrame(data=data, index=index, columns=columns2)
df = pd.concat([df1, df2], axis=1)
df.head()
```
Though it occurs to me that this probably depends on the width of the terminal.
not a regression, but we should still fix it.
@TomAugspurger I just tried again, I do get the error with 0.22. How are you running this code? If you are not in interactive mode (e.g. IPython), you need to change the last line to `print(df.head())` in order to produce the output. I'm running this in IPython on macOS in a normal terminal (not Jupyter QtConsole) with 100x35 window size.
@jreback could you please make a note when you're moving the milestone? This should be fixed for 0.23.1.
i made a note
and this does not need to block 0.23.1
it’s not a regression
pls don’t mark milestones unless ready to go
@jreback This *is* a regression in user experience. It may be an existing bug, but code that was working before is failing now because we changed the default. So we should still fix that existing bug for 0.23.1.
I cannot reproduce the error with the example in this issue, but I *do* see it with the example from https://github.com/pandas-dev/pandas/issues/21327
@jorisvandenbossche sure regressions happen, and we *should* fix them all. but unless this is fixed today, it will go in the next release.
We can also change the default of max_columns back to 20 for now if we can't find the time to fix the bugs.
Here's a failing unit test
```diff
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index f221df93d..52f83f093 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -305,6 +305,36 @@ class TestDataFrameFormatting(object):
assert not has_truncated_repr(df)
assert not has_expanded_repr(df)
+ def test_repr_multiindex(self):
+ # https://github.com/pandas-dev/pandas/issues/21180
+ from unittest import mock
+
+ def f():
+ return os.terminal_size((118, 96))
+
+ terminal_size = os.terminal_size((118, 96))
+
+ p1 = mock.patch('pandas.io.formats.console.get_terminal_size',
+ return_value=terminal_size)
+ p2 = mock.patch('pandas.io.formats.format.get_terminal_size',
+ return_value=terminal_size)
+ index = pd.date_range('1970', '2018', freq='A')
+ data = np.random.randn(len(index))
+ columns1 = [
+ ['This is a long title with > 37 chars.'],
+ ['cat'],
+ ]
+ columns2 = [
+ ['This is a loooooonger title with > 43 chars.'],
+ ['dog'],
+ ]
+ df1 = pd.DataFrame(data=data, index=index, columns=columns1)
+ df2 = pd.DataFrame(data=data, index=index, columns=columns2)
+ df = pd.concat([df1, df2], axis=1)
+
+ with p1, p2:
+ repr(df.head())
+
def test_repr_max_columns_max_rows(self):
term_width, term_height = get_terminal_size()
if term_width < 10 or term_height < 10:
```
If we don't have a fix for this, I would consider reverting `pandas.options.display.max_columns` back to 20, working on fixing this, and possibly turning it back to 0 for 0.24.0.
Errors in the repr are really annoying, as you cannot even inspect the data properly to see what might be the reason something is not working.
I'm going to try to fix it now.
What's the expected behavior here? I can easily match the behavior of the non-MI case,
```python
In [3]: s = pd.DataFrame({"A" * 41: [1, 2], 'B' * 41: [1, 2]})
In [4]: with p1, p2:
...: print(repr(s))
...:
...
0 ...
1 ...
[2 rows x 2 columns]
```
but that's not too useful... | 2018-06-27T13:40:08Z | [] | [] |
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/david/.virtualenvs/thegrid-py3-venv/lib/python3.5/site-packages/pandas/core/base.py", line 82, in __repr__
return str(self)
File "/home/david/.virtualenvs/thegrid-py3-venv/lib/python3.5/site-packages/pandas/core/base.py", line 61, in __str__
return self.__unicode__()
File "/home/david/.virtualenvs/thegrid-py3-venv/lib/python3.5/site-packages/pandas/core/frame.py", line 663, in __unicode__
line_width=width, show_dimensions=show_dimensions)
File "/home/david/.virtualenvs/thegrid-py3-venv/lib/python3.5/site-packages/pandas/core/frame.py", line 1968, in to_string
formatter.to_string()
File "/home/david/.virtualenvs/thegrid-py3-venv/lib/python3.5/site-packages/pandas/io/formats/format.py", line 648, in to_string
strcols = self._to_str_columns()
File "/home/david/.virtualenvs/thegrid-py3-venv/lib/python3.5/site-packages/pandas/io/formats/format.py", line 539, in _to_str_columns
str_columns = self._get_formatted_column_labels(frame)
File "/home/david/.virtualenvs/thegrid-py3-venv/lib/python3.5/site-packages/pandas/io/formats/format.py", line 782, in _get_formatted_column_labels
str_columns = _sparsify(str_columns)
File "/home/david/.virtualenvs/thegrid-py3-venv/lib/python3.5/site-packages/pandas/core/indexes/multi.py", line 2962, in _sparsify
prev = pivoted[start]
IndexError: list index out of range
| 12,009 |
|||
pandas-dev/pandas | pandas-dev__pandas-21674 | dc45fbafef172e357cb5decdeab22de67160f5b7 | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -248,6 +248,8 @@ Timezones
- Bug in :meth:`Series.truncate` with a tz-aware :class:`DatetimeIndex` which would cause a core dump (:issue:`9243`)
- Bug in :class:`Series` constructor which would coerce tz-aware and tz-naive :class:`Timestamp`s to tz-aware (:issue:`13051`)
- Bug in :class:`Index` with ``datetime64[ns, tz]`` dtype that did not localize integer data correctly (:issue:`20964`)
+- Bug in :class:`DatetimeIndex` where constructing with an integer and tz would not localize correctly (:issue:`12619`)
+- Bug in :func:`DataFrame.fillna` where a ``ValueError`` would raise when one column contained a ``datetime64[ns, tz]`` dtype (:issue:`15522`)
Offsets
^^^^^^^
@@ -326,7 +328,7 @@ Sparse
Reshaping
^^^^^^^^^
--
+- Bug in :func:`pandas.concat` when joining resampled DataFrames with timezone aware index (:issue:`13783`)
-
-
| DataFrame.fillna() working on row vector instead of column vector?
#### Code Sample, a copy-pastable example if possible
```python
>>> df.head(5)
time id bid bid_depth bid_depth_total \
0 2017-02-27 11:34:31+00:00 105 148.0 497.0 216589.0
1 2017-02-27 11:34:35+00:00 105 NaN NaN NaN
2 2017-02-27 11:34:38+00:00 105 NaN NaN NaN
3 2017-02-27 11:34:40+00:00 105 NaN NaN NaN
4 2017-02-27 11:34:41+00:00 105 NaN NaN NaN
bid_number offer offer_depth offer_depth_total offer_number open \
0 243.0 148.1 14192.0 530373.0 503.0 147.5
1 NaN NaN 14272.0 530453.0 504.0 NaN
2 NaN NaN 14192.0 530373.0 503.0 NaN
3 NaN NaN 14272.0 530453.0 504.0 NaN
4 NaN NaN 14492.0 530673.0 505.0 NaN
high low last change change_percent volume value trades
0 148.2 147.3 148.0 0.9 0.61 1286830.0 190224000.0 2112.0
1 NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN
3 NaN NaN NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN NaN NaN
>>> df.fillna(method='pad')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.6/site-packages/pandas/core/frame.py", line 2842, in fillna
downcast=downcast, **kwargs)
File "/usr/lib/python3.6/site-packages/pandas/core/generic.py", line 3250, in fillna
downcast=downcast)
File "/usr/lib/python3.6/site-packages/pandas/core/internals.py", line 3177, in interpolate
return self.apply('interpolate', **kwargs)
File "/usr/lib/python3.6/site-packages/pandas/core/internals.py", line 3056, in apply
applied = getattr(b, f)(**kwargs)
File "/usr/lib/python3.6/site-packages/pandas/core/internals.py", line 917, in interpolate
downcast=downcast, mgr=mgr)
File "/usr/lib/python3.6/site-packages/pandas/core/internals.py", line 956, in _interpolate_with_fill
values = self._try_coerce_result(values)
File "/usr/lib/python3.6/site-packages/pandas/core/internals.py", line 2448, in _try_coerce_result
result = result.reshape(len(result))
ValueError: cannot reshape array of size 24311 into shape (1,)
```
#### Problem description
msgpack of dataframe for replication:
https://www.dropbox.com/s/5skf6v8x2vg103o/dataframe?dl=0
I'm a beginner so I can only guess at what is wrong, but it seems to be working on rows instead of columns. I can loop through df.columns and do it series by series to end up with the expected output, so it doesn't seem to be a problem with any of the columns.
#### Expected Output
Fill the NaN's in each column with the prior value in that column.
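As noted in the problem description, looping through ``df.columns`` series by series produces the expected output; a minimal sketch of that workaround (``df`` is the frame from the code sample above, so this is illustrative rather than tested here):

```python
# Pad each column separately so the datetime64[ns, tz] column is handled
# as its own Series rather than inside a 2-D block (where the bug lives).
for col in df.columns:
    df[col] = df[col].fillna(method='pad')
```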
#### Output of ``pd.show_versions()``
<details>
commit: None
python: 3.6.0.final.0
python-bits: 64
OS: Linux
OS-release: 4.9.8-1-ARCH
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.19.2
nose: None
pip: 9.0.1
setuptools: 34.2.0
Cython: None
numpy: 1.12.0
scipy: None
statsmodels: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.0
pytz: 2016.10
blosc: None
bottleneck: None
tables: None
numexpr: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: 1.1.5
pymysql: None
psycopg2: 2.6.2 (dt dec pq3 ext lo64)
jinja2: None
boto: None
pandas_datareader: None
</details>
| can you show ``df.info()``
```python
>>> df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 24311 entries, 0 to 24310
Data columns (total 19 columns):
time 24311 non-null datetime64[ns, UTC]
id 24311 non-null int64
bid 1469 non-null float64
bid_depth 7988 non-null float64
bid_depth_total 11630 non-null float64
bid_number 10765 non-null float64
offer 1370 non-null float64
offer_depth 7864 non-null float64
offer_depth_total 10617 non-null float64
offer_number 9940 non-null float64
open 1085 non-null float64
high 1086 non-null float64
low 1085 non-null float64
last 1223 non-null float64
change 1223 non-null float64
change_percent 1223 non-null float64
volume 3697 non-null float64
value 3697 non-null float64
trades 3697 non-null float64
dtypes: datetime64[ns, UTC](1), float64(17), int64(1)
memory usage: 3.5 MB
```
Something to do with datetimetz. Here's a simpler repro:
```python
df = pd.DataFrame({'date': pd.date_range('2014-01-01', periods=5, tz='US/Central')})
df.fillna(method='pad')
ValueError Traceback (most recent call last)
<ipython-input-77-8f5ecb26a2f6> in <module>()
----> 1 df.fillna(method='pad')
```
yeah need to handle these in the Block correctly (the tz)
@Matsalm an easy way to do this is (though not super pretty)
```
In [20]: df = pd.DataFrame({'A':pd.date_range('20130101',periods=4,tz='US/Eastern'),'B':[1,2,np.nan,np.nan]})
In [21]: df
Out[21]:
A B
0 2013-01-01 00:00:00-05:00 1.0
1 2013-01-02 00:00:00-05:00 2.0
2 2013-01-03 00:00:00-05:00 NaN
3 2013-01-04 00:00:00-05:00 NaN
In [23]: df[df.select_dtypes(exclude=['number']).columns].join(df.select_dtypes(include=['number']).fillna(method='pad'))
Out[23]:
A B
0 2013-01-01 00:00:00-05:00 1.0
1 2013-01-02 00:00:00-05:00 2.0
2 2013-01-03 00:00:00-05:00 2.0
3 2013-01-04 00:00:00-05:00 2.0
```
Thank you | 2018-06-29T06:14:20Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.6/site-packages/pandas/core/frame.py", line 2842, in fillna
downcast=downcast, **kwargs)
File "/usr/lib/python3.6/site-packages/pandas/core/generic.py", line 3250, in fillna
downcast=downcast)
File "/usr/lib/python3.6/site-packages/pandas/core/internals.py", line 3177, in interpolate
return self.apply('interpolate', **kwargs)
File "/usr/lib/python3.6/site-packages/pandas/core/internals.py", line 3056, in apply
applied = getattr(b, f)(**kwargs)
File "/usr/lib/python3.6/site-packages/pandas/core/internals.py", line 917, in interpolate
downcast=downcast, mgr=mgr)
File "/usr/lib/python3.6/site-packages/pandas/core/internals.py", line 956, in _interpolate_with_fill
values = self._try_coerce_result(values)
File "/usr/lib/python3.6/site-packages/pandas/core/internals.py", line 2448, in _try_coerce_result
result = result.reshape(len(result))
ValueError: cannot reshape array of size 24311 into shape (1,)
| 12,012 |
|||
pandas-dev/pandas | pandas-dev__pandas-21914 | 2b51c968ca1e16a7fb517968576f8a9ab47ce1ed | diff --git a/doc/make.py b/doc/make.py
--- a/doc/make.py
+++ b/doc/make.py
@@ -363,6 +363,10 @@ def main():
sys.path.append(args.python_path)
globals()['pandas'] = importlib.import_module('pandas')
+ # Set the matplotlib backend to the non-interactive Agg backend for all
+ # child processes.
+ os.environ['MPLBACKEND'] = 'module://matplotlib.backends.backend_agg'
+
builder = DocBuilder(args.num_jobs, not args.no_api, args.single,
args.verbosity)
getattr(builder, args.command)()
| Default docs builds to a non-interactive matplotlib backend
`python make.py html` fails on Mac OS in a virtualenv, using a Python interpreter installed with pyenv, because the interpreter isn't a framework build, and matplotlib defaults to using the macosx interactive backend, which requires a framework build of the interpreter.
I think the docs build doesn't require an interactive backend and it should be safe to use Agg, which is available on all platforms.
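A minimal sketch of that idea — forcing Agg before ``pyplot`` is ever imported, which is what the patch above does by exporting ``MPLBACKEND`` for all child processes:

```python
# Select the non-interactive Agg backend before any pyplot import; the
# module-path form below matches what the patch sets in the environment.
import os
os.environ['MPLBACKEND'] = 'module://matplotlib.backends.backend_agg'

import matplotlib.pyplot as plt  # now safe on non-framework macOS builds
```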
The failure looks like:
```
(pandas-dev) tsmith-0yhv2t:tsmith doc (master *)$ python make.py html
Running Sphinx v1.7.5
Configuration error:
There is a programable error in your configuration file:
Traceback (most recent call last):
File "/Users/tsmith/.pyenv/versions/3.6.4/envs/pandas-dev/lib/python3.6/site-packages/sphinx/config.py", line 161, in __init__
execfile_(filename, config)
File "/Users/tsmith/.pyenv/versions/3.6.4/envs/pandas-dev/lib/python3.6/site-packages/sphinx/util/pycompat.py", line 150, in execfile_
exec_(code, _globals)
File "conf.py", line 285, in <module>
klass = getattr(importlib.import_module(mod), classname)
File "/Users/tsmith/.pyenv/versions/3.6.4/envs/pandas-dev/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/Users/tsmith/upstream/pandas/pandas/io/formats/style.py", line 34, in <module>
import matplotlib.pyplot as plt
File "/Users/tsmith/.pyenv/versions/3.6.4/envs/pandas-dev/lib/python3.6/site-packages/matplotlib/pyplot.py", line 115, in <module>
_backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
File "/Users/tsmith/.pyenv/versions/3.6.4/envs/pandas-dev/lib/python3.6/site-packages/matplotlib/backends/__init__.py", line 62, in pylab_setup
[backend_name], 0)
File "/Users/tsmith/.pyenv/versions/3.6.4/envs/pandas-dev/lib/python3.6/site-packages/matplotlib/backends/backend_macosx.py", line 17, in <module>
from matplotlib.backends import _macosx
RuntimeError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends. If you are using (Ana)Conda please install python.app and replace the use of 'python' with 'pythonw'. See 'Working with Matplotlib on OSX' in the Matplotlib FAQ for more information.
```
| 2018-07-14T16:56:55Z | [] | [] |
Traceback (most recent call last):
File "/Users/tsmith/.pyenv/versions/3.6.4/envs/pandas-dev/lib/python3.6/site-packages/sphinx/config.py", line 161, in __init__
execfile_(filename, config)
File "/Users/tsmith/.pyenv/versions/3.6.4/envs/pandas-dev/lib/python3.6/site-packages/sphinx/util/pycompat.py", line 150, in execfile_
exec_(code, _globals)
File "conf.py", line 285, in <module>
klass = getattr(importlib.import_module(mod), classname)
File "/Users/tsmith/.pyenv/versions/3.6.4/envs/pandas-dev/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/Users/tsmith/upstream/pandas/pandas/io/formats/style.py", line 34, in <module>
import matplotlib.pyplot as plt
File "/Users/tsmith/.pyenv/versions/3.6.4/envs/pandas-dev/lib/python3.6/site-packages/matplotlib/pyplot.py", line 115, in <module>
_backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
File "/Users/tsmith/.pyenv/versions/3.6.4/envs/pandas-dev/lib/python3.6/site-packages/matplotlib/backends/__init__.py", line 62, in pylab_setup
[backend_name], 0)
File "/Users/tsmith/.pyenv/versions/3.6.4/envs/pandas-dev/lib/python3.6/site-packages/matplotlib/backends/backend_macosx.py", line 17, in <module>
from matplotlib.backends import _macosx
RuntimeError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends. If you are using (Ana)Conda please install python.app and replace the use of 'python' with 'pythonw'. See 'Working with Matplotlib on OSX' in the Matplotlib FAQ for more information.
| 12,036 |
||||
pandas-dev/pandas | pandas-dev__pandas-21917 | 0480f4c183a95712cb8ceaf5682c5b8dd02e0f21 | diff --git a/ci/doctests.sh b/ci/doctests.sh
--- a/ci/doctests.sh
+++ b/ci/doctests.sh
@@ -21,7 +21,7 @@ if [ "$DOCTEST" ]; then
# DataFrame / Series docstrings
pytest --doctest-modules -v pandas/core/frame.py \
- -k"-assign -axes -combine -isin -itertuples -join -nlargest -nsmallest -nunique -pivot_table -quantile -query -reindex -reindex_axis -replace -round -set_index -stack -to_dict -to_stata"
+ -k"-axes -combine -isin -itertuples -join -nlargest -nsmallest -nunique -pivot_table -quantile -query -reindex -reindex_axis -replace -round -set_index -stack -to_dict -to_stata"
if [ $? -ne "0" ]; then
RET=1
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3273,7 +3273,7 @@ def assign(self, **kwargs):
Parameters
----------
- kwargs : keyword, value pairs
+ **kwargs : dict of {str: callable or Series}
The column names are keywords. If the values are
callable, they are computed on the DataFrame and
assigned to the new columns. The callable must not
@@ -3283,7 +3283,7 @@ def assign(self, **kwargs):
Returns
-------
- df : DataFrame
+ DataFrame
A new DataFrame with the new columns in addition to
all the existing columns.
@@ -3303,48 +3303,34 @@ def assign(self, **kwargs):
Examples
--------
- >>> df = pd.DataFrame({'A': range(1, 11), 'B': np.random.randn(10)})
+ >>> df = pd.DataFrame({'temp_c': [17.0, 25.0]},
+ ... index=['Portland', 'Berkeley'])
+ >>> df
+ temp_c
+ Portland 17.0
+ Berkeley 25.0
Where the value is a callable, evaluated on `df`:
-
- >>> df.assign(ln_A = lambda x: np.log(x.A))
- A B ln_A
- 0 1 0.426905 0.000000
- 1 2 -0.780949 0.693147
- 2 3 -0.418711 1.098612
- 3 4 -0.269708 1.386294
- 4 5 -0.274002 1.609438
- 5 6 -0.500792 1.791759
- 6 7 1.649697 1.945910
- 7 8 -1.495604 2.079442
- 8 9 0.549296 2.197225
- 9 10 -0.758542 2.302585
-
- Where the value already exists and is inserted:
-
- >>> newcol = np.log(df['A'])
- >>> df.assign(ln_A=newcol)
- A B ln_A
- 0 1 0.426905 0.000000
- 1 2 -0.780949 0.693147
- 2 3 -0.418711 1.098612
- 3 4 -0.269708 1.386294
- 4 5 -0.274002 1.609438
- 5 6 -0.500792 1.791759
- 6 7 1.649697 1.945910
- 7 8 -1.495604 2.079442
- 8 9 0.549296 2.197225
- 9 10 -0.758542 2.302585
-
- Where the keyword arguments depend on each other
-
- >>> df = pd.DataFrame({'A': [1, 2, 3]})
-
- >>> df.assign(B=df.A, C=lambda x:x['A']+ x['B'])
- A B C
- 0 1 1 2
- 1 2 2 4
- 2 3 3 6
+ >>> df.assign(temp_f=lambda x: x.temp_c * 9 / 5 + 32)
+ temp_c temp_f
+ Portland 17.0 62.6
+ Berkeley 25.0 77.0
+
+ Alternatively, the same behavior can be achieved by directly
+ referencing an existing Series or sequence:
+ >>> df.assign(temp_f=df['temp_c'] * 9 / 5 + 32)
+ temp_c temp_f
+ Portland 17.0 62.6
+ Berkeley 25.0 77.0
+
+ In Python 3.6+, you can create multiple columns within the same assign
+ where one of the columns depends on another one defined within the same
+ assign:
+ >>> df.assign(temp_f=lambda x: x['temp_c'] * 9 / 5 + 32,
+ ... temp_k=lambda x: (x['temp_f'] + 459.67) * 5 / 9)
+ temp_c temp_f temp_k
+ Portland 17.0 62.6 290.15
+ Berkeley 25.0 77.0 298.15
"""
data = self.copy()
| Make Series.shift always a copy?
Right now, `Series.shift(0)` will just return the series. Shifting for all other periods induces a copy:
```python
In [1]: import pandas as pd
In [2]: a = pd.Series([1, 2])
In [3]: a.shift(1) is a
Out[3]: False
In [4]: a.shift(0) is a
Out[4]: True
```
Should we defensively copy on `0` as well, for a consistent user experience?
https://github.com/pandas-dev/pandas/blob/e669fae0762d901e61f7af84fc3b5181848d257d/pandas/core/generic.py#L8084-L8086
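A hedged sketch of the proposed behaviour as a user-side wrapper (``shift_always_copy`` is a hypothetical helper for illustration, not the pandas implementation):

```python
# Defensively copy even for periods == 0, so callers always get a new object.
def shift_always_copy(obj, periods=1, **kwargs):
    if periods == 0:
        return obj.copy()
    return obj.shift(periods, **kwargs)
```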
np.ndarray[object] - Timedelta raises
```
arr = np.array([pd.Timestamp.now(), pd.Timedelta('2D')])
>>> arr - pd.Timedelta('1D')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for -: 'numpy.ndarray' and 'Timedelta'
```
It should attempt to operate element-wise.
|
I suppose this is because of
https://github.com/pandas-dev/pandas/blob/27ebb3e1e40513ad5f8919a5bbc7298e2e070a39/pandas/_libs/tslibs/timedeltas.pyx#L539-L544
Any idea what the "wrong" answer would be? (with timedelta.timedelta instead of Timedelta that seems to work just fine, so I assume with Timedelta it will be the same)
No idea what the wrong answer would be. This should be easy to fix; if no one else picks it up I'll take care of it once the current PR queue settles down.
Yes, PR with a fix is certainly welcome I think | 2018-07-14T23:11:49Z | [] | [] |
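A quick check of the observation above — with the stdlib ``timedelta``, numpy falls back to element-wise object ops and nothing raises:

```python
import numpy as np
import pandas as pd
from datetime import timedelta

arr = np.array([pd.Timestamp('2018-01-01'), pd.Timedelta('2D')], dtype=object)
arr - timedelta(days=1)  # element-wise result; pd.Timedelta raised instead
```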
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for -: 'numpy.ndarray' and 'Timedelta'
| 12,037 |
|||
pandas-dev/pandas | pandas-dev__pandas-22054 | 71852da03994c7c79a4ba3a0f91c6d723be6a299 | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -642,6 +642,7 @@ Timedelta
- Bug in :class:`Series` with numeric dtype when adding or subtracting an an array or ``Series`` with ``timedelta64`` dtype (:issue:`22390`)
- Bug in :class:`Index` with numeric dtype when multiplying or dividing an array with dtype ``timedelta64`` (:issue:`22390`)
- Bug in :class:`TimedeltaIndex` incorrectly allowing indexing with ``Timestamp`` object (:issue:`20464`)
+- Fixed bug where subtracting :class:`Timedelta` from an object-dtyped array would raise ``TypeError`` (:issue:`21980`)
-
-
diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx
--- a/pandas/_libs/tslibs/timedeltas.pyx
+++ b/pandas/_libs/tslibs/timedeltas.pyx
@@ -541,10 +541,12 @@ def _binary_op_method_timedeltalike(op, name):
elif hasattr(other, 'dtype'):
# nd-array like
- if other.dtype.kind not in ['m', 'M']:
- # raise rathering than letting numpy return wrong answer
+ if other.dtype.kind in ['m', 'M']:
+ return op(self.to_timedelta64(), other)
+ elif other.dtype.kind == 'O':
+ return np.array([op(self, x) for x in other])
+ else:
return NotImplemented
- return op(self.to_timedelta64(), other)
elif not _validate_ops_compat(other):
return NotImplemented
| np.ndarray[object] - Timedelta raises
```
arr = np.array([pd.Timestamp.now(), pd.Timedelta('2D')])
>>> arr - pd.Timedelta('1D')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for -: 'numpy.ndarray' and 'Timedelta'
```
It should attempt to operate element-wise.
| I suppose this is because of
https://github.com/pandas-dev/pandas/blob/27ebb3e1e40513ad5f8919a5bbc7298e2e070a39/pandas/_libs/tslibs/timedeltas.pyx#L539-L544
Any idea what the "wrong" answer would be? (with timedelta.timedelta instead of Timedelta that seems to work just fine, so I assume with Timedelta it will be the same)
No idea what the wrong answer would be. This should be easy to fix; if no one else picks it up I'll take care of it once the current PR queue settles down.
Yes, PR with a fix is certainly welcome I think
Is this still an issue? I wasn't able to repro from master.
> Is this still an issue? I wasn't able to repro from master.
What platform etc? I still get it on OSX in both py27 and py37.
OSX 10.11.6 with Python 3.6. I just pulled up a REPL and imported pandas from a compile I did yesterday from master and didn't get an exception from the example code posted. Specifically:
```
Python 3.6.6 (default, Jul 23 2018, 11:08:18)
[GCC 4.2.1 Compatible Clang 6.0.0 (tags/RELEASE_600/final)] on darwin
```
I also didn't see the issue from the latest install from pip either. Both times I just got
```python
>>> arr = np.array([pd.Timestamp.now(), pd.Timedelta('2D')])
>>> arr
array([Timestamp('2018-07-24 10:49:41.898067'),
Timedelta('2 days 00:00:00')], dtype=object)
```
Did you try subtracting a `Timedelta` from `arr`?
Ah! 🤦♂️ yea I missed that part in the example. I repro'd the bug with that on master and latest pip. So with this then how should I go about the fix? It's not operating element wise on the array because the timedeltas.pyx isn't returning that it is a timedelta correctly? or...?
> how should I go about the fix?
Take a look at the code block Joris quoted above. At the moment that lets 'm' and 'M' dtypes through but stops everything else. The fix will involve letting 'o' dtypes through (and making sure they are handled correctly) | 2018-07-25T19:49:53Z | [] | [] |
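A hedged sketch of that element-wise fallback for object dtype (it mirrors the list comprehension the patch above adds to ``_binary_op_method_timedeltalike``):

```python
import numpy as np
import pandas as pd

arr = np.array([pd.Timestamp('2018-01-01'), pd.Timedelta('2D')], dtype=object)
# Operate element by element instead of deferring to the ndarray op:
result = np.array([x - pd.Timedelta('1D') for x in arr])
```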
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for -: 'numpy.ndarray' and 'Timedelta'
| 12,055 |
|||
pandas-dev/pandas | pandas-dev__pandas-22169 | 3bcc2bb4e275efef6d4d4d87ac1d661aa4c2bdbc | diff --git a/doc/source/whatsnew/v0.23.5.txt b/doc/source/whatsnew/v0.23.5.txt
--- a/doc/source/whatsnew/v0.23.5.txt
+++ b/doc/source/whatsnew/v0.23.5.txt
@@ -40,3 +40,7 @@ Bug Fixes
-
-
+
+**I/O**
+
+- Bug in :func:`read_csv` that caused it to raise ``OverflowError`` when trying to use 'inf' as ``na_value`` with integer index column (:issue:`17128`)
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -95,7 +95,7 @@ def _ensure_data(values, dtype=None):
values = ensure_float64(values)
return values, 'float64', 'float64'
- except (TypeError, ValueError):
+ except (TypeError, ValueError, OverflowError):
# if we are trying to coerce to a dtype
# and it is incompat this will fall thru to here
return ensure_object(values), 'object', 'object'
@@ -429,7 +429,7 @@ def isin(comps, values):
values = values.astype('int64', copy=False)
comps = comps.astype('int64', copy=False)
f = lambda x, y: htable.ismember_int64(x, y)
- except (TypeError, ValueError):
+ except (TypeError, ValueError, OverflowError):
values = values.astype(object)
comps = comps.astype(object)
| OverflowError in read_csv when specifying certain na_values
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
from pandas.compat import StringIO
data = StringIO("a,b,c\n1,2,3\n4,5,6\n7,8,9")
na_values = ['-inf']
index_col = 0
df = pd.read_csv(data, na_values=na_values, index_col=index_col)
```
#### Problem description
`read_csv()` fails with the following traceback when specifying certain `na_values` with `index_col`:
```
Traceback (most recent call last):
File "run.py", line 9, in <module>
df = pd.read_csv(data, na_values=na_values, index_col=index_col)
File "/home/liauys/Code/pandas/pandas/io/parsers.py", line 660, in parser_f
return _read(filepath_or_buffer, kwds)
File "/home/liauys/Code/pandas/pandas/io/parsers.py", line 416, in _read
data = parser.read(nrows)
File "/home/liauys/Code/pandas/pandas/io/parsers.py", line 1010, in read
ret = self._engine.read(nrows)
File "/home/liauys/Code/pandas/pandas/io/parsers.py", line 1837, in read
index, names = self._make_index(data, alldata, names)
File "/home/liauys/Code/pandas/pandas/io/parsers.py", line 1347, in _make_index
index = self._agg_index(index)
File "/home/liauys/Code/pandas/pandas/io/parsers.py", line 1440, in _agg_index
arr, _ = self._infer_types(arr, col_na_values | col_na_fvalues)
File "/home/liauys/Code/pandas/pandas/io/parsers.py", line 1524, in _infer_types
mask = algorithms.isin(values, list(na_values))
File "/home/liauys/Code/pandas/pandas/core/algorithms.py", line 408, in isin
values, _, _ = _ensure_data(values, dtype=dtype)
File "/home/liauys/Code/pandas/pandas/core/algorithms.py", line 74, in _ensure_data
return _ensure_int64(values), 'int64', 'int64'
File "pandas/_libs/algos_common_helper.pxi", line 3227, in pandas._libs.algos.ensure_int64
File "pandas/_libs/algos_common_helper.pxi", line 3232, in pandas._libs.algos.ensure_int64
OverflowError: cannot convert float infinity to integer
```
Any of the following makes the error go away:
* The index column does contain the said NA value
* Using `na_values` of `['inf']` instead of `['-inf']`
* Not specifying index_col
* Using version 0.19 or older
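For context, the failure reduces to Python's refusal to convert float infinity to an integer — a root-cause sketch inferred from the traceback's final frame (the ``try``/``except`` framing here is illustrative, not pandas code):

```python
# ensure_int64 ends up doing the moral equivalent of this conversion for the
# '-inf' na_value, which raises before isin() can fall back to object dtype.
try:
    int(float('-inf'))
except OverflowError as exc:
    print(exc)  # cannot convert float infinity to integer
```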
#### Expected Output
There should not be any error.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.13.final.0
python-bits: 64
OS: Linux
OS-release: 4.11.9-1-ARCH
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: None.None
pandas: 0.21.0.dev+316.gf2b0bdc9b
pytest: None
pip: 9.0.1
setuptools: 36.2.5
Cython: 0.26
numpy: 1.13.1
scipy: None
pyarrow: None
xarray: None
IPython: 5.4.1
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2017.2
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
pandas_gbq: None
pandas_datareader: None
</details>
| @YS-L : Thanks for the report!
I'm not sure I follow you here: if upgrading makes the error go away, why are you filing this issue? Closing given your explanation.
It seems like I was misled by your comment. This issue is in fact reproducible on `master`, which I see now is what you were using. Sorry about that! Reopening.
Traceback (most recent call last):
File "run.py", line 9, in <module>
df = pd.read_csv(data, na_values=na_values, index_col=index_col)
File "/home/liauys/Code/pandas/pandas/io/parsers.py", line 660, in parser_f
return _read(filepath_or_buffer, kwds)
File "/home/liauys/Code/pandas/pandas/io/parsers.py", line 416, in _read
data = parser.read(nrows)
File "/home/liauys/Code/pandas/pandas/io/parsers.py", line 1010, in read
ret = self._engine.read(nrows)
File "/home/liauys/Code/pandas/pandas/io/parsers.py", line 1837, in read
index, names = self._make_index(data, alldata, names)
File "/home/liauys/Code/pandas/pandas/io/parsers.py", line 1347, in _make_index
index = self._agg_index(index)
File "/home/liauys/Code/pandas/pandas/io/parsers.py", line 1440, in _agg_index
arr, _ = self._infer_types(arr, col_na_values | col_na_fvalues)
File "/home/liauys/Code/pandas/pandas/io/parsers.py", line 1524, in _infer_types
mask = algorithms.isin(values, list(na_values))
File "/home/liauys/Code/pandas/pandas/core/algorithms.py", line 408, in isin
values, _, _ = _ensure_data(values, dtype=dtype)
File "/home/liauys/Code/pandas/pandas/core/algorithms.py", line 74, in _ensure_data
return _ensure_int64(values), 'int64', 'int64'
File "pandas/_libs/algos_common_helper.pxi", line 3227, in pandas._libs.algos.ensure_int64
File "pandas/_libs/algos_common_helper.pxi", line 3232, in pandas._libs.algos.ensure_int64
OverflowError: cannot convert float infinity to integer
| 12,076 |
|||
pandas-dev/pandas | pandas-dev__pandas-22261 | 4f11d1a9a9b02a37dbe109d8413cc75d73b92853 | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -693,7 +693,7 @@ Groupby/Resample/Rolling
``SeriesGroupBy`` when the grouping variable only contains NaNs and numpy version < 1.13 (:issue:`21956`).
- Multiple bugs in :func:`pandas.core.Rolling.min` with ``closed='left'` and a
datetime-like index leading to incorrect results and also segfault. (:issue:`21704`)
--
+- Bug in :meth:`Resampler.apply` when passing positional arguments to applied func (:issue:`14615`).
Sparse
^^^^^^
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -234,12 +234,15 @@ def pipe(self, func, *args, **kwargs):
klass='DataFrame',
versionadded='',
axis=''))
- def aggregate(self, arg, *args, **kwargs):
+ def aggregate(self, func, *args, **kwargs):
self._set_binner()
- result, how = self._aggregate(arg, *args, **kwargs)
+ result, how = self._aggregate(func, *args, **kwargs)
if result is None:
- result = self._groupby_and_aggregate(arg,
+ how = func
+ grouper = None
+ result = self._groupby_and_aggregate(how,
+ grouper,
*args,
**kwargs)
@@ -852,7 +855,7 @@ def __init__(self, obj, *args, **kwargs):
self._groupby.grouper.mutated = True
self.groupby = copy.copy(parent.groupby)
- def _apply(self, f, **kwargs):
+ def _apply(self, f, grouper=None, *args, **kwargs):
"""
dispatch to _upsample; we are stripping all of the _upsample kwargs and
performing the original function call on the grouped object
@@ -864,7 +867,7 @@ def func(x):
if isinstance(f, compat.string_types):
return getattr(x, f)(**kwargs)
- return x.apply(f, **kwargs)
+ return x.apply(f, *args, **kwargs)
result = self._groupby.apply(func)
return self._wrap_result(result)
| Unable to pass additional arguments to resample().apply()
```python
import pandas as pd
import numpy as np
def stuff(vals, th):
return np.mean(vals)
rng = pd.date_range('1/1/2011', periods=72, freq='H')
ts = pd.Series(np.random.randn(len(rng)), index=rng)
df_res = ts.resample("D").apply(stuff, 10)
```
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.12.final.0
python-bits: 64
OS: Darwin
OS-release: 16.1.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: None.None
pandas: 0.19.1
nose: 1.3.0
pip: 9.0.0
setuptools: 28.7.1
Cython: 0.20.2
numpy: 1.11.2
scipy: 0.13.2
statsmodels: None
xarray: None
IPython: 5.1.0
sphinx: 1.4.8
patsy: None
dateutil: 2.5.3
pytz: 2016.7
blosc: None
bottleneck: None
tables: 3.2.2
numexpr: 2.5.2
matplotlib: 1.3.1
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: 1.0.12
pymysql: None
psycopg2: None
jinja2: 2.8
boto: None
pandas_datareader: None
<details>
In this version, it seems to be impossible to apply a function with an additional argument (which is not used in the example above). Checking the pandas 0.19.1 code, what happens is that the `grouped` argument of the `_groupby_and_aggregate` function receives the first value of the `*args` passed from the `aggregate` function, which is clearly wrong.
Traceback (most recent call last):
File "test.py", line 9, in <module>
df_res = ts.resample("D").apply(stuff, 10)
File "/usr/local/lib/python2.7/site-packages/pandas/tseries/resample.py", line 324, in aggregate
**kwargs)
File "/usr/local/lib/python2.7/site-packages/pandas/tseries/resample.py", line 405, in _groupby_and_aggregate
result = grouped.apply(how, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/pandas/core/groupby.py", line 694, in apply
return self._python_apply_general(f)
File "/usr/local/lib/python2.7/site-packages/pandas/core/groupby.py", line 697, in _python_apply_general
keys, values, mutated = self.grouper.apply(f, self._selected_obj,
AttributeError: 'int' object has no attribute 'apply'
</details>
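A hedged workaround while the bug stands: close over the extra argument so that only the group is passed positionally (names match the code sample above):

```python
# Equivalent to ts.resample("D").apply(stuff, 10), but avoids forwarding
# positional args through the broken aggregate() path.
df_res = ts.resample("D").apply(lambda vals: stuff(vals, 10))
```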
| hmm thought we had an issue for this already - yep looks like a bug
if you'd like to submit a PR to fix it, that would be great
I will look at this today if no one else is working on it?
Traceback (most recent call last):
File "test.py", line 9, in <module>
df_res = ts.resample("D").apply(stuff, 10)
File "/usr/local/lib/python2.7/site-packages/pandas/tseries/resample.py", line 324, in aggregate
**kwargs)
File "/usr/local/lib/python2.7/site-packages/pandas/tseries/resample.py", line 405, in _groupby_and_aggregate
result = grouped.apply(how, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/pandas/core/groupby.py", line 694, in apply
return self._python_apply_general(f)
File "/usr/local/lib/python2.7/site-packages/pandas/core/groupby.py", line 697, in _python_apply_general
keys, values, mutated = self.grouper.apply(f, self._selected_obj,
AttributeError: 'int' object has no attribute 'apply'
| 12,091 |
|||
pandas-dev/pandas | pandas-dev__pandas-22377 | e7fca911872e29612bca77613d6e77468514acbe | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -634,6 +634,7 @@ Numeric
a ``TypeError`` was wrongly raised. For all three methods such calculation are now done correctly. (:issue:`16679`).
- Bug in :class:`Series` comparison against datetime-like scalars and arrays (:issue:`22074`)
- Bug in :class:`DataFrame` multiplication between boolean dtype and integer returning ``object`` dtype instead of integer dtype (:issue:`22047`,:issue:`22163`)
+- Bug in :meth:`DataFrame.apply` where, when supplied with a string argument and additional positional or keyword arguments (e.g. ``df.apply('sum', min_count=1)``), a ``TypeError`` was wrongly raised (:issue:`22376`)
-
Strings
diff --git a/pandas/core/apply.py b/pandas/core/apply.py
--- a/pandas/core/apply.py
+++ b/pandas/core/apply.py
@@ -71,7 +71,9 @@ def __init__(self, obj, func, broadcast, raw, reduce, result_type,
self.result_type = result_type
# curry if needed
- if kwds or args and not isinstance(func, np.ufunc):
+ if ((kwds or args) and
+ not isinstance(func, (np.ufunc, compat.string_types))):
+
def f(x):
return func(x, *args, **kwds)
else:
| DataFrame.apply fails for string function arguments with additional positional or keyword arguments
#### Code sample
```python
import pandas as pd
df = pd.DataFrame([[1, 2, 3], [1, 2, 3]])
df.apply('sum', axis=1, min_count=1)
```
#### Problem description
When we use the ``DataFrame.apply`` method with a string function argument (e.g. 'sum') and provide additional positional or keyword arguments it fails with the following exception:
```python
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<cut>/pandas/core/frame.py", line 6173, in apply
return op.get_result()
File "<cut>/pandas/core/apply.py", line 151, in get_result
return self.apply_standard()
File "<cut>/pandas/core/apply.py", line 257, in apply_standard
self.apply_series_generator()
File "<cut>/pandas/core/apply.py", line 286, in apply_series_generator
results[i] = self.f(v)
File "<cut>/pandas-dev/pandas/core/apply.py", line 78, in f
return func(x, *args, **kwds)
TypeError: ("'str' object is not callable", 'occurred at index 0')
```
but works just fine without additional arguments. The code above fails in master.
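A hedged workaround while the bug stands is to wrap the reduction in a lambda so no extra arguments are forwarded alongside a string argument:

```python
# Same result as df.apply('sum', axis=1, min_count=1) should give; the
# keyword is applied inside the callable instead.
df.apply(lambda row: row.sum(min_count=1), axis=1)
```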
#### Expected Output
```python
>>> df.apply('sum', axis=1, min_count=1)
0 6
1 6
dtype: int64
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: 70e6f7c3ce7aca9a0ee08bacb2fe0ad85db02d88
python: 3.6.6.final.0
python-bits: 64
OS: Linux
OS-release: 3.0.101-108.13.1.14249.0.PTF-default
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.24.0.dev0+469.g70e6f7c3c
pytest: 3.7.1
pip: 10.0.1
setuptools: 40.0.0
Cython: 0.28.5
numpy: 1.15.0
scipy: 1.1.0
pyarrow: 0.9.0
xarray: 0.10.8
IPython: 6.5.0
sphinx: 1.7.6
patsy: 0.5.0
dateutil: 2.7.3
pytz: 2018.5
blosc: None
bottleneck: 1.2.1
tables: 3.4.4
numexpr: 2.6.7
feather: 0.4.0
matplotlib: 2.2.3
openpyxl: 2.5.5
xlrd: 1.1.0
xlwt: 1.3.0
xlsxwriter: 1.0.5
lxml: 4.2.4
bs4: 4.6.3
html5lib: 1.0.1
sqlalchemy: 1.2.10
pymysql: 0.9.2
psycopg2: None
jinja2: 2.10
s3fs: 0.1.5
fastparquet: None
pandas_gbq: None
pandas_datareader: None
gcsfs: 0.1.1
</details>
| 2018-08-16T00:26:04Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<cut>/pandas/core/frame.py", line 6173, in apply
return op.get_result()
File "<cut>/pandas/core/apply.py", line 151, in get_result
return self.apply_standard()
File "<cut>/pandas/core/apply.py", line 257, in apply_standard
self.apply_series_generator()
File "<cut>/pandas/core/apply.py", line 286, in apply_series_generator
results[i] = self.f(v)
File "<cut>/pandas-dev/pandas/core/apply.py", line 78, in f
return func(x, *args, **kwds)
TypeError: ("'str' object is not callable", 'occurred at index 0')
| 12,114 |
||||
pandas-dev/pandas | pandas-dev__pandas-22394 | b5d81cfe43eeccfc3641aa9578097f726da9ce9d | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -711,7 +711,7 @@ Reshaping
- Bug in :func:`get_dummies` with Unicode attributes in Python 2 (:issue:`22084`)
- Bug in :meth:`DataFrame.replace` raises ``RecursionError`` when replacing empty lists (:issue:`22083`)
- Bug in :meth:`Series.replace` and meth:`DataFrame.replace` when dict is used as the `to_replace` value and one key in the dict is is another key's value, the results were inconsistent between using integer key and using string key (:issue:`20656`)
--
+- Bug in :meth:`DataFrame.drop_duplicates` for empty ``DataFrame`` which incorrectly raises an error (:issue:`20516`)
Build Changes
^^^^^^^^^^^^^
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4335,6 +4335,9 @@ def drop_duplicates(self, subset=None, keep='first', inplace=False):
-------
deduplicated : DataFrame
"""
+ if self.empty:
+ return self.copy()
+
inplace = validate_bool_kwarg(inplace, 'inplace')
duplicated = self.duplicated(subset, keep=keep)
@@ -4369,6 +4372,9 @@ def duplicated(self, subset=None, keep='first'):
from pandas.core.sorting import get_group_index
from pandas._libs.hashtable import duplicated_int64, _SIZE_HINT_LIMIT
+ if self.empty:
+ return Series()
+
def f(vals):
labels, shape = algorithms.factorize(
vals, size_hint=min(len(self), _SIZE_HINT_LIMIT))
| Calling drop_duplicates method for empty pandas dataframe throws error
#### Code Sample
```python
>>> pd.DataFrame().drop_duplicates()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/analytical-monk/miniconda3/lib/python3.6/site-packages/pandas/core/frame.py", line 3098, in drop_duplicates
duplicated = self.duplicated(subset, keep=keep)
File "/home/analytical-monk/miniconda3/lib/python3.6/site-packages/pandas/core/frame.py", line 3144, in duplicated
labels, shape = map(list, zip(*map(f, vals)))
ValueError: not enough values to unpack (expected 2, got 0)
```
#### Problem description
Currently, calling the drop_duplicates method for an empty dataframe object (simply pd.DataFrame()) throws an error.
Ideally it should return back the empty dataframe just like it does when at least one column is present.
#### Expected Output
```
>>> pd.DataFrame().drop_duplicates()
Empty DataFrame
Columns: []
Index: []
```
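A sketch of the fix's early return as a user-side helper (``drop_duplicates_safe`` is a hypothetical name; the patch above puts the same check inside ``DataFrame.drop_duplicates`` itself):

```python
import pandas as pd

def drop_duplicates_safe(df):
    # An empty frame has no duplicates, so return a copy unchanged.
    if df.empty:
        return df.copy()
    return df.drop_duplicates()

print(drop_duplicates_safe(pd.DataFrame()))  # Empty DataFrame
```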
#### Output of ``pd.show_versions()``
<details>
```
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.1.final.0
python-bits: 64
OS: Linux
OS-release: 4.8.0-58-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_IN
LOCALE: en_IN.ISO8859-1
pandas: 0.20.3
pytest: None
pip: 9.0.1
setuptools: 36.6.0
Cython: None
numpy: 1.13.1
scipy: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2017.2
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: 4.6.0
html5lib: 1.0b10
sqlalchemy: 1.1.14
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
pandas_gbq: None
pandas_datareader: None
```
</details>
| @TomAugspurger Opened this issue for the problem I'd mentioned in the gitter chat.
Can I work on this issue? @TomAugspurger
Please do!
@arpit1997 are you still working on this? | 2018-08-17T02:56:14Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/analytical-monk/miniconda3/lib/python3.6/site-packages/pandas/core/frame.py", line 3098, in drop_duplicates
duplicated = self.duplicated(subset, keep=keep)
File "/home/analytical-monk/miniconda3/lib/python3.6/site-packages/pandas/core/frame.py", line 3144, in duplicated
labels, shape = map(list, zip(*map(f, vals)))
ValueError: not enough values to unpack (expected 2, got 0)
| 12,116 |
|||
pandas-dev/pandas | pandas-dev__pandas-22436 | fa47b8d95e4752d2687b0aee5942dcbb34f61362 | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -663,6 +663,7 @@ Indexing
- Fixed ``DataFrame[np.nan]`` when columns are non-unique (:issue:`21428`)
- Bug when indexing :class:`DatetimeIndex` with nanosecond resolution dates and timezones (:issue:`11679`)
- Bug where indexing with a Numpy array containing negative values would mutate the indexer (:issue:`21867`)
+- Bug where mixed indexes wouldn't allow integers for ``.at`` (:issue:`19860`)
- ``Float64Index.get_loc`` now raises ``KeyError`` when boolean key passed. (:issue:`19087`)
Missing
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -3125,8 +3125,8 @@ def get_value(self, series, key):
iloc = self.get_loc(key)
return s[iloc]
except KeyError:
- if (len(self) > 0 and
- self.inferred_type in ['integer', 'boolean']):
+ if (len(self) > 0
+ and (self.holds_integer() or self.is_boolean())):
raise
elif is_integer(key):
return s[key]
@@ -3139,7 +3139,7 @@ def get_value(self, series, key):
return self._engine.get_value(s, k,
tz=getattr(series.dtype, 'tz', None))
except KeyError as e1:
- if len(self) > 0 and self.inferred_type in ['integer', 'boolean']:
+ if len(self) > 0 and (self.holds_integer() or self.is_boolean()):
raise
try:
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -2354,7 +2354,7 @@ def _convert_key(self, key, is_setter=False):
raise ValueError("At based indexing on an integer index "
"can only have integer indexers")
else:
- if is_integer(i):
+ if is_integer(i) and not ax.holds_integer():
raise ValueError("At based indexing on an non-integer "
"index can only have non-integer "
"indexers")
| BUG?: .at not working on object indexes containing some integers
Version 0.22.0
#### Problem description
Using the `.at` method on an index which contains integers as well as strings/objects raises an error. This used to be possible using the ``.get_value()`` method. As ``.at`` is the designated successor (#15269), the same behaviour should be supported.
I also noticed that ``.get_value`` is approx. twice as fast as ``.at``. Is there a specific reason to stick with ``.at``? (see again #15269 for a speed comparison)
#### Code Sample
```python
import pandas as pd
import numpy as np
data = np.random.randn(10, 5)
df = pd.DataFrame(data, columns=['a', 'b', 'c', 1, 2])
df.at[0, 1]
```
Raises:
```python
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/thielc/anaconda3/lib/python3.6/site-packages/pandas/core/indexing.py", line 1868, in __getitem__
key = self._convert_key(key)
File "/home/thielc/anaconda3/lib/python3.6/site-packages/pandas/core/indexing.py", line 1915, in _convert_key
raise ValueError("At based indexing on an non-integer "
ValueError: At based indexing on an non-integer index can only have non-integer indexers
```
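A hedged workaround until ``.at`` accepts integer labels on mixed (object) indexes — ``.loc`` already resolves the integer column label here:

```python
# Same lookup as df.at[0, 1], via label-based .loc (slower, but it works
# on the mixed ['a', 'b', 'c', 1, 2] column index from the example above).
df.loc[0, 1]
```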
| @c-thiel : Thanks for reporting this! I'm a little unclear as to what's being reported. Is this a regression from a previous version (you said this used to be possible), or is this a general API inconsistency that you're referring to?
That aside, I do agree that the behavior does look strange.
I don't think it is a regression from a previous version (tested with 0.16; it is already broken there), but it is possible with `get_value`. However, that is deprecated, and we say people should use `.at` instead. So `.at` should work correctly on this as well.
The problem comes from here:
https://github.com/pandas-dev/pandas/blob/572476f0a3652222c17458d418a107554580eaa5/pandas/core/indexing.py#L1907-L1916
where we do this check. But I agree the check looks to strict, as a mixed object index can indeed contain integers as well.
You're welcome to try to fix this (e.g. try removing this check and see if some tests fail because of that).
yeah I think it's prob ok to remove the else check; this ultimately goes through ``.loc`` so indexing verification can occur there
@c-thiel indexing with mixed dtype indexes is simply going to be slow generally.
@jorisvandenbossche Yes, this is what I was referring to.
@jreback : Regarding performance, ``at`` is still much faster than ``loc``, especially when setting values. The setting performance for single values is the main reason I use the set_value and get_value functions. Also, the ``get_value``/``set_value`` functions are twice as fast as ``at``:
```python
import pandas as pd
import numpy as np
import time
c = ['a', 'b', 'c', 'd', 'e']
data = np.random.rand(10000, 5)
df = pd.DataFrame(data, columns=c)
rows = np.random.randint(0, 9999, (100000,))
columns = np.random.choice(c, (100000,))
t = time.time()
for row, column in zip(rows, columns):
a = df.get_value(row, column)
print(f'get_value: {time.time()-t}')
t = time.time()
for row, column in zip(rows, columns):
a = df.at[row, column]
print(f'at: {time.time()-t}')
t = time.time()
for row, column in zip(rows, columns):
a = df.loc[row, column]
print(f'loc: {time.time()-t}')
t = time.time()
for row, column in zip(rows, columns):
df.at[row, column] = 4
print(f'set at: {time.time()-t}')
t = time.time()
for row, column in zip(rows, columns):
df.loc[row, column] = 5
print(f'set loc: {time.time()-t}')
t = time.time()
for row, column in zip(rows, columns):
df.set_value(row, column, 4)
print(f'set_value: {time.time()-t}')
```
```
get_value: 0.257692813873291
at: 0.52744460105896
loc: 0.7349758148193359
set at: 0.687880277633667
set loc: 11.664336204528809
set_value: 0.3008086681365967
```
@c-thiel setting individual values in a loop is non-idiomatic. set_value/get_value were deprecated because they didn't properly handle *any* edge cases nor had any type safety whatsoever. Correct is much, much better than wrong but *slightly* faster.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/thielc/anaconda3/lib/python3.6/site-packages/pandas/core/indexing.py", line 1868, in __getitem__
key = self._convert_key(key)
File "/home/thielc/anaconda3/lib/python3.6/site-packages/pandas/core/indexing.py", line 1915, in _convert_key
raise ValueError("At based indexing on an non-integer "
ValueError: At based indexing on an non-integer index can only have non-integer indexers
| 12,120 |
|||
pandas-dev/pandas | pandas-dev__pandas-22647 | f87fe147c7494f3db56f3de31aeda12f80ef9c67 | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -182,6 +182,7 @@ Other Enhancements
- :func:`to_timedelta` now supports iso-formated timedelta strings (:issue:`21877`)
- :class:`Series` and :class:`DataFrame` now support :class:`Iterable` in constructor (:issue:`2193`)
- :class:`DatetimeIndex` gained :attr:`DatetimeIndex.timetz` attribute. Returns local time with timezone information. (:issue:`21358`)
+- :meth:`round`, :meth:`ceil`, and meth:`floor` for :class:`DatetimeIndex` and :class:`Timestamp` now support an ``ambiguous`` argument for handling datetimes that are rounded to ambiguous times (:issue:`18946`)
- :class:`Resampler` now is iterable like :class:`GroupBy` (:issue:`15314`).
- :meth:`Series.resample` and :meth:`DataFrame.resample` have gained the :meth:`Resampler.quantile` (:issue:`15023`).
- :meth:`Index.to_frame` now supports overriding column name(s) (:issue:`22580`).
diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -477,6 +477,13 @@ class NaTType(_NaT):
Parameters
----------
freq : a freq string indicating the rounding resolution
+ ambiguous : bool, 'NaT', default 'raise'
+ - bool contains flags to determine if time is dst or not (note
+ that this flag is only applicable for ambiguous fall dst dates)
+ - 'NaT' will return NaT for an ambiguous time
+ - 'raise' will raise an AmbiguousTimeError for an ambiguous time
+
+ .. versionadded:: 0.24.0
Raises
------
@@ -489,6 +496,17 @@ class NaTType(_NaT):
Parameters
----------
freq : a freq string indicating the flooring resolution
+ ambiguous : bool, 'NaT', default 'raise'
+ - bool contains flags to determine if time is dst or not (note
+ that this flag is only applicable for ambiguous fall dst dates)
+ - 'NaT' will return NaT for an ambiguous time
+ - 'raise' will raise an AmbiguousTimeError for an ambiguous time
+
+ .. versionadded:: 0.24.0
+
+ Raises
+ ------
+ ValueError if the freq cannot be converted
""")
ceil = _make_nat_func('ceil', # noqa:E128
"""
@@ -497,6 +515,17 @@ class NaTType(_NaT):
Parameters
----------
freq : a freq string indicating the ceiling resolution
+ ambiguous : bool, 'NaT', default 'raise'
+ - bool contains flags to determine if time is dst or not (note
+ that this flag is only applicable for ambiguous fall dst dates)
+ - 'NaT' will return NaT for an ambiguous time
+ - 'raise' will raise an AmbiguousTimeError for an ambiguous time
+
+ .. versionadded:: 0.24.0
+
+ Raises
+ ------
+ ValueError if the freq cannot be converted
""")
tz_convert = _make_nat_func('tz_convert', # noqa:E128
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -656,7 +656,7 @@ class Timestamp(_Timestamp):
return create_timestamp_from_ts(ts.value, ts.dts, ts.tzinfo, freq)
- def _round(self, freq, rounder):
+ def _round(self, freq, rounder, ambiguous='raise'):
if self.tz is not None:
value = self.tz_localize(None).value
else:
@@ -668,10 +668,10 @@ class Timestamp(_Timestamp):
r = round_ns(value, rounder, freq)[0]
result = Timestamp(r, unit='ns')
if self.tz is not None:
- result = result.tz_localize(self.tz)
+ result = result.tz_localize(self.tz, ambiguous=ambiguous)
return result
- def round(self, freq):
+ def round(self, freq, ambiguous='raise'):
"""
Round the Timestamp to the specified resolution
@@ -682,32 +682,61 @@ class Timestamp(_Timestamp):
Parameters
----------
freq : a freq string indicating the rounding resolution
+ ambiguous : bool, 'NaT', default 'raise'
+ - bool contains flags to determine if time is dst or not (note
+ that this flag is only applicable for ambiguous fall dst dates)
+ - 'NaT' will return NaT for an ambiguous time
+ - 'raise' will raise an AmbiguousTimeError for an ambiguous time
+
+ .. versionadded:: 0.24.0
Raises
------
ValueError if the freq cannot be converted
"""
- return self._round(freq, np.round)
+ return self._round(freq, np.round, ambiguous)
- def floor(self, freq):
+ def floor(self, freq, ambiguous='raise'):
"""
return a new Timestamp floored to this resolution
Parameters
----------
freq : a freq string indicating the flooring resolution
+ ambiguous : bool, 'NaT', default 'raise'
+ - bool contains flags to determine if time is dst or not (note
+ that this flag is only applicable for ambiguous fall dst dates)
+ - 'NaT' will return NaT for an ambiguous time
+ - 'raise' will raise an AmbiguousTimeError for an ambiguous time
+
+ .. versionadded:: 0.24.0
+
+ Raises
+ ------
+ ValueError if the freq cannot be converted
"""
- return self._round(freq, np.floor)
+ return self._round(freq, np.floor, ambiguous)
- def ceil(self, freq):
+ def ceil(self, freq, ambiguous='raise'):
"""
return a new Timestamp ceiled to this resolution
Parameters
----------
freq : a freq string indicating the ceiling resolution
+ ambiguous : bool, 'NaT', default 'raise'
+ - bool contains flags to determine if time is dst or not (note
+ that this flag is only applicable for ambiguous fall dst dates)
+ - 'NaT' will return NaT for an ambiguous time
+ - 'raise' will raise an AmbiguousTimeError for an ambiguous time
+
+ .. versionadded:: 0.24.0
+
+ Raises
+ ------
+ ValueError if the freq cannot be converted
"""
- return self._round(freq, np.ceil)
+ return self._round(freq, np.ceil, ambiguous)
@property
def tz(self):
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -99,6 +99,18 @@ class TimelikeOps(object):
frequency like 'S' (second) not 'ME' (month end). See
:ref:`frequency aliases <timeseries.offset_aliases>` for
a list of possible `freq` values.
+ ambiguous : 'infer', bool-ndarray, 'NaT', default 'raise'
+ - 'infer' will attempt to infer fall dst-transition hours based on
+ order
+ - bool-ndarray where True signifies a DST time, False designates
+ a non-DST time (note that this flag is only applicable for
+ ambiguous times)
+ - 'NaT' will return NaT where there are ambiguous times
+ - 'raise' will raise an AmbiguousTimeError if there are ambiguous
+ times
+ Only relevant for DatetimeIndex
+
+ .. versionadded:: 0.24.0
Returns
-------
@@ -168,7 +180,7 @@ class TimelikeOps(object):
"""
)
- def _round(self, freq, rounder):
+ def _round(self, freq, rounder, ambiguous):
# round the local times
values = _ensure_datetimelike_to_i8(self)
result = round_ns(values, rounder, freq)
@@ -180,19 +192,20 @@ def _round(self, freq, rounder):
if 'tz' in attribs:
attribs['tz'] = None
return self._ensure_localized(
- self._shallow_copy(result, **attribs))
+ self._shallow_copy(result, **attribs), ambiguous
+ )
@Appender((_round_doc + _round_example).format(op="round"))
- def round(self, freq, *args, **kwargs):
- return self._round(freq, np.round)
+ def round(self, freq, ambiguous='raise'):
+ return self._round(freq, np.round, ambiguous)
@Appender((_round_doc + _floor_example).format(op="floor"))
- def floor(self, freq):
- return self._round(freq, np.floor)
+ def floor(self, freq, ambiguous='raise'):
+ return self._round(freq, np.floor, ambiguous)
@Appender((_round_doc + _ceil_example).format(op="ceil"))
- def ceil(self, freq):
- return self._round(freq, np.ceil)
+ def ceil(self, freq, ambiguous='raise'):
+ return self._round(freq, np.ceil, ambiguous)
class DatetimeIndexOpsMixin(DatetimeLikeArrayMixin):
@@ -264,7 +277,7 @@ def _evaluate_compare(self, other, op):
except TypeError:
return result
- def _ensure_localized(self, result):
+ def _ensure_localized(self, result, ambiguous='raise'):
"""
ensure that we are re-localized
@@ -274,6 +287,8 @@ def _ensure_localized(self, result):
Parameters
----------
result : DatetimeIndex / i8 ndarray
+ ambiguous : str, bool, or bool-ndarray
+ default 'raise'
Returns
-------
@@ -284,7 +299,7 @@ def _ensure_localized(self, result):
if getattr(self, 'tz', None) is not None:
if not isinstance(result, ABCIndexClass):
result = self._simple_new(result)
- result = result.tz_localize(self.tz)
+ result = result.tz_localize(self.tz, ambiguous=ambiguous)
return result
def _box_values_as_index(self):
| AmbiguousTimeError in floor() operation
This is probably related to #18885
Given a DataFrame whose datetimes span a DST change:
```python
df1=pd.DataFrame([pd.to_datetime('2017-10-29 02:00:00+02:00'),
pd.to_datetime('2017-10-29 02:00:00+01:00'),
pd.to_datetime('2017-10-29 03:00:00+01:00')],columns=['date'])
df1['date'] = df1['date'].dt.tz_localize('UTC').dt.tz_convert('Europe/Madrid')
df1['value'] = 1
```
When we try to do a `floor()` or `ceil()` operation, we get an AmbiguousTimeError exception:
```python
df1.date.dt.floor('H')
```
### Expected output
```
0 2017-10-29 02:00:00+02:00
1 2017-10-29 02:00:00+01:00
2 2017-10-29 03:00:00+01:00
```
### Actual output
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "(...)/venv/lib/python3.6/site-packages/pandas/core/accessor.py", line 115, in f
return self._delegate_method(name, *args, **kwargs)
File "(...)/venv/lib/python3.6/site-packages/pandas/core/indexes/accessors.py", line 131, in _delegate_method
result = method(*args, **kwargs)
File "(...)/venv/lib/python3.6/site-packages/pandas/core/indexes/datetimelike.py", line 118, in floor
return self._round(freq, np.floor)
File "(...)/venv/lib/python3.6/site-packages/pandas/core/indexes/datetimelike.py", line 110, in _round
self._shallow_copy(result, **attribs))
File "(...)/venv/lib/python3.6/site-packages/pandas/core/indexes/datetimelike.py", line 230, in _ensure_localized
result = result.tz_localize(self.tz)
File "(...)/venv/lib/python3.6/site-packages/pandas/util/_decorators.py", line 118, in wrapper
return func(*args, **kwargs)
File "(...)/venv/lib/python3.6/site-packages/pandas/core/indexes/datetimes.py", line 1858, in tz_localize
errors=errors)
File "pandas/_libs/tslib.pyx", line 3593, in pandas._libs.tslib.tz_localize_to_utc
pytz.exceptions.AmbiguousTimeError: Cannot infer dst time from Timestamp('2017-10-29 02:00:00'), try using the 'ambiguous' argument
```
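Until the fix lands, one workaround is to floor in UTC, where no wall time is ambiguous; with the patch above applied, the new `ambiguous` keyword resolves it explicitly (both lines are sketches against the `df1` built above):
```python
# Workaround: hour-floor in UTC, then convert back (Madrid offsets are whole hours).
df1['date'].dt.tz_convert('UTC').dt.floor('H').dt.tz_convert('Europe/Madrid')

# With the patch applied, resolve the ambiguity explicitly instead:
df1['date'].dt.floor('H', ambiguous='infer')  # or 'NaT', 'raise', or a bool array
```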
| yes, there are a number of issues related to this. the fix is all the same. | 2018-09-09T06:48:32Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "(...)/venv/lib/python3.6/site-packages/pandas/core/accessor.py", line 115, in f
return self._delegate_method(name, *args, **kwargs)
File "(...)/venv/lib/python3.6/site-packages/pandas/core/indexes/accessors.py", line 131, in _delegate_method
result = method(*args, **kwargs)
File "(...)/venv/lib/python3.6/site-packages/pandas/core/indexes/datetimelike.py", line 118, in floor
return self._round(freq, np.floor)
File "(...)/venv/lib/python3.6/site-packages/pandas/core/indexes/datetimelike.py", line 110, in _round
self._shallow_copy(result, **attribs))
File "(...)/venv/lib/python3.6/site-packages/pandas/core/indexes/datetimelike.py", line 230, in _ensure_localized
result = result.tz_localize(self.tz)
File "(...)/venv/lib/python3.6/site-packages/pandas/util/_decorators.py", line 118, in wrapper
return func(*args, **kwargs)
File "(...)/venv/lib/python3.6/site-packages/pandas/core/indexes/datetimes.py", line 1858, in tz_localize
errors=errors)
File "pandas/_libs/tslib.pyx", line 3593, in pandas._libs.tslib.tz_localize_to_utc
pytz.exceptions.AmbiguousTimeError: Cannot infer dst time from Timestamp('2017-10-29 02:00:00'), try using the 'ambiguous' argument
| 12,151 |
|||
pandas-dev/pandas | pandas-dev__pandas-22725 | c8ce3d01e9ffafc24c6f9dd568cd9eb7e42c610c | diff --git a/pandas/core/strings.py b/pandas/core/strings.py
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -3,8 +3,9 @@
from pandas.compat import zip
from pandas.core.dtypes.generic import ABCSeries, ABCIndex
-from pandas.core.dtypes.missing import isna, notna
+from pandas.core.dtypes.missing import isna
from pandas.core.dtypes.common import (
+ ensure_object,
is_bool_dtype,
is_categorical_dtype,
is_object_dtype,
@@ -36,114 +37,26 @@
_shared_docs = dict()
-def _get_array_list(arr, others):
- """
- Auxiliary function for :func:`str_cat`
-
- Parameters
- ----------
- arr : ndarray
- The left-most ndarray of the concatenation
- others : list, ndarray, Series
- The rest of the content to concatenate. If list of list-likes,
- all elements must be passable to ``np.asarray``.
-
- Returns
- -------
- list
- List of all necessary arrays
- """
- from pandas.core.series import Series
-
- if len(others) and isinstance(com.values_from_object(others)[0],
- (list, np.ndarray, Series)):
- arrays = [arr] + list(others)
- else:
- arrays = [arr, others]
-
- return [np.asarray(x, dtype=object) for x in arrays]
-
-
-def str_cat(arr, others=None, sep=None, na_rep=None):
+def cat_core(list_of_columns, sep):
"""
Auxiliary function for :meth:`str.cat`
- If `others` is specified, this function concatenates the Series/Index
- and elements of `others` element-wise.
- If `others` is not being passed then all values in the Series are
- concatenated in a single string with a given `sep`.
-
Parameters
----------
- others : list-like, or list of list-likes, optional
- List-likes (or a list of them) of the same length as calling object.
- If None, returns str concatenating strings of the Series.
- sep : string or None, default None
- If None, concatenates without any separator.
- na_rep : string or None, default None
- If None, NA in the series are ignored.
+ list_of_columns : list of numpy arrays
+ List of arrays to be concatenated with sep;
+ these arrays may not contain NaNs!
+ sep : string
+ The separator string for concatenating the columns
Returns
-------
- concat
- ndarray containing concatenated results (if `others is not None`)
- or str (if `others is None`)
+ nd.array
+ The concatenation of list_of_columns with sep
"""
- if sep is None:
- sep = ''
-
- if others is not None:
- arrays = _get_array_list(arr, others)
-
- n = _length_check(arrays)
- masks = np.array([isna(x) for x in arrays])
- cats = None
-
- if na_rep is None:
- na_mask = np.logical_or.reduce(masks, axis=0)
-
- result = np.empty(n, dtype=object)
- np.putmask(result, na_mask, np.nan)
-
- notmask = ~na_mask
-
- tuples = zip(*[x[notmask] for x in arrays])
- cats = [sep.join(tup) for tup in tuples]
-
- result[notmask] = cats
- else:
- for i, x in enumerate(arrays):
- x = np.where(masks[i], na_rep, x)
- if cats is None:
- cats = x
- else:
- cats = cats + sep + x
-
- result = cats
-
- return result
- else:
- arr = np.asarray(arr, dtype=object)
- mask = isna(arr)
- if na_rep is None and mask.any():
- if sep == '':
- na_rep = ''
- else:
- return sep.join(arr[notna(arr)])
- return sep.join(np.where(mask, na_rep, arr))
-
-
-def _length_check(others):
- n = None
- for x in others:
- try:
- if n is None:
- n = len(x)
- elif len(x) != n:
- raise ValueError('All arrays must be same length')
- except TypeError:
- raise ValueError('Must pass arrays containing strings to str_cat')
- return n
+ list_with_sep = [sep] * (2 * len(list_of_columns) - 1)
+ list_with_sep[::2] = list_of_columns
+ return np.sum(list_with_sep, axis=0)
def _na_map(f, arr, na_result=np.nan, dtype=object):
@@ -2283,6 +2196,8 @@ def cat(self, others=None, sep=None, na_rep=None, join=None):
if isinstance(others, compat.string_types):
raise ValueError("Did you mean to supply a `sep` keyword?")
+ if sep is None:
+ sep = ''
if isinstance(self._orig, Index):
data = Series(self._orig, index=self._orig)
@@ -2291,9 +2206,13 @@ def cat(self, others=None, sep=None, na_rep=None, join=None):
# concatenate Series/Index with itself if no "others"
if others is None:
- result = str_cat(data, others=others, sep=sep, na_rep=na_rep)
- return self._wrap_result(result,
- use_codes=(not self._is_categorical))
+ data = ensure_object(data)
+ na_mask = isna(data)
+ if na_rep is None and na_mask.any():
+ data = data[~na_mask]
+ elif na_rep is not None and na_mask.any():
+ data = np.where(na_mask, na_rep, data)
+ return sep.join(data)
try:
# turn anything in "others" into lists of Series
@@ -2320,23 +2239,45 @@ def cat(self, others=None, sep=None, na_rep=None, join=None):
"'outer'|'inner'|'right'`. The future default will "
"be `join='left'`.", FutureWarning, stacklevel=2)
+ # if join is None, _get_series_list already force-aligned indexes
+ join = 'left' if join is None else join
+
# align if required
- if join is not None:
+ if any(not data.index.equals(x.index) for x in others):
# Need to add keys for uniqueness in case of duplicate columns
others = concat(others, axis=1,
join=(join if join == 'inner' else 'outer'),
- keys=range(len(others)))
+ keys=range(len(others)), copy=False)
data, others = data.align(others, join=join)
others = [others[x] for x in others] # again list of Series
- # str_cat discards index
- res = str_cat(data, others=others, sep=sep, na_rep=na_rep)
+ all_cols = [ensure_object(x) for x in [data] + others]
+ na_masks = np.array([isna(x) for x in all_cols])
+ union_mask = np.logical_or.reduce(na_masks, axis=0)
+
+ if na_rep is None and union_mask.any():
+ # no na_rep means NaNs for all rows where any column has a NaN
+ # only necessary if there are actually any NaNs
+ result = np.empty(len(data), dtype=object)
+ np.putmask(result, union_mask, np.nan)
+
+ not_masked = ~union_mask
+ result[not_masked] = cat_core([x[not_masked] for x in all_cols],
+ sep)
+ elif na_rep is not None and union_mask.any():
+ # fill NaNs with na_rep in case there are actually any NaNs
+ all_cols = [np.where(nm, na_rep, col)
+ for nm, col in zip(na_masks, all_cols)]
+ result = cat_core(all_cols, sep)
+ else:
+ # no NaNs - can just concatenate
+ result = cat_core(all_cols, sep)
if isinstance(self._orig, Index):
- res = Index(res, name=self._orig.name)
+ result = Index(result, name=self._orig.name)
else: # Series
- res = Series(res, index=data.index, name=self._orig.name)
- return res
+ result = Series(result, index=data.index, name=self._orig.name)
+ return result
_shared_docs['str_split'] = ("""
Split strings around given separator/delimiter.
| Improve TypeError message for str.cat
Currently,
```
s = pd.Series(['a', 'b', 'c'])
s.str.cat([1, 2, 3])
```
yields
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Axel Obermeier\eclipse-workspace\pddev\pandas\core\strings.py", line 2222, in cat
res = str_cat(data, others=others, sep=sep, na_rep=na_rep)
File "C:\Users\Axel Obermeier\eclipse-workspace\pddev\pandas\core\strings.py", line 111, in str_cat
cats = [sep.join(tup) for tup in tuples]
File "C:\Users\Axel Obermeier\eclipse-workspace\pddev\pandas\core\strings.py", line 111, in <listcomp>
cats = [sep.join(tup) for tup in tuples]
TypeError: sequence item 1: expected str instance, int found
```
IMO, this should be improved to give a better error message and a shallower stack trace.
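For reference, the patch above replaces the per-row `sep.join` loop with `cat_core`, which interleaves the separator between the columns and reduces with elementwise string addition; a standalone sketch of that trick on hypothetical inputs:
```python
import operator
from functools import reduce

import numpy as np

a = np.array(['a', 'b'], dtype=object)  # object dtype, so '+' concatenates
b = np.array(['x', 'y'], dtype=object)
sep = '-'

# Interleave the separator between the columns ...
parts = [sep] * (2 * 2 - 1)
parts[::2] = [a, b]

# ... then reduce elementwise; np.sum(parts, axis=0) in the patch is the same reduction.
print(reduce(operator.add, parts))  # -> ['a-x' 'b-y']
```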
| 2018-09-16T00:11:27Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Axel Obermeier\eclipse-workspace\pddev\pandas\core\strings.py", line 2222, in cat
res = str_cat(data, others=others, sep=sep, na_rep=na_rep)
File "C:\Users\Axel Obermeier\eclipse-workspace\pddev\pandas\core\strings.py", line 111, in str_cat
cats = [sep.join(tup) for tup in tuples]
File "C:\Users\Axel Obermeier\eclipse-workspace\pddev\pandas\core\strings.py", line 111, in <listcomp>
cats = [sep.join(tup) for tup in tuples]
TypeError: sequence item 1: expected str instance, int found
| 12,164 |
||||
pandas-dev/pandas | pandas-dev__pandas-22737 | 9e2039bad0112436e3d2adda721d40bb773f5a48 | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -577,6 +577,7 @@ Removal of prior version deprecations/changes
- Removed the ``pandas.formats.style`` shim for :class:`pandas.io.formats.style.Styler` (:issue:`16059`)
- :meth:`Categorical.searchsorted` and :meth:`Series.searchsorted` have renamed the ``v`` argument to ``value`` (:issue:`14645`)
- :meth:`TimedeltaIndex.searchsorted`, :meth:`DatetimeIndex.searchsorted`, and :meth:`PeriodIndex.searchsorted` have renamed the ``key`` argument to ``value`` (:issue:`14645`)
+- Removal of the previously deprecated module ``pandas.json`` (:issue:`19944`)
.. _whatsnew_0240.performance:
diff --git a/pandas/__init__.py b/pandas/__init__.py
--- a/pandas/__init__.py
+++ b/pandas/__init__.py
@@ -61,9 +61,6 @@
# extension module deprecations
from pandas.util._depr_module import _DeprecatedModule
-json = _DeprecatedModule(deprmod='pandas.json',
- moved={'dumps': 'pandas.io.json.dumps',
- 'loads': 'pandas.io.json.loads'})
parser = _DeprecatedModule(deprmod='pandas.parser',
removals=['na_values'],
moved={'CParserError': 'pandas.errors.ParserError'})
diff --git a/pandas/json.py b/pandas/json.py
deleted file mode 100644
--- a/pandas/json.py
+++ /dev/null
@@ -1,7 +0,0 @@
-# flake8: noqa
-
-import warnings
-warnings.warn("The pandas.json module is deprecated and will be "
- "removed in a future version. Please import from "
- "pandas.io.json instead", FutureWarning, stacklevel=2)
-from pandas._libs.json import dumps, loads
| Deprecation warning importing pandas (Python 2.7 only)
#### Code Sample:
```
$mkvirtualenv pandas-deprecation-repro --python=python2.7
$workon pandas-deprecation-repro
$pip install pandas
$PYTHONWARNINGS=error::FutureWarning python -c "import pandas"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/davidchudzicki/.virtualenvs/pandas-deprecation-repro/lib/python2.7/site-packages/pandas/__init__.py", line 84, in <module>
from ._version import get_versions
File "/Users/davidchudzicki/.virtualenvs/pandas-deprecation-repro/lib/python2.7/site-packages/pandas/_version.py", line 9, in <module>
import json
File "/Users/davidchudzicki/.virtualenvs/pandas-deprecation-repro/lib/python2.7/site-packages/pandas/json.py", line 6, in <module>
"pandas.io.json instead", FutureWarning, stacklevel=2)
FutureWarning: The pandas.json module is deprecated and will be removed in a future version. Please import from pandas.io.json instead
```
#### Problem description
I help with a package that wants to run our tests with `PYTHONWARNINGS=error::FutureWarning`, so that we can learn about changes in our dependencies and can avoid passing on deprecation warnings to our users. When we turn this on, `import pandas` gives us an error.
It looks like `_version.py` (autogenerated as part of your release process?) includes an `import json` that's interpreted (incorrectly) as referring to the old `pandas.json`: on Python 2, implicit relative imports resolve `import json` inside the `pandas` package to the sibling `pandas/json.py`.
```
$cat /Users/davidchudzicki/.virtualenvs/pandas-deprecation-repro/lib/python2.7/site-packages/pandas/_version.py
# This file was generated by 'versioneer.py' (0.15) from
# revision-control system data, or from the parent directory name of an
# unpacked source archive. Distribution tarballs contain a pre-generated copy
# of this file.
from warnings import catch_warnings
with catch_warnings(record=True):
import json
import sys
version_json = '''
{
"dirty": false,
"error": null,
"full-revisionid": "a00154dcfe5057cb3fd86653172e74b6893e337d",
"version": "0.22.0"
}
''' # END VERSION_JSON
def get_versions():
return json.loads(version_json)
```
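A minimal sketch of the usual guard against this shadowing (not what versioneer generated here):
```python
# Python 2 only: make 'import json' resolve to the stdlib module even when a
# sibling pandas/json.py exists in the same package.
from __future__ import absolute_import

import json
```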
#### Output of ``pd.show_versions()``
```
>>> import pandas as pd
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.13.final.0
python-bits: 64
OS: Darwin
OS-release: 16.5.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: None.None
pandas: 0.22.0
pytest: None
pip: 9.0.1
setuptools: 38.5.1
Cython: None
numpy: 1.14.1
scipy: None
pyarrow: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2018.3
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
```
| Huh. This only affects released versions.
Adding a simplefilter to that `catch_warnings` block seemed to fix things for a local version.
```python
from warnings import catch_warnings, simplefilter
with catch_warnings(record=True):
simplefilter('ignore', FutureWarning)
import json
import sys
```
these are going away soon anyhow (0.24). https://github.com/pandas-dev/pandas/pull/15537
I suppose you could edit this.
Hey all - just checking if this issue is closed? I am looking to pick something for a beginner. Thank you! Is there a way to know whether an issue is fixed and closed?
This issue is still open, but instead of fixing the warning with the solution mentioned above (https://github.com/pandas-dev/pandas/issues/19944#issuecomment-369576892), I think we would rather just remove the `pandas.json` module entirely for the next release (but PR welcome for that as well!)
Hi there, is anyone currently working on this one? Can I give that last suggestion from @jorisvandenbossche a try?
Looks like no one is working on it, feel free to take it @vitoriahmc.
Let us know if you need help getting started. | 2018-09-17T22:15:15Z | [] | [] |
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/davidchudzicki/.virtualenvs/pandas-deprecation-repro/lib/python2.7/site-packages/pandas/__init__.py", line 84, in <module>
from ._version import get_versions
File "/Users/davidchudzicki/.virtualenvs/pandas-deprecation-repro/lib/python2.7/site-packages/pandas/_version.py", line 9, in <module>
import json
File "/Users/davidchudzicki/.virtualenvs/pandas-deprecation-repro/lib/python2.7/site-packages/pandas/json.py", line 6, in <module>
"pandas.io.json instead", FutureWarning, stacklevel=2)
FutureWarning: The pandas.json module is deprecated and will be removed in a future version. Please import from pandas.io.json instead
| 12,165 |
|||
pandas-dev/pandas | pandas-dev__pandas-22804 | 91802fb0accde031c3b6aca040a8b533a193fef6 | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1495,6 +1495,7 @@ Notice how we now instead output ``np.nan`` itself instead of a stringified form
- Bug in :meth:`DataFrame.to_dict` when the resulting dict contains non-Python scalars in the case of numeric data (:issue:`23753`)
- :func:`DataFrame.to_string()`, :func:`DataFrame.to_html()`, :func:`DataFrame.to_latex()` will correctly format output when a string is passed as the ``float_format`` argument (:issue:`21625`, :issue:`22270`)
- Bug in :func:`read_csv` that caused it to raise ``OverflowError`` when trying to use 'inf' as ``na_value`` with integer index column (:issue:`17128`)
+- Bug in :func:`json_normalize` that caused it to raise ``TypeError`` when two consecutive elements of ``record_path`` are dicts (:issue:`22706`)
Plotting
^^^^^^^^
diff --git a/pandas/io/json/normalize.py b/pandas/io/json/normalize.py
--- a/pandas/io/json/normalize.py
+++ b/pandas/io/json/normalize.py
@@ -229,6 +229,8 @@ def _pull_field(js, spec):
meta_keys = [sep.join(val) for val in meta]
def _recursive_extract(data, path, seen_meta, level=0):
+ if isinstance(data, dict):
+ data = [data]
if len(path) > 1:
for obj in data:
for val, key in zip(meta, meta_keys):
| json_normalize raises TypeError exception
#### Code Sample, a copy-pastable example if possible
```python
from pandas.io.json import json_normalize
d = {
'name': 'alan smith',
'info': {
'phones': [{
'area': 111,
'number': 2222
}, {
'area': 333,
'number': 4444
}]
}
}
json_normalize(d, record_path=["info", "phones"])
```
#### Problem description
The above code throws a `TypeError` exception:
```
Traceback (most recent call last):
File ".\test.py", line 15, in <module>
json_normalize(d, record_path = ["info", "phones"])
File "C:\Python36\lib\site-packages\pandas\io\json\normalize.py", line 262, in json_normalize
_recursive_extract(data, record_path, {}, level=0)
File "C:\Python36\lib\site-packages\pandas\io\json\normalize.py", line 235, in _recursive_extract
seen_meta, level=level + 1)
File "C:\Python36\lib\site-packages\pandas\io\json\normalize.py", line 238, in _recursive_extract
recs = _pull_field(obj, path[0])
File "C:\Python36\lib\site-packages\pandas\io\json\normalize.py", line 185, in _pull_field
result = result[spec]
TypeError: string indices must be integers
```
#### Expected Output
| |area|number|
|-|-|-|
|0|111|2222|
|1|333|4444|
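A workaround that appears to sidestep the failing path is to start from the inner dict, so `record_path` no longer traverses two dict levels (a sketch against the `d` defined above):
```python
# Top-level dicts are wrapped in a list by json_normalize, so this path works.
json_normalize(d['info'], record_path='phones')
```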
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.4.final.0
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 62 Stepping 4, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
pandas: 0.23.4
pytest: 3.6.2
pip: 18.0
setuptools: 40.2.0
Cython: None
numpy: 1.14.5
scipy: None
pyarrow: None
xarray: None
IPython: 6.3.1
sphinx: 1.5.5
patsy: None
dateutil: 2.7.3
pytz: 2018.5
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: 4.6.0
html5lib: 1.0.1
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| Thanks for the report - investigation and PRs are always welcome!
If `record_path` points to a nested dict of dicts, after one `_recursive_extract`, `data` is the inner dict (`{'phones': ...}` in the example)
When `data` is a dict, the for loop [here](https://github.com/pandas-dev/pandas/blob/1c500fb7b3fa08c163e13375d01b9607fcdac0d6/pandas/io/json/normalize.py#L237) only iterates over the keys.
Do we assume that `data` is always a list? If that is the case, there are two options:
1. Turn data into a list if it is a dict (similar to line 194).
2. Hoist the for loop into a method. If data is not a list call this method instead of iterating over the elements.
I prefer (2). Let me know what you think. I can create a PR.
@WillAyd : What do you think of the proposed fix? I'll create a PR if you think it's the right thing to do.
> Do we assume that data is always a list? If that is the case, there are two options:
The docstring claims that either a dict or list of dicts is allowed. The only example with a dict doesn't really do any normalization though:
```
>>> data = {'A': [1, 2]}
>>> json_normalize(data, 'A', record_prefix='Prefix.')
Prefix.0
0 1
1 2
```
I'm inclined to do whatever is easiest to maintain in the long-run, though it's not clear what that is in this case.
I don't think we should assume that it is always a list. In my mind the behavior for `record_path` should mirror whatever happens at the top level, just resolved at the specified `record_path`. These calls return equivalent results:
```python
In [6]: json_normalize({'foo': 1, 'bar': 2, 'baz': 3})
Out[6]:
bar baz foo
0 2 3 1
In [7]: json_normalize([{'foo': 1, 'bar': 2, 'baz': 3}])
Out[7]:
bar baz foo
0 2 3 1
```
So I would assume the following to also be equivalent (though currently failing)
```python
>>> json_normalize({'info': {'phones': {'foo': 1, 'bar': 2, 'baz': 3}}}, record_path=['info', 'phones'])
>>> json_normalize({'info': {'phones': [{'foo': 1, 'bar': 2, 'baz': 3}]}}, record_path=['info', 'phones'])
```
To be clear, I asked about `data` in [`_recursive_extract`](https://github.com/pandas-dev/pandas/blob/1c500fb7b3fa08c163e13375d01b9607fcdac0d6/pandas/io/json/normalize.py#L227) (not the parameter `data` in `json_normalize`).
I agree with @WillAyd that the list assumption inside `_recursive_extract` is wrong. Inside this function `data` can be anything (list, dict, value). That's why my proposed fix has a check to deal with non-list types. The proposed fix is as follows:
```python
def _extract(data, path, seen_meta, level):
for obj in data: # the body of else clause at L237
...
def _recursive_extract(data, path, seen_meta, level=0):
if len(path) > 1:
# unchanged
else:
if isinstance(data, list):
for obj in data: # similar to the current version
_extract(obj, path, seen_meta, level)
else:
_extract(data, path, seen_meta, level) # this is new to deal with non-list data
```
Note that the current version is
```python
def _recursive_extract(data, path, seen_meta, level=0):
if len(path) > 1:
# unchanged
else:
for obj in data:
_extract(obj, path, seen_meta, level)
```
which raises an exception when `data` is not a list.
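For reference, the merged patch at the top of this record takes the simpler route of option (1): coerce a bare dict to a one-element list at the start of `_recursive_extract`:
```python
def _recursive_extract(data, path, seen_meta, level=0):
    # The fix from the diff above: wrap a bare dict before iterating.
    if isinstance(data, dict):
        data = [data]
    ...  # rest of the function unchanged
```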
@vuminhle feel free to submit a PR for code review | 2018-09-22T02:59:21Z | [] | [] |
Traceback (most recent call last):
File ".\test.py", line 15, in <module>
json_normalize(d, record_path = ["info", "phones"])
File "C:\Python36\lib\site-packages\pandas\io\json\normalize.py", line 262, in json_normalize
_recursive_extract(data, record_path, {}, level=0)
File "C:\Python36\lib\site-packages\pandas\io\json\normalize.py", line 235, in _recursive_extract
seen_meta, level=level + 1)
File "C:\Python36\lib\site-packages\pandas\io\json\normalize.py", line 238, in _recursive_extract
recs = _pull_field(obj, path[0])
File "C:\Python36\lib\site-packages\pandas\io\json\normalize.py", line 185, in _pull_field
result = result[spec]
TypeError: string indices must be integers
| 12,174 |
|||
pandas-dev/pandas | pandas-dev__pandas-22825 | 2f1b842119bc4d5242b587b62bde71d8f7ef19f8 | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -812,6 +812,7 @@ Reshaping
- Bug in :meth:`Series.replace` and meth:`DataFrame.replace` when dict is used as the ``to_replace`` value and one key in the dict is is another key's value, the results were inconsistent between using integer key and using string key (:issue:`20656`)
- Bug in :meth:`DataFrame.drop_duplicates` for empty ``DataFrame`` which incorrectly raises an error (:issue:`20516`)
- Bug in :func:`pandas.wide_to_long` when a string is passed to the stubnames argument and a column name is a substring of that stubname (:issue:`22468`)
+- Bug in :func:`merge` when merging ``datetime64[ns, tz]`` data that contained a DST transition (:issue:`18885`)
Build Changes
^^^^^^^^^^^^^
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -277,7 +277,7 @@ def _evaluate_compare(self, other, op):
except TypeError:
return result
- def _ensure_localized(self, result, ambiguous='raise'):
+ def _ensure_localized(self, arg, ambiguous='raise', from_utc=False):
"""
ensure that we are re-localized
@@ -286,9 +286,11 @@ def _ensure_localized(self, result, ambiguous='raise'):
Parameters
----------
- result : DatetimeIndex / i8 ndarray
- ambiguous : str, bool, or bool-ndarray
- default 'raise'
+ arg : DatetimeIndex / i8 ndarray
+ ambiguous : str, bool, or bool-ndarray, default 'raise'
+ from_utc : bool, default False
+ If True, localize the i8 ndarray to UTC first before converting to
+ the appropriate tz. If False, localize directly to the tz.
Returns
-------
@@ -297,10 +299,13 @@ def _ensure_localized(self, result, ambiguous='raise'):
# reconvert to local tz
if getattr(self, 'tz', None) is not None:
- if not isinstance(result, ABCIndexClass):
- result = self._simple_new(result)
- result = result.tz_localize(self.tz, ambiguous=ambiguous)
- return result
+ if not isinstance(arg, ABCIndexClass):
+ arg = self._simple_new(arg)
+ if from_utc:
+ arg = arg.tz_localize('UTC').tz_convert(self.tz)
+ else:
+ arg = arg.tz_localize(self.tz, ambiguous=ambiguous)
+ return arg
def _box_values_as_index(self):
"""
@@ -622,11 +627,11 @@ def repeat(self, repeats, *args, **kwargs):
@Appender(_index_shared_docs['where'] % _index_doc_kwargs)
def where(self, cond, other=None):
- other = _ensure_datetimelike_to_i8(other)
- values = _ensure_datetimelike_to_i8(self)
+ other = _ensure_datetimelike_to_i8(other, to_utc=True)
+ values = _ensure_datetimelike_to_i8(self, to_utc=True)
result = np.where(cond, values, other).astype('i8')
- result = self._ensure_localized(result)
+ result = self._ensure_localized(result, from_utc=True)
return self._shallow_copy(result,
**self._get_attributes_dict())
@@ -695,23 +700,37 @@ def astype(self, dtype, copy=True):
return super(DatetimeIndexOpsMixin, self).astype(dtype, copy=copy)
-def _ensure_datetimelike_to_i8(other):
- """ helper for coercing an input scalar or array to i8 """
+def _ensure_datetimelike_to_i8(other, to_utc=False):
+ """
+ helper for coercing an input scalar or array to i8
+
+ Parameters
+ ----------
+ other : 1d array
+ to_utc : bool, default False
+ If True, convert the values to UTC before extracting the i8 values
+ If False, extract the i8 values directly.
+
+ Returns
+ -------
+ i8 1d array
+ """
if is_scalar(other) and isna(other):
- other = iNaT
+ return iNaT
elif isinstance(other, ABCIndexClass):
# convert tz if needed
if getattr(other, 'tz', None) is not None:
- other = other.tz_localize(None).asi8
- else:
- other = other.asi8
+ if to_utc:
+ other = other.tz_convert('UTC')
+ else:
+ other = other.tz_localize(None)
else:
try:
- other = np.array(other, copy=False).view('i8')
+ return np.array(other, copy=False).view('i8')
except TypeError:
# period array cannot be coerces to int
- other = Index(other).asi8
- return other
+ other = Index(other)
+ return other.asi8
def wrap_arithmetic_op(self, other, result):
| AmbiguousTimeError merging two timezone-aware DataFrames with DST change
When merging two DataFrames by a timezone-aware datetime column, if the datetime values don't include a DST change, there's no problem:
```python
df1 = pd.DataFrame([pd.to_datetime('2017-10-30 02:00:00+01:00'),
pd.to_datetime('2017-10-30 03:00:00+01:00'),
pd.to_datetime('2017-10-30 04:00:00+01:00')],columns=['date'])
df1['date'] = df1['date'].dt.tz_localize('UTC').dt.tz_convert('Europe/Madrid')
df1['value'] = 1
df2 = pd.DataFrame([pd.to_datetime('2017-10-30 04:00:00+01:00'),
pd.to_datetime('2017-10-30 05:00:00+01:00'),
pd.to_datetime('2017-10-30 06:00:00+01:00')],columns=['date'])
df2['date'] = df2['date'].dt.tz_localize('UTC').dt.tz_convert('Europe/Madrid')
df2['value'] = 2
pd.merge(df1, df2, how='outer', on='date')
```
### Output
```
date value_x value_y
0 2017-10-30 02:00:00+01:00 1.0 NaN
1 2017-10-30 03:00:00+01:00 1.0 NaN
2 2017-10-30 04:00:00+01:00 1.0 2.0
3 2017-10-30 05:00:00+01:00 NaN 2.0
4 2017-10-30 06:00:00+01:00 NaN 2.0
```
This is correct. But if the datetime values include a DST change, we get an AmbiguousTimeError exception:
```python
df1 = pd.DataFrame([pd.to_datetime('2017-10-29 02:00:00+02:00'),
pd.to_datetime('2017-10-29 02:00:00+01:00'),
pd.to_datetime('2017-10-29 03:00:00+01:00')],columns=['date'])
df1['date'] = df1['date'].dt.tz_localize('UTC').dt.tz_convert('Europe/Madrid')
df1['value'] = 1
df2 = pd.DataFrame([pd.to_datetime('2017-10-29 03:00:00+01:00'),
pd.to_datetime('2017-10-29 04:00:00+01:00'),
pd.to_datetime('2017-10-29 05:00:00+01:00')],columns=['date'])
df2['date'] = df2['date'].dt.tz_localize('UTC').dt.tz_convert('Europe/Madrid')
df2['value'] = 2
pd.merge(df1, df2, how='outer', on='date')
```
### Expected output
```
date value_x value_y
0 2017-10-29 02:00:00+02:00 1.0 NaN
1 2017-10-29 02:00:00+01:00 1.0 NaN
2 2017-10-29 03:00:00+01:00 1.0 2.0
3 2017-10-29 04:00:00+01:00 NaN 2.0
4 2017-10-29 05:00:00+01:00 NaN 2.0
```
### Actual output
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "(...)/venv/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 58, in merge
return op.get_result()
File "(...)//venv/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 604, in get_result
self._maybe_add_join_keys(result, left_indexer, right_indexer)
File "(...)//venv/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 715, in _maybe_add_join_keys
key_col = Index(lvals).where(~mask, rvals)
File "(...)//venv/lib/python3.6/site-packages/pandas/core/indexes/datetimelike.py", line 809, in where
result = self._ensure_localized(result)
File "(...)//venv/lib/python3.6/site-packages/pandas/core/indexes/datetimelike.py", line 230, in _ensure_localized
result = result.tz_localize(self.tz)
File "(...)//venv/lib/python3.6/site-packages/pandas/util/_decorators.py", line 118, in wrapper
return func(*args, **kwargs)
File "(...)//venv/lib/python3.6/site-packages/pandas/core/indexes/datetimes.py", line 1858, in tz_localize
errors=errors)
File "pandas/_libs/tslib.pyx", line 3593, in pandas._libs.tslib.tz_localize_to_utc
pytz.exceptions.AmbiguousTimeError: Cannot infer dst time from Timestamp('2017-10-29 02:00:00'), try using the 'ambiguous' argument
```
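Until the fix, a workaround that avoids the ambiguous re-localization is to merge in UTC and convert back afterwards (a sketch against the frames built above):
```python
# Merge on unambiguous UTC timestamps, then restore the local timezone.
df1u = df1.assign(date=df1['date'].dt.tz_convert('UTC'))
df2u = df2.assign(date=df2['date'].dt.tz_convert('UTC'))
out = pd.merge(df1u, df2u, how='outer', on='date')
out['date'] = out['date'].dt.tz_convert('Europe/Madrid')
```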
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.2.final.0
python-bits: 64
OS: Darwin
OS-release: 17.3.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: es_ES.UTF-8
LOCALE: es_ES.UTF-8
pandas: 0.21.1
pytest: 3.2.5
pip: 9.0.1
setuptools: 36.8.0
Cython: None
numpy: 1.13.3
scipy: None
pyarrow: None
xarray: None
IPython: None
sphinx: 1.5.3
patsy: None
dateutil: 2.6.1
pytz: 2017.3
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: 0.9.6
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| yeah, we are joining indices and do a ``.where()`` on them. we drop the tz, do the op in i8, then localize to the original zone.
what we need is an attribute for ``Timestamp`` and ``DatetimeIndex`` like ``is_ambiguous``, then we could record the ambiguous times so we can recreate them properly.
interested in a PR?
cc @jbrockmendel
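The patch above ultimately takes a different route than an `is_ambiguous` attribute: it keeps the i8 result in UTC and converts, since conversion from UTC is never ambiguous. A minimal sketch of the difference:
```python
import pandas as pd

# Localizing a naive wall time that occurs twice raises:
# pd.DatetimeIndex(['2017-10-29 02:00:00']).tz_localize('Europe/Madrid')
# -> AmbiguousTimeError

# Converting the same instant from UTC is unambiguous:
utc = pd.DatetimeIndex(['2017-10-29 01:00:00']).tz_localize('UTC')
print(utc.tz_convert('Europe/Madrid'))  # 2017-10-29 02:00:00+01:00
```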
I'd love to, but I don't know the pandas/numpy internals, and `merge()` doesn't sound like an easy place to start :-) Maybe with some guidance... | 2018-09-24T23:44:10Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "(...)/venv/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 58, in merge
return op.get_result()
File "(...)//venv/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 604, in get_result
self._maybe_add_join_keys(result, left_indexer, right_indexer)
File "(...)//venv/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 715, in _maybe_add_join_keys
key_col = Index(lvals).where(~mask, rvals)
File "(...)//venv/lib/python3.6/site-packages/pandas/core/indexes/datetimelike.py", line 809, in where
result = self._ensure_localized(result)
File "(...)//venv/lib/python3.6/site-packages/pandas/core/indexes/datetimelike.py", line 230, in _ensure_localized
result = result.tz_localize(self.tz)
File "(...)//venv/lib/python3.6/site-packages/pandas/util/_decorators.py", line 118, in wrapper
return func(*args, **kwargs)
File "(...)//venv/lib/python3.6/site-packages/pandas/core/indexes/datetimes.py", line 1858, in tz_localize
errors=errors)
File "pandas/_libs/tslib.pyx", line 3593, in pandas._libs.tslib.tz_localize_to_utc
pytz.exceptions.AmbiguousTimeError: Cannot infer dst time from Timestamp('2017-10-29 02:00:00'), try using the 'ambiguous' argument
| 12,180 |
|||
pandas-dev/pandas | pandas-dev__pandas-22880 | e4b67ca725db373afe8f4565672eb16e1e8e3b31 | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -510,6 +510,88 @@ Previous Behavior:
0
0 NaT
+.. _whatsnew_0240.api.dataframe_cmp_broadcasting:
+
+DataFrame Comparison Operations Broadcasting Changes
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Previously, the broadcasting behavior of :class:`DataFrame` comparison
+operations (``==``, ``!=``, ...) was inconsistent with the behavior of
+arithmetic operations (``+``, ``-``, ...). The behavior of the comparison
+operations has been changed to match the arithmetic operations in these cases.
+(:issue:`22880`)
+
+The affected cases are:
+
+- operating against a 2-dimensional ``np.ndarray`` with either 1 row or 1 column will now broadcast the same way a ``np.ndarray`` would (:issue:`23000`).
+- a list or tuple with length matching the number of rows in the :class:`DataFrame` will now raise ``ValueError`` instead of operating column-by-column (:issue:`22880`.
+- a list or tuple with length matching the number of columns in the :class:`DataFrame` will now operate row-by-row instead of raising ``ValueError`` (:issue:`22880`).
+
+Previous Behavior:
+
+.. code-block:: ipython
+
+ In [3]: arr = np.arange(6).reshape(3, 2)
+ In [4]: df = pd.DataFrame(arr)
+
+ In [5]: df == arr[[0], :]
+ ...: # comparison previously broadcast where arithmetic would raise
+ Out[5]:
+ 0 1
+ 0 True True
+ 1 False False
+ 2 False False
+ In [6]: df + arr[[0], :]
+ ...
+ ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)
+
+ In [7]: df == (1, 2)
+ ...: # length matches number of columns;
+ ...: # comparison previously raised where arithmetic would broadcast
+ ...
+ ValueError: Invalid broadcasting comparison [(1, 2)] with block values
+ In [8]: df + (1, 2)
+ Out[8]:
+ 0 1
+ 0 1 3
+ 1 3 5
+ 2 5 7
+
+ In [9]: df == (1, 2, 3)
+ ...: # length matches number of rows
+ ...: # comparison previously broadcast where arithmetic would raise
+ Out[9]:
+ 0 1
+ 0 False True
+ 1 True False
+ 2 False False
+ In [10]: df + (1, 2, 3)
+ ...
+ ValueError: Unable to coerce to Series, length must be 2: given 3
+
+*Current Behavior*:
+
+.. ipython:: python
+ :okexcept:
+
+ arr = np.arange(6).reshape(3, 2)
+ df = pd.DataFrame(arr)
+
+.. ipython:: python
+ # Comparison operations and arithmetic operations both broadcast.
+ df == arr[[0], :]
+ df + arr[[0], :]
+
+.. ipython:: python
+ # Comparison operations and arithmetic operations both broadcast.
+ df == (1, 2)
+ df + (1, 2)
+
+.. ipython:: python
+ :okexcept:
+ # Comparison operations and arithmetic opeartions both raise ValueError.
+ df == (1, 2, 3)
+ df + (1, 2, 3)
+
.. _whatsnew_0240.api.dataframe_arithmetic_broadcasting:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4948,13 +4948,8 @@ def _combine_match_columns(self, other, func, level=None, try_cast=True):
return ops.dispatch_to_series(left, right, func, axis="columns")
def _combine_const(self, other, func, errors='raise', try_cast=True):
- if lib.is_scalar(other) or np.ndim(other) == 0:
- return ops.dispatch_to_series(self, other, func)
-
- new_data = self._data.eval(func=func, other=other,
- errors=errors,
- try_cast=try_cast)
- return self._constructor(new_data)
+ assert lib.is_scalar(other) or np.ndim(other) == 0
+ return ops.dispatch_to_series(self, other, func)
def combine(self, other, func, fill_value=None, overwrite=True):
"""
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1313,145 +1313,6 @@ def shift(self, periods, axis=0, mgr=None):
return [self.make_block(new_values)]
- def eval(self, func, other, errors='raise', try_cast=False, mgr=None):
- """
- evaluate the block; return result block from the result
-
- Parameters
- ----------
- func : how to combine self, other
- other : a ndarray/object
- errors : str, {'raise', 'ignore'}, default 'raise'
- - ``raise`` : allow exceptions to be raised
- - ``ignore`` : suppress exceptions. On error return original object
-
- try_cast : try casting the results to the input type
-
- Returns
- -------
- a new block, the result of the func
- """
- orig_other = other
- values = self.values
-
- other = getattr(other, 'values', other)
-
- # make sure that we can broadcast
- is_transposed = False
- if hasattr(other, 'ndim') and hasattr(values, 'ndim'):
- if values.ndim != other.ndim:
- is_transposed = True
- else:
- if values.shape == other.shape[::-1]:
- is_transposed = True
- elif values.shape[0] == other.shape[-1]:
- is_transposed = True
- else:
- # this is a broadcast error heree
- raise ValueError(
- "cannot broadcast shape [{t_shape}] with "
- "block values [{oth_shape}]".format(
- t_shape=values.T.shape, oth_shape=other.shape))
-
- transf = (lambda x: x.T) if is_transposed else (lambda x: x)
-
- # coerce/transpose the args if needed
- try:
- values, values_mask, other, other_mask = self._try_coerce_args(
- transf(values), other)
- except TypeError:
- block = self.coerce_to_target_dtype(orig_other)
- return block.eval(func, orig_other,
- errors=errors,
- try_cast=try_cast, mgr=mgr)
-
- # get the result, may need to transpose the other
- def get_result(other):
-
- # avoid numpy warning of comparisons again None
- if other is None:
- result = not func.__name__ == 'eq'
-
- # avoid numpy warning of elementwise comparisons to object
- elif is_numeric_v_string_like(values, other):
- result = False
-
- # avoid numpy warning of elementwise comparisons
- elif func.__name__ == 'eq':
- if is_list_like(other) and not isinstance(other, np.ndarray):
- other = np.asarray(other)
-
- # if we can broadcast, then ok
- if values.shape[-1] != other.shape[-1]:
- return False
- result = func(values, other)
- else:
- result = func(values, other)
-
- # mask if needed
- if isinstance(values_mask, np.ndarray) and values_mask.any():
- result = result.astype('float64', copy=False)
- result[values_mask] = np.nan
- if other_mask is True:
- result = result.astype('float64', copy=False)
- result[:] = np.nan
- elif isinstance(other_mask, np.ndarray) and other_mask.any():
- result = result.astype('float64', copy=False)
- result[other_mask.ravel()] = np.nan
-
- return result
-
- # error handler if we have an issue operating with the function
- def handle_error():
-
- if errors == 'raise':
- # The 'detail' variable is defined in outer scope.
- raise TypeError(
- 'Could not operate {other!r} with block values '
- '{detail!s}'.format(other=other, detail=detail)) # noqa
- else:
- # return the values
- result = np.empty(values.shape, dtype='O')
- result.fill(np.nan)
- return result
-
- # get the result
- try:
- with np.errstate(all='ignore'):
- result = get_result(other)
-
- # if we have an invalid shape/broadcast error
- # GH4576, so raise instead of allowing to pass through
- except ValueError as detail:
- raise
- except Exception as detail:
- result = handle_error()
-
- # technically a broadcast error in numpy can 'work' by returning a
- # boolean False
- if not isinstance(result, np.ndarray):
- if not isinstance(result, np.ndarray):
-
- # differentiate between an invalid ndarray-ndarray comparison
- # and an invalid type comparison
- if isinstance(values, np.ndarray) and is_list_like(other):
- raise ValueError(
- 'Invalid broadcasting comparison [{other!r}] with '
- 'block values'.format(other=other))
-
- raise TypeError('Could not compare [{other!r}] '
- 'with block values'.format(other=other))
-
- # transpose if needed
- result = transf(result)
-
- # try to cast if requested
- if try_cast:
- result = self._try_cast_result(result)
-
- result = _block_shape(result, ndim=self.ndim)
- return [self.make_block(result)]
-
def where(self, other, cond, align=True, errors='raise',
try_cast=False, axis=0, transpose=False, mgr=None):
"""
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -373,9 +373,6 @@ def apply(self, f, axes=None, filter=None, do_integrity_check=False,
align_keys = ['new', 'mask']
else:
align_keys = ['mask']
- elif f == 'eval':
- align_copy = False
- align_keys = ['other']
elif f == 'fillna':
# fillna internally does putmask, maybe it's better to do this
# at mgr, not block level?
@@ -511,9 +508,6 @@ def isna(self, func, **kwargs):
def where(self, **kwargs):
return self.apply('where', **kwargs)
- def eval(self, **kwargs):
- return self.apply('eval', **kwargs)
-
def quantile(self, **kwargs):
return self.reduction('quantile', **kwargs)
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -1934,6 +1934,9 @@ def _comp_method_FRAME(cls, func, special):
@Appender('Wrapper for comparison method {name}'.format(name=op_name))
def f(self, other):
+
+ other = _align_method_FRAME(self, other, axis=None)
+
if isinstance(other, ABCDataFrame):
# Another DataFrame
if not self._indexed_same(other):
| Mixed-dtype dataframe comparison with array raises incorrectly
This came up while going through some of the statsmodels tests:
```
arr = np.random.randn(3, 2)
arr[:, 0] = [1, 2, 3]
df = pd.DataFrame(arr)
df[0] = df[0].astype(int)
>>> df == arr
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/core/ops.py", line 1572, in f
try_cast=False)
File "pandas/core/frame.py", line 4021, in _combine_const
try_cast=try_cast)
File "pandas/core/internals.py", line 3644, in eval
return self.apply('eval', **kwargs)
File "pandas/core/internals.py", line 3538, in apply
applied = getattr(b, f)(**kwargs)
File "pandas/core/internals.py", line 1348, in eval
t_shape=values.T.shape, oth_shape=other.shape))
ValueError: cannot broadcast shape [(3, 1)] with block values [(3, 2)]
```
I'd expect this to wrap the ndarray in a frame and return an all-True frame.
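Under the whatsnew entries in the patch above, comparisons now broadcast the way arithmetic does; a sketch of the intended behavior:
```python
import numpy as np
import pandas as pd

arr = np.arange(6).reshape(3, 2)
df = pd.DataFrame(arr)

df == arr[[0], :]  # broadcasts along rows, matching df + arr[[0], :]
df == (1, 2)       # length matches the columns: compared row-by-row
# df == (1, 2, 3)  # length matches the rows: now raises ValueError, like df + (1, 2, 3)
```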
| 2018-09-28T18:53:16Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/core/ops.py", line 1572, in f
try_cast=False)
File "pandas/core/frame.py", line 4021, in _combine_const
try_cast=try_cast)
File "pandas/core/internals.py", line 3644, in eval
return self.apply('eval', **kwargs)
File "pandas/core/internals.py", line 3538, in apply
applied = getattr(b, f)(**kwargs)
File "pandas/core/internals.py", line 1348, in eval
t_shape=values.T.shape, oth_shape=other.shape))
ValueError: cannot broadcast shape [(3, 1)] with block values [(3, 2)]
| 12,189 |
||||
pandas-dev/pandas | pandas-dev__pandas-23132 | 5e06c84c8994b625407293ff6c80b8d9ddaaca5d | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -567,6 +567,88 @@ Previous Behavior:
0
0 NaT
+.. _whatsnew_0240.api.dataframe_cmp_broadcasting:
+
+DataFrame Comparison Operations Broadcasting Changes
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Previously, the broadcasting behavior of :class:`DataFrame` comparison
+operations (``==``, ``!=``, ...) was inconsistent with the behavior of
+arithmetic operations (``+``, ``-``, ...). The behavior of the comparison
+operations has been changed to match the arithmetic operations in these cases.
+(:issue:`22880`)
+
+The affected cases are:
+
+- operating against a 2-dimensional ``np.ndarray`` with either 1 row or 1 column will now broadcast the same way a ``np.ndarray`` would (:issue:`23000`).
+- a list or tuple with length matching the number of rows in the :class:`DataFrame` will now raise ``ValueError`` instead of operating column-by-column (:issue:`22880`.
+- a list or tuple with length matching the number of columns in the :class:`DataFrame` will now operate row-by-row instead of raising ``ValueError`` (:issue:`22880`).
+
+Previous Behavior:
+
+.. code-block:: ipython
+
+ In [3]: arr = np.arange(6).reshape(3, 2)
+ In [4]: df = pd.DataFrame(arr)
+
+ In [5]: df == arr[[0], :]
+ ...: # comparison previously broadcast where arithmetic would raise
+ Out[5]:
+ 0 1
+ 0 True True
+ 1 False False
+ 2 False False
+ In [6]: df + arr[[0], :]
+ ...
+ ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)
+
+ In [7]: df == (1, 2)
+ ...: # length matches number of columns;
+ ...: # comparison previously raised where arithmetic would broadcast
+ ...
+ ValueError: Invalid broadcasting comparison [(1, 2)] with block values
+ In [8]: df + (1, 2)
+ Out[8]:
+ 0 1
+ 0 1 3
+ 1 3 5
+ 2 5 7
+
+ In [9]: df == (1, 2, 3)
+ ...: # length matches number of rows
+ ...: # comparison previously broadcast where arithmetic would raise
+ Out[9]:
+ 0 1
+ 0 False True
+ 1 True False
+ 2 False False
+ In [10]: df + (1, 2, 3)
+ ...
+ ValueError: Unable to coerce to Series, length must be 2: given 3
+
+*Current Behavior*:
+
+.. ipython:: python
+ :okexcept:
+
+ arr = np.arange(6).reshape(3, 2)
+ df = pd.DataFrame(arr)
+
+.. ipython:: python
+ # Comparison operations and arithmetic operations both broadcast.
+ df == arr[[0], :]
+ df + arr[[0], :]
+
+.. ipython:: python
+ # Comparison operations and arithmetic operations both broadcast.
+ df == (1, 2)
+ df + (1, 2)
+
+.. ipython:: python
+ :okexcept:
+ # Comparison operations and arithmetic opeartions both raise ValueError.
+ df == (1, 2, 3)
+ df + (1, 2, 3)
+
.. _whatsnew_0240.api.dataframe_arithmetic_broadcasting:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4950,13 +4950,8 @@ def _combine_match_columns(self, other, func, level=None, try_cast=True):
return ops.dispatch_to_series(left, right, func, axis="columns")
def _combine_const(self, other, func, errors='raise', try_cast=True):
- if lib.is_scalar(other) or np.ndim(other) == 0:
- return ops.dispatch_to_series(self, other, func)
-
- new_data = self._data.eval(func=func, other=other,
- errors=errors,
- try_cast=try_cast)
- return self._constructor(new_data)
+ assert lib.is_scalar(other) or np.ndim(other) == 0
+ return ops.dispatch_to_series(self, other, func)
def combine(self, other, func, fill_value=None, overwrite=True):
"""
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1318,145 +1318,6 @@ def shift(self, periods, axis=0, mgr=None):
return [self.make_block(new_values)]
- def eval(self, func, other, errors='raise', try_cast=False, mgr=None):
- """
- evaluate the block; return result block from the result
-
- Parameters
- ----------
- func : how to combine self, other
- other : a ndarray/object
- errors : str, {'raise', 'ignore'}, default 'raise'
- - ``raise`` : allow exceptions to be raised
- - ``ignore`` : suppress exceptions. On error return original object
-
- try_cast : try casting the results to the input type
-
- Returns
- -------
- a new block, the result of the func
- """
- orig_other = other
- values = self.values
-
- other = getattr(other, 'values', other)
-
- # make sure that we can broadcast
- is_transposed = False
- if hasattr(other, 'ndim') and hasattr(values, 'ndim'):
- if values.ndim != other.ndim:
- is_transposed = True
- else:
- if values.shape == other.shape[::-1]:
- is_transposed = True
- elif values.shape[0] == other.shape[-1]:
- is_transposed = True
- else:
- # this is a broadcast error heree
- raise ValueError(
- "cannot broadcast shape [{t_shape}] with "
- "block values [{oth_shape}]".format(
- t_shape=values.T.shape, oth_shape=other.shape))
-
- transf = (lambda x: x.T) if is_transposed else (lambda x: x)
-
- # coerce/transpose the args if needed
- try:
- values, values_mask, other, other_mask = self._try_coerce_args(
- transf(values), other)
- except TypeError:
- block = self.coerce_to_target_dtype(orig_other)
- return block.eval(func, orig_other,
- errors=errors,
- try_cast=try_cast, mgr=mgr)
-
- # get the result, may need to transpose the other
- def get_result(other):
-
- # avoid numpy warning of comparisons again None
- if other is None:
- result = not func.__name__ == 'eq'
-
- # avoid numpy warning of elementwise comparisons to object
- elif is_numeric_v_string_like(values, other):
- result = False
-
- # avoid numpy warning of elementwise comparisons
- elif func.__name__ == 'eq':
- if is_list_like(other) and not isinstance(other, np.ndarray):
- other = np.asarray(other)
-
- # if we can broadcast, then ok
- if values.shape[-1] != other.shape[-1]:
- return False
- result = func(values, other)
- else:
- result = func(values, other)
-
- # mask if needed
- if isinstance(values_mask, np.ndarray) and values_mask.any():
- result = result.astype('float64', copy=False)
- result[values_mask] = np.nan
- if other_mask is True:
- result = result.astype('float64', copy=False)
- result[:] = np.nan
- elif isinstance(other_mask, np.ndarray) and other_mask.any():
- result = result.astype('float64', copy=False)
- result[other_mask.ravel()] = np.nan
-
- return result
-
- # error handler if we have an issue operating with the function
- def handle_error():
-
- if errors == 'raise':
- # The 'detail' variable is defined in outer scope.
- raise TypeError(
- 'Could not operate {other!r} with block values '
- '{detail!s}'.format(other=other, detail=detail)) # noqa
- else:
- # return the values
- result = np.empty(values.shape, dtype='O')
- result.fill(np.nan)
- return result
-
- # get the result
- try:
- with np.errstate(all='ignore'):
- result = get_result(other)
-
- # if we have an invalid shape/broadcast error
- # GH4576, so raise instead of allowing to pass through
- except ValueError as detail:
- raise
- except Exception as detail:
- result = handle_error()
-
- # technically a broadcast error in numpy can 'work' by returning a
- # boolean False
- if not isinstance(result, np.ndarray):
- if not isinstance(result, np.ndarray):
-
- # differentiate between an invalid ndarray-ndarray comparison
- # and an invalid type comparison
- if isinstance(values, np.ndarray) and is_list_like(other):
- raise ValueError(
- 'Invalid broadcasting comparison [{other!r}] with '
- 'block values'.format(other=other))
-
- raise TypeError('Could not compare [{other!r}] '
- 'with block values'.format(other=other))
-
- # transpose if needed
- result = transf(result)
-
- # try to cast if requested
- if try_cast:
- result = self._try_cast_result(result)
-
- result = _block_shape(result, ndim=self.ndim)
- return [self.make_block(result)]
-
def where(self, other, cond, align=True, errors='raise',
try_cast=False, axis=0, transpose=False, mgr=None):
"""
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -373,9 +373,6 @@ def apply(self, f, axes=None, filter=None, do_integrity_check=False,
align_keys = ['new', 'mask']
else:
align_keys = ['mask']
- elif f == 'eval':
- align_copy = False
- align_keys = ['other']
elif f == 'fillna':
# fillna internally does putmask, maybe it's better to do this
# at mgr, not block level?
@@ -511,9 +508,6 @@ def isna(self, func, **kwargs):
def where(self, **kwargs):
return self.apply('where', **kwargs)
- def eval(self, **kwargs):
- return self.apply('eval', **kwargs)
-
def quantile(self, **kwargs):
return self.reduction('quantile', **kwargs)
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -1929,6 +1929,9 @@ def _comp_method_FRAME(cls, func, special):
@Appender('Wrapper for comparison method {name}'.format(name=op_name))
def f(self, other):
+
+ other = _align_method_FRAME(self, other, axis=None)
+
if isinstance(other, ABCDataFrame):
# Another DataFrame
if not self._indexed_same(other):
| Mixed-dtype dataframe comparison with array raises incorrectly
This came up while going through some of the statsmodels tests:
```
arr = np.random.randn(3, 2)
arr[:, 0] = [1, 2, 3]
df = pd.DataFrame(arr)
df[0] = df[0].astype(int)
>>> df == arr
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/core/ops.py", line 1572, in f
try_cast=False)
File "pandas/core/frame.py", line 4021, in _combine_const
try_cast=try_cast)
File "pandas/core/internals.py", line 3644, in eval
return self.apply('eval', **kwargs)
File "pandas/core/internals.py", line 3538, in apply
applied = getattr(b, f)(**kwargs)
File "pandas/core/internals.py", line 1348, in eval
t_shape=values.T.shape, oth_shape=other.shape))
ValueError: cannot broadcast shape [(3, 1)] with block values [(3, 2)]
```
I'd expect this to wrap the ndarray in a frame and return an all-True frame.
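
A minimal sketch of that expectation (an illustration, not the merged fix): wrapping the array in a `DataFrame` first takes the frame-with-frame comparison path, which already works, and yields the all-True frame, since `1 == 1.0` elementwise.

```python
import numpy as np
import pandas as pd

arr = np.random.randn(3, 2)
arr[:, 0] = [1, 2, 3]
df = pd.DataFrame(arr)
df[0] = df[0].astype(int)

# Wrapping the ndarray avoids the failing block-wise path; the
# comparison is then done frame-vs-frame, column by column.
result = df == pd.DataFrame(arr, index=df.index, columns=df.columns)
print(result.all().all())  # True
```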
| 2018-10-13T16:25:16Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/core/ops.py", line 1572, in f
try_cast=False)
File "pandas/core/frame.py", line 4021, in _combine_const
try_cast=try_cast)
File "pandas/core/internals.py", line 3644, in eval
return self.apply('eval', **kwargs)
File "pandas/core/internals.py", line 3538, in apply
applied = getattr(b, f)(**kwargs)
File "pandas/core/internals.py", line 1348, in eval
t_shape=values.T.shape, oth_shape=other.shape))
ValueError: cannot broadcast shape [(3, 1)] with block values [(3, 2)]
| 12,229 |
||||
pandas-dev/pandas | pandas-dev__pandas-23495 | 54982c24ed23b29d87a18bb9a28ee268463ad0bb | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -1118,6 +1118,7 @@ Datetimelike
- Bug in :func:`DataFrame.combine` with datetimelike values raising a TypeError (:issue:`23079`)
- Bug in :func:`date_range` with frequency of ``Day`` or higher where dates sufficiently far in the future could wrap around to the past instead of raising ``OutOfBoundsDatetime`` (:issue:`14187`)
- Bug in :class:`PeriodIndex` with attribute ``freq.n`` greater than 1 where adding a :class:`DateOffset` object would return incorrect results (:issue:`23215`)
+- Bug in :class:`Series` that interpreted string indices as lists of characters when setting datetimelike values (:issue:`23451`)
Timedelta
^^^^^^^^^
diff --git a/pandas/core/series.py b/pandas/core/series.py
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -947,7 +947,9 @@ def _set_with(self, key, value):
except Exception:
pass
- if not isinstance(key, (list, Series, np.ndarray, Series)):
+ if is_scalar(key):
+ key = [key]
+ elif not isinstance(key, (list, Series, np.ndarray)):
try:
key = list(key)
except Exception:
| Can't put date in Series if index is a string longer than 1 character
#### Code Sample
```
>>> import pandas
>>> x = pandas.Series([1,2,3], index=['Date','b','other'])
>>> x
Date 1
b 2
other 3
dtype: int64
>>> from datetime import date
>>> x.Date = date.today()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python37\lib\site-packages\pandas\core\generic.py", line 4405, in __setattr__
self[name] = value
File "C:\Python37\lib\site-packages\pandas\core\series.py", line 939, in __setitem__
setitem(key, value)
File "C:\Python37\lib\site-packages\pandas\core\series.py", line 935, in setitem
self._set_with(key, value)
File "C:\Python37\lib\site-packages\pandas\core\series.py", line 983, in _set_with
self._set_labels(key, value)
File "C:\Python37\lib\site-packages\pandas\core\series.py", line 993, in _set_labels
raise ValueError('%s not contained in the index' % str(key[mask]))
ValueError: ['D' 'a' 't' 'e'] not contained in the index
>>> x.b = date.today()
>>> x.b
datetime.date(2018, 11, 1)
>>> x
Date 1
b 2018-11-01
other 3
dtype: object
>>>
```
#### Problem description
I cannot put a date object in a Series if the index is a string with len > 1.
It works if it's only a single character. Other types seem to work. I've only seen the problem with dates.
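
The characters in the `ValueError` above point at the root cause: on this path, `Series._set_with` falls back to `key = list(key)`, which explodes a scalar string key into its characters (exactly what the `is_scalar` guard in the patch prevents). A minimal illustration:

```python
list('Date')  # -> ['D', 'a', 't', 'e'], the labels in the error message
```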
| This looks similar to https://github.com/pandas-dev/pandas/issues/12862, and I can reproduce this in master as well. | 2018-11-04T19:40:36Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python37\lib\site-packages\pandas\core\generic.py", line 4405, in __setattr__
self[name] = value
File "C:\Python37\lib\site-packages\pandas\core\series.py", line 939, in __setitem__
setitem(key, value)
File "C:\Python37\lib\site-packages\pandas\core\series.py", line 935, in setitem
self._set_with(key, value)
File "C:\Python37\lib\site-packages\pandas\core\series.py", line 983, in _set_with
self._set_labels(key, value)
File "C:\Python37\lib\site-packages\pandas\core\series.py", line 993, in _set_labels
raise ValueError('%s not contained in the index' % str(key[mask]))
ValueError: ['D' 'a' 't' 'e'] not contained in the index
| 12,290 |
|||
pandas-dev/pandas | pandas-dev__pandas-23524 | efd1844daaadee29a57943597431611d554b6c4a | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -1125,6 +1125,9 @@ Datetimelike
- Bug in :class:`PeriodIndex` with attribute ``freq.n`` greater than 1 where adding a :class:`DateOffset` object would return incorrect results (:issue:`23215`)
- Bug in :class:`Series` that interpreted string indices as lists of characters when setting datetimelike values (:issue:`23451`)
- Bug in :class:`Timestamp` constructor which would drop the frequency of an input :class:`Timestamp` (:issue:`22311`)
+- Bug in :class:`DatetimeIndex` where calling ``np.array(dtindex, dtype=object)`` would incorrectly return an array of ``long`` objects (:issue:`23524`)
+- Bug in :class:`Index` where passing a timezone-aware :class:`DatetimeIndex` and `dtype=object` would incorrectly raise a ``ValueError`` (:issue:`23524`)
+- Bug in :class:`Index` where calling ``np.array(dtindex, dtype=object)`` on a timezone-naive :class:`DatetimeIndex` would return an array of ``datetime`` objects instead of :class:`Timestamp` objects, potentially losing nanosecond portions of the timestamps (:issue:`23524`)
Timedelta
^^^^^^^^^
@@ -1171,6 +1174,7 @@ Offsets
- Bug in :class:`FY5253` where date offsets could incorrectly raise an ``AssertionError`` in arithmetic operations (:issue:`14774`)
- Bug in :class:`DateOffset` where keyword arguments ``week`` and ``milliseconds`` were accepted and ignored. Passing these will now raise ``ValueError`` (:issue:`19398`)
- Bug in adding :class:`DateOffset` with :class:`DataFrame` or :class:`PeriodIndex` incorrectly raising ``TypeError`` (:issue:`23215`)
+- Bug in comparing :class:`DateOffset` objects with non-DateOffset objects, particularly strings, raising ``ValueError`` instead of returning ``False`` for equality checks and ``True`` for not-equal checks (:issue:`23524`)
Numeric
^^^^^^^
diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -308,8 +308,13 @@ class _BaseOffset(object):
def __eq__(self, other):
if is_string_object(other):
- other = to_offset(other)
-
+ try:
+ # GH#23524 if to_offset fails, we are dealing with an
+ # incomparable type so == is False and != is True
+ other = to_offset(other)
+ except ValueError:
+ # e.g. "infer"
+ return False
try:
return self._params == other._params
except AttributeError:
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -19,6 +19,7 @@
from pandas.core.dtypes.common import (
_NS_DTYPE,
is_object_dtype,
+ is_int64_dtype,
is_datetime64tz_dtype,
is_datetime64_dtype,
ensure_int64)
@@ -388,6 +389,15 @@ def _resolution(self):
# ----------------------------------------------------------------
# Array-like Methods
+ def __array__(self, dtype=None):
+ if is_object_dtype(dtype):
+ return np.array(list(self), dtype=object)
+ elif is_int64_dtype(dtype):
+ return self.asi8
+
+ # TODO: warn that conversion may be lossy?
+ return self._data.view(np.ndarray) # follow Index.__array__
+
def __iter__(self):
"""
Return an iterator over the boxed values
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -301,11 +301,19 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None,
(dtype is not None and is_datetime64_any_dtype(dtype)) or
'tz' in kwargs):
from pandas import DatetimeIndex
- result = DatetimeIndex(data, copy=copy, name=name,
- dtype=dtype, **kwargs)
+
if dtype is not None and is_dtype_equal(_o_dtype, dtype):
- return Index(result.to_pydatetime(), dtype=_o_dtype)
+ # GH#23524 passing `dtype=object` to DatetimeIndex is invalid,
+            # will raise in the case where `data` is already tz-aware. So
+ # we leave it out of this step and cast to object-dtype after
+ # the DatetimeIndex construction.
+ # Note we can pass copy=False because the .astype below
+ # will always make a copy
+ result = DatetimeIndex(data, copy=False, name=name, **kwargs)
+ return result.astype(object)
else:
+ result = DatetimeIndex(data, copy=copy, name=name,
+ dtype=dtype, **kwargs)
return result
elif (is_timedelta64_dtype(data) or
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -2199,9 +2199,18 @@ def apply_index(self, i):
def _tick_comp(op):
+ assert op not in [operator.eq, operator.ne]
+
def f(self, other):
- return op(self.delta, other.delta)
+ try:
+ return op(self.delta, other.delta)
+ except AttributeError:
+ # comparing with a non-Tick object
+ raise TypeError("Invalid comparison between {cls} and {typ}"
+ .format(cls=type(self).__name__,
+ typ=type(other).__name__))
+ f.__name__ = '__{opname}__'.format(opname=op.__name__)
return f
@@ -2220,8 +2229,6 @@ def __init__(self, n=1, normalize=False):
__ge__ = _tick_comp(operator.ge)
__lt__ = _tick_comp(operator.lt)
__le__ = _tick_comp(operator.le)
- __eq__ = _tick_comp(operator.eq)
- __ne__ = _tick_comp(operator.ne)
def __add__(self, other):
if isinstance(other, Tick):
@@ -2242,8 +2249,13 @@ def __add__(self, other):
def __eq__(self, other):
if isinstance(other, compat.string_types):
from pandas.tseries.frequencies import to_offset
-
- other = to_offset(other)
+ try:
+ # GH#23524 if to_offset fails, we are dealing with an
+ # incomparable type so == is False and != is True
+ other = to_offset(other)
+ except ValueError:
+ # e.g. "infer"
+ return False
if isinstance(other, Tick):
return self.delta == other.delta
@@ -2258,8 +2270,13 @@ def __hash__(self):
def __ne__(self, other):
if isinstance(other, compat.string_types):
from pandas.tseries.frequencies import to_offset
-
- other = to_offset(other)
+ try:
+ # GH#23524 if to_offset fails, we are dealing with an
+ # incomparable type so == is False and != is True
+ other = to_offset(other)
+ except ValueError:
+ # e.g. "infer"
+ return True
if isinstance(other, Tick):
return self.delta != other.delta
| BUG: DatetimeIndex cast to object dtype raises/wrong for tzaware
```
>>> dti = pd.date_range('2016-01-01', periods=3, tz='US/Central')
>>> pd.Index(dti, dtype=object)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/pandas/core/indexes/base.py", line 294, in __new__
dtype=dtype, **kwargs)
File "/usr/local/lib/python2.7/site-packages/pandas/core/indexes/datetimes.py", line 453, in __new__
raise ValueError("cannot localize from non-UTC data")
ValueError: cannot localize from non-UTC data
>>> np.array(dti, dtype=object)
array([1451628000000000000L, 1451714400000000000L, 1451800800000000000L],
dtype=object)
```
I expected these to match `pd.Index(list(dti), dtype=object)` and `np.array(list(dti))`, respectively.
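
A sketch of the expected equivalences (illustrating the report, not the patch itself): both conversions should box each element as a tz-aware `Timestamp`, as the explicit list round-trip does.

```python
import numpy as np
import pandas as pd

dti = pd.date_range('2016-01-01', periods=3, tz='US/Central')

# Expected: object-dtype containers of Timestamp objects, not raw
# integer nanoseconds.
expected_index = pd.Index(list(dti), dtype=object)
expected_array = np.array(list(dti), dtype=object)
```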
| 2018-11-06T01:55:16Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/pandas/core/indexes/base.py", line 294, in __new__
dtype=dtype, **kwargs)
File "/usr/local/lib/python2.7/site-packages/pandas/core/indexes/datetimes.py", line 453, in __new__
raise ValueError("cannot localize from non-UTC data")
ValueError: cannot localize from non-UTC data
| 12,298 |
||||
pandas-dev/pandas | pandas-dev__pandas-23527 | dcb8b6a779874663d5cfa8b61d3a2c6896f29a0f | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -1298,6 +1298,7 @@ Notice how we now instead output ``np.nan`` itself instead of a stringified form
- :func:`read_excel()` will correctly show the deprecation warning for previously deprecated ``sheetname`` (:issue:`17994`)
- :func:`read_csv()` and func:`read_table()` will throw ``UnicodeError`` and not coredump on badly encoded strings (:issue:`22748`)
- :func:`read_csv()` will correctly parse timezone-aware datetimes (:issue:`22256`)
+- Bug in :func:`read_csv()` in which memory management was prematurely optimized for the C engine when the data was being read in chunks (:issue:`23509`)
- :func:`read_sas()` will parse numbers in sas7bdat-files that have width less than 8 bytes correctly. (:issue:`21616`)
- :func:`read_sas()` will correctly parse sas7bdat files with many columns (:issue:`22628`)
- :func:`read_sas()` will correctly parse sas7bdat files with data page types having also bit 7 set (so page type is 128 + 256 = 384) (:issue:`16615`)
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -132,6 +132,7 @@ cdef extern from "parser/tokenizer.h":
int64_t *word_starts # where we are in the stream
int64_t words_len
int64_t words_cap
+ int64_t max_words_cap # maximum word cap encountered
char *pword_start # pointer to stream start of current field
int64_t word_start # position start of current field
diff --git a/pandas/_libs/src/parser/tokenizer.c b/pandas/_libs/src/parser/tokenizer.c
--- a/pandas/_libs/src/parser/tokenizer.c
+++ b/pandas/_libs/src/parser/tokenizer.c
@@ -197,6 +197,7 @@ int parser_init(parser_t *self) {
sz = sz ? sz : 1;
self->words = (char **)malloc(sz * sizeof(char *));
self->word_starts = (int64_t *)malloc(sz * sizeof(int64_t));
+ self->max_words_cap = sz;
self->words_cap = sz;
self->words_len = 0;
@@ -247,7 +248,7 @@ void parser_del(parser_t *self) {
}
static int make_stream_space(parser_t *self, size_t nbytes) {
- int64_t i, cap;
+ int64_t i, cap, length;
int status;
void *orig_ptr, *newptr;
@@ -287,8 +288,23 @@ static int make_stream_space(parser_t *self, size_t nbytes) {
*/
cap = self->words_cap;
+
+ /**
+ * If we are reading in chunks, we need to be aware of the maximum number
+ * of words we have seen in previous chunks (self->max_words_cap), so
+ * that way, we can properly allocate when reading subsequent ones.
+ *
+ * Otherwise, we risk a buffer overflow if we mistakenly under-allocate
+ * just because a recent chunk did not have as many words.
+ */
+ if (self->words_len + nbytes < self->max_words_cap) {
+ length = self->max_words_cap - nbytes;
+ } else {
+ length = self->words_len;
+ }
+
self->words =
- (char **)grow_buffer((void *)self->words, self->words_len,
+ (char **)grow_buffer((void *)self->words, length,
(int64_t*)&self->words_cap, nbytes,
sizeof(char *), &status);
TRACE(
@@ -1241,6 +1257,19 @@ int parser_trim_buffers(parser_t *self) {
int64_t i;
+ /**
+ * Before we free up space and trim, we should
+ * save how many words we saw when parsing, if
+ * it exceeds the maximum number we saw before.
+ *
+ * This is important for when we read in chunks,
+ * so that we can inform subsequent chunk parsing
+ * as to how many words we could possibly see.
+ */
+ if (self->words_cap > self->max_words_cap) {
+ self->max_words_cap = self->words_cap;
+ }
+
/* trim words, word_starts */
new_cap = _next_pow2(self->words_len) + 1;
if (new_cap < self->words_cap) {
diff --git a/pandas/_libs/src/parser/tokenizer.h b/pandas/_libs/src/parser/tokenizer.h
--- a/pandas/_libs/src/parser/tokenizer.h
+++ b/pandas/_libs/src/parser/tokenizer.h
@@ -142,6 +142,7 @@ typedef struct parser_t {
int64_t *word_starts; // where we are in the stream
int64_t words_len;
int64_t words_cap;
+ int64_t max_words_cap; // maximum word cap encountered
char *pword_start; // pointer to stream start of current field
int64_t word_start; // position start of current field
| C error: Buffer overflow caught on CSV with chunksize
#### Code Sample
This will create the error, but it is slow. I recommend [downloading the file directly](https://github.com/pandas-dev/pandas/files/2548189/debug.txt).
```python
import pandas
filename = 'https://github.com/pandas-dev/pandas/files/2548189/debug.txt'
for chunk in pandas.read_csv(filename, chunksize=1000, names=range(2504)):
pass
```
#### Problem description
I get the following exception only while using the C engine. This is similar to https://github.com/pandas-dev/pandas/issues/11166.
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\programs\anaconda3\lib\site-packages\pandas\io\parsers.py", line 1007, in __next__
return self.get_chunk()
File "D:\programs\anaconda3\lib\site-packages\pandas\io\parsers.py", line 1070, in get_chunk
return self.read(nrows=size)
File "D:\programs\anaconda3\lib\site-packages\pandas\io\parsers.py", line 1036, in read
ret = self._engine.read(nrows)
File "D:\programs\anaconda3\lib\site-packages\pandas\io\parsers.py", line 1848, in read
data = self._reader.read(nrows)
File "pandas\_libs\parsers.pyx", line 876, in pandas._libs.parsers.TextReader.read
File "pandas\_libs\parsers.pyx", line 903, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas\_libs\parsers.pyx", line 945, in pandas._libs.parsers.TextReader._read_rows
File "pandas\_libs\parsers.pyx", line 932, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas\_libs\parsers.pyx", line 2112, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Buffer overflow caught - possible malformed input file.
```
#### Expected Output
None. It should just loop through the file.
#### Output of ``pd.show_versions()``
Both machines exhibit the exception.
<details>
<summary>RedHat</summary>
```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.6.final.0
python-bits: 64
OS: Linux
OS-release: 3.10.0-862.14.4.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.23.4
pytest: None
pip: 18.1
setuptools: 39.1.0
Cython: 0.29
numpy: 1.15.3
scipy: 1.1.0
pyarrow: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.7.3
pytz: 2018.5
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 3.0.0
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
```
</details>
<details>
<summary>Windows 7</summary>
```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.5.final.0
python-bits: 64
OS: Windows
OS-release: 7
machine: AMD64
processor: Intel64 Family 6 Model 58 Stepping 9, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
pandas: 0.23.0
pytest: 3.5.1
pip: 18.1
setuptools: 39.1.0
Cython: 0.28.2
numpy: 1.14.3
scipy: 1.1.0
pyarrow: None
xarray: None
IPython: 6.4.0
sphinx: 1.7.4
patsy: 0.5.0
dateutil: 2.7.3
pytz: 2018.4
blosc: None
bottleneck: 1.2.1
tables: 3.4.3
numexpr: 2.6.5
feather: None
matplotlib: 2.2.2
openpyxl: 2.5.3
xlrd: 1.1.0
xlwt: 1.3.0
xlsxwriter: 1.0.4
lxml: 4.2.1
bs4: 4.6.0
html5lib: 1.0.1
sqlalchemy: 1.2.7
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
```
</details>
| Have you been able to narrow down what exactly in the linked file is causing the exception?
@TomAugspurger I have not. I'm unsure how to debug the C engine.
@dgrahn : I have strong reason to believe that this file is actually malformed. Run this code:
~~~python
with open("debug.txt", "r") as f:
data = f.readlines()
lengths = set()
# Get row width
#
# Delimiter is definitely ","
for l in data:
l = l.strip()
lengths.add(len(l.split(",")))
print(lengths)
~~~
This will output:
~~~python
{2304, 1154, 2054, 904, 1804, 654, 1554, 404, 2454, 1304, 154, 2204, 1054, 1954, 804, 1704, 554, 1454, 304, 2354, 1204, 54, 2104, 954, 1854, 704, 1604, 454, 2504, 1354, 204, 2254, 1104, 2004, 854, 1754, 604, 1504, 354, 2404, 1254, 104, 2154, 1004, 1904, 754, 1654, 504, 1404, 254}
~~~
If the file were correctly formatted, there would be only one row width.
@gfyoung It's not formatted incorrectly. It's a jagged CSV because I didn't want to bloat the file with lots of empty columns. That's why I use the `names` parameter.
@dgrahn : Yes, it is, according to our definition. We need properly formatted CSV's, and that means having the same number of comma's across the board for all rows. Jagged CSV's unfortunately do not meet that criterion.
@gfyoung It works when reading the entire CSV. How can I debug this for chunks? Neither saving the extra columns nor reading the entire file is a feasible option. This is already a subset of a 7 GB file.
> It works when reading the entire CSV.
@dgrahn : Given that you mention that it's a subset, what do you mean by "entire CSV" ? Are you referring to the entire 7 GB file or all of `debug.txt` ? On my end, I cannot read all of `debug.txt`.
@gfyoung When I use the following, I'm able to read the entire CSV.
```
pd.read_csv('debug.csv', names=range(2504))
```
The debug file contains the first 7k lines of a file with more than 2.6M.
@dgrahn : I'm not sure you actually answered my question. Let me rephrase:
Are you able to read the file that you posted to GitHub in its entirety (via `pd.read_csv`)?
@gfyoung I'm able to read the debug file using the below code. But it fails when introducing the chunks. Does that answer the question?
```
pd.read_csv('debug.csv', names=range(2504))
```
Okay, got it. So I'm definitely not able to read all of `debug.txt` in its entirety (Ubuntu 64-bit, `0.23.4`). What version of `pandas` are you using (and on which OS)?
@gfyoung Details are included in the original post. Both Windows 7 and RedHat. 0.23.4 on RedHat, 0.23.0 on Windows 7.
Interestingly, when `chunksize=10` it fails around line 6,810. When `chunksize=100`, it fails around 3100.
More details.
```
chunksize=1, no failure
chunksize=3, no failure
chunksize=4, failure=92-96
chunksize=5, failure=5515-5520
chunksize=10, failure= 6810-6820
chunksize=100, failure= 3100-3200
```
> Details are included in the original post. Both Windows 7 and RedHat. 0.23.4 on RedHat, 0.23.0 on Windows 7.
I saw, but I wasn't sure whether you meant that it worked on both environments.
Here's a smaller file which exhibits the same behavior.
[minimal.txt](https://github.com/pandas-dev/pandas/files/2549461/minimal.txt)
````python
import pandas as pd
i = 0
for c in pd.read_csv('https://github.com/pandas-dev/pandas/files/2549461/minimal.txt', names=range(2504), chunksize=4):
print(f'{i}-{i+len(c)}')
i += len(c)
````
Okay, so I managed to read the file in its entirety on another environment. The C engine is "filling in the blanks" thanks to the `names` parameter that you passed in, so while I'm still wary of the jagged CSV format, `pandas` is a little more generous than I recalled.
As for the discrepancies, as was already noted in the older issue, passing in `engine="python"` works across the board. Thus, it remains to debug the C code and see why it breaks...
(@dgrahn : BTW, that is your answer to: "how would I debug chunks")
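
For reference, a minimal sketch of that workaround applied to the original repro (assuming a locally downloaded `debug.txt` and the same `names` width):

```python
import pandas as pd

# The python engine does not reuse the C tokenizer's trimmed buffers
# between chunks, so jagged rows read in chunks parse cleanly.
for chunk in pd.read_csv("debug.txt", chunksize=1000,
                         names=range(2504), engine="python"):
    pass  # process each chunk here
```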
> Here's a smaller file which exhibits the same behavior.
@dgrahn : Oh, that's very nice! Might you by any chance be able to make the file "skinnier" ?
(the smaller the file, the easier it would be for us to test)
@gfyoung Working on it now.
@gfyoung Ok. So it gets weirder. 2397 and below works, 2398 and above fails.
```python
i = 0
for c in pd.read_csv('https://github.com/pandas-dev/pandas/files/2549525/skinnier.txt', names=range(2397), chunksize=4):
print(f'{i}-{i+len(c)}')
i += len(c)
print('-----')
i = 0
for c in pd.read_csv('https://github.com/pandas-dev/pandas/files/2549525/skinnier.txt', names=range(2398), chunksize=4):
print(f'{i}-{i+len(c)}')
i += len(c)
```
Each line has the following number of columns:
```
801
801
451
901
- chunk divider -
1001
1
201
1001
```
[skinnier.txt](https://github.com/pandas-dev/pandas/files/2549525/skinnier.txt)
@gfyoung Ok. I have a minimal example.
[minimal.txt](https://github.com/pandas-dev/pandas/files/2549561/minimal.txt)
```
0
0
0
0
0
0
0
0,0
```
```python
import pandas as pd
i = 0
for c in pd.read_csv('https://github.com/pandas-dev/pandas/files/2549561/minimal.txt', names=range(5), chunksize=4):
print(f'{i}-{i+len(c)}')
i += len(c)
```
@dgrahn : Nice! I'm on my phone currently, so a couple of questions:
* Can you read this file in its entirety?
* Does reading this file in chunks work with the Python engine?
Also, why do you have to pass in `names=range(5)` (and not say `range(2)`) ?
@gfyoung Ok. I tried different `chunksize`s from 1-20 and columns from 2-20.
* Reading the entire file worked for columns 2-20.
* Python engine worked for columns 2-20
* C engine failed for the following conditions:
```
chunk=2,columns=7
chunk=2,columns=15
chunk=3,columns=7
chunk=3,columns=15
chunk=4,columns=5
chunk=6,columns=7
chunk=6,columns=15
```
@gfyoung I've tried varying the number of columns in the last row. Here's my results.
### 1 column
All work.
### 2 columns
```
chunksize, columns
2, 7
2, 15
3, 7
3, 15
4, 5
6, 7
6, 15
```
### 3 columns
```
chunksize, columns
2, 6
2, 7
2, 14
2, 15
3, 6
3, 7
3, 14
3, 15
4, 5
4, 10
5, 7
5, 15
6, 6
6, 7
6, 14
6, 15
```
### 4 columns
```
chunksize, columns
2, 13
2, 14
2, 15
3, 13
3, 14
3, 15
4, 5
4, 10
5, 7
5, 15
6, 13
6, 14
6, 15
````
@dgrahn : Thanks for the very thorough investigation! That is very helpful. I'll take a look at the C code later today and see what might be causing the discrepancy.
@gfyoung I tried to debug it myself by following the dev guide, but it says pandas has no attribute `read_csv`, so I think I better rely on your findings.
So I think I know what's happening. In short, with the C engine, we are able to allocate and de-allocate memory as we see fit. In our attempt to optimize space consumption after reading each chunk, the parser frees up all of the space needed to read a full row (i.e. 2,504 elements).
Unfortunately, when it tries to allocate again (at least when using [this dataset](https://github.com/pandas-dev/pandas/issues/23509#issuecomment-435930757)), it comes across one of the "skinnier" rows, causing it to under-allocate and crash with the buffer overflow error (which is a safety measure and not a core-dumping error). | 2018-11-06T09:08:08Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\programs\anaconda3\lib\site-packages\pandas\io\parsers.py", line 1007, in __next__
return self.get_chunk()
File "D:\programs\anaconda3\lib\site-packages\pandas\io\parsers.py", line 1070, in get_chunk
return self.read(nrows=size)
File "D:\programs\anaconda3\lib\site-packages\pandas\io\parsers.py", line 1036, in read
ret = self._engine.read(nrows)
File "D:\programs\anaconda3\lib\site-packages\pandas\io\parsers.py", line 1848, in read
data = self._reader.read(nrows)
File "pandas\_libs\parsers.pyx", line 876, in pandas._libs.parsers.TextReader.read
File "pandas\_libs\parsers.pyx", line 903, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas\_libs\parsers.pyx", line 945, in pandas._libs.parsers.TextReader._read_rows
File "pandas\_libs\parsers.pyx", line 932, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas\_libs\parsers.pyx", line 2112, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Buffer overflow caught - possible malformed input file.
| 12,299 |
|||
pandas-dev/pandas | pandas-dev__pandas-23550 | 28a42da41ca8e13efaa2ceb3939e576d08c232c8 | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -1292,6 +1292,7 @@ Notice how we now instead output ``np.nan`` itself instead of a stringified form
- Bug in :func:`DataFrame.to_csv` where a single level MultiIndex incorrectly wrote a tuple. Now just the value of the index is written (:issue:`19589`).
- Bug in :meth:`HDFStore.append` when appending a :class:`DataFrame` with an empty string column and ``min_itemsize`` < 8 (:issue:`12242`)
- Bug in :meth:`read_csv()` in which :class:`MultiIndex` index names were being improperly handled in the cases when they were not provided (:issue:`23484`)
+- Bug in :meth:`read_html()` in which the error message was not displaying the valid flavors when an invalid one was provided (:issue:`23549`)
Plotting
^^^^^^^^
diff --git a/pandas/io/html.py b/pandas/io/html.py
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -854,7 +854,8 @@ def _parser_dispatch(flavor):
def _print_as_set(s):
- return '{{arg}}'.format(arg=', '.join(pprint_thing(el) for el in s))
+ return ('{' + '{arg}'.format(arg=', '.join(
+ pprint_thing(el) for el in s)) + '}')
def _validate_flavor(flavor):
| Passing an invalid flavor to read_html prints the wrong error message
#### Code Sample, a copy-pastable example if possible
```python
>>> import pandas as pd
>>> df_list = pd.read_html('https://google.com', flavor='unknown')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/achabot/envs/tmp-b79b7181308bb9c1/lib/python3.7/site-packages/pandas/io/html.py", line 987, in read_html
displayed_only=displayed_only)
File "/Users/achabot/envs/tmp-b79b7181308bb9c1/lib/python3.7/site-packages/pandas/io/html.py", line 787, in _parse
flavor = _validate_flavor(flavor)
File "/Users/achabot/envs/tmp-b79b7181308bb9c1/lib/python3.7/site-packages/pandas/io/html.py", line 782, in _validate_flavor
valid=_print_as_set(valid_flavors)))
ValueError: {arg} is not a valid set of flavors, valid flavors are {arg}
```
#### Problem description
The error message should show the selected `flavor` and the valid choices. It's happening on the line linked below, and it's a regression from previous versions, which used `%`-formatting and worked properly.
https://github.com/pandas-dev/pandas/blob/de39bfc5e5c6483cb2669773fa10ddc2e32ca111/pandas/io/html.py#L857
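
The root cause is a `str.format` escaping pitfall: doubled braces are literal braces, so the placeholder is never substituted. A minimal demonstration (the second expression mirrors the merged fix):

```python
'{{arg}}'.format(arg='unknown')            # -> '{arg}'      (the bug)
'{' + '{arg}'.format(arg='unknown') + '}'  # -> '{unknown}'  (the fix)
```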
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.0.final.0
python-bits: 64
OS: Darwin
OS-release: 18.2.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.23.4
pytest: None
pip: 18.1
setuptools: 40.5.0
Cython: None
numpy: 1.15.4
scipy: None
pyarrow: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.7.5
pytz: 2018.7
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| We actually have a test for this, but we don't check the error message. Oops. | 2018-11-07T21:19:09Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/achabot/envs/tmp-b79b7181308bb9c1/lib/python3.7/site-packages/pandas/io/html.py", line 987, in read_html
displayed_only=displayed_only)
File "/Users/achabot/envs/tmp-b79b7181308bb9c1/lib/python3.7/site-packages/pandas/io/html.py", line 787, in _parse
flavor = _validate_flavor(flavor)
File "/Users/achabot/envs/tmp-b79b7181308bb9c1/lib/python3.7/site-packages/pandas/io/html.py", line 782, in _validate_flavor
valid=_print_as_set(valid_flavors)))
ValueError: {arg} is not a valid set of flavors, valid flavors are {arg}
| 12,302 |
|||
pandas-dev/pandas | pandas-dev__pandas-23618 | 1250500cfe18f60cfb8a4867c82d14467bbee7ad | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1268,8 +1268,8 @@ Numeric
Strings
^^^^^^^
--
--
+- Bug in :meth:`Index.str.partition` was not nan-safe (:issue:`23558`).
+- Bug in :meth:`Index.str.split` was not nan-safe (:issue:`23677`).
-
Interval
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -2273,7 +2273,7 @@ def to_object_array_tuples(rows: list):
k = 0
for i in range(n):
- tmp = len(rows[i])
+ tmp = 1 if checknull(rows[i]) else len(rows[i])
if tmp > k:
k = tmp
@@ -2287,7 +2287,7 @@ def to_object_array_tuples(rows: list):
except Exception:
# upcast any subclasses to tuple
for i in range(n):
- row = tuple(rows[i])
+ row = (rows[i],) if checknull(rows[i]) else tuple(rows[i])
for j in range(len(row)):
result[i, j] = row[j]
| API/BUG: Index.str.split(expand=True) not nan-safe
This is similar to #23558 and shares the same underlying reason: #23578
Found through extensive testing introduced in #23582 (which itself is a split off from #23167)
```
>>> values = ['a', np.nan, 'c']
>>> pd.Series(values).str.split(' ')
0 [a]
1 NaN
2 [c]
dtype: object
>>> pd.Series(values).str.split(' ', expand=True)
0
0 a
1 NaN
2 c
>>> pd.Index(values).str.split(' ')
Index([['a'], nan, ['c']], dtype='object')
>>> pd.Index(values).str.split(' ', expand=True)
Traceback (most recent call last):
[...]
TypeError: object of type 'float' has no len()
```
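
For illustration, a rough pure-Python equivalent of the Cython fix above (a hypothetical helper, not pandas API): nulls count as width-1 rows both when sizing the output and when filling it.

```python
import numpy as np
import pandas as pd

def to_object_array_tuples_py(rows):
    # Sketch mirroring the patched lib.to_object_array_tuples logic.
    def is_null(x):
        # scalar-null check standing in for Cython's checknull
        return np.ndim(x) == 0 and pd.isna(x)

    k = max(1 if is_null(r) else len(r) for r in rows)
    result = np.empty((len(rows), k), dtype=object)
    for i, r in enumerate(rows):
        row = (r,) if is_null(r) else tuple(r)
        for j, val in enumerate(row):
            result[i, j] = val
    return result
```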
| 2018-11-10T21:17:00Z | [] | [] |
Traceback (most recent call last):
[...]
TypeError: object of type 'float' has no len()
| 12,312 |
||||
pandas-dev/pandas | pandas-dev__pandas-23621 | 24bce1a5fdd70a66b9fb5e2f9f51631d1df6add3 | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1036,6 +1036,7 @@ Deprecations
- Constructing a :class:`TimedeltaIndex` from data with ``datetime64``-dtyped data is deprecated, will raise ``TypeError`` in a future version (:issue:`23539`)
- The ``keep_tz=False`` option (the default) of the ``keep_tz`` keyword of
:meth:`DatetimeIndex.to_series` is deprecated (:issue:`17832`).
+- Timezone converting a tz-aware ``datetime.datetime`` or :class:`Timestamp` with :class:`Timestamp` and the ``tz`` argument is now deprecated. Instead, use :meth:`Timestamp.tz_convert` (:issue:`23579`)
.. _whatsnew_0240.deprecations.datetimelike_int_ops:
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -700,6 +700,9 @@ class Timestamp(_Timestamp):
elif tz is not None:
raise ValueError('Can provide at most one of tz, tzinfo')
+ # User passed tzinfo instead of tz; avoid silently ignoring
+ tz, tzinfo = tzinfo, None
+
if is_string_object(ts_input):
# User passed a date string to parse.
# Check that the user didn't also pass a date attribute kwarg.
@@ -709,24 +712,23 @@ class Timestamp(_Timestamp):
elif ts_input is _no_input:
# User passed keyword arguments.
- if tz is None:
- # Handle the case where the user passes `tz` and not `tzinfo`
- tz = tzinfo
- return Timestamp(datetime(year, month, day, hour or 0,
- minute or 0, second or 0,
- microsecond or 0, tzinfo),
- nanosecond=nanosecond, tz=tz)
+ ts_input = datetime(year, month, day, hour or 0,
+ minute or 0, second or 0,
+ microsecond or 0)
elif is_integer_object(freq):
# User passed positional arguments:
# Timestamp(year, month, day[, hour[, minute[, second[,
# microsecond[, nanosecond[, tzinfo]]]]]])
- return Timestamp(datetime(ts_input, freq, tz, unit or 0,
- year or 0, month or 0, day or 0,
- minute), nanosecond=hour, tz=minute)
-
- if tzinfo is not None:
- # User passed tzinfo instead of tz; avoid silently ignoring
- tz, tzinfo = tzinfo, None
+ ts_input = datetime(ts_input, freq, tz, unit or 0,
+ year or 0, month or 0, day or 0)
+ nanosecond = hour
+ tz = minute
+ freq = None
+
+ if getattr(ts_input, 'tzinfo', None) is not None and tz is not None:
+ warnings.warn("Passing a datetime or Timestamp with tzinfo and the"
+ " tz parameter will raise in the future. Use"
+ " tz_convert instead.", FutureWarning)
ts = convert_to_tsobject(ts_input, tz, unit, 0, 0, nanosecond or 0)
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -45,7 +45,12 @@ def _to_m8(key, tz=None):
"""
if not isinstance(key, Timestamp):
# this also converts strings
- key = Timestamp(key, tz=tz)
+ key = Timestamp(key)
+ if key.tzinfo is not None and tz is not None:
+ # Don't tz_localize(None) if key is already tz-aware
+ key = key.tz_convert(tz)
+ else:
+ key = key.tz_localize(tz)
return np.int64(conversion.pydt_to_i8(key)).view(_NS_DTYPE)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -9336,8 +9336,14 @@ def describe_categorical_1d(data):
if is_datetime64_any_dtype(data):
tz = data.dt.tz
asint = data.dropna().values.view('i8')
+ top = Timestamp(top)
+ if top.tzinfo is not None and tz is not None:
+ # Don't tz_localize(None) if key is already tz-aware
+ top = top.tz_convert(tz)
+ else:
+ top = top.tz_localize(tz)
names += ['top', 'freq', 'first', 'last']
- result += [Timestamp(top, tz=tz), freq,
+ result += [top, freq,
Timestamp(asint.min(), tz=tz),
Timestamp(asint.max(), tz=tz)]
else:
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -937,7 +937,10 @@ def get_value(self, series, key):
# needed to localize naive datetimes
if self.tz is not None:
- key = Timestamp(key, tz=self.tz)
+ if key.tzinfo is not None:
+ key = Timestamp(key).tz_convert(self.tz)
+ else:
+ key = Timestamp(key).tz_localize(self.tz)
return self.get_value_maybe_box(series, key)
@@ -963,7 +966,11 @@ def get_value(self, series, key):
def get_value_maybe_box(self, series, key):
# needed to localize naive datetimes
if self.tz is not None:
- key = Timestamp(key, tz=self.tz)
+ key = Timestamp(key)
+ if key.tzinfo is not None:
+ key = key.tz_convert(self.tz)
+ else:
+ key = key.tz_localize(self.tz)
elif not isinstance(key, Timestamp):
key = Timestamp(key)
values = self._engine.get_value(com.values_from_object(series),
@@ -986,7 +993,10 @@ def get_loc(self, key, method=None, tolerance=None):
if isinstance(key, datetime):
# needed to localize naive datetimes
- key = Timestamp(key, tz=self.tz)
+ if key.tzinfo is None:
+ key = Timestamp(key, tz=self.tz)
+ else:
+ key = Timestamp(key).tz_convert(self.tz)
return Index.get_loc(self, key, method, tolerance)
elif isinstance(key, timedelta):
@@ -1010,7 +1020,11 @@ def get_loc(self, key, method=None, tolerance=None):
pass
try:
- stamp = Timestamp(key, tz=self.tz)
+ stamp = Timestamp(key)
+ if stamp.tzinfo is not None and self.tz is not None:
+ stamp = stamp.tz_convert(self.tz)
+ else:
+ stamp = stamp.tz_localize(self.tz)
return Index.get_loc(self, stamp, method, tolerance)
except KeyError:
raise KeyError(key)
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -1246,7 +1246,10 @@ def _format_datetime64(x, tz=None, nat_rep='NaT'):
return nat_rep
if tz is not None or not isinstance(x, Timestamp):
- x = Timestamp(x, tz=tz)
+ if getattr(x, 'tzinfo', None) is not None:
+ x = Timestamp(x).tz_convert(tz)
+ else:
+ x = Timestamp(x).tz_localize(tz)
return str(x)
| API: tz_convert within DatetimeIndex constructor
At the moment the following raises:
```
>>> dti = pd.date_range('2016-01-01', periods=3, tz='US/Central')
>>> pd.DatetimeIndex(dti, tz='Asia/Tokyo')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/pandas/core/indexes/datetimes.py", line 413, in __new__
raise TypeError(msg.format(data.tz, tz))
TypeError: data is already tz-aware US/Central, unable to set specified tz: Asia/Tokyo
```
It isn't clear to me that raising is the right thing to do; shouldn't this just be equivalent to `dti.tz_convert('Asia/Tokyo')`? Or is this ambiguous for some reason?
| This works for `Timestamp`, although I am not really a fan of `tz=` meaning both localizing and converting. But if this is properly documented, we might as well follow `Timestamp`'s behavior, unless I am missing something:
```
In [1]: pd.Timestamp(pd.Timestamp('2016-01-01', tz='US/Central'), tz='Asia/Tokyo')
Out[1]: Timestamp('2016-01-01 15:00:00+0900', tz='Asia/Tokyo')
```
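
Under the deprecation recorded above, the explicit spelling is `tz_convert`, matching the `FutureWarning` text in the patch; a sketch of the two forms:

```python
import pandas as pd

ts = pd.Timestamp('2016-01-01', tz='US/Central')

# Deprecated after this change (warns now, will raise later):
# pd.Timestamp(ts, tz='Asia/Tokyo')

# Explicit equivalent:
ts.tz_convert('Asia/Tokyo')  # Timestamp('2016-01-01 15:00:00+0900', tz='Asia/Tokyo')
```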
I would rather leave this and be very explicit.
Doing anything non-explicit with tz localization vs. conversion has bitten lots of people in the past.
For consistency's sake then, we should deprecate the `Timestamp` behavior.
yep that sounds right | 2018-11-11T00:24:27Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/pandas/core/indexes/datetimes.py", line 413, in __new__
raise TypeError(msg.format(data.tz, tz))
TypeError: data is already tz-aware US/Central, unable to set specified tz: Asia/Tokyo
| 12,313 |
|||
pandas-dev/pandas | pandas-dev__pandas-23762 | 24bce1a5fdd70a66b9fb5e2f9f51631d1df6add3 | Adding big offset to timedelta generates a python crash
#### Code Sample, a copy-pastable example if possible
##### In:
```
import pandas as pd
from pandas.tseries.frequencies import to_offset
d = pd.Timestamp("2000/1/1")
d + to_offset("D")*100**25
```
##### Out:
**=> python crash**
```
Fatal Python error: Cannot recover from stack overflow.

Current thread 0x00002b00 (most recent call first):
  File "C:\Users\geoffroy.destaintot\Miniconda3\envs\pd-0.18\lib\site-packages\pandas\tseries\offsets.py", line 2526 in delta
  File "C:\Users\geoffroy.destaintot\Miniconda3\envs\pd-0.18\lib\site-packages\pandas\tseries\offsets.py", line 2535 in apply
  File "C:\Users\geoffroy.destaintot\Miniconda3\envs\pd-0.18\lib\site-packages\pandas\tseries\offsets.py", line 2493 in __add__
  File "C:\Users\geoffroy.destaintot\Miniconda3\envs\pd-0.18\lib\site-packages\pandas\tseries\offsets.py", line 390 in __radd__
  File "C:\Users\geoffroy.destaintot\Miniconda3\envs\pd-0.18\lib\site-packages\pandas\tseries\offsets.py", line 2535 in apply
  File "C:\Users\geoffroy.destaintot\Miniconda3\envs\pd-0.18\lib\site-packages\pandas\tseries\offsets.py", line 2493 in __add__
  File "C:\Users\geoffroy.destaintot\Miniconda3\envs\pd-0.18\lib\site-packages\pandas\tseries\offsets.py", line 390 in __radd__
  ...
```
#### Expected Output
Satisfactory behaviour when using python timedeltas:
##### In:
```
import datetime as dt
import pandas as pd
from pandas.tseries.frequencies import to_offset
d = pd.Timestamp("2000/1/1")
d + dt.timedelta(days=1)*100**25
```
##### Out:
**=> python error**
```
Traceback (most recent call last):
  File "C:/Users/geoffroy.destaintot/Documents/Local/Informatique/Projets/2016-08-django-debug/to_offset_bug.py", line 11, in <module>
    d + dt.timedelta(days=1)*100**25
OverflowError: Python int too large to convert to C long
```
#### Output of `pd.show_versions()`
(same behaviour with pandas 0.17.1, 0.16.2, 0.15.2)
```
INSTALLED VERSIONS
------------------
commit: None
python: 3.5.2.final.0
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 69 Stepping 1, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
pandas: 0.18.1
nose: None
pip: 8.1.2
setuptools: 25.1.6
Cython: None
numpy: 1.11.1
scipy: None
statsmodels: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.5.3
pytz: 2016.6.1
blosc: None
bottleneck: None
tables: None
numexpr: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
boto: None
pandas_datareader: None
```
| thought we had an issue for this....
it's a wraparound thing I think.
PRs are welcome.
Any pointers on how to fix this?
step thru the code - this hits cython at some point (for the add) then again for the construction of a new Timestamp - think it's crashing there
I generated the stack trace, and stepped through the code. I've isolated the problem to the subset of the trace I've attached.
It crashes at the point where it's trying to multiply "self.n" and "self._inc", within the Delta function of the Tick class. Any suggestions on fixing this?
```
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(393)__radd__()
-> def __radd__(self, other):
(Pdb) s
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(394)__radd__()
-> return self.__add__(other)
(Pdb) s
--Call--
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(2698)__add__()
-> def __add__(self, other):
(Pdb) s
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(2699)__add__()
-> if isinstance(other, Tick):
(Pdb) s
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(2704)__add__()
-> elif isinstance(other, ABCPeriod):
(Pdb) s
--Call--
> /home/bhaprayan/Workspace/pandas/pandas/types/generic.py(7)_check()
-> @classmethod
(Pdb) s
> /home/bhaprayan/Workspace/pandas/pandas/types/generic.py(9)_check()
-> return getattr(inst, attr, '_typ') in comp
(Pdb) s
--Return--
> /home/bhaprayan/Workspace/pandas/pandas/types/generic.py(9)_check()->False
-> return getattr(inst, attr, '_typ') in comp
(Pdb) s
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(2706)__add__()
-> try:
(Pdb) s
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(2707)__add__()
-> return self.apply(other)
(Pdb) s
--Call--
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(2746)apply()
-> def apply(self, other):
(Pdb) s
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(2748)apply()
-> if isinstance(other, (datetime, np.datetime64, date)):
(Pdb) s
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(2749)apply()
-> return as_timestamp(other) + self
(Pdb) s
--Call--
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(35)as_timestamp()
-> def as_timestamp(obj):
(Pdb) s
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(36)as_timestamp()
-> if isinstance(obj, Timestamp):
(Pdb) s
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(37)as_timestamp()
-> return obj
(Pdb) s
--Return--
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(37)as_timestamp()->Timestam...0:00:00')
-> return obj
(Pdb) s
--Call--
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(2738)delta()
-> @property
(Pdb) s
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(2740)delta()
-> return self.n * self._inc
(Pdb) s
OverflowError: 'Python int too large to convert to C long'
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(2740)delta()
-> return self.n * self._inc
(Pdb) s
--Return--
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(2740)delta()->None
-> return self.n * self._inc
(Pdb) s
--Call--
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(393)__radd__()
-> def __radd__(self, other):
(Pdb)
```
so I think that multiplication needs a guard on overflow
```
In [2]: np.iinfo(np.int64).max
Out[2]: 9223372036854775807
In [3]: np.int64(1000000)*np.int64(86400*1e9)
/Users/jreback/miniconda/bin/ipython:1: RuntimeWarning: overflow encountered in long_scalars
#!/bin/bash /Users/jreback/miniconda/bin/python.app
Out[3]: -5833720368547758080
```
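
A sketch of such a guard (a hypothetical helper, not the eventual fix): check against the `int64` bound before multiplying, raising a clear `OverflowError` instead of wrapping around.

```python
import numpy as np

def checked_tick_delta(n, inc_nanos):
    # inc_nanos is nanoseconds per unit, e.g. 86400 * 10**9 for Day
    bound = np.iinfo(np.int64).max
    if n != 0 and abs(n) > bound // abs(inc_nanos):
        raise OverflowError(
            "{0} * {1} would overflow int64".format(n, inc_nanos))
    return np.int64(n) * np.int64(inc_nanos)
```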
First, I set a guard on the multiplication overflow. However, it's still stuck in a recursive loop: after catching the OverflowError, it still calls `__radd__`.
```
ipdb> s
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(2741)delta()
   2739     def delta(self):
   2740         try:
-> 2741             self.n * self._inc
   2742         except OverflowError:
   2743             raise
ipdb> s
OverflowError: 'Python int too large to convert to C long'
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(2741)delta()
   2739     def delta(self):
   2740         try:
-> 2741             self.n * self._inc
   2742         except OverflowError:
   2743             raise
ipdb> s
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(2742)delta()
   2740         try:
   2741             self.n * self._inc
-> 2742         except OverflowError:
   2743             raise
   2744
ipdb> s
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(2743)delta()
   2741             self.n * self._inc
   2742         except OverflowError:
-> 2743             raise
   2744
   2745     @property
ipdb> s
--Return--
None
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(2743)delta()
   2741             self.n * self._inc
   2742         except OverflowError:
-> 2743             raise
   2744
   2745     @property
ipdb> s
--Call--
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(393)__radd__()
    391             return NotImplemented
    392
--> 393     def __radd__(self, other):
    394         return self.__add__(other)
    395
ipdb> s
> /home/bhaprayan/Workspace/pandas/pandas/tseries/offsets.py(394)__radd__()
    392
    393     def __radd__(self, other):
--> 394         return self.__add__(other)
    395
    396     def __sub__(self, other):
```
Looks like this issue was already solved; running the reproduction scenario now, I get a clear exception:
`OverflowError: the add operation between <100000000000000000000000000000000000000000000000000 * Days> and 2000-01-01 00:00:00 will overflow`
great
do u want to do a PR with some tests ?
I put together a quick smoke test, and indeed it looks like things are generating exceptions like they should.
But two offsets, the FY5253Quarter and DateOffset cases, both take forever to fail, ~20s in one case, ~10s in the other, so something's different about them (I haven't given even a cursory glance).
this is already fixed in master if someone would like to add tests in a PR | 2018-11-18T05:04:11Z | [] | [] |
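
A sketch of such a regression test (hypothetical test name), asserting the clear `OverflowError` reported above rather than an interpreter crash:

```python
import pytest
import pandas as pd
from pandas.tseries.frequencies import to_offset

def test_huge_tick_offset_addition_raises():
    stamp = pd.Timestamp("2000/1/1")
    offset = to_offset("D") * 100 ** 25
    with pytest.raises(OverflowError):
        stamp + offset
```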
Traceback (most recent call last):
File "C:/Users/geoffroy.destaintot/Documents/Local/Informatique/Projets/2016-08-django-debug/to_offset_bug.py", line 11, in <module>
d + dt.timedelta(days=1)_100_*25
OverflowError: Python int too large to convert to C long
| 12,337 |
||||
pandas-dev/pandas | pandas-dev__pandas-23776 | c9c99129108cf16bc6c3684dc0df5a5fc60ffc8a | Lookup using datetimes does not work with hierarchical indices containing periods
Lookup in a PeriodIndex using a datetime works as expected (the period in which the timestamp falls will be returned). However, when the PeriodIndex is part of a hierarchy, this functionality fails in a non-obvious way:
```
>>> s = pd.Series([1,2,3,4,5], pd.MultiIndex.from_arrays([["a", "a", "a", "b", "b"], pd.period_range("2012-01", periods=5, freq="M")]))
>>> s.loc["a", datetime(2012,1,1)]
Traceback (most recent call last):
File "C:\VirtualEnvs\test\lib\site-packages\ipython-1.0.dev-py2.6.egg\IPython\core\interactiveshell.py", line 2837, in run_code
exec code_obj in self.user_global_ns, self.user_ns
File "<ipython-input-18-9e6cd34eee66>", line 1, in <module>
a.loc["a", datetime(2012,1,1)]
File "C:\VirtualEnvs\test\lib\site-packages\pandas-0.12.0-py2.6-win32.egg\pandas\core\indexing.py", line 697, in __getitem__
return self._getitem_tuple(key)
File "C:\VirtualEnvs\test\lib\site-packages\pandas-0.12.0-py2.6-win32.egg\pandas\core\indexing.py", line 258, in _getitem_tuple
self._has_valid_tuple(tup)
File "C:\VirtualEnvs\test\lib\site-packages\pandas-0.12.0-py2.6-win32.egg\pandas\core\indexing.py", line 691, in _has_valid_tuple
raise ValueError('Too many indexers')
ValueError: Too many indexers
```
Using a period works just fine:
```
>>> s.loc["a", pd.Period("2012-01")]
1
```
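
Until the lookup handles datetimes, a possible workaround (using the `s` from the example above) is coercing the key to a `Period` with the level's frequency first:

```python
from datetime import datetime
import pandas as pd

key = pd.Period(datetime(2012, 1, 1), freq="M")  # match the level's freq
s.loc["a", key]  # -> 1
```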
| A possibly related issue (which happens when using a MultiIndex containing periods) is that when querying with a label that is not in the index, a ValueError("Too many indexers") is raised instead of a KeyError.
Works in 0.23.2, needs test. | 2018-11-19T06:37:18Z | [] | [] |
Traceback (most recent call last):
File "C:\VirtualEnvs\test\lib\site-packages\ipython-1.0.dev-py2.6.egg\IPython\core\interactiveshell.py", line 2837, in run_code
exec code_obj in self.user_global_ns, self.user_ns
File "<ipython-input-18-9e6cd34eee66>", line 1, in <module>
a.loc["a", datetime(2012,1,1)]
File "C:\VirtualEnvs\test\lib\site-packages\pandas-0.12.0-py2.6-win32.egg\pandas\core\indexing.py", line 697, in __getitem__
return self._getitem_tuple(key)
File "C:\VirtualEnvs\test\lib\site-packages\pandas-0.12.0-py2.6-win32.egg\pandas\core\indexing.py", line 258, in _getitem_tuple
self._has_valid_tuple(tup)
File "C:\VirtualEnvs\test\lib\site-packages\pandas-0.12.0-py2.6-win32.egg\pandas\core\indexing.py", line 691, in _has_valid_tuple
raise ValueError('Too many indexers')
ValueError: Too many indexers
| 12,340 |
||||
pandas-dev/pandas | pandas-dev__pandas-23864 | 20ae4543c1d8838f52229830bfae0cc8626801bb | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1420,6 +1420,7 @@ Groupby/Resample/Rolling
- Bug in :meth:`DataFrame.expanding` in which the ``axis`` argument was not being respected during aggregations (:issue:`23372`)
- Bug in :meth:`pandas.core.groupby.DataFrameGroupBy.transform` which caused missing values when the input function can accept a :class:`DataFrame` but renames it (:issue:`23455`).
- Bug in :func:`pandas.core.groupby.GroupBy.nth` where column order was not always preserved (:issue:`20760`)
+- Bug in :meth:`pandas.core.groupby.DataFrameGroupBy.rank` with ``method='dense'`` and ``pct=True`` when a group has only one member would raise a ``ZeroDivisionError`` (:issue:`23666`).
Reshaping
^^^^^^^^^
diff --git a/pandas/_libs/groupby_helper.pxi.in b/pandas/_libs/groupby_helper.pxi.in
--- a/pandas/_libs/groupby_helper.pxi.in
+++ b/pandas/_libs/groupby_helper.pxi.in
@@ -587,7 +587,7 @@ def group_rank_{{name}}(ndarray[float64_t, ndim=2] out,
# rankings, so we assign them percentages of NaN.
if out[i, 0] != out[i, 0] or out[i, 0] == NAN:
out[i, 0] = NAN
- else:
+ elif grp_sizes[i, 0] != 0:
out[i, 0] = out[i, 0] / grp_sizes[i, 0]
{{endif}}
{{endfor}}
| ZeroDivisionError when groupby rank with method="dense" and pct=True
When I tried to use the groupby rank function with the method="dense" and pct=True options, I encountered a ZeroDivisionError.
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
df = pd.DataFrame({"A": [1, 1, 1, 2, 2, 2],
"B": [1, 1, 1, 1, 2, 2],
"C": [1, 2, 1, 1, 1, 2]})
df.groupby(["A", "B"])["C"].rank(method="dense", pct=True)
```
error:
```
Traceback (most recent call last):
File "c:/Users/<user_name>/Documents/test.py", line 6, in <module>
df.groupby(["A", "B"])["C"].rank(method="dense", pct=True)
File "C:\Users\<user_name>\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 1906, in rank
na_option=na_option, pct=pct, axis=axis)
File "C:\Users\<user_name>\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 1025, in _cython_transform
**kwargs)
File "C:\Users\<user_name>\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 2630, in transform
return self._cython_operation('transform', values, how, axis, **kwargs)
File "C:\Users\<user_name>\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 2590, in _cython_operation
**kwargs)
File "C:\Users\<user_name>\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 2664, in _transform
transform_func(result, values, comp_ids, is_datetimelike, **kwargs)
File "C:\Users\<user_name>\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 2479, in wrapper
return f(afunc, *args, **kwargs)
File "C:\Users\<user_name>\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 2431, in <lambda>
kwargs.get('na_option', 'keep')
File "pandas\_libs\groupby_helper.pxi", line 1292, in pandas._libs.groupby.group_rank_int64
ZeroDivisionError: float division
```
#### Problem description
I encountered a ZeroDivisionError when I tried to use the groupby rank function.
I can't figure out exactly what the problem is, but when I drop either the method="dense" or the pct=True option, the above code works.
If some elements in the above DataFrame are changed, this error disappears. For example, the following code gives the expected output.
```python
df = pd.DataFrame({"A": [1, 1, 1, 2, 2, 2],
"B": [1, 1, 1, 1, 2, 2],
"C": [1, 2, 1, 0, 1, 2]}) # a little change in column C
df.groupby(["A", "B"])["C"].rank(method="dense", pct=True)
```
output:
```
0 0.5
1 1.0
2 0.5
3 1.0
4 0.5
5 1.0
Name: C, dtype: float64
```
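A possible workaround sketch until a fix lands (my assumption, not an official recommendation): rank within each group via `apply`, so that `Series.rank` is used and the failing Cython group path is avoided. With the original failing `df`:

```python
df.groupby(["A", "B"])["C"].apply(lambda s: s.rank(method="dense", pct=True))
```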
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.6.final.0
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 78 Stepping 3, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
pandas: 0.23.4
pytest: 3.5.1
pip: 10.0.1
setuptools: 39.1.0
Cython: 0.28.2
numpy: 1.14.5
scipy: 1.1.0
pyarrow: None
xarray: None
IPython: 6.4.0
sphinx: 1.7.4
patsy: 0.5.0
dateutil: 2.7.3
pytz: 2018.4
blosc: None
bottleneck: 1.2.1
tables: 3.4.3
numexpr: 2.6.5
feather: None
matplotlib: 3.0.0
openpyxl: 2.5.3
xlrd: 1.1.0
xlwt: 1.3.0
xlsxwriter: 1.0.4
lxml: 4.2.5
bs4: 4.6.0
html5lib: 1.0.1
sqlalchemy: 1.2.7
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| I think this is caused by groups of size 1. By removing the (A, B) = (2, 1) group, the error goes away.
@WillAyd Do you mind if I tackle this? It's my first time contributing to pandas but I think I have a rough idea on how to fix the problem.
Go for it! | 2018-11-23T01:26:54Z | [] | [] |
Traceback (most recent call last):
File "c:/Users/<user_name>/Documents/test.py", line 6, in <module>
df.groupby(["A", "B"])["C"].rank(method="dense", pct=True)
File "C:\Users\<user_name>\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 1906, in rank
na_option=na_option, pct=pct, axis=axis)
File "C:\Users\<user_name>\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 1025, in _cython_transform
**kwargs)
File "C:\Users\<user_name>\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 2630, in transform
return self._cython_operation('transform', values, how, axis, **kwargs)
File "C:\Users\<user_name>\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 2590, in _cython_operation
**kwargs)
File "C:\Users\<user_name>\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 2664, in _transform
transform_func(result, values, comp_ids, is_datetimelike, **kwargs)
File "C:\Users\<user_name>\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 2479, in wrapper
return f(afunc, *args, **kwargs)
File "C:\Users\<user_name>\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 2431, in <lambda>
kwargs.get('na_option', 'keep')
File "pandas\_libs\groupby_helper.pxi", line 1292, in pandas._libs.groupby.group_rank_int64
ZeroDivisionError: float division
| 12,355 |
|||
pandas-dev/pandas | pandas-dev__pandas-24005 | 92d25f0da6c3b1175047cba8c900e04da68920b8 | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1256,6 +1256,7 @@ Categorical
- Bug in :meth:`Categorical.take` with a user-provided ``fill_value`` not encoding the ``fill_value``, which could result in a ``ValueError``, incorrect results, or a segmentation fault (:issue:`23296`).
- In meth:`Series.unstack`, specifying a ``fill_value`` not present in the categories now raises a ``TypeError`` rather than ignoring the ``fill_value`` (:issue:`23284`)
- Bug when resampling :meth:`Dataframe.resample()` and aggregating on categorical data, the categorical dtype was getting lost. (:issue:`23227`)
+- Bug in many methods of the ``.str``-accessor, which always failed on calling the ``CategoricalIndex.str`` constructor (:issue:`23555`, :issue:`23556`)
Datetimelike
^^^^^^^^^^^^
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -15,7 +15,7 @@
from pandas.core.dtypes.common import (
ensure_object, is_bool_dtype, is_categorical_dtype, is_integer,
is_list_like, is_object_dtype, is_re, is_scalar, is_string_like)
-from pandas.core.dtypes.generic import ABCIndex, ABCSeries
+from pandas.core.dtypes.generic import ABCIndexClass, ABCSeries
from pandas.core.dtypes.missing import isna
from pandas.core.algorithms import take_1d
@@ -931,7 +931,7 @@ def str_extractall(arr, pat, flags=0):
if regex.groups == 0:
raise ValueError("pattern contains no capture groups")
- if isinstance(arr, ABCIndex):
+ if isinstance(arr, ABCIndexClass):
arr = arr.to_series().reset_index(drop=True)
names = dict(zip(regex.groupindex.values(), regex.groupindex.keys()))
@@ -1854,7 +1854,7 @@ def __iter__(self):
def _wrap_result(self, result, use_codes=True,
name=None, expand=None, fill_value=np.nan):
- from pandas.core.index import Index, MultiIndex
+ from pandas import Index, Series, MultiIndex
# for category, we do the stuff on the categories, so blow it up
# to the full series again
@@ -1862,7 +1862,8 @@ def _wrap_result(self, result, use_codes=True,
# so make it possible to skip this step as the method already did this
# before the transformation...
if use_codes and self._is_categorical:
- result = take_1d(result, self._orig.cat.codes,
+ # if self._orig is a CategoricalIndex, there is no .cat-accessor
+ result = take_1d(result, Series(self._orig, copy=False).cat.codes,
fill_value=fill_value)
if not hasattr(result, 'ndim') or not hasattr(result, 'dtype'):
| BUG: many methods on CategoricalIndex.str are broken
This was also uncovered by #23167, but is a different error than #23555.
Basically, all methods calling `CategoricalIndex.str._wrap_result(result, use_codes=True)` will necessarily fail, e.g.:
```
>>> import pandas as pd
>>> pd.Index(['a', 'b', 'aa'], dtype='category').str.replace('a', 'c')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\ProgramData\Miniconda3\envs\pandas-dev\lib\site-packages\pandas\core\strings.py", line 2430, in replace
return self._wrap_result(result)
File "C:\ProgramData\Miniconda3\envs\pandas-dev\lib\site-packages\pandas\core\strings.py", line 1964, in _wrap_result
result = take_1d(result, self._orig.cat.codes)
AttributeError: 'CategoricalIndex' object has no attribute 'cat'
```
This is because `self._orig` is the original `CategoricalIndex`, which does not have a `cat`-accessor.
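Until this is fixed, a workaround sketch is to route through a `Series` (which does have the `cat` accessor) and rebuild the index from the result:

```python
import pandas as pd

idx = pd.Index(['a', 'b', 'aa'], dtype='category')
# The Series-based .str methods work, because Series has the .cat accessor.
result = pd.Index(pd.Series(idx).str.replace('a', 'c'))
```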
| Yikes! That's a pretty serious bug there...
cc @jreback | 2018-11-29T23:37:46Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\ProgramData\Miniconda3\envs\pandas-dev\lib\site-packages\pandas\core\strings.py", line 2430, in replace
return self._wrap_result(result)
File "C:\ProgramData\Miniconda3\envs\pandas-dev\lib\site-packages\pandas\core\strings.py", line 1964, in _wrap_result
result = take_1d(result, self._orig.cat.codes)
AttributeError: 'CategoricalIndex' object has no attribute 'cat'
| 12,374 |
|||
pandas-dev/pandas | pandas-dev__pandas-24634 | dc91f4cb03208889b98dc29c1a1fe46b979e81c7 | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1553,6 +1553,7 @@ Timezones
- Bug in :func:`to_datetime` where ``utc=True`` was not respected when specifying a ``unit`` and ``errors='ignore'`` (:issue:`23758`)
- Bug in :func:`to_datetime` where ``utc=True`` was not respected when passing a :class:`Timestamp` (:issue:`24415`)
- Bug in :meth:`DataFrame.any` returns wrong value when ``axis=1`` and the data is of datetimelike type (:issue:`23070`)
+- Bug in :meth:`DatetimeIndex.to_period` where a timezone aware index was converted to UTC first before creating :class:`PeriodIndex` (:issue:`22905`)
Offsets
^^^^^^^
@@ -1802,6 +1803,9 @@ Reshaping
- Constructing a DataFrame with an index argument that wasn't already an instance of :class:`~pandas.core.Index` was broken (:issue:`22227`).
- Bug in :class:`DataFrame` prevented list subclasses to be used to construction (:issue:`21226`)
- Bug in :func:`DataFrame.unstack` and :func:`DataFrame.pivot_table` returning a missleading error message when the resulting DataFrame has more elements than int32 can handle. Now, the error message is improved, pointing towards the actual problem (:issue:`20601`)
+- Bug in :func:`DataFrame.unstack` where a ``ValueError`` was raised when unstacking timezone aware values (:issue:`18338`)
+- Bug in :func:`DataFrame.stack` where timezone aware values were converted to timezone naive values (:issue:`19420`)
+- Bug in :func:`merge_asof` where a ``TypeError`` was raised when ``by_col`` were timezone aware values (:issue:`21184`)
.. _whatsnew_0240.bug_fixes.sparse:
| bug: merge_asof with tz-aware datetime "by" parameter raises
#### Code Sample
```python
import pandas as pd
left = pd.DataFrame({'by_col': pd.DatetimeIndex(['2018-01-01']).tz_localize('UTC'),
'on_col': [2], 'values': ['a']})
right = pd.DataFrame({'by_col': pd.DatetimeIndex(['2018-01-01']).tz_localize('UTC'),
'on_col': [1], 'values': ['b']})
merged = pd.merge_asof(left, right, by='by_col', on='on_col')
```
#### Error traceback
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Hamb\AppData\Local\Programs\Python\Python36\lib\site-packages\pandas\core\reshape\merge.py", line 478, in merge_asof
return op.get_result()
File "C:\Users\Hamb\AppData\Local\Programs\Python\Python36\lib\site-packages\pandas\core\reshape\merge.py", line 1163, in get_result
join_index, left_indexer, right_indexer = self._get_join_info()
File "C:\Users\Hamb\AppData\Local\Programs\Python\Python36\lib\site-packages\pandas\core\reshape\merge.py", line 776, in _get_join_info
right_indexer) = self._get_join_indexers()
File "C:\Users\Hamb\AppData\Local\Programs\Python\Python36\lib\site-packages\pandas\core\reshape\merge.py", line 1437, in _get_join_indexers
tolerance)
TypeError: Argument 'left_by_values' has incorrect type (expected numpy.ndarray, got Index)
```
#### Problem description
The function pandas.merge_asof raises when the "by" parameter is given a column of tz-aware datetime type.
Note that the same code with tz-naive datetimes works:
```python
import pandas as pd
left = pd.DataFrame({'by_col': pd.DatetimeIndex(['2018-01-01']),
'on_col': [2], 'values': ['a']})
right = pd.DataFrame({'by_col': pd.DatetimeIndex(['2018-01-01']),
'on_col': [1], 'values': ['b']})
merged = pd.merge_asof(left, right, by='by_col', on='on_col')
print(merged)
```
outputs:
```
by_col on_col values_x values_y
0 2018-01-01 2 a b
```
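Until this is fixed, a workaround sketch (an assumption on my part, reusing the tz-aware frames from the first example) is to make the "by" columns tz-naive for the merge and restore the timezone afterwards:

```python
for df in (left, right):
    df['by_col'] = df['by_col'].dt.tz_convert('UTC').dt.tz_localize(None)

merged = pd.merge_asof(left, right, by='by_col', on='on_col')
merged['by_col'] = merged['by_col'].dt.tz_localize('UTC')
```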
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.4.final.0
python-bits: 64
OS: Windows
OS-release: 7
machine: AMD64
processor: Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
byteorder: little
LC_ALL: None
LANG: FR
LOCALE: None.None
pandas: 0.23.0
pytest: None
pip: 10.0.1
setuptools: 38.5.1
Cython: None
numpy: 1.14.1
scipy: 1.0.0
pyarrow: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2018.3
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: 0.9999999
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| Thanks, I can confirm that this bug is occurring on master. PR to fix is welcome!
xref #14844: a similar `merge_asof` tz-aware issue that's been fixed, and which could potentially be useful for determining a fix here (not certain though).
I'd be glad to help, but I have no experience contributing to a big project. So I can try to find a fix when I have time to dive into it, but no promises yet! :)
When the `_get_merge_keys` function preps the keys in `pandas/core/reshape/merge.py`, essentially this happens:
```
In [8]: left
Out[8]:
by_col on_col values
0 2018-01-01 00:00:00+00:00 2 a
In [9]: left_naive
Out[9]:
by_col on_col values
0 2018-01-01 2 a
In [10]: left._get_label_or_level_values('by_col')
Out[10]: DatetimeIndex(['2018-01-01'], dtype='datetime64[ns, UTC]', freq=None)
In [11]: left_naive._get_label_or_level_values('by_col')
Out[11]: array(['2018-01-01T00:00:00.000000000'], dtype='datetime64[ns]')
```
The results are cast to object dtype, but are passed to a cython function that expects a numpy array instead of an Index. A `.values` call is needed somewhere in the flow to cast timezone aware keys to a numpy array.
Here's a patch to fix it. This routine could probably use some general refactoring (maybe), but that can be done in the future.
```
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 4d8897fb7..58454d0cf 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -1420,6 +1420,11 @@ class _AsOfMerge(_OrderedMerge):
left_by_values = flip(left_by_values)
right_by_values = flip(right_by_values)
+ # initial type conversion as needed
+ if needs_i8_conversion(left_by_values):
+ left_by_values = left_by_values.view('i8')
+ right_by_values = right_by_values.view('i8')
+
# upcast 'by' parameter because HashTable is limited
by_type = _get_cython_type_upcast(left_by_values.dtype)
by_type_caster = _type_casters[by_type]
```
I was actually looking into a fix too this morning, and in the process found out that the bug originates in a typing issue in the following function of pandas/core/generic.py (line 1327):
```
def _get_label_or_level_values(self, key, axis=0, stacklevel=1):
"""
Return a 1-D array of values associated with `key`, a label or level
from the given `axis`.
```
The expected return type is a numpy array, but the following (line 1375):
```
values = self.xs(key, axis=other_axes[0])._values
```
produces a DateTimeIndex in the case when _values is accessed on a tz-aware datetime Series.
This is because _values is overridden in pandas/core/indexes/datetimes.py (line 675):
```
@property
def _values(self):
# tz-naive -> ndarray
# tz-aware -> DatetimeIndex
if self.tz is not None:
return self
else:
return self.values
```
Edit: the problem I'm pointing out also yields an error when doing things such as
```
left = pd.DataFrame({'on_col': pd.DatetimeIndex(['2018-01-01']).tz_localize('UTC'), 'values': ['a']})
right = pd.DataFrame({'values': ['b']}, index=pd.DatetimeIndex(['2018-01-01']).tz_localize('UTC'))
merged = left.merge(right, left_on='on_col', right_index=True)
```
Sorry for the lack of details; I'm a bit short on time right now, but I can elaborate later if needed.
This looks to be fixed on master now. Could use a test. | 2019-01-05T08:27:33Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Hamb\AppData\Local\Programs\Python\Python36\lib\site-packages\pandas\core\reshape\merge.py", line 478, in merge_asof
return op.get_result()
File "C:\Users\Hamb\AppData\Local\Programs\Python\Python36\lib\site-packages\pandas\core\reshape\merge.py", line 1163, in get_result
join_index, left_indexer, right_indexer = self._get_join_info()
File "C:\Users\Hamb\AppData\Local\Programs\Python\Python36\lib\site-packages\pandas\core\reshape\merge.py", line 776, in _get_join_info
right_indexer) = self._get_join_indexers()
File "C:\Users\Hamb\AppData\Local\Programs\Python\Python36\lib\site-packages\pandas\core\reshape\merge.py", line 1437, in _get_join_indexers
tolerance)
TypeError: Argument 'left_by_values' has incorrect type (expected numpy.ndarray, got Index)
| 12,466 |
|||
pandas-dev/pandas | pandas-dev__pandas-24725 | 17a6bc56e5ab6ad3dab12d3a8b20ed69a5830b6f | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1816,6 +1816,7 @@ Reshaping
- Bug in :func:`DataFrame.unstack` where a ``ValueError`` was raised when unstacking timezone aware values (:issue:`18338`)
- Bug in :func:`DataFrame.stack` where timezone aware values were converted to timezone naive values (:issue:`19420`)
- Bug in :func:`merge_asof` where a ``TypeError`` was raised when ``by_col`` were timezone aware values (:issue:`21184`)
+- Bug showing an incorrect shape when throwing error during ``DataFrame`` construction. (:issue:`20742`)
.. _whatsnew_0240.bug_fixes.sparse:
@@ -1853,6 +1854,7 @@ Other
- Bug where C variables were declared with external linkage causing import errors if certain other C libraries were imported before Pandas. (:issue:`24113`)
+
.. _whatsnew_0.24.0.contributors:
Contributors
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -1674,7 +1674,15 @@ def create_block_manager_from_arrays(arrays, names, axes):
def construction_error(tot_items, block_shape, axes, e=None):
""" raise a helpful message about our construction """
passed = tuple(map(int, [tot_items] + list(block_shape)))
- implied = tuple(map(int, [len(ax) for ax in axes]))
+ # Correcting the user facing error message during dataframe construction
+ if len(passed) <= 2:
+ passed = passed[::-1]
+
+ implied = tuple(len(ax) for ax in axes)
+ # Correcting the user facing error message during dataframe construction
+ if len(implied) <= 2:
+ implied = implied[::-1]
+
if passed == implied and e is not None:
raise e
if block_shape[0] == 0:
| DataFrame creation incorrect error message
The problem was already mentioned as part of other issues, but it still persists in 0.22.
https://github.com/pandas-dev/pandas/issues/8020
https://github.com/blaze/blaze/issues/466
Both the reported expected shape and the reported input data shape are transposed, which causes a lot of confusion. In my opinion, the reference value should be `DataFrame.shape`.
```python
my_arr = np.array([1, 2, 3])
print("my_arr.shape: {}".format(my_arr.shape))
df = pd.DataFrame(index=[0], columns=range(0, 4), data=my_arr)
```
```python
my_arr.shape: (3,)
Traceback (most recent call last):
...
ValueError: Shape of passed values is (1, 3), indices imply (4, 1)
```
Below are the shapes that are expected to be reported:
```python
my_arr = np.array([[0, 1, 2, 3]])
print("my_arr.shape: {}".format(my_arr.shape))
df = pd.DataFrame(index=[0], columns=range(0, 4), data=my_arr)
print(df.shape)
```
```python
my_arr.shape: (1, 4)
(1, 4)
```
I'm not sure whether this is another issue, but in the first example the cause of the error is 1-dimensional data while the constructor expects 2-dimensional data. The user gets no hint about this from the error message.
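For illustration, a minimal sketch of the kind of check I would expect (my assumption, not pandas internals; `check_data_shape` is a hypothetical helper):

```python
def check_data_shape(data, index, columns):
    implied = (len(index), len(columns))
    if data.ndim != 2:
        # Point at the real problem first: wrong dimensionality.
        raise ValueError("Expected 2-dimensional data, got {}-dimensional "
                         "data of shape {}".format(data.ndim, data.shape))
    if data.shape != implied:
        # Report both shapes in DataFrame.shape order.
        raise ValueError("Shape of passed values is {}, indices imply "
                         "{}".format(data.shape, implied))
```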
| The first example is wrong: the block manager reports this but doesn't flip the dim (like we do for everything else), so we would welcome a PR to correct that.
I don't see a problem with the second: you gave a (1, 4) array, which is the same as the dim of the frame, so it constructs.
Traceback (most recent call last):
...
ValueError: Shape of passed values is (1, 3), indices imply (4, 1)
| 12,476 |
|||
pandas-dev/pandas | pandas-dev__pandas-24758 | 453fa85a8b88ca22c7b878a3fcf97e068f11b6c4 | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1790,6 +1790,7 @@ I/O
- Bug in :meth:`DataFrame.to_dict` when the resulting dict contains non-Python scalars in the case of numeric data (:issue:`23753`)
- :func:`DataFrame.to_string()`, :func:`DataFrame.to_html()`, :func:`DataFrame.to_latex()` will correctly format output when a string is passed as the ``float_format`` argument (:issue:`21625`, :issue:`22270`)
- Bug in :func:`read_csv` that caused it to raise ``OverflowError`` when trying to use 'inf' as ``na_value`` with integer index column (:issue:`17128`)
+- Bug in :func:`read_csv` that caused the C engine on Python 3.6+ on Windows to improperly read CSV filenames with accented or special characters (:issue:`15086`)
- Bug in :func:`read_fwf` in which the compression type of a file was not being properly inferred (:issue:`22199`)
- Bug in :func:`pandas.io.json.json_normalize` that caused it to raise ``TypeError`` when two consecutive elements of ``record_path`` are dicts (:issue:`22706`)
- Bug in :meth:`DataFrame.to_stata`, :class:`pandas.io.stata.StataWriter` and :class:`pandas.io.stata.StataWriter117` where a exception would leave a partially written and invalid dta file (:issue:`23573`)
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -677,7 +677,13 @@ cdef class TextReader:
if isinstance(source, basestring):
if not isinstance(source, bytes):
- source = source.encode(sys.getfilesystemencoding() or 'utf-8')
+ if compat.PY36 and compat.is_platform_windows():
+ # see gh-15086.
+ encoding = "mbcs"
+ else:
+ encoding = sys.getfilesystemencoding() or "utf-8"
+
+ source = source.encode(encoding)
if self.memory_map:
ptr = new_mmap(source)
| OSError when reading file with accents in file path
#### Code Sample, a copy-pastable example if possible
`test.txt` and `test_é.txt` are the same file, only the name changes:
```python
pd.read_csv('test.txt')
Out[3]:
1 1 1
0 1 1 1
1 1 1 1
pd.read_csv('test_é.txt')
Traceback (most recent call last):
File "<ipython-input-4-fd67679d1d17>", line 1, in <module>
pd.read_csv('test_é.txt')
File "d:\app\python36\lib\site-packages\pandas\io\parsers.py", line 646, in parser_f
return _read(filepath_or_buffer, kwds)
File "d:\app\python36\lib\site-packages\pandas\io\parsers.py", line 389, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "d:\app\python36\lib\site-packages\pandas\io\parsers.py", line 730, in __init__
self._make_engine(self.engine)
File "d:\app\python36\lib\site-packages\pandas\io\parsers.py", line 923, in _make_engine
self._engine = CParserWrapper(self.f, **self.options)
File "d:\app\python36\lib\site-packages\pandas\io\parsers.py", line 1390, in __init__
self._reader = _parser.TextReader(src, **kwds)
File "pandas\parser.pyx", line 373, in pandas.parser.TextReader.__cinit__ (pandas\parser.c:4184)
File "pandas\parser.pyx", line 669, in pandas.parser.TextReader._setup_parser_source (pandas\parser.c:8471)
OSError: Initializing from file failed
```
#### Problem description
Pandas returns an OSError when trying to read a file with accents in the file path.
The problem is new (it appeared after I upgraded to Python 3.6 and Pandas 0.19.2).
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.0.final.0
python-bits: 64
OS: Windows
OS-release: 7
machine: AMD64
processor: Intel64 Family 6 Model 94 Stepping 3, GenuineIntel
byteorder: little
LC_ALL: None
LANG: fr
LOCALE: None.None
pandas: 0.19.2
nose: None
pip: 9.0.1
setuptools: 32.3.1
Cython: 0.25.2
numpy: 1.11.3
scipy: 0.18.1
statsmodels: None
xarray: None
IPython: 5.1.0
sphinx: 1.5.1
patsy: None
dateutil: 2.6.0
pytz: 2016.10
blosc: None
bottleneck: 1.2.0
tables: None
numexpr: 2.6.1
matplotlib: 1.5.3
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: 0.999999999
httplib2: None
apiclient: None
sqlalchemy: 1.1.4
pymysql: None
psycopg2: None
jinja2: 2.9.3
boto: None
pandas_datareader: None
</details>
| Just my two pennies' worth: I quickly tried it out on Mac OS X and Ubuntu with no problems (see below). Could this be an environment/platform problem? I noticed that the `LOCALE` is set to `None.None`. Unfortunately I do not have a Windows machine to try this example on. Admittedly this would not explain why you've seen this *after* the upgrade to Python 3.6 and pandas 0.19.2.
Note: I just set up a virtualenv with Python 3.6 and installed pandas 0.19.2 using pip.
```python
>>> import pandas as pd
>>> pd.read_csv('test_é.txt')
a b c
0 1 2 3
1 4 5 6
```
Output of **pd.show_versions()**
<details>
INSTALLED VERSIONS
commit: None
python: 3.6.0.final.0
python-bits: 64
OS: Linux
OS-release: 4.4.0-57-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
pandas: 0.19.2
nose: None
pip: 9.0.1
setuptools: 32.3.1
Cython: None
numpy: 1.11.3
scipy: None
statsmodels: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.0
pytz: 2016.10
blosc: None
bottleneck: None
tables: None
numexpr: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
boto: None
pandas_datareader: None
</details>
I believe 3.6 switches the file system encoding on Windows to UTF-8 (from ASCII). Apart from that, we don't have testing enabled yet on Windows for 3.6 (as some of the required packages are just now becoming available).
@JGoutin
So I just added build support on AppVeyor (Windows) for 3.6; if you'd push up your tests to see if it works, that would be great.
I also faced the same problem: the program stopped at pd.read_csv(file_path). My situation is similar; it appeared after I upgraded my Python to 3.6 (I'm not sure exactly which version I had installed before, maybe 3.5...).
@jreback what is the next step towards a fix here?
You have mentioned a PR that got 'blown away' - what does that mean?
While I do not use Windows, I could try to help (I just got a VM to debug a piece of my code that apparently does not work on Windows).
BTW, a workaround: pass a file handle instead of a name
`pd.read_csv(open('test_é.txt', 'r'))`
(there are several workarounds in related issues, but I have not seen this one)
@tpietruszka see comments on the PR: https://github.com/pandas-dev/pandas/pull/15092 (it got removed from a private fork, but it was pretty much there).
You basically need to encode the paths differently on py3.6 (vs. other Pythons) on Windows; basically you need to implement https://docs.python.org/3/whatsnew/3.6.html#pep-529-change-windows-filesystem-encoding-to-utf-8
my old code (can't run):
```
import pandas as pd
import os
file_path='./dict/字典.csv'
df_name = pd.read_csv(file_path,sep=',' )
```
new code (sucessful):
```
import pandas as pd
import os
file_path='./dict/dict.csv'
df_name = pd.read_csv(file_path,sep=',' )
```
I think this bug is a filename problem.
After I changed the filename from Chinese to English, it runs now.
If anyone comes here like me because they hit the same problem, here is a solution until pandas is fixed to work with PEP 529 (basically any non-ASCII chars in your path or filename will result in errors):
Insert the following two lines at the beginning of your code to revert to the old way of handling paths on Windows:
```
import sys
sys._enablelegacywindowsfsencoding()
```
I use the solution above and it works. Thanks very much @fotisj!
However, I'm still confused about why DataFrame.to_csv() doesn't hit the same problem. In other words, for a unicode file path, writing is OK while reading isn't.
path=os.path.join('E:\语料','sina.csv')
pd.read_csv(open(path, 'r',encoding='utf8'))
This succeeds.
Can someone with an affected system check if changing this line
https://github.com/pandas-dev/pandas/blob/e8620abc12a4c468a75adb8607fd8e0eb1c472e7/pandas/io/common.py#L209
to
```python
return _expand_user(os.fsencode(filepath_or_buffer)), None, compression
```
fixes it?
No, it does not.
Results in: OSError: Expected file path name or file-like object, got <class 'bytes'> type
(on Windows 10)
OSError Traceback (most recent call last)
<ipython-input-2-e8247998d6d4> in <module>()
1
----> 2 df = pd.read_csv(r'D:\mydata\Dropbox\uni\progrs\test öäau\n\teu.csv', sep='\t')
C:\conda\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision)
707 skip_blank_lines=skip_blank_lines)
708
--> 709 return _read(filepath_or_buffer, kwds)
710
711 parser_f.__name__ = name
C:\conda\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds)
447
448 # Create the parser.
--> 449 parser = TextFileReader(filepath_or_buffer, **kwds)
450
451 if chunksize or iterator:
C:\conda\lib\site-packages\pandas\io\parsers.py in __init__(self, f, engine, **kwds)
816 self.options['has_index_names'] = kwds['has_index_names']
817
--> 818 self._make_engine(self.engine)
819
820 def close(self):
C:\conda\lib\site-packages\pandas\io\parsers.py in _make_engine(self, engine)
1047 def _make_engine(self, engine='c'):
1048 if engine == 'c':
-> 1049 self._engine = CParserWrapper(self.f, **self.options)
1050 else:
1051 if engine == 'python':
C:\conda\lib\site-packages\pandas\io\parsers.py in __init__(self, src, **kwds)
1693 kwds['allow_leading_cols'] = self.index_col is not False
1694
-> 1695 self._reader = parsers.TextReader(src, **kwds)
1696
1697 # XXX
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.__cinit__()
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._setup_parser_source()
OSError: Expected file path name or file-like object, got <class 'bytes'> type
Oh, sorry. Does fsdecode work there?
No. Using fsdecode produces the same error we originally had ([error_msg.txt](https://github.com/pandas-dev/pandas/files/1691837/error_msg.txt))
Ok thanks for trying.
Talked with Steve Dower today, and he suspects this may be the problematic line: https://github.com/pandas-dev/pandas/blob/e8f206d8192b409bc39da1ba1b2c5bcd8b65cc9f/pandas/_libs/src/parser/io.c#L30
IIUC, the Windows filesystem API is expecting those bytes to be in MBCS, but we're using UTF-8.
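A small illustration of that suspicion (Windows-only, since the `mbcs` codec exists only there; the filename is taken from the original report):

```python
name = 'test_é.txt'
name.encode('utf-8')  # b'test_\xc3\xa9.txt' -- the bytes pandas passed down
name.encode('mbcs')   # b'test_\xe9.txt'     -- the bytes the ANSI file API expects
```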
A user-level workaround is to explicitly encode your filename as mbcs before passing the bytestring to pandas. https://www.python.org/dev/peps/pep-0529/#explicitly-using-mbcs
```python
pd.read_csv(filename.encode('mbcs'))
```
is anyone able to test out that workaround?
just need a small change in the parser code to fix this (there was a PR doing this) but was deleted
@TomAugspurger that does not work. read_csv expects a `str` and not a `bytes` value. It fails with
OSError: Expected file path name or file-like object, got <class 'bytes'> type
Thanks for checking.
On Fri, Apr 20, 2018 at 3:43 PM, João D. Ferreira <notifications@github.com>
wrote:
> @TomAugspurger <https://github.com/TomAugspurger> that does not work.
> read_csv expects a str and not a bytes value. It fails with
>
> OSError: Expected file path name or file-like object, got <class 'bytes'> type
>
> —
> You are receiving this because you were mentioned.
> Reply to this email directly, view it on GitHub
> <https://github.com/pandas-dev/pandas/issues/15086#issuecomment-383217062>,
> or mute the thread
> <https://github.com/notifications/unsubscribe-auth/ABQHIiOHyt3sT7B0pHJuY5lB-cJtT5JHks5tqkiEgaJpZM4LeTSB>
> .
>
Just pinging this - I have the same issue, I'm using a workaround but it would be great if that was not required.
This needs a community patch.
I am encountering this issue. I want to try and contribute a patch. Any pointers on how to start fixing this?
I think none of the maintainers have access to a system that can reproduce this.
Perhaps some of the others in this issue can help put together a solution. | 2019-01-13T23:42:56Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-4-fd67679d1d17>", line 1, in <module>
pd.read_csv('test_é.txt')
File "d:\app\python36\lib\site-packages\pandas\io\parsers.py", line 646, in parser_f
return _read(filepath_or_buffer, kwds)
File "d:\app\python36\lib\site-packages\pandas\io\parsers.py", line 389, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "d:\app\python36\lib\site-packages\pandas\io\parsers.py", line 730, in __init__
self._make_engine(self.engine)
File "d:\app\python36\lib\site-packages\pandas\io\parsers.py", line 923, in _make_engine
self._engine = CParserWrapper(self.f, **self.options)
File "d:\app\python36\lib\site-packages\pandas\io\parsers.py", line 1390, in __init__
self._reader = _parser.TextReader(src, **kwds)
File "pandas\parser.pyx", line 373, in pandas.parser.TextReader.__cinit__ (pandas\parser.c:4184)
File "pandas\parser.pyx", line 669, in pandas.parser.TextReader._setup_parser_source (pandas\parser.c:8471)
OSError: Initializing from file failed
| 12,482 |
|||
pandas-dev/pandas | pandas-dev__pandas-24837 | f4458c18287288562b21adece524fc1b046e9724 | diff --git a/asv_bench/benchmarks/io/csv.py b/asv_bench/benchmarks/io/csv.py
--- a/asv_bench/benchmarks/io/csv.py
+++ b/asv_bench/benchmarks/io/csv.py
@@ -214,4 +214,23 @@ def time_baseline(self):
names=list(string.digits[:9]))
+class ReadCSVMemoryGrowth(BaseIO):
+
+ chunksize = 20
+ num_rows = 1000
+ fname = "__test__.csv"
+
+ def setup(self):
+ with open(self.fname, "w") as f:
+ for i in range(self.num_rows):
+ f.write("{i}\n".format(i=i))
+
+ def mem_parser_chunks(self):
+ # see gh-24805.
+ result = read_csv(self.fname, chunksize=self.chunksize)
+
+ for _ in result:
+ pass
+
+
from ..pandas_vb_common import setup # noqa: F401
diff --git a/pandas/_libs/src/parser/tokenizer.c b/pandas/_libs/src/parser/tokenizer.c
--- a/pandas/_libs/src/parser/tokenizer.c
+++ b/pandas/_libs/src/parser/tokenizer.c
@@ -300,7 +300,7 @@ static int make_stream_space(parser_t *self, size_t nbytes) {
* just because a recent chunk did not have as many words.
*/
if (self->words_len + nbytes < self->max_words_cap) {
- length = self->max_words_cap - nbytes;
+ length = self->max_words_cap - nbytes - 1;
} else {
length = self->words_len;
}
| read_csv using C engine and chunksize can grow memory usage exponentially in 0.24.0rc1
#### Code Sample
```python
import pandas as pd
NUM_ROWS = 1000
CHUNKSIZE = 20
with open('test.csv', 'w') as f:
for i in range(NUM_ROWS):
f.write('{}\n'.format(i))
for chunk_index, chunk in enumerate(pd.read_csv('test.csv', chunksize=CHUNKSIZE, engine='c')):
print(chunk_index)
```
#### Problem description
In v0.24.0rc1, using `chunksize` in `pandas.read_csv` with the C engine causes exponential memory growth (`engine='python'` works fine).
The code sample I listed uses a very small chunksize to better illustrate the issue but the issue happens with more realistic values like `NUM_ROWS = 1000000` and `CHUNKSIZE = 1024`. The `low_memory` parameter in `pd.read_csv()` doesn't affect the behavior.
On Windows, the process becomes very slow as memory usage grows.
On Linux, an out-of-memory exception is raised after some chunks are processed and the buffer length grows too much. For example:
<details>
```
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
```
```pytb
Traceback (most recent call last):
File "test_csv.py", line 10, in <module>
for chunk_index, chunk in enumerate(pd.read_csv('test.csv', chunksize=CHUNKSIZE, engine='c')):
File "/home/meira/.conda/envs/pandas024/lib/python3.6/site-packages/pandas/io/parsers.py", line 1110, in __next__
return self.get_chunk()
File "/home/meira/.conda/envs/pandas024/lib/python3.6/site-packages/pandas/io/parsers.py", line 1168, in get_chunk
return self.read(nrows=size)
File "/home/meira/.conda/envs/pandas024/lib/python3.6/site-packages/pandas/io/parsers.py", line 1134, in read
ret = self._engine.read(nrows)
File "/home/meira/.conda/envs/pandas024/lib/python3.6/site-packages/pandas/io/parsers.py", line 1977, in read
data = self._reader.read(nrows)
File "pandas/_libs/parsers.pyx", line 893, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 920, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas/_libs/parsers.pyx", line 962, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 949, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 2166, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: out of memory
```
</details>
I tried to debug the C code from the tokenizer as of 0bd454cdc9307d3a7e73403c49cc8350965628ce. The unexpected behavior seems present since 011b79fbf73b45313b47c08b4be1fc07dcb99365, which introduced these lines (and other changes) to fix #23509:
https://github.com/pandas-dev/pandas/blob/0bd454cdc9307d3a7e73403c49cc8350965628ce/pandas/_libs/src/parser/tokenizer.c#L294-L306
I'm not familiar with the code, so I could be misinterpreting it, but I believe that code block, coupled with how `self->words_cap` and `self->max_words_cap` are handled, could be the source of the issue. There are some potentially misleading variable names, like `nbytes`, which seems to refer to a number of bytes that is later interpreted as `nbytes` tokens -- I couldn't follow what's happening, but hopefully this report helps.
It seems the issue could also be related to https://github.com/pandas-dev/pandas/issues/16537 and https://github.com/pandas-dev/pandas/issues/21516 but the specific changes that cause it are newer, not present in previous releases.
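As a pure-Python sketch of my reading of the mechanism (an assumption, not the actual C code): when the requested capacity lands exactly on the current cap, a doubling allocator grows on every chunk:

```python
def grow(cap, needed):
    # Doubling allocator, sketched.
    while cap <= needed:
        cap *= 2
    return cap

max_words_cap, nbytes = 32, 20
for chunk in range(5):
    length = max_words_cap - nbytes        # pre-fix: needed == cap each chunk
    # length = max_words_cap - nbytes - 1  # post-fix: the cap stays put
    max_words_cap = grow(max_words_cap, length + nbytes)
    print(chunk, max_words_cap)            # 64, 128, 256, 512, 1024
```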
#### Expected Output
```
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
```
#### Output of ``pd.show_versions()``
<details>
```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.0.final.0
python-bits: 64
OS: Linux
OS-release: 4.12.14-lp150.12.25-default
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.24.0rc1
pytest: None
pip: 18.1
setuptools: 40.6.3
Cython: None
numpy: 1.15.4
scipy: None
pyarrow: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.7.5
pytz: 2018.9
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml.etree: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
gcsfs: None
```
</details>
| cc @gfyoung
@PMeira : Thanks for reporting this! I can confirm this error as well.
Did your code example previously work with `0.23.4` by any chance?
Code from the OP works for me on 0.23.4.
@gfyoung Yes, it worked fine with version `0.23.4`, just checked to be sure.
I first noticed it from a failing test in another module that I'm trying to port to recent versions of Pandas.
@PMeira @h-vetinari : Thanks for looking into this! Sounds like we have a regression on our hands...
@gfyoung do you have time to do this for 0.24.0? I don't think we have a release date set yet, but sometime in the next week or so?
@TomAugspurger : Yep, I'm going to look into this on the weekend.
@PMeira : Your observations are validated by what happens behind the scenes, as your numbers produce a snowball effect that causes the memory allocation to double with every iteration of reading.
It is indeed an edge case, as your numbers work out just perfectly to cause the allocated memory to land on powers of 2. In fact, your "smaller example" fails for me for that reason on my local machine.
I think I have a patch for this that prevents the memory usage from growing exponentially, but I need to test to make sure I didn't break anything else with it. | 2019-01-19T11:40:03Z | [] | [] |
Traceback (most recent call last):
File "test_csv.py", line 10, in <module>
for chunk_index, chunk in enumerate(pd.read_csv('test.csv', chunksize=CHUNKSIZE, engine='c')):
File "/home/meira/.conda/envs/pandas024/lib/python3.6/site-packages/pandas/io/parsers.py", line 1110, in __next__
return self.get_chunk()
File "/home/meira/.conda/envs/pandas024/lib/python3.6/site-packages/pandas/io/parsers.py", line 1168, in get_chunk
return self.read(nrows=size)
File "/home/meira/.conda/envs/pandas024/lib/python3.6/site-packages/pandas/io/parsers.py", line 1134, in read
ret = self._engine.read(nrows)
File "/home/meira/.conda/envs/pandas024/lib/python3.6/site-packages/pandas/io/parsers.py", line 1977, in read
data = self._reader.read(nrows)
File "pandas/_libs/parsers.pyx", line 893, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 920, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas/_libs/parsers.pyx", line 962, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 949, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 2166, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: out of memory
| 12,490 |
|||
pandas-dev/pandas | pandas-dev__pandas-24984 | 3855a27be4f04d15e7ba7aee12f0220c93148d3d | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -22,6 +22,7 @@ Other Enhancements
- Indexing of ``DataFrame`` and ``Series`` now accepts zerodim ``np.ndarray`` (:issue:`24919`)
- :meth:`Timestamp.replace` now supports the ``fold`` argument to disambiguate DST transition times (:issue:`25017`)
- :meth:`DataFrame.at_time` and :meth:`Series.at_time` now support :meth:`datetime.time` objects with timezones (:issue:`24043`)
+- :meth:`DataFrame.set_index` now works for instances of ``abc.Iterator``, provided their output is of the same length as the calling frame (:issue:`22484`, :issue:`24984`)
- :meth:`DatetimeIndex.union` now supports the ``sort`` argument. The behaviour of the sort parameter matches that of :meth:`Index.union` (:issue:`24994`)
-
diff --git a/pandas/compat/__init__.py b/pandas/compat/__init__.py
--- a/pandas/compat/__init__.py
+++ b/pandas/compat/__init__.py
@@ -137,6 +137,7 @@ def lfilter(*args, **kwargs):
reload = reload
Hashable = collections.abc.Hashable
Iterable = collections.abc.Iterable
+ Iterator = collections.abc.Iterator
Mapping = collections.abc.Mapping
MutableMapping = collections.abc.MutableMapping
Sequence = collections.abc.Sequence
@@ -199,6 +200,7 @@ def get_range_parameters(data):
Hashable = collections.Hashable
Iterable = collections.Iterable
+ Iterator = collections.Iterator
Mapping = collections.Mapping
MutableMapping = collections.MutableMapping
Sequence = collections.Sequence
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -33,7 +33,7 @@
from pandas import compat
from pandas.compat import (range, map, zip, lmap, lzip, StringIO, u,
- PY36, raise_with_traceback,
+ PY36, raise_with_traceback, Iterator,
string_and_binary_types)
from pandas.compat.numpy import function as nv
from pandas.core.dtypes.cast import (
@@ -4025,7 +4025,8 @@ def set_index(self, keys, drop=True, append=False, inplace=False,
This parameter can be either a single column key, a single array of
the same length as the calling DataFrame, or a list containing an
arbitrary combination of column keys and arrays. Here, "array"
- encompasses :class:`Series`, :class:`Index` and ``np.ndarray``.
+ encompasses :class:`Series`, :class:`Index`, ``np.ndarray``, and
+ instances of :class:`abc.Iterator`.
drop : bool, default True
Delete columns to be used as the new index.
append : bool, default False
@@ -4104,6 +4105,32 @@ def set_index(self, keys, drop=True, append=False, inplace=False,
if not isinstance(keys, list):
keys = [keys]
+ err_msg = ('The parameter "keys" may be a column key, one-dimensional '
+ 'array, or a list containing only valid column keys and '
+ 'one-dimensional arrays.')
+
+ missing = []
+ for col in keys:
+ if isinstance(col, (ABCIndexClass, ABCSeries, np.ndarray,
+ list, Iterator)):
+ # arrays are fine as long as they are one-dimensional
+ # iterators get converted to list below
+ if getattr(col, 'ndim', 1) != 1:
+ raise ValueError(err_msg)
+ else:
+ # everything else gets tried as a key; see GH 24969
+ try:
+ found = col in self.columns
+ except TypeError:
+ raise TypeError(err_msg + ' Received column of '
+ 'type {}'.format(type(col)))
+ else:
+ if not found:
+ missing.append(col)
+
+ if missing:
+ raise KeyError('None of {} are in the columns'.format(missing))
+
if inplace:
frame = self
else:
@@ -4132,6 +4159,9 @@ def set_index(self, keys, drop=True, append=False, inplace=False,
elif isinstance(col, (list, np.ndarray)):
arrays.append(col)
names.append(None)
+ elif isinstance(col, Iterator):
+ arrays.append(list(col))
+ names.append(None)
# from here, col can only be a column label
else:
arrays.append(frame[col]._values)
@@ -4139,6 +4169,15 @@ def set_index(self, keys, drop=True, append=False, inplace=False,
if drop:
to_remove.append(col)
+ if len(arrays[-1]) != len(self):
+ # check newest element against length of calling frame, since
+ # ensure_index_from_sequences would not raise for append=False.
+ raise ValueError('Length mismatch: Expected {len_self} rows, '
+ 'received array of length {len_col}'.format(
+ len_self=len(self),
+ len_col=len(arrays[-1])
+ ))
+
index = ensure_index_from_sequences(arrays, names)
if verify_integrity and not index.is_unique:
| Regression in DataFrame.set_index with class instance column keys
The following code worked in Pandas 0.23.4 but not in Pandas 0.24.0 (I'm on Python 3.7.2).
```python
import pandas as pd
class Thing:
# (Production code would also ensure a Thing instance's hash
# and equality testing depended on name and color)
def __init__(self, name, color):
self.name = name
self.color = color
def __str__(self):
return "<Thing %r>" % (self.name,)
thing1 = Thing('One', 'red')
thing2 = Thing('Two', 'blue')
df = pd.DataFrame({thing1: [0, 1], thing2: [2, 3]})
df.set_index([thing2])
```
In Pandas 0.23.4, I get the following correct result:
```
<Thing 'One'>
<Thing 'Two'>
2 0
3 1
```
In Pandas 0.24.0, I get the following error:
```Python-traceback
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../venv/lib/python3.7/site-packages/pandas/core/frame.py", line 4153, in set_index
raise ValueError(err_msg)
ValueError: The parameter "keys" may be a column key, one-dimensional array, or a list containing only valid column keys and one-dimensional arrays.
```
After looking at Pandas 0.24.0's implementation of `DataFrame.set_index`:
https://github.com/pandas-dev/pandas/blob/83eb2428ceb6257042173582f3f436c2c887aa69/pandas/core/frame.py#L4144-L4153
I noticed that `is_scalar` returns `False` for `thing1` in Pandas 0.24.0:
```Python-console
>>> from pandas.core.dtypes.common import is_scalar
>>> is_scalar(thing1)
False
```
I suspect that it is incorrect to test DataFrame column keys using `is_scalar`.
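For comparison, such objects remain perfectly valid mapping keys (reusing `thing1` and the `is_scalar` import from above):

```python
d = {thing1: 1}
d[thing1]           # -> 1, hashable objects work fine as plain dict keys
is_scalar(thing1)   # -> False, which is what trips set_index in 0.24.0
```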
# Output of ``pd.show_versions()``
<details>
## `pd.show_versions()` from Pandas 0.23.4
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.2.final.0
python-bits: 64
OS: Darwin
OS-release: 17.7.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.23.4
pytest: None
pip: 18.1
setuptools: 40.4.3
Cython: None
numpy: 1.16.0
scipy: 1.1.0
pyarrow: None
xarray: None
IPython: 7.2.0
sphinx: None
patsy: None
dateutil: 2.7.3
pytz: 2018.5
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: 1.1.2
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
## `pd.show_versions()` from Pandas 0.24.0
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.2.final.0
python-bits: 64
OS: Darwin
OS-release: 17.7.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.24.0
pytest: None
pip: 18.1
setuptools: 40.4.3
Cython: None
numpy: 1.16.0
scipy: 1.1.0
pyarrow: None
xarray: None
IPython: 7.2.0
sphinx: None
patsy: None
dateutil: 2.7.3
pytz: 2018.5
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: 1.1.2
lxml.etree: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
gcsfs: None
</details>
| We have had quite some discussion lately about `set_index` (see eg https://github.com/pandas-dev/pandas/issues/24046), and the actual change (that started to use `is_scalar`, I think) that caused this regression is https://github.com/pandas-dev/pandas/pull/22486 and https://github.com/pandas-dev/pandas/pull/24762
In general, the usage of `is_scalar` gives problems with custom objects. E.g. we also fixed this in fillna (https://github.com/pandas-dev/pandas/issues/20411).
cc @h-vetinari
@jorisvandenbossche
Does pandas support custom objects as labels? I think that's bound to break in many places. The code previously tried *everything* it got as a key, so in this sense this is a regression, yes.
I'm a bit stumped as to how to deal with this. Column keys should IMO clearly be scalar (or tuples, grudgingly) - and that's the only reason `is_scalar` is there. CC @jreback
@wkschwartz
Your object (at least the toy version) looks a bit like a tuple. As an immediate workaround, I'd suggest trying to inherit `Thing` from `tuple`; then the `isinstance(..., tuple)` side should work, at least.
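A sketch of that suggestion (the field names are assumptions): a `namedtuple` keeps the attribute access of `Thing` while being a `tuple` subclass:

```python
from collections import namedtuple

import pandas as pd

Thing = namedtuple('Thing', ['name', 'color'])
thing1 = Thing('One', 'red')
thing2 = Thing('Two', 'blue')

df = pd.DataFrame({thing1: [0, 1], thing2: [2, 3]})
df.set_index([thing2])  # tuple subclasses pass the 0.24.0 key check
```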
In my production code, I use [dataclasses](https://docs.python.org/3/library/dataclasses.html) as custom objects in both column keys and row indices, which worked throughout Pandas 0.23.4. If Pandas 0.24 or later drops support for custom classes in row/column indices, I would be stuck at 0.23.4 forever. This is why I view the change as a regression.
> Does pandas support custom objects as labels?
We didn't disallow it previously, so yes.
This may have *happened* to work, but we don't explicitly support custom objects as labels. I'm not against reverting this, but it's buyer-beware here.
I never could find anything in the documentation that takes a stance on what can or can’t be column keys, except the general notion that DataFrames are dict-like. From this I surmised that column keys should be hashable and immutable. Did I miss something in the documentation?
| 2019-01-28T17:52:56Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../venv/lib/python3.7/site-packages/pandas/core/frame.py", line 4153, in set_index
raise ValueError(err_msg)
ValueError: The parameter "keys" may be a column key, one-dimensional array, or a list containing only valid column keys and one-dimensional arrays.
| 12,509 |
|||
pandas-dev/pandas | pandas-dev__pandas-25058 | 5e224fb8b474df8e7d8053bfbae171f500a65f54 | diff --git a/doc/source/whatsnew/v0.24.2.rst b/doc/source/whatsnew/v0.24.2.rst
--- a/doc/source/whatsnew/v0.24.2.rst
+++ b/doc/source/whatsnew/v0.24.2.rst
@@ -51,7 +51,7 @@ Bug Fixes
**I/O**
--
+- Bug in reading a HDF5 table-format ``DataFrame`` created in Python 2, in Python 3 (:issue:`24925`)
-
-
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -3288,7 +3288,7 @@ def get_attrs(self):
self.nan_rep = getattr(self.attrs, 'nan_rep', None)
self.encoding = _ensure_encoding(
getattr(self.attrs, 'encoding', None))
- self.errors = getattr(self.attrs, 'errors', 'strict')
+ self.errors = _ensure_decoded(getattr(self.attrs, 'errors', 'strict'))
self.levels = getattr(
self.attrs, 'levels', None) or []
self.index_axes = [
| reading of old pandas dataframe (created in python 2) failed with 0.23.4
Hi,
Firstly, I have to apologize that my description will be very vague.
I have a problem with one of my dataframes, which was created earlier with Python 2 and an older version of pandas (unfortunately I do not know which version). Now I cannot open it in Python 3 with pandas 0.23.4 (loading in Python 3 with pandas 0.22.0 works fine).
For reading, I am using:
```python
hdf = pd.HDFStore(src_filename, mode="r")
data_frame = hdf.select(src_tablename)
```
My stack trace in pandas 0.23.4 is:
```
Traceback (most recent call last):
data_frame = hdf.select(src_tablename)
File "/home/rbenes/virtual_envs/iface_venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 743, in select
return it.get_result()
File "/home/rbenes/virtual_envs/iface_venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 1485, in get_result
results = self.func(self.start, self.stop, where)
File "/home/rbenes/virtual_envs/iface_venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 734, in func
columns=columns)
File "/home/rbenes/virtual_envs/iface_venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 4182, in read
if not self.read_axes(where=where, **kwargs):
File "/home/rbenes/virtual_envs/iface_venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 3385, in read_axes
errors=self.errors)
File "/home/rbenes/virtual_envs/iface_venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 2195, in convert
self.data, nan_rep=nan_rep, encoding=encoding, errors=errors)
File "/home/rbenes/virtual_envs/iface_venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 4658, in _unconvert_string_array
data = libwriters.string_array_replace_from_nan_rep(data, nan_rep)
File "pandas/_libs/writers.pyx", line 158, in pandas._libs.writers.string_array_replace_from_nan_rep
ValueError: Buffer dtype mismatch, expected 'Python object' but got 'double'
```
This stack trace led me to this pull request: https://github.com/pandas-dev/pandas/pull/24510
If I list it e.g. with h5ls it looks fine (it is loaded and content looks fine).
Unfortunately, I cannot share the dataframe, because it is private, and I cannot reproduce the process of its creation with older versions any more :-(. So I am not able to deliver that unreadable dataframe.
I debugged pandas and found that this patch helped me.
```
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 4e103482f..2ab6ddb5b 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -3288,7 +3288,7 @@ class Table(Fixed):
self.nan_rep = getattr(self.attrs, 'nan_rep', None)
self.encoding = _ensure_encoding(
getattr(self.attrs, 'encoding', None))
- self.errors = getattr(self.attrs, 'errors', 'strict')
+ self.errors = _ensure_decoded(getattr(self.attrs, 'errors', 'strict'))
self.levels = getattr(
self.attrs, 'levels', None) or []
self.index_axes = [
```
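For context, a rough sketch of why the decode matters (values assumed from the traceback; `_ensure_decoded` is a pandas-internal helper):

```python
# Attributes written by Python 2 come back as bytes on Python 3,
# while the reading code compares them against str values.
errors = b'strict'                  # what getattr(self.attrs, 'errors') yields
errors == 'strict'                  # False on Python 3 -> wrong code path
errors.decode('utf-8') == 'strict'  # True after _ensure_decoded-style handling
```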
Can anyone advise me whether such a fix is fine, and if yes, can I send it as a pull request without any reproducer?
Thank you.
| you should try with 0.24.0 which is releasing today and has that patch
I know that pull request #24510 will be in 0.24, but my addition of _ensure_decoded() in my patch is in a different place.
your diff looks the same
I am not sure...
My patch is trying to change this - https://github.com/pandas-dev/pandas/blob/master/pandas/io/pytables.py#L3291
But the pull request mentioned changed this - https://github.com/pandas-dev/pandas/blob/master/pandas/io/pytables.py#L2524
Maybe there is some hierarchy that I do not see, but without my patch master (which will probably be the base for 0.24?) fails in my case (I know my case is specific).
well this would require a test; construct a dummy file that fails and that the patch fixes, just like the referenced issue
I found the reproducer
saving of dataframe:
```
df_orig = pd.DataFrame({
"a": ["a", "b"],
"b": [2, 3]
})
filename = "a.h5"
hdf = pd.HDFStore(filename, mode="w")
hdf.put("table", df_orig, format='table', data_columns=True, index=None)
hdf.close()
```
env:
Python 2.7.15
pandas 0.23.4
numpy 1.16.0
loading:
```
hdf = pd.HDFStore(filename, mode="r")
df_loaded = hdf.select("table")
hdf.close()
print("loaded")
print(df_loaded.equals(df_orig))
```
env:
Python 3.6.7
pandas 0.23.4
numpy 1.14.3
```
Traceback (most recent call last):
File "pandas_test.py", line 19, in <module>
df_loaded = hdf.select("table")
File "/home/rbenes/virtual_envs/venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 741, in select
return it.get_result()
File "/home/rbenes/virtual_envs/venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 1483, in get_result
results = self.func(self.start, self.stop, where)
File "/home/rbenes/virtual_envs/venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 734, in func
columns=columns)
File "/home/rbenes/virtual_envs/venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 4180, in read
if not self.read_axes(where=where, **kwargs):
File "/home/rbenes/virtual_envs/venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 3383, in read_axes
errors=self.errors)
File "/home/rbenes/virtual_envs/venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 2193, in convert
self.data, nan_rep=nan_rep, encoding=encoding, errors=errors)
File "/home/rbenes/virtual_envs/venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 4656, in _unconvert_string_array
data = libwriters.string_array_replace_from_nan_rep(data, nan_rep)
File "pandas/_libs/writers.pyx", line 158, in pandas._libs.writers.string_array_replace_from_nan_rep
ValueError: Buffer dtype mismatch, expected 'Python object' but got 'double'
```
so I will prepare a pull request with a test using this dummy dataframe... | 2019-01-31T17:54:28Z | [] | [] |
Traceback (most recent call last):
data_frame = hdf.select(src_tablename)
File "/home/rbenes/virtual_envs/iface_venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 743, in select
return it.get_result()
File "/home/rbenes/virtual_envs/iface_venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 1485, in get_result
results = self.func(self.start, self.stop, where)
File "/home/rbenes/virtual_envs/iface_venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 734, in func
columns=columns)
File "/home/rbenes/virtual_envs/iface_venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 4182, in read
if not self.read_axes(where=where, **kwargs):
File "/home/rbenes/virtual_envs/iface_venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 3385, in read_axes
errors=self.errors)
File "/home/rbenes/virtual_envs/iface_venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 2195, in convert
self.data, nan_rep=nan_rep, encoding=encoding, errors=errors)
File "/home/rbenes/virtual_envs/iface_venv36_new_pkgs/lib/python3.6/site-packages/pandas/io/pytables.py", line 4658, in _unconvert_string_array
data = libwriters.string_array_replace_from_nan_rep(data, nan_rep)
File "pandas/_libs/writers.pyx", line 158, in pandas._libs.writers.string_array_replace_from_nan_rep
ValueError: Buffer dtype mismatch, expected 'Python object' but got 'double'
| 12,517 |
|||
pandas-dev/pandas | pandas-dev__pandas-25124 | 659e0cae6be2d7ab3370cc7d8ab936bc3ee1b159 | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -28,6 +28,8 @@ Other Enhancements
Backwards incompatible API changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+- :meth:`Timestamp.strptime` will now raise a NotImplementedError (:issue:`21257`)
+
.. _whatsnew_0250.api.other:
Other API Changes
diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -374,7 +374,6 @@ class NaTType(_NaT):
utctimetuple = _make_error_func('utctimetuple', datetime)
timetz = _make_error_func('timetz', datetime)
timetuple = _make_error_func('timetuple', datetime)
- strptime = _make_error_func('strptime', datetime)
strftime = _make_error_func('strftime', datetime)
isocalendar = _make_error_func('isocalendar', datetime)
dst = _make_error_func('dst', datetime)
@@ -388,6 +387,14 @@ class NaTType(_NaT):
# The remaining methods have docstrings copy/pasted from the analogous
# Timestamp methods.
+ strptime = _make_error_func('strptime', # noqa:E128
+ """
+ Timestamp.strptime(string, format)
+
+ Function is not implemented. Use pd.to_datetime().
+ """
+ )
+
utcfromtimestamp = _make_error_func('utcfromtimestamp', # noqa:E128
"""
Timestamp.utcfromtimestamp(ts)
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -697,6 +697,17 @@ class Timestamp(_Timestamp):
"""
return cls(datetime.fromtimestamp(ts))
+ # Issue 25016.
+ @classmethod
+ def strptime(cls, date_string, format):
+ """
+ Timestamp.strptime(string, format)
+
+ Function is not implemented. Use pd.to_datetime().
+ """
+        raise NotImplementedError("Timestamp.strptime() is not implemented. "
+                                  "Use to_datetime() to parse date strings.")
+
@classmethod
def combine(cls, date, time):
"""
| Timestamp.strptime %z not supported
#### Code Sample, a copy-pastable example if possible
```python
fmt = '%Y%m%d-%H%M%S-%f%z'
ts = '20190129-235348-183747+0000'
pd.Timestamp.strptime(ts, fmt)
```
```python
Traceback (most recent call last):
File "/scratch.py", line 6, in <module>
pd.Timestamp.strptime(ts, fmt)
File "/python/lib/python3.6/_strptime.py", line 576, in _strptime_datetime
return cls(*args)
File "pandas/_libs/tslibs/timestamps.pyx", line 748, in pandas._libs.tslibs.timestamps.Timestamp.__new__
TypeError: an integer is required
```
#### Problem description
Timestamp.strptime does not support %z. The issue was fixed for `pd.to_datetime` in #19979.
#### Expected Output
The same as `pd.to_datetime(ts, format=fmt)`:
```python
Timestamp('2019-01-29 23:53:48.183747+0000', tz='UTC')
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.8.final.0
python-bits: 64
OS: Linux
OS-release: 4.15.0-43-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: C.UTF-8
LANG: C.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.24.0
pytest: 4.1.1
pip: 18.1
setuptools: 40.6.3
Cython: 0.29.2
numpy: 1.15.4
scipy: None
pyarrow: None
xarray: None
IPython: 7.2.0
sphinx: 1.8.2
patsy: None
dateutil: 2.7.5
pytz: 2018.9
blosc: None
bottleneck: None
tables: 3.4.4
numexpr: 2.6.9
feather: None
matplotlib: 3.0.2
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml.etree: 4.3.0
bs4: None
html5lib: None
sqlalchemy: 1.2.16
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
gcsfs: None
</details>
| Thanks for the report.
The fix is luckily straightforward.
```
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -736,8 +736,8 @@ class Timestamp(_Timestamp):
# microsecond[, nanosecond[, tzinfo]]]]]])
ts_input = datetime(ts_input, freq, tz, unit or 0,
year or 0, month or 0, day or 0)
- nanosecond = hour
- tz = minute
+ nanosecond = minute
+ tz = hour
freq = None
```
However, we need to make an API change in the `Timestamp` constructor as well and switch the positions of the `tzinfo` and `nanosecond` arguments.
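To make the mix-up concrete: `datetime.strptime` builds its result via `cls(*args)`, so the parsed `tzinfo` arrives positionally in the slot that the 0.24-era `Timestamp.__new__` shown above treats as `nanosecond`. A rough, hedged illustration:

```python
from datetime import datetime, timezone
import pandas as pd

# the positional components strptime would pass for the example string
args = (2019, 1, 29, 23, 53, 48, 183747, timezone.utc)

datetime(*args)      # fine: the 8th positional argument is tzinfo
pd.Timestamp(*args)  # on pandas 0.24: TypeError: an integer is required,
                     # because tzinfo lands in the nanosecond slot
```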
Can I give this a try if no one is working on this?
Go for it, unless @mroeschke was planning on?
Sure go for it @saurav2608 | 2019-02-03T19:01:57Z | [] | [] |
Traceback (most recent call last):
File "/scratch.py", line 6, in <module>
pd.Timestamp.strptime(ts, fmt)
File "/python/lib/python3.6/_strptime.py", line 576, in _strptime_datetime
return cls(*args)
File "pandas/_libs/tslibs/timestamps.pyx", line 748, in pandas._libs.tslibs.timestamps.Timestamp.__new__
TypeError: an integer is required
| 12,528 |
|||
pandas-dev/pandas | pandas-dev__pandas-25246 | 2448e5229683acbe7de57f2d53065247aa085b1f | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -139,7 +139,7 @@ Indexing
Missing
^^^^^^^
--
+- Fixed misleading exception message in :meth:`Series.interpolate` if argument ``order`` is required, but omitted (:issue:`10633`, :issue:`24014`).
-
-
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1115,24 +1115,18 @@ def check_int_bool(self, inplace):
fill_value=fill_value,
coerce=coerce,
downcast=downcast)
- # try an interp method
- try:
- m = missing.clean_interp_method(method, **kwargs)
- except ValueError:
- m = None
-
- if m is not None:
- r = check_int_bool(self, inplace)
- if r is not None:
- return r
- return self._interpolate(method=m, index=index, values=values,
- axis=axis, limit=limit,
- limit_direction=limit_direction,
- limit_area=limit_area,
- fill_value=fill_value, inplace=inplace,
- downcast=downcast, **kwargs)
-
- raise ValueError("invalid method '{0}' to interpolate.".format(method))
+ # validate the interp method
+ m = missing.clean_interp_method(method, **kwargs)
+
+ r = check_int_bool(self, inplace)
+ if r is not None:
+ return r
+ return self._interpolate(method=m, index=index, values=values,
+ axis=axis, limit=limit,
+ limit_direction=limit_direction,
+ limit_area=limit_area,
+ fill_value=fill_value, inplace=inplace,
+ downcast=downcast, **kwargs)
def _interpolate_with_fill(self, method='pad', axis=0, inplace=False,
limit=None, fill_value=None, coerce=False,
diff --git a/pandas/core/missing.py b/pandas/core/missing.py
--- a/pandas/core/missing.py
+++ b/pandas/core/missing.py
@@ -293,9 +293,10 @@ def _interpolate_scipy_wrapper(x, y, new_x, method, fill_value=None,
bounds_error=bounds_error)
new_y = terp(new_x)
elif method == 'spline':
- # GH #10633
- if not order:
- raise ValueError("order needs to be specified and greater than 0")
+ # GH #10633, #24014
+ if isna(order) or (order <= 0):
+ raise ValueError("order needs to be specified and greater than 0; "
+ "got order: {}".format(order))
terp = interpolate.UnivariateSpline(x, y, k=order, **kwargs)
new_y = terp(new_x)
else:
| Unnecessary bare except at class Block, function interpolate hides actual error
#### Code Sample, a copy-pastable example if possible
```python
# Minimal example:
import pandas as pd
df = pd.Series([0,1,pd.np.nan,3,4])
df.interpolate(method='spline')
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "D:\venvs\food_and_drinking\lib\site-packages\pandas\core\generic.py", line 6034, in interpolate
**kwargs)
File "D:\venvs\food_and_drinking\lib\site-packages\pandas\core\internals.py", line 3702, in interpolate
return self.apply('interpolate', **kwargs)
File "D:\venvs\food_and_drinking\lib\site-packages\pandas\core\internals.py", line 3581, in apply
applied = getattr(b, f)(**kwargs)
File "D:\venvs\food_and_drinking\lib\site-packages\pandas\core\internals.py", line 1168, in interpolate
raise ValueError("invalid method '{0}' to interpolate.".format(method))
ValueError: invalid method 'spline' to interpolate.
```
##### Expected output
```python
ValueError: You must specify the order of the spline or polynomial.
```
#### Problem description
If the interpolation ``order`` parameter is not specified, it raises an error which claims the method itself is invalid.
internals.py, lines 1152-1155:
```python
try:
    m = missing.clean_interp_method(method, **kwargs)
except:
    m = None
```
If there were no such try/except block around the missing.clean_interp_method function call, we would get the proper exception from missing.py's clean_interp_method, as the sketch below shows.
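For illustration, calling the validator directly (a pandas-internal API; signature inferred from the snippet above) surfaces the message the bare except swallows:

```python
from pandas.core import missing

missing.clean_interp_method('spline')  # no `order` passed
# ValueError: You must specify the order of the spline or polynomial.
```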
Pandas version: 0.23.4
| Do you have a minimal example? http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
Minimal example:
```python
import pandas as pd
df = pd.Series([0, 1, pd.np.nan, 3, 4])
df.interpolate(method='spline')
```
For clarity, the expected output is
```
ValueError: You must specify the order of the spline or polynomial.
```
@seboktamas can you edit your original post to include the minimal example and the expected output? | 2019-02-09T18:33:57Z | [] | [] |
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "D:\venvs\food_and_drinking\lib\site-packages\pandas\core\generic.py", line 6034, in interpolate
**kwargs)
File "D:\venvs\food_and_drinking\lib\site-packages\pandas\core\internals.py", line 3702, in interpolate
return self.apply('interpolate', **kwargs)
File "D:\venvs\food_and_drinking\lib\site-packages\pandas\core\internals.py", line 3581, in apply
applied = getattr(b, f)(**kwargs)
File "D:\venvs\food_and_drinking\lib\site-packages\pandas\core\internals.py", line 1168, in interpolate
raise ValueError("invalid method '{0}' to interpolate.".format(method))
ValueError: invalid method 'spline' to interpolate.
| 12,544 |
|||
pandas-dev/pandas | pandas-dev__pandas-25469 | c9863865c217867583e8f6592ba88d9200601992 | diff --git a/doc/source/reference/groupby.rst b/doc/source/reference/groupby.rst
--- a/doc/source/reference/groupby.rst
+++ b/doc/source/reference/groupby.rst
@@ -99,6 +99,7 @@ application to columns of a specific data type.
DataFrameGroupBy.idxmax
DataFrameGroupBy.idxmin
DataFrameGroupBy.mad
+ DataFrameGroupBy.nunique
DataFrameGroupBy.pct_change
DataFrameGroupBy.plot
DataFrameGroupBy.quantile
diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -210,6 +210,7 @@ Groupby/Resample/Rolling
^^^^^^^^^^^^^^^^^^^^^^^^
- Bug in :meth:`pandas.core.resample.Resampler.agg` with a timezone aware index where ``OverflowError`` would raise when passing a list of functions (:issue:`22660`)
+- Bug in :meth:`pandas.core.groupby.DataFrameGroupBy.nunique` in which the names of column levels were lost (:issue:`23222`)
-
-
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -1579,6 +1579,7 @@ def groupby_series(obj, col=None):
from pandas.core.reshape.concat import concat
results = [groupby_series(obj[col], col) for col in obj.columns]
results = concat(results, axis=1)
+ results.columns.names = obj.columns.names
if not self.as_index:
results.index = ibase.default_index(len(results))
| Resampling using `nunique` causes multi-level columns to lose their level names
#### Problem description
Resampling using `nunique` causes multi-level columns to lose their level names.
https://nbviewer.jupyter.org/gist/taljaards/20e945b7572aea1f4eb4aa4c6e823037
I only ran into this issue with `nunique`; I do not know if this is also the case for some other functions.
#### Expected Output
To not drop the level names, as in the first resample example.
#### Output of ``pd.show_versions()``
<details>
```
import pandas as pd
pd.show_versions()
Backend TkAgg is interactive backend. Turning interactive mode on.
Matplotlib support failed
Traceback (most recent call last):
File "C:\Users\Admin\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\183.3647.8\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 23, in do_import
succeeded = activate_func()
File "C:\Users\Admin\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\183.3647.8\helpers\pydev\pydev_ipython\matplotlibtools.py", line 141, in activate_pylab
pylab = sys.modules['pylab']
KeyError: 'pylab'
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.6.final.0
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 142 Stepping 9, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
pandas: 0.23.4
pytest: 3.8.2
pip: 18.1
setuptools: 40.4.3
Cython: 0.28.5
numpy: 1.15.1
scipy: 1.1.0
pyarrow: None
xarray: None
IPython: 7.0.1
sphinx: 1.8.1
patsy: 0.5.0
dateutil: 2.7.3
pytz: 2018.5
blosc: None
bottleneck: 1.2.1
tables: 3.4.4
numexpr: 2.6.8
feather: None
matplotlib: 3.0.0
openpyxl: 2.5.8
xlrd: 1.1.0
xlwt: 1.3.0
xlsxwriter: 1.1.1
lxml: 4.2.5
bs4: 4.6.3
html5lib: 1.0.1
sqlalchemy: 1.2.12
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
```
</details>
| I'll have a look at this today. | 2019-02-28T04:25:05Z | [] | [] |
Traceback (most recent call last):
File "C:\Users\Admin\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\183.3647.8\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 23, in do_import
succeeded = activate_func()
File "C:\Users\Admin\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\183.3647.8\helpers\pydev\pydev_ipython\matplotlibtools.py", line 141, in activate_pylab
pylab = sys.modules['pylab']
KeyError: 'pylab'
| 12,580 |
|||
pandas-dev/pandas | pandas-dev__pandas-25474 | c9863865c217867583e8f6592ba88d9200601992 | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3797,7 +3797,12 @@ def drop(self, labels=None, axis=0, index=None, columns=None,
axis : {0 or 'index', 1 or 'columns'}, default 0
Whether to drop labels from the index (0 or 'index') or
columns (1 or 'columns').
- index, columns : single label or list-like
+ index : single label or list-like
+ Alternative to specifying axis (``labels, axis=0``
+ is equivalent to ``index=labels``).
+
+ .. versionadded:: 0.21.0
+ columns : single label or list-like
Alternative to specifying axis (``labels, axis=1``
is equivalent to ``columns=labels``).
@@ -3813,11 +3818,12 @@ def drop(self, labels=None, axis=0, index=None, columns=None,
Returns
-------
DataFrame
+ DataFrame without the removed index or column labels.
Raises
------
KeyError
- If none of the labels are found in the selected axis
+ If any of the labels is not found in the selected axis.
See Also
--------
@@ -3830,7 +3836,7 @@ def drop(self, labels=None, axis=0, index=None, columns=None,
Examples
--------
- >>> df = pd.DataFrame(np.arange(12).reshape(3,4),
+ >>> df = pd.DataFrame(np.arange(12).reshape(3, 4),
... columns=['A', 'B', 'C', 'D'])
>>> df
A B C D
@@ -3867,7 +3873,7 @@ def drop(self, labels=None, axis=0, index=None, columns=None,
>>> df = pd.DataFrame(index=midx, columns=['big', 'small'],
... data=[[45, 30], [200, 100], [1.5, 1], [30, 20],
... [250, 150], [1.5, 0.8], [320, 250],
- ... [1, 0.8], [0.3,0.2]])
+ ... [1, 0.8], [0.3, 0.2]])
>>> df
big small
lama speed 45.0 30.0
| Error in documentation of DataFrame.drop
#### Code Sample, a copy-pastable example if possible
```python
df = pd.DataFrame(np.arange(12).reshape(3,4), columns=['A', 'B', 'C', 'D'])
df.drop(columns=['A','not_occurring'])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda3/lib/python3.7/site-packages/pandas/core/frame.py", line 3697, in drop
errors=errors)
File "/opt/anaconda3/lib/python3.7/site-packages/pandas/core/generic.py", line 3111, in drop
obj = obj._drop_axis(labels, axis, level=level, errors=errors)
File "/opt/anaconda3/lib/python3.7/site-packages/pandas/core/generic.py", line 3143, in _drop_axis
new_axis = axis.drop(labels, errors=errors)
File "/opt/anaconda3/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 4404, in drop
'{} not found in axis'.format(labels[mask]))
KeyError: "['not_occurring'] not found in axis"
```
#### Problem description
In the pandas documentation for DataFrame drop (https://pandas-docs.github.io/pandas-docs-travis/reference/api/pandas.DataFrame.drop.html#pandas.DataFrame.drop), the following is mentioned:
`KeyError: If none of the labels are found in the selected axis`
However, when looking at the provided code snippet, we see that even though there is a label which is found in the selected axis (`'A'`), the KeyError is thrown.
#### Expected Output
```python
df = pd.DataFrame(np.arange(12).reshape(3,4), columns=['A', 'B', 'C', 'D'])
df.drop(columns=['A','not_occurring'])
B C D
0 1 2 3
1 5 6 7
2 9 10 11
```
### Suggested Fix
Although from this issue it seems like the code is at fault, I would suggest changing the documentation to `KeyError: If any of the labels is not found in the selected axis`. If the core team agrees with this fix, then I would be happy to provide a pull request that does this.
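For completeness, `drop` already has a documented way to skip missing labels, the `errors='ignore'` option:

```python
df.drop(columns=['A', 'not_occurring'], errors='ignore')
#    B   C   D
# 0  1   2   3
# 1  5   6   7
# 2  9  10  11
```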
#### Output of ``pd.show_versions()``
<details>
commit: None
python: 3.7.2.final.0
python-bits: 64
OS: Linux
OS-release: 4.15.0-45-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.25.0.dev0+162.gc9863865c.dirty
pytest: 4.2.1
pip: 19.0.1
setuptools: 40.8.0
Cython: 0.29.5
numpy: 1.15.4
scipy: 1.2.0
pyarrow: 0.11.1
xarray: 0.11.3
IPython: 7.2.0
sphinx: 1.8.4
patsy: 0.5.1
dateutil: 2.7.5
pytz: 2018.9
blosc: None
bottleneck: 1.2.1
tables: 3.4.4
numexpr: 2.6.9
feather: None
matplotlib: 3.0.2
openpyxl: 2.6.0
xlrd: 1.2.0
xlwt: 1.3.0
xlsxwriter: 1.1.2
lxml.etree: 4.3.1
bs4: 4.7.1
html5lib: 1.0.1
sqlalchemy: 1.2.18
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: 0.2.0
fastparquet: 0.2.1
pandas_gbq: None
pandas_datareader: None
gcsfs: None
</details>
| 2019-02-28T11:41:56Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda3/lib/python3.7/site-packages/pandas/core/frame.py", line 3697, in drop
errors=errors)
File "/opt/anaconda3/lib/python3.7/site-packages/pandas/core/generic.py", line 3111, in drop
obj = obj._drop_axis(labels, axis, level=level, errors=errors)
File "/opt/anaconda3/lib/python3.7/site-packages/pandas/core/generic.py", line 3143, in _drop_axis
new_axis = axis.drop(labels, errors=errors)
File "/opt/anaconda3/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 4404, in drop
'{} not found in axis'.format(labels[mask]))
KeyError: "['not_occurring'] not found in axis"
| 12,581 |
||||
pandas-dev/pandas | pandas-dev__pandas-25479 | 50c40ff1afa4a4a6772225e02c320294c422ed1a | diff --git a/doc/source/development/contributing.rst b/doc/source/development/contributing.rst
--- a/doc/source/development/contributing.rst
+++ b/doc/source/development/contributing.rst
@@ -435,7 +435,7 @@ reducing the turn-around time for checking your changes.
# compile the reference docs for a single function
python make.py clean
- python make.py --single DataFrame.join
+ python make.py --single pandas.DataFrame.join
For comparison, a full documentation build may take 15 minutes, but a single
section may take 15 seconds. Subsequent builds, which only process portions
| Contribution Guide - Building the Documentation single function error
#### Code Sample, a copy-pastable example if possible
```
python make.py --single DataFrame.join
Traceback (most recent call last):
File "make.py", line 339, in <module>
sys.exit(main())
File "make.py", line 334, in main
args.verbosity, args.warnings_are_errors)
File "make.py", line 46, in __init__
single_doc = self._process_single_doc(single_doc)
File "make.py", line 88, in _process_single_doc
'pandas.DataFrame.head)').format(single_doc))
ValueError: --single=DataFrame.join not understood. Value should be a valid path to a .rst or .ipynb file, or a valid pandas object (e.g. categorical.rst or pandas.DataFrame.head)
```
#### Problem description
Using the code snippet described in the pandas contribution guide (https://pandas-docs.github.io/pandas-docs-travis/development/contributing.html#id48), we get a ValueError. This is fixed by instead using the following command:
```
python make.py --single pandas.DataFrame.join
```
I will create a PR updating the documentation.
| 2019-02-28T14:29:07Z | [] | [] |
Traceback (most recent call last):
File "make.py", line 339, in <module>
sys.exit(main())
File "make.py", line 334, in main
args.verbosity, args.warnings_are_errors)
File "make.py", line 46, in __init__
single_doc = self._process_single_doc(single_doc)
File "make.py", line 88, in _process_single_doc
'pandas.DataFrame.head)').format(single_doc))
ValueError: --single=DataFrame.join not understood. Value should be a valid path to a .rst or .ipynb file, or a valid pandas object (e.g. categorical.rst or pandas.DataFrame.head)
| 12,582 |
||||
pandas-dev/pandas | pandas-dev__pandas-25588 | 46639512c06300a9844ea27f90167d5648c9b93a | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -176,6 +176,7 @@ Performance Improvements
Bug Fixes
~~~~~~~~~
+
Categorical
^^^^^^^^^^^
@@ -211,6 +212,7 @@ Numeric
- Bug in :meth:`to_numeric` in which large negative numbers were being improperly handled (:issue:`24910`)
- Bug in :meth:`to_numeric` in which numbers were being coerced to float, even though ``errors`` was not ``coerce`` (:issue:`24910`)
- Bug in error messages in :meth:`DataFrame.corr` and :meth:`Series.corr`. Added the possibility of using a callable. (:issue:`25729`)
+- Bug in :meth:`Series.divmod` and :meth:`Series.rdivmod` which would raise an (incorrect) ``ValueError`` rather than return a pair of :class:`Series` objects as result (:issue:`25557`)
-
-
-
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -1660,7 +1660,7 @@ def _construct_result(left, result, index, name, dtype=None):
not be enough; we still need to override the name attribute.
"""
out = left._constructor(result, index=index, dtype=dtype)
-
+ out = out.__finalize__(left)
out.name = name
return out
@@ -1668,10 +1668,11 @@ def _construct_result(left, result, index, name, dtype=None):
def _construct_divmod_result(left, result, index, name, dtype=None):
"""divmod returns a tuple of like indexed series instead of a single series.
"""
- constructor = left._constructor
return (
- constructor(result[0], index=index, name=name, dtype=dtype),
- constructor(result[1], index=index, name=name, dtype=dtype),
+ _construct_result(left, result[0], index=index, name=name,
+ dtype=dtype),
+ _construct_result(left, result[1], index=index, name=name,
+ dtype=dtype),
)
diff --git a/pandas/core/series.py b/pandas/core/series.py
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2527,6 +2527,7 @@ def _binop(self, other, func, level=None, fill_value=None):
-------
Series
"""
+
if not isinstance(other, Series):
raise AssertionError('Other operand must be Series')
@@ -2543,13 +2544,13 @@ def _binop(self, other, func, level=None, fill_value=None):
with np.errstate(all='ignore'):
result = func(this_vals, other_vals)
+
name = ops.get_op_result_name(self, other)
- result = self._constructor(result, index=new_index, name=name)
- result = result.__finalize__(self)
- if name is None:
- # When name is None, __finalize__ overwrites current name
- result.name = None
- return result
+ if func.__name__ in ['divmod', 'rdivmod']:
+ ret = ops._construct_divmod_result(self, result, new_index, name)
+ else:
+ ret = ops._construct_result(self, result, new_index, name)
+ return ret
def combine(self, other, func, fill_value=None):
"""
| BUG: ValueError in Series.divmod
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
import numpy as np
a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
##Working:
divmod(a,b)
##Fails:
a.divmod(b)
```
#### Problem description
divmod(a, b) works as expected, but a.divmod(b) raises the following error:
```
>>> a.divmod(b)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/danlaw/Projects/pandas/pandas/core/ops.py", line 1892, in flex_wrapper
return self._binop(other, op, level=level, fill_value=fill_value)
File "/Users/danlaw/Projects/pandas/pandas/core/series.py", line 2522, in _binop
result = self._constructor(result, index=new_index, name=name)
File "/Users/danlaw/Projects/pandas/pandas/core/series.py", line 250, in __init__
.format(val=len(data), ind=len(index)))
ValueError: Length of passed values is 2, index implies 4
```
#### Expected Output
```python
(a 0.0
b 0.0
c NaN
d NaN
e NaN
dtype: float64, a 1.0
b 1.0
c NaN
d NaN
e NaN
dtype: float64)
```
#### Output of ``pd.show_versions()``
<details>
commit: 221be3b4adde0f45927803b1c593b56d4678faeb
python: 3.7.2.final.0
python-bits: 64
OS: Darwin
OS-release: 16.7.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.25.0.dev0+200.g221be3b4a
pytest: 4.3.0
pip: 19.0.3
setuptools: 40.8.0
Cython: 0.29.5
numpy: 1.16.2
scipy: 1.2.1
pyarrow: 0.11.1
xarray: 0.11.3
IPython: 7.3.0
sphinx: 1.8.4
patsy: 0.5.1
dateutil: 2.8.0
pytz: 2018.9
blosc: None
bottleneck: 1.2.1
tables: 3.4.4
numexpr: 2.6.9
feather: None
matplotlib: 3.0.2
openpyxl: 2.6.0
xlrd: 1.2.0
xlwt: 1.3.0
xlsxwriter: 1.1.5
lxml.etree: 4.3.1
bs4: 4.7.1
html5lib: 1.0.1
sqlalchemy: 1.2.18
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: 0.2.0
fastparquet: 0.2.1
pandas_gbq: None
pandas_datareader: None
gcsfs: None
</details>
| I will look at this issue. | 2019-03-07T11:49:30Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/danlaw/Projects/pandas/pandas/core/ops.py", line 1892, in flex_wrapper
return self._binop(other, op, level=level, fill_value=fill_value)
File "/Users/danlaw/Projects/pandas/pandas/core/series.py", line 2522, in _binop
result = self._constructor(result, index=new_index, name=name)
File "/Users/danlaw/Projects/pandas/pandas/core/series.py", line 250, in __init__
.format(val=len(data), ind=len(index)))
ValueError: Length of passed values is 2, index implies 4
| 12,603 |
|||
pandas-dev/pandas | pandas-dev__pandas-25620 | 976a2db444c20ee71895bda394193aa24e1e5734 | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -214,7 +214,7 @@ I/O
- Bug in :func:`read_json` for ``orient='table'`` when it tries to infer dtypes by default, which is not applicable as dtypes are already defined in the JSON schema (:issue:`21345`)
- Bug in :func:`read_json` for ``orient='table'`` and float index, as it infers index dtype by default, which is not applicable because index dtype is already defined in the JSON schema (:issue:`25433`)
- Bug in :func:`read_json` for ``orient='table'`` and string of float column names, as it makes a column name type conversion to Timestamp, which is not applicable because column names are already defined in the JSON schema (:issue:`25435`)
--
+- :meth:`DataFrame.to_html` now raises ``TypeError`` when using an invalid type for the ``classes`` parameter instead of ``AssertionError`` (:issue:`25608`)
-
-
diff --git a/pandas/io/formats/html.py b/pandas/io/formats/html.py
--- a/pandas/io/formats/html.py
+++ b/pandas/io/formats/html.py
@@ -163,8 +163,8 @@ def _write_table(self, indent=0):
if isinstance(self.classes, str):
self.classes = self.classes.split()
if not isinstance(self.classes, (list, tuple)):
- raise AssertionError('classes must be list or tuple, not {typ}'
- .format(typ=type(self.classes)))
+ raise TypeError('classes must be a string, list, or tuple, '
+ 'not {typ}'.format(typ=type(self.classes)))
_classes.extend(self.classes)
if self.table_id is None:
| BUG: User-facing AssertionError with DataFrame.to_html(classes=<invalid type>)
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
pd.DataFrame().to_html(classes=True)
```
#### Problem description
```python-traceback
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\simon\OneDrive\code\pandas-simonjayhawkins\pandas\core\frame.py", line 2212, in to_html
formatter.to_html(classes=classes, notebook=notebook, border=border)
File "C:\Users\simon\OneDrive\code\pandas-simonjayhawkins\pandas\io\formats\format.py", line 729, in to_html
html = Klass(self, classes=classes, border=border).render()
File "C:\Users\simon\OneDrive\code\pandas-simonjayhawkins\pandas\io\formats\html.py", line 146, in render
self._write_table()
File "C:\Users\simon\OneDrive\code\pandas-simonjayhawkins\pandas\io\formats\html.py", line 167, in _write_table
.format(typ=type(self.classes)))
AssertionError: classes must be list or tuple, not <class 'bool'>
```
#### Expected Output
```python-traceback
TypeError: classes must be a string, list or tuple, not <class 'bool'>
```
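For reference, the forms `classes` accepts per the type check in the patch above (a plain string is split on whitespace):

```python
import pandas as pd

pd.DataFrame().to_html(classes='table table-striped')       # str
pd.DataFrame().to_html(classes=['table', 'table-striped'])  # list or tuple
```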
#### Output of ``pd.show_versions()``
<details>
[paste the output of ``pd.show_versions()`` here below this line]
</details>
| @mroeschke : the solution requires a change to one line of code, add a simple parametrised test and a whatsnew entry under bugfix. Can this be labelled good first issue?
Thanks for the suggestion!
I can work on this | 2019-03-09T16:58:22Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\simon\OneDrive\code\pandas-simonjayhawkins\pandas\core\frame.py", line 2212, in to_html
formatter.to_html(classes=classes, notebook=notebook, border=border)
File "C:\Users\simon\OneDrive\code\pandas-simonjayhawkins\pandas\io\formats\format.py", line 729, in to_html
html = Klass(self, classes=classes, border=border).render()
File "C:\Users\simon\OneDrive\code\pandas-simonjayhawkins\pandas\io\formats\html.py", line 146, in render
self._write_table()
File "C:\Users\simon\OneDrive\code\pandas-simonjayhawkins\pandas\io\formats\html.py", line 167, in _write_table
.format(typ=type(self.classes)))
AssertionError: classes must be list or tuple, not <class 'bool'>
| 12,609 |
|||
pandas-dev/pandas | pandas-dev__pandas-25769 | 46639512c06300a9844ea27f90167d5648c9b93a | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -271,6 +271,7 @@ I/O
- Bug in :func:`json_normalize` for ``errors='ignore'`` where missing values in the input data, were filled in resulting ``DataFrame`` with the string "nan" instead of ``numpy.nan`` (:issue:`25468`)
- :meth:`DataFrame.to_html` now raises ``TypeError`` when using an invalid type for the ``classes`` parameter instead of ``AssertionError`` (:issue:`25608`)
- Bug in :meth:`DataFrame.to_string` and :meth:`DataFrame.to_latex` that would lead to incorrect output when the ``header`` keyword is used (:issue:`16718`)
+- Bug in :func:`read_csv` not properly interpreting the UTF8 encoded filenames on Windows on Python 3.6+ (:issue:`15086`)
-
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -678,11 +678,7 @@ cdef class TextReader:
if isinstance(source, basestring):
if not isinstance(source, bytes):
- if compat.PY36 and compat.is_platform_windows():
- # see gh-15086.
- encoding = "mbcs"
- else:
- encoding = sys.getfilesystemencoding() or "utf-8"
+ encoding = sys.getfilesystemencoding() or "utf-8"
source = source.encode(encoding)
diff --git a/pandas/_libs/src/parser/io.c b/pandas/_libs/src/parser/io.c
--- a/pandas/_libs/src/parser/io.c
+++ b/pandas/_libs/src/parser/io.c
@@ -17,6 +17,11 @@ The full license is in the LICENSE file, distributed with this software.
#define O_BINARY 0
#endif // O_BINARY
+#if PY_VERSION_HEX >= 0x03060000 && defined(_WIN32)
+#define USE_WIN_UTF16
+#include <Windows.h>
+#endif
+
/*
On-disk FILE, uncompressed
*/
@@ -27,7 +32,35 @@ void *new_file_source(char *fname, size_t buffer_size) {
return NULL;
}
+#ifdef USE_WIN_UTF16
+ // Fix gh-15086 properly - convert UTF8 to UTF16 that Windows widechar API
+ // accepts. This is needed because UTF8 might _not_ be convertible to MBCS
+ // for some conditions, as MBCS is locale-dependent, and not all unicode
+ // symbols can be expressed in it.
+ {
+ wchar_t* wname = NULL;
+ int required = MultiByteToWideChar(CP_UTF8, 0, fname, -1, NULL, 0);
+ if (required == 0) {
+ free(fs);
+ return NULL;
+ }
+ wname = (wchar_t*)malloc(required * sizeof(wchar_t));
+ if (wname == NULL) {
+ free(fs);
+ return NULL;
+ }
+ if (MultiByteToWideChar(CP_UTF8, 0, fname, -1, wname, required) <
+ required) {
+ free(wname);
+ free(fs);
+ return NULL;
+ }
+ fs->fd = _wopen(wname, O_RDONLY | O_BINARY);
+ free(wname);
+ }
+#else
fs->fd = open(fname, O_RDONLY | O_BINARY);
+#endif
if (fs->fd == -1) {
free(fs);
return NULL;
| OSError when reading file with accents in file path
#### Code Sample, a copy-pastable example if possible
`test.txt` and `test_é.txt` are the same file; only the name changes:
```python
pd.read_csv('test.txt')
Out[3]:
1 1 1
0 1 1 1
1 1 1 1
pd.read_csv('test_é.txt')
Traceback (most recent call last):
File "<ipython-input-4-fd67679d1d17>", line 1, in <module>
pd.read_csv('test_é.txt')
File "d:\app\python36\lib\site-packages\pandas\io\parsers.py", line 646, in parser_f
return _read(filepath_or_buffer, kwds)
File "d:\app\python36\lib\site-packages\pandas\io\parsers.py", line 389, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "d:\app\python36\lib\site-packages\pandas\io\parsers.py", line 730, in __init__
self._make_engine(self.engine)
File "d:\app\python36\lib\site-packages\pandas\io\parsers.py", line 923, in _make_engine
self._engine = CParserWrapper(self.f, **self.options)
File "d:\app\python36\lib\site-packages\pandas\io\parsers.py", line 1390, in __init__
self._reader = _parser.TextReader(src, **kwds)
File "pandas\parser.pyx", line 373, in pandas.parser.TextReader.__cinit__ (pandas\parser.c:4184)
File "pandas\parser.pyx", line 669, in pandas.parser.TextReader._setup_parser_source (pandas\parser.c:8471)
OSError: Initializing from file failed
```
#### Problem description
Pandas returns an OSError when trying to read a file with accents in the file path.
The problem is new (since I upgraded to Python 3.6 and Pandas 0.19.2).
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.0.final.0
python-bits: 64
OS: Windows
OS-release: 7
machine: AMD64
processor: Intel64 Family 6 Model 94 Stepping 3, GenuineIntel
byteorder: little
LC_ALL: None
LANG: fr
LOCALE: None.None
pandas: 0.19.2
nose: None
pip: 9.0.1
setuptools: 32.3.1
Cython: 0.25.2
numpy: 1.11.3
scipy: 0.18.1
statsmodels: None
xarray: None
IPython: 5.1.0
sphinx: 1.5.1
patsy: None
dateutil: 2.6.0
pytz: 2016.10
blosc: None
bottleneck: 1.2.0
tables: None
numexpr: 2.6.1
matplotlib: 1.5.3
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: 0.999999999
httplib2: None
apiclient: None
sqlalchemy: 1.1.4
pymysql: None
psycopg2: None
jinja2: 2.9.3
boto: None
pandas_datareader: None
</details>
| Just my pennies worth. Quickly tried it out on Mac OSX and Ubuntu with no
problems. See below.
Could this be an environment/platform problem? I noticed that the `LOCALE` is
set to `None.None`. Unfortunately I do not have a windows machine to try this
example on. Admittedly this would not explain why you've seen this *after* the
upgrade to python3.6 and pandas 0.19.2.
Note: I just set up a virtualenv with python3.6 and installed pandas 0.19.2 using pip.
```python
>>> import pandas as pd
>>> pd.read_csv('test_é.txt')
a b c
0 1 2 3
1 4 5 6
```
Output of **pd.show_versions()**
<details>
INSTALLED VERSIONS
commit: None
python: 3.6.0.final.0
python-bits: 64
OS: Linux
OS-release: 4.4.0-57-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
pandas: 0.19.2
nose: None
pip: 9.0.1
setuptools: 32.3.1
Cython: None
numpy: 1.11.3
scipy: None
statsmodels: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.0
pytz: 2016.10
blosc: None
bottleneck: None
tables: None
numexpr: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
boto: None
pandas_datareader: None
</details>
I believe 3.6 switches the file system encoding on windows to utf8 (from ascii). Apart from that, we don't have testing enabled yet on windows for 3.6 (as some of the required packages are just now becoming available).
@JGoutin
so I just added build support on appveyor (windows) for 3.6, so if you'd push up your tests to see if it works, would be great.
I also faced the same problem when the program stopped at pd.read_csv(file_path). The situation is similar for me after I upgraded my Python to 3.6 (I'm not sure exactly which version I had installed before, maybe 3.5...).
@jreback what is the next step towards a fix here?
You have mentioned a PR that got 'blown away' - what does that mean?
While I do not use Windows, I could try to help (just got a VM to debug a piece of my code that apparently does not work on windows)
BTW, a workaround: pass a file handle instead of a name
`pd.read_csv(open('test_é.txt', 'r'))`
(there are several workarounds in related issues, but I have not seen this one)
@tpietruszka see comments on the PR: https://github.com/pandas-dev/pandas/pull/15092 (it got removed from a private fork, was pretty much there).
you basically need to encode the paths differently on py3.6 (vs other pythons) on windows. basically need to implement: https://docs.python.org/3/whatsnew/3.6.html#pep-529-change-windows-filesystem-encoding-to-utf-8
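A rough sketch of the mismatch (Windows-only behavior; the `mbcs` codec and the exact return values are assumptions about the reporter's setup):

```python
import sys

sys.getfilesystemencoding()   # 'utf-8' on Windows/Python 3.6+ (PEP 529),
                              # 'mbcs' on earlier Pythons
'test_é.txt'.encode('utf-8')  # the bytes pandas handed to the C-level open()
'test_é.txt'.encode('mbcs')   # the bytes the narrow Windows API expected
```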
my old code (can't run):
```
import pandas as pd
import os
file_path='./dict/字典.csv'
df_name = pd.read_csv(file_path,sep=',' )
```
new code (successful):
```
import pandas as pd
import os
file_path='./dict/dict.csv'
df_name = pd.read_csv(file_path,sep=',' )
```
I think this bug is a filename problem.
I changed the filename from Chinese to English, and it can run now.
If anyone comes here like me because he/she hit the same problem, here is a solution until pandas is fixed to work with PEP 529 (basically any non-ASCII chars in your path or filename will result in errors):
Insert the following two lines at the beginning of your code to revert back to the old way of handling paths on windows:
```
import sys
sys._enablelegacywindowsfsencoding()
```
I used the solution above and it works. Thanks very much @fotisj!
However, I'm still confused about why DataFrame.to_csv() doesn't hit the same problem. In other words, for a unicode file path, writing is OK while reading isn't.
```python
path = os.path.join('E:\语料', 'sina.csv')
pd.read_csv(open(path, 'r', encoding='utf8'))
```
It is successful.
Can someone with an affected system check if changing this line
https://github.com/pandas-dev/pandas/blob/e8620abc12a4c468a75adb8607fd8e0eb1c472e7/pandas/io/common.py#L209
to
```python
return _expand_user(os.fsencode(filepath_or_buffer)), None, compression
```
fixes it?
No, it does not.
Results in: OSError: Expected file path name or file-like object, got <class 'bytes'> type
(on Windows 10)
```
OSError                                   Traceback (most recent call last)
<ipython-input-2-e8247998d6d4> in <module>()
      1
----> 2 df = pd.read_csv(r'D:\mydata\Dropbox\uni\progrs\test öäau\n\teu.csv', sep='\t')
C:\conda\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision)
    707                     skip_blank_lines=skip_blank_lines)
    708
--> 709         return _read(filepath_or_buffer, kwds)
    710
    711     parser_f.__name__ = name
C:\conda\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds)
    447
    448     # Create the parser.
--> 449     parser = TextFileReader(filepath_or_buffer, **kwds)
    450
    451     if chunksize or iterator:
C:\conda\lib\site-packages\pandas\io\parsers.py in __init__(self, f, engine, **kwds)
    816             self.options['has_index_names'] = kwds['has_index_names']
    817
--> 818         self._make_engine(self.engine)
    819
    820     def close(self):
C:\conda\lib\site-packages\pandas\io\parsers.py in _make_engine(self, engine)
   1047     def _make_engine(self, engine='c'):
   1048         if engine == 'c':
-> 1049             self._engine = CParserWrapper(self.f, **self.options)
   1050         else:
   1051             if engine == 'python':
C:\conda\lib\site-packages\pandas\io\parsers.py in __init__(self, src, **kwds)
   1693         kwds['allow_leading_cols'] = self.index_col is not False
   1694
-> 1695         self._reader = parsers.TextReader(src, **kwds)
   1696
   1697         # XXX
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.__cinit__()
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._setup_parser_source()
OSError: Expected file path name or file-like object, got <class 'bytes'> type
```
Oh, sorry. Does fsdecode work there?
No. Using fsdecode produces the same error we originally had ([error_msg.txt](https://github.com/pandas-dev/pandas/files/1691837/error_msg.txt))
Ok thanks for trying.
Talked with Steve Dower today, and he suspects this may be the problematic line: https://github.com/pandas-dev/pandas/blob/e8f206d8192b409bc39da1ba1b2c5bcd8b65cc9f/pandas/_libs/src/parser/io.c#L30
IIUC, the Windows filesystem API is expecting those bytes to be in the MBCS, but we're using utf-8.
A user-level workaround is to explicitly encode your filename as mbcs before passing the bytestring to pandas. https://www.python.org/dev/peps/pep-0529/#explicitly-using-mbcs
```python
pd.read_csv(filename.encode('mbcs'))
```
is anyone able to test out that workaround?
just need a small change in the parser code to fix this (there was a PR doing this, but it was deleted)
@TomAugspurger that does not work. read_csv expects a `str` and not a `bytes` value. It fails with
OSError: Expected file path name or file-like object, got <class 'bytes'> type
Thanks for checking.
Just pinging this - I have the same issue, I'm using a workaround but it would be great if that was not required.
this needs a community patch
I am encountering this issue. I want to try and contribute a patch. Any pointers on how to start fixing this?
I think none of the maintainers have access to a system that can reproduce this.
Perhaps some of the others in this issue can help put together a solution. | 2019-03-18T15:35:16Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-4-fd67679d1d17>", line 1, in <module>
pd.read_csv('test_é.txt')
File "d:\app\python36\lib\site-packages\pandas\io\parsers.py", line 646, in parser_f
return _read(filepath_or_buffer, kwds)
File "d:\app\python36\lib\site-packages\pandas\io\parsers.py", line 389, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "d:\app\python36\lib\site-packages\pandas\io\parsers.py", line 730, in __init__
self._make_engine(self.engine)
File "d:\app\python36\lib\site-packages\pandas\io\parsers.py", line 923, in _make_engine
self._engine = CParserWrapper(self.f, **self.options)
File "d:\app\python36\lib\site-packages\pandas\io\parsers.py", line 1390, in __init__
self._reader = _parser.TextReader(src, **kwds)
File "pandas\parser.pyx", line 373, in pandas.parser.TextReader.__cinit__ (pandas\parser.c:4184)
File "pandas\parser.pyx", line 669, in pandas.parser.TextReader._setup_parser_source (pandas\parser.c:8471)
OSError: Initializing from file failed
| 12,631 |
|||
pandas-dev/pandas | pandas-dev__pandas-26188 | fecee8ffe39446d213f425257c6de24b5a7f9021 | diff --git a/environment.yml b/environment.yml
--- a/environment.yml
+++ b/environment.yml
@@ -24,7 +24,7 @@ dependencies:
- pytest>=4.0.2
- pytest-mock
- sphinx
- - numpydoc
+ - numpydoc>=0.9.0
- pip
# optional
diff --git a/requirements-dev.txt b/requirements-dev.txt
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -15,7 +15,7 @@ pycodestyle
pytest>=4.0.2
pytest-mock
sphinx
-numpydoc
+numpydoc>=0.9.0
pip
beautifulsoup4>=4.2.1
blosc
diff --git a/scripts/validate_docstrings.py b/scripts/validate_docstrings.py
--- a/scripts/validate_docstrings.py
+++ b/scripts/validate_docstrings.py
@@ -472,9 +472,12 @@ def parameter_desc(self, param):
@property
def see_also(self):
- return collections.OrderedDict((name, ''.join(desc))
- for name, desc, _
- in self.doc['See Also'])
+ result = collections.OrderedDict()
+ for funcs, desc in self.doc['See Also']:
+ for func, _ in funcs:
+ result[func] = ''.join(desc)
+
+ return result
@property
def examples(self):
@@ -731,7 +734,7 @@ def get_validation_data(doc):
if doc.method_returns_something:
errs.append(error('RT01'))
else:
- if len(doc.returns) == 1 and doc.returns[0][1]:
+ if len(doc.returns) == 1 and doc.returns[0].name:
errs.append(error('RT02'))
for name_or_type, type_, desc in doc.returns:
if not desc:
| Doc Check Failures
This just started showing up in the CI failures today:
```sh
Traceback (most recent call last):
File "ci/../scripts/validate_docstrings.py", line 991, in <module>
args.ignore_deprecated))
File "ci/../scripts/validate_docstrings.py", line 891, in main
result = validate_all(prefix, ignore_deprecated)
File "ci/../scripts/validate_docstrings.py", line 845, in validate_all
doc_info = validate_one(func_name)
File "ci/../scripts/validate_docstrings.py", line 801, in validate_one
errs, wrns, examples_errs = get_validation_data(doc)
File "ci/../scripts/validate_docstrings.py", line 749, in get_validation_data
if not doc.see_also:
File "ci/../scripts/validate_docstrings.py", line 477, in see_also
in self.doc['See Also'])
File "ci/../scripts/validate_docstrings.py", line 476, in <genexpr>
for name, desc, _
ValueError: not enough values to unpack (expected 3, got 2)
```
I think it's an issue with upgrading to numpydoc 0.9 but will look in more detail and confirm
| Here's the relevant enhancement on the numpydoc side released in v0.9:
https://github.com/numpy/numpydoc/pull/172
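For reference, the shape change in the parsed "See Also" section, as reflected in the patch above (values here are illustrative):

```python
# numpydoc < 0.9: a flat list of (name, description_lines, role) triples
old = [('pandas.DataFrame.head', ['Return the first n rows.'], None)]

# numpydoc >= 0.9: (name, role) pairs grouped under a shared description
new = [([('pandas.DataFrame.head', None)], ['Return the first n rows.'])]
```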
| 2019-04-22T22:36:45Z | [] | [] |
Traceback (most recent call last):
File "ci/../scripts/validate_docstrings.py", line 991, in <module>
args.ignore_deprecated))
File "ci/../scripts/validate_docstrings.py", line 891, in main
result = validate_all(prefix, ignore_deprecated)
File "ci/../scripts/validate_docstrings.py", line 845, in validate_all
doc_info = validate_one(func_name)
File "ci/../scripts/validate_docstrings.py", line 801, in validate_one
errs, wrns, examples_errs = get_validation_data(doc)
File "ci/../scripts/validate_docstrings.py", line 749, in get_validation_data
if not doc.see_also:
File "ci/../scripts/validate_docstrings.py", line 477, in see_also
in self.doc['See Also'])
File "ci/../scripts/validate_docstrings.py", line 476, in <genexpr>
for name, desc, _
ValueError: not enough values to unpack (expected 3, got 2)
| 12,698 |
|||
pandas-dev/pandas | pandas-dev__pandas-26228 | 971dcc11c8d9d71605582e6d37a4cdc65d996ff3 | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -403,6 +403,7 @@ Groupby/Resample/Rolling
- Bug in :meth:`pandas.core.groupby.GroupBy.idxmax` and :meth:`pandas.core.groupby.GroupBy.idxmin` with datetime column would return incorrect dtype (:issue:`25444`, :issue:`15306`)
- Bug in :meth:`pandas.core.groupby.GroupBy.cumsum`, :meth:`pandas.core.groupby.GroupBy.cumprod`, :meth:`pandas.core.groupby.GroupBy.cummin` and :meth:`pandas.core.groupby.GroupBy.cummax` with categorical column having absent categories, would return incorrect result or segfault (:issue:`16771`)
- Bug in :meth:`pandas.core.groupby.GroupBy.nth` where NA values in the grouping would return incorrect results (:issue:`26011`)
+- Bug in :meth:`pandas.core.groupby.SeriesGroupBy.transform` where transforming an empty group would raise error (:issue:`26208`)
Reshaping
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -916,8 +916,12 @@ def transform(self, func, *args, **kwargs):
s = klass(res, indexer)
results.append(s)
- from pandas.core.reshape.concat import concat
- result = concat(results).sort_index()
+ # check for empty "results" to avoid concat ValueError
+ if results:
+ from pandas.core.reshape.concat import concat
+ result = concat(results).sort_index()
+ else:
+ result = Series()
# we will only try to coerce the result type if
# we have a numeric dtype, as these are *always* udfs
| SeriesGroupBy.transform cannot handle empty series
#### Code Sample, a copy-pastable example if possible
```python
d = pd.DataFrame({1: [], 2: []})
g = d.groupby(1)
g[2].transform(lambda x: x)
```

```python-traceback
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "C:\python37\lib\site-packages\pandas\core\groupby\generic.py", line 945, in transform
result = concat(results).sort_index()
File "C:\python37\lib\site-packages\pandas\core\reshape\concat.py", line 228, in concat
copy=copy, sort=sort)
File "C:\python37\lib\site-packages\pandas\core\reshape\concat.py", line 262, in __init__
raise ValueError('No objects to concatenate')
```
#### Problem description
`transform` crashes on a SeriesGroupBy object with zero length, which came from an empty dataframe. It would be nicer if pandas could handle this case without raising an error, for example by just returning an empty series. Thanks.
#### Expected Output
```
Series([], Name: 2, dtype: float64)
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.3.final.0
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 142 Stepping 9, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
pandas: 0.24.2
pytest: 4.4.1
pip: 19.0.3
setuptools: 41.0.1
Cython: None
numpy: 1.15.4
scipy: 1.1.0
pyarrow: None
xarray: None
IPython: None
sphinx: None
patsy: 0.5.1
dateutil: 2.7.5
pytz: 2018.7
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 3.0.2
openpyxl: 2.5.12
xlrd: 1.2.0
xlwt: 1.3.0
xlsxwriter: None
lxml.etree: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
gcsfs: None
</details>
| I suppose that would make this consistent with apply and agg:
```python
>>> g[2].apply(lambda x: x)
Series([], Name: 2, dtype: float64)
>>> g[2].agg(lambda x: x)
Series([], Name: 2, dtype: float64)
```
If you want to take a look and have a simple way of making it work, we would take a PR.
Related to #17093
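A minimal sketch of the kind of guard that would do it, using a hypothetical helper over the internal list of per-group results:
```python
import pandas as pd

def combine_transform_results(results):
    # Guard sketch: only concat when at least one group produced output;
    # an empty DataFrame has no groups, so return an empty Series instead.
    if results:
        return pd.concat(results).sort_index()
    return pd.Series()

print(combine_transform_results([]))  # empty Series
print(combine_transform_results(
    [pd.Series([1.0], index=[0]), pd.Series([2.0], index=[1])]))
```
Returning an empty `Series` mirrors what `apply` and `agg` already give back for the empty frame.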
Okay, thanks. I see a possible patch. I need to read over the contribution guidelines and set up a working env to create a PR. | 2019-04-27T18:01:07Z | [] | [] |
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "C:\python37\lib\site-packages\pandas\core\groupby\generic.py", line 945, in transform
result = concat(results).sort_index()
File "C:\python37\lib\site-packages\pandas\core\reshape\concat.py", line 228, in concat
copy=copy, sort=sort)
File "C:\python37\lib\site-packages\pandas\core\reshape\concat.py", line 262, in __init__
raise ValueError('No objects to concatenate')
```
#### Problem description
Crashes on SeriesGroupby obj with zero length, which came from an empty dataframe. Would be nicer if pandas can handle this case without raising errors, by for example, just return an empty series. Thanks.
| 12,700 |
|||
pandas-dev/pandas | pandas-dev__pandas-26456 | 9c7e60403d60cdd3ab2991d31a5c293396fd0843 | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -229,6 +229,7 @@ Plotting
- Bug in :meth:`DataFrame.plot` producing incorrect legend markers when plotting multiple series on the same axis (:issue:`18222`)
- Bug in :meth:`DataFrame.plot` when ``kind='box'`` and data contains datetime or timedelta data. These types are now automatically dropped (:issue:`22799`)
- Bug in :meth:`DataFrame.plot.line` and :meth:`DataFrame.plot.area` produce wrong xlim in x-axis (:issue:`27686`, :issue:`25160`, :issue:`24784`)
+- Bug where :meth:`DataFrame.boxplot` would not accept a `color` parameter like `DataFrame.plot.box` (:issue:`26214`)
- :func:`set_option` now validates that the plot backend provided to ``'plotting.backend'`` implements the backend when the option is set, rather than when a plot is created (:issue:`28163`)
Groupby/resample/rolling
diff --git a/pandas/plotting/_matplotlib/boxplot.py b/pandas/plotting/_matplotlib/boxplot.py
--- a/pandas/plotting/_matplotlib/boxplot.py
+++ b/pandas/plotting/_matplotlib/boxplot.py
@@ -4,6 +4,7 @@
from matplotlib.artist import setp
import numpy as np
+from pandas.core.dtypes.common import is_dict_like
from pandas.core.dtypes.generic import ABCSeries
from pandas.core.dtypes.missing import remove_na_arraylike
@@ -250,13 +251,38 @@ def boxplot(
def _get_colors():
# num_colors=3 is required as method maybe_color_bp takes the colors
# in positions 0 and 2.
- return _get_standard_colors(color=kwds.get("color"), num_colors=3)
+ # if colors not provided, use same defaults as DataFrame.plot.box
+ result = _get_standard_colors(num_colors=3)
+ result = np.take(result, [0, 0, 2])
+ result = np.append(result, "k")
+
+ colors = kwds.pop("color", None)
+ if colors:
+ if is_dict_like(colors):
+ # replace colors in result array with user-specified colors
+ # taken from the colors dict parameter
+ # "boxes" value placed in position 0, "whiskers" in 1, etc.
+ valid_keys = ["boxes", "whiskers", "medians", "caps"]
+ key_to_index = dict(zip(valid_keys, range(4)))
+ for key, value in colors.items():
+ if key in valid_keys:
+ result[key_to_index[key]] = value
+ else:
+ raise ValueError(
+ "color dict contains invalid "
+ "key '{0}' "
+ "The key must be either {1}".format(key, valid_keys)
+ )
+ else:
+ result.fill(colors)
+
+ return result
def maybe_color_bp(bp):
- if "color" not in kwds:
- setp(bp["boxes"], color=colors[0], alpha=1)
- setp(bp["whiskers"], color=colors[0], alpha=1)
- setp(bp["medians"], color=colors[2], alpha=1)
+ setp(bp["boxes"], color=colors[0], alpha=1)
+ setp(bp["whiskers"], color=colors[1], alpha=1)
+ setp(bp["medians"], color=colors[2], alpha=1)
+ setp(bp["caps"], color=colors[3], alpha=1)
def plot_group(keys, values, ax):
keys = [pprint_thing(x) for x in keys]
| _DataFrame.boxplot_ with _column_ and _by_ does not respect the color keyword
### Bug report
**Bug summary**
The boxplot method on a dataframe, when used with the `column` and `by` keywords, does not respect the _color_ keyword, and in fact crashes if it is present. This is not consistent with the documentation [here](http://pandas.pydata.org/pandas-docs/stable/user_guide/visualization.html#box-plots).
**Code for reproduction**
```python
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
def make_dummy_data():
""" Return """
df1 = pd.DataFrame(np.random.rand(10, 3), columns = ['x', 'y', 'z'])
df2 = pd.DataFrame(2*np.random.rand(10, 3), columns = ['x', 'y', 'z'])
return df1, df2
def comparative_results():
""" stuff """
df1, df2 = make_dummy_data()
def draw_plot(ax, data, edge_color, fill_color=None):
""" Controls details of color"""
colors = dict(boxes=edge_color, whiskers=edge_color, medians=edge_color, caps=edge_color)
ax = data.boxplot(column=['x'], by=['z'], showfliers=False, ax=ax, color=colors)
return ax
ax = None
ax = draw_plot(ax, df1, 'k')
ax = draw_plot(ax, df2, 'r')
ax.set_title('dummy to expose bug')
plt.show()
if __name__ == "__main__":
comparative_results()
```
**Actual outcome**
```
Traceback (most recent call last):
File "/Users/BNL28/Code/DataPerformance/bug_report.py", line 33, in <module>
comparative_results()
File "/Users/BNL28/Code/DataPerformance/bug_report.py", line 26, in comparative_results
ax = draw_plot(ax, df1, 'k')
File "/Users/BNL28/Code/DataPerformance/bug_report.py", line 22, in draw_plot
ax = data.boxplot(column=['x'], by=['z'], showfliers=False, ax=ax, color=colors)
File "/Users/BNL28/anaconda3/lib/python3.6/site-packages/pandas/plotting/_core.py", line 2254, in boxplot_frame
return_type=return_type, **kwds)
File "/Users/BNL28/anaconda3/lib/python3.6/site-packages/pandas/plotting/_core.py", line 2223, in boxplot
return_type=return_type)
File "/Users/BNL28/anaconda3/lib/python3.6/site-packages/pandas/plotting/_core.py", line 2683, in _grouped_plot_by_column
re_plotf = plotf(keys, values, ax, **kwargs)
File "/Users/BNL28/anaconda3/lib/python3.6/site-packages/pandas/plotting/_core.py", line 2191, in plot_group
bp = ax.boxplot(values, **kwds)
File "/Users/BNL28/anaconda3/lib/python3.6/site-packages/matplotlib/__init__.py", line 1810, in inner
return func(ax, *args, **kwargs)
TypeError: boxplot() got an unexpected keyword argument 'color'
Process finished with exit code 1
```
**Expected outcome**
Expect two sets of box plots, one coloured black, and one coloured red. Code runs ok with no color keyword, but the boxes are indistinguishable without colour control.
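As a possible workaround until `boxplot` honors `color`, the artists can be recolored after the fact via `return_type='dict'`; this is an untested sketch, and the exact return shape of `return_type='dict'` combined with `by` may vary across pandas versions:
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.artist import setp

df = pd.DataFrame(np.random.rand(10, 3), columns=['x', 'y', 'z'])

# return_type='dict' hands back the matplotlib artists per plotted
# column, so the lines can be recolored after drawing.
result = df.boxplot(column=['x'], by=['z'], showfliers=False,
                    return_type='dict')
for artists in result:  # one artist dict per plotted column
    for key in ('boxes', 'whiskers', 'medians', 'caps'):
        setp(artists[key], color='r')
plt.show()
```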
**Environment**
* Operating system: OSX
* Matplotlib version: 3.0.2
* Matplotlib backend (`print(matplotlib.get_backend())`):
* Python version: Python 3.6.8 |Anaconda, Inc.| (default, Dec 29 2018, 19:04:46)
* Pandas version 0.24.2
| Sorry about the title, and not noticing the mistake in the matplotlib backend version: 3.0.2 | 2019-05-19T04:30:48Z | [] | [] |
Traceback (most recent call last):
File "/Users/BNL28/Code/DataPerformance/bug_report.py", line 33, in <module>
comparative_results()
File "/Users/BNL28/Code/DataPerformance/bug_report.py", line 26, in comparative_results
ax = draw_plot(ax, df1, 'k')
File "/Users/BNL28/Code/DataPerformance/bug_report.py", line 22, in draw_plot
ax = data.boxplot(column=['x'], by=['z'], showfliers=False, ax=ax, color=colors)
File "/Users/BNL28/anaconda3/lib/python3.6/site-packages/pandas/plotting/_core.py", line 2254, in boxplot_frame
return_type=return_type, **kwds)
File "/Users/BNL28/anaconda3/lib/python3.6/site-packages/pandas/plotting/_core.py", line 2223, in boxplot
return_type=return_type)
File "/Users/BNL28/anaconda3/lib/python3.6/site-packages/pandas/plotting/_core.py", line 2683, in _grouped_plot_by_column
re_plotf = plotf(keys, values, ax, **kwargs)
File "/Users/BNL28/anaconda3/lib/python3.6/site-packages/pandas/plotting/_core.py", line 2191, in plot_group
bp = ax.boxplot(values, **kwds)
File "/Users/BNL28/anaconda3/lib/python3.6/site-packages/matplotlib/__init__.py", line 1810, in inner
return func(ax, *args, **kwargs)
TypeError: boxplot() got an unexpected keyword argument 'color'
| 12,744 |
|||
pandas-dev/pandas | pandas-dev__pandas-26585 | 437efa6e974e506c7cc5f142d5186bf6a7f5ce13 | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -529,6 +529,7 @@ Datetimelike
- Bug in :func:`to_datetime` which does not replace the invalid argument with ``NaT`` when error is set to coerce (:issue:`26122`)
- Bug in adding :class:`DateOffset` with nonzero month to :class:`DatetimeIndex` would raise ``ValueError`` (:issue:`26258`)
- Bug in :func:`to_datetime` which raises unhandled ``OverflowError`` when called with mix of invalid dates and ``NaN`` values with ``format='%Y%m%d'`` and ``error='coerce'`` (:issue:`25512`)
+- Bug in :func:`to_datetime` which raises ``TypeError`` for ``format='%Y%m%d'`` when called for invalid integer dates with length >= 6 digits with ``errors='ignore'``
Timedelta
^^^^^^^^^
diff --git a/pandas/_libs/tslibs/strptime.pyx b/pandas/_libs/tslibs/strptime.pyx
--- a/pandas/_libs/tslibs/strptime.pyx
+++ b/pandas/_libs/tslibs/strptime.pyx
@@ -140,13 +140,13 @@ def array_strptime(object[:] values, object fmt,
iresult[i] = NPY_NAT
continue
raise ValueError("time data %r does not match "
- "format %r (match)" % (values[i], fmt))
+ "format %r (match)" % (val, fmt))
if len(val) != found.end():
if is_coerce:
iresult[i] = NPY_NAT
continue
raise ValueError("unconverted data remains: %s" %
- values[i][found.end():])
+ val[found.end():])
# search
else:
@@ -156,7 +156,7 @@ def array_strptime(object[:] values, object fmt,
iresult[i] = NPY_NAT
continue
raise ValueError("time data %r does not match format "
- "%r (search)" % (values[i], fmt))
+ "%r (search)" % (val, fmt))
iso_year = -1
year = 1900
| to_datetime returns TypeError for invalid integer dates with %Y%m%d format
```python
In [1]: pd.__version__
Out[1]: '0.25.0.dev0+625.g8154efb0c'
```
```python
pd.to_datetime(20199911, format="%Y%m%d", errors='ignore')
pd.to_datetime(2019121212, format="%Y%m%d", errors='ignore')
```
throws "TypeError: 'int' object is unsliceable" instead of returning the initial values:
```python
Traceback (most recent call last):
File "/home/talka/projects/pandas/tmp.py", line 21, in <module>
pd.to_datetime(2019121212, format="%Y%m%d", errors='ignore')
File "/home/talka/projects/pandas/pandas/util/_decorators.py", line 188, in wrapper
return func(*args, **kwargs)
File "/home/talka/projects/pandas/pandas/core/tools/datetimes.py", line 626, in to_datetime
result = convert_listlike(np.array([arg]), box, format)[0]
File "/home/talka/projects/pandas/pandas/core/tools/datetimes.py", line 270, in _convert_listlike_datetimes
arg, format, exact=exact, errors=errors)
File "pandas/_libs/tslibs/strptime.pyx", line 149, in pandas._libs.tslibs.strptime.array_strptime
TypeError: 'int' object is unsliceable
```
| 2019-05-31T02:06:51Z | [] | [] |
Traceback (most recent call last):
File "/home/talka/projects/pandas/tmp.py", line 21, in <module>
pd.to_datetime(2019121212, format="%Y%m%d", errors='ignore')
File "/home/talka/projects/pandas/pandas/util/_decorators.py", line 188, in wrapper
return func(*args, **kwargs)
File "/home/talka/projects/pandas/pandas/core/tools/datetimes.py", line 626, in to_datetime
result = convert_listlike(np.array([arg]), box, format)[0]
File "/home/talka/projects/pandas/pandas/core/tools/datetimes.py", line 270, in _convert_listlike_datetimes
arg, format, exact=exact, errors=errors)
File "pandas/_libs/tslibs/strptime.pyx", line 149, in pandas._libs.tslibs.strptime.array_strptime
TypeError: 'int' object is unsliceable
| 12,762 |
||||
pandas-dev/pandas | pandas-dev__pandas-26607 | a60888ce4ce9e106537fb410688b66baa109edc3 | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -613,7 +613,7 @@ Strings
^^^^^^^
- Bug in the ``__name__`` attribute of several methods of :class:`Series.str`, which were set incorrectly (:issue:`23551`)
--
+- Improved error message when passing :class:`Series` of wrong dtype to :meth:`Series.str.cat` (:issue:`22722`)
-
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -2,7 +2,7 @@
from functools import wraps
import re
import textwrap
-from typing import Dict
+from typing import Dict, List
import warnings
import numpy as np
@@ -31,7 +31,7 @@
_shared_docs = dict() # type: Dict[str, str]
-def cat_core(list_of_columns, sep):
+def cat_core(list_of_columns: List, sep: str):
"""
Auxiliary function for :meth:`str.cat`
@@ -53,6 +53,41 @@ def cat_core(list_of_columns, sep):
return np.sum(list_with_sep, axis=0)
+def cat_safe(list_of_columns: List, sep: str):
+ """
+ Auxiliary function for :meth:`str.cat`.
+
+ Same signature as cat_core, but handles TypeErrors in concatenation, which
+ happen if the arrays in list_of columns have the wrong dtypes or content.
+
+ Parameters
+ ----------
+ list_of_columns : list of numpy arrays
+ List of arrays to be concatenated with sep;
+ these arrays may not contain NaNs!
+ sep : string
+ The separator string for concatenating the columns
+
+ Returns
+ -------
+ nd.array
+ The concatenation of list_of_columns with sep
+ """
+ try:
+ result = cat_core(list_of_columns, sep)
+ except TypeError:
+ # if there are any non-string values (wrong dtype or hidden behind
+ # object dtype), np.sum will fail; catch and return with better message
+ for column in list_of_columns:
+ dtype = lib.infer_dtype(column, skipna=True)
+ if dtype not in ['string', 'empty']:
+ raise TypeError(
+ 'Concatenation requires list-likes containing only '
+ 'strings (or missing values). Offending values found in '
+ 'column {}'.format(dtype)) from None
+ return result
+
+
def _na_map(f, arr, na_result=np.nan, dtype=object):
# should really _check_ for NA
return _map(f, arr, na_mask=True, na_value=na_result, dtype=dtype)
@@ -2314,16 +2349,16 @@ def cat(self, others=None, sep=None, na_rep=None, join=None):
np.putmask(result, union_mask, np.nan)
not_masked = ~union_mask
- result[not_masked] = cat_core([x[not_masked] for x in all_cols],
+ result[not_masked] = cat_safe([x[not_masked] for x in all_cols],
sep)
elif na_rep is not None and union_mask.any():
# fill NaNs with na_rep in case there are actually any NaNs
all_cols = [np.where(nm, na_rep, col)
for nm, col in zip(na_masks, all_cols)]
- result = cat_core(all_cols, sep)
+ result = cat_safe(all_cols, sep)
else:
# no NaNs - can just concatenate
- result = cat_core(all_cols, sep)
+ result = cat_safe(all_cols, sep)
if isinstance(self._orig, Index):
# add dtype for case that result is all-NA
| Improve TypeError message for str.cat
Currently,
```
s = pd.Series(['a', 'b', 'c'])
s.str.cat([1, 2, 3])
```
yields
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Axel Obermeier\eclipse-workspace\pddev\pandas\core\strings.py", line 2222, in cat
res = str_cat(data, others=others, sep=sep, na_rep=na_rep)
File "C:\Users\Axel Obermeier\eclipse-workspace\pddev\pandas\core\strings.py", line 111, in str_cat
cats = [sep.join(tup) for tup in tuples]
File "C:\Users\Axel Obermeier\eclipse-workspace\pddev\pandas\core\strings.py", line 111, in <listcomp>
cats = [sep.join(tup) for tup in tuples]
TypeError: sequence item 1: expected str instance, int found
```
IMO, this should be improved to have a better error message and a shallower stack trace.
| What are you suggesting exactly? The error message reflects what you would get from a standard Python operation:
```python
>>> "".join(['foo', 1])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: sequence item 1: expected str instance, int found
```
@WillAyd
That's of course correct, but users are not dealing with str/int instances; they usually deal with Series/ndarray, etc. So the mentioned message gives the right hint but is not necessarily the best we can do. I opened this issue as a side product of cleaning up some internals/tests for #22725.
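For illustration, the kind of message the `cat_safe` helper in the patch above surfaces for this example; a sketch of expected behavior, noting that `lib.infer_dtype` reports plain Python ints as 'integer':
```python
import pandas as pd

s = pd.Series(['a', 'b', 'c'])
try:
    s.str.cat([1, 2, 3])
except TypeError as err:
    print(err)
# Expected along the lines of:
# Concatenation requires list-likes containing only strings (or
# missing values). Offending values found in column integer
```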
@jreback
I'll open another PR for this, but this will not be closed by #22725 (was split off due to review)
@WillAyd The code for this was split off of #22725 to focus that PR (I had edited the OP to reflect that) - this issue is still open. | 2019-06-01T15:51:00Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Axel Obermeier\eclipse-workspace\pddev\pandas\core\strings.py", line 2222, in cat
res = str_cat(data, others=others, sep=sep, na_rep=na_rep)
File "C:\Users\Axel Obermeier\eclipse-workspace\pddev\pandas\core\strings.py", line 111, in str_cat
cats = [sep.join(tup) for tup in tuples]
File "C:\Users\Axel Obermeier\eclipse-workspace\pddev\pandas\core\strings.py", line 111, in <listcomp>
cats = [sep.join(tup) for tup in tuples]
TypeError: sequence item 1: expected str instance, int found
| 12,765 |
|||
pandas-dev/pandas | pandas-dev__pandas-26746 | 13023c6515ca11a3353d98645f48a403243101cf | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -666,6 +666,7 @@ I/O
- Added ``cache_dates=True`` parameter to :meth:`read_csv`, which allows to cache unique dates when they are parsed (:issue:`25990`)
- :meth:`DataFrame.to_excel` now raises a ``ValueError`` when the caller's dimensions exceed the limitations of Excel (:issue:`26051`)
- :func:`read_excel` now raises a ``ValueError`` when input is of type :class:`pandas.io.excel.ExcelFile` and ``engine`` param is passed since :class:`pandas.io.excel.ExcelFile` has an engine defined (:issue:`26566`)
+- Bug while selecting from :class:`HDFStore` with ``where=''`` specified (:issue:`26610`).
Plotting
^^^^^^^^
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -98,7 +98,7 @@ def _ensure_term(where, scope_level):
where = wlist
elif maybe_expression(where):
where = Term(where, scope_level=level)
- return where
+ return where if where is None or len(where) else None
class PossibleDataLossError(Exception):
| Read from HDF with empty `where` throws an error
#### Code Sample
```python
df = pd.DataFrame(np.random.rand(4,4))
where = ''
with pd.HDFStore('test.h5') as store:
store.put('df', df, 't')
store.select('df', where = where)
```
#### Problem description
I wanted to be able to construct the `where` condition "by hand" and save it for later, so I declared it as a variable. But sometimes the constructed `where` becomes empty and the code throws an error.
```python-traceback
Traceback (most recent call last):
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3267, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-101-48181c3b59fb>", line 6, in <module>
store.select('df', where = where)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 740, in select
return it.get_result()
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 1518, in get_result
results = self.func(self.start, self.stop, where)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 733, in func
columns=columns)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 4254, in read
if not self.read_axes(where=where, **kwargs):
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 3443, in read_axes
self.selection = Selection(self, where=where, **kwargs)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 4815, in __init__
self.terms = self.generate(where)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 4828, in generate
return Expr(where, queryables=q, encoding=self.table.encoding)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/core/computation/pytables.py", line 548, in __init__
self.terms = self.parse()
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 766, in parse
return self._visitor.visit(self.expr)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 331, in visit
return visitor(node, **kwargs)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 335, in visit_Module
raise SyntaxError('only a single expression is allowed')
File "<string>", line unknown
SyntaxError: only a single expression is allowed
```
#### Expected Output
When an empty string is passed to `where`, just select the whole DataFrame. This can easily be achieved by changing the last statement to `store.select('df', where = where if where else None)`. But it would be better to add this check inside pandas, so the user need not worry about it every time a selection from HDF uses `where`.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.3.final.0
python-bits: 64
OS: Linux
OS-release: 5.0.0-16-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.24.2
pytest: 4.5.0
pip: 19.1.1
setuptools: 41.0.1
Cython: 0.29.7
numpy: 1.16.3
scipy: 1.2.1
pyarrow: None
xarray: 0.12.1
IPython: 7.2.0
sphinx: None
patsy: None
dateutil: 2.8.0
pytz: 2019.1
blosc: None
bottleneck: None
tables: 3.5.1
numexpr: 2.6.9
feather: None
matplotlib: 3.0.3
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml.etree: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.10.1
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
gcsfs: None
</details>
| Where is documented as accepting a list so if you use an empty list instead of the string you should be able to manage this the way you want
Yes, the API reference states that it accepts a list. But in the [user_guide](http://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#querying-a-table) all examples use `where` as a string. The user guide also states: "If a list/tuple of expressions is passed they will be combined via &". The latter may become a problem if one creates an empty `where = []` and starts to populate it with conditions: all of them will be forced to be combined via '&' (not '|' as may be wished). So in that case one ends up amending a single condition inside a `where = [condition]` list.
But anyway, even here the problem is the same: if `where` ends up as an empty list after all processing:
```python
df = pd.DataFrame(np.random.rand(4,4))
where = []
with pd.HDFStore('test.h5') as store:
store.put('df', df, 't')
store.select('df', where = where)
```
The same error will be raised:
```python
Traceback (most recent call last):
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3267, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-90-507edb4b117e>", line 6, in <module>
store.select('df', where = where)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 740, in select
return it.get_result()
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 1518, in get_result
results = self.func(self.start, self.stop, where)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 733, in func
columns=columns)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 4254, in read
if not self.read_axes(where=where, **kwargs):
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 3443, in read_axes
self.selection = Selection(self, where=where, **kwargs)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 4815, in __init__
self.terms = self.generate(where)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 4828, in generate
return Expr(where, queryables=q, encoding=self.table.encoding)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/core/computation/pytables.py", line 548, in __init__
self.terms = self.parse()
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 766, in parse
return self._visitor.visit(self.expr)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 331, in visit
return visitor(node, **kwargs)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 335, in visit_Module
raise SyntaxError('only a single expression is allowed')
File "<string>", line unknown
SyntaxError: only a single expression is allowed
```
Thanks for the additional references. If you'd like to take a look and clean up implementation / documentation PRs would certainly be welcome!
To make sure I understand, the proposed fix is for `where=[]` to be treated the same as `where=None`, i.e. no filtering?
@TomAugspurger, if you are asking me, yes I think it should be that way. Empty `where=[]` -> no filtering -> the whole df will be returned.
Sounds right. Can you submit a PR with tests?
@TomAugspurger, as I explained in the neighbouring groupby issue thread, I just don't know how to do it correctly.
@BeforeFlight we have a contributing guide which could be helpful:
https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#testing-with-continuous-integration
If you would like to try but run into specific issues we are of course here to help. You can also use Gitter for development questions
@WillAyd, well, I would like to. But I will start only tomorrow (it is 2pm here now). If this API design proposal can serve as my 'github environment understanding' exercise for some time, I would definitely like to try.
I'm not sure how I should test it. Putting my test into the pandas test suite, I have the following for now:
```python
def test_empty_where_lst():
with tm.ensure_clean() as path:
df = pd.DataFrame([[1, 2, 3], [1, 2, 3]])
with pd.HDFStore(path) as store:
store.put("df", df, "t")
store.select("df", where=[])
```
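For comparison, a sketch of a test that asserts the fixed round-trip behavior directly with `tm.assert_frame_equal`, so nothing needs to be said about exceptions at all:
```python
import pandas as pd
import pandas.util.testing as tm

def test_empty_where_returns_whole_frame():
    # Empty where should mean "no filtering": the whole frame
    # round-trips unchanged.
    df = pd.DataFrame([[1, 2, 3], [1, 2, 3]])
    for where in ['', []]:
        with tm.ensure_clean() as path:
            with pd.HDFStore(path) as store:
                store.put("df", df, "t")
                result = store.select("df", where=where)
                tm.assert_frame_equal(result, df)
```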
But the first version currently raises a very specific exception, `SyntaxError`. So should I prefix that function with `@pytest.mark.xfail(raises=SyntaxError)`, to be more explicit about which exception is expected?
The reason I'm asking is that the wiki [discourages](https://github.com/pandas-dev/pandas/wiki/Testing#additional-imports) checking for exceptions. | 2019-06-09T00:05:52Z | [] | [] |
Traceback (most recent call last):
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3267, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-101-48181c3b59fb>", line 6, in <module>
store.select('df', where = where)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 740, in select
return it.get_result()
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 1518, in get_result
results = self.func(self.start, self.stop, where)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 733, in func
columns=columns)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 4254, in read
if not self.read_axes(where=where, **kwargs):
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 3443, in read_axes
self.selection = Selection(self, where=where, **kwargs)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 4815, in __init__
self.terms = self.generate(where)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/io/pytables.py", line 4828, in generate
return Expr(where, queryables=q, encoding=self.table.encoding)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/core/computation/pytables.py", line 548, in __init__
self.terms = self.parse()
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 766, in parse
return self._visitor.visit(self.expr)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 331, in visit
return visitor(node, **kwargs)
File "/home/beforeflight/Coding/Python/_venvs_/main/lib/python3.7/site-packages/pandas/core/computation/expr.py", line 335, in visit_Module
raise SyntaxError('only a single expression is allowed')
File "<string>", line unknown
SyntaxError: only a single expression is allowed
| 12,788 |
|||
pandas-dev/pandas | pandas-dev__pandas-2677 | e1929f783bf87fa7b76b6420c290cc5dd7df9e59 | FAIL: test_to_string_repr_unicode
Environment:
Windows 7 32-bit, English
Python 2.7.3
When I run the nose tests:
```
nosetests pandas
```
the tests fail with:
```
FAIL: test_to_string_repr_unicode (pandas.tests.test_format.TestDataFrameFormatting)
Traceback (most recent call last):
File "D:\Python27\lib\site-packages\pandas\tests\test_format.py", line 141, in
test_to_string_repr_unicode
self.assert_(len(line) == line_len)
AssertionError: False is not true
```
How can I fix this?
| What does the following produce on your system?
``` python
In [8]: import locale
...: import sys
...: import pandas as pd
...: print(pd.__version__)
...: print( sys.stdout.encoding)
...: print( sys.stdin.encoding)
...: print(locale.getpreferredencoding())
...: print(sys.getdefaultencoding())
...: print(pd.options.display.encoding)
```
```
In [1]: import locale, sys
In [2]: import pandas as pd
In [3]: print pd.__version__
0.10.0
In [4]: print sys.stdout.encoding
cp936
In [5]: print sys.stdin.encoding
cp936
In [6]: print locale.getpreferredencoding()
cp936
In [7]: print sys.getdefaultencoding()
ascii
In [8]: print pd.options.display.encoding
cp936
```
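One guess at the root cause, purely for illustration: the failing assertion compares rendered line lengths, and under a multibyte East Asian codepage such as cp936 the character count and the encoded width of a line disagree:
```python
# -*- coding: utf-8 -*-
line = u'\u4e2d\u6587abc'        # two CJK characters plus 'abc'
print(len(line))                 # 5 characters
print(len(line.encode('gbk')))   # 7 bytes under cp936/GBK
```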
| 2013-01-10T14:48:53Z | [] | [] |
Traceback (most recent call last):
File "D:\Python27\lib\site-packages\pandas\tests\test_format.py", line 141, in
test_to_string_repr_unicode
self.assert_(len(line) == line_len)
AssertionError: False is not true
| 12,795 |
||||
pandas-dev/pandas | pandas-dev__pandas-26825 | a7f1d69b135bbbf649cf1af9a62d79acb963e47c | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -775,6 +775,7 @@ Reshaping
- Bug in :func:`DataFrame.sort_index` where an error is thrown when a multi-indexed ``DataFrame`` is sorted on all levels with the initial level sorted last (:issue:`26053`)
- Bug in :meth:`Series.nlargest` treats ``True`` as smaller than ``False`` (:issue:`26154`)
- Bug in :func:`DataFrame.pivot_table` with a :class:`IntervalIndex` as pivot index would raise ``TypeError`` (:issue:`25814`)
+- Bug in :meth:`DataFrame.transpose` where transposing a DataFrame with a timezone-aware datetime column would incorrectly raise ``ValueError`` (:issue:`26825`)
Sparse
^^^^^^
@@ -802,6 +803,7 @@ Other
- Removed unused C functions from vendored UltraJSON implementation (:issue:`26198`)
- Allow :class:`Index` and :class:`RangeIndex` to be passed to numpy ``min`` and ``max`` functions (:issue:`26125`)
- Use actual class name in repr of empty objects of a ``Series`` subclass (:issue:`27001`).
+- Bug in :class:`DataFrame` where passing an object array of timezone-aware `datetime` objects would incorrectly raise ``ValueError`` (:issue:`13287`)
.. _whatsnew_0.250.contributors:
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -21,10 +21,12 @@
from pandas.errors import AbstractMethodError
from pandas.util._decorators import Appender, Substitution
-from pandas.core.dtypes.cast import maybe_downcast_to_dtype
+from pandas.core.dtypes.cast import (
+ maybe_convert_objects, maybe_downcast_to_dtype)
from pandas.core.dtypes.common import (
ensure_int64, ensure_platform_int, is_bool, is_datetimelike,
- is_integer_dtype, is_interval_dtype, is_numeric_dtype, is_scalar)
+ is_integer_dtype, is_interval_dtype, is_numeric_dtype, is_object_dtype,
+ is_scalar)
from pandas.core.dtypes.missing import isna, notna
from pandas._typing import FrameOrSeries
@@ -334,7 +336,6 @@ def _decide_output_index(self, output, labels):
def _wrap_applied_output(self, keys, values, not_indexed_same=False):
from pandas.core.index import _all_indexes_same
- from pandas.core.tools.numeric import to_numeric
if len(keys) == 0:
return DataFrame(index=keys)
@@ -406,7 +407,6 @@ def first_not_none(values):
# provide a reduction (Frame -> Series) if groups are
# unique
if self.squeeze:
-
# assign the name to this series
if singular_series:
values[0].name = keys[0]
@@ -481,14 +481,7 @@ def first_not_none(values):
# as we are stacking can easily have object dtypes here
so = self._selected_obj
if so.ndim == 2 and so.dtypes.apply(is_datetimelike).any():
- result = result.apply(
- lambda x: to_numeric(x, errors='ignore'))
- date_cols = self._selected_obj.select_dtypes(
- include=['datetime', 'timedelta']).columns
- date_cols = date_cols.intersection(result.columns)
- result[date_cols] = (result[date_cols]
- ._convert(datetime=True,
- coerce=True))
+ result = _recast_datetimelike_result(result)
else:
result = result._convert(datetime=True)
@@ -1710,3 +1703,35 @@ def _normalize_keyword_aggregation(kwargs):
order.append((column,
com.get_callable_name(aggfunc) or aggfunc))
return aggspec, columns, order
+
+
+def _recast_datetimelike_result(result: DataFrame) -> DataFrame:
+ """
+ If we have date/time like in the original, then coerce dates
+ as we are stacking can easily have object dtypes here.
+
+ Parameters
+ ----------
+ result : DataFrame
+
+ Returns
+ -------
+ DataFrame
+
+ Notes
+ -----
+ - Assumes Groupby._selected_obj has ndim==2 and at least one
+ datetimelike column
+ """
+ result = result.copy()
+
+ obj_cols = [idx for idx in range(len(result.columns))
+ if is_object_dtype(result.dtypes[idx])]
+
+ # See GH#26285
+ for n in obj_cols:
+ converted = maybe_convert_objects(result.iloc[:, n].values,
+ convert_numeric=False)
+
+ result.iloc[:, n] = converted
+ return result
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -159,9 +159,28 @@ def init_ndarray(values, index, columns, dtype=None, copy=False):
# on the entire block; this is to convert if we have datetimelike's
# embedded in an object type
if dtype is None and is_object_dtype(values):
- values = maybe_infer_to_datetimelike(values)
- return create_block_manager_from_blocks([values], [columns, index])
+ if values.ndim == 2 and values.shape[0] != 1:
+ # transpose and separate blocks
+
+ dvals_list = [maybe_infer_to_datetimelike(row) for row in values]
+ for n in range(len(dvals_list)):
+ if isinstance(dvals_list[n], np.ndarray):
+ dvals_list[n] = dvals_list[n].reshape(1, -1)
+
+ from pandas.core.internals.blocks import make_block
+
+ # TODO: What about re-joining object columns?
+ block_values = [make_block(dvals_list[n], placement=[n])
+ for n in range(len(dvals_list))]
+
+ else:
+ datelike_vals = maybe_infer_to_datetimelike(values)
+ block_values = [datelike_vals]
+ else:
+ block_values = [values]
+
+ return create_block_manager_from_blocks(block_values, [columns, index])
def init_dict(data, index, columns, dtype=None):
| BUG: Pandas cannot create DataFrame from Numpy Array of TimeStamps
I have the following array of Timestamps:
``` python
ts_array = np.array([[Timestamp('2016-05-02 15:50:00+0000', tz='UTC', offset='5T'),
Timestamp('2016-05-02 15:50:00+0000', tz='UTC', offset='5T'),
Timestamp('2016-05-02 15:50:00+0000', tz='UTC', offset='5T')],
[Timestamp('2016-05-02 17:10:00+0000', tz='UTC', offset='5T'),
Timestamp('2016-05-02 17:10:00+0000', tz='UTC', offset='5T'),
Timestamp('2016-05-02 17:10:00+0000', tz='UTC', offset='5T')],
[Timestamp('2016-05-02 20:25:00+0000', tz='UTC', offset='5T'),
Timestamp('2016-05-02 20:25:00+0000', tz='UTC', offset='5T'),
Timestamp('2016-05-02 20:25:00+0000', tz='UTC', offset='5T')]], dtype=object)
```
I can't create a DataFrame from this array using the DataFrame constructor:
``` python
pd.DataFrame(ts_array)
```
```
Traceback (most recent call last):
File "/Users/jkelleher/anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2885, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-46-ae20c6b6248f>", line 1, in <module>
pd.DataFrame(ts_array)
File "/Users/jkelleher/anaconda/lib/python2.7/site-packages/pandas/core/frame.py", line 255, in __init__
copy=copy)
File "/Users/jkelleher/anaconda/lib/python2.7/site-packages/pandas/core/frame.py", line 432, in _init_ndarray
return create_block_manager_from_blocks([values], [columns, index])
File "/Users/jkelleher/anaconda/lib/python2.7/site-packages/pandas/core/internals.py", line 3986, in create_block_manager_from_blocks
mgr = BlockManager(blocks, axes)
File "/Users/jkelleher/anaconda/lib/python2.7/site-packages/pandas/core/internals.py", line 2591, in __init__
(block.ndim, self.ndim))
AssertionError: Number of Block dimensions (1) must equal number of axes (2)
```
I can create the DataFrame from the array using `from_records`:
``` python
ts_df = pd.DataFrame.from_records(ts_array)
```
However, when I attempt to transpose this DataFrame, I wind up with the same `AssertionError` as before.
```
AssertionError: Number of Block dimensions (1) must equal number of axes (2)
```
If I convert the Timestamps to Datetimes, the error persists. I can, however, convert the Timestamps to Datetime64 objects, and this fixes the problem.
``` python
dt64_array = np.array([[ts.to_datetime64() for ts in sublist] for sublist in ts_array])
pd.DataFrame(dt64_array)
```
```
Out[56]:
0 1 2
0 2016-05-02 15:50:00 2016-05-02 15:50:00 2016-05-02 15:50:00
1 2016-05-02 17:10:00 2016-05-02 17:10:00 2016-05-02 17:10:00
2 2016-05-02 20:25:00 2016-05-02 20:25:00 2016-05-02 20:25:00
```
``` python
pd.DataFrame(dt64_array).transpose()
```
```
Out[57]:
0 1 2
0 2016-05-02 15:50:00 2016-05-02 17:10:00 2016-05-02 20:25:00
1 2016-05-02 15:50:00 2016-05-02 17:10:00 2016-05-02 20:25:00
2 2016-05-02 15:50:00 2016-05-02 17:10:00 2016-05-02 20:25:00
```
Though I found a suitable workaround, I feel like pandas should be able to construct and operate on DataFrames of Timestamps as easily as other objects.
#### output of `pd.show_versions()`
```
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.11.final.0
python-bits: 64
OS: Darwin
OS-release: 15.5.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: None
pandas: 0.18.1
nose: 1.3.7
pip: 8.1.2
setuptools: 20.3
Cython: 0.24
numpy: 1.11.0
scipy: 0.17.1
statsmodels: 0.8.0.dev0+970e99e
xarray: None
IPython: 4.1.2
sphinx: 1.3.5
patsy: 0.4.0
dateutil: 2.5.3
pytz: 2016.4
blosc: None
bottleneck: 1.0.0
tables: 3.2.2
numexpr: 2.5
matplotlib: 1.5.1
openpyxl: 2.3.2
xlrd: 0.9.4
xlwt: 1.0.0
xlsxwriter: 0.8.4
lxml: 3.6.0
bs4: 4.4.1
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: 1.0.12
pymysql: None
psycopg2: None
jinja2: 2.8
boto: 2.39.0
pandas_datareader: None
```
| ```
In [4]: DataFrame.from_records(ts_array)
Out[4]:
0 1 2
0 2016-05-02 15:50:00+00:00 2016-05-02 15:50:00+00:00 2016-05-02 15:50:00+00:00
1 2016-05-02 17:10:00+00:00 2016-05-02 17:10:00+00:00 2016-05-02 17:10:00+00:00
2 2016-05-02 20:25:00+00:00 2016-05-02 20:25:00+00:00 2016-05-02 20:25:00+00:00
```
I suppose it's a bug, but you are going about this the wrong way: building a 2-d numpy array of Timestamps (which is completely inefficient) and THEN creating a frame.
Yeah, these are stored internally in a different way, so I guess `.T` is broken on these types of things.
If you want to step through and submit a PR, have at it.
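Until then, an untested workaround sketch that sidesteps the 2-d object path by building the frame column-wise (dtype inference on the columns may differ across versions):
```python
import numpy as np
import pandas as pd

ts = pd.Timestamp('2016-05-02 15:50', tz='UTC')
arr = np.array([[ts] * 3] * 3, dtype=object)

# Build column-wise from a dict instead of handing the constructor a
# 2-d object array; each 1-d column goes through the working path.
df = pd.DataFrame({i: arr[:, i] for i in range(arr.shape[1])})
print(df.dtypes)  # tz-aware per column (may stay object on some versions)
```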
| 2019-06-13T04:10:08Z | [] | [] |
Traceback (most recent call last):
File "/Users/jkelleher/anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2885, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-46-ae20c6b6248f>", line 1, in <module>
pd.DataFrame(ts_array)
File "/Users/jkelleher/anaconda/lib/python2.7/site-packages/pandas/core/frame.py", line 255, in __init__
copy=copy)
File "/Users/jkelleher/anaconda/lib/python2.7/site-packages/pandas/core/frame.py", line 432, in _init_ndarray
return create_block_manager_from_blocks([values], [columns, index])
File "/Users/jkelleher/anaconda/lib/python2.7/site-packages/pandas/core/internals.py", line 3986, in create_block_manager_from_blocks
mgr = BlockManager(blocks, axes)
File "/Users/jkelleher/anaconda/lib/python2.7/site-packages/pandas/core/internals.py", line 2591, in __init__
(block.ndim, self.ndim))
AssertionError: Number of Block dimensions (1) must equal number of axes (2)
| 12,802 |
|||
pandas-dev/pandas | pandas-dev__pandas-26916 | 83fe8d78b6b086f3ceabe81cd420a3c7affe9aba | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -603,6 +603,8 @@ Datetimelike
- Bug when comparing a :class:`PeriodIndex` against a zero-dimensional numpy array (:issue:`26689`)
- Bug in constructing a ``Series`` or ``DataFrame`` from a numpy ``datetime64`` array with a non-ns unit and out-of-bound timestamps generating rubbish data, which will now correctly raise an ``OutOfBoundsDatetime`` error (:issue:`26206`).
- Bug in :func:`date_range` with unnecessary ``OverflowError`` being raised for very large or very small dates (:issue:`26651`)
+- Bug where adding :class:`Timestamp` to a ``np.timedelta64`` object would raise instead of returning a :class:`Timestamp` (:issue:`24775`)
+- Bug where comparing a zero-dimensional numpy array containing a ``np.datetime64`` object to a :class:`Timestamp` would incorrect raise ``TypeError`` (:issue:`26916`)
Timedelta
^^^^^^^^^
diff --git a/pandas/_libs/tslibs/c_timestamp.pyx b/pandas/_libs/tslibs/c_timestamp.pyx
--- a/pandas/_libs/tslibs/c_timestamp.pyx
+++ b/pandas/_libs/tslibs/c_timestamp.pyx
@@ -55,6 +55,9 @@ def maybe_integer_op_deprecated(obj):
cdef class _Timestamp(datetime):
+ # higher than np.ndarray and np.matrix
+ __array_priority__ = 100
+
def __hash__(_Timestamp self):
if self.nanosecond:
return hash(self.value)
@@ -85,6 +88,15 @@ cdef class _Timestamp(datetime):
if ndim == 0:
if is_datetime64_object(other):
other = self.__class__(other)
+ elif is_array(other):
+ # zero-dim array, occurs if try comparison with
+ # datetime64 scalar on the left hand side
+ # Unfortunately, for datetime64 values, other.item()
+ # incorrectly returns an integer, so we need to use
+ # the numpy C api to extract it.
+ other = cnp.PyArray_ToScalar(cnp.PyArray_DATA(other),
+ other)
+ other = self.__class__(other)
else:
return NotImplemented
elif is_array(other):
| BUG: timedelta64 + Timestamp raises
```
>>> np.timedelta64(3600*10**9, 'ns') + pd.Timestamp.now()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: ufunc add cannot use operands with types dtype('<m8[ns]') and dtype('O')
```
I think we can fix this by defining `Timestamp.__array_priority__`
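A toy illustration of the mechanism with a hypothetical class: any object whose `__array_priority__` beats ndarray's makes numpy return `NotImplemented`, so Python falls back to the object's reflected operator:
```python
import numpy as np

class Wrapped(object):
    # Higher priority than ndarray: numpy's scalar/array ops back off
    # and Python dispatches to our reflected operator instead.
    __array_priority__ = 100

    def __radd__(self, other):
        return ('Wrapped.__radd__', other)

print(np.timedelta64(3600 * 10**9, 'ns') + Wrapped())
# ('Wrapped.__radd__', numpy.timedelta64(3600000000000,'ns'))
```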
| 2019-06-18T03:08:55Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: ufunc add cannot use operands with types dtype('<m8[ns]') and dtype('O')
| 12,813 |
||||
pandas-dev/pandas | pandas-dev__pandas-27144 | af7f2ef73e449f01acc6de47463c9b1440c6b0fb | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -566,6 +566,7 @@ Other API changes
- Removed support of gtk package for clipboards (:issue:`26563`)
- Using an unsupported version of Beautiful Soup 4 will now raise an ``ImportError`` instead of a ``ValueError`` (:issue:`27063`)
- :meth:`Series.to_excel` and :meth:`DataFrame.to_excel` will now raise a ``ValueError`` when saving timezone aware data. (:issue:`27008`, :issue:`7056`)
+- :meth:`DataFrame.to_hdf` and :meth:`Series.to_hdf` will now raise a ``NotImplementedError`` when saving a :class:`MultiIndex` with extention data types for a ``fixed`` format. (:issue:`7775`)
.. _whatsnew_0250.deprecations:
@@ -719,6 +720,7 @@ Timezones
- Bug in :func:`to_datetime` with ``unit='ns'`` would drop timezone information from the parsed argument (:issue:`26168`)
- Bug in :func:`DataFrame.join` where joining a timezone aware index with a timezone aware column would result in a column of ``NaN`` (:issue:`26335`)
- Bug in :func:`date_range` where ambiguous or nonexistent start or end times were not handled by the ``ambiguous`` or ``nonexistent`` keywords respectively (:issue:`27088`)
+- Bug in :meth:`DatetimeIndex.union` when combining a timezone aware and timezone unaware :class:`DatetimeIndex` (:issue:`21671`)
Numeric
^^^^^^^
@@ -814,6 +816,7 @@ I/O
- :func:`read_excel` now raises a ``ValueError`` when input is of type :class:`pandas.io.excel.ExcelFile` and ``engine`` param is passed since :class:`pandas.io.excel.ExcelFile` has an engine defined (:issue:`26566`)
- Bug while selecting from :class:`HDFStore` with ``where=''`` specified (:issue:`26610`).
- Fixed bug in :func:`DataFrame.to_excel()` where custom objects (i.e. `PeriodIndex`) inside merged cells were not being converted into types safe for the Excel writer (:issue:`27006`)
+- Bug in :meth:`read_hdf` where reading a timezone aware :class:`DatetimeIndex` would raise a ``TypeError`` (:issue:`11926`)
Plotting
^^^^^^^^
@@ -868,6 +871,7 @@ Reshaping
- Bug in :meth:`Series.nlargest` treats ``True`` as smaller than ``False`` (:issue:`26154`)
- Bug in :func:`DataFrame.pivot_table` with a :class:`IntervalIndex` as pivot index would raise ``TypeError`` (:issue:`25814`)
- Bug in :meth:`DataFrame.transpose` where transposing a DataFrame with a timezone-aware datetime column would incorrectly raise ``ValueError`` (:issue:`26825`)
+- Bug in :func:`pivot_table` when pivoting a timezone aware column as the ``values`` would remove timezone information (:issue:`14948`)
Sparse
^^^^^^
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -23,7 +23,8 @@
from pandas.core.dtypes.common import (
ensure_object, is_categorical_dtype, is_datetime64_dtype,
- is_datetime64tz_dtype, is_list_like, is_timedelta64_dtype)
+ is_datetime64tz_dtype, is_extension_type, is_list_like,
+ is_timedelta64_dtype)
from pandas.core.dtypes.missing import array_equivalent
from pandas import (
@@ -2647,6 +2648,9 @@ def write_multi_index(self, key, index):
index.codes,
index.names)):
# write the level
+ if is_extension_type(lev):
+ raise NotImplementedError("Saving a MultiIndex with an "
+ "extension dtype is not supported.")
level_key = '{key}_level{idx}'.format(key=key, idx=i)
conv_level = _convert_index(lev, self.encoding, self.errors,
self.format_type).set_name(level_key)
| BUG: selecting from HDFStore with a tz-aware level of a multi-index
I'm encountering a bug when I query a multiindex dataframe that has a timezone-aware DatetimeIndex in one of the multiindex levels.
This only happens
1) for a multiindex with one of the levels holding timestamps with timezones (as seen in [1]); if the timestamps have no timezone set, there is no issue (as seen in [2])
2) if the query returns no rows
3) in pandas 0.17.\*; this was working fine in pandas 0.16.*
``` python
In [1]: periods = 10
...: dts = pd.date_range('20151201', periods=periods, freq='D', tz='UTC') #WITH TIMEZONE
...: mi = pd.MultiIndex.from_arrays([dts, range(periods)], names = ['DATE', 'NO'])
...: df = pd.DataFrame({'MYCOL':0}, index=mi)
...: file_path = 'table.h5'
...: key = 'mykey'
...: with pd.HDFStore(file_path, 'w') as store:
...: store.append(key, df, format='table', append=True)
...: dfres = store.select(key, where="""DATE > '20151220'""")
...: print(dfres)
...:
...:
Traceback (most recent call last):
File "<ipython-input-1-e0b7db50fd4d>", line 9, in <module>
dfres = store.select(key, where="""DATE > '20151220'""")
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/io/pytables.py", line 669, in select
return it.get_result()
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/io/pytables.py", line 1352, in get_result
results = self.func(self.start, self.stop, where)
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/io/pytables.py", line 662, in func
columns=columns, **kwargs)
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/io/pytables.py", line 4170, in read
df = super(AppendableMultiFrameTable, self).read(**kwargs)
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/io/pytables.py", line 4029, in read
df = concat(frames, axis=1, verify_integrity=False).consolidate()
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/tools/merge.py", line 813, in concat
return op.get_result()
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/tools/merge.py", line 995, in get_result
mgrs_indexers, self.new_axes, concat_axis=self.axis, copy=self.copy)
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/core/internals.py", line 4456, in concatenate_block_managers
for placement, join_units in concat_plan]
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/core/internals.py", line 4456, in <listcomp>
for placement, join_units in concat_plan]
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/core/internals.py", line 4553, in concatenate_join_units
for ju in join_units]
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/core/internals.py", line 4553, in <listcomp>
for ju in join_units]
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/core/internals.py", line 4801, in get_reindexed_values
missing_arr = np.empty(self.shape, dtype=empty_dtype)
TypeError: data type not understood
In [2]: periods = 10
...: dts = pd.date_range('20151201', periods=periods, freq='D') #WITHOUT TIMEZONE
...: mi = pd.MultiIndex.from_arrays([dts, range(periods)], names = ['DATE', 'NO'])
...: df = pd.DataFrame({'MYCOL':0}, index=mi)
...: file_path = 'table.h5'
...: key = 'mykey'
...: with pd.HDFStore(file_path, 'w') as store:
...: store.append(key, df, format='table', append=True)
...: dfres = store.select(key, where="""DATE > '20151220'""")
...: print(dfres)
...:
...:
Empty DataFrame
Columns: [MYCOL]
Index: []
In [3]: pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.5.1.final.0
python-bits: 64
OS: Linux
OS-release: 2.6.32-431.11.2.el6.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
pandas: 0.17.1
nose: 1.3.7
pip: 7.1.2
setuptools: 19.1.1
Cython: 0.23.4
numpy: 1.10.2
scipy: 0.16.1
statsmodels: None
IPython: 4.0.1
sphinx: 1.3.1
patsy: 0.4.0
dateutil: 2.4.2
pytz: 2015.7
blosc: None
bottleneck: 1.0.0
tables: 3.2.2
numexpr: 2.4.4
matplotlib: 1.5.0
openpyxl: 2.2.6
xlrd: 0.9.4
xlwt: 1.0.0
xlsxwriter: 0.7.7
lxml: 3.5.0
bs4: 4.4.1
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: 1.0.10
pymysql: None
psycopg2: None
Jinja2: None
```
| So it's the readback, not the writing. I _think_ that it's taking the wrong path on the dtype conversion.
```
import numpy as np
import pandas as pd
periods = 10
dts = pd.date_range('20151201', periods=periods, freq='D', tz='UTC') #WITH TIMEZONE
mi = pd.MultiIndex.from_arrays([dts, range(periods)], names = ['DATE', 'NO'])
df = pd.DataFrame({'MYCOL':0}, index=mi)
file_path = 'table.h5'
key = 'mykey'
with pd.HDFStore(file_path, 'w') as store:
store.append(key, df, format='table', append=True)
print(pd.read_hdf(file_path, key))
dfres = pd.read_hdf(file_path, key, where="DATE > 20151220")
print(dfres)
```
Has there been any update to patch this? Any ideas on which commit broke this since 0.16\* -> 0.17*?
I'm encountering the same issue when selecting datetime64[ns, tz] data using an iterator.
There are vast changes to the way tz's work in 0.17 vs. 0.16; see the whatsnew [here](http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#datetime-with-tz).
This is a relatively simple fix, however. Pull requests are welcome.
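In the meantime, an untested workaround sketch: keep the tz-aware values in an ordinary data column, select on it, and rebuild the MultiIndex afterwards:
```python
import pandas as pd

periods = 10
dts = pd.date_range('20151201', periods=periods, freq='D', tz='UTC')
df = pd.DataFrame({'MYCOL': 0, 'NO': range(periods), 'DATE': dts})

with pd.HDFStore('table.h5', 'w') as store:
    # Tz-aware values in a plain data column round-trip in table format.
    store.append('mykey', df, format='table', data_columns=['DATE'])
    res = store.select('mykey', where="DATE > '20151220'")

# Rebuild the MultiIndex after selecting.
res = res.set_index(['DATE', 'NO'])
print(res)
```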
| 2019-06-30T15:37:34Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-1-e0b7db50fd4d>", line 9, in <module>
dfres = store.select(key, where="""DATE > '20151220'""")
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/io/pytables.py", line 669, in select
return it.get_result()
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/io/pytables.py", line 1352, in get_result
results = self.func(self.start, self.stop, where)
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/io/pytables.py", line 662, in func
columns=columns, **kwargs)
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/io/pytables.py", line 4170, in read
df = super(AppendableMultiFrameTable, self).read(**kwargs)
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/io/pytables.py", line 4029, in read
df = concat(frames, axis=1, verify_integrity=False).consolidate()
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/tools/merge.py", line 813, in concat
return op.get_result()
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/tools/merge.py", line 995, in get_result
mgrs_indexers, self.new_axes, concat_axis=self.axis, copy=self.copy)
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/core/internals.py", line 4456, in concatenate_block_managers
for placement, join_units in concat_plan]
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/core/internals.py", line 4456, in <listcomp>
for placement, join_units in concat_plan]
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/core/internals.py", line 4553, in concatenate_join_units
for ju in join_units]
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/core/internals.py", line 4553, in <listcomp>
for ju in join_units]
File "/export/data/anaconda/anaconda3.2.4/lib/python3.5/site-packages/pandas/core/internals.py", line 4801, in get_reindexed_values
missing_arr = np.empty(self.shape, dtype=empty_dtype)
TypeError: data type not understood
| 12,843 |
|||
pandas-dev/pandas | pandas-dev__pandas-27201 | 2efb60717bda9fc64344c5f6647d58564930808e | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -1094,6 +1094,7 @@ I/O
- Bug while selecting from :class:`HDFStore` with ``where=''`` specified (:issue:`26610`).
- Fixed bug in :func:`DataFrame.to_excel()` where custom objects (i.e. `PeriodIndex`) inside merged cells were not being converted into types safe for the Excel writer (:issue:`27006`)
- Bug in :meth:`read_hdf` where reading a timezone aware :class:`DatetimeIndex` would raise a ``TypeError`` (:issue:`11926`)
+- Bug in :meth:`to_msgpack` and :meth:`read_msgpack` which would raise a ``ValueError`` rather than a ``FileNotFoundError`` for an invalid path (:issue:`27160`)
Plotting
^^^^^^^^
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2560,7 +2560,7 @@ def to_msgpack(self, path_or_buf=None, encoding="utf-8", **kwargs):
Parameters
----------
path : string File path, buffer-like, or None
- if None, return generated string
+ if None, return generated bytes
append : bool whether to append to an existing msgpack
(default is False)
compress : type of compressor (zlib or blosc), default to None (no
@@ -2568,9 +2568,9 @@ def to_msgpack(self, path_or_buf=None, encoding="utf-8", **kwargs):
Returns
-------
- None or str
+ None or bytes
If path_or_buf is None, returns the resulting msgpack format as a
- string. Otherwise returns None.
+ byte string. Otherwise returns None.
"""
from pandas.io import packers
diff --git a/pandas/io/packers.py b/pandas/io/packers.py
--- a/pandas/io/packers.py
+++ b/pandas/io/packers.py
@@ -108,7 +108,7 @@ def to_msgpack(path_or_buf, *args, **kwargs):
Parameters
----------
path_or_buf : string File path, buffer-like, or None
- if None, return generated string
+ if None, return generated bytes
args : an object or objects to serialize
encoding : encoding for unicode objects
append : boolean whether to append to an existing msgpack
@@ -139,8 +139,12 @@ def writer(fh):
path_or_buf = _stringify_path(path_or_buf)
if isinstance(path_or_buf, str):
- with open(path_or_buf, mode) as fh:
- writer(fh)
+ try:
+ with open(path_or_buf, mode) as fh:
+ writer(fh)
+ except FileNotFoundError:
+ msg = "File b'{}' does not exist".format(path_or_buf)
+ raise FileNotFoundError(msg)
elif path_or_buf is None:
buf = BytesIO()
writer(buf)
@@ -204,13 +208,11 @@ def read(fh):
# see if we have an actual file
if isinstance(path_or_buf, str):
try:
- exists = os.path.exists(path_or_buf)
- except (TypeError, ValueError):
- exists = False
-
- if exists:
with open(path_or_buf, "rb") as fh:
return read(fh)
+ except FileNotFoundError:
+ msg = "File b'{}' does not exist".format(path_or_buf)
+ raise FileNotFoundError(msg)
if isinstance(path_or_buf, bytes):
# treat as a binary-like
| Misleading error for pd.read_msgpack
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
pd.read_msgpack('this/path/does/not/exist')
```
#### Problem description
Such an error is misleading because it suggests that there is a problem with the datatype being passed, not that the path does not exist. The error raised is:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".local/anaconda3/lib/python3.7/site-packages/pandas/io/packers.py", line 226, in read_msgpack
raise ValueError('path_or_buf needs to be a string file path or file-like')
ValueError: path_or_buf needs to be a string file path or file-like
```
#### Expected Output
Raise an error indicating that the path was not found.
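With the change in the patch above, the same call raises a `FileNotFoundError` instead, along these lines:
```
>>> pd.read_msgpack('this/path/does/not/exist')
FileNotFoundError: File b'this/path/does/not/exist' does not exist
```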
#### Output of ``pd.show_versions()``
<details>
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.3.final.0
python-bits: 64
OS: Linux
OS-release: 4.18.0-24-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.24.2
pytest: 4.3.1
pip: 19.0.3
setuptools: 40.8.0
Cython: 0.29.6
numpy: 1.16.2
scipy: 1.2.1
pyarrow: None
xarray: None
IPython: 7.4.0
sphinx: 1.8.5
patsy: 0.5.1
dateutil: 2.8.0
pytz: 2018.9
blosc: None
bottleneck: 1.2.1
tables: 3.5.1
numexpr: 2.6.9
feather: None
matplotlib: 3.0.3
openpyxl: 2.6.1
xlrd: 1.2.0
xlwt: 1.3.0
xlsxwriter: 1.1.5
lxml.etree: 4.3.2
bs4: 4.7.1
html5lib: 1.0.1
sqlalchemy: 1.3.1
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datarea
</details>
| I think the first argument of read_msgpack can *also* be data.
```
In [4]: pd.read_msgpack(b'')
/Users/taugspurger/Envs/pandas-dev/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3296:
FutureWarning: The read_msgpack is deprecated and will be removed in a
future version.
It is recommended to use pyarrow for on-the-wire transmission of pandas
objects.
exec(code_obj, self.user_global_ns, self.user_ns)
Out[4]: []
```
Regardless, I believe we're deprecating read_msgpack so this may not be worth changing.
Yeah, this is true of several routines (e.g. read_json); there is an issue about this somewhere. But for msgpack, since it is deprecated, this is out of scope (we would take a reasonable patch though).
A PR is now submitted :->
> I think the first argument of read_msgpack can *also* be data.
I think assuming a string passed to `pd.read_msgpack` is a filepath, and raising if it is not found, is OK?
Passing the data as `bytes` works as intended.
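A sketch of that approach, which is essentially what the merged patch above does: try to open the path and re-raise with a clearer message (`_read_msgpack_from_path` is a made-up wrapper name for illustration):
```python
def _read_msgpack_from_path(path_or_buf, read):
    # "read" is the unpacking callback, as in pandas/io/packers.py
    if isinstance(path_or_buf, str):
        try:
            with open(path_or_buf, "rb") as fh:
                return read(fh)
        except FileNotFoundError:
            msg = "File b'{}' does not exist".format(path_or_buf)
            raise FileNotFoundError(msg)
```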
The docs for `pandas.DataFrame.to_msgpack` are misleading (https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_msgpack.html?highlight=to_msgpack#pandas-dataframe-to-msgpack): they suggest that a string is returned when bytes are returned...
```
path : string File path, buffer-like, or None
if None, return generated string
```
```python
>>> import numpy as np
>>> import pandas as pd
>>> from pandas import DataFrame
>>> df = DataFrame(np.random.randn(10, 2))
>>>
>>>
>>> df.to_msgpack(None)
b'\x84\xa3typ\xadblock_manager\xa5klass\xa9DataFrame\xa4axes\x92\x86\xa3typ\xabrange_index\xa5klass\xaaRangeIndex\xa4name\xc0\xa5start\x00\xa4s
top\x02\xa4step\x01\x86\xa3typ\xabrange_index\xa5klass\xaaRangeIndex\xa4name\xc0\xa5start\x00\xa4stop\n\xa4step\x01\xa6blocks\x91\x86\xa4locs\x
86\xa3typ\xa7ndarray\xa5shape\x91\x02\xa4ndim\x01\xa5dtype\xa5int64\xa4data\xd8\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00
\x00\xa8compress\xc0\xa6values\xc7\xa0\x00A\x10\x94Z\x0f|\xd0?F]>R\xc7\xfc\xf5\xbf\xa4\xeb\xe2:\x07X\xc5\xbf&\x1bAje\t\xbb?\x98w9\x17:"\xe1?#\x
e4\xc9\xda\x86\xdf\xaf\xbf\xec\xe63K2\x03\xee\xbf\xad0%v\x11$\xda\xbf\xa1\x02@\xff\xb7\xc8\xff?\xb0G\x11\x02\x80\x13\xe1?)\xf8l\xcb~/\xd2?\xb2\
x17I\xeb\x91k\x03@\xbf\xfaj\xb2\x89\x14\xc2\xbf\xbd5\xba\xb3j\x1c\xed?u\xe504\x17\xaf\xd0\xbf\xc7\xa5\xc3\xf3\x12\xf1\xf4?\xe6\xf0\x05\xf2\xef\
xd6\x05@\xec\xeb\xd1\x80w}\xf0\xbfx\x94\x82\x10"U\xeb?.\xbdZI\x89X\xea?\xa5shape\x92\x02\n\xa5dtype\xa7float64\xa5klass\xaaFloatBlock\xa8compre
ss\xc0'
>>>
>>> pd.read_msgpack(df.to_msgpack(None))
sys:1: FutureWarning: The read_msgpack is deprecated and will be removed in a future version.
It is recommended to use pyarrow for on-the-wire transmission of pandas objects.
0 1
0 0.257572 0.284149
1 -1.374214 2.427524
2 -0.166749 -0.141252
3 0.105612 0.909719
4 0.535428 -0.260687
5 -0.062252 1.308856
6 -0.937890 2.729950
7 -0.408451 -1.030632
8 1.986504 0.854142
9 0.533630 0.823308
>>>
```
| 2019-07-03T05:29:48Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".local/anaconda3/lib/python3.7/site-packages/pandas/io/packers.py", line 226, in read_msgpack
raise ValueError('path_or_buf needs to be a string file path or file-like')
ValueError: path_or_buf needs to be a string file path or file-like
| 12,848 |
|||
pandas-dev/pandas | pandas-dev__pandas-27243 | 2efb60717bda9fc64344c5f6647d58564930808e | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -1151,6 +1151,7 @@ Reshaping
- Bug in :func:`DataFrame.pivot_table` with a :class:`IntervalIndex` as pivot index would raise ``TypeError`` (:issue:`25814`)
- Bug in :meth:`DataFrame.transpose` where transposing a DataFrame with a timezone-aware datetime column would incorrectly raise ``ValueError`` (:issue:`26825`)
- Bug in :func:`pivot_table` when pivoting a timezone aware column as the ``values`` would remove timezone information (:issue:`14948`)
+- Bug in :func:`merge_asof` when specifying multiple ``by`` columns where one is ``datetime64[ns, tz]`` dtype (:issue:`26649`)
Sparse
^^^^^^
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -1686,6 +1686,9 @@ def _get_join_indexers(self):
def flip(xs):
""" unlike np.transpose, this returns an array of tuples """
+ xs = [
+ x if not is_extension_array_dtype(x) else x._ndarray_values for x in xs
+ ]
labels = list(string.ascii_lowercase[: len(xs)])
dtypes = [x.dtype for x in xs]
labeled_dtypes = list(zip(labels, dtypes))
| merge_asof with one tz-aware datetime "by" parameter and another parameter raises
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
left = pd.DataFrame({
'by_col1': pd.DatetimeIndex(['2018-01-01']).tz_localize('UTC'),
'by_col2': ['HELLO'],
'on_col': [2],
'value': ['a']})
right = pd.DataFrame({
'by_col1': pd.DatetimeIndex(['2018-01-01']).tz_localize('UTC'),
'by_col2': ['WORLD'],
'on_col': [1],
'value': ['b']})
pd.merge_asof(left, right, by=['by_col1', 'by_col2'], on='on_col')
```
#### Problem description
This is very similar to: https://github.com/pandas-dev/pandas/issues/21184
The only difference is that the `merge_asof` `by` is made of 2 columns (instead of one):
* one is tz-aware
* the other one is something else (string, number etc...)
When running this, I get:
```
Traceback (most recent call last):
File "test.py", line 13, in <module>
pd.merge_asof(left, right, by=['by_col1', 'by_col2'], on='on_col')
File "myenv/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 462, in merge_asof
return op.get_result()
File "myenv/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 1256, in get_result
join_index, left_indexer, right_indexer = self._get_join_info()
File "myenv/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 756, in _get_join_info
right_indexer) = self._get_join_indexers()
File "myenv/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 1504, in _get_join_indexers
left_by_values = flip(left_by_values)
File "myenv/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 1457, in flip
return np.array(lzip(*xs), labeled_dtypes)
File "myenv/lib/python3.6/site-packages/pandas/core/dtypes/dtypes.py", line 150, in __repr__
return str(self)
File "myenv/lib/python3.6/site-packages/pandas/core/dtypes/dtypes.py", line 129, in __str__
return self.__unicode__()
File "myenv/lib/python3.6/site-packages/pandas/core/dtypes/dtypes.py", line 704, in __unicode__
return "datetime64[{unit}, {tz}]".format(unit=self.unit, tz=self.tz)
SystemError: PyEval_EvalFrameEx returned a result with an error set
```
#### Expected Output
I expect the `merge_asof` to work and pick up the `by` columns accordingly.
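For reference, the result on 0.24.2 (and after the fix) should look roughly like this; the `by` columns differ between the two frames, so no asof match is found and `value_y` is NaN:
```
                    by_col1 by_col2  on_col value_x value_y
0 2018-01-01 00:00:00+00:00   HELLO       2       a     NaN
```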
#### Output of ``pd.show_versions()`` (pandas 0.24.2)
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.3.final.0
python-bits: 64
OS: Linux
OS-release: 3.10.0-862.3.3.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.24.2
pytest: 4.5.0
pip: 19.1.1
setuptools: 40.8.0
Cython: 0.28.5
numpy: 1.16.4
scipy: 1.1.0
pyarrow: 0.12.1
xarray: None
IPython: 7.3.0
sphinx: 1.4.6
patsy: 0.5.1
dateutil: 2.8.0
pytz: 2019.1
blosc: None
bottleneck: 1.2.1
tables: 3.4.4
numexpr: 2.6.9
feather: None
matplotlib: 3.0.2
openpyxl: 2.5.3
xlrd: None
xlwt: None
xlsxwriter: None
lxml.etree: 4.3.1
bs4: None
html5lib: None
sqlalchemy: 1.2.18
pymysql: None
psycopg2: 2.7.1 (dt dec pq3 ext lo64)
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: 0.7.0
gcsfs: None
</details>
| Thanks for the report. Here's the traceback I get on master. Investigations and PRs welcome!
```
In [4]: import pandas as pd
...:
...: left = pd.DataFrame({
...: 'by_col1': pd.DatetimeIndex(['2018-01-01']).tz_localize('UTC'),
...: 'by_col2': ['HELLO'],
...: 'on_col': [2],
...: 'value': ['a']})
...: right = pd.DataFrame({
...: 'by_col1': pd.DatetimeIndex(['2018-01-01']).tz_localize('UTC'),
...: 'by_col2': ['WORLD'],
...: 'on_col': [1],
...: 'value': ['b']})
...: pd.merge_asof(left, right, by=['by_col1', 'by_col2'], on='on_col')
<DatetimeTZDtype object at 0x1180be080>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-4-450ebb8f2376> in <module>
11 'on_col': [1],
12 'value': ['b']})
---> 13 pd.merge_asof(left, right, by=['by_col1', 'by_col2'], on='on_col')
~/pandas-mroeschke/pandas/core/reshape/merge.py in merge_asof(left, right, on, left_on, right_on, left_index, right_index, by, left_by, right_by, suffixes, tolerance, allow_exact_matches, direction)
465 allow_exact_matches=allow_exact_matches,
466 direction=direction)
--> 467 return op.get_result()
468
469
~/pandas-mroeschke/pandas/core/reshape/merge.py in get_result(self)
1296
1297 def get_result(self):
-> 1298 join_index, left_indexer, right_indexer = self._get_join_info()
1299
1300 # this is a bit kludgy
~/pandas-mroeschke/pandas/core/reshape/merge.py in _get_join_info(self)
759 else:
760 (left_indexer,
--> 761 right_indexer) = self._get_join_indexers()
762
763 if self.right_index:
~/pandas-mroeschke/pandas/core/reshape/merge.py in _get_join_indexers(self)
1560 right_by_values = right_by_values[0]
1561 else:
-> 1562 left_by_values = flip(left_by_values)
1563 right_by_values = flip(right_by_values)
1564
~/pandas-mroeschke/pandas/core/reshape/merge.py in flip(xs)
1513 dtypes = [x.dtype for x in xs]
1514 labeled_dtypes = list(zip(labels, dtypes))
-> 1515 return np.array(list(zip(*xs)), labeled_dtypes)
1516
1517 # values to compare
TypeError: data type not understood
In [5]: pd.__version__
Out[5]: '0.25.0.dev0+657.gc07d71d13'
```
Good day. While debugging I found this: there seems to be an error in the type of the second column. It should be `[('a', datetime64[ns, UTC]), ('b', dtype('U'))]`, or we have to fall back to object. Judging by the description of the error, that looks plausible.
https://github.com/pandas-dev/pandas/blob/ea06f8d1157601b5fdb48598e27b02149828fba0/pandas/core/reshape/merge.py#L1510-L1515
```
(Pdb) xs
[<DatetimeArray>
['2018-01-01 00:00:00+00:00']
Length: 1, dtype: datetime64[ns, UTC], array(['HELLO'], dtype=object)]
(Pdb) lzip(*xs)
[(Timestamp('2018-01-01 00:00:00+0000', tz='UTC'), 'HELLO')]
(Pdb) labeled_dtypes
[('a', datetime64[ns, UTC]), ('b', dtype('O'))]
(Pdb)
```
`DatetimeArray[ns, tz].__iter__` will return an ndarray of Timestamp objects. I'm not familiar with this section of the code, but can we use i8values rather than the datetimes at this point?
> `DatetimeArray[ns, tz].__iter__` will return an ndarray of Timestamp objects. I'm not familiar with this section of the code, but can we use i8values rather than the datetimes at this point?
https://github.com/pandas-dev/pandas/blob/ea06f8d1157601b5fdb48598e27b02149828fba0/pandas/core/reshape/merge.py#L1510-L1515
I rewrote the conversion to 'i8' like this:
```
dtypes = [x.view('i8') if needs_i8_conversion(x.dtype) else x.dtype for x in xs]
```
Error:
```
TypeError: data type not understood
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "TestStand/Main.py", line 16, in <module>
pd.merge_asof(left, right, by=['by_col1', 'by_col2'], on='on_col')
File "venv/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line 462, in merge_asof
return op.get_result()
File "venv/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line 1258, in get_result
join_index, left_indexer, right_indexer = self._get_join_info()
File "venv/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line 758, in _get_join_info
right_indexer) = self._get_join_indexers()
File "venv/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line 1507, in _get_join_indexers
left_by_values = flip(left_by_values)
File "venv/lib/python3.7/site-packages/pandas/core/reshape/merge.py", line 1459, in flip
return np.array(buff, labeled_dtypes)
File "venv/lib/python3.7/site-packages/numpy/core/arrayprint.py", line 1404, in _array_repr_implementation
if type(arr) is not ndarray:
SystemError: <class 'type'> returned a result with an error set
```
And here is what prints out at one of the stages, if you step through with pdb:
```
next
array([1514764800000000000])TypeError: data type not understood
```
It seems to me that a dtype should appear here rather than an array of values. Does that look like a bug, or is it expected?
```
(Pdb) dtypes
[array([1514764800000000000]), dtype('O')]
```
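The fix that eventually landed (visible in the patch at the top of this entry) side-steps the raw `i8` view: it converts extension arrays to their underlying ndarray before the structured array is built. A minimal sketch of the idea; `_ndarray_values` is a private pandas attribute, and it is what the merged patch itself uses:
```python
from pandas.api.types import is_extension_array_dtype

def _to_plain_arrays(xs):
    # datetime64[ns, tz] (and other extension arrays) can't appear in a
    # numpy structured dtype, so substitute the underlying tz-naive
    # ndarray representation before calling np.array(..., dtype=...).
    return [
        x._ndarray_values if is_extension_array_dtype(x) else x
        for x in xs
    ]
```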
I don't really understand what `flip` is doing, but we're making a numpy record array / structured dtype. We apparently can't pass a `datetime64[ns, tz]` array into `flip`. | 2019-07-05T07:14:18Z | [] | [] |
Traceback (most recent call last):
File "test.py", line 13, in <module>
pd.merge_asof(left, right, by=['by_col1', 'by_col2'], on='on_col')
File "myenv/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 462, in merge_asof
return op.get_result()
File "myenv/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 1256, in get_result
join_index, left_indexer, right_indexer = self._get_join_info()
File "myenv/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 756, in _get_join_info
right_indexer) = self._get_join_indexers()
File "myenv/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 1504, in _get_join_indexers
left_by_values = flip(left_by_values)
File "myenv/lib/python3.6/site-packages/pandas/core/reshape/merge.py", line 1457, in flip
return np.array(lzip(*xs), labeled_dtypes)
File "myenv/lib/python3.6/site-packages/pandas/core/dtypes/dtypes.py", line 150, in __repr__
return str(self)
File "myenv/lib/python3.6/site-packages/pandas/core/dtypes/dtypes.py", line 129, in __str__
return self.__unicode__()
File "myenv/lib/python3.6/site-packages/pandas/core/dtypes/dtypes.py", line 704, in __unicode__
return "datetime64[{unit}, {tz}]".format(unit=self.unit, tz=self.tz)
SystemError: PyEval_EvalFrameEx returned a result with an error set
| 12,853 |
|||
pandas-dev/pandas | pandas-dev__pandas-27317 | c74a853add15425cf44e6c6943ade28eb3240d19 | diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -219,7 +219,7 @@ class CategoricalDtype(PandasExtensionDtype, ExtensionDtype):
kind = "O" # type: str_type
str = "|O08"
base = np.dtype("O")
- _metadata = ("categories", "ordered")
+ _metadata = ("categories", "ordered", "_ordered_from_sentinel")
_cache = {} # type: Dict[str_type, PandasExtensionDtype]
def __init__(self, categories=None, ordered: OrderedType = ordered_sentinel):
@@ -356,6 +356,7 @@ def __setstate__(self, state: Dict[str_type, Any]) -> None:
# pickle -> need to set the settable private ones here (see GH26067)
self._categories = state.pop("categories", None)
self._ordered = state.pop("ordered", False)
+ self._ordered_from_sentinel = state.pop("_ordered_from_sentinel", False)
def __hash__(self) -> int:
# _hash_categories returns a uint64, so use the negative
| BUG: Calling Series.astype('category') on a categorical series loaded using pd.read_pickle errors on pandas-0.25.0rc0
#### Code Sample, a copy-pastable example if possible
```python
import os
import pandas as pd
s = pd.Series(["a", "b", "c", "a"], dtype="category")
s.astype('category')
FILEPATH = 'example.pickle'
s.to_pickle(FILEPATH)
s = pd.read_pickle(FILEPATH)
os.remove(FILEPATH)
s.astype('category')
```
Output
```python-traceback
Traceback (most recent call last):
File "mre.py", line 13, in <module>
s.astype('category')
File "/Users/roy/pandas/pandas/core/generic.py", line 5935, in astype
dtype=dtype, copy=copy, errors=errors, **kwargs
File "/Users/roy/pandas/pandas/core/internals/managers.py", line 581, in astype
return self.apply("astype", dtype=dtype, **kwargs)
File "/Users/roy/pandas/pandas/core/internals/managers.py", line 438, in apply
applied = getattr(b, f)(**kwargs)
File "/Users/roy/pandas/pandas/core/internals/blocks.py", line 555, in astype
return self._astype(dtype, copy=copy, errors=errors, values=values, **kwargs)
File "/Users/roy/pandas/pandas/core/internals/blocks.py", line 606, in _astype
return self.make_block(self.values.astype(dtype, copy=copy))
File "/Users/roy/pandas/pandas/core/arrays/categorical.py", line 524, in astype
self = self.copy() if copy else self
File "/Users/roy/pandas/pandas/core/arrays/categorical.py", line 503, in copy
values=self._codes.copy(), dtype=self.dtype, fastpath=True
File "/Users/roy/pandas/pandas/core/arrays/categorical.py", line 353, in __init__
self._dtype = self._dtype.update_dtype(dtype)
File "/Users/roy/pandas/pandas/core/dtypes/dtypes.py", line 556, in update_dtype
new_ordered_from_sentinel = dtype._ordered_from_sentinel
AttributeError: 'CategoricalDtype' object has no attribute '_ordered_from_sentinel'
```
#### Problem description
Calling `Series.astype('category')` on a categorical series loaded using `pd.read_pickle` errors with pandas 0.25.0rc0. The example code ran without error using pandas 0.24.2.
#### Expected Output
```
0 a
1 b
2 c
3 a
dtype: category
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : c64c9cb44222a42f7b02d4d6007919cd0645f1be
python : 3.7.3.final.0
python-bits : 64
OS : Darwin
OS-release : 18.6.0
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 0.25.0rc0+23.gc64c9cb44
numpy : 1.16.4
pytz : 2019.1
dateutil : 2.8.0
pip : 19.1.1
setuptools : 41.0.1
Cython : 0.29.12
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
</details>
| Thanks for trying the RC.
cc @jschendel
Looks like it's just a matter of adding `_ordered_from_sentinel` to `CategoricalDtype.__setstate__`:
```diff
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index d8d910a16..54f2c6551 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -360,6 +360,7 @@ class CategoricalDtype(PandasExtensionDtype, ExtensionDtype):
# pickle -> need to set the settable private ones here (see GH26067)
self._categories = state.pop('categories', None)
self._ordered = state.pop('ordered', False)
+ self._ordered_from_sentinel = state.pop('ordered_from_sentinel', False)
def __hash__(self) -> int:
# _hash_categories returns a uint64, so use the negative
```
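As a quick check, an in-memory pickle round-trip exercises the same `__setstate__` path without touching the filesystem; a minimal sketch:
```python
import pickle

import pandas as pd

s = pd.Series(["a", "b", "c", "a"], dtype="category")
restored = pickle.loads(pickle.dumps(s))
# With the fix in place this no longer raises AttributeError
result = restored.astype("category")
```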
Should be able to get a fix in tonight or tomorrow. | 2019-07-10T04:44:36Z | [] | [] |
Traceback (most recent call last):
File "mre.py", line 13, in <module>
s.astype('category')
File "/Users/roy/pandas/pandas/core/generic.py", line 5935, in astype
dtype=dtype, copy=copy, errors=errors, **kwargs
File "/Users/roy/pandas/pandas/core/internals/managers.py", line 581, in astype
return self.apply("astype", dtype=dtype, **kwargs)
File "/Users/roy/pandas/pandas/core/internals/managers.py", line 438, in apply
applied = getattr(b, f)(**kwargs)
File "/Users/roy/pandas/pandas/core/internals/blocks.py", line 555, in astype
return self._astype(dtype, copy=copy, errors=errors, values=values, **kwargs)
File "/Users/roy/pandas/pandas/core/internals/blocks.py", line 606, in _astype
return self.make_block(self.values.astype(dtype, copy=copy))
File "/Users/roy/pandas/pandas/core/arrays/categorical.py", line 524, in astype
self = self.copy() if copy else self
File "/Users/roy/pandas/pandas/core/arrays/categorical.py", line 503, in copy
values=self._codes.copy(), dtype=self.dtype, fastpath=True
File "/Users/roy/pandas/pandas/core/arrays/categorical.py", line 353, in __init__
self._dtype = self._dtype.update_dtype(dtype)
File "/Users/roy/pandas/pandas/core/dtypes/dtypes.py", line 556, in update_dtype
new_ordered_from_sentinel = dtype._ordered_from_sentinel
AttributeError: 'CategoricalDtype' object has no attribute '_ordered_from_sentinel'
| 12,864 |
|||
pandas-dev/pandas | pandas-dev__pandas-27426 | 26bd34df233e3f103922fe11e238c1532f3e58a0 | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -1087,7 +1087,6 @@ I/O
- Bug in :meth:`DataFrame.to_html` where header numbers would ignore display options when rounding (:issue:`17280`)
- Bug in :func:`read_hdf` where reading a table from an HDF5 file written directly with PyTables fails with a ``ValueError`` when using a sub-selection via the ``start`` or ``stop`` arguments (:issue:`11188`)
- Bug in :func:`read_hdf` not properly closing store after a ``KeyError`` is raised (:issue:`25766`)
-- Bug in ``read_csv`` which would not raise ``ValueError`` if a column index in ``usecols`` was out of bounds (:issue:`25623`)
- Improved the explanation for the failure when value labels are repeated in Stata dta files and suggested work-arounds (:issue:`25772`)
- Improved :meth:`pandas.read_stata` and :class:`pandas.io.stata.StataReader` to read incorrectly formatted 118 format files saved by Stata (:issue:`25960`)
- Improved the ``col_space`` parameter in :meth:`DataFrame.to_html` to accept a string so CSS length values can be set correctly (:issue:`25941`)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -1947,12 +1947,6 @@ def __init__(self, src, **kwds):
):
_validate_usecols_names(usecols, self.orig_names)
- # GH 25623
- # validate that column indices in usecols are not out of bounds
- elif self.usecols_dtype == "integer":
- indices = range(self._reader.table_width)
- _validate_usecols_names(usecols, indices)
-
if len(self.names) > len(usecols):
self.names = [
n
@@ -2258,7 +2252,7 @@ def __init__(self, f, **kwds):
self.skipinitialspace = kwds["skipinitialspace"]
self.lineterminator = kwds["lineterminator"]
self.quoting = kwds["quoting"]
- self.usecols, self.usecols_dtype = _validate_usecols_arg(kwds["usecols"])
+ self.usecols, _ = _validate_usecols_arg(kwds["usecols"])
self.skip_blank_lines = kwds["skip_blank_lines"]
self.warn_bad_lines = kwds["warn_bad_lines"]
@@ -2665,13 +2659,6 @@ def _infer_columns(self):
if clear_buffer:
self._clear_buffer()
- # GH 25623
- # validate that column indices in usecols are not out of bounds
- if self.usecols_dtype == "integer":
- for col in columns:
- indices = range(len(col))
- _validate_usecols_names(self.usecols, indices)
-
if names is not None:
if (self.usecols is not None and len(names) != len(self.usecols)) or (
self.usecols is None and len(names) != len(columns[0])
@@ -2706,11 +2693,6 @@ def _infer_columns(self):
ncols = len(line)
num_original_columns = ncols
- # GH 25623
- # validate that column indices in usecols are not out of bounds
- if self.usecols_dtype == "integer":
- _validate_usecols_names(self.usecols, range(ncols))
-
if not names:
if self.prefix:
columns = [
| read_excel in version 0.25.0rc0 treats empty columns differently
I'm using this code to load an Excel file.
```python
df = pandas.read_excel(
"data.xlsx",
sheet_name="sheet1",
usecols=[0, 1],
header=None,
names=["foo", "bar"]
)
print(df.head())
```
The Excel file has the cells `A7`=`1`, `A8`=`2`, `A9`=`3`, everything else is empty.
With pandas 0.24.2 I get this:
```
foo bar
0 1 NaN
1 2 NaN
2 3 NaN
```
With pandas 0.25.0rc0 I get:
```
Traceback (most recent call last):
File "tester.py", line 8, in <module>
names=["foo", "bar"]
File "/home/me/.env/lib/python3.7/site-packages/pandas/util/_decorators.py", line 196, in wrapper
return func(*args, **kwargs)
File "/home/me/.env/lib/python3.7/site-packages/pandas/io/excel/_base.py", line 334, in read_excel
**kwds
File "/home/me/.env/lib/python3.7/site-packages/pandas/io/excel/_base.py", line 877, in parse
**kwds
File "/home/me/.env/lib/python3.7/site-packages/pandas/io/excel/_base.py", line 507, in parse
**kwds
File "/home/me/.env/lib/python3.7/site-packages/pandas/io/parsers.py", line 2218, in TextParser
return TextFileReader(*args, **kwds)
File "/home/me/.env/lib/python3.7/site-packages/pandas/io/parsers.py", line 895, in __init__
self._make_engine(self.engine)
File "/home/me/.env/lib/python3.7/site-packages/pandas/io/parsers.py", line 1147, in _make_engine
self._engine = klass(self.f, **self.options)
File "/home/me/.env/lib/python3.7/site-packages/pandas/io/parsers.py", line 2305, in __init__
) = self._infer_columns()
File "/home/me/.env/lib/python3.7/site-packages/pandas/io/parsers.py", line 2712, in _infer_columns
_validate_usecols_names(self.usecols, range(ncols))
File "/home/me/.env/lib/python3.7/site-packages/pandas/io/parsers.py", line 1255, in _validate_usecols_names
"columns expected but not found: {missing}".format(missing=missing)
ValueError: Usecols do not match columns, columns expected but not found: [1]
```
The problem happens because the `bar` column does not contain any data. As soon as I put a value into it, both versions do the same thing.
I'm using Python 3.7.3 in Ubuntu 19.04.
| I think this is intentional (ref #25623), so not really a regression. Do you have a particular use case for this?
@WillAyd Our use case is that we have daily reports and one of the columns only contains data when something unusual happened. Consequently, in some files this column is completely empty and "the column is completely empty" is exactly the information that we are looking for.
The change in #25623 that you referenced mentions CSV files. For CSV files I agree that the change is very useful, since the CSV file really does not contain the column. But for Excel files, there is no such thing as a non-existing column.
I don't think this is something likely to be reverted, as it was a bug in core IO handling before that allowed this not to raise; but let's see what others think.
Shouldn't just specifying `names` work?
Seems to work for me locally - @snordhausen how about on your end?
@WillAyd To make sure that we are both testing the same thing, I extended my test program to also create the `data.xlsx` file:
```
import pandas
from openpyxl import Workbook
wb = Workbook()
ws = wb.active
ws['A7'] = 1
ws['A8'] = 2
ws['A9'] = 3
wb.save("data.xlsx")
df = pandas.read_excel(
"data.xlsx",
sheet_name="Sheet",
usecols=[0, 1],
header=None,
names=["foo", "bar"]
)
print(df)
```
I also tried this out in a fresh Ubuntu 18.04 docker container and could reproduce the issue.
Try removing `usecols` from your call.
Removing `usecols` makes the program work with 0.25.0rc0.
However, that looks inconsistent to me: why can I implicitly load empty columns, but when I explicitly ask for them I get an error? Also, it means I cannot load (potentially) empty columns in the middle of the table, e.g. if I only wanted columns 0 and 20.
> However, that looks inconsistent to me: why can I implicitly load empty columns, but when I explicitly ask for them I get an error?
The fact that this worked previously is inconsistent with read_csv. `usecols` is typically validated, and missing indexes or labels throw errors. For example:
```python
>>> data = """a,b,c\n1,2,3"""
>>> pd.read_csv(io.StringIO(data), usecols=['x'])
ValueError: Usecols do not match columns, columns expected but not found: ['x']
>>> pd.read_csv(io.StringIO(data), usecols=[10])
ValueError: Usecols do not match columns, columns expected but not found: [10]
```
So I don't think there is any reason to have Excel be excepted from that validation. You can use `names` as suggested above or reindex the output on your own
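A sketch of the reindex workaround, reusing `data.xlsx` from the example above: read without `usecols`, then guarantee the wanted columns exist (missing ones come back as all-NaN):
```python
import pandas as pd

df = pd.read_excel("data.xlsx", sheet_name="Sheet", header=None)
# reindex guarantees both positional columns exist, even when the
# sheet only contains data in the first one
df = df.reindex(columns=[0, 1])
df.columns = ["foo", "bar"]
```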
The biggest issue is using the parser to read multiple sheets from one Excel file.
Trying to read multiple sheets in one IO causes a lot of issues if the column length varies within a range (e.g. "AA, AG:BZ"), with AA being the index and AG:BZ the potential columns.
This example will throw an error instead of omitting the empty columns, which caused a lot of headaches and led me to revert to 0.24.
@pandas-dev/pandas-core would anyone object to reverting #25623? It looks like this is causing confusion in the Excel world, as described by users above.
To support the use cases above with that in place, we would need to break Excel `usecols` handling away from the CSV one. I'm not sure this is desired, but at the same time I don't think the issue we solved (raising for bad `usecols`) is that urgent, so we could defer it if it's a hang-up for RC users.
I have no objections to reverting the original PR.
However, I would meet that issue half-way and issue warnings instead.
A FutureWarning or did you have something else in mind?
I would go with `UserWarning`.
`FutureWarning` to me implies some kind of deprecation, which I don't think will happen at this point (unless we have some really strong feelings about keeping this behavior).
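For illustration only, a hypothetical `UserWarning` along those lines might look like the sketch below; the message and behaviour are invented, and the merged patch ultimately just reverts the validation without warning:
```python
import warnings

# Hypothetical message, not something pandas actually emits.
warnings.warn(
    "usecols entries [1] are out of bounds for the parsed Excel data; "
    "missing columns will be returned as all-NaN",
    UserWarning,
)
```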
I am fine with reverting to restore the functionality of excel for 0.25.0.
But I also wanted to mention that from a user perspective, I wouldn't mind that some options behave differently between csv and excel (in the end, they are different formats with different capabilities). Whether this is possible/desirable from a code perspective, don't know the parsing code well enough for that.
> I wouldn't mind that some options behave differently between csv and excel (in the end, they are different formats with different capabilities)
> Whether this is possible/desirable from a code perspective, don't know the parsing code well enough for that
It's definitely possible, but I would want more feedback from users, hence why I suggested the warning. That way we can draw people's attention to it (maybe even reference the two issues). | 2019-07-16T21:58:23Z | [] | [] |
Traceback (most recent call last):
File "tester.py", line 8, in <module>
names=["foo", "bar"]
File "/home/me/.env/lib/python3.7/site-packages/pandas/util/_decorators.py", line 196, in wrapper
return func(*args, **kwargs)
File "/home/me/.env/lib/python3.7/site-packages/pandas/io/excel/_base.py", line 334, in read_excel
**kwds
File "/home/me/.env/lib/python3.7/site-packages/pandas/io/excel/_base.py", line 877, in parse
**kwds
File "/home/me/.env/lib/python3.7/site-packages/pandas/io/excel/_base.py", line 507, in parse
**kwds
File "/home/me/.env/lib/python3.7/site-packages/pandas/io/parsers.py", line 2218, in TextParser
return TextFileReader(*args, **kwds)
File "/home/me/.env/lib/python3.7/site-packages/pandas/io/parsers.py", line 895, in __init__
self._make_engine(self.engine)
File "/home/me/.env/lib/python3.7/site-packages/pandas/io/parsers.py", line 1147, in _make_engine
self._engine = klass(self.f, **self.options)
File "/home/me/.env/lib/python3.7/site-packages/pandas/io/parsers.py", line 2305, in __init__
) = self._infer_columns()
File "/home/me/.env/lib/python3.7/site-packages/pandas/io/parsers.py", line 2712, in _infer_columns
_validate_usecols_names(self.usecols, range(ncols))
File "/home/me/.env/lib/python3.7/site-packages/pandas/io/parsers.py", line 1255, in _validate_usecols_names
"columns expected but not found: {missing}".format(missing=missing)
ValueError: Usecols do not match columns, columns expected but not found: [1]
| 12,883 |
|||
pandas-dev/pandas | pandas-dev__pandas-27511 | 3b96ada3a17f5fcc8c32a238457075ec4dd8433a | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -57,6 +57,7 @@ Timezones
Numeric
^^^^^^^
- Bug in :meth:`Series.interpolate` when using a timezone aware :class:`DatetimeIndex` (:issue:`27548`)
+- Bug when printing negative floating point complex numbers would raise an ``IndexError`` (:issue:`27484`)
-
-
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2593,12 +2593,12 @@ def memory_usage(self, index=True, deep=False):
... for t in dtypes])
>>> df = pd.DataFrame(data)
>>> df.head()
- int64 float64 complex128 object bool
- 0 1 1.0 1.0+0.0j 1 True
- 1 1 1.0 1.0+0.0j 1 True
- 2 1 1.0 1.0+0.0j 1 True
- 3 1 1.0 1.0+0.0j 1 True
- 4 1 1.0 1.0+0.0j 1 True
+ int64 float64 complex128 object bool
+ 0 1 1.0 1.000000+0.000000j 1 True
+ 1 1 1.0 1.000000+0.000000j 1 True
+ 2 1 1.0 1.000000+0.000000j 1 True
+ 3 1 1.0 1.000000+0.000000j 1 True
+ 4 1 1.0 1.000000+0.000000j 1 True
>>> df.memory_usage()
Index 128
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -5,6 +5,7 @@
from functools import partial
from io import StringIO
+import re
from shutil import get_terminal_size
from typing import (
TYPE_CHECKING,
@@ -1688,17 +1689,10 @@ def _trim_zeros_complex(str_complexes: ndarray, na_rep: str = "NaN") -> List[str
Separates the real and imaginary parts from the complex number, and
executes the _trim_zeros_float method on each of those.
"""
-
- def separate_and_trim(str_complex, na_rep):
- num_arr = str_complex.split("+")
- return (
- _trim_zeros_float([num_arr[0]], na_rep)
- + ["+"]
- + _trim_zeros_float([num_arr[1][:-1]], na_rep)
- + ["j"]
- )
-
- return ["".join(separate_and_trim(x, na_rep)) for x in str_complexes]
+ return [
+ "".join(_trim_zeros_float(re.split(r"([j+-])", x), na_rep))
+ for x in str_complexes
+ ]
def _trim_zeros_float(
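The key change above is splitting on the sign characters and the trailing `j` with a capturing group, so each numeric piece (including a leading negative) is trimmed independently. For example:
```python
import re

re.split(r"([j+-])", "-1.0j")
# -> ['', '-', '1.0', 'j', '']
# The old code did str_complex.split("+") and indexed num_arr[1],
# which fails when no '+' is present, as with a bare -1j.
```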
| IndexError in repr of series objects containing complex numbers with negative imaginary parts
#### Code Sample, a copy-pastable example if possible
```python
from pandas import Series
print(Series([-1j]))
```
#### Problem description
This raises the following error:
```
Traceback (most recent call last):
File "foo.py", line 3, in <module>
print(Series([-1j]))
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/core/series.py", line 1611, in __repr__
length=show_dimensions,
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/core/series.py", line 1677, in to_string
result = formatter.to_string()
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/io/formats/format.py", line 312, in to_string
fmt_values = self._get_formatted_values()
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/io/formats/format.py", line 299, in _get_formatted_values
values_to_format, None, float_format=self.float_format, na_rep=self.na_rep
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/io/formats/format.py", line 1032, in format_array
return fmt_obj.get_result()
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/io/formats/format.py", line 1063, in get_result
fmt_values = self._format_strings()
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/io/formats/format.py", line 1288, in _format_strings
return list(self.get_result_as_array())
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/io/formats/format.py", line 1252, in get_result_as_array
formatted_values = format_values_with(float_format)
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/io/formats/format.py", line 1234, in format_values_with
return _trim_zeros_complex(values, na_rep)
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/io/formats/format.py", line 1597, in _trim_zeros_complex
return ["".join(separate_and_trim(x, na_rep)) for x in str_complexes]
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/io/formats/format.py", line 1597, in <listcomp>
return ["".join(separate_and_trim(x, na_rep)) for x in str_complexes]
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/io/formats/format.py", line 1594, in separate_and_trim
+ ["j"]
IndexError: list index out of range
```
#### Expected Output
This should print something like the following:
```
0 0.0-1.0j
dtype: complex128
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.4.final.0
python-bits : 64
OS : Linux
OS-release : 4.4.0-17134-Microsoft
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 0.25.0
numpy : 1.16.4
pytz : 2019.1
dateutil : 2.8.0
pip : 19.0.3
setuptools : 40.8.0
Cython : None
pytest : 5.0.1
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 7.6.1
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
</details>
| PR https://github.com/pandas-dev/pandas/pull/25745 is probably the culprit for this regression. Investigations and PRs welcome! | 2019-07-22T00:58:27Z | [] | [] |
Traceback (most recent call last):
File "foo.py", line 3, in <module>
print(Series([-1j]))
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/core/series.py", line 1611, in __repr__
length=show_dimensions,
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/core/series.py", line 1677, in to_string
result = formatter.to_string()
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/io/formats/format.py", line 312, in to_string
fmt_values = self._get_formatted_values()
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/io/formats/format.py", line 299, in _get_formatted_values
values_to_format, None, float_format=self.float_format, na_rep=self.na_rep
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/io/formats/format.py", line 1032, in format_array
return fmt_obj.get_result()
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/io/formats/format.py", line 1063, in get_result
fmt_values = self._format_strings()
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/io/formats/format.py", line 1288, in _format_strings
return list(self.get_result_as_array())
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/io/formats/format.py", line 1252, in get_result_as_array
formatted_values = format_values_with(float_format)
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/io/formats/format.py", line 1234, in format_values_with
return _trim_zeros_complex(values, na_rep)
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/io/formats/format.py", line 1597, in _trim_zeros_complex
return ["".join(separate_and_trim(x, na_rep)) for x in str_complexes]
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/io/formats/format.py", line 1597, in <listcomp>
return ["".join(separate_and_trim(x, na_rep)) for x in str_complexes]
File "/home/david/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/io/formats/format.py", line 1594, in separate_and_trim
+ ["j"]
IndexError: list index out of range
| 12,890 |
|||
pandas-dev/pandas | pandas-dev__pandas-27580 | 3b96ada3a17f5fcc8c32a238457075ec4dd8433a | diff --git a/doc/source/install.rst b/doc/source/install.rst
--- a/doc/source/install.rst
+++ b/doc/source/install.rst
@@ -15,35 +15,10 @@ Instructions for installing from source,
`PyPI <https://pypi.org/project/pandas>`__, `ActivePython <https://www.activestate.com/activepython/downloads>`__, various Linux distributions, or a
`development version <http://github.com/pandas-dev/pandas>`__ are also provided.
-.. _install.dropping-27:
-
-Plan for dropping Python 2.7
-----------------------------
-
-The Python core team plans to stop supporting Python 2.7 on January 1st, 2020.
-In line with `NumPy's plans`_, all pandas releases through December 31, 2018
-will support Python 2.
-
-The 0.24.x feature release will be the last release to
-support Python 2. The released package will continue to be available on
-PyPI and through conda.
-
- Starting **January 1, 2019**, all new feature releases (> 0.24) will be Python 3 only.
-
-If there are people interested in continued support for Python 2.7 past December
-31, 2018 (either backporting bug fixes or funding) please reach out to the
-maintainers on the issue tracker.
-
-For more information, see the `Python 3 statement`_ and the `Porting to Python 3 guide`_.
-
-.. _NumPy's plans: https://github.com/numpy/numpy/blob/master/doc/neps/nep-0014-dropping-python2.7-proposal.rst#plan-for-dropping-python-27-support
-.. _Python 3 statement: http://python3statement.org/
-.. _Porting to Python 3 guide: https://docs.python.org/3/howto/pyporting.html
-
Python version support
----------------------
-Officially Python 2.7, 3.5, 3.6, and 3.7.
+Officially Python 3.5.3 and above, 3.6, and 3.7.
Installing pandas
-----------------
diff --git a/doc/source/whatsnew/v0.23.0.rst b/doc/source/whatsnew/v0.23.0.rst
--- a/doc/source/whatsnew/v0.23.0.rst
+++ b/doc/source/whatsnew/v0.23.0.rst
@@ -31,7 +31,7 @@ Check the :ref:`API Changes <whatsnew_0230.api_breaking>` and :ref:`deprecations
.. warning::
Starting January 1, 2019, pandas feature releases will support Python 3 only.
- See :ref:`install.dropping-27` for more.
+ See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more.
.. contents:: What's new in v0.23.0
:local:
diff --git a/doc/source/whatsnew/v0.23.1.rst b/doc/source/whatsnew/v0.23.1.rst
--- a/doc/source/whatsnew/v0.23.1.rst
+++ b/doc/source/whatsnew/v0.23.1.rst
@@ -12,7 +12,7 @@ and bug fixes. We recommend that all users upgrade to this version.
.. warning::
Starting January 1, 2019, pandas feature releases will support Python 3 only.
- See :ref:`install.dropping-27` for more.
+ See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more.
.. contents:: What's new in v0.23.1
:local:
diff --git a/doc/source/whatsnew/v0.23.2.rst b/doc/source/whatsnew/v0.23.2.rst
--- a/doc/source/whatsnew/v0.23.2.rst
+++ b/doc/source/whatsnew/v0.23.2.rst
@@ -17,7 +17,7 @@ and bug fixes. We recommend that all users upgrade to this version.
.. warning::
Starting January 1, 2019, pandas feature releases will support Python 3 only.
- See :ref:`install.dropping-27` for more.
+ See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more.
.. contents:: What's new in v0.23.2
:local:
diff --git a/doc/source/whatsnew/v0.23.4.rst b/doc/source/whatsnew/v0.23.4.rst
--- a/doc/source/whatsnew/v0.23.4.rst
+++ b/doc/source/whatsnew/v0.23.4.rst
@@ -12,7 +12,7 @@ and bug fixes. We recommend that all users upgrade to this version.
.. warning::
Starting January 1, 2019, pandas feature releases will support Python 3 only.
- See :ref:`install.dropping-27` for more.
+ See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more.
.. contents:: What's new in v0.23.4
:local:
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -6,7 +6,7 @@ What's new in 0.24.0 (January 25, 2019)
.. warning::
The 0.24.x series of releases will be the last to support Python 2. Future feature
- releases will support Python 3 only. See :ref:`install.dropping-27` for more
+ releases will support Python 3 only. See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more
details.
{{ header }}
diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -6,7 +6,7 @@ Whats new in 0.24.1 (February 3, 2019)
.. warning::
The 0.24.x series of releases will be the last to support Python 2. Future feature
- releases will support Python 3 only. See :ref:`install.dropping-27` for more.
+ releases will support Python 3 only. See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more.
{{ header }}
diff --git a/doc/source/whatsnew/v0.24.2.rst b/doc/source/whatsnew/v0.24.2.rst
--- a/doc/source/whatsnew/v0.24.2.rst
+++ b/doc/source/whatsnew/v0.24.2.rst
@@ -6,7 +6,7 @@ Whats new in 0.24.2 (March 12, 2019)
.. warning::
The 0.24.x series of releases will be the last to support Python 2. Future feature
- releases will support Python 3 only. See :ref:`install.dropping-27` for more.
+ releases will support Python 3 only. See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more.
{{ header }}
diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -6,7 +6,7 @@ What's new in 0.25.0 (July 18, 2019)
.. warning::
Starting with the 0.25.x series of releases, pandas only supports Python 3.5.3 and higher.
- See :ref:`install.dropping-27` for more details.
+ See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more details.
.. warning::
diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -6,7 +6,7 @@ What's new in 1.0.0 (??)
.. warning::
Starting with the 0.25.x series of releases, pandas only supports Python 3.5.3 and higher.
- See :ref:`install.dropping-27` for more details.
+ See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more details.
.. warning::
| DOC: Unable to import pandas on python 3.5.2
#### Code Sample, a copy-pastable example if possible
```python
import pandas
```
#### Problem description
Although it seems like a `typing`-module issue, pandas is still affected. Error:
```
root@ae9a5374fe6d:/buildbot# python -c "import pandas"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.5/dist-packages/pandas/__init__.py", line 55, in <module>
from pandas.core.api import (
File "/usr/local/lib/python3.5/dist-packages/pandas/core/api.py", line 5, in <module>
from pandas.core.arrays.integer import (
File "/usr/local/lib/python3.5/dist-packages/pandas/core/arrays/__init__.py", line 1, in <module>
from .array_ import array # noqa: F401
File "/usr/local/lib/python3.5/dist-packages/pandas/core/arrays/array_.py", line 7, in <module>
from pandas.core.dtypes.common import (
File "/usr/local/lib/python3.5/dist-packages/pandas/core/dtypes/common.py", line 11, in <module>
from pandas.core.dtypes.dtypes import (
File "/usr/local/lib/python3.5/dist-packages/pandas/core/dtypes/dtypes.py", line 53, in <module>
class Registry:
File "/usr/local/lib/python3.5/dist-packages/pandas/core/dtypes/dtypes.py", line 84, in Registry
self, dtype: Union[Type[ExtensionDtype], str]
File "/usr/lib/python3.5/typing.py", line 552, in __getitem__
dict(self.__dict__), parameters, _root=True)
File "/usr/lib/python3.5/typing.py", line 512, in __new__
for t2 in all_params - {t1} if not isinstance(t2, TypeVar)):
File "/usr/lib/python3.5/typing.py", line 512, in <genexpr>
for t2 in all_params - {t1} if not isinstance(t2, TypeVar)):
File "/usr/lib/python3.5/typing.py", line 1077, in __subclasscheck__
if super().__subclasscheck__(cls):
File "/usr/lib/python3.5/abc.py", line 225, in __subclasscheck__
for scls in cls.__subclasses__():
TypeError: descriptor '__subclasses__' of 'type' object needs an argument
```
To reproduce:
```
$ docker pull ursalab/amd64-ubuntu-16.04-python-3:worker
$ docker run -it ursalab/amd64-ubuntu-16.04-python-3:worker bash
# python -c "import pandas"
```
#### Output of ``pip freeze | grep pandas``
```
pandas==0.25.0
```
| 3.5.3 is the minimum on 0.25; see the release notes
@jreback Thanks! May I suggest updating the documentation about that: https://pandas.pydata.org/pandas-docs/stable/install.html#python-version-support ?
yes that needs updating (and removing the 2.7) | 2019-07-25T06:05:45Z | [] | [] |
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.5/dist-packages/pandas/__init__.py", line 55, in <module>
from pandas.core.api import (
File "/usr/local/lib/python3.5/dist-packages/pandas/core/api.py", line 5, in <module>
from pandas.core.arrays.integer import (
File "/usr/local/lib/python3.5/dist-packages/pandas/core/arrays/__init__.py", line 1, in <module>
from .array_ import array # noqa: F401
File "/usr/local/lib/python3.5/dist-packages/pandas/core/arrays/array_.py", line 7, in <module>
from pandas.core.dtypes.common import (
File "/usr/local/lib/python3.5/dist-packages/pandas/core/dtypes/common.py", line 11, in <module>
from pandas.core.dtypes.dtypes import (
File "/usr/local/lib/python3.5/dist-packages/pandas/core/dtypes/dtypes.py", line 53, in <module>
class Registry:
File "/usr/local/lib/python3.5/dist-packages/pandas/core/dtypes/dtypes.py", line 84, in Registry
self, dtype: Union[Type[ExtensionDtype], str]
File "/usr/lib/python3.5/typing.py", line 552, in __getitem__
dict(self.__dict__), parameters, _root=True)
File "/usr/lib/python3.5/typing.py", line 512, in __new__
for t2 in all_params - {t1} if not isinstance(t2, TypeVar)):
File "/usr/lib/python3.5/typing.py", line 512, in <genexpr>
for t2 in all_params - {t1} if not isinstance(t2, TypeVar)):
File "/usr/lib/python3.5/typing.py", line 1077, in __subclasscheck__
if super().__subclasscheck__(cls):
File "/usr/lib/python3.5/abc.py", line 225, in __subclasscheck__
for scls in cls.__subclasses__():
TypeError: descriptor '__subclasses__' of 'type' object needs an argument
| 12,897 |
|||
pandas-dev/pandas | pandas-dev__pandas-27691 | ac6dca29cd4b433d7436c2bbd408a03542a576e3 | diff --git a/doc/source/install.rst b/doc/source/install.rst
--- a/doc/source/install.rst
+++ b/doc/source/install.rst
@@ -15,35 +15,10 @@ Instructions for installing from source,
`PyPI <https://pypi.org/project/pandas>`__, `ActivePython <https://www.activestate.com/activepython/downloads>`__, various Linux distributions, or a
`development version <http://github.com/pandas-dev/pandas>`__ are also provided.
-.. _install.dropping-27:
-
-Plan for dropping Python 2.7
-----------------------------
-
-The Python core team plans to stop supporting Python 2.7 on January 1st, 2020.
-In line with `NumPy's plans`_, all pandas releases through December 31, 2018
-will support Python 2.
-
-The 0.24.x feature release will be the last release to
-support Python 2. The released package will continue to be available on
-PyPI and through conda.
-
- Starting **January 1, 2019**, all new feature releases (> 0.24) will be Python 3 only.
-
-If there are people interested in continued support for Python 2.7 past December
-31, 2018 (either backporting bug fixes or funding) please reach out to the
-maintainers on the issue tracker.
-
-For more information, see the `Python 3 statement`_ and the `Porting to Python 3 guide`_.
-
-.. _NumPy's plans: https://github.com/numpy/numpy/blob/master/doc/neps/nep-0014-dropping-python2.7-proposal.rst#plan-for-dropping-python-27-support
-.. _Python 3 statement: http://python3statement.org/
-.. _Porting to Python 3 guide: https://docs.python.org/3/howto/pyporting.html
-
Python version support
----------------------
-Officially Python 2.7, 3.5, 3.6, and 3.7.
+Officially Python 3.5.3 and above, 3.6, and 3.7.
Installing pandas
-----------------
diff --git a/doc/source/whatsnew/v0.23.0.rst b/doc/source/whatsnew/v0.23.0.rst
--- a/doc/source/whatsnew/v0.23.0.rst
+++ b/doc/source/whatsnew/v0.23.0.rst
@@ -31,7 +31,7 @@ Check the :ref:`API Changes <whatsnew_0230.api_breaking>` and :ref:`deprecations
.. warning::
Starting January 1, 2019, pandas feature releases will support Python 3 only.
- See :ref:`install.dropping-27` for more.
+ See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more.
.. contents:: What's new in v0.23.0
:local:
diff --git a/doc/source/whatsnew/v0.23.1.rst b/doc/source/whatsnew/v0.23.1.rst
--- a/doc/source/whatsnew/v0.23.1.rst
+++ b/doc/source/whatsnew/v0.23.1.rst
@@ -12,7 +12,7 @@ and bug fixes. We recommend that all users upgrade to this version.
.. warning::
Starting January 1, 2019, pandas feature releases will support Python 3 only.
- See :ref:`install.dropping-27` for more.
+ See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more.
.. contents:: What's new in v0.23.1
:local:
diff --git a/doc/source/whatsnew/v0.23.2.rst b/doc/source/whatsnew/v0.23.2.rst
--- a/doc/source/whatsnew/v0.23.2.rst
+++ b/doc/source/whatsnew/v0.23.2.rst
@@ -17,7 +17,7 @@ and bug fixes. We recommend that all users upgrade to this version.
.. warning::
Starting January 1, 2019, pandas feature releases will support Python 3 only.
- See :ref:`install.dropping-27` for more.
+ See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more.
.. contents:: What's new in v0.23.2
:local:
diff --git a/doc/source/whatsnew/v0.23.4.rst b/doc/source/whatsnew/v0.23.4.rst
--- a/doc/source/whatsnew/v0.23.4.rst
+++ b/doc/source/whatsnew/v0.23.4.rst
@@ -12,7 +12,7 @@ and bug fixes. We recommend that all users upgrade to this version.
.. warning::
Starting January 1, 2019, pandas feature releases will support Python 3 only.
- See :ref:`install.dropping-27` for more.
+ See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more.
.. contents:: What's new in v0.23.4
:local:
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -6,7 +6,7 @@ What's new in 0.24.0 (January 25, 2019)
.. warning::
The 0.24.x series of releases will be the last to support Python 2. Future feature
- releases will support Python 3 only. See :ref:`install.dropping-27` for more
+ releases will support Python 3 only. See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more
details.
{{ header }}
diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -6,7 +6,7 @@ Whats new in 0.24.1 (February 3, 2019)
.. warning::
The 0.24.x series of releases will be the last to support Python 2. Future feature
- releases will support Python 3 only. See :ref:`install.dropping-27` for more.
+ releases will support Python 3 only. See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more.
{{ header }}
diff --git a/doc/source/whatsnew/v0.24.2.rst b/doc/source/whatsnew/v0.24.2.rst
--- a/doc/source/whatsnew/v0.24.2.rst
+++ b/doc/source/whatsnew/v0.24.2.rst
@@ -6,7 +6,7 @@ Whats new in 0.24.2 (March 12, 2019)
.. warning::
The 0.24.x series of releases will be the last to support Python 2. Future feature
- releases will support Python 3 only. See :ref:`install.dropping-27` for more.
+ releases will support Python 3 only. See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more.
{{ header }}
diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -6,7 +6,7 @@ What's new in 0.25.0 (July 18, 2019)
.. warning::
Starting with the 0.25.x series of releases, pandas only supports Python 3.5.3 and higher.
- See :ref:`install.dropping-27` for more details.
+ See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more details.
.. warning::
| DOC: Unable to import pandas on python 3.5.2
#### Code Sample, a copy-pastable example if possible
```python
import pandas
```
#### Problem description
Although this looks like a `typing` module issue, pandas is still affected. Error:
```
root@ae9a5374fe6d:/buildbot# python -c "import pandas"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.5/dist-packages/pandas/__init__.py", line 55, in <module>
from pandas.core.api import (
File "/usr/local/lib/python3.5/dist-packages/pandas/core/api.py", line 5, in <module>
from pandas.core.arrays.integer import (
File "/usr/local/lib/python3.5/dist-packages/pandas/core/arrays/__init__.py", line 1, in <module>
from .array_ import array # noqa: F401
File "/usr/local/lib/python3.5/dist-packages/pandas/core/arrays/array_.py", line 7, in <module>
from pandas.core.dtypes.common import (
File "/usr/local/lib/python3.5/dist-packages/pandas/core/dtypes/common.py", line 11, in <module>
from pandas.core.dtypes.dtypes import (
File "/usr/local/lib/python3.5/dist-packages/pandas/core/dtypes/dtypes.py", line 53, in <module>
class Registry:
File "/usr/local/lib/python3.5/dist-packages/pandas/core/dtypes/dtypes.py", line 84, in Registry
self, dtype: Union[Type[ExtensionDtype], str]
File "/usr/lib/python3.5/typing.py", line 552, in __getitem__
dict(self.__dict__), parameters, _root=True)
File "/usr/lib/python3.5/typing.py", line 512, in __new__
for t2 in all_params - {t1} if not isinstance(t2, TypeVar)):
File "/usr/lib/python3.5/typing.py", line 512, in <genexpr>
for t2 in all_params - {t1} if not isinstance(t2, TypeVar)):
File "/usr/lib/python3.5/typing.py", line 1077, in __subclasscheck__
if super().__subclasscheck__(cls):
File "/usr/lib/python3.5/abc.py", line 225, in __subclasscheck__
for scls in cls.__subclasses__():
TypeError: descriptor '__subclasses__' of 'type' object needs an argument
```
To reproduce:
```
$ docker pull ursalab/amd64-ubuntu-16.04-python-3:worker
$ docker run -it ursalab/amd64-ubuntu-16.04-python-3:worker bash
# python -c "import pandas"
```
#### Output of ``pip freeze | grep pandas``
```
pandas==0.25.0
```
| 3.5.3 is the minimum on 0.25; see the release notes
@jreback Thanks! May I suggest updating the documentation about that: https://pandas.pydata.org/pandas-docs/stable/install.html#python-version-support ?
yes that needs updating (and removing the 2.7)
@kszucs how is pandas being installed? (I don't directly find this profile in the configuration)
Normally this should be caught during installation (see discussion in https://github.com/pandas-dev/pandas/pull/27288)
@jorisvandenbossche
```dockerfile
FROM ubuntu:16.04
RUN apt-get update -y && \
apt-get install -y python3 python3-pip
RUN pip3 install pandas
CMD python3 -c "import pandas"
```
```bash
$ docker build -t pandas-py35 -f <the-dockerfile-above> .
$ docker run pandas-py35
```
@kszucs that had a warning for me locally
```
Step 3/4 : RUN pip3 install pandas
---> Running in 2181656518ff
Collecting pandas
Downloading https://files.pythonhosted.org/packages/a7/d9/e03b615e973c2733ff8fd53d95bd3633ecbfa81b5af2f83fe39647c02344/pandas-0.25.0-cp35-cp35m-manylinux1_x86_64.whl (10.3MB)
Collecting python-dateutil>=2.6.1 (from pandas)
Downloading https://files.pythonhosted.org/packages/41/17/c62faccbfbd163c7f57f3844689e3a78bae1f403648a6afb1d0866d87fbb/python_dateutil-2.8.0-py2.py3-none-any.whl (226kB)
Collecting numpy>=1.13.3 (from pandas)
Downloading https://files.pythonhosted.org/packages/69/25/eef8d362bd216b11e7d005331a3cca3d19b0aa57569bde680070109b745c/numpy-1.17.0-cp35-cp35m-manylinux1_x86_64.whl (20.2MB)
Collecting pytz>=2017.2 (from pandas)
Downloading https://files.pythonhosted.org/packages/3d/73/fe30c2daaaa0713420d0382b16fbb761409f532c56bdcc514bf7b6262bb6/pytz-2019.1-py2.py3-none-any.whl (510kB)
Collecting six>=1.5 (from python-dateutil>=2.6.1->pandas)
Downloading https://files.pythonhosted.org/packages/73/fb/00a976f728d0d1fecfe898238ce23f502a721c0ac0ecfedb80e0d88c64e9/six-1.12.0-py2.py3-none-any.whl
Installing collected packages: six, python-dateutil, numpy, pytz, pandas
Successfully installed numpy-1.17.0 pandas-0.25.0 python-dateutil-2.8.0 pytz-2019.1 six-1.12.0
You are using pip version 8.1.1, however version 19.2.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
```
I suspect that with a newer version of pip (`RUN pip3 install -U pip setuptools`), the build would error.
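For reference, the kind of packaging guard that makes newer pip refuse the install is roughly this (a sketch, not the exact pandas `setup.py`; old pip 8.1.1 simply ignores `python_requires`):
```python
# setup.py sketch; recent pip honours python_requires, pip 8.1.1 does not
from setuptools import setup

setup(
    name="pandas",
    version="0.25.0",
    python_requires=">=3.5.3",
)
```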
Probably. | 2019-08-01T12:24:09Z | [] | [] |
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.5/dist-packages/pandas/__init__.py", line 55, in <module>
from pandas.core.api import (
File "/usr/local/lib/python3.5/dist-packages/pandas/core/api.py", line 5, in <module>
from pandas.core.arrays.integer import (
File "/usr/local/lib/python3.5/dist-packages/pandas/core/arrays/__init__.py", line 1, in <module>
from .array_ import array # noqa: F401
File "/usr/local/lib/python3.5/dist-packages/pandas/core/arrays/array_.py", line 7, in <module>
from pandas.core.dtypes.common import (
File "/usr/local/lib/python3.5/dist-packages/pandas/core/dtypes/common.py", line 11, in <module>
from pandas.core.dtypes.dtypes import (
File "/usr/local/lib/python3.5/dist-packages/pandas/core/dtypes/dtypes.py", line 53, in <module>
class Registry:
File "/usr/local/lib/python3.5/dist-packages/pandas/core/dtypes/dtypes.py", line 84, in Registry
self, dtype: Union[Type[ExtensionDtype], str]
File "/usr/lib/python3.5/typing.py", line 552, in __getitem__
dict(self.__dict__), parameters, _root=True)
File "/usr/lib/python3.5/typing.py", line 512, in __new__
for t2 in all_params - {t1} if not isinstance(t2, TypeVar)):
File "/usr/lib/python3.5/typing.py", line 512, in <genexpr>
for t2 in all_params - {t1} if not isinstance(t2, TypeVar)):
File "/usr/lib/python3.5/typing.py", line 1077, in __subclasscheck__
if super().__subclasscheck__(cls):
File "/usr/lib/python3.5/abc.py", line 225, in __subclasscheck__
for scls in cls.__subclasses__():
TypeError: descriptor '__subclasses__' of 'type' object needs an argument
| 12,916 |
|||
pandas-dev/pandas | pandas-dev__pandas-27773 | 584b154cbf667ec4dd3482025718ea28b5827a46 | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -54,7 +54,7 @@ Numeric
^^^^^^^
- Bug in :meth:`Series.interpolate` when using a timezone aware :class:`DatetimeIndex` (:issue:`27548`)
- Bug when printing negative floating point complex numbers would raise an ``IndexError`` (:issue:`27484`)
--
+- Bug where :class:`DataFrame` arithmetic operators such as :meth:`DataFrame.mul` with a :class:`Series` with axis=1 would raise an ``AttributeError`` on :class:`DataFrame` larger than the minimum threshold to invoke numexpr (:issue:`27636`)
-
Conversion
diff --git a/pandas/core/computation/expressions.py b/pandas/core/computation/expressions.py
--- a/pandas/core/computation/expressions.py
+++ b/pandas/core/computation/expressions.py
@@ -76,16 +76,17 @@ def _can_use_numexpr(op, op_str, a, b, dtype_check):
# required min elements (otherwise we are adding overhead)
if np.prod(a.shape) > _MIN_ELEMENTS:
-
# check for dtype compatibility
dtypes = set()
for o in [a, b]:
- if hasattr(o, "dtypes"):
+ # Series implements dtypes, check for dimension count as well
+ if hasattr(o, "dtypes") and o.ndim > 1:
s = o.dtypes.value_counts()
if len(s) > 1:
return False
dtypes |= set(s.index.astype(str))
- elif isinstance(o, np.ndarray):
+ # ndarray and Series Case
+ elif hasattr(o, "dtype"):
dtypes |= {o.dtype.name}
# allowed are a superset
| Operators between DataFrame and Series fail on large dataframes
#### Code Sample
```python
import pandas as pd
ind = list(range(0, 100))
cols = list(range(0, 300))
df = pd.DataFrame(index=ind, columns=cols, data=1.0)
series = pd.Series(index=cols, data=cols)
print(df.multiply(series, axis=1).head()) # Works fine
ind = list(range(0, 100000))
cols = list(range(0, 300))
df = pd.DataFrame(index=ind, columns=cols, data=1.0)
series = pd.Series(index=cols, data=cols)
print(df.add(series,axis=1).head())
```
#### Code Output:
```
0 1 2 3 4 5 ... 294 295 296 297 298 299
0 0.0 1.0 2.0 3.0 4.0 5.0 ... 294.0 295.0 296.0 297.0 298.0 299.0
1 0.0 1.0 2.0 3.0 4.0 5.0 ... 294.0 295.0 296.0 297.0 298.0 299.0
2 0.0 1.0 2.0 3.0 4.0 5.0 ... 294.0 295.0 296.0 297.0 298.0 299.0
3 0.0 1.0 2.0 3.0 4.0 5.0 ... 294.0 295.0 296.0 297.0 298.0 299.0
4 0.0 1.0 2.0 3.0 4.0 5.0 ... 294.0 295.0 296.0 297.0 298.0 299.0
[5 rows x 300 columns]
Traceback (most recent call last):
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\IPython\core\interactiveshell.py", line 2963, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-25-4d9165e5df4a>", line 15, in <module>
print(df.add(series,axis=1).head())
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\ops\__init__.py", line 1499, in f
self, other, pass_op, fill_value=fill_value, axis=axis, level=level
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\ops\__init__.py", line 1388, in _combine_series_frame
return self._combine_match_columns(other, func, level=level)
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\frame.py", line 5392, in _combine_match_columns
return ops.dispatch_to_series(left, right, func, axis="columns")
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\ops\__init__.py", line 596, in dispatch_to_series
new_data = expressions.evaluate(column_op, str_rep, left, right)
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\computation\expressions.py", line 220, in evaluate
return _evaluate(op, op_str, a, b, **eval_kwargs)
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\computation\expressions.py", line 126, in _evaluate_numexpr
result = _evaluate_standard(op, op_str, a, b)
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\computation\expressions.py", line 70, in _evaluate_standard
return op(a, b)
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\ops\__init__.py", line 584, in column_op
return {i: func(a.iloc[:, i], b.iloc[i]) for i in range(len(a.columns))}
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\ops\__init__.py", line 584, in <dictcomp>
return {i: func(a.iloc[:, i], b.iloc[i]) for i in range(len(a.columns))}
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\ops\__init__.py", line 1473, in na_op
result = expressions.evaluate(op, str_rep, x, y, **eval_kwargs)
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\computation\expressions.py", line 220, in evaluate
return _evaluate(op, op_str, a, b, **eval_kwargs)
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\computation\expressions.py", line 101, in _evaluate_numexpr
if _can_use_numexpr(op, op_str, a, b, "evaluate"):
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\computation\expressions.py", line 84, in _can_use_numexpr
s = o.dtypes.value_counts()
AttributeError: 'numpy.dtype' object has no attribute 'value_counts'
```
#### Problem description
I think this is a regression somewhere between pandas 0.19.2 and 0.25. If you multiply (or use any other operator function such as add/divide) a DataFrame by a Series with axis=1, pandas crashes in the `_can_use_numexpr` function once the DataFrame/Series becomes very large. Presumably the minimum-size check on the objects being operated on short-circuits for small datasets, so only larger ones reach the failing line.
```python
#pandas/core/computation/expressions.py : 73
def _can_use_numexpr(op, op_str, a, b, dtype_check):
""" return a boolean if we WILL be using numexpr """
if op_str is not None:
# required min elements (otherwise we are adding overhead)
if np.prod(a.shape) > _MIN_ELEMENTS:
# check for dtype compatibility
dtypes = set()
for o in [a, b]:
if hasattr(o, "dtypes"):
s = o.dtypes.value_counts() # Fails here
```
In pandas 0.19.2 the function instead used the get_dtype_counts() method to check whether the dtype is uniform across the object:
```python
def _can_use_numexpr(op, op_str, a, b, dtype_check):
""" return a boolean if we WILL be using numexpr """
if op_str is not None:
# required min elements (otherwise we are adding overhead)
if np.prod(a.shape) > _MIN_ELEMENTS:
# check for dtype compatiblity
dtypes = set()
for o in [a, b]:
if hasattr(o, 'get_dtype_counts'):
s = o.get_dtype_counts()
```
I have a workaround which is to transpose the dataframe and use axis=0:
```python
df.T.add(series,axis=0).T.head()
```
I noticed get_dtype_counts() is deprecated (#27145), which appears to be the PR that caused this regression, since Series.dtypes returns a single numpy dtype, which does not have a value_counts() method.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.6.5.final.0
python-bits : 64
OS : Windows
OS-release : 7
machine : AMD64
processor : Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.None
pandas : 0.25.0
numpy : 1.16.4
pytz : 2018.4
dateutil : 2.7.3
pip : 10.0.1
setuptools : 39.1.0
Cython : None
pytest : 3.5.1
hypothesis : None
sphinx : 1.8.2
blosc : None
feather : None
xlsxwriter : 1.0.4
lxml.etree : 4.1.1
html5lib : 1.0.1
pymysql : None
psycopg2 : None
jinja2 : 2.10
IPython : 6.4.0
pandas_datareader: None
bs4 : 4.7.1
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.1.1
matplotlib : 2.2.2
numexpr : 2.6.5
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : 1.1.0
sqlalchemy : 1.2.8
tables : 3.5.2
xarray : None
xlrd : 1.1.0
xlwt : None
xlsxwriter : 1.0.4
</details>
| cc @jbrockmendel.
Looks like this was changed from obj.get_dtype_counts, which returns Series for either Series or DataFrame, to obj.dtypes.value_counts, but Series.dtypes returns a Scalar, which is why value_counts raises AttributeError.
I can raise a PR to do an extra hasattr on the dtypes. That should fix it?
maybe change
```
if hasattr(o, "dtypes"):
```
to
```
if hasattr(o, "dtypes") and o.ndim > 1:
...
```
But yes, a PR with tests and a release note in 0.25.1.rst would be very welcome.
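Putting that together, the dtype-gathering part of `_can_use_numexpr` could look like this (a sketch; the helper name is mine, just to make it self-contained):
```python
def _collect_dtypes(a, b):
    """Sketch of the dtype-compatibility check with the ndim guard."""
    dtypes = set()
    for o in [a, b]:
        if hasattr(o, "dtypes") and o.ndim > 1:   # DataFrame: many dtypes
            s = o.dtypes.value_counts()
            if len(s) > 1:
                return None                       # mixed dtypes -> no numexpr
            dtypes |= set(s.index.astype(str))
        elif hasattr(o, "dtype"):                 # Series or ndarray
            dtypes |= {o.dtype.name}
    return dtypes
```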
Your suggestion worked as well and avoids the extra if statement. I added tests to test_expression.py, which uncovered some more issues with operators between DataFrames and Series with axis=1. Will update this issue once I know the cause.
> Will update this issue once I know the cause.
If it's feasible, it would be easier if you made a small PR specific to the bug here, then addressed the newly-found bugs in separate steps.
It is feasible, but it would require a very narrow test. The issue I am having now is that numexpr fails on floordiv when operating on a DataFrame by a Series with axis=1. This was never caught because the test suite doesn't currently cover this case.
If we modify the example code snippet with the fix suggested by @TomAugspurger to:
```python
import pandas as pd
ind = list(range(0, 100))
cols = list(range(0, 300))
df = pd.DataFrame(index=ind, columns=cols, data=1.0)
series = pd.Series(index=cols, data=cols)
print(df.floordiv(series, axis=1).head()) # Works fine
ind = list(range(0, 100000))
cols = list(range(0, 300))
df = pd.DataFrame(index=ind, columns=cols, data=1.0)
series = pd.Series(index=cols, data=cols)
print(df.floordiv(series,axis=1).head())
```
We get the following traceback:
<details>
```
Traceback (most recent call last):
File "C:\dev\bin\anaconda\envs\py36pd25\lib\site-packages\pandas\core\ops\__init__.py", line 1473, in na_op
result = expressions.evaluate(op, str_rep, x, y, **eval_kwargs)
File "C:\dev\bin\anaconda\envs\py36pd25\lib\site-packages\pandas\core\computation\expressions.py", line 220, in evaluate
return _evaluate(op, op_str, a, b, **eval_kwargs)
File "C:\dev\bin\anaconda\envs\py36pd25\lib\site-packages\pandas\core\computation\expressions.py", line 116, in _evaluate_numexpr
**eval_kwargs
File "C:\dev\bin\anaconda\envs\py36pd25\lib\site-packages\numexpr\necompiler.py", line 802, in evaluate
* 'no' means the data types should not be cast at all.
File "C:\dev\bin\anaconda\envs\py36pd25\lib\site-packages\numexpr\necompiler.py", line 709, in getExprNames
input_order = getInputOrder(ast, None)
File "C:\dev\bin\anaconda\envs\py36pd25\lib\site-packages\numexpr\necompiler.py", line 299, in stringToExpression
ex = eval(c, names)
File "<expr>", line 1, in <module>
TypeError: unsupported operand type(s) for //: 'VariableNode' and 'VariableNode'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<input>", line 12, in <module>
File "C:\dev\bin\anaconda\envs\py36pd25\lib\site-packages\pandas\core\ops\__init__.py", line 1499, in f
self, other, pass_op, fill_value=fill_value, axis=axis, level=level
File "C:\dev\bin\anaconda\envs\py36pd25\lib\site-packages\pandas\core\ops\__init__.py", line 1388, in _combine_series_frame
return self._combine_match_columns(other, func, level=level)
File "C:\dev\bin\anaconda\envs\py36pd25\lib\site-packages\pandas\core\frame.py", line 5392, in _combine_match_columns
return ops.dispatch_to_series(left, right, func, axis="columns")
File "C:\dev\bin\anaconda\envs\py36pd25\lib\site-packages\pandas\core\ops\__init__.py", line 596, in dispatch_to_series
new_data = expressions.evaluate(column_op, str_rep, left, right)
File "C:\dev\bin\anaconda\envs\py36pd25\lib\site-packages\pandas\core\computation\expressions.py", line 220, in evaluate
return _evaluate(op, op_str, a, b, **eval_kwargs)
File "C:\dev\bin\anaconda\envs\py36pd25\lib\site-packages\pandas\core\computation\expressions.py", line 126, in _evaluate_numexpr
result = _evaluate_standard(op, op_str, a, b)
File "C:\dev\bin\anaconda\envs\py36pd25\lib\site-packages\pandas\core\computation\expressions.py", line 70, in _evaluate_standard
return op(a, b)
File "C:\dev\bin\anaconda\envs\py36pd25\lib\site-packages\pandas\core\ops\__init__.py", line 584, in column_op
return {i: func(a.iloc[:, i], b.iloc[i]) for i in range(len(a.columns))}
File "C:\dev\bin\anaconda\envs\py36pd25\lib\site-packages\pandas\core\ops\__init__.py", line 584, in <dictcomp>
return {i: func(a.iloc[:, i], b.iloc[i]) for i in range(len(a.columns))}
File "C:\dev\bin\anaconda\envs\py36pd25\lib\site-packages\pandas\core\ops\__init__.py", line 1475, in na_op
result = masked_arith_op(x, y, op)
File "C:\dev\bin\anaconda\envs\py36pd25\lib\site-packages\pandas\core\ops\__init__.py", line 451, in masked_arith_op
assert isinstance(x, np.ndarray), type(x)
AssertionError: <class 'pandas.core.series.Series'>
```
</details>
masked_arith_op expects its params x and y to be ndarray but in this specific case x is a Series:
```python
# pandas/core/ops/__init__.py : 423
# For Series `x` is 1D so ravel() is a no-op; calling it anyway makes
# the logic valid for both Series and DataFrame ops.
xrav = x.ravel()
assert isinstance(x, np.ndarray), type(x)
```
Modifying this function to use xrav instead of just x does fix the issue, and all unit tests still pass, but I am not sure whether this matches the intention of the inline comment here.
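For concreteness, a sketch of that change (the function name is mine, only to make the fragment self-contained):
```python
import numpy as np

def validate_masked_operand(x):
    """Sketch: assert on the raveled values instead of x itself, so a
    Series operand passes through as an ndarray."""
    xrav = x.ravel()  # no-op for a 1-D ndarray; extracts values for a Series
    assert isinstance(xrav, np.ndarray), type(xrav)
    return xrav
```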
Happy to restrict the tests to try every operator BUT floordiv if that is better to reduce the scope of the PR.
> Happy to restrict the tests to try every operator BUT floordiv if that is better to reduce the scope of the PR.
Let's do that for now. You can open another issue for the floordiv problem I think. | 2019-08-06T10:27:57Z | [] | [] |
Traceback (most recent call last):
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\IPython\core\interactiveshell.py", line 2963, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-25-4d9165e5df4a>", line 15, in <module>
print(df.add(series,axis=1).head())
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\ops\__init__.py", line 1499, in f
self, other, pass_op, fill_value=fill_value, axis=axis, level=level
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\ops\__init__.py", line 1388, in _combine_series_frame
return self._combine_match_columns(other, func, level=level)
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\frame.py", line 5392, in _combine_match_columns
return ops.dispatch_to_series(left, right, func, axis="columns")
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\ops\__init__.py", line 596, in dispatch_to_series
new_data = expressions.evaluate(column_op, str_rep, left, right)
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\computation\expressions.py", line 220, in evaluate
return _evaluate(op, op_str, a, b, **eval_kwargs)
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\computation\expressions.py", line 126, in _evaluate_numexpr
result = _evaluate_standard(op, op_str, a, b)
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\computation\expressions.py", line 70, in _evaluate_standard
return op(a, b)
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\ops\__init__.py", line 584, in column_op
return {i: func(a.iloc[:, i], b.iloc[i]) for i in range(len(a.columns))}
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\ops\__init__.py", line 584, in <dictcomp>
return {i: func(a.iloc[:, i], b.iloc[i]) for i in range(len(a.columns))}
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\ops\__init__.py", line 1473, in na_op
result = expressions.evaluate(op, str_rep, x, y, **eval_kwargs)
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\computation\expressions.py", line 220, in evaluate
return _evaluate(op, op_str, a, b, **eval_kwargs)
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\computation\expressions.py", line 101, in _evaluate_numexpr
if _can_use_numexpr(op, op_str, a, b, "evaluate"):
File "C:\dev\bin\anaconda\envs\py36\lib\site-packages\pandas\core\computation\expressions.py", line 84, in _can_use_numexpr
s = o.dtypes.value_counts()
AttributeError: 'numpy.dtype' object has no attribute 'value_counts'
| 12,925 |
|||
pandas-dev/pandas | pandas-dev__pandas-27777 | 61819aba14dd7b3996336aaed84d07cd936d92b5 | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -103,7 +103,7 @@ MultiIndex
I/O
^^^
--
+- Avoid calling ``S3File.s3`` when reading parquet, as this was removed in s3fs version 0.3.0 (:issue:`27756`)
-
-
diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py
--- a/pandas/io/parquet.py
+++ b/pandas/io/parquet.py
@@ -184,12 +184,14 @@ def write(
def read(self, path, columns=None, **kwargs):
if is_s3_url(path):
+ from pandas.io.s3 import get_file_and_filesystem
+
# When path is s3:// an S3File is returned.
# We need to retain the original path(str) while also
# pass the S3File().open function to fsatparquet impl.
- s3, _, _, should_close = get_filepath_or_buffer(path)
+ s3, filesystem = get_file_and_filesystem(path)
try:
- parquet_file = self.api.ParquetFile(path, open_with=s3.s3.open)
+ parquet_file = self.api.ParquetFile(path, open_with=filesystem.open)
finally:
s3.close()
else:
diff --git a/pandas/io/s3.py b/pandas/io/s3.py
--- a/pandas/io/s3.py
+++ b/pandas/io/s3.py
@@ -1,8 +1,11 @@
""" s3 support for remote file interactivity """
+from typing import IO, Any, Optional, Tuple
from urllib.parse import urlparse as parse_url
from pandas.compat._optional import import_optional_dependency
+from pandas._typing import FilePathOrBuffer
+
s3fs = import_optional_dependency(
"s3fs", extra="The s3fs package is required to handle s3 files."
)
@@ -14,9 +17,9 @@ def _strip_schema(url):
return result.netloc + result.path
-def get_filepath_or_buffer(
- filepath_or_buffer, encoding=None, compression=None, mode=None
-):
+def get_file_and_filesystem(
+ filepath_or_buffer: FilePathOrBuffer, mode: Optional[str] = None
+) -> Tuple[IO, Any]:
from botocore.exceptions import NoCredentialsError
if mode is None:
@@ -24,7 +27,7 @@ def get_filepath_or_buffer(
fs = s3fs.S3FileSystem(anon=False)
try:
- filepath_or_buffer = fs.open(_strip_schema(filepath_or_buffer), mode)
+ file = fs.open(_strip_schema(filepath_or_buffer), mode)
except (FileNotFoundError, NoCredentialsError):
# boto3 has troubles when trying to access a public file
# when credentialed...
@@ -33,5 +36,15 @@ def get_filepath_or_buffer(
# A NoCredentialsError is raised if you don't have creds
# for that bucket.
fs = s3fs.S3FileSystem(anon=True)
- filepath_or_buffer = fs.open(_strip_schema(filepath_or_buffer), mode)
- return filepath_or_buffer, None, compression, True
+ file = fs.open(_strip_schema(filepath_or_buffer), mode)
+ return file, fs
+
+
+def get_filepath_or_buffer(
+ filepath_or_buffer: FilePathOrBuffer,
+ encoding: Optional[str] = None,
+ compression: Optional[str] = None,
+ mode: Optional[str] = None,
+) -> Tuple[IO, Optional[str], Optional[str], bool]:
+ file, _fs = get_file_and_filesystem(filepath_or_buffer, mode=mode)
+ return file, None, compression, True
| Error reading parquet from s3 with s3fs >= 0.3.0
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
df = pd.read_parquet('s3://my-bucket/df.parquet')
```
Raises
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/.../pandas/io/parquet.py", line 294, in read_parquet
return impl.read(path, columns=columns, **kwargs)
File "/.../pandas/io/parquet.py", line 192, in read
parquet_file = self.api.ParquetFile(path, open_with=s3.s3.open)
AttributeError: 'S3File' object has no attribute 's3'
```
#### Problem description
In version 0.3.0 s3fs removed the `S3File.s3` attribute. It is replaced by `S3File.fs` (which is inherited from `fsspec.AbstractBufferedFile.fs`).
Should pandas check the s3fs version and call the right attribute based on that?
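In the meantime, a version-agnostic accessor could paper over the rename (a sketch):
```python
def filesystem_of(s3file):
    """Sketch: S3File.s3 was renamed to S3File.fs in s3fs 0.3.0."""
    try:
        return s3file.fs   # s3fs >= 0.3.0
    except AttributeError:
        return s3file.s3   # s3fs < 0.3.0
```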
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.3.final.0
python-bits : 64
OS : Darwin
OS-release : 18.6.0
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 0.25.0
numpy : 1.17.0
pytz : 2019.1
dateutil : 2.8.0
pip : 19.2.1
setuptools : 41.0.1
Cython : None
pytest : 4.4.1
hypothesis : None
sphinx : 2.1.2
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : 2.8.3 (dt dec pq3 ext lo64)
jinja2 : 2.10.1
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : 0.3.1
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : 0.3.1
scipy : 1.3.0
sqlalchemy : 1.3.5
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
</details>
| > Should pandas check the s3fs version and call the right attribute based on that?
Sure.
cc @martindurant for the (possibly unintentional) API change.
So the `open_with` in https://github.com/pandas-dev/pandas/blob/61362be9ea4d69b33ae421f1f98b8db50be611a2/pandas/io/parquet.py#L192 will need to depend on the version of s3fs.
Indeed this is an API change. However, I am surprised that anyone is opening a file and then using the FS methods of the attribute of that file - you presumably have the FS available directly anyway at this point.
Indeed, rather than test specifically for s3 URLs, I would strongly encourage pandas to use fsspec directly, so that you can read from any of the implementations supported by fsspec.
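For illustration, fsspec's URL-based opening looks roughly like this (a sketch; the bucket path is made up):
```python
import fsspec

# fsspec picks the filesystem implementation from the URL scheme, so the
# same call works for s3://, gcs://, plain local paths, etc.
with fsspec.open("s3://my-bucket/df.parquet", mode="rb") as f:
    data = f.read()
```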
Perhaps there should be a function returning both the file and the filesystem, which can be used here instead of `get_filepath_or_buffer`. That would avoid `S3File.s3`/`S3File.fs`.
If that sounds like a reasonable direction I will work on a PR.
I'm not sure what's best.
Ran into this issue today; just made a local, hacky in-vivo fix to the API break. Happy to help in any way to fix the issue properly.
Cheers.
For the sake of compatibility, I can make an S3File.s3 -> S3File.fs alias, if that makes life easier.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/.../pandas/io/parquet.py", line 294, in read_parquet
return impl.read(path, columns=columns, **kwargs)
File "/.../pandas/io/parquet.py", line 192, in read
parquet_file = self.api.ParquetFile(path, open_with=s3.s3.open)
AttributeError: 'S3File' object has no attribute 's3'
| 12,926 |
|||
pandas-dev/pandas | pandas-dev__pandas-27788 | 54e58039fddc79492e598e85279c42e85d06967c | DataFrame.groupby(grp, axis=1) with categorical grp breaks
While attempting to use `pd.qcut` (which returned a Categorical) to bin some data in groups for plotting, I encountered the following error. The idea is to group a DataFrame by columns (`axis=1`) using a Categorical.
#### Minimal breaking example
```
>>> import pandas
>>> df = pandas.DataFrame({'a':[1,2,3,4], 'b':[-1,-2,-3,-4], 'c':[5,6,7,8]})
>>> df
a b c
0 1 -1 5
1 2 -2 6
2 3 -3 7
3 4 -4 8
>>> grp = pandas.Categorical([1,0,1])
>>> df.groupby(grp, axis=1).mean()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ntawolf/anaconda3/lib/python3.5/site-packages/pandas/core/generic.py", line 3778, in groupby
**kwargs)
File "/home/ntawolf/anaconda3/lib/python3.5/site-packages/pandas/core/groupby.py", line 1427, in groupby
return klass(obj, by, **kwds)
File "/home/ntawolf/anaconda3/lib/python3.5/site-packages/pandas/core/groupby.py", line 354, in __init__
mutated=self.mutated)
File "/home/ntawolf/anaconda3/lib/python3.5/site-packages/pandas/core/groupby.py", line 2390, in _get_grouper
raise ValueError("Categorical dtype grouper must "
ValueError: Categorical dtype grouper must have len(grouper) == len(data)
```
#### Expected behaviour
Same as
```
>>> df.T.groupby(grp, axis=0).mean().T
0 1
0 -1 3
1 -2 4
2 -3 5
3 -4 6
```
So, it works as expected when doubly transposed. This makes it appear as a bug to me.
#### Proposed solution
In [`if is_categorical_dtype(gpr) and len(gpr) != len(obj):`](https://github.com/pydata/pandas/blob/master/pandas/core/groupby.py#L2406), change `len(obj)` to `obj.shape[axis]`. This assumes that `len(obj) == obj.shape[0]` for all `obj`.
So, supposing you agree that this is a bug, should a test be put in [`test_groupby_categorical`](https://github.com/pydata/pandas/blob/master/pandas/tests/test_groupby.py#L3968)?
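Concretely, a minimal sketch of the proposed change (the wrapper name is mine, just for illustration; the import path is the one exposed by recent pandas):
```python
from pandas.api.types import is_categorical_dtype

def check_categorical_grouper(gpr, obj, axis):
    """Sketch of the proposed fix: validate against the grouped axis,
    not unconditionally against len(obj) == obj.shape[0]."""
    if is_categorical_dtype(gpr) and len(gpr) != obj.shape[axis]:
        raise ValueError("Categorical dtype grouper must "
                         "have len(grouper) == len(data)")
```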
#### output of `pd.show_versions()`
```
INSTALLED VERSIONS
------------------
commit: None
python: 3.5.1.final.0
python-bits: 64
OS: Linux
OS-release: 3.19.0-59-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
pandas: 0.18.1
nose: 1.3.7
pip: 8.1.2
setuptools: 22.0.5
Cython: 0.24
numpy: 1.10.4
scipy: 0.17.1
statsmodels: 0.6.1
xarray: None
IPython: 4.2.0
sphinx: 1.4.1
patsy: 0.4.1
dateutil: 2.5.3
pytz: 2016.4
blosc: None
bottleneck: 1.0.0
tables: 3.2.2
numexpr: 2.5.2
matplotlib: 1.5.1
openpyxl: 2.3.2
xlrd: 1.0.0
xlwt: 1.1.1
xlsxwriter: 0.8.9
lxml: 3.6.0
bs4: 4.4.1
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: 1.0.13
pymysql: None
psycopg2: None
jinja2: 2.8
boto: 2.40.0
pandas_datareader: None
```
| Your grouper is not a valid categorical as it doesn't map anything. Though this still fails.
```
In [30]: grp = pd.Categorical.from_codes([1,0,1],categories=list('abc'))
In [31]: grp
Out[31]:
[b, a, b]
Categories (3, object): [a, b, c]
In [32]: grp.codes
Out[32]: array([1, 0, 1], dtype=int8)
```
So I'd say this is a bug, but it needs a bit of work on the tests.
That's great!
A question for your answer, though: You say that `grp = pd.Categorical([1,0,1])` is
> not a valid categorical as it doesn't map anything.
What do you mean by this? The counter-example shown above has the categories given explicitly, but the first example (giving only values) should work fine, as [the categories, if not given, are assumed to be the unique values of values.](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Categorical.html#pandas.Categorical). What am I missing?
#### Small demo of codes and categories from first example
```
In [4]: grp = pd.Categorical([1,0,1])
In [5]: grp
Out[5]:
[1, 0, 1]
Categories (2, int64): [0, 1]
In [6]: grp.codes
Out[6]: array([1, 0, 1], dtype=int8)
In [7]: grp.categories
Out[7]: Int64Index([0, 1], dtype='int64')
```
Thank you for your work!
the problem in your example is that nothing maps
in other words, you need to map the column names to groups
but you are mapping integers - I don't think we error on this, but everything gets put into the NaN group and it should return an empty frame I think
This is in fact related to grouping by categories. Here is an example:
```
In [1]: import pandas
...: df = pandas.DataFrame({'A': ["pos", "neg", "pos"], 'B': [1, -1, 2]})
...: df.A = df.A.astype("category")
...: df
Out[1]:
A B
0 pos 1
1 neg -1
2 pos 2
In [2]: grp = df.A[1:] # Same indexing, different lengths
In [4]: df.groupby(grp).mean() # Categorical + different length = bug
~/Library/Python/3.6/lib/python/site-packages/pandas/core/groupby.py in _get_grouper(obj, key, axis, level, sort, mutated)
2624
2625 if is_categorical_dtype(gpr) and len(gpr) != len(obj):
-> 2626 raise ValueError("Categorical dtype grouper must "
2627 "have len(grouper) == len(data)")
2628
ValueError: Categorical dtype grouper must have len(grouper) == len(data)
In [5]: df.groupby(grp.astype(str)).mean() # Convert to string to avoid the buggy check
Out[5]:
B
A
neg -1
pos 2
``` | 2019-08-06T20:05:29Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ntawolf/anaconda3/lib/python3.5/site-packages/pandas/core/generic.py", line 3778, in groupby
**kwargs)
File "/home/ntawolf/anaconda3/lib/python3.5/site-packages/pandas/core/groupby.py", line 1427, in groupby
return klass(obj, by, **kwds)
File "/home/ntawolf/anaconda3/lib/python3.5/site-packages/pandas/core/groupby.py", line 354, in __init__
mutated=self.mutated)
File "/home/ntawolf/anaconda3/lib/python3.5/site-packages/pandas/core/groupby.py", line 2390, in _get_grouper
raise ValueError("Categorical dtype grouper must "
ValueError: Categorical dtype grouper must have len(grouper) == len(data)
| 12,929 |
||||
pandas-dev/pandas | pandas-dev__pandas-27814 | 8f6118c6a1547ffd39d9b89df1b8e52128b63aa0 | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -108,6 +108,7 @@ Other
^^^^^
- Bug in :meth:`Series.replace` and :meth:`DataFrame.replace` when replacing timezone-aware timestamps using a dict-like replacer (:issue:`27720`)
+- Bug in :meth:`Series.rename` when using a custom type indexer. Now any value that isn't callable or dict-like is treated as a scalar. (:issue:`27814`)
.. _whatsnew_0.251.contributors:
diff --git a/pandas/core/series.py b/pandas/core/series.py
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -4165,12 +4165,10 @@ def rename(self, index=None, **kwargs):
"""
kwargs["inplace"] = validate_bool_kwarg(kwargs.get("inplace", False), "inplace")
- non_mapping = is_scalar(index) or (
- is_list_like(index) and not is_dict_like(index)
- )
- if non_mapping:
+ if callable(index) or is_dict_like(index):
+ return super().rename(index=index, **kwargs)
+ else:
return self._set_name(index, inplace=kwargs.get("inplace"))
- return super().rename(index=index, **kwargs)
@Substitution(**_shared_doc_kwargs)
@Appender(generic.NDFrame.reindex.__doc__)
| BUG: Series.rename raises error on values accepted by Series constructor.
#### Sample
```python
import pandas as pd
class MyIndexer:
pass
i1 = MyIndexer()
s = pd.Series([1, 2, 3], name=i1) # allowed
s.rename(i1) # raises error
```
The error stack trace is the following:
```python
Traceback (most recent call last):
File "test.py", line 8, in <module>
s.rename(i1) # raises error
File "/usr/local/lib/python3.6/dist-packages/pandas/core/series.py", line 3736, in rename
return super(Series, self).rename(index=index, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/pandas/core/generic.py", line 1091, in rename
level=level)
File "/usr/local/lib/python3.6/dist-packages/pandas/core/internals/managers.py", line 171, in rename_axis
obj.set_axis(axis, _transform_index(self.axes[axis], mapper, level))
File "/usr/local/lib/python3.6/dist-packages/pandas/core/internals/managers.py", line 2004, in _transform_index
items = [func(x) for x in index]
File "/usr/local/lib/python3.6/dist-packages/pandas/core/internals/managers.py", line 2004, in <listcomp>
items = [func(x) for x in index]
TypeError: 'MyIndexer' object is not callable
```
#### Description
Series.rename handles anything that isn't a scalar or list-like as a mapping.
#### Proposed change
Change the following code (from Series.rename):
```python
non_mapping = is_scalar(index) or (is_list_like(index) and not is_dict_like(index))
if non_mapping:
return self._set_name(index, inplace=kwargs.get("inplace"))
return super().rename(index=index, **kwargs)
```
to
```python
if callable(index) or is_dict_like(index):
return super().rename(index=index, **kwargs)
else:
return self._set_name(index, inplace=kwargs.get("inplace"))
```
so anything that isn't a dict or a callable will be treated the same way as a scalar or list-like.
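With that change, the example from the top would behave like the constructor; a sketch of the expected behaviour:
```python
import pandas as pd

class MyIndexer:
    pass

i1 = MyIndexer()
s = pd.Series([1, 2, 3]).rename(i1)  # not callable/dict-like -> scalar path
assert s.name is i1                  # matches pd.Series([1, 2, 3], name=i1)
```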
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.8.final.0
python-bits: 64
OS: Linux
OS-release: 4.15.0-55-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: pt_BR.UTF-8
pandas: 0.24.2
pytest: 3.6.0
pip: 19.1.1
setuptools: 41.0.0
Cython: 0.26.1
numpy: 1.16.4
scipy: 1.3.0
pyarrow: None
xarray: None
IPython: 6.4.0
sphinx: None
patsy: 0.5.1
dateutil: 2.7.3
pytz: 2018.4
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 3.1.1
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml.etree: 4.2.1
bs4: 4.6.0
html5lib: 0.999999999
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
gcsfs: None
</details>
| 2019-08-08T02:04:10Z | [] | [] |
Traceback (most recent call last):
File "test.py", line 8, in <module>
s.rename(i1) # raises error
File "/usr/local/lib/python3.6/dist-packages/pandas/core/series.py", line 3736, in rename
return super(Series, self).rename(index=index, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/pandas/core/generic.py", line 1091, in rename
level=level)
File "/usr/local/lib/python3.6/dist-packages/pandas/core/internals/managers.py", line 171, in rename_axis
obj.set_axis(axis, _transform_index(self.axes[axis], mapper, level))
File "/usr/local/lib/python3.6/dist-packages/pandas/core/internals/managers.py", line 2004, in _transform_index
items = [func(x) for x in index]
File "/usr/local/lib/python3.6/dist-packages/pandas/core/internals/managers.py", line 2004, in <listcomp>
items = [func(x) for x in index]
TypeError: 'MyIndexer' object is not callable
| 12,934 |
||||
pandas-dev/pandas | pandas-dev__pandas-27827 | 69c58da27cb61a81a94cc3a5da3a2c1870b4e693 | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -117,6 +117,7 @@ Plotting
Groupby/resample/rolling
^^^^^^^^^^^^^^^^^^^^^^^^
+- Fixed regression in :meth:`pands.core.groupby.DataFrameGroupBy.quantile` raising when multiple quantiles are given (:issue:`27526`)
- Bug in :meth:`pandas.core.groupby.DataFrameGroupBy.transform` where applying a timezone conversion lambda function would drop timezone information (:issue:`27496`)
- Bug in windowing over read-only arrays (:issue:`27766`)
- Fixed segfault in `pandas.core.groupby.DataFrameGroupBy.quantile` when an invalid quantile was passed (:issue:`27470`)
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1870,6 +1870,7 @@ def quantile(self, q=0.5, interpolation="linear"):
a 2.0
b 3.0
"""
+ from pandas import concat
def pre_processor(vals: np.ndarray) -> Tuple[np.ndarray, Optional[Type]]:
if is_object_dtype(vals):
@@ -1897,18 +1898,57 @@ def post_processor(vals: np.ndarray, inference: Optional[Type]) -> np.ndarray:
return vals
- return self._get_cythonized_result(
- "group_quantile",
- self.grouper,
- aggregate=True,
- needs_values=True,
- needs_mask=True,
- cython_dtype=np.float64,
- pre_processing=pre_processor,
- post_processing=post_processor,
- q=q,
- interpolation=interpolation,
- )
+ if is_scalar(q):
+ return self._get_cythonized_result(
+ "group_quantile",
+ self.grouper,
+ aggregate=True,
+ needs_values=True,
+ needs_mask=True,
+ cython_dtype=np.float64,
+ pre_processing=pre_processor,
+ post_processing=post_processor,
+ q=q,
+ interpolation=interpolation,
+ )
+ else:
+ results = [
+ self._get_cythonized_result(
+ "group_quantile",
+ self.grouper,
+ aggregate=True,
+ needs_values=True,
+ needs_mask=True,
+ cython_dtype=np.float64,
+ pre_processing=pre_processor,
+ post_processing=post_processor,
+ q=qi,
+ interpolation=interpolation,
+ )
+ for qi in q
+ ]
+ result = concat(results, axis=0, keys=q)
+ # fix levels to place quantiles on the inside
+ # TODO(GH-10710): Ideally, we could write this as
+ # >>> result.stack(0).loc[pd.IndexSlice[:, ..., q], :]
+ # but this hits https://github.com/pandas-dev/pandas/issues/10710
+ # which doesn't reorder the list-like `q` on the inner level.
+ order = np.roll(list(range(result.index.nlevels)), -1)
+ result = result.reorder_levels(order)
+ result = result.reindex(q, level=-1)
+
+ # fix order.
+ hi = len(q) * self.ngroups
+ arr = np.arange(0, hi, self.ngroups)
+ arrays = []
+
+ for i in range(self.ngroups):
+ arr = arr + i
+ arrays.append(arr)
+
+ indices = np.concatenate(arrays)
+ assert len(indices) == len(result)
+ return result.take(indices)
@Substitution(name="groupby")
def ngroup(self, ascending=True):
| Groupby Array-Type Quantiles Broken in 0.25.0
#### Code Sample
```python
import pandas as pd
df = pd.DataFrame({
'category': ['A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'B'],
'value': [1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6]
})
quantiles = df.groupby('category').quantile([0.25, 0.5, 0.75])
print(quantiles)
```
#### Problem description
In previous versions of pandas (`< 0.25.0`), and per the documentation, it was possible to pass an array of quantiles into the `DataFrameGroupBy.quantile()` method to return multiple quantile values in a single call. However, upon installing `0.25.0`, the following error results instead:
```
Traceback (most recent call last):
File "example.py", line 8, in <module>
quantiles = df.groupby('category').quantile([0.25, 0.5, 0.75])
File "/usr/local/lib/python3.7/site-packages/pandas/core/groupby/groupby.py", line 1908, in quantile
interpolation=interpolation,
File "/usr/local/lib/python3.7/site-packages/pandas/core/groupby/groupby.py", line 2248, in _get_cythonized_result
func(**kwargs) # Call func to modify indexer values in place
File "pandas/_libs/groupby.pyx", line 69
```
#### Expected Output
Using Pandas `0.24.2` the output is:
```
value
category
A 0.25 2.25
0.50 3.50
0.75 4.75
B 0.25 2.25
0.50 3.50
0.75 4.75
```
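Until this is fixed, one workaround is to compute each quantile separately (scalar `q` still works in 0.25.0) and stack the results; a sketch reusing `df` from the code sample above:
```python
qs = [0.25, 0.5, 0.75]
parts = [df.groupby("category").quantile(q) for q in qs]
# keys puts q on the outer index level; swap and sort to match the 0.24 layout
out = pd.concat(parts, keys=qs).swaplevel(0, 1).sort_index()
```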
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.4.final.0
python-bits : 64
OS : Linux
OS-release : 4.9.125-linuxkit
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 0.25.0
numpy : 1.16.4
pytz : 2019.1
dateutil : 2.8.0
pip : 19.1.1
setuptools : 41.0.1
Cython : None
pytest : 5.0.1
hypothesis : None
sphinx : 2.1.2
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.10.1
IPython : None
pandas_datareader: None
bs4 : 4.8.0
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : 3.1.1
numexpr : 2.6.9
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : 0.3.0
scipy : 1.3.0
sqlalchemy : None
tables : 3.5.2
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
</details>
I got this error message when using a numpy array (from `np.linspace()`):
`TypeError: only size-1 arrays can be converted to Python scalars`
Downgrading to pandas 0.24 solves this.
my test code (snippet):
```python
percs = np.linspace(0, 1, num=intervals + 1).round(decimals=3)
d = df[['x', 'y']]
g = d.groupby('x')
quants = g.quantile(percs)
```
breaks on the last line with 0.25, works in 0.24
there is a PR, #27473, which solves this and just needs some touching up
That PR was about #20405 not validating inputs. This issue is about #20405 deleting functionality, so these are different bugs.
Is the fix to change https://github.com/pandas-dev/pandas/blob/c0ff67a22df9c18da1172766e313732ed2ab6c30/pandas/core/groupby/groupby.py#L1900-L1911 to be called once per value in `q` when a list of quantiles is provided? Then concat the results together with `concat(results, axis=1, keys=q)`?
The output of `DataFrameGroupBy.quantile` is a DataFrame whose
* index is the group keys
* columns are the (numeric) columns
```python
In [68]: df = pd.DataFrame({"A": [0, 1, 2, 3, 4]})
In [69]: df.groupby([0, 0, 1, 1, 1]).quantile(0.25)
Out[69]:
A
0 0.25
```
What's the expected output of `.quantile(List[float])`?
It's not the most useful, but I think the best option is a MultiIndex in the columns.
```python
In [70]: a = df.iloc[:2].quantile([0.25]).unstack()
In [71]: b = df.iloc[2:].quantile([0.25]).unstack()
In [72]: pd.concat([a, b], keys=[0, 1]).unstack([1, 2])
Out[72]:
A
0.25
0 0.25
1 2.50
```
The other option is to have the `q`s in the index, but that breaks my mental model that the index should be the unique group keys.
Oh, whoops, I missed the 0.24 output. We'll match that. | 2019-08-08T20:36:24Z | [] | [] |
Traceback (most recent call last):
File "example.py", line 8, in <module>
quantiles = df.groupby('category').quantile([0.25, 0.5, 0.75])
File "/usr/local/lib/python3.7/site-packages/pandas/core/groupby/groupby.py", line 1908, in quantile
interpolation=interpolation,
File "/usr/local/lib/python3.7/site-packages/pandas/core/groupby/groupby.py", line 2248, in _get_cythonized_result
func(**kwargs) # Call func to modify indexer values in place
File "pandas/_libs/groupby.pyx", line 69
```
#### Expected Output
Using Pandas `0.24.2` the output is:
| 12,937 |
|||
pandas-dev/pandas | pandas-dev__pandas-27926 | 6813d7796e759435e915f3dda84ad9db81ebbadb | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -85,6 +85,7 @@ Indexing
- Bug in partial-string indexing returning a NumPy array rather than a ``Series`` when indexing with a scalar like ``.loc['2015']`` (:issue:`27516`)
- Break reference cycle involving :class:`Index` to allow garbage collection of :class:`Index` objects without running the GC. (:issue:`27585`)
- Fix regression in assigning values to a single column of a DataFrame with a ``MultiIndex`` columns (:issue:`27841`).
+- Fix regression in ``.ix`` fallback with an ``IntervalIndex`` (:issue:`27865`).
-
Missing
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -124,7 +124,7 @@ def __getitem__(self, key):
key = tuple(com.apply_if_callable(x, self.obj) for x in key)
try:
values = self.obj._get_value(*key)
- except (KeyError, TypeError, InvalidIndexError):
+ except (KeyError, TypeError, InvalidIndexError, AttributeError):
# TypeError occurs here if the key has non-hashable entries,
# generally slice or list.
# TODO(ix): most/all of the TypeError cases here are for ix,
@@ -132,6 +132,9 @@ def __getitem__(self, key):
# The InvalidIndexError is only catched for compatibility
# with geopandas, see
# https://github.com/pandas-dev/pandas/issues/27258
+ # TODO: The AttributeError is for IntervalIndex which
+ # incorrectly implements get_value, see
+ # https://github.com/pandas-dev/pandas/issues/27865
pass
else:
if is_scalar(values):
| Cannot use .ix with IntervalIndex ('pandas._libs.interval.IntervalTree' object has no attribute 'get_value')
#### Code Sample, a copy-pastable example if possible
```python
import numpy as np
import pandas as pd

x = pd.Series([-2.801298, -2.882724, -3.007899, -2.704554, -3.398761,
               -2.805034, -2.87554, -2.805034, -2.886459, -2.471618])
y = pd.Series([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
init_cut = pd.qcut(x, 5, duplicates='drop')
retbin = pd.Series(init_cut.values.categories).sort_values()
retbin.iloc[0] = pd.Interval(-np.inf, retbin.iloc[0].right)
retbin.iloc[-1] = pd.Interval(retbin.iloc[-1].left, np.inf)
init_cut = pd.cut(x, pd.IntervalIndex(retbin))
init_cut = init_cut.astype(object)
bin_df = pd.crosstab(index=init_cut, columns=y)
bin_df = bin_df.reindex(retbin)
bin_df = bin_df.sort_index()
bin_df = bin_df.fillna(0.0)
bin_df['nbin'] = np.nan
```
#### Problem description
`bin_df` is:

```
col_0             0  nbin
(-inf, -2.911]    2   NaN
(-2.911, -2.878]  2   NaN
(-2.878, -2.805]  3   NaN
(-2.805, -2.782]  1   NaN
(-2.782, inf]     2   NaN
```

If I use `bin_df.ix[0:2, 0]`, I get an error like:
```pytb
Traceback (most recent call last):
  File "D:\anaconda\lib\site-packages\IPython\core\interactiveshell.py", line 2961, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-12-1ae8ba69565c>", line 1, in <module>
    bin_df.ix[0:1,'nbin']
  File "D:\PyTest\venv\lib\site-packages\pandas\core\indexing.py", line 125, in __getitem__
    values = self.obj._get_value(*key)
  File "D:\PyTest\venv\lib\site-packages\pandas\core\frame.py", line 2827, in _get_value
    return engine.get_value(series._values, index)
AttributeError: 'pandas._libs.interval.IntervalTree' object has no attribute 'get_value'
```
The version is 0.25.0, but it works well in 0.24.x.
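Until the regression is fixed, a possible user-side workaround (a sketch; `.ix` has been deprecated for a while anyway) is to index positionally, which does not route through the failing `_get_value` fallback:

```python
# Sketch of a workaround: use .iloc instead of the deprecated .ix.
bin_df.iloc[0:2, 0]       # rows 0-1 of the first column, by position
bin_df.iloc[0:2]['nbin']  # or slice positionally, then select the column by label
```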
| In fact, I've got another problem besides this one: I cannot use

```python
init_cut = init_cut.astype(pd.Interval)
```

I get another error:

```pytb
Traceback (most recent call last):
  File "D:\anaconda\lib\site-packages\IPython\core\interactiveshell.py", line 2961, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-14-614614131098>", line 1, in <module>
    init_cut.astype(pd.Interval)
  File "D:\PyTest\venv\lib\site-packages\pandas\core\generic.py", line 5883, in astype
    dtype=dtype, copy=copy, errors=errors, **kwargs
  File "D:\PyTest\venv\lib\site-packages\pandas\core\internals\managers.py", line 581, in astype
    return self.apply("astype", dtype=dtype, **kwargs)
  File "D:\PyTest\venv\lib\site-packages\pandas\core\internals\managers.py", line 438, in apply
    applied = getattr(b, f)(**kwargs)
  File "D:\PyTest\venv\lib\site-packages\pandas\core\internals\blocks.py", line 557, in astype
    return self._astype(dtype, copy=copy, errors=errors, values=values, **kwargs)
  File "D:\PyTest\venv\lib\site-packages\pandas\core\internals\blocks.py", line 612, in _astype
    dtype = pandas_dtype(dtype)
  File "D:\PyTest\venv\lib\site-packages\pandas\core\dtypes\common.py", line 2067, in pandas_dtype
    raise TypeError("dtype '{}' not understood".format(dtype))
TypeError: dtype '<class 'pandas._libs.interval.Interval'>' not understood
```
but error in version 0.24.x and later version
Any can help me.... | 2019-08-15T06:54:38Z | [] | [] |
Traceback (most recent call last):
  File "D:\anaconda\lib\site-packages\IPython\core\interactiveshell.py", line 2961, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-12-1ae8ba69565c>", line 1, in <module>
    bin_df.ix[0:1,'nbin']
  File "D:\PyTest\venv\lib\site-packages\pandas\core\indexing.py", line 125, in __getitem__
    values = self.obj._get_value(*key)
  File "D:\PyTest\venv\lib\site-packages\pandas\core\frame.py", line 2827, in _get_value
    return engine.get_value(series._values, index)
AttributeError: 'pandas._libs.interval.IntervalTree' object has no attribute 'get_value'
| 12,955 |
|||
pandas-dev/pandas | pandas-dev__pandas-28131 | 5c0da7dd4034427745038381e8e2b77ac8c59d08 | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -140,7 +140,7 @@ Interval
Indexing
^^^^^^^^
--
+- Bug in assignment using a reverse slicer (:issue:`26939`)
-
Missing
diff --git a/pandas/core/indexers.py b/pandas/core/indexers.py
--- a/pandas/core/indexers.py
+++ b/pandas/core/indexers.py
@@ -226,6 +226,7 @@ def length_of_indexer(indexer, target=None) -> int:
if step is None:
step = 1
elif step < 0:
+ start, stop = stop + 1, start + 1
step = -step
return (stop - start + step - 1) // step
elif isinstance(indexer, (ABCSeries, ABCIndexClass, np.ndarray, list)):
| BUG: cannot set Series with reverse slicer
Minimal example:
```
>>> import pandas as pd
>>> s = pd.Series(index=range(2010, 2020))
>>> s.loc[2015:2010:-1] = [6, 5, 4, 3, 2, 1]
Traceback (most recent call last):
[...]
ValueError: cannot set using a slice indexer with a different length than the value
```
I see no reason why this shouldn't work, as setting with the forward slicer works without problems, and *getting* with the reverse slicer also works without issue:
```
>>> # turn list around because slicer is (not) reversed compared to above
>>> s.loc[2010:2015] = [6, 5, 4, 3, 2, 1][::-1]
>>> s
2010 1.0
2011 2.0
2012 3.0
2013 4.0
2014 5.0
2015 6.0
2016 NaN
2017 NaN
2018 NaN
2019 NaN
dtype: float64
>>> s.loc[2015:2010:-1] == [6, 5, 4, 3, 2, 1] # comparison, not assignment
2015 True
2014 True
2013 True
2012 True
2011 True
2010 True
dtype: bool
```
PS: For the failure, it does not matter whether the RHS is an np.array, etc.
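The failure traces back to the slice-length computation in `length_of_indexer`; a standalone sketch of the corrected arithmetic from the patch above (the positional values in the check are illustrative assumptions):

```python
def reversed_slice_length(start, stop, step):
    # Mirrors the patched negative-step branch: swap the endpoints,
    # then count forwards with a positive step.
    assert step < 0
    start, stop = stop + 1, start + 1
    step = -step
    return (stop - start + step - 1) // step

# e.g. six reversed positions represented internally as start=5, stop=-1, step=-1:
assert reversed_slice_length(5, -1, -1) == 6
```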
| can I try it?
Sure, thanks.
| 2019-08-25T03:07:38Z | [] | [] |
Traceback (most recent call last):
[...]
ValueError: cannot set using a slice indexer with a different length than the value
| 12,983 |
|||
pandas-dev/pandas | pandas-dev__pandas-28412 | 0ab32e88481440bfb4a102bb7731cbde2e5ceafe | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -234,7 +234,7 @@ Other
- Trying to set the ``display.precision``, ``display.max_rows`` or ``display.max_columns`` using :meth:`set_option` to anything but a ``None`` or a positive int will raise a ``ValueError`` (:issue:`23348`)
- Using :meth:`DataFrame.replace` with overlapping keys in a nested dictionary will no longer raise, now matching the behavior of a flat dictionary (:issue:`27660`)
- :meth:`DataFrame.to_csv` and :meth:`Series.to_csv` now support dicts as ``compression`` argument with key ``'method'`` being the compression method and others as additional compression options when the compression method is ``'zip'``. (:issue:`26023`)
--
+- :meth:`Series.append` will no longer raise a ``TypeError`` when passed a tuple of ``Series`` (:issue:`28410`)
.. _whatsnew_1000.contributors:
diff --git a/pandas/core/series.py b/pandas/core/series.py
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2730,7 +2730,8 @@ def append(self, to_append, ignore_index=False, verify_integrity=False):
from pandas.core.reshape.concat import concat
if isinstance(to_append, (list, tuple)):
- to_concat = [self] + to_append
+ to_concat = [self]
+ to_concat.extend(to_append)
else:
to_concat = [self, to_append]
return concat(
| Series.append raises TypeError with tuple of Series
mypy error:
```
pandas\core\series.py:2733:25: error: Unsupported operand types for + ("List[Any]" and "Tuple[Any, ...]")
pandas\core\series.py:2733:25: note: Right operand is of type "Union[List[Any], Tuple[Any, ...]]"
```
#### Code Sample, a copy-pastable example if possible
```python
>>> import pandas as pd
>>> pd.__version__
'0.25.0+332.g261c3a667'
>>>
>>> ser = pd.Series([1,2,3])
>>>
>>> ser
0 1
1 2
2 3
dtype: int64
>>>
>>> ser.append(ser)
0 1
1 2
2 3
0 1
1 2
2 3
dtype: int64
>>>
>>> ser.append([ser,ser])
0 1
1 2
2 3
0 1
1 2
2 3
0 1
1 2
2 3
dtype: int64
>>>
>>> ser.append((ser,ser))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\simon\OneDrive\code\pandas-simonjayhawkins\pandas\core\series.py", line 2733, in append
    to_concat = [self] + to_append
TypeError: can only concatenate list (not "tuple") to list
```
#### Problem description
The docstring for Series.append states `to_append : Series or list/tuple of Series`. Appending a tuple of Series raises `TypeError: can only concatenate list (not "tuple") to list`
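The root cause is plain Python semantics rather than anything pandas-specific: `list + tuple` raises, while `list.extend` accepts any iterable, which is what the patch switches to. A minimal illustration:

```python
to_concat = ["self"]
try:
    to_concat + ("a", "b")  # TypeError: can only concatenate list (not "tuple") to list
except TypeError:
    pass

to_concat.extend(("a", "b"))  # works for tuples (and any other iterable)
assert to_concat == ["self", "a", "b"]
```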
| 2019-09-12T14:41:37Z | [] | [] |
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\simon\OneDrive\code\pandas-simonjayhawkins\pandas\core\series.py", line 2733, in append
    to_concat = [self] + to_append
TypeError: can only concatenate list (not "tuple") to list
| 13,019 |