repo | instance_id | base_commit | patch | test_patch | problem_statement | hints_text | created_at | version | FAIL_TO_PASS | PASS_TO_PASS | environment_setup_commit | traceback | __index_level_0__
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pandas-dev/pandas | pandas-dev__pandas-3864 | 45d298d3232c2c769d2bc6b3fe461de7c4e8b72c | diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -984,11 +984,11 @@ def _prepare_blocks(self):
         return blockmaps, reindexed_data

     def _get_concatenated_data(self):
-        try:
-            # need to conform to same other (joined) axes for block join
-            blockmaps, rdata = self._prepare_blocks()
-            kinds = _get_all_block_kinds(blockmaps)
+        # need to conform to same other (joined) axes for block join
+        blockmaps, rdata = self._prepare_blocks()
+        kinds = _get_all_block_kinds(blockmaps)
+        try:
             new_blocks = []
             for kind in kinds:
                 klass_blocks = [mapping.get(kind) for mapping in blockmaps]
| UnboundLocalError in _concat_single_item()
The exception handling code in _get_concatenated_data() relies on the exception not originating in self._prepare_blocks(); when it does, it can fail with this non-informative error message:
```
Traceback (most recent call last):
File "<pyshell#5>", line 12, in <module>
df = pd.concat([activedf,closeddf])
File "C:\Python27\lib\site-packages\pandas\tools\merge.py", line 873, in concat
return op.get_result()
File "C:\Python27\lib\site-packages\pandas\tools\merge.py", line 957, in get_result
new_data = self._get_concatenated_data()
File "C:\Python27\lib\site-packages\pandas\tools\merge.py", line 1001, in _get_concatenated_data
new_data[item] = self._concat_single_item(rdata, item)
UnboundLocalError: local variable 'rdata' referenced before assignment
```
thought I fixed this... just need to move the first statement outside of the try/except. PR?
yes, thanks; just wanted to report. sorry didn't find any previous report.
oh....it still looks like an error, what I meant was do you want to submit a PR to fix?
@stefan-pmc do you have a case that reproduces this?
@stefan-pmc if u provide the case i will fix :)
unless u want to submit a PR that is great too!
Hard to supply the actual data, but basically the error will happen, for example, if you try to do pd.concat() on a bunch of dataframes where one of them has an index with duplicates in it. If you move the first line outside the try block, it will generate the correct error message (cannot reindex with non-unique index values), so that should do the job.
can submit a PR if that is of additional use and if you tell me what a PR is.
oh sorry :) a PR == pull request...do u need additional details about pull requests? happy to oblige!
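For readers following along, here is a distilled, self-contained sketch of the control-flow bug discussed above (plain Python standing in for the pandas internals; the names are illustrative, not the actual code):
```python
def prepare():
    # stands in for self._prepare_blocks(), which raises on a duplicate index
    raise ValueError("cannot reindex with non-unique index values")


def buggy():
    try:
        rdata = prepare()  # raises before ``rdata`` is ever bound
        return rdata
    except ValueError:
        # referencing ``rdata`` here fails with UnboundLocalError,
        # masking the real error raised above
        return repr(rdata)


def fixed():
    rdata = prepare()  # moved outside the try: the informative error surfaces
    try:
        return rdata
    except ValueError:
        return repr(rdata)


try:
    buggy()
except UnboundLocalError as err:
    print("masked:", err)

try:
    fixed()
except ValueError as err:
    print("real error:", err)
```
This mirrors the patch above: hoisting `self._prepare_blocks()` out of the `try` block lets the genuine reindexing error propagate.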
| 2013-06-12T14:22:18Z | [] | [] |
Traceback (most recent call last):
File "<pyshell#5>", line 12, in <module>
df = pd.concat([activedf,closeddf])
File "C:\Python27\lib\site-packages\pandas\tools\merge.py", line 873, in concat
return op.get_result()
File "C:\Python27\lib\site-packages\pandas\tools\merge.py", line 957, in get_result
new_data = self._get_concatenated_data()
File "C:\Python27\lib\site-packages\pandas\tools\merge.py", line 1001, in _get_concatenated_data
new_data[item] = self._concat_single_item(rdata, item)
UnboundLocalError: local variable 'rdata' referenced before assignment
| 14,547 |
|||
pandas-dev/pandas | pandas-dev__pandas-38654 | 1bac7ac9c0484a8bb6b207eccf1341db39a87039 | diff --git a/asv_bench/benchmarks/algorithms.py b/asv_bench/benchmarks/algorithms.py
--- a/asv_bench/benchmarks/algorithms.py
+++ b/asv_bench/benchmarks/algorithms.py
@@ -5,7 +5,6 @@
 from pandas._libs import lib

 import pandas as pd
-from pandas.core.algorithms import make_duplicates_of_left_unique_in_right

 from .pandas_vb_common import tm
@@ -175,15 +174,4 @@ def time_argsort(self, N):
         self.array.argsort()


-class RemoveDuplicates:
-    def setup(self):
-        N = 10 ** 5
-        na = np.arange(int(N / 2))
-        self.left = np.concatenate([na[: int(N / 4)], na[: int(N / 4)]])
-        self.right = np.concatenate([na, na])
-
-    def time_make_duplicates_of_left_unique_in_right(self):
-        make_duplicates_of_left_unique_in_right(self.left, self.right)
-
-
 from .pandas_vb_common import setup  # noqa: F401 isort:skip
diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -821,7 +821,6 @@ Reshaping
- Bug in :meth:`DataFrame.combine_first` caused wrong alignment with dtype ``string`` and one level of ``MultiIndex`` containing only ``NA`` (:issue:`37591`)
- Fixed regression in :func:`merge` on merging :class:`.DatetimeIndex` with empty DataFrame (:issue:`36895`)
- Bug in :meth:`DataFrame.apply` not setting index of return value when ``func`` return type is ``dict`` (:issue:`37544`)
-- Bug in :func:`concat` resulting in a ``ValueError`` when at least one of both inputs had a non-unique index (:issue:`36263`)
- Bug in :meth:`DataFrame.merge` and :meth:`pandas.merge` returning inconsistent ordering in result for ``how=right`` and ``how=left`` (:issue:`35382`)
- Bug in :func:`merge_ordered` couldn't handle list-like ``left_by`` or ``right_by`` (:issue:`35269`)
- Bug in :func:`merge_ordered` returned wrong join result when length of ``left_by`` or ``right_by`` equals to the rows of ``left`` or ``right`` (:issue:`38166`)
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -2199,24 +2199,3 @@ def _sort_tuples(values: np.ndarray[tuple]):
     arrays, _ = to_arrays(values, None)
     indexer = lexsort_indexer(arrays, orders=True)
     return values[indexer]
-
-
-def make_duplicates_of_left_unique_in_right(
-    left: np.ndarray, right: np.ndarray
-) -> np.ndarray:
-    """
-    If left has duplicates, which are also duplicated in right, this duplicated values
-    are dropped from right, meaning that every duplicate value from left exists only
-    once in right.
-
-    Parameters
-    ----------
-    left: ndarray
-    right: ndarray
-
-    Returns
-    -------
-    Duplicates of left are unique in right
-    """
-    left_duplicates = unique(left[duplicated(left)])
-    return right[~(duplicated(right) & isin(right, left_duplicates))]
diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py
--- a/pandas/core/reshape/concat.py
+++ b/pandas/core/reshape/concat.py
@@ -24,7 +24,6 @@
 from pandas.core.dtypes.generic import ABCDataFrame, ABCSeries
 from pandas.core.dtypes.missing import isna

-import pandas.core.algorithms as algos
 from pandas.core.arrays.categorical import (
     factorize_from_iterable,
     factorize_from_iterables,
@@ -513,14 +512,7 @@ def get_result(self):
                     # 1-ax to convert BlockManager axis to DataFrame axis
                     obj_labels = obj.axes[1 - ax]
                     if not new_labels.equals(obj_labels):
-                        # We have to remove the duplicates from obj_labels
-                        # in new labels to make them unique, otherwise we would
-                        # duplicate or duplicates again
-                        if not obj_labels.is_unique:
-                            new_labels = algos.make_duplicates_of_left_unique_in_right(
-                                np.asarray(obj_labels), np.asarray(new_labels)
-                            )
-                        indexers[ax] = obj_labels.reindex(new_labels)[1]
+                        indexers[ax] = obj_labels.get_indexer(new_labels)

                 mgrs_indexers.append((obj._mgr, indexers))
| BUG: concat on axis with both different and duplicate labels raising error
When concatenating two dataframes where a) there are duplicate columns in one of the dataframes and b) there are non-overlapping column names in both, you get an IndexError:
```
In [9]: df1 = pd.DataFrame(np.random.randn(3,3), columns=['A', 'A', 'B1'])
...: df2 = pd.DataFrame(np.random.randn(3,3), columns=['A', 'A', 'B2'])
In [10]: pd.concat([df1, df2])
Traceback (most recent call last):
File "<ipython-input-10-f61a1ab4009e>", line 1, in <module>
pd.concat([df1, df2])
...
File "c:\users\vdbosscj\scipy\pandas-joris\pandas\core\index.py", line 765, in take
taken = self.view(np.ndarray).take(indexer)
IndexError: index 3 is out of bounds for axis 0 with size 3
```
I don't know if it should work (although I suppose it should, as with only the duplicate columns it does work), but at least the error message is not really helpful.
| cc @immerrr
I think after #6745 this will be straightforward to fix
The main issue is how to align indices that both have duplicate items, as of now, indexing with dupes does strange things:
``` python
In [1]: pd.Index([1,1,2])
Out[1]: Int64Index([1, 1, 2], dtype='int64')
In [2]: _1.get_indexer_for(_1)
Out[2]: Int64Index([0, 1, 0, 1, 2], dtype='int64')
```
Apparently, for each non-unique element found in the destination, get_indexer tries to insert all locs of this element. I can hardly think of a use case where I'd want to do a `reindex(['x', 'y'])` and getting `(['x', 'x', 'x', 'y'])` instead would do.
The dup indexers came out of having duplicate indexers on a unique index, so you have to duplicate
```
In [1]: df = DataFrame(np.arange(10).reshape(5,2))
In [2]: df
Out[2]:
0 1
0 0 1
1 2 3
2 4 5
3 6 7
4 8 9
[5 rows x 2 columns]
In [5]: df.loc[:,[0,1,1,0]]
Out[5]:
0 1 1 0
0 0 1 1 0
1 2 3 3 2
2 4 5 5 4
3 6 7 7 6
4 8 9 9 8
[5 rows x 4 columns]
```
Now maybe outlaw that I suppose
Maybe it would make more sense to require the destination index to have the same count of duplicate entries for each element present in the source, i.e.:
``` python
# e.g. these should be ok
pd.Index([1,1,2]).get_indexer_for([1,1,2]) # should be ok and return [0, 1, 2]
pd.Index([1,1,2]).get_indexer_for([2]) # return [2]
pd.Index([1,1,2]).get_indexer_for([1,2,1]) # return [0, 2, 1]
# but these should be forbidden
pd.Index([1,1,2]).get_indexer_for([1,2]) # which one of `1` did you want?
pd.Index([1,1,2]).get_indexer_for([1,1,1,2]) # which `1` should be duplicated?
```
_UPD: or maybe cycle over duplicate elements like np.putmask does..._
``` python
x = np.arange(5)
np.putmask(x, x>1, [-33, -44])
print x
array([ 0, 1, -33, -44, -33])
# so that
pd.Index([1,1,2]).get_indexer_for([1,1,2,1,1]) # return [0,1,2,0,1] ?
```
right, so that's a duplicate of a duplicate; yes, that would need to be handled differently
Some more example from #17552 (with `axis=1`, different error messages, but the idea is the same: you only get the error when there are duplicate + different labels):
```
In [113]: df1 = pd.DataFrame(np.zeros((2,2)), index=[0, 1], columns=['a', 'b'])
...: df2 = pd.DataFrame(np.ones((2,2)), index=[2, 2], columns=['c', 'd'])
In [114]: pd.concat([df1, df2], axis=1)
...
ValueError: Shape of passed values is (4, 6), indices imply (4, 4)
In [115]: df1 = pd.DataFrame(np.zeros((2,2)), index=[0, 1], columns=['a', 'b'])
...: df2 = pd.DataFrame(np.ones((2,2)), index=pd.DatetimeIndex([2, 2]), columns=['c', 'd'])
In [116]: pd.concat([df1, df2], axis=1)
...
TypeError: 'NoneType' object is not iterable
```
In general, for maximum usage compatibility, we should treat a table as a symmetric rank-2 tensor (except for axis labels, e.g., the 1st direction called 'column', the 2nd direction called 'row'). Thus, no matter whether there are duplicate row indices, duplicate column names, or both, we should always be able to perform concatenation along either axis without throwing any errors.
Below is a very elegant design that can grant maximum compatibility, suppose there are duplicate names in both row indices and column names:
<img width="772" alt="image" src="https://user-images.githubusercontent.com/10172392/72241299-697f6a00-3621-11ea-930d-08eb8678af1e.png">
1. For concatenation along axis 0, (`pd.concat([df1, df2], axis=0)`, i.e., vertical concatenation of rows), the first two Column A's in df1 and df2 should align to each other, leaving empty values (or NaN) for the 3rd Column A in df1 because df1 does not have the 3rd Column A
<img width="892" alt="image" src="https://user-images.githubusercontent.com/10172392/72241321-7bf9a380-3621-11ea-8997-4fd5260f5194.png">
2. For concatenation along axis 1, (`pd.concat([df1, df2], axis=1)`, i.e., horizontal concatenation of columns), the first two Row 0's in df1 and df2 should align to each other, leaving empty values (or NaN) for the 3rd Row 0 in df1 because df1 does not have the 3rd Row 0
<img width="1017" alt="image" src="https://user-images.githubusercontent.com/10172392/72241329-887dfc00-3621-11ea-97e1-f55a8e657fe3.png">
3. So in the most generic case, for concatenation along any axis, duplicate indices along that axis are kept and appended (e.g., 2 'A' plus 3 'A' = 5 'A'), duplicate indices along other axis are merged with multiplicity count consistency (e.g., 2 'A' merge with 3 'A', the first 2 'A' corresponds to each other, the last 'A' get empty/NaN values)
These should solve many DataFrame concatenation bugs and crashes in Pandas!
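A rough sketch of the multiplicity-count rule from point 3 above (the helper name and approach are hypothetical, not pandas API):
```python
from collections import Counter

def union_with_multiplicity(left, right):
    # each label appears max(count in left, count in right) times,
    # in first-seen order: 2 'A' merged with 3 'A' yields 3 'A'
    counts = Counter(left) | Counter(right)  # Counter union = per-key max
    out = []
    for label in dict.fromkeys(list(left) + list(right)):
        out.extend([label] * counts[label])
    return out

print(union_with_multiplicity(["A", "A", "B"], ["A", "A", "A", "C"]))
# ['A', 'A', 'A', 'B', 'C']
```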
@xuancong84 this is anti pandas philosophy
having order dependencies is extremely fragile and would work unexpectedly if you happened to reorder columns or not
we must align columns on the labels; aligning some of them is just really odd
if you want to treat this as a tensor then drop the labels
-1 on this proposal
@jreback Sorry, I could not get what you mean. Could you be more specific and illustrate with examples?
Regardless of how you feel, while developing the omnipotent data plotter for pandas (https://github.com/xuancong84/beiwe-visualizer) I am experiencing lots of frustrating limitations in pandas. Many situations that could in principle be handled easily are not handled well. You might want to take a look at my code to see how troublesome the workarounds are to get things working.
Regarding order dependencies, I know it is not ideal because Python dictionary does not preserve order (unless you use OrderedDict). But in cases where there are duplicate names in both row indices and column names, that is the only way to make things work. Otherwise, Pandas just ends up with stupid crashes such as #28479 and #30772 where it should in principle work out correctly.
I understand, it is not easy to fix?
Because we don't know how to order the columns?
I don't get it really, but I can imagine it is a problem.
Could there maybe be at least an Exception with a helpful message?
I stumbled on it today and took some time to understand the problem.
And [others](https://stackoverflow.com/questions/53853826/concat-two-dataframes-with-duplicated-index-that-are-in-datetime-format/60723153#60723153) seem to have the same problem.
Notifications keep bringing me here :) I haven't touched the pandas codebase for a while, so take my 2¢ with a grain of salt.
> having order dependencies is extremely fragile
I agree with that: they are fragile and unreliable. And as a maintainer of other projects I get the sentiment of not adding stuff unless really necessary, even if it is conceptual stuff, like "in case of indexing non-unique indexes with non-unique indexer _(re-reading this sentence hurts)_, matching is performed according to the order of the labels".
But from a pandas end-user's perspective I do see this as a UX papercut. Yes, most of the time you shouldn't care about the ordering of columns or rows, but there are cases where there is no way around it. At that point you either verify the existing ordering or sort by a given criterion, and then for a short period of time you can rely on a specific order to perform a specific operation.
A good example of this would be forward-/backward-filling of NAs: ordering among the filling axis will directly influence the outcome, so before applying that I would need to make sure that the data is ordered as I want it to be. The same approach could be applicable here: if you need to concatenate dataframes with non-unique labels and they are not in the order you want them to be, it's up to you to sort them in whatever order you like.
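To make the ffill analogy concrete, a two-line sketch showing that row order changes the filled result:
```python
import pandas as pd

s = pd.Series([1.0, None, 2.0], index=["a", "b", "c"])
print(s.ffill().tolist())                              # [1.0, 1.0, 2.0]
print(s.sort_index(ascending=False).ffill().tolist())  # [2.0, 2.0, 1.0]
```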
I have a few more cases of the error messages being much less helpful than they could be:
```python
pd.concat([ # One dataframe has repeated column names
pd.DataFrame(np.ones((4, 4)), columns=list("aabc")),
pd.DataFrame(np.ones((4, 3)), columns=list("abc")),
])
```
```pytb
ValueError: Plan shapes are not aligned
```
```python
pd.concat([ # Repeated columns (same amount) different column ordering
pd.DataFrame(np.ones((2, 4)), columns=list("aabc")),
pd.DataFrame(np.ones((2, 4)), columns=list("abca")),
])
```
```pytb
AssertionError: Number of manager items must equal union of block items
```
<details>
<summary> Full tracebacks</summary>
```python
>>> import pandas as pd, numpy as np
>>> pd.concat([ # One dataframe has repeated column names
... pd.DataFrame(np.ones((4, 4)), columns=list("aabc")),
... pd.DataFrame(np.ones((4, 3)), columns=list("abc")),
... ])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.8/site-packages/pandas/core/reshape/concat.py", line 287, in concat
return op.get_result()
File "/usr/local/lib/python3.8/site-packages/pandas/core/reshape/concat.py", line 502, in get_result
new_data = concatenate_block_managers(
File "/usr/local/lib/python3.8/site-packages/pandas/core/internals/concat.py", line 54, in concatenate_block_managers
for placement, join_units in concat_plan:
File "/usr/local/lib/python3.8/site-packages/pandas/core/internals/concat.py", line 561, in _combine_concat_plans
raise ValueError("Plan shapes are not aligned")
ValueError: Plan shapes are not aligned
>>> pd.concat([ # Repeated columns (same amount) different column ordering
... pd.DataFrame(np.ones((4, 4)), columns=list("aabc")),
... pd.DataFrame(np.ones((4, 4)), columns=list("abca")),
... ])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.8/site-packages/pandas/core/reshape/concat.py", line 287, in concat
return op.get_result()
File "/usr/local/lib/python3.8/site-packages/pandas/core/reshape/concat.py", line 502, in get_result
new_data = concatenate_block_managers(
File "/usr/local/lib/python3.8/site-packages/pandas/core/internals/concat.py", line 84, in concatenate_block_managers
return BlockManager(blocks, axes)
File "/usr/local/lib/python3.8/site-packages/pandas/core/internals/managers.py", line 149, in __init__
self._verify_integrity()
File "/usr/local/lib/python3.8/site-packages/pandas/core/internals/managers.py", line 331, in _verify_integrity
raise AssertionError(
AssertionError: Number of manager items must equal union of block items
# manager items: 3, # tot_items: 4
```
</details>
I think it's a little strange that the following works, but the previous examples don't:
```python
>>> pd.concat([ # Repeated columns, same ordering
... pd.DataFrame(np.ones((2, 4)), columns=list("aabc")),
... pd.DataFrame(np.ones((2, 4)), columns=list("aabc")),
... ])
a a b c
0 1.0 1.0 1.0 1.0
1 1.0 1.0 1.0 1.0
0 1.0 1.0 1.0 1.0
1 1.0 1.0 1.0 1.0
```
Could there be a check for this in concatenation which throws a better error?
If non-unique column names are to be disallowed it could be something simple, like this other error pandas throws:
```
InvalidIndexError: Reindexing only valid with uniquely valued Index objects
```
It could even be more specific and even name some of the repeated elements, if you wanted to get fancy.
If the case where ordering is preserved will be kept, it could be something like:
```
InvalidIndexError: Repeated column names {non-unique-columns} could not be uniquely aligned between DataFrames
```
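A rough sketch of what such an up-front check could look like (the helper is hypothetical, not pandas internals):
```python
import pandas as pd
from pandas.errors import InvalidIndexError

def check_uniquely_alignable(indexes):
    # hypothetical pre-flight check: identical label sequences are fine,
    # otherwise any repeated label makes alignment ambiguous
    for idx in indexes:
        dupes = idx[idx.duplicated()].unique()
        if len(dupes) and not all(idx.equals(other) for other in indexes):
            raise InvalidIndexError(
                f"Repeated column names {list(dupes)} could not be "
                "uniquely aligned between DataFrames"
            )

check_uniquely_alignable([pd.Index(list("aabc")), pd.Index(list("aabc"))])  # ok
try:
    check_uniquely_alignable([pd.Index(list("aabc")), pd.Index(list("abc"))])
except InvalidIndexError as err:
    print(err)
```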
@ivirshup happy to take a PR to have a better error message
yeah, duplicates along the same axis of concatenation are almost always an error
I'd be happy to make a PR. I feel like there might be code in pandas that does checks like these already. Any chance you could point me to places these might be for reference?
Also, I'm assuming you want to keep the existing behaviour of the following working for now?
```python
pd.concat([ # Repeated columns, same order
pd.DataFrame(np.ones((2, 4)), columns=list("aabc")),
pd.DataFrame(np.ones((2, 5)), columns=list("aabc")),
])
```
Is that right?
@jreback, I think this conflicts pretty directly with #36290, which allows duplicate items.
Also, I think there are some bugs in that implementation. Using the current release candidate:
```python
import pandas as pd
import numpy as np
from string import ascii_lowercase
letters = np.array(list(ascii_lowercase))
a_int = pd.DataFrame(np.arange(5), index=[0,1,2,3,3], columns=['a'])
b_int = pd.DataFrame(np.arange(5), index=[0,1,2,2,4], columns=['b'])
a_str = a_int.set_index(letters[a_int.index])
b_str = b_int.set_index(letters[b_int.index])
```
This works (the purpose of the PR, and the example in its linked issue):
```python
pd.concat([a_int, b_int], axis=1)
```
```
a b
0 0.0 0.0
1 1.0 1.0
2 2.0 2.0
2 2.0 3.0
3 3.0 NaN
3 4.0 NaN
4 NaN 4.0
```
This does not work, though I believe it's essentially equivalent to the previous example:
```python
pd.concat([a_str, b_str], axis=1)
```
```pytb
----> 1 pd.concat([a_str, b_str], axis=1)
~/miniconda3/envs/pandas-1.2/lib/python3.8/site-packages/pandas/core/reshape/concat.py in concat(objs, axis, join, ignore_index, keys, levels, names, verify_integrity, sort, copy)
297 )
298
--> 299 return op.get_result()
300
301
~/miniconda3/envs/pandas-1.2/lib/python3.8/site-packages/pandas/core/reshape/concat.py in get_result(self)
526 mgrs_indexers.append((obj._mgr, indexers))
527
--> 528 new_data = concatenate_block_managers(
529 mgrs_indexers, self.new_axes, concat_axis=self.bm_axis, copy=self.copy
530 )
~/miniconda3/envs/pandas-1.2/lib/python3.8/site-packages/pandas/core/internals/concat.py in concatenate_block_managers(mgrs_indexers, axes, concat_axis, copy)
87 blocks.append(b)
88
---> 89 return BlockManager(blocks, axes)
90
91
~/miniconda3/envs/pandas-1.2/lib/python3.8/site-packages/pandas/core/internals/managers.py in __init__(self, blocks, axes, do_integrity_check)
141
142 if do_integrity_check:
--> 143 self._verify_integrity()
144
145 # Populate known_consolidate, blknos, and blklocs lazily
~/miniconda3/envs/pandas-1.2/lib/python3.8/site-packages/pandas/core/internals/managers.py in _verify_integrity(self)
321 for block in self.blocks:
322 if block.shape[1:] != mgr_shape[1:]:
--> 323 raise construction_error(tot_items, block.shape[1:], self.axes)
324 if len(self.items) != tot_items:
325 raise AssertionError(
ValueError: Shape of passed values is (6, 2), indices imply (5, 2)
```
--------------------------
As an overall point, I think the target behaviour of that PR is wrong. Here's an example of why:
```python
# Using pandas 1.2.0rc0
df1 = pd.DataFrame(np.arange(3), index=[0,1,1], columns=['a'])
df2 = pd.DataFrame(np.arange(3), index=[1,0,1], columns=['b'])
pd.concat([df1, df2], axis=1)
```
```
a b
0 0 1
1 1 0
1 2 2
```
The results here rely on the ordering of the labels (https://github.com/pandas-dev/pandas/issues/6963#issuecomment-573619524), which I agree is brittle.
I think there are two more reasonable options for the behaviour.
* Union the indices, duplicates cause errors (my suggestion)
* Mimic merge, i.e. actually take the outer product of indices.
I'd note the current behaviour of `concat` interprets `"inner"`/`"outer"` much more like `"intersection"`/`"union"` compared to `merge`'s `"inner"`/`"outer"` operations. Mimicking merge could be a larger behaviour change.
`1.2.0rc0` is doing something else
<details>
<summary> merge behaviour </summary>
`merge` behaviour compared with `concat` in `1.2.0rc0`
#### merge "inner" and "outer" are equivalent for common repeated indices
```python
In [11]: pd.merge(df1, df2, left_index=True, right_index=True, how="inner")
Out[11]:
a b
0 0 1
1 1 0
1 1 2
1 2 0
1 2 2
In [12]: pd.merge(df1, df2, left_index=True, right_index=True, how="outer")
Out[12]:
a b
0 0 1
1 1 0
1 1 2
1 2 0
1 2 2
```
and do not match current behaviour of `concat`
#### Current implementation otherwise basically works for outer joins if indices are only repeated in one DataFrame
Using definitions from above, e.g.:
```python
a_int = pd.DataFrame(np.random.randn(5), index=[0,1,2,3,3], columns=['a'])
b_int = pd.DataFrame(np.random.randn(5), index=[0,1,2,2,4], columns=['b'])
```
```python
In [4]: pd.merge(a_int, b_int, left_index=True, right_index=True, how="outer")
Out[4]:
a b
0 0.0 0.0
1 1.0 1.0
2 2.0 2.0
2 2.0 3.0
3 3.0 NaN
3 4.0 NaN
4 NaN 4.0
In [89]: pd.concat([a_int, b_int], axis=1, join="outer")
Out[89]:
a b
0 0.0 0.0
1 1.0 1.0
2 2.0 2.0
2 2.0 3.0
3 3.0 NaN
3 4.0 NaN
4 NaN 4.0
```
#### But not for inner joins
```python
In [8]: pd.merge(a_int, b_int, left_index=True, right_index=True, how="inner")
Out[8]:
a b
0 0 0
1 1 1
2 2 2
2 2 3
In [9]: pd.concat([a_int, b_int], axis=1, join="inner")
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-9-03adb33c977d> in <module>
----> 1 pd.concat([a_int, b_int], axis=1, join="inner")
~/miniconda3/envs/pandas-dev/lib/python3.8/site-packages/pandas/core/reshape/concat.py in concat(objs, axis, join, ignore_index, keys, levels, names, verify_integrity, sort, copy)
297 )
298
--> 299 return op.get_result()
300
301
~/miniconda3/envs/pandas-dev/lib/python3.8/site-packages/pandas/core/reshape/concat.py in get_result(self)
526 mgrs_indexers.append((obj._mgr, indexers))
527
--> 528 new_data = concatenate_block_managers(
529 mgrs_indexers, self.new_axes, concat_axis=self.bm_axis, copy=self.copy
530 )
~/miniconda3/envs/pandas-dev/lib/python3.8/site-packages/pandas/core/internals/concat.py in concatenate_block_managers(mgrs_indexers, axes, concat_axis, copy)
87 blocks.append(b)
88
---> 89 return BlockManager(blocks, axes)
90
91
~/miniconda3/envs/pandas-dev/lib/python3.8/site-packages/pandas/core/internals/managers.py in __init__(self, blocks, axes, do_integrity_check)
141
142 if do_integrity_check:
--> 143 self._verify_integrity()
144
145 # Populate known_consolidate, blknos, and blklocs lazily
~/miniconda3/envs/pandas-dev/lib/python3.8/site-packages/pandas/core/internals/managers.py in _verify_integrity(self)
321 for block in self.blocks:
322 if block.shape[1:] != mgr_shape[1:]:
--> 323 raise construction_error(tot_items, block.shape[1:], self.axes)
324 if len(self.items) != tot_items:
325 raise AssertionError(
ValueError: Shape of passed values is (4, 2), indices imply (3, 2)
```
</details>
@ivirshup ahh i remember now. yeah handling duplicates is hard. so we can handle some of them. i am actually ok with raising on duplicates in either axis, but would have to see how much would break. | 2020-12-23T06:38:13Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-10-f61a1ab4009e>", line 1, in <module>
pd.concat([df1, df2])
...
File "c:\users\vdbosscj\scipy\pandas-joris\pandas\core\index.py", line 765, in take
taken = self.view(np.ndarray).take(indexer)
IndexError: index 3 is out of bounds for axis 0 with size 3
| 14,550 |
|||
pandas-dev/pandas | pandas-dev__pandas-38816 | 3d351ed1b9f48cac3a76aecb169903fcfa18e98d | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -20,6 +20,7 @@ Fixed regressions
- Bug in repr of float-like strings of an ``object`` dtype having trailing 0's truncated after the decimal (:issue:`38708`)
- Fixed regression in :meth:`DataFrame.groupby()` with :class:`Categorical` grouping column not showing unused categories for ``grouped.indices`` (:issue:`38642`)
- Fixed regression in :meth:`DataFrame.any` and :meth:`DataFrame.all` not returning a result for tz-aware ``datetime64`` columns (:issue:`38723`)
+- Fixed regression in :meth:`.GroupBy.sem` where the presence of non-numeric columns would cause an error instead of being dropped (:issue:`38774`)
- :func:`read_excel` does not work for non-rawbyte file handles (issue:`38788`)
- Bug in :meth:`read_csv` with ``float_precision="high"`` caused segfault or wrong parsing of long exponent strings (:issue:`38753`)
-
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1620,12 +1620,11 @@ def sem(self, ddof: int = 1):
         if result.ndim == 1:
             result /= np.sqrt(self.count())
         else:
-            cols = result.columns.get_indexer_for(
-                result.columns.difference(self.exclusions).unique()
-            )
-            result.iloc[:, cols] = result.iloc[:, cols] / np.sqrt(
-                self.count().iloc[:, cols]
-            )
+            cols = result.columns.difference(self.exclusions).unique()
+            counts = self.count()
+            result_ilocs = result.columns.get_indexer_for(cols)
+            count_ilocs = counts.columns.get_indexer_for(cols)
+            result.iloc[:, result_ilocs] /= np.sqrt(counts.iloc[:, count_ilocs])

         return result

     @final
| BUG: sem() with level raised ValueError in pandas 1.2
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
import pandas as pd
import numpy as np
idx = pd.MultiIndex.from_arrays([
[str(i) for i in range(100)], np.random.choice(['A', 'B'], size=(100,))
], names=['a', 'b'])
data_dict = dict((str(i), np.random.rand(100)) for i in range(10))
data_dict['string'] = [str(i) for i in range(100)]
data_dict['bool'] = np.random.choice([True, False], (100,))
data = pd.DataFrame(data_dict, index=idx)
data.sem(level=1)
```
#### Problem description
Unexpected exception raised:
```
Traceback (most recent call last):
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3417, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-8-cd9abe134148>", line 1, in <module>
data.sem(level=1)
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/generic.py", line 10925, in sem
return NDFrame.sem(self, axis, skipna, level, ddof, numeric_only, **kwargs)
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/generic.py", line 10665, in sem
return self._stat_function_ddof(
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/generic.py", line 10655, in _stat_function_ddof
return self._agg_by_level(
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/generic.py", line 10552, in _agg_by_level
return getattr(grouped, name)(**kwargs)
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/groupby/groupby.py", line 1612, in sem
result.iloc[:, cols] = result.iloc[:, cols] / np.sqrt(
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/indexing.py", line 691, in __setitem__
iloc._setitem_with_indexer(indexer, value, self.name)
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/indexing.py", line 1636, in _setitem_with_indexer
self._setitem_single_block(indexer, value, name)
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/indexing.py", line 1860, in _setitem_single_block
self.obj._mgr = self.obj._mgr.setitem(indexer=indexer, value=value)
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/internals/managers.py", line 568, in setitem
return self.apply("setitem", indexer=indexer, value=value)
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/internals/managers.py", line 427, in apply
applied = getattr(b, f)(**kwargs)
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/internals/blocks.py", line 1025, in setitem
values[indexer] = value
ValueError: shape mismatch: value array of shape (2,12) could not be broadcast to indexing result of shape (11,2)
```
#### Expected Output
Correct aggregation results shall be returned.
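For reference, the expected values are equivalent to std/sqrt(count) per group over the numeric columns only; a minimal sketch of that identity (toy data, not the full repro above):
```python
import numpy as np
import pandas as pd

idx = pd.MultiIndex.from_arrays(
    [["0", "1", "2", "3"], ["A", "B", "A", "B"]], names=["a", "b"]
)
df = pd.DataFrame({"x": [1.0, 2.0, 3.0, 4.0], "s": list("wxyz")}, index=idx)

grouped = df.select_dtypes(include="number").groupby(level="b")
expected = grouped.std(ddof=1) / np.sqrt(grouped.count())
print(expected)  # what df.sem(level="b") should return (the 's' column dropped)
```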
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : 3e89b4c4b1580aa890023fc550774e63d499da25
python : 3.8.3.final.0
python-bits : 64
OS : Darwin
OS-release : 19.6.0
Version : Darwin Kernel Version 19.6.0: Tue Nov 10 00:10:30 PST 2020; root:xnu-6153.141.10~1/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : None
LOCALE : zh_CN.UTF-8
pandas : 1.2.0
numpy : 1.19.1
pytz : 2020.4
dateutil : 2.8.1
pip : 20.2.2
setuptools : 49.6.0.post20200814
Cython : 0.29.21
pytest : 6.0.1
hypothesis : None
sphinx : 3.2.1
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.17.0
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : 0.4.2
gcsfs : None
matplotlib : 3.2.2
numexpr : 2.7.1
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 1.0.1
pyxlsb : None
s3fs : None
scipy : 1.5.0
sqlalchemy : 1.3.18
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : 0.51.2
</details>
| @wjsi Thanks a lot for the clear report. I can confirm it is a regression.
> I can confirm it is a regression.
first bad commit: [5c73d996a88e36ad61d50e994b4b724f14810b93] CLN: Remove .values from groupby.sem (#38044) cc @rhshadrach | 2020-12-30T15:40:27Z | [] | [] |
Traceback (most recent call last):
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3417, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-8-cd9abe134148>", line 1, in <module>
data.sem(level=1)
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/generic.py", line 10925, in sem
return NDFrame.sem(self, axis, skipna, level, ddof, numeric_only, **kwargs)
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/generic.py", line 10665, in sem
return self._stat_function_ddof(
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/generic.py", line 10655, in _stat_function_ddof
return self._agg_by_level(
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/generic.py", line 10552, in _agg_by_level
return getattr(grouped, name)(**kwargs)
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/groupby/groupby.py", line 1612, in sem
result.iloc[:, cols] = result.iloc[:, cols] / np.sqrt(
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/indexing.py", line 691, in __setitem__
iloc._setitem_with_indexer(indexer, value, self.name)
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/indexing.py", line 1636, in _setitem_with_indexer
self._setitem_single_block(indexer, value, name)
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/indexing.py", line 1860, in _setitem_single_block
self.obj._mgr = self.obj._mgr.setitem(indexer=indexer, value=value)
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/internals/managers.py", line 568, in setitem
return self.apply("setitem", indexer=indexer, value=value)
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/internals/managers.py", line 427, in apply
applied = getattr(b, f)(**kwargs)
File "/Users/wenjun/miniconda3/lib/python3.8/site-packages/pandas/core/internals/blocks.py", line 1025, in setitem
values[indexer] = value
ValueError: shape mismatch: value array of shape (2,12) could not be broadcast to indexing result of shape (11,2)
| 14,583 |
|||
pandas-dev/pandas | pandas-dev__pandas-38819 | 92bb0c12a76ac7a6cdc9aaca4139ffe1a9b26975 | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -20,6 +20,7 @@ Fixed regressions
- Bug in repr of float-like strings of an ``object`` dtype having trailing 0's truncated after the decimal (:issue:`38708`)
- Fixed regression in :meth:`DataFrame.groupby()` with :class:`Categorical` grouping column not showing unused categories for ``grouped.indices`` (:issue:`38642`)
- Fixed regression in :meth:`DataFrame.any` and :meth:`DataFrame.all` not returning a result for tz-aware ``datetime64`` columns (:issue:`38723`)
+- :func:`read_excel` does not work for non-rawbyte file handles (issue:`38788`)
- Bug in :meth:`read_csv` with ``float_precision="high"`` caused segfault or wrong parsing of long exponent strings (:issue:`38753`)
-
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -1051,16 +1051,11 @@ def __init__(
             xlrd_version = LooseVersion(xlrd.__version__)

-        if isinstance(path_or_buffer, (BufferedIOBase, RawIOBase, bytes)):
-            ext = inspect_excel_format(
-                content=path_or_buffer, storage_options=storage_options
-            )
-        elif xlrd_version is not None and isinstance(path_or_buffer, xlrd.Book):
+        if xlrd_version is not None and isinstance(path_or_buffer, xlrd.Book):
             ext = "xls"
         else:
-            # path_or_buffer is path-like, use stringified path
             ext = inspect_excel_format(
-                path=str(self._io), storage_options=storage_options
+                content=path_or_buffer, storage_options=storage_options
             )

         if engine is None:
| BUG: read_excel throws FileNotFoundError with s3fs objects
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [x] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample
I get a FileNotFoundError when running the following code:
```python
import pandas as pd
import s3fs
fs = s3fs.S3FileSystem()
with fs.open('s3://bucket_name/filename.xlsx') as f:
pd.read_excel(f)
# NOTE: pd.ExcelFile(f) throws same error
```
```python-traceback
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/home/airflow/GVIP/venv/lib/python3.8/site-packages/pandas/util/_decorators.py", line 299, in wrapper
return func(*args, **kwargs)
File "/home/airflow/GVIP/venv/lib/python3.8/site-packages/pandas/io/excel/_base.py", line 336, in read_excel
io = ExcelFile(io, storage_options=storage_options, engine=engine)
File "/home/airflow/GVIP/venv/lib/python3.8/site-packages/pandas/io/excel/_base.py", line 1062, in __init__
ext = inspect_excel_format(
File "/home/airflow/GVIP/venv/lib/python3.8/site-packages/pandas/io/excel/_base.py", line 938, in inspect_excel_format
with get_handle(
File "/home/airflow/GVIP/venv/lib/python3.8/site-packages/pandas/io/common.py", line 648, in get_handle
handle = open(handle, ioargs.mode)
FileNotFoundError: [Errno 2] No such file or directory: '<File-like object S3FileSystem, bucket_name/filename.xlsx>'
```
#### Problem description
I should be able to read in the File-like object from s3fs when using `pd.read_excel` or `pd.ExcelFile`. Pandas 1.1.x allows for this, but it looks like changes to `pd.io.common.get_handle` in 1.2 have made this impossible. The simple workaround for this is to just use the s3 URI instead of using `s3fs` to open it first, but to my knowledge, the ability to use `read_excel` with an s3fs object was not intended to be deprecated in 1.2.
#### My noob guess on what's going wrong
I'm new to contributing to open source projects, so I don't know exactly how to fix this, but it looks like the issue is that the `pd.io.common.get_handle` method in 1.2 thinks the s3fs object is a file handle rather than a file-like buffer. To solve this, I would think something similar to the [`need_text_wrapping` boolean option](https://github.com/pandas-dev/pandas/blob/1.1.x/pandas/io/common.py#L503) from the `get_handle` method in 1.1.x needs to be added to 1.2's `get_handle` in order to tell pandas that the s3fs object needs a TextIOWrapper rather than treating it like a local file handle.
If someone could give me a little guidance on how to fix this, I'd be happy to give my first open-source contribution a go, but if that's not really how this works, I understand.
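For what it's worth, pandas already exposes a "file-like" test of the sort described above, which is one way to tell the two cases apart (illustrative only, not the actual get_handle logic):
```python
import io
from pandas.api.types import is_file_like

print(is_file_like(io.BytesIO(b"abc")))  # True  -> use the buffer directly
print(is_file_like("bucket/file.xlsx"))  # False -> treat as a path to open()
```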
#### Expected Output
<class 'pandas.core.frame.DataFrame'>
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : 3e89b4c4b1580aa890023fc550774e63d499da25
python : 3.8.0.final.0
python-bits : 64
OS : Linux
OS-release : 4.4.0-197-generic
Version : #229-Ubuntu SMP Wed Nov 25 11:05:42 UTC 2020
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.2.0
numpy : 1.19.4
pytz : 2020.4
dateutil : 2.8.1
pip : 20.3.3
setuptools : 41.2.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : 0.8.5
fastparquet : None
gcsfs : None
matplotlib : 3.3.3
numexpr : None
odfpy : None
openpyxl : 3.0.5
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : 0.5.2
scipy : 1.5.4
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
</details>
| thank you for your report! Is there a public excel file on s3 so that I can test it quickly (edit: any public S3 file should be sufficient)? I assume that affects most `read/to_*` functions?
`get_handle` is supposed to work with strings/file objects/buffers. Your handle seems to be converted to a string at some point (probably something wrong in `stringify_path`?)
@twoertwein you can also test it with the mocked s3 filesystem used in the tests. I can reproduce the error with:
```patch
--- a/pandas/tests/io/excel/test_readers.py
+++ b/pandas/tests/io/excel/test_readers.py
@@ -645,7 +645,7 @@ class TestReaders:
         local_table = pd.read_excel("test1" + read_ext)
         tm.assert_frame_equal(url_table, local_table)

-    @td.skip_if_not_us_locale
     def test_read_from_s3_url(self, read_ext, s3_resource, s3so):
         # Bucket "pandas-test" created in tests/io/conftest.py
         with open("test1" + read_ext, "rb") as f:
@@ -657,6 +657,21 @@ class TestReaders:
         local_table = pd.read_excel("test1" + read_ext)
         tm.assert_frame_equal(url_table, local_table)

+    def test_read_from_s3fs_object(self, read_ext, s3_resource, s3so):
+        # Bucket "pandas-test" created in tests/io/conftest.py
+        with open("test1" + read_ext, "rb") as f:
+            s3_resource.Bucket("pandas-test").put_object(Key="test1" + read_ext, Body=f)
+
+        import s3fs
+        s3 = s3fs.S3FileSystem(**s3so)
+
+        with s3.open("s3://pandas-test/test1" + read_ext) as f:
+            url_table = pd.read_excel(f)
+
+        local_table = pd.read_excel("test1" + read_ext)
+        tm.assert_frame_equal(url_table, local_table)
+
```
`test_read_from_s3_url` passes for me locally, but the new `test_read_from_s3fs_object` fails
the issue is:
https://github.com/pandas-dev/pandas/blob/e85d0782d3f74093b5a639427753dab0f2464c80/pandas/io/excel/_base.py#L1063
this issue should be limited to excel | 2020-12-30T16:11:15Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/home/airflow/GVIP/venv/lib/python3.8/site-packages/pandas/util/_decorators.py", line 299, in wrapper
return func(*args, **kwargs)
File "/home/airflow/GVIP/venv/lib/python3.8/site-packages/pandas/io/excel/_base.py", line 336, in read_excel
io = ExcelFile(io, storage_options=storage_options, engine=engine)
File "/home/airflow/GVIP/venv/lib/python3.8/site-packages/pandas/io/excel/_base.py", line 1062, in __init__
ext = inspect_excel_format(
File "/home/airflow/GVIP/venv/lib/python3.8/site-packages/pandas/io/excel/_base.py", line 938, in inspect_excel_format
with get_handle(
File "/home/airflow/GVIP/venv/lib/python3.8/site-packages/pandas/io/common.py", line 648, in get_handle
handle = open(handle, ioargs.mode)
FileNotFoundError: [Errno 2] No such file or directory: '<File-like object S3FileSystem, bucket_name/filename.xlsx>'
| 14,584 |
|||
pandas-dev/pandas | pandas-dev__pandas-38997 | a67693249b8af029906e4beaf3e91806001ce30d | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -24,6 +24,7 @@ Fixed regressions
- Fixed regression in :func:`read_excel` with non-rawbyte file handles (:issue:`38788`)
- Bug in :meth:`read_csv` with ``float_precision="high"`` caused segfault or wrong parsing of long exponent strings. This resulted in a regression in some cases as the default for ``float_precision`` was changed in pandas 1.2.0 (:issue:`38753`)
- Fixed regression in :meth:`Rolling.skew` and :meth:`Rolling.kurt` modifying the object inplace (:issue:`38908`)
+- Fixed regression in :meth:`read_csv` and other read functions were the encoding error policy (``errors``) did not default to ``"replace"`` when no encoding was specified (:issue:`38989`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/io/common.py b/pandas/io/common.py
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -553,8 +553,7 @@ def get_handle(
     Returns the dataclass IOHandles
     """
     # Windows does not default to utf-8. Set to utf-8 for a consistent behavior
-    if encoding is None:
-        encoding = "utf-8"
+    encoding_passed, encoding = encoding, encoding or "utf-8"

     # read_csv does not know whether the buffer is opened in binary/text mode
     if _is_binary_mode(path_or_buf, mode) and "b" not in mode:
@@ -641,6 +640,9 @@ def get_handle(
     # Check whether the filename is to be opened in binary mode.
     # Binary mode does not support 'encoding' and 'newline'.
     if ioargs.encoding and "b" not in ioargs.mode:
+        if errors is None and encoding_passed is None:
+            # ignore errors when no encoding is specified
+            errors = "replace"
         # Encoding
         handle = open(
             handle,
| BUG: read_csv raising when null bytes are in skipped rows
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [x] (optional) I have confirmed this bug exists on the master branch of pandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
I have a few weird csv files with null bytes in the first row. Since 1.2.0 they raise even if I skip the first row:
```python
pd.read_csv("test.csv", engine="python", skiprows=[0])
pd.read_csv("test.csv", engine="c", skiprows=[0])
```
Could not get this to fail with an example created in code.
#### Problem description
Both engines raise:
```
Traceback (most recent call last):
File "/home/developer/.config/JetBrains/PyCharm2020.3/scratches/scratch_4.py", line 377, in <module>
pd.read_csv("/media/sf_Austausch/test.csv", engine="c", skiprows=[0], nrows=2)
File "/home/developer/PycharmProjects/pandas/pandas/io/parsers.py", line 605, in read_csv
return _read(filepath_or_buffer, kwds)
File "/home/developer/PycharmProjects/pandas/pandas/io/parsers.py", line 457, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "/home/developer/PycharmProjects/pandas/pandas/io/parsers.py", line 814, in __init__
self._engine = self._make_engine(self.engine)
File "/home/developer/PycharmProjects/pandas/pandas/io/parsers.py", line 1045, in _make_engine
return mapping[engine](self.f, **self.options) # type: ignore[call-arg]
File "/home/developer/PycharmProjects/pandas/pandas/io/parsers.py", line 1894, in __init__
self._reader = parsers.TextReader(self.handles.handle, **kwds)
File "pandas/_libs/parsers.pyx", line 517, in pandas._libs.parsers.TextReader.__cinit__
File "pandas/_libs/parsers.pyx", line 619, in pandas._libs.parsers.TextReader._get_header
File "pandas/_libs/parsers.pyx", line 813, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 1942, in pandas._libs.parsers.raise_parser_error
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe4 in position 8: invalid continuation byte
Process finished with exit code 1
```
This worked on 1.1.5.
The relevant changes are:
https://github.com/pandas-dev/pandas/blob/9b16b1e1ee049b042a7d59cddd8fbd913137223f/pandas/io/common.py#L556:L557
which now infers an encoding, leading to
https://github.com/pandas-dev/pandas/blob/9b16b1e1ee049b042a7d59cddd8fbd913137223f/pandas/io/common.py#L643:L651
where ``errors`` is always strict for read_csv. On 1.1.5, when no encoding was given, ``errors`` was set to ``replace``, which made this case work. Was this an intended change? Labeling as a regression for now.
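The difference between the two decoding policies, distilled to standalone Python (no pandas involved):
```python
raw = b"skip me \xe4\nix,a,b\n0,1,2\n"  # first line contains an invalid UTF-8 byte

print(raw.decode("utf-8", errors="replace").splitlines()[0])  # 'skip me \ufffd'

try:
    raw.decode("utf-8", errors="strict")  # the 1.2.0 behaviour: hard failure
except UnicodeDecodeError as err:
    print(err)
```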
#### Expected Output
```
! xyz 0 1
0 * 2000 1 2
1 * 2001 0 0
```
cc @gfyoung @twoertwein
This was caused by #36997
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : cd0224d26c6050f0e638861b9e557086673c14c1
python : 3.8.6.final.0
python-bits : 64
OS : Linux
OS-release : 5.4.0-58-generic
Version : #64-Ubuntu SMP Wed Dec 9 08:16:25 UTC 2020
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.3.0.dev0+356.ge0cb09e917
numpy : 1.19.2
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.3
setuptools : 49.6.0.post20201009
Cython : 0.29.21
pytest : 6.1.1
hypothesis : 5.37.1
sphinx : 3.2.1
blosc : None
feather : None
xlsxwriter : 1.3.7
lxml.etree : 4.5.2
html5lib : 1.1
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.18.1
pandas_datareader: None
bs4 : 4.9.3
bottleneck : 1.3.2
fsspec : 0.8.3
fastparquet : 0.4.1
gcsfs : 0.7.1
matplotlib : 3.3.2
numexpr : 2.7.1
odfpy : None
openpyxl : 3.0.5
pandas_gbq : None
pyarrow : 2.0.0
pyxlsb : None
s3fs : 0.4.2
scipy : 1.5.2
sqlalchemy : 1.3.20
tables : 3.6.1
tabulate : 0.8.7
xarray : 0.16.1
xlrd : 1.2.0
xlwt : 1.3.0
numba : 0.51.2
</details>
| > where errors is always strict for read_csv. On 1.1.5 in case of no encoding given, errors was set to replace, which caused this to work. Was this an intended change?
Thank you @phofl for already narrowing it down :) That wasn't an intentional change.
I'm happy to make a PR to restore the old behavior. Which behavior is preferred?
I would say errors=replace when encoding is set to utf-8? That would be in line with the previous behavior.
Agreed this is a regression if that's the case, and restoring behavior is the right call. | 2021-01-06T06:02:23Z | [] | [] |
Traceback (most recent call last):
File "/home/developer/.config/JetBrains/PyCharm2020.3/scratches/scratch_4.py", line 377, in <module>
pd.read_csv("/media/sf_Austausch/test.csv", engine="c", skiprows=[0], nrows=2)
File "/home/developer/PycharmProjects/pandas/pandas/io/parsers.py", line 605, in read_csv
return _read(filepath_or_buffer, kwds)
File "/home/developer/PycharmProjects/pandas/pandas/io/parsers.py", line 457, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "/home/developer/PycharmProjects/pandas/pandas/io/parsers.py", line 814, in __init__
self._engine = self._make_engine(self.engine)
File "/home/developer/PycharmProjects/pandas/pandas/io/parsers.py", line 1045, in _make_engine
return mapping[engine](self.f, **self.options) # type: ignore[call-arg]
File "/home/developer/PycharmProjects/pandas/pandas/io/parsers.py", line 1894, in __init__
self._reader = parsers.TextReader(self.handles.handle, **kwds)
File "pandas/_libs/parsers.pyx", line 517, in pandas._libs.parsers.TextReader.__cinit__
File "pandas/_libs/parsers.pyx", line 619, in pandas._libs.parsers.TextReader._get_header
File "pandas/_libs/parsers.pyx", line 813, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 1942, in pandas._libs.parsers.raise_parser_error
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe4 in position 8: invalid continuation byte
| 14,606 |
|||
pandas-dev/pandas | pandas-dev__pandas-39046 | 8fe3dc628c8702172ecaefa9ca62eca5fc42f108 | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -192,6 +192,7 @@ Categorical
- Bug in ``CategoricalIndex.reindex`` failed when ``Index`` passed with elements all in category (:issue:`28690`)
- Bug where constructing a :class:`Categorical` from an object-dtype array of ``date`` objects did not round-trip correctly with ``astype`` (:issue:`38552`)
- Bug in constructing a :class:`DataFrame` from an ``ndarray`` and a :class:`CategoricalDtype` (:issue:`38857`)
+- Bug in :meth:`DataFrame.reindex` was throwing ``IndexError`` when new index contained duplicates and old index was :class:`CategoricalIndex` (:issue:`38906`)
Datetimelike
^^^^^^^^^^^^
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -3730,8 +3730,13 @@ def _reindex_non_unique(self, target):
         new_labels[cur_indexer] = cur_labels
         new_labels[missing_indexer] = missing_labels

+        # GH#38906
+        if not len(self):
+
+            new_indexer = np.arange(0)
+
         # a unique indexer
-        if target.is_unique:
+        elif target.is_unique:

             # see GH5553, make sure we use the right indexer
             new_indexer = np.arange(len(indexer))
| BUG: reindexing empty CategoricalIndex fails if target contains duplicates
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas. (Tested version 1.2.0)
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
This fails:
```python
pd.DataFrame(columns=pd.CategoricalIndex([]), index=['K']).reindex(columns=pd.CategoricalIndex(['A', 'A']))
```
But these succeed:
```python
pd.DataFrame(columns=pd.Index([]), index=['K']).reindex(columns=pd.CategoricalIndex(['A', 'A']))
pd.DataFrame(columns=pd.CategoricalIndex([]), index=['K']).reindex(columns=pd.CategoricalIndex(['A', 'B']))
pd.DataFrame(columns=pd.CategoricalIndex([]), index=['K']).reindex(columns=pd.CategoricalIndex([]))
```
The error is:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\mboling\Anaconda3\envs\pandastest\lib\site-packages\pandas\util\_decorators.py", line 312, in wrapper
return func(*args, **kwargs)
File "C:\Users\mboling\Anaconda3\envs\pandastest\lib\site-packages\pandas\core\frame.py", line 4173, in reindex
return super().reindex(**kwargs)
File "C:\Users\mboling\Anaconda3\envs\pandastest\lib\site-packages\pandas\core\generic.py", line 4806, in reindex
return self._reindex_axes(
File "C:\Users\mboling\Anaconda3\envs\pandastest\lib\site-packages\pandas\core\frame.py", line 4013, in _reindex_axes
frame = frame._reindex_columns(
File "C:\Users\mboling\Anaconda3\envs\pandastest\lib\site-packages\pandas\core\frame.py", line 4055, in _reindex_columns
new_columns, indexer = self.columns.reindex(
File "C:\Users\mboling\Anaconda3\envs\pandastest\lib\site-packages\pandas\core\indexes\category.py", line 448, in reindex
new_target, indexer, _ = result._reindex_non_unique(np.array(target))
File "C:\Users\mboling\Anaconda3\envs\pandastest\lib\site-packages\pandas\core\indexes\base.py", line 3589, in _reindex_non_unique
new_indexer = np.arange(len(self.take(indexer)))
File "C:\Users\mboling\Anaconda3\envs\pandastest\lib\site-packages\pandas\core\indexes\base.py", line 751, in take
taken = algos.take(
File "C:\Users\mboling\Anaconda3\envs\pandastest\lib\site-packages\pandas\core\algorithms.py", line 1657, in take
result = arr.take(indices, axis=axis)
IndexError: cannot do a non-empty take from an empty axes.
```
#### Problem description
It is unexpected that `CategoricalIndex` behaves differently from `Index` in this regard. A problem similar to this was already reported and solved in #16770, but it looks like there is a remaining bug in the edge case where the target index contains duplicates.
#### Expected Output
The failing code should return a dataframe with two columns and one row.
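Concretely, the expected frame mirrors what the plain-Index variant already returns:
```python
import pandas as pd

# the working plain-Index analogue of the failing call above
result = pd.DataFrame(columns=pd.Index([]), index=["K"]).reindex(
    columns=pd.CategoricalIndex(["A", "A"])
)
print(result)
#      A   A
# K  NaN NaN
```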
#### Output of ``pd.show_versions()``
<details>
pd.show_versions() fails because I don't have numba installed in this environment and it's currently impossible to install it simultaneously with Pandas 1.2.0, so this is the output of "conda list" instead:
blas 1.0 mkl
bottleneck 1.3.2 py39h7cc1a96_1
ca-certificates 2020.12.8 haa95532_0
certifi 2020.12.5 py39haa95532_0
et_xmlfile 1.0.1 py_1001
icc_rt 2019.0.0 h0cc432a_1
intel-openmp 2020.3 h57928b3_311 conda-forge
jdcal 1.4.1 py_0
libblas 3.9.0 5_mkl conda-forge
libcblas 3.9.0 5_mkl conda-forge
liblapack 3.9.0 5_mkl conda-forge
mkl 2020.4 hb70f87d_311 conda-forge
mkl-service 2.3.0 py39h196d8e1_0
numpy 1.19.4 py39h6635163_1 conda-forge
openpyxl 3.0.5 py_0
openssl 1.1.1i h2bbff1b_0
pandas 1.2.0 py39h2e25243_0 conda-forge
pip 20.3.3 pyhd8ed1ab_0 conda-forge
python 3.9.1 h7840368_2_cpython conda-forge
python-dateutil 2.8.1 py_0 conda-forge
python_abi 3.9 1_cp39 conda-forge
pytz 2020.5 pyhd8ed1ab_0 conda-forge
scipy 1.5.2 py39h14eb087_0
setuptools 49.6.0 py39h467e6f4_2 conda-forge
six 1.15.0 pyh9f0ad1d_0 conda-forge
sqlite 3.34.0 h8ffe710_0 conda-forge
tzdata 2020f he74cb21_0 conda-forge
vc 14.2 hb210afc_2 conda-forge
vs2015_runtime 14.28.29325 h5e1d092_0 conda-forge
wheel 0.36.2 pyhd3deb0d_0 conda-forge
wincertstore 0.2 py39hde42818_1005 conda-forge
xlrd 2.0.1 pyhd3eb1b0_0
</details>
| Thanks @batterseapower, can confirm this reproduces | 2021-01-08T23:44:42Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\mboling\Anaconda3\envs\pandastest\lib\site-packages\pandas\util\_decorators.py", line 312, in wrapper
return func(*args, **kwargs)
File "C:\Users\mboling\Anaconda3\envs\pandastest\lib\site-packages\pandas\core\frame.py", line 4173, in reindex
return super().reindex(**kwargs)
File "C:\Users\mboling\Anaconda3\envs\pandastest\lib\site-packages\pandas\core\generic.py", line 4806, in reindex
return self._reindex_axes(
File "C:\Users\mboling\Anaconda3\envs\pandastest\lib\site-packages\pandas\core\frame.py", line 4013, in _reindex_axes
frame = frame._reindex_columns(
File "C:\Users\mboling\Anaconda3\envs\pandastest\lib\site-packages\pandas\core\frame.py", line 4055, in _reindex_columns
new_columns, indexer = self.columns.reindex(
File "C:\Users\mboling\Anaconda3\envs\pandastest\lib\site-packages\pandas\core\indexes\category.py", line 448, in reindex
new_target, indexer, _ = result._reindex_non_unique(np.array(target))
File "C:\Users\mboling\Anaconda3\envs\pandastest\lib\site-packages\pandas\core\indexes\base.py", line 3589, in _reindex_non_unique
new_indexer = np.arange(len(self.take(indexer)))
File "C:\Users\mboling\Anaconda3\envs\pandastest\lib\site-packages\pandas\core\indexes\base.py", line 751, in take
taken = algos.take(
File "C:\Users\mboling\Anaconda3\envs\pandastest\lib\site-packages\pandas\core\algorithms.py", line 1657, in take
result = arr.take(indices, axis=axis)
IndexError: cannot do a non-empty take from an empty axes.
| 14,621 |
|||
pandas-dev/pandas | pandas-dev__pandas-39253 | 6ff2e7c602123787c3b0061466ab5bb8663eae81 | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -15,6 +15,7 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
- Fixed regression in :meth:`~DataFrame.to_csv` that created corrupted zip files when there were more rows than ``chunksize`` (:issue:`38714`)
+- Fixed regression in :meth:`~DataFrame.to_csv` opening ``codecs.StreamReaderWriter`` in binary mode instead of in text mode (:issue:`39247`)
- Fixed regression in :meth:`read_csv` and other read functions were the encoding error policy (``errors``) did not default to ``"replace"`` when no encoding was specified (:issue:`38989`)
- Fixed regression in :func:`read_excel` with non-rawbyte file handles (:issue:`38788`)
- Fixed regression in :meth:`DataFrame.to_stata` not removing the created file when an error occured (:issue:`39202`)
diff --git a/pandas/io/common.py b/pandas/io/common.py
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -2,6 +2,7 @@
from __future__ import annotations
import bz2
+import codecs
from collections import abc
import dataclasses
import gzip
@@ -857,9 +858,12 @@ def file_exists(filepath_or_buffer: FilePathOrBuffer) -> bool:
def _is_binary_mode(handle: FilePathOrBuffer, mode: str) -> bool:
"""Whether the handle is opened in binary mode"""
+ # classes that expect string but have 'b' in mode
+ text_classes = (codecs.StreamReaderWriter,)
+ if isinstance(handle, text_classes):
+ return False
+
# classes that expect bytes
- binary_classes = [BufferedIOBase, RawIOBase]
+ binary_classes = (BufferedIOBase, RawIOBase)
- return isinstance(handle, tuple(binary_classes)) or "b" in getattr(
- handle, "mode", mode
- )
+ return isinstance(handle, binary_classes) or "b" in getattr(handle, "mode", mode)
| BUG: V1.2 DataFrame.to_csv() fails to write a file with codecs
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
```python
#! /usr/bin/env python
# -*- coding: utf-8 -*-
import pandas as pd
import codecs
x = [ 1, 2, 3, 4, 5 ]
y = [ 6, 7, 8, 9, 10 ]
z = [ 'a', 'b', 'c', 'd', 'e' ]
data = { "X": x, "Y":y, "Z":z }
df = pd.DataFrame( data, columns=[ "X", "Y", "Z" ] )
print( "Pandas version = %s" % pd.__version__ )
print(df)
fp = codecs.open( "out-testPD12.csv", "w", "utf-8" )
fp.write( "Pandas version = %s\n" % pd.__version__ )
df.to_csv( fp, index=False, header=True )
fp.close()
```
#### Problem description
When saving the file, a `TypeError: utf_8_encode() argument 1 must be str, not bytes` is raised.
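A small check (my addition, not part of the original report) showing why the handle gets misclassified: `codecs.open` always opens the underlying file in binary mode, so the wrapper's `mode` attribute contains `'b'` even though its `write()` expects `str`.
```python
import codecs

# The StreamReaderWriter returned by codecs.open reports the underlying
# file's binary mode, which trips pandas 1.2's '"b" in mode' heuristic.
fp = codecs.open("out-testPD12.csv", "w", "utf-8")
print(fp.mode)  # 'wb' -- yet fp.write() accepts str, not bytes
fp.close()
```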
#### Expected Output
The output below has been obtained by downgrading and pinning Pandas to V1.1.5.
V1.1.2 has also been tested OK.
```
Pandas version = 1.1.5
X,Y,Z
1,6,a
2,7,b
3,8,c
4,9,d
5,10,e
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : 3e89b4c4b1580aa890023fc550774e63d499da25
python : 3.8.2.final.0
python-bits : 64
OS : Darwin
OS-release : 19.6.0
Version : Darwin Kernel Version 19.6.0: Tue Nov 10 00:10:30 PST 2020; root:xnu-6153.141.10~1/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.UTF-8
pandas : 1.2.0
numpy : 1.19.2
pytz : 2020.5
dateutil : 2.8.1
pip : 20.3.3
setuptools : 51.1.2.post20210112
Cython : 0.29.21
pytest : 6.2.1
hypothesis : None
sphinx : 3.4.3
blosc : None
feather : None
xlsxwriter : 1.3.7
lxml.etree : 4.6.2
html5lib : 1.1
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.19.0
pandas_datareader: None
bs4 : 4.9.3
bottleneck : 1.3.2
fsspec : 0.8.3
fastparquet : None
gcsfs : None
matplotlib : 3.3.2
numexpr : 2.7.2
odfpy : None
openpyxl : 3.0.6
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : None
scipy : 1.5.2
sqlalchemy : 1.3.21
tables : 3.6.1
tabulate : None
xarray : None
xlrd : 2.0.1
xlwt : 1.3.0
numba : 0.51.2
```
(base) Catalina1{kazzz-s} temp (1)% python testPD12.py
Pandas version = 1.2.0
X Y Z
0 1 6 a
1 2 7 b
2 3 8 c
3 4 9 d
4 5 10 e
Traceback (most recent call last):
File "testPD12.py", line 52, in <module>
df.to_csv( fp, index=False, header=True )
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/site-packages/pandas/core/generic.py", line 3384, in to_csv
return DataFrameRenderer(formatter).to_csv(
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/site-packages/pandas/io/formats/format.py", line 1083, in to_csv
csv_formatter.save()
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/site-packages/pandas/io/formats/csvs.py", line 248, in save
self._save()
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/site-packages/pandas/io/common.py", line 104, in __exit__
self.close()
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/site-packages/pandas/io/common.py", line 89, in close
self.handle.flush()
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/codecs.py", line 721, in write
return self.writer.write(data)
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/codecs.py", line 377, in write
data, consumed = self.encode(object, self.errors)
TypeError: utf_8_encode() argument 1 must be str, not bytes
```
</details>
| thank you for the report and your minimal examples! Yes, that is a regression.
Unfortunately it seems that `codecs.open( ..., mode="w", encoding="utf-8" ).mode` is `"wb"`. So pandas 1.2 tries to write bytes to it. Interestingly `codecs.open( ..., mode="w" ).mode` is `"w"` and should therefore work with pandas 1.2. | 2021-01-18T15:38:54Z | [] | [] |
Traceback (most recent call last):
File "testPD12.py", line 52, in <module>
df.to_csv( fp, index=False, header=True )
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/site-packages/pandas/core/generic.py", line 3384, in to_csv
return DataFrameRenderer(formatter).to_csv(
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/site-packages/pandas/io/formats/format.py", line 1083, in to_csv
csv_formatter.save()
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/site-packages/pandas/io/formats/csvs.py", line 248, in save
self._save()
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/site-packages/pandas/io/common.py", line 104, in __exit__
self.close()
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/site-packages/pandas/io/common.py", line 89, in close
self.handle.flush()
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/codecs.py", line 721, in write
return self.writer.write(data)
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/codecs.py", line 377, in write
data, consumed = self.encode(object, self.errors)
TypeError: utf_8_encode() argument 1 must be str, not bytes
| 14,654 |
|||
pandas-dev/pandas | pandas-dev__pandas-39278 | 626448d2d829012d600ca824d17012cb1b26aefe | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -276,6 +276,7 @@ Indexing
- Bug in :meth:`DataFrame.loc`, :meth:`Series.loc`, :meth:`DataFrame.__getitem__` and :meth:`Series.__getitem__` returning incorrect elements for non-monotonic :class:`DatetimeIndex` for string slices (:issue:`33146`)
- Bug in :meth:`DataFrame.reindex` and :meth:`Series.reindex` with timezone aware indexes raising ``TypeError`` for ``method="ffill"`` and ``method="bfill"`` and specified ``tolerance`` (:issue:`38566`)
- Bug in :meth:`DataFrame.__setitem__` raising ``ValueError`` with empty :class:`DataFrame` and specified columns for string indexer and non empty :class:`DataFrame` to set (:issue:`38831`)
+- Bug in :meth:`DataFrame.loc.__setitem__` raising ValueError when expanding unique column for :class:`DataFrame` with duplicate columns (:issue:`38521`)
- Bug in :meth:`DataFrame.iloc.__setitem__` and :meth:`DataFrame.loc.__setitem__` with mixed dtypes when setting with a dictionary value (:issue:`38335`)
- Bug in :meth:`DataFrame.loc` dropping levels of :class:`MultiIndex` when :class:`DataFrame` used as input has only one row (:issue:`10521`)
- Bug in setting ``timedelta64`` values into numeric :class:`Series` failing to cast to object dtype (:issue:`39086`)
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1850,10 +1850,11 @@ def _setitem_single_block(self, indexer, value, name: str):
for i, idx in enumerate(indexer)
if i != info_axis
)
- and item_labels.is_unique
):
- self.obj[item_labels[indexer[info_axis]]] = value
- return
+ selected_item_labels = item_labels[indexer[info_axis]]
+ if len(item_labels.get_indexer_for([selected_item_labels])) == 1:
+ self.obj[selected_item_labels] = value
+ return
indexer = maybe_convert_ix(*indexer)
if (isinstance(value, ABCSeries) and name != "iloc") or isinstance(value, dict):
| BUG: Setting values to slice fails with duplicated column name
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [x] (optional) I have confirmed this bug exists on the master branch of pandas.
---
Originally posted on [StackOverflow](https://stackoverflow.com/questions/65255166/interesting-results-with-duplicate-columns-in-pandas-dataframe).
Possibly related to #15695 (the traceback looks different though)
---
#### Code Sample, a copy-pastable example
```python
import pandas as pd
df = pd.DataFrame(columns=['a', 'b', 'b'])
df.loc[:, 'a'] = list(range(5)) # raise ValueError
```
Traceback:
```python
Traceback (most recent call last):
File "c:\Users\leona\pandas\main.py", line 3, in <module>
df.loc[:, 'a'] = list(range(5))
File "c:\Users\leona\pandas\pandas\core\indexing.py", line 691, in __setitem__
iloc._setitem_with_indexer(indexer, value, self.name)
File "c:\Users\leona\pandas\pandas\core\indexing.py", line 1636, in _setitem_with_indexer
self._setitem_single_block(indexer, value, name)
File "c:\Users\leona\pandas\pandas\core\indexing.py", line 1862, in _setitem_single_block
self.obj._mgr = self.obj._mgr.setitem(indexer=indexer, value=value)
File "c:\Users\leona\pandas\pandas\core\internals\managers.py", line 565, in setitem
return self.apply("setitem", indexer=indexer, value=value)
File "c:\Users\leona\pandas\pandas\core\internals\managers.py", line 428, in apply
applied = getattr(b, f)(**kwargs)
File "c:\Users\leon\pandas\pandas\core\internals\blocks.py", line 1022, in setitem
values[indexer] = value
ValueError: cannot copy sequence with size 5 to array axis with dimension 0
```
#### Problem description
It works when there is no duplicated column:
```python
df = pd.DataFrame(columns=['a', 'b', 'c'])
df.loc[:, 'a'] = list(range(5))
```
These work even with duplicated column names:
```python
df = pd.DataFrame(columns=['a', 'b', 'b'])
df['a'] = list(range(5)) # Same as expected output below
df = pd.DataFrame(columns=['a', 'b', 'b'])
df.a = list(range(5)) # Same as expected output below
```
Setting on new column name is okay:
```python
df = pd.DataFrame(columns=['a', 'b', 'b'])
df.loc[:, 'c'] = list(range(5))
# a b b c
# 0 NaN NaN NaN 0
# 1 NaN NaN NaN 1
# 2 NaN NaN NaN 2
# 3 NaN NaN NaN 3
# 4 NaN NaN NaN 4
```
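To connect this repro to the eventual fix (the patch above), here is a small illustration, my addition, of the uniqueness check it performs: setting is safe whenever the selected label itself maps to a single position, even if the overall column index has duplicates.
```python
import pandas as pd

cols = pd.Index(['a', 'b', 'b'])
# 'a' resolves to exactly one position, so df.loc[:, 'a'] can safely
# delegate to df['a']; 'b' resolves to two positions.
print(len(cols.get_indexer_for(['a'])))  # 1
print(len(cols.get_indexer_for(['b'])))  # 2
```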
#### Expected Output
```
a b b
0 0 NaN NaN
1 1 NaN NaN
2 2 NaN NaN
3 3 NaN NaN
4 4 NaN NaN
```
#### Output of ``pd.show_versions()``
<details><summary>Output:</summary>
INSTALLED VERSIONS
------------------
commit : 122d50246bcffcf8c3f252146340ac02676a5bf6
python : 3.9.0.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.18362
machine : AMD64
processor : Intel64 Family 6 Model 142 Stepping 11, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : English_Singapore.1252
pandas : 1.3.0.dev0+83.g122d50246.dirty
numpy : 1.19.4
pytz : 2020.4
dateutil : 2.8.1
pip : 20.2.3
setuptools : 49.2.1
Cython : 0.29.21
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
</details>
| Thanks for the report. Further investigations and PRs to fix are most welcome. | 2021-01-19T20:36:45Z | [] | [] |
Traceback (most recent call last):
File "c:\Users\leona\pandas\main.py", line 3, in <module>
df.loc[:, 'a'] = list(range(5))
File "c:\Users\leona\pandas\pandas\core\indexing.py", line 691, in __setitem__
iloc._setitem_with_indexer(indexer, value, self.name)
File "c:\Users\leona\pandas\pandas\core\indexing.py", line 1636, in _setitem_with_indexer
self._setitem_single_block(indexer, value, name)
File "c:\Users\leona\pandas\pandas\core\indexing.py", line 1862, in _setitem_single_block
self.obj._mgr = self.obj._mgr.setitem(indexer=indexer, value=value)
File "c:\Users\leona\pandas\pandas\core\internals\managers.py", line 565, in setitem
return self.apply("setitem", indexer=indexer, value=value)
File "c:\Users\leona\pandas\pandas\core\internals\managers.py", line 428, in apply
applied = getattr(b, f)(**kwargs)
File "c:\Users\leon\pandas\pandas\core\internals\blocks.py", line 1022, in setitem
values[indexer] = value
ValueError: cannot copy sequence with size 5 to array axis with dimension 0
| 14,658 |
|||
pandas-dev/pandas | pandas-dev__pandas-39308 | 7e4d331cd1c03d36471b21451fc0fc760bc7153f | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -233,6 +233,7 @@ Datetimelike
- Bug in :meth:`DatetimeIndex.intersection`, :meth:`DatetimeIndex.symmetric_difference`, :meth:`PeriodIndex.intersection`, :meth:`PeriodIndex.symmetric_difference` always returning object-dtype when operating with :class:`CategoricalIndex` (:issue:`38741`)
- Bug in :meth:`Series.where` incorrectly casting ``datetime64`` values to ``int64`` (:issue:`37682`)
- Bug in :class:`Categorical` incorrectly typecasting ``datetime`` object to ``Timestamp`` (:issue:`38878`)
+- Bug in :func:`date_range` incorrectly creating :class:`DatetimeIndex` containing ``NaT`` instead of raising ``OutOfBoundsDatetime`` in corner cases (:issue:`24124`)
Timedelta
^^^^^^^^^
diff --git a/pandas/core/arrays/_ranges.py b/pandas/core/arrays/_ranges.py
--- a/pandas/core/arrays/_ranges.py
+++ b/pandas/core/arrays/_ranges.py
@@ -7,7 +7,13 @@
import numpy as np
-from pandas._libs.tslibs import BaseOffset, OutOfBoundsDatetime, Timedelta, Timestamp
+from pandas._libs.tslibs import (
+ BaseOffset,
+ OutOfBoundsDatetime,
+ Timedelta,
+ Timestamp,
+ iNaT,
+)
def generate_regular_range(
@@ -150,7 +156,12 @@ def _generate_range_overflow_safe_signed(
addend = np.int64(periods) * np.int64(stride)
try:
# easy case with no overflows
- return np.int64(endpoint) + addend
+ result = np.int64(endpoint) + addend
+ if result == iNaT:
+ # Putting this into a DatetimeArray/TimedeltaArray
+ # would incorrectly be interpreted as NaT
+ raise OverflowError
+ return result
except (FloatingPointError, OverflowError):
# with endpoint negative and addend positive we risk
# FloatingPointError; with reversed signed we risk OverflowError
| DatetimeIndex and Timestamp have different implementation limits
`Timestamp`'s minimum value is bounded away from `np.iinfo(np.int64).min` "to allow overflow free conversion with a microsecond resolution", but `DatetimeIndex` is not:
```
>>> dti = pd.date_range(end=pd.Timestamp.min, periods=2, freq='ns')
>>> dti
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/core/base.py", line 77, in __repr__
return str(self)
File "pandas/core/base.py", line 57, in __str__
return self.__bytes__()
File "pandas/core/base.py", line 69, in __bytes__
return self.__unicode__().encode(encoding, 'replace')
File "pandas/core/indexes/base.py", line 927, in __unicode__
data = self._format_data()
File "pandas/core/indexes/base.py", line 970, in _format_data
is_justify=is_justify, name=name)
File "pandas/io/formats/printing.py", line 348, in format_object_summary
first = formatter(obj[0])
File "pandas/core/arrays/datetimelike.py", line 333, in __getitem__
return self._box_func(val)
File "pandas/core/arrays/datetimes.py", line 327, in <lambda>
return lambda x: Timestamp(x, freq=self.freq, tz=self.tz)
File "pandas/_libs/tslibs/timestamps.pyx", line 736, in pandas._libs.tslibs.timestamps.Timestamp.__new__
ts = convert_to_tsobject(ts_input, tz, unit, 0, 0, nanosecond or 0)
File "pandas/_libs/tslibs/conversion.pyx", line 324, in pandas._libs.tslibs.conversion.convert_to_tsobject
check_dts_bounds(&obj.dts)
File "pandas/_libs/tslibs/np_datetime.pyx", line 120, in pandas._libs.tslibs.np_datetime.check_dts_bounds
raise OutOfBoundsDatetime(
pandas._libs.tslibs.np_datetime.OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 1677-09-21 00:12:43
```
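For illustration (my addition, not from the report): the raw `int64` minimum is reserved as the `NaT` sentinel, and `Timestamp.min` sits strictly above it, while the `date_range` call above produced a value in between.
```python
import numpy as np
import pandas as pd

# Timestamp.min sits strictly above the raw int64 minimum; the int64
# minimum itself is reserved as the NaT sentinel (iNaT).
print(pd.Timestamp.min.value > np.iinfo(np.int64).min)  # True
print(np.iinfo(np.int64).min)  # -9223372036854775808, i.e. iNaT
```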
| 2021-01-20T23:21:12Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/core/base.py", line 77, in __repr__
return str(self)
File "pandas/core/base.py", line 57, in __str__
return self.__bytes__()
File "pandas/core/base.py", line 69, in __bytes__
return self.__unicode__().encode(encoding, 'replace')
File "pandas/core/indexes/base.py", line 927, in __unicode__
data = self._format_data()
File "pandas/core/indexes/base.py", line 970, in _format_data
is_justify=is_justify, name=name)
File "pandas/io/formats/printing.py", line 348, in format_object_summary
first = formatter(obj[0])
File "pandas/core/arrays/datetimelike.py", line 333, in __getitem__
return self._box_func(val)
File "pandas/core/arrays/datetimes.py", line 327, in <lambda>
return lambda x: Timestamp(x, freq=self.freq, tz=self.tz)
File "pandas/_libs/tslibs/timestamps.pyx", line 736, in pandas._libs.tslibs.timestamps.Timestamp.__new__
ts = convert_to_tsobject(ts_input, tz, unit, 0, 0, nanosecond or 0)
File "pandas/_libs/tslibs/conversion.pyx", line 324, in pandas._libs.tslibs.conversion.convert_to_tsobject
check_dts_bounds(&obj.dts)
File "pandas/_libs/tslibs/np_datetime.pyx", line 120, in pandas._libs.tslibs.np_datetime.check_dts_bounds
raise OutOfBoundsDatetime(
pandas._libs.tslibs.np_datetime.OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 1677-09-21 00:12:43
| 14,663 |
||||
pandas-dev/pandas | pandas-dev__pandas-39326 | 7e531e3dfabd22fdf6669ea3b4caaf8e3b57cdd8 | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -345,6 +345,7 @@ Groupby/resample/rolling
^^^^^^^^^^^^^^^^^^^^^^^^
- Bug in :meth:`SeriesGroupBy.value_counts` where unobserved categories in a grouped categorical series were not tallied (:issue:`38672`)
+- Bug in :meth:`SeriesGroupBy.value_counts` where error was raised on an empty series (:issue:`39172`)
- Bug in :meth:`.GroupBy.indices` would contain non-existent indices when null values were present in the groupby keys (:issue:`9304`)
- Fixed bug in :meth:`DataFrameGroupBy.sum` and :meth:`SeriesGroupBy.sum` causing loss of precision through using Kahan summation (:issue:`38778`)
- Fixed bug in :meth:`DataFrameGroupBy.cumsum`, :meth:`SeriesGroupBy.cumsum`, :meth:`DataFrameGroupBy.mean` and :meth:`SeriesGroupBy.mean` causing loss of precision through using Kahan summation (:issue:`38934`)
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -730,11 +730,16 @@ def apply_series_value_counts():
ids, lab = ids[sorter], lab[sorter]
# group boundaries are where group ids change
- idx = np.r_[0, 1 + np.nonzero(ids[1:] != ids[:-1])[0]]
+ idchanges = 1 + np.nonzero(ids[1:] != ids[:-1])[0]
+ idx = np.r_[0, idchanges]
+ if not len(ids):
+ idx = idchanges
# new values are where sorted labels change
lchanges = llab(lab, slice(1, None)) != llab(lab, slice(None, -1))
inc = np.r_[True, lchanges]
+ if not len(lchanges):
+ inc = lchanges
inc[idx] = True # group boundaries are also new values
out = np.diff(np.nonzero(np.r_[inc, True])[0]) # value counts
| BUG: SeriesGroupBy.value_counts() raises on an empty Series
- [X] I have searched the [[pandas] tag](https://stackoverflow.com/questions/tagged/pandas) on StackOverflow for similar questions.
- [ ] I have asked my usage related question on [StackOverflow](https://stackoverflow.com).
---
#### Question about pandas
Using the `value_counts` method after grouping an empty DataFrame and selecting a column raises an error.
Example:
```python
>>> pd.DataFrame(columns=["A", "B"]).groupby("A")["B"].value_counts()
```
Error:
```python-traceback
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Users/username/.virtualenvs/my_project/lib/python3.8/site-packages/pandas/core/groupby/generic.py", line 736, in value_counts
codes = [rep(level_codes) for level_codes in codes] + [llab(lab, inc)]
File "/Users/username/.virtualenvs/my_project/lib/python3.8/site-packages/pandas/core/groupby/generic.py", line 705, in <lambda>
llab = lambda lab, inc: lab[inc]
IndexError: boolean index did not match indexed array along dimension 0; dimension is 0 but corresponding boolean dimension is 1
```
Instead, I would expect `value_counts` to return an empty series, as when using the `max` method, like the following:
```python
>>> pd.DataFrame(columns=["A", "B"]).groupby("A")["B"].max()
```
which gives this result:
```
Series([], Name: B, dtype: object)
```
If it is normal to have an error when using `value_counts` for an empty `SeriesGroupBy`, why is it so?
Thank you.
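Until this is fixed, one possible guard (my sketch, not from the thread) is:
```python
import pandas as pd

grouped = pd.DataFrame(columns=["A", "B"]).groupby("A")["B"]
try:
    counts = grouped.value_counts()
except IndexError:
    # Affected versions raise IndexError on empty input; fall back to
    # an empty result, mirroring what methods like .max() return.
    counts = pd.Series([], name="B", dtype="int64")
```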
| Thanks for the report. This looks like a bug to me. Further investigations and PRs to fix are welcome!
Take
That is the line that raises the error https://github.com/pandas-dev/pandas/blob/v1.2.1/pandas/core/groupby/generic.py#L736 | 2021-01-21T19:09:19Z | [] | [] |
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Users/username/.virtualenvs/my_project/lib/python3.8/site-packages/pandas/core/groupby/generic.py", line 736, in value_counts
codes = [rep(level_codes) for level_codes in codes] + [llab(lab, inc)]
File "/Users/username/.virtualenvs/my_project/lib/python3.8/site-packages/pandas/core/groupby/generic.py", line 705, in <lambda>
llab = lambda lab, inc: lab[inc]
IndexError: boolean index did not match indexed array along dimension 0; dimension is 0 but corresponding boolean dimension is 1
| 14,665 |
|||
pandas-dev/pandas | pandas-dev__pandas-39440 | bdfaea47d7e36b3c524f0bdb686d1ac10036597e | diff --git a/doc/source/whatsnew/v1.2.2.rst b/doc/source/whatsnew/v1.2.2.rst
--- a/doc/source/whatsnew/v1.2.2.rst
+++ b/doc/source/whatsnew/v1.2.2.rst
@@ -16,6 +16,7 @@ Fixed regressions
~~~~~~~~~~~~~~~~~
- Fixed regression in :meth:`~DataFrame.to_pickle` failing to create bz2/xz compressed pickle files with ``protocol=5`` (:issue:`39002`)
- Fixed regression in :func:`pandas.testing.assert_series_equal` and :func:`pandas.testing.assert_frame_equal` always raising ``AssertionError`` when comparing extension dtypes (:issue:`39410`)
+- Fixed regression in :meth:`~DataFrame.to_csv` opening ``codecs.StreamWriter`` in binary mode instead of in text mode and ignoring user-provided ``mode`` (:issue:`39247`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/io/common.py b/pandas/io/common.py
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -857,12 +857,15 @@ def file_exists(filepath_or_buffer: FilePathOrBuffer) -> bool:
def _is_binary_mode(handle: FilePathOrBuffer, mode: str) -> bool:
"""Whether the handle is opened in binary mode"""
+ # specified by user
+ if "t" in mode or "b" in mode:
+ return "b" in mode
+
# classes that expect string but have 'b' in mode
- text_classes = (codecs.StreamReaderWriter,)
- if isinstance(handle, text_classes):
+ text_classes = (codecs.StreamWriter, codecs.StreamReader, codecs.StreamReaderWriter)
+ if issubclass(type(handle), text_classes):
return False
# classes that expect bytes
binary_classes = (BufferedIOBase, RawIOBase)
-
return isinstance(handle, binary_classes) or "b" in getattr(handle, "mode", mode)
| BUG: V1.2 DataFrame.to_csv() fails to write a file with codecs
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
```python
#! /usr/bin/env python
# -*- coding: utf-8 -*-
import pandas as pd
import codecs
x = [ 1, 2, 3, 4, 5 ]
y = [ 6, 7, 8, 9, 10 ]
z = [ 'a', 'b', 'c', 'd', 'e' ]
data = { "X": x, "Y":y, "Z":z }
df = pd.DataFrame( data, columns=[ "X", "Y", "Z" ] )
print( "Pandas version = %s" % pd.__version__ )
print(df)
fp = codecs.open( "out-testPD12.csv", "w", "utf-8" )
fp.write( "Pandas version = %s\n" % pd.__version__ )
df.to_csv( fp, index=False, header=True )
fp.close()
```
#### Problem description
When saving the file, a `TypeError: utf_8_encode() argument 1 must be str, not bytes` is raised.
#### Expected Output
The output below has been obtained by downgrading and pinning Pandas to V1.1.5.
V1.1.2 has also been tested OK.
```
Pandas version = 1.1.5
X,Y,Z
1,6,a
2,7,b
3,8,c
4,9,d
5,10,e
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : 3e89b4c4b1580aa890023fc550774e63d499da25
python : 3.8.2.final.0
python-bits : 64
OS : Darwin
OS-release : 19.6.0
Version : Darwin Kernel Version 19.6.0: Tue Nov 10 00:10:30 PST 2020; root:xnu-6153.141.10~1/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.UTF-8
pandas : 1.2.0
numpy : 1.19.2
pytz : 2020.5
dateutil : 2.8.1
pip : 20.3.3
setuptools : 51.1.2.post20210112
Cython : 0.29.21
pytest : 6.2.1
hypothesis : None
sphinx : 3.4.3
blosc : None
feather : None
xlsxwriter : 1.3.7
lxml.etree : 4.6.2
html5lib : 1.1
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.19.0
pandas_datareader: None
bs4 : 4.9.3
bottleneck : 1.3.2
fsspec : 0.8.3
fastparquet : None
gcsfs : None
matplotlib : 3.3.2
numexpr : 2.7.2
odfpy : None
openpyxl : 3.0.6
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : None
scipy : 1.5.2
sqlalchemy : 1.3.21
tables : 3.6.1
tabulate : None
xarray : None
xlrd : 2.0.1
xlwt : 1.3.0
numba : 0.51.2
```
(base) Catalina1{kazzz-s} temp (1)% python testPD12.py
Pandas version = 1.2.0
X Y Z
0 1 6 a
1 2 7 b
2 3 8 c
3 4 9 d
4 5 10 e
Traceback (most recent call last):
File "testPD12.py", line 52, in <module>
df.to_csv( fp, index=False, header=True )
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/site-packages/pandas/core/generic.py", line 3384, in to_csv
return DataFrameRenderer(formatter).to_csv(
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/site-packages/pandas/io/formats/format.py", line 1083, in to_csv
csv_formatter.save()
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/site-packages/pandas/io/formats/csvs.py", line 248, in save
self._save()
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/site-packages/pandas/io/common.py", line 104, in __exit__
self.close()
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/site-packages/pandas/io/common.py", line 89, in close
self.handle.flush()
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/codecs.py", line 721, in write
return self.writer.write(data)
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/codecs.py", line 377, in write
data, consumed = self.encode(object, self.errors)
TypeError: utf_8_encode() argument 1 must be str, not bytes
```
</details>
| thank you for the report and your minimal examples! Yes, that is a regression.
Unfortunately it seems that `codecs.open( ..., mode="w", encoding="utf-8" ).mode` is `"wb"`. So pandas 1.2 tries to write bytes to it. Interestingly `codecs.open( ..., mode="w" ).mode` is `"w"` and should therefore work with pandas 1.2.
> The output below has been obtained by downgrading and pinning Pandas to V1.1.5.
d75eb5ba1be16b6cd74fd44a68ce124be6575e4f...ff1cd78535f1badc74061c36700ea005193a8461
https://github.com/simonjayhawkins/pandas/runs/1723695416?check_suite_focus=true
perhaps #36997
Hello, we still seem to be getting this issue in Pandas 1.2.1, but with another text IO class:
```
>>> import codecs
>>> import pandas
>>> df = pandas.DataFrame({'a': [1,2,3]})
>>> f = codecs.getwriter('utf-8')(open('blah', 'wb'))
>>> df.to_csv(f)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "env/lib/python3.7/site-packages/pandas/core/generic.py", line 3402, in to_csv
storage_options=storage_options,
File "env/lib/python3.7/site-packages/pandas/io/formats/format.py", line 1083, in to_csv
csv_formatter.save()
File "env/lib/python3.7/site-packages/pandas/io/formats/csvs.py", line 248, in save
self._save()
File "env/lib/python3.7/site-packages/pandas/io/common.py", line 105, in __exit__
self.close()
File "env/lib/python3.7/site-packages/pandas/io/common.py", line 90, in close
self.handle.flush()
File "env/lib/python3.7/codecs.py", line 377, in write
data, consumed = self.encode(object, self.errors)
TypeError: utf_8_encode() argument 1 must be str, not bytes
```
The class that `codecs.getwriter('utf-8')` returns on my system is `encodings.utf_8.StreamWriter`.
oh my, I think there are three directions for how to address this:
1) go back to the pre-1.2 behavior: assume text mode except for a growing list of binary classes
2) try to infer it with `.mode` (as done in 1.2) and a growing list of common text/byte classes
3) make sure that `read_`/`to_` have a `mode` keyword: force the user to specify the correct mode
(3) would probably not be okay for 1.2.*
@jreback what do you think is the best short/long-term solution?
It looks like all of the results of `codecs.getwriter(...)` are subclasses of `codecs.StreamWriter` ... so far as I can tell.
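A quick spot check (my addition) of that claim, which is exactly what the `text_classes` test in the patch above relies on:
```python
import codecs

# Writer classes produced by codecs.getwriter subclass codecs.StreamWriter,
# so an issubclass check can identify them as text-mode handles.
writer_cls = codecs.getwriter("utf-8")
print(issubclass(writer_cls, codecs.StreamWriter))  # True
```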
@twoertwein 2) sgtm
It could also be helpful to be able to pass 'wt' as the `mode` argument of `to_csv`, to force text mode in case it wasn't detected? | 2021-01-27T17:22:23Z | [] | [] |
Traceback (most recent call last):
File "testPD12.py", line 52, in <module>
df.to_csv( fp, index=False, header=True )
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/site-packages/pandas/core/generic.py", line 3384, in to_csv
return DataFrameRenderer(formatter).to_csv(
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/site-packages/pandas/io/formats/format.py", line 1083, in to_csv
csv_formatter.save()
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/site-packages/pandas/io/formats/csvs.py", line 248, in save
self._save()
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/site-packages/pandas/io/common.py", line 104, in __exit__
self.close()
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/site-packages/pandas/io/common.py", line 89, in close
self.handle.flush()
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/codecs.py", line 721, in write
return self.writer.write(data)
File "/Users/kazzz-s/opt/anaconda3/lib/python3.8/codecs.py", line 377, in write
data, consumed = self.encode(object, self.errors)
TypeError: utf_8_encode() argument 1 must be str, not bytes
| 14,689 |
|||
pandas-dev/pandas | pandas-dev__pandas-39464 | 8e2e98baa62de4629ed3240abb66be9cc1c3ead5 | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -412,6 +412,7 @@ Reshaping
- Bug in :meth:`DataFrame.join` not assigning values correctly when having :class:`MultiIndex` where at least one dimension is from dtype ``Categorical`` with non-alphabetically sorted categories (:issue:`38502`)
- :meth:`Series.value_counts` and :meth:`Series.mode` return consistent keys in original order (:issue:`12679`, :issue:`11227` and :issue:`39007`)
- Bug in :meth:`DataFrame.apply` would give incorrect results when used with a string argument and ``axis=1`` when the axis argument was not supported and now raises a ``ValueError`` instead (:issue:`39211`)
+- Bug in :meth:`DataFrame.sort_values` not reshaping index correctly after sorting on columns, when ``ignore_index=True`` (:issue:`39464`)
- Bug in :meth:`DataFrame.append` returning incorrect dtypes with combinations of ``ExtensionDtype`` dtypes (:issue:`39454`)
Sparse
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -5560,7 +5560,9 @@ def sort_values( # type: ignore[override]
)
if ignore_index:
- new_data.set_axis(1, ibase.default_index(len(indexer)))
+ new_data.set_axis(
+ self._get_block_manager_axis(axis), ibase.default_index(len(indexer))
+ )
result = self._constructor(new_data)
if inplace:
| BUG: sort_values creates an unprintable object
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
```python
import pandas as pd
import numpy as np
RAND_LOW = 0
RAND_HIGH = 100
NCOLS = 2 ** 5
NROWS = 2 ** 2
random_state = np.random.RandomState(seed=42)
data = {
"col{}".format(int((i - NCOLS / 2) % NCOLS + 1)): random_state.randint(
RAND_LOW, RAND_HIGH, size=(NROWS)
)
for i in range(NCOLS)
}
df1 = pd.DataFrame(data)
print(df1)
df2 = df1.sort_values(df1.index[0], axis=1, ignore_index=True)
print(df2)
```
#### Problem description
Printing the object returned by `sort_values` produces an exception because some internal metadata is wrong:
```
Traceback (most recent call last):
File "sort_values_test6.py", line 19, in <module>
print(df2)
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/core/frame.py", line 803, in __repr__
self.to_string(
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/core/frame.py", line 939, in to_string
return fmt.DataFrameRenderer(formatter).to_string(
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/io/formats/format.py", line 1031, in to_string
string = string_formatter.to_string()
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/io/formats/string.py", line 23, in to_string
text = self._get_string_representation()
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/io/formats/string.py", line 47, in _get_string_representation
return self._fit_strcols_to_terminal_width(strcols)
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/io/formats/string.py", line 179, in _fit_strcols_to_terminal_width
self.fmt.truncate()
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/io/formats/format.py", line 700, in truncate
self._truncate_horizontally()
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/io/formats/format.py", line 718, in _truncate_horizontally
self.tr_frame = concat((left, right), axis=1)
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/core/reshape/concat.py", line 298, in concat
return op.get_result()
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/core/reshape/concat.py", line 520, in get_result
new_data = concatenate_block_managers(
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/core/internals/concat.py", line 89, in concatenate_block_managers
return BlockManager(blocks, axes)
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/core/internals/managers.py", line 143, in __init__
self._verify_integrity()
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/core/internals/managers.py", line 323, in _verify_integrity
raise construction_error(tot_items, block.shape[1:], self.axes)
ValueError: Shape of passed values is (4, 16), indices imply (32, 16)
```
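Until this is fixed, a workaround (my sketch, not from the report) is to sort without `ignore_index` and relabel the columns manually:
```python
# Equivalent of ignore_index=True along axis=1 on affected versions:
df2 = df1.sort_values(df1.index[0], axis=1)
df2.columns = range(df2.shape[1])
print(df2)
```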
#### Expected Output
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : 9d598a5e1eee26df95b3910e3f2934890d062caa
python : 3.8.6.final.0
python-bits : 64
OS : Linux
OS-release : 5.4.0-56-generic
Version : #62-Ubuntu SMP Mon Nov 23 19:20:19 UTC 2020
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.2.1
numpy : 1.19.4
pytz : 2020.1
dateutil : 2.8.1
pip : 20.3.1
setuptools : 49.6.0.post20201009
Cython : None
pytest : 6.2.1
hypothesis : None
sphinx : None
blosc : None
feather : 0.4.1
xlsxwriter : None
lxml.etree : 4.6.2
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.18.1
pandas_datareader: None
bs4 : 4.9.3
bottleneck : None
fsspec : 0.8.4
fastparquet : None
gcsfs : None
matplotlib : 3.2.2
numexpr : 2.7.1
odfpy : None
openpyxl : 3.0.5
pandas_gbq : 0.14.1
pyarrow : 2.0.0
pyxlsb : None
s3fs : 0.5.2
scipy : 1.5.4
sqlalchemy : 1.3.21
tables : 3.6.1
tabulate : None
xarray : 0.16.2
xlrd : 2.0.1
xlwt : None
numba : None
</details>
| Also, the shape of the DataFrame changes with this `sort_values` call, which doesn't seem right: the original DataFrame has shape `(4, 32)`, while the sorted one has shape `(32, 32)`.
On master, this already raises inside `sort_values`:
```
Traceback (most recent call last):
File "/home/developer/.config/JetBrains/PyCharm2020.3/scratches/scratch_4.py", line 409, in <module>
df2 = df1.sort_values(df1.index[0], axis=1, ignore_index=True)
File "/home/developer/PycharmProjects/pandas/pandas/core/frame.py", line 5551, in sort_values
new_data.set_axis(1, ibase.default_index(len(indexer)))
File "/home/developer/PycharmProjects/pandas/pandas/core/internals/managers.py", line 224, in set_axis
raise ValueError(
ValueError: Length mismatch: Expected axis has 4 elements, new values have 32 elements
Process finished with exit code 1
```
If you remove `ignore_index=True`, then this works. From the looks of it, this recently added argument tries to reset the row index even when it is the columns that have been sorted:
```
5550: if ignore_index:
5551: new_data.set_axis(1, ibase.default_index(len(indexer)))
```
If you change this to:
```
5550: if ignore_index:
5551: new_data.set_axis(1-axis, ibase.default_index(len(indexer)))
```
then you do actually get the result you want.
Yes, I know that it works with `ignore_index=False`. Also, this test is sensitive to the contents of `df1.columns`. For me it is not a matter of somehow performing this operation, but of a correct implementation in Pandas. This bug was found by the Modin project's tests on `sort_values`; we need a correctly working Pandas function to implement this in Modin.
I provided the solution for implementing this correctly in Pandas; feel free to submit a PR and check that it satisfies all tests.
Instead of the `1 - axis`, we should probably use `self._get_block_manager_axis(axis)` (which basically does the same, but with the helper function that is meant for this).
take | 2021-01-29T03:46:02Z | [] | [] |
Traceback (most recent call last):
File "sort_values_test6.py", line 19, in <module>
print(df2)
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/core/frame.py", line 803, in __repr__
self.to_string(
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/core/frame.py", line 939, in to_string
return fmt.DataFrameRenderer(formatter).to_string(
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/io/formats/format.py", line 1031, in to_string
string = string_formatter.to_string()
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/io/formats/string.py", line 23, in to_string
text = self._get_string_representation()
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/io/formats/string.py", line 47, in _get_string_representation
return self._fit_strcols_to_terminal_width(strcols)
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/io/formats/string.py", line 179, in _fit_strcols_to_terminal_width
self.fmt.truncate()
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/io/formats/format.py", line 700, in truncate
self._truncate_horizontally()
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/io/formats/format.py", line 718, in _truncate_horizontally
self.tr_frame = concat((left, right), axis=1)
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/core/reshape/concat.py", line 298, in concat
return op.get_result()
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/core/reshape/concat.py", line 520, in get_result
new_data = concatenate_block_managers(
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/core/internals/concat.py", line 89, in concatenate_block_managers
return BlockManager(blocks, axes)
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/core/internals/managers.py", line 143, in __init__
self._verify_integrity()
File "/localdisk/gashiman/miniconda3/lib/python3.8/site-packages/pandas/core/internals/managers.py", line 323, in _verify_integrity
raise construction_error(tot_items, block.shape[1:], self.axes)
ValueError: Shape of passed values is (4, 16), indices imply (32, 16)
| 14,693 |
|||
pandas-dev/pandas | pandas-dev__pandas-3949 | da14c6e857fd1fc7875cd552779a6063ec9e4ddc | diff --git a/RELEASE.rst b/RELEASE.rst
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -101,6 +101,7 @@ pandas 0.11.1
to select with a Storer; these are invalid parameters at this time
- can now specify an ``encoding`` option to ``append/put``
to enable alternate encodings (GH3750_)
+ - enable support for ``iterator/chunksize`` with ``read_hdf``
- The repr() for (Multi)Index now obeys display.max_seq_items rather
then numpy threshold print options. (GH3426_, GH3466_)
- Added mangle_dupe_cols option to read_table/csv, allowing users
diff --git a/doc/source/io.rst b/doc/source/io.rst
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1925,6 +1925,18 @@ The default is 50,000 rows returned in a chunk.
for df in store.select('df', chunksize=3):
print df
+.. note::
+
+ .. versionadded:: 0.11.1
+
+ You can also use the iterator with ``read_hdf`` which will open, then
+ automatically close the store when finished iterating.
+
+ .. code-block:: python
+
+ for df in read_hdf('store.h5','df', chunsize=3):
+ print df
+
Note, that the chunksize keyword applies to the **returned** rows. So if you
are doing a query, then that set will be subdivided and returned in the
iterator. Keep in mind that if you do not pass a ``where`` selection criteria
diff --git a/doc/source/v0.11.1.txt b/doc/source/v0.11.1.txt
--- a/doc/source/v0.11.1.txt
+++ b/doc/source/v0.11.1.txt
@@ -6,6 +6,11 @@ v0.11.1 (June ??, 2013)
This is a minor release from 0.11.0 and includes several new features and
enhancements along with a large number of bug fixes.
+Highlites include a consistent I/O API naming scheme, routines to read html,
+write multi-indexes to csv files, read & write STATA data files, read & write JSON format
+files, Python 3 support for ``HDFStore``, filtering of groupby expressions via ``filter``, and a
+revamped ``replace`` routine that accepts regular expressions.
+
API changes
~~~~~~~~~~~
@@ -148,8 +153,8 @@ API changes
``bs4`` + ``html5lib`` when lxml fails to parse. a list of parsers to try
until success is also valid
-Enhancements
-~~~~~~~~~~~~
+I/O Enhancements
+~~~~~~~~~~~~~~~~
- ``pd.read_html()`` can now parse HTML strings, files or urls and return
DataFrames, courtesy of @cpcloud. (GH3477_, GH3605_, GH3606_, GH3616_).
@@ -184,28 +189,6 @@ Enhancements
accessable via ``read_json`` top-level function for reading,
and ``to_json`` DataFrame method for writing, :ref:`See the docs<io.json>`
- - ``DataFrame.replace()`` now allows regular expressions on contained
- ``Series`` with object dtype. See the examples section in the regular docs
- :ref:`Replacing via String Expression <missing_data.replace_expression>`
-
- For example you can do
-
- .. ipython :: python
-
- df = DataFrame({'a': list('ab..'), 'b': [1, 2, 3, 4]})
- df.replace(regex=r'\s*\.\s*', value=np.nan)
-
- to replace all occurrences of the string ``'.'`` with zero or more
- instances of surrounding whitespace with ``NaN``.
-
- Regular string replacement still works as expected. For example, you can do
-
- .. ipython :: python
-
- df.replace('.', np.nan)
-
- to replace all occurrences of the string ``'.'`` with ``NaN``.
-
- Multi-index column support for reading and writing csv format files
- The ``header`` option in ``read_csv`` now accepts a
@@ -225,19 +208,62 @@ Enhancements
with ``df.to_csv(..., index=False``), then any ``names`` on the columns index will
be *lost*.
+ .. ipython:: python
+
+ from pandas.util.testing import makeCustomDataframe as mkdf
+ df = mkdf(5,3,r_idx_nlevels=2,c_idx_nlevels=4)
+ df.to_csv('mi.csv',tupleize_cols=False)
+ print open('mi.csv').read()
+ pd.read_csv('mi.csv',header=[0,1,2,3],index_col=[0,1],tupleize_cols=False)
+
+ .. ipython:: python
+ :suppress:
+
+ import os
+ os.remove('mi.csv')
+
+ - Support for ``HDFStore`` (via ``PyTables 3.0.0``) on Python3
+
+ - Iterator support via ``read_hdf`` that automatically opens and closes the
+ store when iteration is finished. This is only for *tables*
+
.. ipython:: python
- from pandas.util.testing import makeCustomDataframe as mkdf
- df = mkdf(5,3,r_idx_nlevels=2,c_idx_nlevels=4)
- df.to_csv('mi.csv',tupleize_cols=False)
- print open('mi.csv').read()
- pd.read_csv('mi.csv',header=[0,1,2,3],index_col=[0,1],tupleize_cols=False)
+ path = 'store_iterator.h5'
+ DataFrame(randn(10,2)).to_hdf(path,'df',table=True)
+ for df in read_hdf(path,'df', chunksize=3):
+ print df
.. ipython:: python
- :suppress:
+ :suppress:
- import os
- os.remove('mi.csv')
+ import os
+ os.remove(path)
+
+Other Enhancements
+~~~~~~~~~~~~~~~~~~
+
+ - ``DataFrame.replace()`` now allows regular expressions on contained
+ ``Series`` with object dtype. See the examples section in the regular docs
+ :ref:`Replacing via String Expression <missing_data.replace_expression>`
+
+ For example you can do
+
+ .. ipython :: python
+
+ df = DataFrame({'a': list('ab..'), 'b': [1, 2, 3, 4]})
+ df.replace(regex=r'\s*\.\s*', value=np.nan)
+
+ to replace all occurrences of the string ``'.'`` with zero or more
+ instances of surrounding whitespace with ``NaN``.
+
+ Regular string replacement still works as expected. For example, you can do
+
+ .. ipython :: python
+
+ df.replace('.', np.nan)
+
+ to replace all occurrences of the string ``'.'`` with ``NaN``.
- ``pd.melt()`` now accepts the optional parameters ``var_name`` and ``value_name``
to specify custom column names of the returned DataFrame.
@@ -261,8 +287,6 @@ Enhancements
pd.get_option('a.b')
pd.get_option('b.c')
- - Support for ``HDFStore`` (via ``PyTables 3.0.0``) on Python3
-
- The ``filter`` method for group objects returns a subset of the original
object. Suppose we want to take only elements that belong to groups with a
group sum greater than 2.
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -196,12 +196,27 @@ def to_hdf(path_or_buf, key, value, mode=None, complevel=None, complib=None, app
def read_hdf(path_or_buf, key, **kwargs):
""" read from the store, closeit if we opened it """
- f = lambda store: store.select(key, **kwargs)
+ f = lambda store, auto_close: store.select(key, auto_close=auto_close, **kwargs)
if isinstance(path_or_buf, basestring):
- with get_store(path_or_buf) as store:
- return f(store)
- f(path_or_buf)
+
+ # can't auto open/close if we are using an iterator
+ # so delegate to the iterator
+ store = HDFStore(path_or_buf)
+ try:
+ return f(store, True)
+ except:
+
+ # if there is an error, close the store
+ try:
+ store.close()
+ except:
+ pass
+
+ raise
+
+ # a passed store; user controls open/close
+ f(path_or_buf, False)
class HDFStore(object):
"""
@@ -405,7 +420,7 @@ def get(self, key):
raise KeyError('No object named %s in the file' % key)
return self._read_group(group)
- def select(self, key, where=None, start=None, stop=None, columns=None, iterator=False, chunksize=None, **kwargs):
+ def select(self, key, where=None, start=None, stop=None, columns=None, iterator=False, chunksize=None, auto_close=False, **kwargs):
"""
Retrieve pandas object stored in file, optionally based on where
criteria
@@ -419,6 +434,7 @@ def select(self, key, where=None, start=None, stop=None, columns=None, iterator=
columns : a list of columns that if not None, will limit the return columns
iterator : boolean, return an iterator, default False
chunksize : nrows to include in iteration, return an iterator
+ auto_close : boolean, should automatically close the store when finished, default is False
"""
group = self.get_node(key)
@@ -434,9 +450,11 @@ def func(_start, _stop):
return s.read(where=where, start=_start, stop=_stop, columns=columns, **kwargs)
if iterator or chunksize is not None:
- return TableIterator(func, nrows=s.nrows, start=start, stop=stop, chunksize=chunksize)
+ if not s.is_table:
+ raise TypeError("can only use an iterator or chunksize on a table")
+ return TableIterator(self, func, nrows=s.nrows, start=start, stop=stop, chunksize=chunksize, auto_close=auto_close)
- return TableIterator(func, nrows=s.nrows, start=start, stop=stop).get_values()
+ return TableIterator(self, func, nrows=s.nrows, start=start, stop=stop, auto_close=auto_close).get_values()
def select_as_coordinates(self, key, where=None, start=None, stop=None, **kwargs):
"""
@@ -473,7 +491,7 @@ def select_column(self, key, column, **kwargs):
"""
return self.get_storer(key).read_column(column = column, **kwargs)
- def select_as_multiple(self, keys, where=None, selector=None, columns=None, start=None, stop=None, iterator=False, chunksize=None, **kwargs):
+ def select_as_multiple(self, keys, where=None, selector=None, columns=None, start=None, stop=None, iterator=False, chunksize=None, auto_close=False, **kwargs):
""" Retrieve pandas objects from multiple tables
Parameters
@@ -541,9 +559,9 @@ def func(_start, _stop):
return concat(objs, axis=axis, verify_integrity=True)
if iterator or chunksize is not None:
- return TableIterator(func, nrows=nrows, start=start, stop=stop, chunksize=chunksize)
+ return TableIterator(self, func, nrows=nrows, start=start, stop=stop, chunksize=chunksize, auto_close=auto_close)
- return TableIterator(func, nrows=nrows, start=start, stop=stop).get_values()
+ return TableIterator(self, func, nrows=nrows, start=start, stop=stop, auto_close=auto_close).get_values()
def put(self, key, value, table=None, append=False, **kwargs):
@@ -916,16 +934,20 @@ class TableIterator(object):
Parameters
----------
- func : the function to get results
+ store : the reference store
+ func : the function to get results
nrows : the rows to iterate on
start : the passed start value (default is None)
- stop : the passed stop value (default is None)
+ stop : the passed stop value (default is None)
chunksize : the passed chunking valeu (default is 50000)
+ auto_close : boolean, automatically close the store at the end of iteration,
+ default is False
kwargs : the passed kwargs
"""
- def __init__(self, func, nrows, start=None, stop=None, chunksize=None):
- self.func = func
+ def __init__(self, store, func, nrows, start=None, stop=None, chunksize=None, auto_close=False):
+ self.store = store
+ self.func = func
self.nrows = nrows or 0
self.start = start or 0
@@ -937,6 +959,7 @@ def __init__(self, func, nrows, start=None, stop=None, chunksize=None):
chunksize = 100000
self.chunksize = chunksize
+ self.auto_close = auto_close
def __iter__(self):
current = self.start
@@ -950,9 +973,16 @@ def __iter__(self):
yield v
+ self.close()
+
+ def close(self):
+ if self.auto_close:
+ self.store.close()
+
def get_values(self):
- return self.func(self.start, self.stop)
-
+ results = self.func(self.start, self.stop)
+ self.close()
+ return results
class IndexCol(object):
""" an index column description class
| allow HDFStore to remain open when TableIterator is returned from read_hdf
Hi,
When I use a TableIterator obtained from the pandas.read_hdf function (with the keyword argument iterator=True), I am unable to retrieve any data due to the error "ClosedNodeError: the node object is closed".
For instance:
```
pandas.DataFrame({'a':[1,2,3], 'b':[4,5,6]}).to_hdf("test.h5", "test", append=True)
it = pandas.read_hdf("test.h5","test",iterator=True)
iter(it).next()
Traceback (most recent call last):
File "<ipython-input-22-5634d86698ab>", line 1, in <module>
iter(it).next()
File "/usr/local/lib/python2.7/site-packages/pandas/io/pytables.py", line 912, in __iter__
v = self.func(current, stop)
...
File "/usr/local/lib/python2.7/site-packages/tables/node.py", line 355, in _g_check_open
raise ClosedNodeError("the node object is closed")
ClosedNodeError: the node object is closed
```
I looked through the source code of pandas.io.pytables and found that in the
get_store function, store.close() is always run when read_hdf returns, even if
it returns a TableIterator. My assumption is that the store should remain open
in order for the TableIterator to work. Can you please let me know if this fix
is acceptable, or is there an easier way to do this?
Thanks,
Sean
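A sketch of the explicit open/close pattern that works without this change, assuming the data was written in table format as above (my addition, restating the docs linked below):
```python
import pandas as pd

# Keep the store open ourselves so the iterator can read each chunk;
# read_hdf used to close the file before iteration even started.
store = pd.HDFStore("test.h5")
try:
    for chunk in store.select("test", chunksize=2):
        print(chunk)
finally:
    store.close()
```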
| it's only opened/closed automatically when you use `read_hdf`; otherwise, use the store as normal. The example uses the more verbose syntax. http://pandas.pydata.org/pandas-docs/dev/io.html#iterator
@seanyeh anything further? otherwise pls close
I do find it odd that it allows an option that doesn't work. Thanks anyway
where does the option not work? it works exactly how it's supposed to in `read_hdf`, which provides a context to open/close the file. When used with an already open store it doesn't close it.
How else would you expect it to work?
@jreback what's the point of passing iterator=True if you can't iterate over the result?
It seems intuitive that this should work, right?
``` python
for x in pandas.read_hdf("test.h5", "test", iterator=True):
print x
```
but according to the example above, that would raise the closed node error.
Maybe it would make more sense to have TableIterator handle the cleanup if `read_hdf` is passed a path/string instead of an open store? (so, in `__iter__`, after control passes out of the while loop, do the cleanup that is necessary to close it up).
@jtratner @seanyeh
originally you always had to open/close stores yourself; `read_hdf` does this for you. I suppose it could be enabled with iterator/chunksize support like above (in which the context manager knows to close it, but only after iteration is done).
I guess if the context manager is passed an open handle then it shouldn't close it...
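Concretely, the ownership rule could look something like this (rough sketch; see the diff above for the actual `auto_close` plumbing):
```python
from pandas import HDFStore

def read_hdf(path_or_store, key, **kwargs):
    # Close automatically only if we opened the handle ourselves.
    if isinstance(path_or_store, HDFStore):
        store, auto_close = path_or_store, False  # caller owns the handle
    else:
        store, auto_close = HDFStore(path_or_store), True
    return store.select(key, auto_close=auto_close, **kwargs)
```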
+1 for enabling read_hdf(..., iterator=True).
Arguably, since pytables automatically closes the h5 file via atexit.register(close_open_files), we don't need to explicitly close it.
@adgaudio I disagree.
These files need explicit open/close; or use with a context manager.
This is straightforward to address and will be fixed soon.
Relying on the system to close files is not good from a safety perspective nor good programming practice.
Yea, true. Thanks everyone - I'll look forward to using the patch :)
Agree with @jreback. atexit is fragile and there's no reason not to handle
this explicitly.
On a separate note - is it problematic to pass (or set) a reference to the
store to the TableIterator? Makes it much cleaner to handle that way...
| 2013-06-18T22:05:48Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-22-5634d86698ab>", line 1, in <module>
iter(it).next()
File "/usr/local/lib/python2.7/site-packages/pandas/io/pytables.py", line 912, in __iter__
v = self.func(current, stop)
...
File "/usr/local/lib/python2.7/site-packages/tables/node.py", line 355, in _g_check_open
raise ClosedNodeError("the node object is closed")
ClosedNodeError: the node object is closed
| 14,698 |
|||
pandas-dev/pandas | pandas-dev__pandas-39496 | 0b16fb308b7808f193a5a8d86d71b6e57d99d4e7 | BUG: read_csv with custom date parser and na_filter=True results in ValueError
```python
import numpy as np
import pandas as pd
from io import StringIO
def __custom_date_parser(time):
    time_temp = time.astype(np.float).astype(np.int)  # convert float seconds to int type
    return pd.to_timedelta(time_temp, unit='s')
testdata = StringIO("""time e n h
41047.00 -98573.7297 871458.0640 389.0089
41048.00 -98573.7299 871458.0640 389.0089
41049.00 -98573.7300 871458.0642 389.0088
41050.00 -98573.7299 871458.0643 389.0088
41051.00 -98573.7302 871458.0640 389.0086
""")
df = pd.read_csv(testdata, delim_whitespace=True, parse_dates=True, date_parser=__custom_date_parser, index_col='time')
```
I noticed this problem when I executed a piece of old code which had worked before (a few months ago). Normally this code would parse a text file with GPS seconds of week as time and convert it to a TimedeltaIndex. Now when I execute this, it results in a **ValueError: unit abbreviation w/o a number** (full stack trace below). I tracked it down to the default option **na_filter=True** in pd.read_csv. When I set it to False everything works. With a bit of digging I think I found the source of the error in algorithms.py -> _ensure_data -> line 142.
```python
# datetimelike
vals_dtype = getattr(values, "dtype", None)
if needs_i8_conversion(vals_dtype) or needs_i8_conversion(dtype):
    if is_period_dtype(vals_dtype) or is_period_dtype(dtype):
        from pandas import PeriodIndex

        values = PeriodIndex(values)
        dtype = values.dtype
    elif is_timedelta64_dtype(vals_dtype) or is_timedelta64_dtype(dtype):
        from pandas import TimedeltaIndex

        values = TimedeltaIndex(values)  # This is line 142
        dtype = values.dtype
    else:
        # Datetime
        if values.ndim > 1 and is_datetime64_ns_dtype(vals_dtype):
            # Avoid calling the DatetimeIndex constructor as it is 1D only
            # Note: this is reached by DataFrame.rank calls GH#27027
            # TODO(EA2D): special case not needed with 2D EAs
            asi8 = values.view("i8")
            dtype = values.dtype
            return asi8, dtype

        from pandas import DatetimeIndex

        values = DatetimeIndex(values)
        dtype = values.dtype
```
Here the function tries to parse **values** as a TimedeltaIndex, but values is ['' 'n/a' '-nan' '#N/A' '1.#QNAN' 'nan' '#NA' 'NaN' '-1.#QNAN' '#N/A N/A', '-NaN' 'N/A' 'NULL' '<NA>' 'null' '1.#IND' 'NA' '-1.#IND'] in this case. It executes this if statement because **is_timedelta64_dtype(dtype)** is true here. I can't believe that this is expected behaviour, as it has worked before.
``` python-traceback
Traceback (most recent call last):
File "...\lib\site-packages\pandas\io\parsers.py", line 458, in _read
data = parser.read(nrows)
File "...\lib\site-packages\pandas\io\parsers.py", line 1186, in read
ret = self._engine.read(nrows)
File "...\lib\site-packages\pandas\io\parsers.py", line 2221, in read
index, names = self._make_index(data, alldata, names)
File "...\lib\site-packages\pandas\io\parsers.py", line 1667, in _make_index
index = self._agg_index(index)
File "...\lib\site-packages\pandas\io\parsers.py", line 1760, in _agg_index
arr, _ = self._infer_types(arr, col_na_values | col_na_fvalues)
File "...\lib\site-packages\pandas\io\parsers.py", line 1861, in _infer_types
mask = algorithms.isin(values, list(na_values))
File "...\lib\site-packages\pandas\core\algorithms.py", line 433, in isin
values, _ = _ensure_data(values, dtype=dtype)
File "...\lib\site-packages\pandas\core\algorithms.py", line 142, in _ensure_data
values = TimedeltaIndex(values)
File "...\lib\site-packages\pandas\core\indexes\timedeltas.py", line 157, in __new__
data, freq=freq, unit=unit, dtype=dtype, copy=copy
File "...\lib\site-packages\pandas\core\arrays\timedeltas.py", line 216, in _from_sequence
data, inferred_freq = sequence_to_td64ns(data, copy=copy, unit=unit)
File "...\lib\site-packages\pandas\core\arrays\timedeltas.py", line 930, in sequence_to_td64ns
data = objects_to_td64ns(data, unit=unit, errors=errors)
File "...\lib\site-packages\pandas\core\arrays\timedeltas.py", line 1040, in objects_to_td64ns
result = array_to_timedelta64(values, unit=unit, errors=errors)
File "pandas\_libs\tslibs\timedeltas.pyx", line 273, in pandas._libs.tslibs.timedeltas.array_to_timedelta64
File "pandas\_libs\tslibs\timedeltas.pyx", line 268, in pandas._libs.tslibs.timedeltas.array_to_timedelta64
File "pandas\_libs\tslibs\timedeltas.pyx", line 215, in pandas._libs.tslibs.timedeltas.convert_to_timedelta64
File "pandas\_libs\tslibs\timedeltas.pyx", line 428, in pandas._libs.tslibs.timedeltas.parse_timedelta_string
ValueError: unit abbreviation w/o a number
python-BaseException
```
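For reference, the workaround mentioned above (reusing the names from the code sample) is simply:
```python
# Disable NA filtering so the sentinel strings never reach the
# TimedeltaIndex constructor inside _ensure_data.
df = pd.read_csv(
    testdata,
    delim_whitespace=True,
    parse_dates=True,
    date_parser=__custom_date_parser,
    index_col='time',
    na_filter=False,
)
```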
<details>
INSTALLED VERSIONS
------------------
commit : f2ca0a2665b2d169c97de87b8e778dbed86aea07
python : 3.7.9.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.18362
machine : AMD64
processor : Intel64 Family 6 Model 94 Stepping 3, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.None
pandas : 1.1.1
numpy : 1.18.1
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.2
setuptools : 49.6.0.post20200814
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : 3.1.2
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
</details>
| Thanks for the report. I get a different exception on master
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
~/sandbox/pandas/pandas/_libs/tslibs/timedeltas.pyx in pandas._libs.tslibs.timedeltas.parse_timedelta_unit()
500 try:
--> 501 return timedelta_abbrevs[unit.lower()]
502 except (KeyError, AttributeError):
KeyError: '#qnan'
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
~/sandbox/pandas/pandas/_libs/tslibs/timedeltas.pyx in pandas._libs.tslibs.timedeltas.array_to_timedelta64()
262 else:
--> 263 result[i] = parse_timedelta_string(values[i])
264 except (TypeError, ValueError):
~/sandbox/pandas/pandas/_libs/tslibs/timedeltas.pyx in pandas._libs.tslibs.timedeltas.parse_timedelta_string()
424 if len(number):
--> 425 r = timedelta_from_spec(number, frac, unit)
426 result += timedelta_as_neg(r, neg)
~/sandbox/pandas/pandas/_libs/tslibs/timedeltas.pyx in pandas._libs.tslibs.timedeltas.timedelta_from_spec()
472 unit = 'm'
--> 473 unit = parse_timedelta_unit(unit)
474 except KeyError:
~/sandbox/pandas/pandas/_libs/tslibs/timedeltas.pyx in pandas._libs.tslibs.timedeltas.parse_timedelta_unit()
502 except (KeyError, AttributeError):
--> 503 raise ValueError(f"invalid unit abbreviation: {unit}")
504
ValueError: invalid unit abbreviation: #QNAN
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
~/sandbox/pandas/pandas/_libs/tslibs/timedeltas.pyx in pandas._libs.tslibs.timedeltas.parse_timedelta_unit()
500 try:
--> 501 return timedelta_abbrevs[unit.lower()]
502 except (KeyError, AttributeError):
KeyError: '#qnan'
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-25-1b8638be91b4> in <module>
15 """)
16
---> 17 df = pd.read_csv(testdata, delim_whitespace=True, parse_dates=True, date_parser=__custom_date_parser, index_col='time')
~/sandbox/pandas/pandas/io/parsers.py in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision, storage_options)
687 )
688
--> 689 return _read(filepath_or_buffer, kwds)
690
691
~/sandbox/pandas/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)
457
458 try:
--> 459 data = parser.read(nrows)
460 finally:
461 parser.close()
~/sandbox/pandas/pandas/io/parsers.py in read(self, nrows)
1187 def read(self, nrows=None):
1188 nrows = _validate_integer("nrows", nrows)
-> 1189 ret = self._engine.read(nrows)
1190
1191 # May alter columns / col_dict
~/sandbox/pandas/pandas/io/parsers.py in read(self, nrows)
2220
2221 names, data = self._do_date_conversions(names, data)
-> 2222 index, names = self._make_index(data, alldata, names)
2223
2224 # maybe create a mi on the columns
~/sandbox/pandas/pandas/io/parsers.py in _make_index(self, data, alldata, columns, indexnamerow)
1668 elif not self._has_complex_date_col:
1669 index = self._get_simple_index(alldata, columns)
-> 1670 index = self._agg_index(index)
1671 elif self._has_complex_date_col:
1672 if not self._name_processed:
~/sandbox/pandas/pandas/io/parsers.py in _agg_index(self, index, try_parse_dates)
1761 )
1762
-> 1763 arr, _ = self._infer_types(arr, col_na_values | col_na_fvalues)
1764 arrays.append(arr)
1765
~/sandbox/pandas/pandas/io/parsers.py in _infer_types(self, values, na_values, try_num_bool)
1862 na_count = 0
1863 if issubclass(values.dtype.type, (np.number, np.bool_)):
-> 1864 mask = algorithms.isin(values, list(na_values))
1865 na_count = mask.sum()
1866 if na_count > 0:
~/sandbox/pandas/pandas/core/algorithms.py in isin(comps, values)
432
433 comps, dtype = _ensure_data(comps)
--> 434 values, _ = _ensure_data(values, dtype=dtype)
435
436 # faster for larger cases to use np.in1d
~/sandbox/pandas/pandas/core/algorithms.py in _ensure_data(values, dtype)
140 from pandas import TimedeltaIndex
141
--> 142 values = TimedeltaIndex(values)
143 dtype = values.dtype
144 else:
~/sandbox/pandas/pandas/core/indexes/timedeltas.py in __new__(cls, data, unit, freq, closed, dtype, copy, name)
154 # - Cases checked above all return/raise before reaching here - #
155
--> 156 tdarr = TimedeltaArray._from_sequence(
157 data, freq=freq, unit=unit, dtype=dtype, copy=copy
158 )
~/sandbox/pandas/pandas/core/arrays/timedeltas.py in _from_sequence(cls, data, dtype, copy, freq, unit)
214 freq, freq_infer = dtl.maybe_infer_freq(freq)
215
--> 216 data, inferred_freq = sequence_to_td64ns(data, copy=copy, unit=unit)
217 freq, freq_infer = dtl.validate_inferred_freq(freq, inferred_freq, freq_infer)
218 if explicit_none:
~/sandbox/pandas/pandas/core/arrays/timedeltas.py in sequence_to_td64ns(data, copy, unit, errors)
928 if is_object_dtype(data.dtype) or is_string_dtype(data.dtype):
929 # no need to make a copy, need to convert if string-dtyped
--> 930 data = objects_to_td64ns(data, unit=unit, errors=errors)
931 copy = False
932
~/sandbox/pandas/pandas/core/arrays/timedeltas.py in objects_to_td64ns(data, unit, errors)
1038 values = np.array(data, dtype=np.object_, copy=False)
1039
-> 1040 result = array_to_timedelta64(values, unit=unit, errors=errors)
1041 return result.view("timedelta64[ns]")
1042
~/sandbox/pandas/pandas/_libs/tslibs/timedeltas.pyx in pandas._libs.tslibs.timedeltas.array_to_timedelta64()
271 result[i] = NPY_NAT
272 else:
--> 273 raise
274
275 return iresult.base # .base to access underlying np.ndarray
~/sandbox/pandas/pandas/_libs/tslibs/timedeltas.pyx in pandas._libs.tslibs.timedeltas.array_to_timedelta64()
266 for i in range(n):
267 try:
--> 268 result[i] = convert_to_timedelta64(values[i], parsed_unit)
269 except ValueError:
270 if errors == 'coerce':
~/sandbox/pandas/pandas/_libs/tslibs/timedeltas.pyx in pandas._libs.tslibs.timedeltas.convert_to_timedelta64()
213 ts = parse_iso_format_string(ts)
214 else:
--> 215 ts = parse_timedelta_string(ts)
216 ts = np.timedelta64(ts)
217 elif is_tick_object(ts):
~/sandbox/pandas/pandas/_libs/tslibs/timedeltas.pyx in pandas._libs.tslibs.timedeltas.parse_timedelta_string()
423 elif len(unit):
424 if len(number):
--> 425 r = timedelta_from_spec(number, frac, unit)
426 result += timedelta_as_neg(r, neg)
427 else:
~/sandbox/pandas/pandas/_libs/tslibs/timedeltas.pyx in pandas._libs.tslibs.timedeltas.timedelta_from_spec()
471 # not month
472 unit = 'm'
--> 473 unit = parse_timedelta_unit(unit)
474 except KeyError:
475 raise ValueError(f"invalid abbreviation: {unit}")
~/sandbox/pandas/pandas/_libs/tslibs/timedeltas.pyx in pandas._libs.tslibs.timedeltas.parse_timedelta_unit()
501 return timedelta_abbrevs[unit.lower()]
502 except (KeyError, AttributeError):
--> 503 raise ValueError(f"invalid unit abbreviation: {unit}")
504
505 # ----------------------------------------------------------------------
ValueError: invalid unit abbreviation: #QNAN
```
Can you investigate where things changed?
This works now on master. Could need a test
take | 2021-01-31T00:11:42Z | [] | [] |
Traceback (most recent call last):
File "...\lib\site-packages\pandas\io\parsers.py", line 458, in _read
data = parser.read(nrows)
File "...\lib\site-packages\pandas\io\parsers.py", line 1186, in read
ret = self._engine.read(nrows)
File "...\lib\site-packages\pandas\io\parsers.py", line 2221, in read
index, names = self._make_index(data, alldata, names)
File "...\lib\site-packages\pandas\io\parsers.py", line 1667, in _make_index
index = self._agg_index(index)
File "...\lib\site-packages\pandas\io\parsers.py", line 1760, in _agg_index
arr, _ = self._infer_types(arr, col_na_values | col_na_fvalues)
File "...\lib\site-packages\pandas\io\parsers.py", line 1861, in _infer_types
mask = algorithms.isin(values, list(na_values))
File "...\lib\site-packages\pandas\core\algorithms.py", line 433, in isin
values, _ = _ensure_data(values, dtype=dtype)
File "...\lib\site-packages\pandas\core\algorithms.py", line 142, in _ensure_data
values = TimedeltaIndex(values)
File "...\lib\site-packages\pandas\core\indexes\timedeltas.py", line 157, in __new__
data, freq=freq, unit=unit, dtype=dtype, copy=copy
File "...\lib\site-packages\pandas\core\arrays\timedeltas.py", line 216, in _from_sequence
data, inferred_freq = sequence_to_td64ns(data, copy=copy, unit=unit)
File "...\lib\site-packages\pandas\core\arrays\timedeltas.py", line 930, in sequence_to_td64ns
data = objects_to_td64ns(data, unit=unit, errors=errors)
File "...\lib\site-packages\pandas\core\arrays\timedeltas.py", line 1040, in objects_to_td64ns
result = array_to_timedelta64(values, unit=unit, errors=errors)
File "pandas\_libs\tslibs\timedeltas.pyx", line 273, in pandas._libs.tslibs.timedeltas.array_to_timedelta64
File "pandas\_libs\tslibs\timedeltas.pyx", line 268, in pandas._libs.tslibs.timedeltas.array_to_timedelta64
File "pandas\_libs\tslibs\timedeltas.pyx", line 215, in pandas._libs.tslibs.timedeltas.convert_to_timedelta64
File "pandas\_libs\tslibs\timedeltas.pyx", line 428, in pandas._libs.tslibs.timedeltas.parse_timedelta_string
ValueError: unit abbreviation w/o a number
| 14,701 |
||||
pandas-dev/pandas | pandas-dev__pandas-39586 | d558bce8e9d5d4adfb0ab587be20b8a231dd1eea | diff --git a/doc/source/whatsnew/v1.2.2.rst b/doc/source/whatsnew/v1.2.2.rst
--- a/doc/source/whatsnew/v1.2.2.rst
+++ b/doc/source/whatsnew/v1.2.2.rst
@@ -22,6 +22,7 @@ Fixed regressions
- Fixed regression in :func:`pandas.testing.assert_series_equal` and :func:`pandas.testing.assert_frame_equal` always raising ``AssertionError`` when comparing extension dtypes (:issue:`39410`)
- Fixed regression in :meth:`~DataFrame.to_csv` opening ``codecs.StreamWriter`` in binary mode instead of in text mode and ignoring user-provided ``mode`` (:issue:`39247`)
- Fixed regression in :meth:`core.window.rolling.Rolling.count` where the ``min_periods`` argument would be set to ``0`` after the operation (:issue:`39554`)
+- Fixed regression in :func:`read_excel` that incorrectly raised when the argument ``io`` was a non-path and non-buffer and the ``engine`` argument was specified (:issue:`39528`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -1069,26 +1069,37 @@ def __init__(
xlrd_version = LooseVersion(get_version(xlrd))
- if xlrd_version is not None and isinstance(path_or_buffer, xlrd.Book):
- ext = "xls"
- else:
- ext = inspect_excel_format(
- content_or_path=path_or_buffer, storage_options=storage_options
- )
-
+ ext = None
if engine is None:
+ # Only determine ext if it is needed
+ if xlrd_version is not None and isinstance(path_or_buffer, xlrd.Book):
+ ext = "xls"
+ else:
+ ext = inspect_excel_format(
+ content_or_path=path_or_buffer, storage_options=storage_options
+ )
+
# ext will always be valid, otherwise inspect_excel_format would raise
engine = config.get_option(f"io.excel.{ext}.reader", silent=True)
if engine == "auto":
engine = get_default_engine(ext, mode="reader")
- if engine == "xlrd" and ext != "xls" and xlrd_version is not None:
- if xlrd_version >= "2":
+ if engine == "xlrd" and xlrd_version is not None:
+ if ext is None:
+ # Need ext to determine ext in order to raise/warn
+ if isinstance(path_or_buffer, xlrd.Book):
+ ext = "xls"
+ else:
+ ext = inspect_excel_format(
+ path_or_buffer, storage_options=storage_options
+ )
+
+ if ext != "xls" and xlrd_version >= "2":
raise ValueError(
f"Your version of xlrd is {xlrd_version}. In xlrd >= 2.0, "
f"only the xls format is supported. Install openpyxl instead."
)
- else:
+ elif ext != "xls":
caller = inspect.stack()[1]
if (
caller.filename.endswith(
diff --git a/pandas/io/excel/_openpyxl.py b/pandas/io/excel/_openpyxl.py
--- a/pandas/io/excel/_openpyxl.py
+++ b/pandas/io/excel/_openpyxl.py
@@ -533,7 +533,11 @@ def get_sheet_data(self, sheet, convert_float: bool) -> List[List[Scalar]]:
version = LooseVersion(get_version(openpyxl))
- if version >= "3.0.0":
+ # There is no good way of determining if a sheet is read-only
+ # https://foss.heptapod.net/openpyxl/openpyxl/-/issues/1605
+ is_readonly = hasattr(sheet, "reset_dimensions")
+
+ if version >= "3.0.0" and is_readonly:
sheet.reset_dimensions()
data: List[List[Scalar]] = []
@@ -541,7 +545,7 @@ def get_sheet_data(self, sheet, convert_float: bool) -> List[List[Scalar]]:
converted_row = [self._convert_cell(cell, convert_float) for cell in row]
data.append(converted_row)
- if version >= "3.0.0" and len(data) > 0:
+ if version >= "3.0.0" and is_readonly and len(data) > 0:
# With dimension reset, openpyxl no longer pads rows
max_width = max(len(data_row) for data_row in data)
if min(len(data_row) for data_row in data) < max_width:
| BUG: read_excel with Workbook and engine="openpyxl" raises ValueError
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample
```python
from openpyxl import load_workbook
from pandas import read_excel
wb = load_workbook("testfile.xlsx")
read_excel(wb, engine="openpyxl")
```
#### Problem description
In pandas 1.1.5, the above code completes with no problems.
In pandas 1.2.1, it causes the following exception:
```
Traceback (most recent call last):
File "c:/Users/akaijanaho/scratch/pandas-openpyxl-bug/bug.py", line 5, in <module>
read_excel(wb, engine="openpyxl")
File "C:\Users\akaijanaho\scratch\pandas-openpyxl-bug\venv\lib\site-packages\pandas\util\_decorators.py", line 299, in wrapper
return func(*args, **kwargs)
File "C:\Users\akaijanaho\scratch\pandas-openpyxl-bug\venv\lib\site-packages\pandas\io\excel\_base.py", line 336, in read_excel
io = ExcelFile(io, storage_options=storage_options, engine=engine)
File "C:\Users\akaijanaho\scratch\pandas-openpyxl-bug\venv\lib\site-packages\pandas\io\excel\_base.py", line 1057, in __init__
ext = inspect_excel_format(
File "C:\Users\akaijanaho\scratch\pandas-openpyxl-bug\venv\lib\site-packages\pandas\io\excel\_base.py", line 938, in inspect_excel_format
with get_handle(
File "C:\Users\akaijanaho\scratch\pandas-openpyxl-bug\venv\lib\site-packages\pandas\io\common.py", line 558, in get_handle
ioargs = _get_filepath_or_buffer(
File "C:\Users\akaijanaho\scratch\pandas-openpyxl-bug\venv\lib\site-packages\pandas\io\common.py", line 371, in _get_filepath_or_buffer
raise ValueError(msg)
ValueError: Invalid file path or buffer object type: <class 'openpyxl.workbook.workbook.Workbook'>
```
The documentation does not specify Workbook as an acceptable value type for io, but supporting it seems reasonable and accords with the 1.1.5 behavior.
In my use case, we mainly parse an Excel file with openpyxl but use pandas for a specific sub-problem. We would like to reuse the same Workbook instead of having pandas re-read the file.
| I will have time later today to look into this. Can you open/wrap the workbook with `ExcelFile` and then pass it to `read_excel`? I'm not too familiar with the ExcelFile syntax, something like the code below might avoid opening the file twice as a workaround for 1.2.1.
```py
import pandas as pd
with pd.ExcelFile('test.xlsx', engine="openpyxl") as excel:
    dataframe = pd.read_excel(excel)
```
Can reproduce on master.
```
>>> from openpyxl import load_workbook
>>> from pandas import read_excel
>>> wb = load_workbook("pandas/tests/io/data/excel/blank.xlsx")
>>> read_excel(wb, engine="openpyxl")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\liwende\pandas\pandas\util\_decorators.py", line 299, in wrapper
return func(*args, **kwargs)
File "C:\Users\liwende\pandas\pandas\io\excel\_base.py", line 340, in read_excel
io = ExcelFile(io, storage_options=storage_options, engine=engine)
File "C:\Users\liwende\pandas\pandas\io\excel\_base.py", line 1063, in __init__
content_or_path=path_or_buffer, storage_options=storage_options
File "C:\Users\liwende\pandas\pandas\io\excel\_base.py", line 945, in inspect_excel_format
content_or_path, "rb", storage_options=storage_options, is_text=False
File "C:\Users\liwende\pandas\pandas\io\common.py", line 576, in get_handle
storage_options=storage_options,
File "C:\Users\liwende\pandas\pandas\io\common.py", line 378, in _get_filepath_or_buffer
raise ValueError(msg)
ValueError: Invalid file path or buffer object type: <class 'openpyxl.workbook.workbook.Workbook'>
```
@twoertwein
I suspect that https://github.com/pandas-dev/pandas/blob/master/pandas/io/excel/_base.py#L1072-L1077 (master) and https://github.com/pandas-dev/pandas/blob/1.2.x/pandas/io/excel/_base.py#L1065-L1070 (for pandas 1.2) should probably be executed only when no engine is specified.
However, the bigger issue is that inspect_excel_format should be able to pick up that this is an openpyxl workbook, which it doesn't, since it errors in get_handle (which makes sense, as an openpyxl workbook is not a buffer I think).
Yes, it could probably be fixed by something like:
```
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index 213be7c..fb6ca55 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -1071,7 +1071,7 @@ class ExcelFile:
if xlrd_version is not None and isinstance(path_or_buffer, xlrd.Book):
ext = "xls"
- else:
+ elif engine in (None, "xlrd", "auto"):
ext = inspect_excel_format(
content_or_path=path_or_buffer, storage_options=storage_options
)
```
Thanks for the report @ajkaijanaho! @twoertwein - seems to me like the patch should be improving inspect_excel_format itself instead of avoiding calling it.
> I will have time later today to look into this. Can you open/wrap the workbook with `ExcelFile` and then pass it to `read_excel`? I'm not too familiar with the ExcelFile syntax, something like the code below might avoid opening the file twice as a workaround for 1.2.1.
>
> ```python
> import pandas as pd
>
> with pd.ExcelFile('test.xlsx', engine="openpyxl") as excel:
> dataframe = pd.read_excel(excel)
> ```
Not really useful for us, because in our code the Workbook object is created somewhere else and passed around the call chain quite a bit before we get to Pandas. Rewriting all those calls to pass around ExcelFile is not really worth the effort. Our workaround is to stay with 1.1.5 for now, and it works well enough.
@twoertwein - I was wrong above, your suggested patch is much better. We should really avoid calling `inspect_excel_format` unless necessary in order to not have to check (and maintain) all sorts of dependency instances.
> @twoertwein - I was wrong above, your suggested patch is much better. We should really avoid calling `inspect_excel_format` unless necessary in order to not have to check (and maintain) all sorts of dependency instances.
Depending on which behavior is expected, this simple elif-patch is probably not enough. If a user provides a workbook compatible with one of the engines but does not specify an engine explicitly, do we need to auto-detect the engine from the workbook type? If we need that, it should probably go into `inspect_excel_format`.
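A rough sketch of what such type-based detection could look like (hypothetical helper, not part of any pandas release):
```python
def _engine_from_workbook(obj):
    # Return an engine name for already-open workbook objects; return None
    # so that paths/buffers keep going through inspect_excel_format.
    try:
        import openpyxl
        if isinstance(obj, openpyxl.Workbook):
            return "openpyxl"
    except ImportError:
        pass
    try:
        import xlrd
        if isinstance(obj, xlrd.Book):
            return "xlrd"
    except ImportError:
        pass
    return None
```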
@twoertwein: Comparing to 1.1.x:
```
if engine is None:
    engine = "xlrd"
    if isinstance(path_or_buffer, (BufferedIOBase, RawIOBase)):
        if _is_ods_stream(path_or_buffer):
            engine = "odf"
    else:
        ext = os.path.splitext(str(path_or_buffer))[-1]
        if ext == ".ods":
            engine = "odf"
if engine not in self._engines:
    raise ValueError(f"Unknown engine: {engine}")
```
It appears to me that passing a Workbook with `engine=None` would raise (need to test this though). Assuming that, the patch should be a minimal change: only restore support for an openpyxl Workbook when passing engine='openpyxl', and, more generally, avoid the unnecessary type-detection when engine is specified.
Supporting engine=None with Workbooks would be an additional feature, which might be welcome, and should go into 1.3 (or later). My only hesitation here is that it seems the implementation would have to special-case all engines and their individual workbook (and worksheet?) types, which I think is a bit undesirable. If there is an implementation that is free of this, then I'd certainly be +1.
> Depending on which behavior is expected, this simple elif-patch is probably not enough. If a user provides a workbook compatible with one of the engines but does not specify an engine explicitly, do we need to auto-detect the engine from the workbook type? If we need that, it should probably go into `inspect_excel_format`.
The [read_excel documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html) is quite explicit about this point: "engine str, default None If io is not a buffer or path, this must be set to identify io" | 2021-02-03T23:40:06Z | [] | [] |
Traceback (most recent call last):
File "c:/Users/akaijanaho/scratch/pandas-openpyxl-bug/bug.py", line 5, in <module>
read_excel(wb, engine="openpyxl")
File "C:\Users\akaijanaho\scratch\pandas-openpyxl-bug\venv\lib\site-packages\pandas\util\_decorators.py", line 299, in wrapper
return func(*args, **kwargs)
File "C:\Users\akaijanaho\scratch\pandas-openpyxl-bug\venv\lib\site-packages\pandas\io\excel\_base.py", line 336, in read_excel
io = ExcelFile(io, storage_options=storage_options, engine=engine)
File "C:\Users\akaijanaho\scratch\pandas-openpyxl-bug\venv\lib\site-packages\pandas\io\excel\_base.py", line 1057, in __init__
ext = inspect_excel_format(
File "C:\Users\akaijanaho\scratch\pandas-openpyxl-bug\venv\lib\site-packages\pandas\io\excel\_base.py", line 938, in inspect_excel_format
with get_handle(
File "C:\Users\akaijanaho\scratch\pandas-openpyxl-bug\venv\lib\site-packages\pandas\io\common.py", line 558, in get_handle
ioargs = _get_filepath_or_buffer(
File "C:\Users\akaijanaho\scratch\pandas-openpyxl-bug\venv\lib\site-packages\pandas\io\common.py", line 371, in _get_filepath_or_buffer
raise ValueError(msg)
ValueError: Invalid file path or buffer object type: <class 'openpyxl.workbook.workbook.Workbook'>
| 14,711 |
|||
pandas-dev/pandas | pandas-dev__pandas-39800 | fc9fdba6592bdb5d0d1147ce4d65639acd897565 | diff --git a/doc/source/whatsnew/v1.2.3.rst b/doc/source/whatsnew/v1.2.3.rst
--- a/doc/source/whatsnew/v1.2.3.rst
+++ b/doc/source/whatsnew/v1.2.3.rst
@@ -15,7 +15,7 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
--
+- Fixed regression in :func:`pandas.to_excel` raising ``KeyError`` when giving duplicate columns with ``columns`` attribute (:issue:`39695`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/io/formats/excel.py b/pandas/io/formats/excel.py
--- a/pandas/io/formats/excel.py
+++ b/pandas/io/formats/excel.py
@@ -475,7 +475,7 @@ def __init__(
if not len(Index(cols).intersection(df.columns)):
raise KeyError("passes columns are not ALL present dataframe")
- if len(Index(cols).intersection(df.columns)) != len(cols):
+ if len(Index(cols).intersection(df.columns)) != len(set(cols)):
# Deprecated in GH#17295, enforced in 1.0.0
raise KeyError("Not all names specified in 'columns' are found")
| BUG: DataFrame.to_excel() now raises if column parameter contains duplicates
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
```python
import pandas as pd
dft = pd.DataFrame({"A": [0, 1], "B": [10, 11]})
dft.to_excel(r"c:\test\test3.xlsx", columns=["A", "B", "A"])
```
#### Problem description
The example works with pandas 1.1.0, but not with pandas 1.2.1 any more. With 1.2.1 it raises:
```python-traceback
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "c:\test\venv4\lib\site-packages\pandas\core\generic.py", line 2177, in to_excel
formatter = ExcelFormatter(
File "c:\test\venv4\lib\site-packages\pandas\io\formats\excel.py", line 470, in __init__
raise KeyError("Not all names specified in 'columns' are found")
KeyError: "Not all names specified in 'columns' are found"
```
If the column argument doesn't contain duplicates (e.g. `columns=["A", "B"]`), it also works in 1.2.1.
In the documentation of .to_excel() I found no information about an intended change in behaviour.
#### Expected Output
I expected it to work like with pandas 1.1.0, producing an excel file with the following content:
![image](https://user-images.githubusercontent.com/25440862/107364440-ba3eec00-6adb-11eb-99c5-3abe55f07e9e.png)
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : 9d598a5e1eee26df95b3910e3f2934890d062caa
python : 3.9.1.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.18362
machine : AMD64
processor : Intel64 Family 6 Model 142 Stepping 10, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : de_DE.cp1252
pandas : 1.2.1
numpy : 1.20.0
pytz : 2021.1
dateutil : 2.8.1
pip : 21.0.1
setuptools : 53.0.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : 3.0.6
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
</details>
| Thanks @RagBlufThim for the report! I can also reproduce this on master.
This appears to be caused by #37374. cc @jbrockmendel (https://github.com/pandas-dev/pandas/blame/879d2fb8a36cfb52e8129e60b1f808cf8937a378/pandas/io/formats/excel.py#L478)
> The example works with pandas 1.1.0, but not with pandas 1.2.1 any more. With 1.2.1 it raises:
was also working in 1.1.4
first bad commit: [e99e5ab32c4e831e7bbac0346189f4d6d86a6225] BUG: Fix duplicates in intersection of multiindexes (#36927) cc @phofl
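That commit explains the failure: the guard in io/formats/excel.py compares the intersection length against `len(cols)`, and `Index.intersection` now deduplicates, hence the `len(set(cols))` fix in the diff above. A quick illustration:
```python
import pandas as pd

cols = ["A", "B", "A"]
columns = pd.Index(["A", "B"])

print(len(pd.Index(cols).intersection(columns)))  # 2 -- duplicate dropped
print(len(cols))                                  # 3 -> old check raises KeyError
print(len(set(cols)))                             # 2 -> fixed check passes
```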
Will take a look over the weekend | 2021-02-13T21:07:24Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "c:\test\venv4\lib\site-packages\pandas\core\generic.py", line 2177, in to_excel
formatter = ExcelFormatter(
File "c:\test\venv4\lib\site-packages\pandas\io\formats\excel.py", line 470, in __init__
raise KeyError("Not all names specified in 'columns' are found")
KeyError: "Not all names specified in 'columns' are found"
| 14,742 |
|||
pandas-dev/pandas | pandas-dev__pandas-4043 | bc038e7798bb1e8fbbefbafdeeb53cdfdf04a36a | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -252,8 +252,7 @@ pandas 0.11.1
- Fix running of bs4 tests when it is not installed (:issue:`3605`)
- Fix parsing of html table (:issue:`3606`)
- ``read_html()`` now only allows a single backend: ``html5lib`` (:issue:`3616`)
- - ``convert_objects`` with ``convert_dates='coerce'`` was parsing some single-letter strings
- into today's date
+ - ``convert_objects`` with ``convert_dates='coerce'`` was parsing some single-letter strings into today's date
- ``DataFrame.from_records`` did not accept empty recarrays (:issue:`3682`)
- ``DataFrame.to_csv`` will succeed with the deprecated option ``nanRep``, @tdsmith
- ``DataFrame.to_html`` and ``DataFrame.to_latex`` now accept a path for
@@ -276,10 +275,11 @@ pandas 0.11.1
- Indexing with a string with seconds resolution not selecting from a time index (:issue:`3925`)
- csv parsers would loop infinitely if ``iterator=True`` but no ``chunksize`` was
specified (:issue:`3967`), python parser failing with ``chunksize=1``
- - Fix index name not propogating when using ``shift``
- - Fixed dropna=False being ignored with multi-index stack (:issue:`3997`)
+ - Fix index name not propogating when using ``shift``
+ - Fixed dropna=False being ignored with multi-index stack (:issue:`3997`)
- Fixed flattening of columns when renaming MultiIndex columns DataFrame (:issue:`4004`)
- - Fix ``Series.clip`` for datetime series. NA/NaN threshold values will now throw ValueError (:issue:`3996`)
+ - Fix ``Series.clip`` for datetime series. NA/NaN threshold values will now throw ValueError (:issue:`3996`)
+ - Fixed insertion issue into DataFrame, after rename (:issue:`4032`)
.. _Gh3616: https://github.com/pydata/pandas/issues/3616
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -85,6 +85,8 @@ def set_ref_items(self, ref_items, maybe_rename=True):
"""
if not isinstance(ref_items, Index):
raise AssertionError('block ref_items must be an Index')
+ if maybe_rename == 'clear':
+ self._ref_locs = None
if maybe_rename:
self.items = ref_items.take(self.ref_locs)
self.ref_items = ref_items
@@ -1798,12 +1800,18 @@ def insert(self, loc, item, value, allow_duplicates=False):
if len(self.blocks) > 100:
self._consolidate_inplace()
+ elif new_items.is_unique:
+ self.set_items_clear(new_items)
self._known_consolidated = False
def set_items_norename(self, value):
self.set_axis(0, value, maybe_rename=False, check_axis=False)
+ def set_items_clear(self, value):
+ """ clear the ref_locs on all blocks """
+ self.set_axis(0, value, maybe_rename='clear', check_axis=False)
+
def _delete_from_all_blocks(self, loc, item):
""" delete from the items loc the item
the item could be in multiple blocks which could
@@ -1914,7 +1922,7 @@ def _add_new_block(self, item, value, loc=None):
# and reset
self._reset_ref_locs()
self._set_ref_locs(do_refs=True)
-
+
def _find_block(self, item):
self._check_have(item)
for i, block in enumerate(self.blocks):
| Some items were not contained in blocks AssertionError
The code below causes the DataFrame internals to get confused and throws an AssertionError on `print df.values`. The code looks contrived, but it's just a simplified version of something I was trying to do. It doesn't seem to matter whether any columns are actually renamed, so I simplified that also, but the rename calls do contribute to the problem.
```
import pandas as pd
df = pd.DataFrame({'a': [1, 2],
'b': [3, 4],
'c': [5, 6]})
df = df.set_index(['a', 'b'])
df = df.unstack()
df = df.rename(columns={'a': 'd'})
df = df.reset_index()
print df._data
df = df.rename(columns={})
print df._data
print df.values
```
```
BlockManager
Items: MultiIndex
[(u'a', u''), (u'c', 3), (u'c', 4)]
Axis 1: Int64Index([0, 1], dtype=int64)
FloatBlock: [(c, 3), (c, 4)], 2 x 2, dtype float64
IntBlock: [(a, )], 1 x 2, dtype int64
BlockManager
Items: MultiIndex
[(u'a', u''), (u'c', 3), (u'c', 4)]
Axis 1: Int64Index([0, 1], dtype=int64)
FloatBlock: [(a, ), (c, 3)], 2 x 2, dtype float64
IntBlock: [(a, )], 1 x 2, dtype int64
Traceback (most recent call last):
File "/home/mtk/bug.py", line 13, in <module>
print df.values
File "/Users/mtk/Source/pandas/pandas/core/frame.py", line 1779, in as_matrix
return self._data.as_matrix(columns).T
File "/Users/mtk/Source/pandas/pandas/core/internals.py", line 1513, in as_matrix
mat = self._interleave(self.items)
File "/Users/mtk/Source/pandas/pandas/core/internals.py", line 1549, in _interleave
raise AssertionError('Some items were not contained in blocks')
AssertionError: Some items were not contained in blocks
```
```
>>> pd.__version__
'0.11.1.dev-8a242d2'
```
| the 2nd rename should not be allowed and is prob causing an issue (eg state gets messed up)
I think `rename({})` should be allowed, as you might be renaming with a generated dictionary (which could be empty). But agree that's probably where the issue lies.
Here's a simpler snippet that reproduces the problem:
```
import pandas as pd
df = pd.DataFrame({'b': [1.1, 2.2]})
df = df.rename(columns={})
df.insert(0, 'a', [1, 2])
df = df.rename(columns={})
print df.values
```
To be clear, this has nothing to do with passing an empty dict to rename. I know a little about the internals now, but not enough to fix this one. I believe the bug is in insert. The first rename just happens to cause ref_locs to be calculated on the blocks. The insert method fails to update these ref_locs when a new column is inserted (block.ref_items is updated but block.ref_locs is not). The second rename then causes block.items to be calculated with incorrect ref_locs. Things go downhill from there.
This is a regression from 11.0.
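A toy illustration of that staleness, using the `ref_items.take(ref_locs)` pattern from the patch (simplified, not the actual internals):
```python
import pandas as pd

old_items = pd.Index(['b'])       # block originally holds column 'b'
ref_locs = [0]                    # ...cached as position 0 in old_items
new_items = pd.Index(['a', 'b'])  # items after df.insert(0, 'a', ...)

# Recomputing items as ref_items.take(ref_locs) with the stale cache
# now selects 'a' instead of 'b':
print(new_items.take(ref_locs))   # Index(['a'], dtype='object')
```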
| 2013-06-26T13:01:35Z | [] | [] |
Traceback (most recent call last):
File "/home/mtk/bug.py", line 13, in <module>
print df.values
File "/Users/mtk/Source/pandas/pandas/core/frame.py", line 1779, in as_matrix
return self._data.as_matrix(columns).T
File "/Users/mtk/Source/pandas/pandas/core/internals.py", line 1513, in as_matrix
mat = self._interleave(self.items)
File "/Users/mtk/Source/pandas/pandas/core/internals.py", line 1549, in _interleave
raise AssertionError('Some items were not contained in blocks')
AssertionError: Some items were not contained in blocks
| 14,756 |
|||
pandas-dev/pandas | pandas-dev__pandas-4228 | f4246fb5c8d359202fbc0e836a8c930acbba114a | diff --git a/pandas/io/stata.py b/pandas/io/stata.py
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -327,8 +327,16 @@ def _read_header(self):
typlist = [ord(self.path_or_buf.read(1)) for i in range(self.nvar)]
else:
typlist = [self.OLD_TYPE_MAPPING[self._decode_bytes(self.path_or_buf.read(1))] for i in range(self.nvar)]
- self.typlist = [self.TYPE_MAP[typ] for typ in typlist]
- self.dtyplist = [self.DTYPE_MAP[typ] for typ in typlist]
+
+ try:
+ self.typlist = [self.TYPE_MAP[typ] for typ in typlist]
+ except:
+ raise ValueError("cannot convert stata types [{0}]".format(','.join(typlist)))
+ try:
+ self.dtyplist = [self.DTYPE_MAP[typ] for typ in typlist]
+ except:
+ raise ValueError("cannot convert stata dtypes [{0}]".format(','.join(typlist)))
+
if self.format_version > 108:
self.varlist = [self._null_terminate(self.path_or_buf.read(33)) for i in range(self.nvar)]
else:
| sparc test_read_dta10: KeyError: 0 self.dtyplist = [self.DTYPE_MAP[typ] for typ in typlist]
```
======================================================================
ERROR: test_read_dta10 (pandas.io.tests.test_stata.StataTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_stata.py", line 205, in test_read_dta10
written_and_read_again = self.read_dta(path)
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_stata.py", line 41, in read_dta
return read_stata(file, convert_dates=True)
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/stata.py", line 38, in read_stata
reader = StataReader(filepath_or_buffer, encoding)
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/stata.py", line 305, in __init__
self._read_header()
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/stata.py", line 331, in _read_header
self.dtyplist = [self.DTYPE_MAP[typ] for typ in typlist]
KeyError: 0
```
on 0.12.0~rc1+git43-g7b2eaa4-1
can u post all of the variables in that last line
typlist and self.DTYPE_MAP
prob an endian issue
```
-> self.dtyplist = [self.DTYPE_MAP[typ] for typ in typlist]
(Pdb) print self.DTYPE_MAP
{1: 'a1', 2: 'a2', 3: 'a3', 4: 'a4', 5: 'a5', 6: 'a6', 7: 'a7', 8: 'a8', 9: 'a9', 10: 'a10', 11: 'a11', 12: 'a12', 13: 'a13', 14: 'a14', 15: 'a15', 16: 'a16', 17: 'a17', 18: 'a18', 19: 'a19', 20: 'a20', 21: 'a21', 22: 'a22', 23: 'a23', 24: 'a24', 25: 'a25', 26: 'a26', 27: 'a27', 28: 'a28', 29: 'a29', 30: 'a30', 31: 'a31', 32: 'a32', 33: 'a33', 34: 'a34', 35: 'a35', 36: 'a36', 37: 'a37', 38: 'a38', 39: 'a39', 40: 'a40', 41: 'a41', 42: 'a42', 43: 'a43', 44: 'a44', 45: 'a45', 46: 'a46', 47: 'a47', 48: 'a48', 49: 'a49', 50: 'a50', 51: 'a51', 52: 'a52', 53: 'a53', 54: 'a54', 55: 'a55', 56: 'a56', 57: 'a57', 58: 'a58', 59: 'a59', 60: 'a60', 61: 'a61', 62: 'a62', 63: 'a63', 64: 'a64', 65: 'a65', 66: 'a66', 67: 'a67', 68: 'a68', 69: 'a69', 70: 'a70', 71: 'a71', 72: 'a72', 73: 'a73', 74: 'a74', 75: 'a75', 76: 'a76', 77: 'a77', 78: 'a78', 79: 'a79', 80: 'a80', 81: 'a81', 82: 'a82', 83: 'a83', 84: 'a84', 85: 'a85', 86: 'a86', 87: 'a87', 88: 'a88', 89: 'a89', 90: 'a90', 91: 'a91', 92: 'a92', 93: 'a93', 94: 'a94', 95: 'a95', 96: 'a96', 97: 'a97', 98: 'a98', 99: 'a99', 100: 'a100', 101: 'a101', 102: 'a102', 103: 'a103', 104: 'a104', 105: 'a105', 106: 'a106', 107: 'a107', 108: 'a108', 109: 'a109', 110: 'a110', 111: 'a111', 112: 'a112', 113: 'a113', 114: 'a114', 115: 'a115', 116: 'a116', 117: 'a117', 118: 'a118', 119: 'a119', 120: 'a120', 121: 'a121', 122: 'a122', 123: 'a123', 124: 'a124', 125: 'a125', 126: 'a126', 127: 'a127', 128: 'a128', 129: 'a129', 130: 'a130', 131: 'a131', 132: 'a132', 133: 'a133', 134: 'a134', 135: 'a135', 136: 'a136', 137: 'a137', 138: 'a138', 139: 'a139', 140: 'a140', 141: 'a141', 142: 'a142', 143: 'a143', 144: 'a144', 145: 'a145', 146: 'a146', 147: 'a147', 148: 'a148', 149: 'a149', 150: 'a150', 151: 'a151', 152: 'a152', 153: 'a153', 154: 'a154', 155: 'a155', 156: 'a156', 157: 'a157', 158: 'a158', 159: 'a159', 160: 'a160', 161: 'a161', 162: 'a162', 163: 'a163', 164: 'a164', 165: 'a165', 166: 'a166', 167: 'a167', 168: 'a168', 169: 'a169', 170: 'a170', 171: 'a171', 172: 'a172', 173: 'a173', 174: 'a174', 175: 'a175', 176: 'a176', 177: 'a177', 178: 'a178', 179: 'a179', 180: 'a180', 181: 'a181', 182: 'a182', 183: 'a183', 184: 'a184', 185: 'a185', 186: 'a186', 187: 'a187', 188: 'a188', 189: 'a189', 190: 'a190', 191: 'a191', 192: 'a192', 193: 'a193', 194: 'a194', 195: 'a195', 196: 'a196', 197: 'a197', 198: 'a198', 199: 'a199', 200: 'a200', 201: 'a201', 202: 'a202', 203: 'a203', 204: 'a204', 205: 'a205', 206: 'a206', 207: 'a207', 208: 'a208', 209: 'a209', 210: 'a210', 211: 'a211', 212: 'a212', 213: 'a213', 214: 'a214', 215: 'a215', 216: 'a216', 217: 'a217', 218: 'a218', 219: 'a219', 220: 'a220', 221: 'a221', 222: 'a222', 223: 'a223', 224: 'a224', 225: 'a225', 226: 'a226', 227: 'a227', 228: 'a228', 229: 'a229', 230: 'a230', 231: 'a231', 232: 'a232', 233: 'a233', 234: 'a234', 235: 'a235', 236: 'a236', 237: 'a237', 238: 'a238', 239: 'a239', 240: 'a240', 241: 'a241', 242: 'a242', 243: 'a243', 244: 'a244', 251: <type 'numpy.int16'>, 252: <type 'numpy.int32'>, 253: <type 'numpy.int64'>, 254: <type 'numpy.float32'>, 255: <type 'numpy.float64'>}
(Pdb) print typlist
[253, 244, 244, 253, 255, 255, 105, 110, 100, 101, 120, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 115, 116, 114, 105, 110, 103, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 111, 98, 106, 101, 99, 116, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 105, 110, 116, 101, 103, 101, 114, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 102, 108, 111, 97, 116, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 100, 97, 116, 101, 116, 105, 109, 101, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 37, 57, 46, 48, 103, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 37, 50, 52, 52, 115, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 37, 50, 52, 52, 115, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 37, 57, 46, 48, 103, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 37, 49, 48, 46, 48, 103, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 37, 116, 99, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 115, 116, 114, 105, 110, 103, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 111, 98, 106, 101, 99, 116, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```
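Interestingly, long runs of that typlist decode as ASCII text from the sections that follow the type list (variable names live in 33-byte fields, per the header-reading code in the diff above), which would be consistent with the variable count being read with the wrong byte order so the parser overruns. For example:
```python
# A few of the suspicious typlist values above, decoded as characters:
print(''.join(map(chr, [105, 110, 100, 101, 120])))       # 'index'
print(''.join(map(chr, [115, 116, 114, 105, 110, 103])))  # 'string'
print(''.join(map(chr, [111, 98, 106, 101, 99, 116])))    # 'object'
```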
| 2013-07-13T00:44:41Z | [] | [] |
Traceback (most recent call last):
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_stata.py", line 205, in test_read_dta10
written_and_read_again = self.read_dta(path)
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_stata.py", line 41, in read_dta
return read_stata(file, convert_dates=True)
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/stata.py", line 38, in read_stata
reader = StataReader(filepath_or_buffer, encoding)
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/stata.py", line 305, in __init__
self._read_header()
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/stata.py", line 331, in _read_header
self.dtyplist = [self.DTYPE_MAP[typ] for typ in typlist]
KeyError: 0
| 14,787 |
|||
pandas-dev/pandas | pandas-dev__pandas-4231 | 0e80039e317585905f184750ad576e3cd54301fb | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -331,6 +331,8 @@ pandas 0.12
- Fixed bug in ``Series.where`` where broadcasting a single element input vector
to the length of the series resulted in multiplying the value
inside the input (:issue:`4192`)
+ - Fixed bug in plotting that wasn't raising on invalid colormap for
+ matplotlib 1.1.1 (:issue:`4215`)
pandas 0.11.0
=============
diff --git a/doc/source/v0.12.0.txt b/doc/source/v0.12.0.txt
--- a/doc/source/v0.12.0.txt
+++ b/doc/source/v0.12.0.txt
@@ -463,6 +463,8 @@ Bug Fixes
argument in ``to_datetime`` (:issue:`4152`)
- Fixed bug in ``PandasAutoDateLocator`` where ``invert_xaxis`` triggered
incorrectly ``MilliSecondLocator`` (:issue:`3990`)
+ - Fixed bug in plotting that wasn't raising on invalid colormap for
+ matplotlib 1.1.1 (:issue:`4215`)
See the :ref:`full release notes
<release>` or issue tracker
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -91,14 +91,17 @@
def _get_standard_kind(kind):
return {'density': 'kde'}.get(kind, kind)
-def _get_standard_colors(num_colors=None, colormap=None,
- color_type='default', color=None):
+def _get_standard_colors(num_colors=None, colormap=None, color_type='default',
+ color=None):
import matplotlib.pyplot as plt
if color is None and colormap is not None:
if isinstance(colormap, basestring):
import matplotlib.cm as cm
+ cmap = colormap
colormap = cm.get_cmap(colormap)
+ if colormap is None:
+ raise ValueError("Colormap {0} is not recognized".format(cmap))
colors = map(colormap, np.linspace(0, 1, num=num_colors))
elif color is not None:
if colormap is not None:
| test_invalid_colormap: ValueError not raised
```
======================================================================
FAIL: test_invalid_colormap (pandas.tests.test_graphics.TestDataFrameGroupByPlots)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.11.0+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tests/test_graphics.py", line 995, in test_invalid_colormap
self.assertRaises(ValueError, df.plot, colormap='invalid_colormap')
AssertionError: ValueError not raised
```
that is under Agg backend and matplotlib 1.1.1~rc2-1
| cc @qwhelan, @jtratner
anyone know why this is happening?
@yarikoptic did you say you were using a customized back end?
Also, can you show us what happens if you try to get a non-standard colormap from your version of matplotlib?
This test, IIRC, just checks that plotting bubbles up the error from the mpl call to get the colormap. So if the other tests are passing, to me this suggests that either it's a matplotlib bug OR the Agg backend handles unknown colormaps differently.
If there is variance in how colormaps are handled by backend, we should add a skip test at the start if directly asking for a bad colormap doesn't raise an error.
Should have been @yarikoptic above (sorry about that!)
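A sketch of that skip-test option (illustrative; the merged fix above instead raises inside pandas when `get_cmap` returns `None`, and `df` here stands for the frame built in the test body):
```python
import nose
import matplotlib
from distutils.version import LooseVersion

def test_invalid_colormap(self):
    if LooseVersion(matplotlib.__version__) < LooseVersion('1.2'):
        # mpl < 1.2 returns None for an unknown colormap instead of
        # raising, so the ValueError can never bubble up from df.plot.
        raise nose.SkipTest('matplotlib >= 1.2 required')
    self.assertRaises(ValueError, df.plot, colormap='invalid_colormap')
```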
For me even with the default interactive backend I get no exception while asking for the colormap
```
$> python -c "import matplotlib as mpl; print mpl.__version__;import matplotlib.cm as cm; print cm.get_cmap('invalid_colormap');"
1.1.1rc2
```
```
$> nosetests -s -v pandas/tests/test_graphics.py:TestDataFrameGroupByPlots.test_invalid_colormap
/usr/bin/nosetests:5: UserWarning: Module dap was already imported from None, but /usr/lib/python2.7/dist-packages is being added to sys.path
from pkg_resources import load_entry_point
test_invalid_colormap (pandas.tests.test_graphics.TestDataFrameGroupByPlots) ... FAIL
======================================================================
FAIL: test_invalid_colormap (pandas.tests.test_graphics.TestDataFrameGroupByPlots)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/yoh/deb/gits/pkg-exppsy/pandas/pandas/tests/test_graphics.py", line 995, in test_invalid_colormap
self.assertRaises(ValueError, df.plot, colormap='invalid_colormap')
AssertionError: ValueError not raised
----------------------------------------------------------------------
Ran 1 test in 0.319s
FAILED (failures=1)
```
@yarikoptic I think it's because of the version of `matplotlib` that you're using
what you should see
```
In [7]: cm.get_cmap('invalid')
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-7-ddb89b348247> in <module>()
----> 1 cm.get_cmap('invalid')
/home/phillip/.virtualenvs/pandas/lib/python2.7/site-packages/matplotlib/cm.pyc in get_cmap(name, lut)
153 return _generate_cmap(name, lut)
154
--> 155 raise ValueError("Colormap %s is not recognized" % name)
156
157 class ScalarMappable:
ValueError: Colormap invalid is not recognized
```
my versions
```
INSTALLED VERSIONS
------------------
Python: 2.7.5.final.0
OS: Linux 3.9.9-1-ARCH #1 SMP PREEMPT Wed Jul 3 22:45:16 CEST 2013 x86_64
LC_ALL: None
LANG: en_US.UTF-8
Cython: 0.19.1
Numpy: 1.7.1
Scipy: 0.12.0
statsmodels: 0.5.0.dev-95737a1
patsy: 0.1.0
scikits.timeseries: 0.91.3
dateutil: 2.1
pytz: 2013b
bottleneck: 0.6.0
PyTables: 3.0.0
numexpr: 2.1
matplotlib: 1.2.1
openpyxl: 1.6.2
xlrd: 0.9.2
xlwt: 0.7.5
sqlalchemy: 0.8.1
lxml: 3.2.1
bs4: 4.2.1
html5lib: 1.0b1
```
:phone: pandas version help hotline, how can i help you? :smile:
yep it's the version, just installed 1.1.1 and it returns `None`
i'll add a version check in there in the test
actually u know what i'll raise if the color map is `None`
wow mpl gained a hefty 29MB from 1.1.1 to 1.2.1...
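For reference, a minimal sketch of the guard the patch at the top of this record adds (helper name here is illustrative): older matplotlib releases return `None` for an unknown colormap name instead of raising, so the caller has to check explicitly.

``` python
import matplotlib.cm as cm

def get_cmap_checked(name):
    cmap = cm.get_cmap(name)
    if cmap is None:  # mpl 1.1.x silently returns None for unknown names
        raise ValueError("Colormap {0} is not recognized".format(name))
    return cmap
```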
| 2013-07-13T15:58:46Z | [] | [] |
Traceback (most recent call last):
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.11.0+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tests/test_graphics.py", line 995, in test_invalid_colormap
self.assertRaises(ValueError, df.plot, colormap='invalid_colormap')
AssertionError: ValueError not raised
| 14,788 |
|||
pandas-dev/pandas | pandas-dev__pandas-4232 | 70da8c33c3afe79111c1177adf7d9cd31fe25289 | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -106,6 +106,8 @@ pandas 0.12
- Added ``layout`` keyword to DataFrame.hist() for more customizable layout (:issue:`4050`)
- Timestamp.min and Timestamp.max now represent valid Timestamp instances instead
of the default datetime.min and datetime.max (respectively), thanks @SleepingPills
+ - ``read_html`` now raises when no tables are found and BeautifulSoup==4.2.0
+ is detected (:issue:`4214`)
**API Changes**
diff --git a/doc/source/v0.12.0.txt b/doc/source/v0.12.0.txt
--- a/doc/source/v0.12.0.txt
+++ b/doc/source/v0.12.0.txt
@@ -344,6 +344,9 @@ Other Enhancements
- Timestamp.min and Timestamp.max now represent valid Timestamp instances instead
of the default datetime.min and datetime.max (respectively), thanks @SleepingPills
+ - ``read_html`` now raises when no tables are found and BeautifulSoup==4.2.0
+ is detected (:issue:`4214`)
+
Experimental Features
~~~~~~~~~~~~~~~~~~~~~
diff --git a/pandas/io/html.py b/pandas/io/html.py
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -8,23 +8,18 @@
import numbers
import urllib2
import urlparse
-import contextlib
import collections
-
-try:
- from importlib import import_module
-except ImportError:
- import_module = __import__
+from distutils.version import LooseVersion
import numpy as np
from pandas import DataFrame, MultiIndex, isnull
-from pandas.io.common import _is_url
+from pandas.io.common import _is_url, urlopen
try:
- import_module('bs4')
+ import bs4
except ImportError:
_HAS_BS4 = False
else:
@@ -32,7 +27,7 @@
try:
- import_module('lxml')
+ import lxml
except ImportError:
_HAS_LXML = False
else:
@@ -40,7 +35,7 @@
try:
- import_module('html5lib')
+ import html5lib
except ImportError:
_HAS_HTML5LIB = False
else:
@@ -119,7 +114,7 @@ def _read(io):
"""
if _is_url(io):
try:
- with contextlib.closing(urllib2.urlopen(io)) as url:
+ with urlopen(io) as url:
raw_text = url.read()
except urllib2.URLError:
raise ValueError('Invalid URL: "{0}"'.format(io))
@@ -131,7 +126,8 @@ def _read(io):
elif isinstance(io, basestring):
raw_text = io
else:
- raise ValueError("Cannot read object of type '{0}'".format(type(io)))
+ raise TypeError("Cannot read object of type "
+ "'{0.__class__.__name__!r}'".format(io))
return raw_text
@@ -414,6 +410,7 @@ def _parse_tables(self, doc, match, attrs):
element_name = self._strainer.name
tables = doc.find_all(element_name, attrs=attrs)
if not tables:
+ # known sporadically working release
raise AssertionError('No tables found')
mts = [table.find(text=match) for table in tables]
@@ -429,7 +426,8 @@ def _parse_tables(self, doc, match, attrs):
def _setup_build_doc(self):
raw_text = _read(self.io)
if not raw_text:
- raise AssertionError('No text parsed from document')
+ raise AssertionError('No text parsed from document: '
+ '{0}'.format(self.io))
return raw_text
def _build_doc(self):
@@ -721,6 +719,14 @@ def _parser_dispatch(flavor):
raise ImportError("html5lib not found please install it")
if not _HAS_BS4:
raise ImportError("bs4 not found please install it")
+ if bs4.__version__ == LooseVersion('4.2.0'):
+ raise AssertionError("You're using a version"
+ " of BeautifulSoup4 (4.2.0) that has been"
+ " known to cause problems on certain"
+ " operating systems such as Debian. "
+ "Please install a version of"
+ " BeautifulSoup4 != 4.2.0, both earlier"
+ " and later releases will work.")
else:
if not _HAS_LXML:
raise ImportError("lxml not found please install it")
| read_html failing in many tests: AssertionError: No tables found
typical error with
```
======================================================================
FAIL: test_banklist (pandas.io.tests.test_html.TestReadHtmlBase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.11.0+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_html.py", line 114, in test_banklist
attrs={'id': 'table'})
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.11.0+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_html.py", line 67, in run_read_html
return read_html(*args, **kwargs)
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.11.0+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/html.py", line 900, in read_html
attrs)
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.11.0+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/html.py", line 769, in _parse
raise retained
AssertionError: No tables found
```
| I guess another related failure
```
======================================================================
FAIL: pandas.io.tests.test_html.test_bs4_finds_tables
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.11.0+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_html.py", line 453, in test_bs4_finds_tables
assert get_elements_from_url(filepath, 'table')
AssertionError
```
can you show ci/print_versions.py?
Here is what I have ATM (without running the build with adjusted matplotlib backend and PYTHONPATH... which shouldn't matter -- just want to make clear if I am not missing anything):
```
$> ci/print_versions.py

INSTALLED VERSIONS
------------------
Python: 2.7.5.final.0
OS: Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2 x86_64
LC_ALL: None
LANG: en_US
Cython: 0.19
Numpy: 1.7.1
Scipy: 0.10.1
statsmodels: 0.4.2
patsy: Not installed
scikits.timeseries: Not installed
dateutil: 1.5
pytz: 2012c
bottleneck: Not installed
PyTables: 2.3.1
numexpr: 2.0.1
matplotlib: 1.1.1rc2
openpyxl: 1.6.1
xlrd: 0.6.1
xlwt: 0.7.4
sqlalchemy: 0.7.9
lxml: 3.2.0
bs4: 4.2.0
html5lib: 0.95-dev
```
@cpcloud ?
@yarikoptic this is master correct?
7097368 should have fixed these issues
it looks like you're using an older version of html5lib. i believe 1.0b2 is out.
also, check out the [reading html gotchas](http://pandas.pydata.org/pandas-docs/dev/gotchas.html#html-table-parsing)
yes -- master 0.11.0+git43-g7b2eaa4
thanks for the fix -- I will check it out
html5lib -- well, I have the freshiest non-beta release ;) thanks for the note though!
i'll try this on a vagrant box
the commit i referenced above is already in master
@yarikoptic is it of any significance that your version says 0.11.0?
``` sh
$ git describe
v0.12.0rc1-43-g7b2eaa4
```
@yarikoptic there are no guarantees that your `bs4` version will work...
see the [optional dependencies docs](http://pandas.pydata.org/pandas-docs/dev/install.html#optional-dependencies)
also your `lxml` version _might_ need to be upgraded.
i would say: leave html5lib alone for now that should be fine, but change your `bs4` from `4.2.0` to either `4.2.1`, `4.1.3` or `4.0.2`. then run tests. if `lxml` still fails upgrade to `3.2.1`
if none of this works then i'll mark it as a bug
there's a release of `bs4` that really shouldn't have existed, i believe it is `4.2.0`.
fyi travis installs `bs4` 4.0.2 and that works...i'm not sure how different ubuntu 12.04 is from debian sid, but i imagine they share a lot of similarity, i'm sure you know _much_ more than i about this
I think @cpcloud should set up a version dependency hotline, kind of like AA :)
Hello, this is `read_html`-one-one, what's your emergency?
ah -- thanks -- should have been indeed 0.12.0~rc1+git... as the Debian
perspective version -- fixed in my debian branch.
excellent!
does that mean `bs4==4.2.0` works for you?
yikes -- good to know... well at least in current Debian stable we have
4.1.0 -- is that good enough? ;-)
that should be ok, as long as it passes for you. does this also fix #4212? if so please close both, thanks for the report :smile:
also the other issues you've raised today might be fixed now that your version is correct
sorry -- I guess my mental pipelining broke -- what should have fixed my issue? downgrade of bs4 to < 4.2.0?
that commit mentioned https://github.com/pydata/pandas/commit/7097368 has been in master for a while so was already included in the version I have built -- or am I wrong?
The bs4 downgrade should be the fix.
ok -- will give it a try. but as for correcting the reports, it would be better if pandas internally switched between backends / skipped the tests when a broken bs4 is present; otherwise it becomes impractical for me to build across all the debian/ubuntu releases that ship the broken bs4
btw -- what about bs4 4.2.1 -- should that be "good enough"?
Re bs4 skip good idea. I'll add a warning to the code as well since iirc 4.0.2 worked fine on my arch Linux box.
4.2.1 should be fine.
I mean 4.2.0 worked fine.
... so bs4 4.2.0 is not at fault? (anyways -- rebuilding/testing now with 4.2.1 installed)
no bs4 4.2.0 _is_ at fault
then what "I mean 4.2.0 worked fine. " is about? ;) or it is at fault only in some deployments?
works fine on my arch linux machine, but fails on debian distros, not sure exactly why
let me check that though...
wow -- with 4.2.1 only #4215 is left failing:
```
======================================================================
FAIL: test_invalid_colormap (pandas.tests.test_graphics.TestDataFrameGroupByPlots)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.12.0~rc1+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tests/test_graphics.py", line 995, in test_invalid_colormap
    self.assertRaises(ValueError, df.plot, colormap='invalid_colormap')
AssertionError: ValueError not raised
----------------------------------------------------------------------
Ran 3622 tests in 743.502s
FAILED (SKIP=83, failures=1)
```
that's good news! not sure where that is coming from...i'll see what i can do about it
@cpcloud ok....you are going to add a skip test / warning message ? if 4.2.0 is installed (maybe just raise in `read_html`...hey user your 4.2.0 is broken, this won't work.....
i'm going to actually raise if no tables found and `bs4==4.2.0`, since there's nothing else to fallback on after that. i've cooked up a fairly informative error message
hard to test for this...we don't have an extensive list of OSes that work with bs4
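For what it's worth, a hedged smoke test one could run locally to see whether the installed bs4 exhibits the problem (this mirrors what `test_bs4_finds_tables` asserts; exact behavior may vary by OS/parser combination):

``` python
from bs4 import BeautifulSoup

# a trivial table that any working bs4 install should be able to find
soup = BeautifulSoup('<table><tr><td>1</td></tr></table>')
print(bool(soup.find_all('table')))  # reported False on broken 4.2.0 installs
```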
| 2013-07-13T16:54:53Z | [] | [] |
Traceback (most recent call last):
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.11.0+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_html.py", line 114, in test_banklist
attrs={'id': 'table'})
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.11.0+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_html.py", line 67, in run_read_html
return read_html(*args, **kwargs)
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.11.0+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/html.py", line 900, in read_html
attrs)
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.11.0+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/html.py", line 769, in _parse
raise retained
AssertionError: No tables found
| 14,789 |
|||
pandas-dev/pandas | pandas-dev__pandas-4257 | 404dfab8b25206bed554a3a7fa9c1a87f27fa68b | read_html failing in many tests: AssertionError: No tables found
typical error with
```
======================================================================
FAIL: test_banklist (pandas.io.tests.test_html.TestReadHtmlBase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.11.0+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_html.py", line 114, in test_banklist
attrs={'id': 'table'})
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.11.0+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_html.py", line 67, in run_read_html
return read_html(*args, **kwargs)
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.11.0+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/html.py", line 900, in read_html
attrs)
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.11.0+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/html.py", line 769, in _parse
raise retained
AssertionError: No tables found
```
| I guess another related failure
```
======================================================================
FAIL: pandas.io.tests.test_html.test_bs4_finds_tables
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.11.0+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_html.py", line 453, in test_bs4_finds_tables
assert get_elements_from_url(filepath, 'table')
AssertionError
```
can you show ci/print_versions.py?
Here is what I have ATM (without running the build with adjusted matplotlib backend and PYTHONPATH... which shouldn't matter -- just want to make clear if I am not missing anything):
```
$> ci/print_versions.py

INSTALLED VERSIONS
------------------
Python: 2.7.5.final.0
OS: Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2 x86_64
LC_ALL: None
LANG: en_US
Cython: 0.19
Numpy: 1.7.1
Scipy: 0.10.1
statsmodels: 0.4.2
patsy: Not installed
scikits.timeseries: Not installed
dateutil: 1.5
pytz: 2012c
bottleneck: Not installed
PyTables: 2.3.1
numexpr: 2.0.1
matplotlib: 1.1.1rc2
openpyxl: 1.6.1
xlrd: 0.6.1
xlwt: 0.7.4
sqlalchemy: 0.7.9
lxml: 3.2.0
bs4: 4.2.0
html5lib: 0.95-dev
```
@cpcloud ?
@yarikoptic this is master correct?
7097368 should have fixed these issues
it looks like you're using an older version of html5lib. i believe 1.0b2 is out.
also, check out the [reading html gotchas](http://pandas.pydata.org/pandas-docs/dev/gotchas.html#html-table-parsing)
yes -- master 0.11.0+git43-g7b2eaa4
thanks for the fix -- I will check it out
html5lib -- well, I have the freshiest non-beta release ;) thanks for the note though!
i'll try this on a vagrant box
the commit i referenced above is already in master
@yarikoptic is it of any significance that your version says 0.11.0?
``` sh
$ git describe
v0.12.0rc1-43-g7b2eaa4
```
@yarikoptic there are no guarantees that your `bs4` version will work...
see the [optional dependencies docs](http://pandas.pydata.org/pandas-docs/dev/install.html#optional-dependencies)
also your `lxml` version _might_ need to be upgraded.
i would say: leave html5lib alone for now that should be fine, but change your `bs4` from `4.2.0` to either `4.2.1`, `4.1.3` or `4.0.2`. then run tests. if `lxml` still fails upgrade to `3.2.1`
if none of this works then i'll mark it as a bug
there's a release of `bs4` that really shouldn't have existed, i believe it is `4.2.0`.
fyi travis installs `bs4` 4.0.2 and that works...i'm not sure how different ubuntu 12.04 is from debian sid, but i imagine they share a lot of similarity, i'm sure you know _much_ more than i about this
I think @cpcloud should set up a version dependency hotline, kind of like AA :)
Hello, this is `read_html`-one-one, what's your emergency?
ah -- thanks -- should have been indeed 0.12.0~rc1+git... as the Debian
perspective version -- fixed in my debian branch.
excellent!
does that mean `bs4==4.2.0` works for you?
yikes -- good to know... well at least in current Debian stable we have
4.1.0 -- is that good enough? ;-)
that should be ok, as long as it passes for you. does this also fix #4212? if so please close both, thanks for the report :smile:
also the other issues you've raised today might be fixed now that your version is correct
sorry -- I guess my mental pipelining broke -- what should have fixed my issue? downgrade of bs4 to < 4.2.0?
that commit mentioned https://github.com/pydata/pandas/commit/7097368 has been in master for a while so was already included in the version I have built -- or am I wrong?
The bs4 downgrade should be the fix.
ok -- will give it a try. but as for correcting the reports, it would be better if pandas internally switched between backends / skipped the tests when a broken bs4 is present; otherwise it becomes impractical for me to build across all the debian/ubuntu releases that ship the broken bs4
btw -- what about bs4 4.2.1 -- should that be "good enough"?
Re bs4 skip good idea. I'll add a warning to the code as well since iirc 4.0.2 worked fine on my arch Linux box.
4.2.1 should be fine.
I mean 4.2.0 worked fine.
... so bs4 4.2.0 is not at fault? (anyways -- rebuilding/testing now with 4.2.1 installed)
no bs4 4.2.0 _is_ at fault
then what "I mean 4.2.0 worked fine. " is about? ;) or it is at fault only in some deployments?
works fine on my arch linux machine, but fails on debian distros, not sure exactly why
let me check that though...
wow -- with 4.2.1 only #4215 is left failing:
```
======================================================================
FAIL: test_invalid_colormap (pandas.tests.test_graphics.TestDataFrameGroupByPlots)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.12.0~rc1+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tests/test_graphics.py", line 995, in test_invalid_colormap
    self.assertRaises(ValueError, df.plot, colormap='invalid_colormap')
AssertionError: ValueError not raised
----------------------------------------------------------------------
Ran 3622 tests in 743.502s
FAILED (SKIP=83, failures=1)
```
that's good news! not sure where that is coming from...i'll see what i can do about it
@cpcloud ok....you are going to add a skip test / warning message ? if 4.2.0 is installed (maybe just raise in `read_html`...hey user your 4.2.0 is broken, this won't work.....
i'm going to actually raise if no tables found and `bs4==4.2.0`, since there's nothing else to fallback on after that. i've cooked up a fairly informative error message
hard to test for this...we don't have an extensive list of OSes that work with bs4
with 0.12.0~rc1+git79-g50eff60 and bs4 4.2.0 installed (on sparc)
```
======================================================================
FAIL: pandas.io.tests.test_html.test_bs4_finds_tables
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_html.py", line 472, in test_bs4_finds_tables
assert get_elements_from_url(filepath, 'table')
AssertionError
```
ah -- just now noticed that it is closed -- let me know if I need to open a new one
4.2.0 won't work.
then test should be skipped?
On Mon, 15 Jul 2013, Phillip Cloud wrote:
> 4.2.0 won't work.
>
> —
> Reply to this email directly or [1]view it on GitHub.
>
> References
>
> Visible links
> 1. https://github.com/pydata/pandas/issues/4214#issuecomment-21018914
##
Yaroslav O. Halchenko, Ph.D.
http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
Senior Research Associate, Psychological and Brain Sciences Dept.
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419
WWW: http://www.linkedin.com/in/yarik
R u sure u have master?
as I have mentioned, I had 50eff60, which was master a few hours back; now there are some new changes:
```
*$> git log --pretty=oneline 0.12.0~rc1+git79-g50eff60..origin/master
1e69dade1309b494e43f26f9831326c2ce63f7de Merge pull request #4248 from cpcloud/mpl-1.1.1-build
5be1e5a3f27e43c8b8cae57fed684e62587580fb Merge pull request #4243 from kjordahl/master
5b908a5a1d801f67b78d179b934f860bffe37533 BLD: use mpl 1.1.1 in python 2.7 production travis build
53b7a74fd756cfd2e6fca58326a577558ab97909 Merge pull request #4247 from jreback/series_dups
bbcfd929205ef79c00a403050f107c5e05e0b300 ENH: implement non-unique indexing in series (GH4246)
56e6b173f1c94162bfe5b7dfacb58062011dc96b DOC: Fix typos in CONTRIBUTING.md
```
| 2013-07-16T13:15:20Z | [] | [] |
Traceback (most recent call last):
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.11.0+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_html.py", line 114, in test_banklist
attrs={'id': 'table'})
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.11.0+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_html.py", line 67, in run_read_html
return read_html(*args, **kwargs)
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.11.0+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/html.py", line 900, in read_html
attrs)
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.11.0+git43-g7b2eaa4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/html.py", line 769, in _parse
raise retained
AssertionError: No tables found
| 14,794 |
||||
pandas-dev/pandas | pandas-dev__pandas-4267 | df5af03d4ff7d31fac00daa84cff0bc223a846d9 | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -341,6 +341,7 @@ pandas 0.12
(:issue:`4226`)
- Fixed bug in initializing ``DatetimeIndex`` with an array of strings
in a certain time zone (:issue:`4229`)
+ - Fixed bug where html5lib wasn't being properly skipped (:issue:`4265`)
pandas 0.11.0
=============
diff --git a/doc/source/v0.12.0.txt b/doc/source/v0.12.0.txt
--- a/doc/source/v0.12.0.txt
+++ b/doc/source/v0.12.0.txt
@@ -474,6 +474,7 @@ Bug Fixes
(:issue:`4226`)
- Fixed bug in initializing ``DatetimeIndex`` with an array of strings
in a certain time zone (:issue:`4229`)
+ - Fixed bug where html5lib wasn't being properly skipped (:issue:`4265`)
See the :ref:`full release notes
<release>` or issue tracker
| test_bs4_version_fails: ImportError: html5lib not found please install it
```
======================================================================
ERROR: pandas.io.tests.test_html.test_bs4_version_fails
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/nose/case.py", line 198, in runTest
self.test(*self.arg)
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/io/tests/test_html.py", line 83, in test_bs4_version_fails
flavor='bs4')
File "/usr/lib/python3.2/unittest/case.py", line 557, in assertRaises
callableObj(*args, **kwargs)
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/io/html.py", line 906, in read_html
attrs)
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/io/html.py", line 765, in _parse
parser = _parser_dispatch(flav)
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/io/html.py", line 719, in _parser_dispatch
raise ImportError("html5lib not found please install it")
ImportError: html5lib not found please install it
```
on 4c2d050
there is no python3-html5lib on any debian system yet
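Note that the patch above only touches the release notes, so the actual skip logic isn't shown here. A plausible guard in the usual pandas test style (an illustrative sketch, not the real change) would be:

``` python
import nose

try:
    import html5lib  # noqa
    _HAS_HTML5LIB = True
except ImportError:
    _HAS_HTML5LIB = False

def _skip_if_no_html5lib():
    # raise SkipTest so nose marks the test as skipped rather than failed
    if not _HAS_HTML5LIB:
        raise nose.SkipTest("html5lib not installed, skipping")
```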
| 2013-07-16T21:29:15Z | [] | [] |
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/nose/case.py", line 198, in runTest
self.test(*self.arg)
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/io/tests/test_html.py", line 83, in test_bs4_version_fails
flavor='bs4')
File "/usr/lib/python3.2/unittest/case.py", line 557, in assertRaises
callableObj(*args, **kwargs)
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/io/html.py", line 906, in read_html
attrs)
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/io/html.py", line 765, in _parse
parser = _parser_dispatch(flav)
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/io/html.py", line 719, in _parser_dispatch
raise ImportError("html5lib not found please install it")
ImportError: html5lib not found please install it
| 14,797 |
||||
pandas-dev/pandas | pandas-dev__pandas-4269 | df5af03d4ff7d31fac00daa84cff0bc223a846d9 | diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -581,6 +581,10 @@ cdef inline bint is_timestamp(object o):
cdef class _NaT(_Timestamp):
+ def __hash__(_NaT self):
+ # py3k needs this defined here
+ return hash(self.value)
+
def __richcmp__(_NaT self, object other, int op):
# if not isinstance(other, (_NaT, _Timestamp)):
# raise TypeError('Cannot compare %s with NaT' % type(other))
| python3 test_value_counts_nunique : TypeError: unhashable type: 'NaTType'
on sparc 4c2d050
```
======================================================================
ERROR: test_value_counts_nunique (pandas.tests.test_series.TestSeries)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/tests/test_series.py", line 2633, in test_value_counts_nunique
result = s.value_counts()
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/core/series.py", line 1422, in value_counts
normalize=normalize)
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/core/algorithms.py", line 186, in value_counts
keys, counts = htable.value_count_object(values, mask)
File "hashtable.pyx", line 935, in pandas.hashtable.value_count_object (pandas/hashtable.c:14937)
TypeError: unhashable type: 'NaTType'
```
| that's a weird error....`NaTType` hashes an `int` or it hashes its `datetime` value.
what happens when you try just
``` python
import pandas as pd
hash(pd.NaT)
```
?
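Some background on why the patch defines `__hash__` explicitly (a general Python rule, stated here rather than taken from the thread): on Python 3 a class that defines equality comparisons without also defining `__hash__` becomes unhashable, and plausibly the same mechanism bites the Cython `__richcmp__` on `_NaT`. A tiny illustration:

``` python
class Eq(object):
    def __eq__(self, other):
        return True

# works on Python 2 (default id-based hash);
# on Python 3 raises TypeError: unhashable type, because defining
# __eq__ implicitly sets __hash__ to None
hash(Eq())
```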
| 2013-07-16T21:50:48Z | [] | [] |
Traceback (most recent call last):
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/tests/test_series.py", line 2633, in test_value_counts_nunique
result = s.value_counts()
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/core/series.py", line 1422, in value_counts
normalize=normalize)
File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/core/algorithms.py", line 186, in value_counts
keys, counts = htable.value_count_object(values, mask)
File "hashtable.pyx", line 935, in pandas.hashtable.value_count_object (pandas/hashtable.c:14937)
TypeError: unhashable type: 'NaTType'
| 14,798 |
|||
pandas-dev/pandas | pandas-dev__pandas-4276 | d070a1fd797f6b8eb209b26f36519b1e7422cc01 | python3.2: test_html ImportError: html5lib not found please install it
Now (d070a1f) that #4265 is fixed, all the test_html tests fail similarly
```
======================================================================
ERROR: test_bad_url_protocol (pandas.io.tests.test_html.TestReadHtmlBase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.12.0~rc1+git101-gd070a1f/debian/tmp/usr/lib/python3/dist-packages/pandas/io/tests/test_html.py", line 290, in test_bad_url_protocol
'.*Water.*')
File "/usr/lib/python3.2/unittest/case.py", line 557, in assertRaises
callableObj(*args, **kwargs)
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.12.0~rc1+git101-gd070a1f/debian/tmp/usr/lib/python3/dist-packages/pandas/io/tests/test_html.py", line 91, in run_read_html
return read_html(*args, **kwargs)
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.12.0~rc1+git101-gd070a1f/debian/tmp/usr/lib/python3/dist-packages/pandas/io/html.py", line 906, in read_html
attrs)
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.12.0~rc1+git101-gd070a1f/debian/tmp/usr/lib/python3/dist-packages/pandas/io/html.py", line 765, in _parse
parser = _parser_dispatch(flav)
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.12.0~rc1+git101-gd070a1f/debian/tmp/usr/lib/python3/dist-packages/pandas/io/html.py", line 719, in _parser_dispatch
raise ImportError("html5lib not found please install it")
ImportError: html5lib not found please install it
```
| and 1 more which probably belongs here as well
```
======================================================================
ERROR: pandas.io.tests.test_html.test_bs4_finds_tables
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/nose/case.py", line 198, in runTest
self.test(*self.arg)
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.12.0~rc1+git101-gd070a1f/debian/tmp/usr/lib/python3/dist-packages/pandas/io/tests/test_html.py", line 479, in test_bs4_finds_tables
assert get_elements_from_url(filepath, 'table')
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.12.0~rc1+git101-gd070a1f/debian/tmp/usr/lib/python3/dist-packages/pandas/io/tests/test_html.py", line 470, in get_elements_from_url
soup = BeautifulSoup(f, features='html5lib')
File "/usr/lib/python3/dist-packages/bs4/__init__.py", line 155, in __init__
% ",".join(features))
bs4.FeatureNotFound: Couldn't find a tree builder with the features you requested: html5lib. Do you need to install a parser library?
```
| 2013-07-17T13:34:28Z | [] | [] |
Traceback (most recent call last):
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.12.0~rc1+git101-gd070a1f/debian/tmp/usr/lib/python3/dist-packages/pandas/io/tests/test_html.py", line 290, in test_bad_url_protocol
'.*Water.*')
File "/usr/lib/python3.2/unittest/case.py", line 557, in assertRaises
callableObj(*args, **kwargs)
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.12.0~rc1+git101-gd070a1f/debian/tmp/usr/lib/python3/dist-packages/pandas/io/tests/test_html.py", line 91, in run_read_html
return read_html(*args, **kwargs)
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.12.0~rc1+git101-gd070a1f/debian/tmp/usr/lib/python3/dist-packages/pandas/io/html.py", line 906, in read_html
attrs)
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.12.0~rc1+git101-gd070a1f/debian/tmp/usr/lib/python3/dist-packages/pandas/io/html.py", line 765, in _parse
parser = _parser_dispatch(flav)
File "/home/yoh/deb/gits/pkg-exppsy/build-area/pandas-0.12.0~rc1+git101-gd070a1f/debian/tmp/usr/lib/python3/dist-packages/pandas/io/html.py", line 719, in _parser_dispatch
raise ImportError("html5lib not found please install it")
ImportError: html5lib not found please install it
| 14,800 |
||||
pandas-dev/pandas | pandas-dev__pandas-4356 | 5d2b85fbb7e13a6e2b1caf9ef378daa1e21dee16 | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -52,6 +52,8 @@ pandas 0.13
(:issue:`4102`, :issue:`4014`) in ``*.hist`` plotting methods
- Fixed bug in ``PeriodIndex.map`` where using ``str`` would return the str
representation of the index (:issue:`4136`)
+ - Fix running of stata IO tests. Now uses temporary files to write
+ (:issue:`4353`)
pandas 0.12
===========
diff --git a/doc/source/v0.13.0.txt b/doc/source/v0.13.0.txt
--- a/doc/source/v0.13.0.txt
+++ b/doc/source/v0.13.0.txt
@@ -30,6 +30,9 @@ Bug Fixes
- Fixed bug in ``PeriodIndex.map`` where using ``str`` would return the str
representation of the index (:issue:`4136`)
+ - Fix running of stata IO tests. Now uses temporary files to write
+ (:issue:`4353`)
+
See the :ref:`full release notes
<release>` or issue tracker
on GitHub for a complete list.
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -86,7 +86,7 @@ def set_trace():
#------------------------------------------------------------------------------
# contextmanager to ensure the file cleanup
@contextmanager
-def ensure_clean(filename = None):
+def ensure_clean(filename=None):
# if we are not passed a filename, generate a temporary
if filename is None:
filename = tempfile.mkstemp()[1]
| TST: Testing an installed package shouldn't require root permissions
Trying to run `nosetests3 pandas` on an installed pandas package, I get:
```
ERROR: test_read_dta10 (pandas.io.tests.test_stata.StataTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/pandas/io/tests/test_stata.py", line 207, in test_read_dta10
original.to_stata(path, {'datetime': 'tc'}, False)
File "/usr/lib/python3/dist-packages/pandas/core/frame.py", line 1503, in to_stata
writer = StataWriter(fname,self,convert_dates=convert_dates, encoding=encoding, byteorder=byteorder)
File "/usr/lib/python3/dist-packages/pandas/io/stata.py", line 745, in __init__
self._file = _open_file_binary_write(fname, self._encoding)
File "/usr/lib/python3/dist-packages/pandas/io/stata.py", line 577, in _open_file_binary_write
return open(fname, "wb")
PermissionError: [Errno 13] Permission denied: '/usr/lib/python3/dist-packages/pandas/io/tests/data/stata10.dta'
```
... and other similar errors. Maybe it'll be a good idea to fix the tests so that they can be run on an installed package without needing to modify the actually installed files.
| If you install the package locally then you don't need root permissions. If you install with `sudo` then you'll need root permissions.
The tests themselves have nothing to do with the fact that they are being run as root, so it's not clear what you mean by
> fix the tests so that they can be run on an installed package without needing to modify the actually installed files
see #761
@cpcloud Actually I use Debian package here, installed via package manager. Also, did you really see the traceback above? The test fails because it doesn't have permission to modify a file in `/usr/lib` — which **is** related to them being run as non-root.
If I didn't convince you, this failure can also be seen on [Ubuntu's Jenkins server](https://jenkins.qa.ubuntu.com/view/Saucy/view/AutoPkgTest/job/saucy-adt-pandas/7/ARCH=amd64,label=adt/console).
Sorry, I guess I wasn't being clear. IIRC, most package managers require root permissions to install anything. Anyway, you're right. The tests should make liberal use of the `tempfile` module when writing files.
@mitya57 I am pretty sure you are running with < 0.12. This was fixed to use a temporary (non permissioned) in 0.12.
@jreback according to this line the jenkins server above is running 0.12
```
+ buildtree=/tmp/tmp.KLHag020rp/dsc0-build/pandas-0.12.0~rc1+git127-gec8920a/
```
although not sure if that is before or after the change u mentioned
that line is near the bottom of the page
Yeah, I'm using the same snapshot (ec8920a, 5-day-old), and don't see anything related in newer commits...
sorry....this actually specifies a direct path (which is wrong)..so yes...marking as a test bug
`ensure_clean` will generate a safe temporary file, except when a full pathname is specified (which is wrong here)
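For readers unfamiliar with the helper, a hedged paraphrase of `pandas.util.testing.ensure_clean` (the real body may differ slightly):

``` python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def ensure_clean(filename=None):
    # with no filename, hand back a writable temp path so tests never
    # try to write into site-packages
    if filename is None:
        filename = tempfile.mkstemp()[1]
    try:
        yield filename
    finally:
        # always clean up the file, whether or not the test wrote it
        if os.path.exists(filename):
            os.remove(filename)
```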
| 2013-07-25T12:59:39Z | [] | [] |
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/pandas/io/tests/test_stata.py", line 207, in test_read_dta10
original.to_stata(path, {'datetime': 'tc'}, False)
File "/usr/lib/python3/dist-packages/pandas/core/frame.py", line 1503, in to_stata
writer = StataWriter(fname,self,convert_dates=convert_dates, encoding=encoding, byteorder=byteorder)
File "/usr/lib/python3/dist-packages/pandas/io/stata.py", line 745, in __init__
self._file = _open_file_binary_write(fname, self._encoding)
File "/usr/lib/python3/dist-packages/pandas/io/stata.py", line 577, in _open_file_binary_write
return open(fname, "wb")
PermissionError: [Errno 13] Permission denied: '/usr/lib/python3/dist-packages/pandas/io/tests/data/stata10.dta'
| 14,810 |
|||
pandas-dev/pandas | pandas-dev__pandas-4410 | b89c88a2688e9bed85604ee5e1f5b31f714d5138 | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -94,8 +94,10 @@ pandas 0.13
- Fixed an issue where ``PeriodIndex`` joining with self was returning a new
instance rather than the same instance (:issue:`4379`); also adds a test
for this for the other index types
- - Fixed a bug with all the dtypes being converted to object when using the CSV cparser
+ - Fixed a bug with all the dtypes being converted to object when using the CSV cparser
with the usecols parameter (:issue: `3192`)
+ - Fix an issue in merging blocks where the resulting DataFrame had partially
+ set _ref_locs (:issue:`4403`)
pandas 0.12
===========
diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -683,6 +683,7 @@ def get_result(self):
blockmaps = self._prepare_blocks()
kinds = _get_merge_block_kinds(blockmaps)
+ result_is_unique = self.result_axes[0].is_unique
result_blocks = []
# maybe want to enable flexible copying <-- what did I mean?
@@ -692,6 +693,12 @@ def get_result(self):
if klass in mapping:
klass_blocks.extend((unit, b) for b in mapping[klass])
res_blk = self._get_merged_block(klass_blocks)
+
+ # if we have a unique result index, need to clear the _ref_locs
+ # a non-unique is set as we are creating
+ if result_is_unique:
+ res_blk.set_ref_locs(None)
+
result_blocks.append(res_blk)
return BlockManager(result_blocks, self.result_axes)
@@ -1070,7 +1077,7 @@ def _concat_blocks(self, blocks):
# map the column location to the block location
# GH3602
if not self.new_axes[0].is_unique:
- block._ref_locs = indexer
+ block.set_ref_locs(indexer)
return block
| Dataframe rename issue.
I just upgraded from 0.11 to 0.12 and hit a DataFrame rename error caused by the upgrade. (It worked well in 0.11.)
```
>>> df4
TClose RT TExg
STK_ID RPT_Date
600809 20130331 22.02 0.0454 0.0422
>>> df5
STK_ID RPT_Date STK_Name TClose
STK_ID RPT_Date
600809 20120930 600809 20120930 山西汾酒 38.05
20121231 600809 20121231 山西汾酒 41.66
20130331 600809 20130331 山西汾酒 30.01
>>> k=pd.merge(df4, df5, how='inner', left_index=True, right_index=True)
>>> k
TClose_x RT TExg STK_ID RPT_Date STK_Name TClose_y
STK_ID RPT_Date
600809 20130331 22.02 0.0454 0.0422 600809 20130331 山西汾酒 30.01
>>> k.rename(columns={'TClose_x':'TClose', 'TClose_y':'QT_Close'})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "d:\Python27\lib\site-packages\pandas\core\base.py", line 40, in __repr__
return str(self)
File "d:\Python27\lib\site-packages\pandas\core\base.py", line 20, in __str__
return self.__bytes__()
File "d:\Python27\lib\site-packages\pandas\core\base.py", line 32, in __bytes__
return self.__unicode__().encode(encoding, 'replace')
File "d:\Python27\lib\site-packages\pandas\core\frame.py", line 668, in __unicode__
self.to_string(buf=buf)
File "d:\Python27\lib\site-packages\pandas\core\frame.py", line 1556, in to_string
formatter.to_string()
File "d:\Python27\lib\site-packages\pandas\core\format.py", line 294, in to_string
strcols = self._to_str_columns()
File "d:\Python27\lib\site-packages\pandas\core\format.py", line 239, in _to_str_columns
str_columns = self._get_formatted_column_labels()
File "d:\Python27\lib\site-packages\pandas\core\format.py", line 435, in _get_formatted_column_labels
dtypes = self.frame.dtypes
File "d:\Python27\lib\site-packages\pandas\core\frame.py", line 1696, in dtypes
return self.apply(lambda x: x.dtype)
File "d:\Python27\lib\site-packages\pandas\core\frame.py", line 4416, in apply
return self._apply_standard(f, axis)
File "d:\Python27\lib\site-packages\pandas\core\frame.py", line 4491, in _apply_standard
raise e
TypeError: ("'NoneType' object is not iterable", u'occurred at index TExg')
>>> df4.dtypes
TClose float64
RT float64
TExg float64
dtype: object
>>> df5.dtypes
STK_ID object
RPT_Date object
STK_Name object
TClose float64
dtype: object
>>>
```
| can you supply a reproducible example for these initial frames (e.g. a function which creates them exactly)
e.g. something that can be evaled to create them, because we need to reproduce the unicode characters
(this is a unicode error, it just happens to show up in the dtype printing)
`DataFrame([['foo',1.0....])`
i think that's a possibly spurious raise there...it should probably be a bare `raise` since `NoneType` not being iterable is not informative
i can repro this using the above frames
@halleygithub please supply some code to create the above frames.
there's a bug in `icol` or `BlockManager.iget`
ahh duplicate `TExg` block somehow...
we really need to remove that `raise e` there; that's the only way i was able to figure out this was in internals
no that raise is correct
just str(df)
huh? the raise doesn't show the correct location of the exception because it catches everything
here's part of the traceback
```
/home/phillip/Documents/code/py/pandas/pandas/core/frame.pyc in dtypes(self)
1685 @property
1686 def dtypes(self):
-> 1687 return self.apply(lambda x: x.dtype)
1688
1689 def convert_objects(self, convert_dates=True, convert_numeric=False, copy=True):
/home/phillip/Documents/code/py/pandas/pandas/core/frame.pyc in apply(self, func, axis, broadcast, raw, args, **kwds)
4397 return self._apply_raw(f, axis)
4398 else:
-> 4399 return self._apply_standard(f, axis)
4400 else:
4401 return self._apply_broadcast(f, axis)
/home/phillip/Documents/code/py/pandas/pandas/core/frame.pyc in _apply_standard(self, func, axis, ignore_failures)
4472 # no k defined yet
4473 pass
-> 4474 raise e
4475
4476
TypeError: ("'NoneType' object is not iterable", u'occurred at index TExg')
```
this doesn't tell me anything about the location of the raise except that it was somewhere in looping thru `series_gen`
only when i removed `e` did the full traceback show up
maybe there's a way to show that without removing the `e`...
how would it be different anyway? would the possibly caught NameError / UnboundLocalError be raised instead?
```
In [4]: df4 = DataFrame({'TClose': [22.02], 'RT': [0.0454], 'TExg': [0.0422]}, index=MultiIndex.from_tuples([(600809, 20130331)], names=['STK_ID', 'RPT_Date']))
In [5]: df5 = DataFrame({'STK_ID': [600809] * 3, 'RPT_Date': [20120930,20121231,20130331], 'STK_Name': [u'饡驦', u'饡驦', u'饡驦'], 'TClose': [38.05, 41.66, 30.01]},index=MultiIndex.from_tuples([(600809, 20120930
), (600809, 20121231),(600809,20130331)], names=['STK_ID', 'RPT_Date']))
In [6]: k = merge(df4,df5,how='inner',left_index=True,right_index=True)
```
different characters but same error results.
curiously if you type `store k` then restart ipython, type `store -r k` and then
``` python
k.rename(columns={'TClose_x':'TClose'})
```
the error does _not_ show up :angry:
I think there is a pr out there to take out the e
but regardless the apply hits the error but its really in the construction
can u post your creation example?
it's there
this seems fishy
```
ipdb> self.items
Index([u'RT', u'TClose', u'TExg', u'RPT_Date', u'STK_ID', u'STK_Name', u'TClose_y'], dtype=object)
ipdb> self.blocks
[ObjectBlock: [TExg], 1 x 1, dtype object, IntBlock: [RT, TClose], 2 x 1, dtype int64, FloatBlock: [RT, TClose, TExg, TClose_y], 4 x 1, dtype float64]
```
where is `RPT_Date` in the blocks?
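Given the `df4`/`df5` construction a few comments up, a minimal regression check for the `_ref_locs` fix in the patch above (a sketch; it just re-runs the reported failing call):

``` python
# df4/df5/merge as in the In [4]/In [5]/In [6] snippets above
k = merge(df4, df5, how='inner', left_index=True, right_index=True)
res = k.rename(columns={'TClose_x': 'TClose', 'TClose_y': 'QT_Close'})
res.dtypes  # used to raise TypeError via DataFrame.apply
str(res)    # and the repr should now render
```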
| 2013-07-31T00:17:47Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "d:\Python27\lib\site-packages\pandas\core\base.py", line 40, in __repr__
return str(self)
File "d:\Python27\lib\site-packages\pandas\core\base.py", line 20, in __str__
return self.__bytes__()
File "d:\Python27\lib\site-packages\pandas\core\base.py", line 32, in __bytes__
return self.__unicode__().encode(encoding, 'replace')
File "d:\Python27\lib\site-packages\pandas\core\frame.py", line 668, in __unicode__
self.to_string(buf=buf)
File "d:\Python27\lib\site-packages\pandas\core\frame.py", line 1556, in to_string
formatter.to_string()
File "d:\Python27\lib\site-packages\pandas\core\format.py", line 294, in to_string
strcols = self._to_str_columns()
File "d:\Python27\lib\site-packages\pandas\core\format.py", line 239, in _to_str_columns
str_columns = self._get_formatted_column_labels()
File "d:\Python27\lib\site-packages\pandas\core\format.py", line 435, in _get_formatted_column_labels
dtypes = self.frame.dtypes
File "d:\Python27\lib\site-packages\pandas\core\frame.py", line 1696, in dtypes
return self.apply(lambda x: x.dtype)
File "d:\Python27\lib\site-packages\pandas\core\frame.py", line 4416, in apply
return self._apply_standard(f, axis)
File "d:\Python27\lib\site-packages\pandas\core\frame.py", line 4491, in _apply_standard
raise e
TypeError: ("'NoneType' object is not iterable", u'occurred at index TExg')
| 14,817 |
|||
pandas-dev/pandas | pandas-dev__pandas-4709 | 4db583dccad8a2b6db9e65405d7b0fd0e1aabee6 | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -209,6 +209,7 @@ API Changes
- ``HDFStore``
+ - A zero length series written to HDF cannot be read back. (:issue:`4708`)
- ``append_to_multiple`` automatically synchronizes writing rows to multiple
tables and adds a ``dropna`` kwarg (:issue:`4698`)
- handle a passed ``Series`` in table format (:issue:`4330`)
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -2213,6 +2213,11 @@ def read_multi_index(self, key):
def read_index_node(self, node):
data = node[:]
+ # If the index was an empty array write_array_empty() will
+ # have written a sentinel. Here we replace it with the original.
+ if 'shape' in node._v_attrs \
+ and self._is_empty_array(getattr(node._v_attrs, 'shape')):
+ data = np.empty(getattr(node._v_attrs, 'shape'), dtype=getattr(node._v_attrs, 'value_type'))
kind = _ensure_decoded(node._v_attrs.kind)
name = None
@@ -2251,12 +2256,16 @@ def write_array_empty(self, key, value):
getattr(self.group, key)._v_attrs.value_type = str(value.dtype)
getattr(self.group, key)._v_attrs.shape = value.shape
+ def _is_empty_array(self, shape):
+ """Returns true if any axis is zero length."""
+ return any(x == 0 for x in shape)
+
def write_array(self, key, value, items=None):
if key in self.group:
self._handle.removeNode(self.group, key)
# Transform needed to interface with pytables row/col notation
- empty_array = any(x == 0 for x in value.shape)
+ empty_array = self._is_empty_array(value.shape)
transposed = False
if not empty_array:
@@ -2305,17 +2314,18 @@ def write_array(self, key, value, items=None):
vlarr = self._handle.createVLArray(self.group, key,
_tables().ObjectAtom())
vlarr.append(value)
- elif value.dtype.type == np.datetime64:
- self._handle.createArray(self.group, key, value.view('i8'))
- getattr(self.group, key)._v_attrs.value_type = 'datetime64'
- elif value.dtype.type == np.timedelta64:
- self._handle.createArray(self.group, key, value.view('i8'))
- getattr(self.group, key)._v_attrs.value_type = 'timedelta64'
else:
if empty_array:
self.write_array_empty(key, value)
else:
- self._handle.createArray(self.group, key, value)
+ if value.dtype.type == np.datetime64:
+ self._handle.createArray(self.group, key, value.view('i8'))
+ getattr(self.group, key)._v_attrs.value_type = 'datetime64'
+ elif value.dtype.type == np.timedelta64:
+ self._handle.createArray(self.group, key, value.view('i8'))
+ getattr(self.group, key)._v_attrs.value_type = 'timedelta64'
+ else:
+ self._handle.createArray(self.group, key, value)
getattr(self.group, key)._v_attrs.transposed = transposed
@@ -2362,11 +2372,7 @@ def shape(self):
def read(self, **kwargs):
self.validate_read(kwargs)
index = self.read_index('index')
- if len(index) > 0:
- values = self.read_array('values')
- else:
- values = []
-
+ values = self.read_array('values')
return Series(values, index=index, name=self.name)
def write(self, obj, **kwargs):
| BUG: A zero length series written to HDF cannot be read back.
This happens with an empty series with numpy arrays for both the values and the index.
``` python
>>> import pandas as pd
>>> import numpy as np
>>>
>>> with pd.get_store('foo.h5') as store:
... s = pd.Series(np.array([], dtype=np.int64), index=np.array([], dtype=np.int64))
... store['s'] = s
... s = store['s']
...
Traceback (most recent call last):
File "<stdin>", line 4, in <module>
File "/users/is/pross/workspace/pandas/git/pandas/pandas/io/pytables.py", line 349, in __getitem__
return self.get(key)
File "/users/is/pross/workspace/pandas/git/pandas/pandas/io/pytables.py", line 507, in get
return self._read_group(group)
File "/users/is/pross/workspace/pandas/git/pandas/pandas/io/pytables.py", line 1093, in _read_group
return s.read(**kwargs)
File "/users/is/pross/workspace/pandas/git/pandas/pandas/io/pytables.py", line 2247, in read
return Series(values, index=index, name=self.name)
File "/users/is/pross/workspace/pandas/git/pandas/pandas/core/series.py", line 657, in __init__
data = SingleBlockManager(data, index, fastpath=True)
File "/users/is/pross/workspace/pandas/git/pandas/pandas/core/internals.py", line 2942, in __init__
block = make_block(block, axis, axis, ndim=1, fastpath=True)
File "/users/is/pross/workspace/pandas/git/pandas/pandas/core/internals.py", line 1535, in make_block
return klass(values, items, ref_items, ndim=ndim, fastpath=fastpath, placement=placement)
File "/users/is/pross/workspace/pandas/git/pandas/pandas/core/internals.py", line 62, in __init__
% (len(items), len(values)))
ValueError: Wrong number of items passed 1, indices imply 0
```
I have fixed this locally and will put in a PR.
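With the sentinel handling restored on read (see the patch above), the reported round trip should succeed; a minimal check, assuming PyTables is available:

``` python
import numpy as np
import pandas as pd

with pd.get_store('foo.h5') as store:
    store['s'] = pd.Series(np.array([], dtype=np.int64),
                           index=np.array([], dtype=np.int64))
    s = store['s']
    assert len(s) == 0  # previously raised "Wrong number of items passed"
```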
| 2013-08-30T15:12:15Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 4, in <module>
File "/users/is/pross/workspace/pandas/git/pandas/pandas/io/pytables.py", line 349, in __getitem__
return self.get(key)
File "/users/is/pross/workspace/pandas/git/pandas/pandas/io/pytables.py", line 507, in get
return self._read_group(group)
File "/users/is/pross/workspace/pandas/git/pandas/pandas/io/pytables.py", line 1093, in _read_group
return s.read(**kwargs)
File "/users/is/pross/workspace/pandas/git/pandas/pandas/io/pytables.py", line 2247, in read
return Series(values, index=index, name=self.name)
File "/users/is/pross/workspace/pandas/git/pandas/pandas/core/series.py", line 657, in __init__
data = SingleBlockManager(data, index, fastpath=True)
File "/users/is/pross/workspace/pandas/git/pandas/pandas/core/internals.py", line 2942, in __init__
block = make_block(block, axis, axis, ndim=1, fastpath=True)
File "/users/is/pross/workspace/pandas/git/pandas/pandas/core/internals.py", line 1535, in make_block
return klass(values, items, ref_items, ndim=ndim, fastpath=fastpath, placement=placement)
File "/users/is/pross/workspace/pandas/git/pandas/pandas/core/internals.py", line 62, in __init__
% (len(items), len(values)))
ValueError: Wrong number of items passed 1, indices imply 0
| 14,863 |
||||
pandas-dev/pandas | pandas-dev__pandas-4770 | ac0ce3c40c666790d65039076cc968b83d8f0403 | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -167,6 +167,8 @@ Improvements to existing features
- Improve support for converting R datasets to pandas objects (more
informative index for timeseries and numeric, support for factors, dist, and
high-dimensional arrays).
+ - :func:`~pandas.read_html` now supports the ``parse_dates``,
+ ``tupleize_cols`` and ``thousands`` parameters (:issue:`4770`).
API Changes
~~~~~~~~~~~
@@ -373,6 +375,8 @@ See :ref:`Internal Refactoring<whatsnew_0130.refactoring>`
``core/generic.py`` (:issue:`4435`).
- Refactor cum objects to core/generic.py (:issue:`4435`), note that these have a more numpy-like
function signature.
+ - :func:`~pandas.read_html` now uses ``TextParser`` to parse HTML data from
+ bs4/lxml (:issue:`4770`).
.. _release.bug_fixes-0.13.0:
@@ -538,6 +542,15 @@ Bug Fixes
- Make sure series-series boolean comparions are label based (:issue:`4947`)
- Bug in multi-level indexing with a Timestamp partial indexer (:issue:`4294`)
 - Tests/fix for multi-index construction of an all-nan frame (:issue:`4078`)
+ - Fixed a bug where :func:`~pandas.read_html` wasn't correctly inferring
+ values of tables with commas (:issue:`5029`)
+ - Fixed a bug where :func:`~pandas.read_html` wasn't providing a stable
+ ordering of returned tables (:issue:`4770`, :issue:`5029`).
+ - Fixed a bug where :func:`~pandas.read_html` was incorrectly parsing when
+ passed ``index_col=0`` (:issue:`5066`).
+ - Fixed a bug where :func:`~pandas.read_html` was incorrectly inferring the
+ type of headers (:issue:`5048`).
+
pandas 0.12.0
-------------
diff --git a/pandas/io/html.py b/pandas/io/html.py
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -7,15 +7,18 @@
import re
import numbers
import collections
+import warnings
from distutils.version import LooseVersion
import numpy as np
-from pandas import DataFrame, MultiIndex, isnull
from pandas.io.common import _is_url, urlopen, parse_url
-from pandas.compat import range, lrange, lmap, u, map
-from pandas import compat
+from pandas.io.parsers import TextParser
+from pandas.compat import (lrange, lmap, u, string_types, iteritems, text_type,
+ raise_with_traceback)
+from pandas.core import common as com
+from pandas import Series
try:
@@ -45,7 +48,7 @@
#############
# READ HTML #
#############
-_RE_WHITESPACE = re.compile(r'([\r\n]+|\s{2,})')
+_RE_WHITESPACE = re.compile(r'[\r\n]+|\s{2,}')
def _remove_whitespace(s, regex=_RE_WHITESPACE):
@@ -67,7 +70,7 @@ def _remove_whitespace(s, regex=_RE_WHITESPACE):
return regex.sub(' ', s.strip())
-def _get_skiprows_iter(skiprows):
+def _get_skiprows(skiprows):
"""Get an iterator given an integer, slice or container.
Parameters
@@ -80,11 +83,6 @@ def _get_skiprows_iter(skiprows):
TypeError
* If `skiprows` is not a slice, integer, or Container
- Raises
- ------
- TypeError
- * If `skiprows` is not a slice, integer, or Container
-
Returns
-------
it : iterable
@@ -92,13 +90,12 @@ def _get_skiprows_iter(skiprows):
"""
if isinstance(skiprows, slice):
return lrange(skiprows.start or 0, skiprows.stop, skiprows.step or 1)
- elif isinstance(skiprows, numbers.Integral):
- return lrange(skiprows)
- elif isinstance(skiprows, collections.Container):
+ elif isinstance(skiprows, numbers.Integral) or com.is_list_like(skiprows):
return skiprows
- else:
- raise TypeError('{0} is not a valid type for skipping'
- ' rows'.format(type(skiprows)))
+ elif skiprows is None:
+ return 0
+ raise TypeError('%r is not a valid type for skipping rows' %
+ type(skiprows).__name__)
def _read(io):
@@ -120,11 +117,10 @@ def _read(io):
elif os.path.isfile(io):
with open(io) as f:
raw_text = f.read()
- elif isinstance(io, compat.string_types):
+ elif isinstance(io, string_types):
raw_text = io
else:
- raise TypeError("Cannot read object of type "
- "'{0.__class__.__name__!r}'".format(io))
+ raise TypeError("Cannot read object of type %r" % type(io).__name__)
return raw_text
@@ -194,12 +190,6 @@ def _parse_raw_data(self, rows):
A callable that takes a row node as input and returns a list of the
column node in that row. This must be defined by subclasses.
- Raises
- ------
- AssertionError
- * If `text_getter` is not callable
- * If `column_finder` is not callable
-
Returns
-------
data : list of list of strings
@@ -254,7 +244,7 @@ def _parse_tables(self, doc, match, attrs):
Raises
------
- AssertionError
+ ValueError
* If `match` does not match any text in the document.
Returns
@@ -406,25 +396,28 @@ def _parse_tfoot(self, table):
def _parse_tables(self, doc, match, attrs):
element_name = self._strainer.name
tables = doc.find_all(element_name, attrs=attrs)
+
if not tables:
- # known sporadically working release
- raise AssertionError('No tables found')
+ raise ValueError('No tables found')
- mts = [table.find(text=match) for table in tables]
- matched_tables = [mt for mt in mts if mt is not None]
- tables = list(set(mt.find_parent(element_name)
- for mt in matched_tables))
+ result = []
+ unique_tables = set()
- if not tables:
- raise AssertionError("No tables found matching "
- "'{0}'".format(match.pattern))
- return tables
+ for table in tables:
+ if (table not in unique_tables and
+ table.find(text=match) is not None):
+ result.append(table)
+ unique_tables.add(table)
+
+ if not result:
+ raise ValueError("No tables found matching pattern %r" %
+ match.pattern)
+ return result
def _setup_build_doc(self):
raw_text = _read(self.io)
if not raw_text:
- raise AssertionError('No text parsed from document: '
- '{0}'.format(self.io))
+ raise ValueError('No text parsed from document: %s' % self.io)
return raw_text
def _build_doc(self):
@@ -432,7 +425,7 @@ def _build_doc(self):
return BeautifulSoup(self._setup_build_doc(), features='html5lib')
-def _build_node_xpath_expr(attrs):
+def _build_xpath_expr(attrs):
"""Build an xpath expression to simulate bs4's ability to pass in kwargs to
search for attributes when using the lxml parser.
@@ -450,8 +443,8 @@ def _build_node_xpath_expr(attrs):
if 'class_' in attrs:
attrs['class'] = attrs.pop('class_')
- s = (u("@{k}='{v}'").format(k=k, v=v) for k, v in compat.iteritems(attrs))
- return u('[{0}]').format(' and '.join(s))
+ s = [u("@%s=%r") % (k, v) for k, v in iteritems(attrs)]
+ return u('[%s]') % ' and '.join(s)
_re_namespace = {'re': 'http://exslt.org/regular-expressions'}
@@ -491,23 +484,20 @@ def _parse_tr(self, table):
def _parse_tables(self, doc, match, kwargs):
pattern = match.pattern
- # check all descendants for the given pattern
- check_all_expr = u('//*')
- if pattern:
- check_all_expr += u("[re:test(text(), '{0}')]").format(pattern)
-
- # go up the tree until we find a table
- check_table_expr = '/ancestor::table'
- xpath_expr = check_all_expr + check_table_expr
+ # 1. check all descendants for the given pattern and only search tables
+ # 2. go up the tree until we find a table
+ query = '//table//*[re:test(text(), %r)]/ancestor::table'
+ xpath_expr = u(query) % pattern
# if any table attributes were given build an xpath expression to
# search for them
if kwargs:
- xpath_expr += _build_node_xpath_expr(kwargs)
+ xpath_expr += _build_xpath_expr(kwargs)
+
tables = doc.xpath(xpath_expr, namespaces=_re_namespace)
+
if not tables:
- raise AssertionError("No tables found matching regex "
- "'{0}'".format(pattern))
+ raise ValueError("No tables found matching regex %r" % pattern)
return tables
def _build_doc(self):
@@ -528,6 +518,7 @@ def _build_doc(self):
"""
from lxml.html import parse, fromstring, HTMLParser
from lxml.etree import XMLSyntaxError
+
parser = HTMLParser(recover=False)
try:
@@ -552,8 +543,8 @@ def _build_doc(self):
scheme = parse_url(self.io).scheme
if scheme not in _valid_schemes:
# lxml can't parse it
- msg = ('{0} is not a valid url scheme, valid schemes are '
- '{1}').format(scheme, _valid_schemes)
+ msg = ('%r is not a valid url scheme, valid schemes are '
+ '%s') % (scheme, _valid_schemes)
raise ValueError(msg)
else:
# something else happened: maybe a faulty connection
@@ -583,101 +574,38 @@ def _parse_raw_tfoot(self, table):
table.xpath(expr)]
-def _data_to_frame(data, header, index_col, infer_types, skiprows):
- """Parse a BeautifulSoup table into a DataFrame.
+def _expand_elements(body):
+ lens = Series(lmap(len, body))
+ lens_max = lens.max()
+ not_max = lens[lens != lens_max]
- Parameters
- ----------
- data : tuple of lists
- The raw data to be placed into a DataFrame. This is a list of lists of
- strings or unicode. If it helps, it can be thought of as a matrix of
- strings instead.
-
- header : int or None
- An integer indicating the row to use for the column header or None
- indicating no header will be used.
+ for ind, length in iteritems(not_max):
+ body[ind] += [np.nan] * (lens_max - length)
- index_col : int or None
- An integer indicating the column to use for the index or None
- indicating no column will be used.
- infer_types : bool
- Whether to convert numbers and dates.
+def _data_to_frame(data, header, index_col, skiprows, infer_types,
+ parse_dates, tupleize_cols, thousands):
+ head, body, _ = data # _ is footer which is rarely used: ignore for now
- skiprows : collections.Container or int or slice
- Iterable used to skip rows.
+ if head:
+ body = [head] + body
- Returns
- -------
- df : DataFrame
- A DataFrame containing the data from `data`
-
- Raises
- ------
- ValueError
- * If `skiprows` is not found in the rows of the parsed DataFrame.
+ if header is None: # special case when a table has <th> elements
+ header = 0
- Raises
- ------
- ValueError
- * If `skiprows` is not found in the rows of the parsed DataFrame.
-
- See Also
- --------
- read_html
-
- Notes
- -----
- The `data` parameter is guaranteed not to be a list of empty lists.
- """
- thead, tbody, tfoot = data
- columns = thead or None
- df = DataFrame(tbody, columns=columns)
+ # fill out elements of body that are "ragged"
+ _expand_elements(body)
- if skiprows is not None:
- it = _get_skiprows_iter(skiprows)
+ tp = TextParser(body, header=header, index_col=index_col,
+ skiprows=_get_skiprows(skiprows),
+ parse_dates=parse_dates, tupleize_cols=tupleize_cols,
+ thousands=thousands)
+ df = tp.read()
- try:
- df = df.drop(it)
- except ValueError:
- raise ValueError('Labels {0} not found when trying to skip'
- ' rows'.format(it))
-
- # convert to numbers/dates where possible
- # must be sequential since dates trump numbers if both args are given
- if infer_types:
- df = df.convert_objects(convert_numeric=True)
+ if infer_types: # TODO: rm this code so infer_types has no effect in 0.14
df = df.convert_objects(convert_dates='coerce')
-
- if header is not None:
- header_rows = df.iloc[header]
-
- if header_rows.ndim == 2:
- names = header_rows.index
- df.columns = MultiIndex.from_arrays(header_rows.values,
- names=names)
- else:
- df.columns = header_rows
-
- df = df.drop(df.index[header])
-
- if index_col is not None:
- cols = df.columns[index_col]
-
- try:
- cols = cols.tolist()
- except AttributeError:
- pass
-
- # drop by default
- df.set_index(cols, inplace=True)
- if df.index.nlevels == 1:
- if isnull(df.index.name) or not df.index.name:
- df.index.name = None
- else:
- names = [name or None for name in df.index.names]
- df.index = MultiIndex.from_tuples(df.index.values, names=names)
-
+ else:
+ df = df.applymap(text_type)
return df
@@ -701,15 +629,15 @@ def _parser_dispatch(flavor):
Raises
------
- AssertionError
+ ValueError
* If `flavor` is not a valid backend.
ImportError
* If you do not have the requested `flavor`
"""
valid_parsers = list(_valid_parsers.keys())
if flavor not in valid_parsers:
- raise AssertionError('"{0!r}" is not a valid flavor, valid flavors are'
- ' {1}'.format(flavor, valid_parsers))
+ raise ValueError('%r is not a valid flavor, valid flavors are %s' %
+ (flavor, valid_parsers))
if flavor in ('bs4', 'html5lib'):
if not _HAS_HTML5LIB:
@@ -717,46 +645,54 @@ def _parser_dispatch(flavor):
if not _HAS_BS4:
raise ImportError("bs4 not found please install it")
if bs4.__version__ == LooseVersion('4.2.0'):
- raise AssertionError("You're using a version"
- " of BeautifulSoup4 (4.2.0) that has been"
- " known to cause problems on certain"
- " operating systems such as Debian. "
- "Please install a version of"
- " BeautifulSoup4 != 4.2.0, both earlier"
- " and later releases will work.")
+ raise ValueError("You're using a version"
+ " of BeautifulSoup4 (4.2.0) that has been"
+ " known to cause problems on certain"
+ " operating systems such as Debian. "
+ "Please install a version of"
+ " BeautifulSoup4 != 4.2.0, both earlier"
+ " and later releases will work.")
else:
if not _HAS_LXML:
raise ImportError("lxml not found please install it")
return _valid_parsers[flavor]
-def _validate_parser_flavor(flavor):
+def _print_as_set(s):
+ return '{%s}' % ', '.join([com.pprint_thing(el) for el in s])
+
+
+def _validate_flavor(flavor):
if flavor is None:
- flavor = ['lxml', 'bs4']
- elif isinstance(flavor, compat.string_types):
- flavor = [flavor]
+ flavor = 'lxml', 'bs4'
+ elif isinstance(flavor, string_types):
+ flavor = flavor,
elif isinstance(flavor, collections.Iterable):
- if not all(isinstance(flav, compat.string_types) for flav in flavor):
- raise TypeError('{0} is not an iterable of strings'.format(flavor))
+ if not all(isinstance(flav, string_types) for flav in flavor):
+ raise TypeError('Object of type %r is not an iterable of strings' %
+ type(flavor).__name__)
else:
- raise TypeError('{0} is not a valid "flavor"'.format(flavor))
-
- flavor = list(flavor)
- valid_flavors = list(_valid_parsers.keys())
-
- if not set(flavor) & set(valid_flavors):
- raise ValueError('{0} is not a valid set of flavors, valid flavors are'
- ' {1}'.format(flavor, valid_flavors))
+ fmt = '{0!r}' if isinstance(flavor, string_types) else '{0}'
+ fmt += ' is not a valid flavor'
+ raise ValueError(fmt.format(flavor))
+
+ flavor = tuple(flavor)
+ valid_flavors = set(_valid_parsers)
+ flavor_set = set(flavor)
+
+ if not flavor_set & valid_flavors:
+ raise ValueError('%s is not a valid set of flavors, valid flavors are '
+ '%s' % (_print_as_set(flavor_set),
+ _print_as_set(valid_flavors)))
return flavor
-def _parse(flavor, io, match, header, index_col, skiprows, infer_types, attrs):
- # bonus: re.compile is idempotent under function iteration so you can pass
- # a compiled regex to it and it will return itself
- flavor = _validate_parser_flavor(flavor)
- compiled_match = re.compile(match)
+def _parse(flavor, io, match, header, index_col, skiprows, infer_types,
+ parse_dates, tupleize_cols, thousands, attrs):
+ flavor = _validate_flavor(flavor)
+ compiled_match = re.compile(match) # you can pass a compiled regex here
- # ugly hack because python 3 DELETES the exception variable!
+ # hack around python 3 deleting the exception variable
retained = None
for flav in flavor:
parser = _parser_dispatch(flav)
@@ -769,25 +705,26 @@ def _parse(flavor, io, match, header, index_col, skiprows, infer_types, attrs):
else:
break
else:
- raise retained
+ raise_with_traceback(retained)
- return [_data_to_frame(table, header, index_col, infer_types, skiprows)
+ return [_data_to_frame(table, header, index_col, skiprows, infer_types,
+ parse_dates, tupleize_cols, thousands)
for table in tables]
def read_html(io, match='.+', flavor=None, header=None, index_col=None,
- skiprows=None, infer_types=True, attrs=None):
- r"""Read an HTML table into a DataFrame.
+ skiprows=None, infer_types=None, attrs=None, parse_dates=False,
+ tupleize_cols=False, thousands=','):
+ r"""Read HTML tables into a ``list`` of ``DataFrame`` objects.
Parameters
----------
io : str or file-like
- A string or file like object that can be either a url, a file-like
- object, or a raw string containing HTML. Note that lxml only accepts
- the http, ftp and file url protocols. If you have a URI that starts
- with ``'https'`` you might removing the ``'s'``.
+ A URL, a file-like object, or a raw string containing HTML. Note that
+ lxml only accepts the http, ftp and file url protocols. If you have a
+ URL that starts with ``'https'`` you might try removing the ``'s'``.
- match : str or regex, optional, default '.+'
+ match : str or compiled regular expression, optional
The set of tables containing text matching this regex or string will be
returned. Unless the HTML is extremely simple you will probably need to
pass a non-empty string here. Defaults to '.+' (match any non-empty
@@ -795,44 +732,30 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
This value is converted to a regular expression so that there is
consistent behavior between Beautiful Soup and lxml.
- flavor : str, container of strings, default ``None``
- The parsing engine to use under the hood. 'bs4' and 'html5lib' are
- synonymous with each other, they are both there for backwards
- compatibility. The default of ``None`` tries to use ``lxml`` to parse
- and if that fails it falls back on ``bs4`` + ``html5lib``.
+ flavor : str or None, container of strings
+ The parsing engine to use. 'bs4' and 'html5lib' are synonymous with
+ each other, they are both there for backwards compatibility. The
+ default of ``None`` tries to use ``lxml`` to parse and if that fails it
+ falls back on ``bs4`` + ``html5lib``.
- header : int or array-like or None, optional, default ``None``
- The row (or rows for a MultiIndex) to use to make the columns headers.
- Note that this row will be removed from the data.
+ header : int or list-like or None, optional
+ The row (or list of rows for a :class:`~pandas.MultiIndex`) to use to
+ make the columns headers.
- index_col : int or array-like or None, optional, default ``None``
- The column to use to make the index. Note that this column will be
- removed from the data.
+ index_col : int or list-like or None, optional
+ The column (or list of columns) to use to create the index.
- skiprows : int or collections.Container or slice or None, optional, default ``None``
- If an integer is given then skip this many rows after parsing the
- column header. If a sequence of integers is given skip those specific
- rows (0-based). Note that
+ skiprows : int or list-like or slice or None, optional
+ 0-based. Number of rows to skip after parsing the column integer. If a
+ sequence of integers or a slice is given, will skip the rows indexed by
+ that sequence. Note that a single element sequence means 'skip the nth
+ row' whereas an integer means 'skip n rows'.
- .. code-block:: python
-
- skiprows == 0
-
- yields the same result as
-
- .. code-block:: python
+ infer_types : bool, optional
+ This option is deprecated in 0.13, and will have no effect in 0.14. It
+ defaults to ``True``.
- skiprows is None
-
- If `skiprows` is a positive integer, say :math:`n`, then
- it is treated as "skip :math:`n` rows", *not* as "skip the
- :math:`n^\textrm{th}` row".
-
- infer_types : bool, optional, default ``True``
- Whether to convert numeric types and date-appearing strings to numbers
- and dates, respectively.
-
- attrs : dict or None, optional, default ``None``
+ attrs : dict or None, optional
This is a dictionary of attributes that you can pass to use to identify
the table in the HTML. These are not checked for validity before being
passed to lxml or Beautiful Soup. However, these attributes must be
@@ -858,33 +781,38 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
<http://www.w3.org/TR/html-markup/table.html>`__. It contains the
latest information on table attributes for the modern web.
+ parse_dates : bool, optional
+ See :func:`~pandas.read_csv` for details.
+
+ tupleize_cols : bool, optional
+ If ``False`` try to parse multiple header rows into a
+ :class:`~pandas.MultiIndex`, otherwise return raw tuples. Defaults to
+ ``False``.
+
+ thousands : str, optional
+ Separator to use to parse thousands. Defaults to ``','``.
+
Returns
-------
dfs : list of DataFrames
- A list of DataFrames, each of which is the parsed data from each of the
- tables on the page.
Notes
-----
- Before using this function you should probably read the :ref:`gotchas about
- the parser libraries that this function uses <html-gotchas>`.
+ Before using this function you should read the :ref:`gotchas about the
+ HTML parsing libraries <html-gotchas>`.
- There's as little cleaning of the data as possible due to the heterogeneity
- and general disorder of HTML on the web.
+ Expect to do some cleanup after you call this function. For example, you
+ might need to manually assign column names if the column names are
+ converted to NaN when you pass the `header=0` argument. We try to assume as
+ little as possible about the structure of the table and push the
+ idiosyncrasies of the HTML contained in the table to the user.
- Expect some cleanup after you call this function. For example,
- you might need to pass `infer_types=False` and perform manual conversion if
- the column names are converted to NaN when you pass the `header=0`
- argument. We try to assume as little as possible about the structure of the
- table and push the idiosyncrasies of the HTML contained in the table to
- you, the user.
+ This function searches for ``<table>`` elements and only for ``<tr>``
+ and ``<th>`` rows and ``<td>`` elements within each ``<tr>`` or ``<th>``
+ element in the table. ``<td>`` stands for "table data".
- This function only searches for <table> elements and only for <tr> and <th>
- rows and <td> elements within those rows. This could be extended by
- subclassing one of the parser classes contained in :mod:`pandas.io.html`.
-
- Similar to :func:`read_csv` the `header` argument is applied **after**
- `skiprows` is applied.
+ Similar to :func:`~pandas.read_csv` the `header` argument is applied
+ **after** `skiprows` is applied.
This function will *always* return a list of :class:`DataFrame` *or*
it will fail, e.g., it will *not* return an empty list.
@@ -892,12 +820,21 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
Examples
--------
See the :ref:`read_html documentation in the IO section of the docs
- <io.read_html>` for many examples of reading HTML.
+ <io.read_html>` for some examples of reading in HTML tables.
+
+ See Also
+ --------
+ pandas.read_csv
"""
+ if infer_types is not None:
+ warnings.warn("infer_types will have no effect in 0.14", FutureWarning)
+ else:
+ infer_types = True # TODO: remove in 0.14
+
# Type check here. We don't want to parse only to fail because of an
# invalid value of an integer skiprows.
if isinstance(skiprows, numbers.Integral) and skiprows < 0:
- raise AssertionError('cannot skip rows starting from the end of the '
- 'data (you passed a negative value)')
+ raise ValueError('cannot skip rows starting from the end of the '
+ 'data (you passed a negative value)')
return _parse(flavor, io, match, header, index_col, skiprows, infer_types,
- attrs)
+ parse_dates, tupleize_cols, thousands, attrs)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -606,16 +606,10 @@ def _failover_to_python(self):
raise NotImplementedError
def read(self, nrows=None):
- suppressed_warnings = False
if nrows is not None:
if self.options.get('skip_footer'):
raise ValueError('skip_footer not supported for iteration')
- # # XXX hack
- # if isinstance(self._engine, CParserWrapper):
- # suppressed_warnings = True
- # self._engine.set_error_bad_lines(False)
-
ret = self._engine.read(nrows)
if self.options.get('as_recarray'):
@@ -710,7 +704,6 @@ def _should_parse_dates(self, i):
else:
return (j in self.parse_dates) or (name in self.parse_dates)
-
def _extract_multi_indexer_columns(self, header, index_names, col_names, passed_names=False):
""" extract and return the names, index_names, col_names
header is a list-of-lists returned from the parsers """
@@ -728,12 +721,10 @@ def _extract_multi_indexer_columns(self, header, index_names, col_names, passed_
ic = [ ic ]
sic = set(ic)
- orig_header = list(header)
-
# clean the index_names
index_names = header.pop(-1)
- (index_names, names,
- index_col) = _clean_index_names(index_names, self.index_col)
+ index_names, names, index_col = _clean_index_names(index_names,
+ self.index_col)
# extract the columns
field_count = len(header[0])
@@ -766,7 +757,7 @@ def _maybe_make_multi_index_columns(self, columns, col_names=None):
return columns
def _make_index(self, data, alldata, columns, indexnamerow=False):
- if not _is_index_col(self.index_col) or len(self.index_col) == 0:
+ if not _is_index_col(self.index_col) or not self.index_col:
index = None
elif not self._has_complex_date_col:
@@ -1430,7 +1421,7 @@ def read(self, rows=None):
self._first_chunk = False
columns = list(self.orig_names)
- if len(content) == 0: # pragma: no cover
+ if not len(content): # pragma: no cover
# DataFrame with the right metadata, even though it's length 0
return _get_empty_meta(self.orig_names,
self.index_col,
@@ -1468,8 +1459,8 @@ def _convert_data(self, data):
col = self.orig_names[col]
clean_conv[col] = f
- return self._convert_to_ndarrays(data, self.na_values, self.na_fvalues, self.verbose,
- clean_conv)
+ return self._convert_to_ndarrays(data, self.na_values, self.na_fvalues,
+ self.verbose, clean_conv)
def _infer_columns(self):
names = self.names
@@ -1478,16 +1469,15 @@ def _infer_columns(self):
header = self.header
# we have a mi columns, so read and extra line
- if isinstance(header,(list,tuple,np.ndarray)):
+ if isinstance(header, (list, tuple, np.ndarray)):
have_mi_columns = True
- header = list(header) + [header[-1]+1]
+ header = list(header) + [header[-1] + 1]
else:
have_mi_columns = False
- header = [ header ]
+ header = [header]
columns = []
for level, hr in enumerate(header):
-
if len(self.buf) > 0:
line = self.buf[0]
else:
@@ -1521,10 +1511,11 @@ def _infer_columns(self):
if names is not None:
if len(names) != len(columns[0]):
- raise Exception('Number of passed names did not match '
- 'number of header fields in the file')
+ raise ValueError('Number of passed names did not match '
+ 'number of header fields in the file')
if len(columns) > 1:
- raise Exception('Cannot pass names with multi-index columns')
+ raise TypeError('Cannot pass names with multi-index '
+ 'columns')
columns = [ names ]
else:
| Issue with index_col and read_html
```
pd.read_html("http://www.camacau.com/changeLang?lang=en_US&url=/statistic_list",infer_types=False,header=0,index_col=0)
```
yields:
```
Traceback (most recent call last)
<ipython-input-114-a13f8ac8a77b> in <module>()
----> 1 foo2 = pd.read_html("http://www.camacau.com/changeLang?lang=en_US&url=/statistic_list",infer_types=False,header=0,index_col=0)
/usr/local/lib/python2.7/dist-packages/pandas/io/html.pyc in read_html(io, match, flavor, header, index_col, skiprows, infer_types, attrs)
904 'data (you passed a negative value)')
905 return _parse(flavor, io, match, header, index_col, skiprows, infer_types,
--> 906 attrs)
/usr/local/lib/python2.7/dist-packages/pandas/io/html.pyc in _parse(flavor, io, match, header, index_col, skiprows, infer_types, attrs)
776
777 return [_data_to_frame(table, header, index_col, infer_types, skiprows)
--> 778 for table in tables]
779
780
/usr/local/lib/python2.7/dist-packages/pandas/io/html.pyc in _data_to_frame(data, header, index_col, infer_types, skiprows)
674
675 # drop by default
--> 676 df.set_index(cols, inplace=True)
677 if df.index.nlevels == 1:
678 if isnull(df.index.name) or not df.index.name:
/usr/local/lib/python2.7/dist-packages/pandas/core/frame.pyc in set_index(self, keys, drop, append, inplace, verify_integrity)
2833 arrays.append(level)
2834
-> 2835 index = MultiIndex.from_arrays(arrays, names=names)
2836
2837 if verify_integrity and not index.is_unique:
/usr/local/lib/python2.7/dist-packages/pandas/core/index.pyc in from_arrays(cls, arrays, sortorder, names)
1763 if len(arrays) == 1:
1764 name = None if names is None else names[0]
-> 1765 return Index(arrays[0], name=name)
1766
1767 cats = [Categorical.from_array(arr) for arr in arrays]
/usr/local/lib/python2.7/dist-packages/pandas/core/index.pyc in __new__(cls, data, dtype, copy, name, **kwargs)
108 return Int64Index(data, copy=copy, dtype=dtype, name=name)
109
--> 110 subarr = com._asarray_tuplesafe(data, dtype=object)
111 elif np.isscalar(data):
112 raise ValueError('Index(...) must be called with a collection '
/usr/local/lib/python2.7/dist-packages/pandas/core/common.pyc in _asarray_tuplesafe(values, dtype)
1489 # in numpy, leading to the following
1490 result = np.empty(len(values), dtype=object)
-> 1491 result[:] = values
1492
1493 return result
ValueError: could not broadcast input array from shape (11,2) into shape (11)
```
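A hedged workaround sketch for affected versions is to skip `index_col` and promote the column after parsing (`url` below stands in for the camacau.com address above):

``` python
# parse without index_col, then set the index manually
tables = pd.read_html(url, infer_types=False, header=0)
df = tables[0].set_index(tables[0].columns[0])
```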
| 2013-09-07T04:14:16Z | [] | [] |
Traceback (most recent call last)
<ipython-input-114-a13f8ac8a77b> in <module>()
----> 1 foo2 = pd.read_html("http://www.camacau.com/changeLang?lang=en_US&url=/statistic_list",infer_types=False,header=0,index_col=0)
/usr/local/lib/python2.7/dist-packages/pandas/io/html.pyc in read_html(io, match, flavor, header, index_col, skiprows, infer_types, attrs)
904 'data (you passed a negative value)')
905 return _parse(flavor, io, match, header, index_col, skiprows, infer_types,
--> 906 attrs)
/usr/local/lib/python2.7/dist-packages/pandas/io/html.pyc in _parse(flavor, io, match, header, index_col, skiprows, infer_types, attrs)
776
777 return [_data_to_frame(table, header, index_col, infer_types, skiprows)
--> 778 for table in tables]
779
780
/usr/local/lib/python2.7/dist-packages/pandas/io/html.pyc in _data_to_frame(data, header, index_col, infer_types, skiprows)
674
675 # drop by default
--> 676 df.set_index(cols, inplace=True)
677 if df.index.nlevels == 1:
678 if isnull(df.index.name) or not df.index.name:
/usr/local/lib/python2.7/dist-packages/pandas/core/frame.pyc in set_index(self, keys, drop, append, inplace, verify_integrity)
2833 arrays.append(level)
2834
-> 2835 index = MultiIndex.from_arrays(arrays, names=names)
2836
2837 if verify_integrity and not index.is_unique:
/usr/local/lib/python2.7/dist-packages/pandas/core/index.pyc in from_arrays(cls, arrays, sortorder, names)
1763 if len(arrays) == 1:
1764 name = None if names is None else names[0]
-> 1765 return Index(arrays[0], name=name)
1766
1767 cats = [Categorical.from_array(arr) for arr in arrays]
/usr/local/lib/python2.7/dist-packages/pandas/core/index.pyc in __new__(cls, data, dtype, copy, name, **kwargs)
108 return Int64Index(data, copy=copy, dtype=dtype, name=name)
109
--> 110 subarr = com._asarray_tuplesafe(data, dtype=object)
111 elif np.isscalar(data):
112 raise ValueError('Index(...) must be called with a collection '
/usr/local/lib/python2.7/dist-packages/pandas/core/common.pyc in _asarray_tuplesafe(values, dtype)
1489 # in numpy, leading to the following
1490 result = np.empty(len(values), dtype=object)
-> 1491 result[:] = values
1492
1493 return result
ValueError: could not broadcast input array from shape (11,2) into shape (11)
| 14,879 |
||||
pandas-dev/pandas | pandas-dev__pandas-4806 | db8f25782eb35256a7e228dcf355f10b78bad504 | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -376,6 +376,7 @@ Bug Fixes
- Fix bugs in indexing in a Series with a duplicate index (:issue:`4548`, :issue:`4550`)
- Fixed bug with reading compressed files with ``read_fwf`` in Python 3.
(:issue:`3963`)
+ - Fixed an issue with a duplicate index and assignment with a dtype change (:issue:`4686`)
pandas 0.12.0
-------------
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -1797,11 +1797,13 @@ def _reset_ref_locs(self):
def _rebuild_ref_locs(self):
""" take _ref_locs and set the individual block ref_locs, skipping Nones
no effect on a unique index """
- if self._ref_locs is not None:
+ if getattr(self,'_ref_locs',None) is not None:
item_count = 0
for v in self._ref_locs:
if v is not None:
block, item_loc = v
+ if block._ref_locs is None:
+ block.reset_ref_locs()
block._ref_locs[item_loc] = item_count
item_count += 1
@@ -2595,11 +2597,11 @@ def _set_item(item, arr):
self.delete(item)
loc = _possibly_convert_to_indexer(loc)
- for i, (l, arr) in enumerate(zip(loc, value)):
+ for i, (l, k, arr) in enumerate(zip(loc, subset, value)):
# insert the item
self.insert(
- l, item, arr[None, :], allow_duplicates=True)
+ l, k, arr[None, :], allow_duplicates=True)
# reset the _ref_locs on indiviual blocks
# rebuild ref_locs
| "AttributeError: _ref_locs" while assigning non-contiguous MultiIndex columns
Hitting an exception with the following code on 0.12.0 (and current master). Copied 0.11.0 behavior for reference, but feel free to reject if assignment to non-contiguous columns of a MultiIndex using a partial label is illegal/unsupported.
```
import numpy as np
import pandas as pd
df = pd.DataFrame(
np.ones((1, 3)),
columns=pd.MultiIndex.from_tuples(
[('A', '1'), ('B', '1'), ('A', '2')]
),
dtype=object
)
print 'Before:'
print df.dtypes
df['A'] = df['A'].astype(float)
print 'After:'
print df.dtypes
```
0.11.0 output:
```
Before:
A 1 object
B 1 object
A 2 object
dtype: object
After:
A 1 float64
B 1 object
A 2 float64
dtype: object
```
0.12.0 output:
```
Before:
A 1 object
B 1 object
A 2 object
dtype: object
Traceback (most recent call last):
File "repro.py", line 15, in <module>
df['A'] = df['A'].astype(float)
File "/home/gmd/ENV/pandas-master/lib/python2.7/site-packages/pandas-0.12.0_270_ge3c71f2-py2.7-linux-i686.egg/pandas/core/frame.py", line 1929, in __setitem__
self._set_item(key, value)
File "/home/gmd/ENV/pandas-master/lib/python2.7/site-packages/pandas-0.12.0_270_ge3c71f2-py2.7-linux-i686.egg/pandas/core/frame.py", line 1977, in _set_item
NDFrame._set_item(self, key, value)
File "/home/gmd/ENV/pandas-master/lib/python2.7/site-packages/pandas-0.12.0_270_ge3c71f2-py2.7-linux-i686.egg/pandas/core/generic.py", line 798, in _set_item
self._data.set(key, value)
File "/home/gmd/ENV/pandas-master/lib/python2.7/site-packages/pandas-0.12.0_270_ge3c71f2-py2.7-linux-i686.egg/pandas/core/internals.py", line 2448, in set
self._reset_ref_locs()
File "/home/gmd/ENV/pandas-master/lib/python2.7/site-packages/pandas-0.12.0_270_ge3c71f2-py2.7-linux-i686.egg/pandas/core/internals.py", line 1644, in _reset_ref_locs
self._rebuild_ref_locs()
File "/home/gmd/ENV/pandas-master/lib/python2.7/site-packages/pandas-0.12.0_270_ge3c71f2-py2.7-linux-i686.egg/pandas/core/internals.py", line 1652, in _rebuild_ref_locs
if self._ref_locs is not None:
AttributeError: _ref_locs
```
Thanks!
| Do it this way.
Assignment to non-sorted (non-lexsorted) columns is definitely not supported (and should raise a better error).
This is actually quite tricky. Will mark as a bug and look at it at some point.
```
In [23]: df
Out[23]:
A B A
1 1 2
0 1 1 1
In [24]: df.dtypes
Out[24]:
A 1 object
B 1 object
A 2 object
dtype: object
In [25]: df.sortlevel(axis=1)
Out[25]:
A B
1 2 1
0 1 1 1
In [26]: df.sortlevel(axis=1).dtypes
Out[26]:
A 1 object
2 object
B 1 object
dtype: object
In [27]: df.sortlevel(axis=1).convert_objects()
Out[27]:
A B
1 2 1
0 1 1 1
In [28]: df.sortlevel(axis=1).convert_objects().dtypes
Out[28]:
A 1 float64
2 float64
B 1 float64
dtype: object
```
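A hedged user-level sketch for affected versions is to assign column-by-column with full tuples, which avoids the non-contiguous partial-label path:

``` python
# convert each ('A', ...) column individually instead of df['A'] = ...
for col in [c for c in df.columns if c[0] == 'A']:
    df[col] = df[col].astype(float)
```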
| 2013-09-10T19:26:27Z | [] | [] |
Traceback (most recent call last):
File "repro.py", line 15, in <module>
df['A'] = df['A'].astype(float)
File "/home/gmd/ENV/pandas-master/lib/python2.7/site-packages/pandas-0.12.0_270_ge3c71f2-py2.7-linux-i686.egg/pandas/core/frame.py", line 1929, in __setitem__
self._set_item(key, value)
File "/home/gmd/ENV/pandas-master/lib/python2.7/site-packages/pandas-0.12.0_270_ge3c71f2-py2.7-linux-i686.egg/pandas/core/frame.py", line 1977, in _set_item
NDFrame._set_item(self, key, value)
File "/home/gmd/ENV/pandas-master/lib/python2.7/site-packages/pandas-0.12.0_270_ge3c71f2-py2.7-linux-i686.egg/pandas/core/generic.py", line 798, in _set_item
self._data.set(key, value)
File "/home/gmd/ENV/pandas-master/lib/python2.7/site-packages/pandas-0.12.0_270_ge3c71f2-py2.7-linux-i686.egg/pandas/core/internals.py", line 2448, in set
self._reset_ref_locs()
File "/home/gmd/ENV/pandas-master/lib/python2.7/site-packages/pandas-0.12.0_270_ge3c71f2-py2.7-linux-i686.egg/pandas/core/internals.py", line 1644, in _reset_ref_locs
self._rebuild_ref_locs()
File "/home/gmd/ENV/pandas-master/lib/python2.7/site-packages/pandas-0.12.0_270_ge3c71f2-py2.7-linux-i686.egg/pandas/core/internals.py", line 1652, in _rebuild_ref_locs
if self._ref_locs is not None:
AttributeError: _ref_locs
| 14,885 |
||||
pandas-dev/pandas | pandas-dev__pandas-4832 | d702de0930f124e697bf349212f5e44faa9880fd | diff --git a/ci/requirements-2.7_LOCALE.txt b/ci/requirements-2.7_LOCALE.txt
--- a/ci/requirements-2.7_LOCALE.txt
+++ b/ci/requirements-2.7_LOCALE.txt
@@ -8,7 +8,7 @@ cython==0.19.1
bottleneck==0.6.0
numexpr==2.1
tables==2.3.1
-matplotlib==1.2.1
+matplotlib==1.3.0
patsy==0.1.0
html5lib==1.0b2
lxml==3.2.1
diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -396,6 +396,8 @@ Bug Fixes
- Fixed bug with reading compressed files in as ``bytes`` rather than ``str``
in Python 3. Simplifies bytes-producing file-handling in Python 3
(:issue:`3963`, :issue:`4785`).
+ - Fixed an issue related to ticklocs/ticklabels with log scale bar plots
+ across different versions of matplotlib (:issue:`4789`)
pandas 0.12.0
-------------
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -4,6 +4,7 @@
import warnings
import re
from contextlib import contextmanager
+from distutils.version import LooseVersion
import numpy as np
@@ -1452,7 +1453,13 @@ def f(ax, x, y, w, start=None, log=self.log, **kwds):
def _make_plot(self):
import matplotlib as mpl
+
+ # mpl decided to make their version string unicode across all Python
+ # versions for mpl >= 1.3 so we have to call str here for python 2
+ mpl_le_1_2_1 = str(mpl.__version__) <= LooseVersion('1.2.1')
+
colors = self._get_colors()
+ ncolors = len(colors)
rects = []
labels = []
@@ -1466,19 +1473,18 @@ def _make_plot(self):
ax = self._get_ax(i)
label = com.pprint_thing(label)
kwds = self.kwds.copy()
- kwds['color'] = colors[i % len(colors)]
+ kwds['color'] = colors[i % ncolors]
- start =0
+ start = 0
if self.log:
start = 1
if any(y < 1):
# GH3254
- start = 0 if mpl.__version__ == "1.2.1" else None
+ start = 0 if mpl_le_1_2_1 else None
if self.subplots:
rect = bar_f(ax, self.ax_pos, y, self.bar_width,
- start = start,
- **kwds)
+ start=start, **kwds)
ax.set_title(label)
elif self.stacked:
mask = y > 0
@@ -1489,8 +1495,7 @@ def _make_plot(self):
neg_prior = neg_prior + np.where(mask, 0, y)
else:
rect = bar_f(ax, self.ax_pos + i * 0.75 / K, y, 0.75 / K,
- start = start,
- label=label, **kwds)
+ start=start, label=label, **kwds)
rects.append(rect)
if self.mark_right:
labels.append(self._get_marked_label(label, i))
| test_bar_log fights back with matplotlib 1.3.0
```
======================================================================
FAIL: test_bar_log (pandas.tests.test_graphics.TestDataFramePlots)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/yoh/deb/gits/pkg-exppsy/pandas/pandas/tests/test_graphics.py", line 568, in test_bar_log
self.assertEqual(ax.yaxis.get_ticklocs()[0], 1.0)
AssertionError: 0.10000000000000001 != 1.0
----------------------------------------------------------------------
Ran 1 test in 0.341s
```
needs to be fixed up for Debian, where mpl is now 1.3.0
| @yarikoptic side note - aren't there a _bunch_ of failing tests for nipy's pandas builds?
@jtratner -- yes unfortunately there are
http://nipy.bic.berkeley.edu/waterfall?category=pandas
and I would appreciate it if they got addressed. ATM I worry about this particular one, to stabilize the released pandas 0.12 build on Debian
@yarikoptic or @cpcloud any chance there's a vagrant box already set up to make it easier to debug these locally?
@yarikoptic Forgive me, but can you give a 2-second overview of how to navigate all of that stuff?
@jtratner for this particular bug I guess you would not find any "stock" vagrant box since Debian unstable is rolling too much (updated twice a day) for anyone to care... maybe at some point I would reincarnate my veewee setup to furnish those. But you could get ANY Debian-based vagrant box (thus stock too) and debootstrap a complete Debian into a subdirectory within... see e.g. my old post http://neuro.debian.net/blog/2011/2011-12-12_schroot_fslview.html
For sparc-specific bugs -- you would need a sparc box and vagrant alone would not be sufficient.
@cpcloud buildbot's views are indeed a bit archaic and require some "getting used to". I prefer waterfall for a quick look
http://nipy.bic.berkeley.edu/waterfall?category=pandas
there on top it had better be green ;) those are builder names, defined as combinations of project, python, architecture etc. in their names, e.g. pandas-py2.x-sid-sparc is pandas with python2 (whatever is default there), Debian sid, sparc architecture. Thus you have "builders" for sid (unstable) and wheezy (stable Debian), python2 and python3 (concrete versions differ between sid and wheezy -- look inside logs ;) ).
if any builder is "red" -- look inside, e.g. go to
http://nipy.bic.berkeley.edu/builders/pandas-py2.x-sid-sparc
there you would see all failed/succeeded builds. For a failure, go to the failed one, e.g.
http://nipy.bic.berkeley.edu/builders/pandas-py2.x-sid-sparc/builds/87
and you will see what succeeded (in green) and what failed (again in red). Step 9 here (`shell_4 'xvfb-run --auto-servernum ...`) is the one running the tests and it is in red -- just click on its stdio link to get to the entire dump of output.
You could also get there straight from waterfall for recent builds -- for older ones you might need to scroll down to get to that failed step in red.
I hope this helps
@cpcloud I actually find it easier to just go to this page - http://nipy.bic.berkeley.edu/builders and search for 'pandas' - then it's basically just the same thing as Travis; you just need to click on the red blocks until it shows you a list of test cases.
so what about the issue itself? ;)
@yarikoptic forgot to say: thanks, your explanation was very helpful...
re this bug....i'll take a look...i don't think this is just happening on sparc
@cpcloud I never said it happens just on sparc ;) it is a matplotlib 1.3.0 compatibility issue
@y-p
wondering if u know the reason for this:
```
In [1]: paste
p1 = Series([200, 500]).plot(log=True, kind='bar')
## -- End pasted text --
In [2]: p1.yaxis.get_ticklocs()
Out[2]: array([ 0.1, 1. , 10. , 100. , 1000. , 10000. ])
```
note the extra 0.1 and 10000, here's the plot
![i-can-haz-lawg-plawt](https://f.cloud.github.com/assets/417981/1135598/42e84d86-1c19-11e3-9f7f-c0598bf0ac20.png)
only 1, 10, 100, and 1000 in the plot....should i report to matplotlib?
opened an issue over at matplotlib
https://github.com/matplotlib/matplotlib/issues/2419
> commit 4c0f3f30f805f53f91f22f26212d366c59d2ae48
> Author: Daniel Hyams <dhyams@gitdev.(none)>
> Date: Sat Jan 19 19:14:03 2013 -0500
>
> Modifications to MultipleLocator and LogLocator, so that the locators will
> give locations one past what they really need to. This is necessary, because
> in situations where round off error matters, we want the locator to offer
> more that it really thinks it needs to. The clipping of ticks still takes
> place in the Axis class, so it's perfectly fine to add more locs than necessary.
>
> In fact, matplotlib relies on the clipping behavior in the Axis class already
> to not draw ticks outside of the limits of the axis.
I'll update the tests for different versions of MPL
| 2013-09-13T13:29:21Z | [] | [] |
Traceback (most recent call last):
File "/home/yoh/deb/gits/pkg-exppsy/pandas/pandas/tests/test_graphics.py", line 568, in test_bar_log
self.assertEqual(ax.yaxis.get_ticklocs()[0], 1.0)
AssertionError: 0.10000000000000001 != 1.0
| 14,891 |
||||
pandas-dev/pandas | pandas-dev__pandas-4881 | d957ad774da73d5c7085337f25e210ca9a7c13cc | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -429,9 +429,11 @@ Bug Fixes
``ascending`` was being interpreted as ``True`` (:issue:`4839`,
:issue:`4846`)
- Fixed ``Panel.tshift`` not working. Added `freq` support to ``Panel.shift`` (:issue:`4853`)
- - Fix an issue in TextFileReader w/ Python engine (i.e. PythonParser)
+ - Fix an issue in TextFileReader w/ Python engine (i.e. PythonParser)
with thousands != "," (:issue:`4596`)
-
+ - Bug in getitem with a duplicate index when using where (:issue:`4879`)
+
+
pandas 0.12.0
-------------
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1840,8 +1840,12 @@ def _getitem_column(self, key):
if self.columns.is_unique:
return self._get_item_cache(key)
- # duplicate columns
- return self._constructor(self._data.get(key))
+ # duplicate columns & possibly reduce dimensionality
+ result = self._constructor(self._data.get(key))
+ if result.columns.is_unique:
+ result = result[key]
+
+ return result
def _getitem_slice(self, key):
return self._slice(key, axis=0)
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -401,7 +401,7 @@ def _astype(self, dtype, copy=False, raise_on_error=True, values=None,
if values is None:
values = com._astype_nansafe(self.values, dtype, copy=True)
newb = make_block(
- values, self.items, self.ref_items, ndim=self.ndim,
+ values, self.items, self.ref_items, ndim=self.ndim, placement=self._ref_locs,
fastpath=True, dtype=dtype, klass=klass)
except:
if raise_on_error is True:
@@ -716,7 +716,7 @@ def create_block(v, m, n, item, reshape=True):
if inplace:
return [self]
- return [make_block(new_values, self.items, self.ref_items, fastpath=True)]
+ return [make_block(new_values, self.items, self.ref_items, placement=self._ref_locs, fastpath=True)]
def interpolate(self, method='pad', axis=0, inplace=False,
limit=None, fill_value=None, coerce=False,
@@ -2853,12 +2853,13 @@ def _reindex_indexer_items(self, new_items, indexer, fill_value):
# TODO: less efficient than I'd like
item_order = com.take_1d(self.items.values, indexer)
+ new_axes = [new_items] + self.axes[1:]
+ new_blocks = []
+ is_unique = new_items.is_unique
# keep track of what items aren't found anywhere
+ l = np.arange(len(item_order))
mask = np.zeros(len(item_order), dtype=bool)
- new_axes = [new_items] + self.axes[1:]
-
- new_blocks = []
for blk in self.blocks:
blk_indexer = blk.items.get_indexer(item_order)
selector = blk_indexer != -1
@@ -2872,12 +2873,19 @@ def _reindex_indexer_items(self, new_items, indexer, fill_value):
new_block_items = new_items.take(selector.nonzero()[0])
new_values = com.take_nd(blk.values, blk_indexer[selector], axis=0,
allow_fill=False)
- new_blocks.append(make_block(new_values, new_block_items,
- new_items, fastpath=True))
+ placement = l[selector] if not is_unique else None
+ new_blocks.append(make_block(new_values,
+ new_block_items,
+ new_items,
+ placement=placement,
+ fastpath=True))
if not mask.all():
na_items = new_items[-mask]
- na_block = self._make_na_block(na_items, new_items,
+ placement = l[-mask] if not is_unique else None
+ na_block = self._make_na_block(na_items,
+ new_items,
+ placement=placement,
fill_value=fill_value)
new_blocks.append(na_block)
new_blocks = _consolidate(new_blocks, new_items)
@@ -2943,7 +2951,7 @@ def reindex_items(self, new_items, indexer=None, copy=True, fill_value=None):
return self.__class__(new_blocks, new_axes)
- def _make_na_block(self, items, ref_items, fill_value=None):
+ def _make_na_block(self, items, ref_items, placement=None, fill_value=None):
# TODO: infer dtypes other than float64 from fill_value
if fill_value is None:
@@ -2954,8 +2962,7 @@ def _make_na_block(self, items, ref_items, fill_value=None):
dtype, fill_value = com._infer_dtype_from_scalar(fill_value)
block_values = np.empty(block_shape, dtype=dtype)
block_values.fill(fill_value)
- na_block = make_block(block_values, items, ref_items)
- return na_block
+ return make_block(block_values, items, ref_items, placement=placement)
def take(self, indexer, new_index=None, axis=1, verify=True):
if axis < 1:
| AssertionError while boolean indexing with non-unique columns
Hit this `AssertionError: cannot create BlockManager._ref_locs ...` while processing a dataframe with multiple empty column names. Appears to fail with 0.12/master and pass with 0.11.
```
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.rand(3,4), columns=['', '', 'C', 'D'])
df[df.C > .5]
```
```
Traceback (most recent call last):
File "test_fail.py", line 5, in <module>
df[df.C > .5]
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/frame.py", line 1830, in __getitem__
return self._getitem_frame(key)
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/frame.py", line 1898, in _getitem_frame
return self.where(key)
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/generic.py", line 2372, in where
cond = cond.reindex(**self._construct_axes_dict())
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/generic.py", line 1118, in reindex
return self._reindex_axes(axes, level, limit, method, fill_value, copy, takeable=takeable)._propogate_attributes(self)
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/frame.py", line 2413, in _reindex_axes
fill_value, limit, takeable=takeable)
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/frame.py", line 2436, in _reindex_columns
copy=copy, fill_value=fill_value, allow_dups=takeable)
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/generic.py", line 1218, in _reindex_with_indexers
fill_value=fill_value, allow_dups=allow_dups)
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/internals.py", line 2840, in reindex_indexer
return self._reindex_indexer_items(new_axis, indexer, fill_value)
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/internals.py", line 2885, in _reindex_indexer_items
return self.__class__(new_blocks, new_axes)
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/internals.py", line 1735, in __init__
self._set_ref_locs(do_refs=True)
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/internals.py", line 1863, in _set_ref_locs
"does not have _ref_locs set" % (block, labels))
AssertionError: cannot create BlockManager._ref_locs because block [BoolBlock: [C], 1 x 3, dtype: bool] with duplicate items [Index([u'', u'', u'C', u'D'], dtype=object)] does not have _ref_locs set
```
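Two hedged workarounds on affected versions: drive the mask with a positional column selection (which yields a Series rather than a one-column frame), or make the blank labels unique first:

``` python
df[df.iloc[:, 2] > .5]              # positional access gives a Series mask

df.columns = ['a', 'b', 'C', 'D']   # or de-duplicate the labels up front
df[df.C > .5]
```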
| 2013-09-19T12:23:50Z | [] | [] |
Traceback (most recent call last):
File "test_fail.py", line 5, in <module>
df[df.C > .5]
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/frame.py", line 1830, in __getitem__
return self._getitem_frame(key)
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/frame.py", line 1898, in _getitem_frame
return self.where(key)
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/generic.py", line 2372, in where
cond = cond.reindex(**self._construct_axes_dict())
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/generic.py", line 1118, in reindex
return self._reindex_axes(axes, level, limit, method, fill_value, copy, takeable=takeable)._propogate_attributes(self)
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/frame.py", line 2413, in _reindex_axes
fill_value, limit, takeable=takeable)
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/frame.py", line 2436, in _reindex_columns
copy=copy, fill_value=fill_value, allow_dups=takeable)
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/generic.py", line 1218, in _reindex_with_indexers
fill_value=fill_value, allow_dups=allow_dups)
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/internals.py", line 2840, in reindex_indexer
return self._reindex_indexer_items(new_axis, indexer, fill_value)
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/internals.py", line 2885, in _reindex_indexer_items
return self.__class__(new_blocks, new_axes)
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/internals.py", line 1735, in __init__
self._set_ref_locs(do_refs=True)
File "/home/gmd/.virtualenvs/pandas-master/local/lib/python2.7/site-packages/pandas-0.12.0_476_gb889fda-py2.7-linux-i686.egg/pandas/core/internals.py", line 1863, in _set_ref_locs
"does not have _ref_locs set" % (block, labels))
AssertionError: cannot create BlockManager._ref_locs because block [BoolBlock: [C], 1 x 3, dtype: bool] with duplicate items [Index([u'', u'', u'C', u'D'], dtype=object)] does not have _ref_locs set
| 14,904 |
||||
pandas-dev/pandas | pandas-dev__pandas-5018 | 1f00335b51d345d58e9574c2c5d86613214aba9b | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -497,6 +497,7 @@ Bug Fixes
- Fixed wrong index name during read_csv if using usecols. Applies to c parser only. (:issue:`4201`)
- ``Timestamp`` objects can now appear in the left hand side of a comparison
operation with a ``Series`` or ``DataFrame`` object (:issue:`4982`).
+ - Fix a bug when indexing with ``np.nan`` via ``iloc/loc`` (:issue:`5016`)
pandas 0.12.0
-------------
diff --git a/pandas/core/index.py b/pandas/core/index.py
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -424,7 +424,7 @@ def _convert_scalar_indexer(self, key, typ=None):
def to_int():
ikey = int(key)
if ikey != key:
- self._convert_indexer_error(key, 'label')
+ return self._convert_indexer_error(key, 'label')
return ikey
if typ == 'iloc':
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1,12 +1,12 @@
# pylint: disable=W0223
from datetime import datetime
-from pandas.core.common import _asarray_tuplesafe, is_list_like
from pandas.core.index import Index, MultiIndex, _ensure_index
from pandas.compat import range, zip
import pandas.compat as compat
import pandas.core.common as com
from pandas.core.common import (_is_bool_indexer, is_integer_dtype,
+ _asarray_tuplesafe, is_list_like, isnull,
ABCSeries, ABCDataFrame, ABCPanel)
import pandas.lib as lib
@@ -979,12 +979,20 @@ def _has_valid_type(self, key, axis):
else:
def error():
+ if isnull(key):
+ raise ValueError("cannot use label indexing with a null key")
raise KeyError("the label [%s] is not in the [%s]" % (key,self.obj._get_axis_name(axis)))
- key = self._convert_scalar_indexer(key, axis)
try:
+ key = self._convert_scalar_indexer(key, axis)
if not key in ax:
error()
+ except (TypeError) as e:
+
+ # python 3 type errors should be raised
+ if 'unorderable' in str(e): # pragma: no cover
+ error()
+ raise
except:
error()
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -97,8 +97,13 @@ def ref_locs(self):
indexer = self.ref_items.get_indexer(self.items)
indexer = com._ensure_platform_int(indexer)
if (indexer == -1).any():
- raise AssertionError('Some block items were not in block '
- 'ref_items')
+
+ # this means that we have nan's in our block
+ try:
+ indexer[indexer == -1] = np.arange(len(self.items))[isnull(self.items)]
+ except:
+ raise AssertionError('Some block items were not in block '
+ 'ref_items')
self._ref_locs = indexer
return self._ref_locs
@@ -2500,9 +2505,18 @@ def _consolidate_inplace(self):
def get(self, item):
if self.items.is_unique:
+
+ if isnull(item):
+ indexer = np.arange(len(self.items))[isnull(self.items)]
+ return self.get_for_nan_indexer(indexer)
+
_, block = self._find_block(item)
return block.get(item)
else:
+
+ if isnull(item):
+ raise ValueError("cannot label index with a null key")
+
indexer = self.items.get_loc(item)
ref_locs = np.array(self._set_ref_locs())
@@ -2528,14 +2542,31 @@ def get(self, item):
def iget(self, i):
item = self.items[i]
+
+ # unique
if self.items.is_unique:
- return self.get(item)
+ if notnull(item):
+ return self.get(item)
+ return self.get_for_nan_indexer(i)
- # compute the duplicative indexer if needed
ref_locs = self._set_ref_locs()
b, loc = ref_locs[i]
return b.iget(loc)
+ def get_for_nan_indexer(self, indexer):
+
+ # allow a single nan location indexer
+ if not np.isscalar(indexer):
+ if len(indexer) == 1:
+ indexer = indexer.item()
+ else:
+ raise ValueError("cannot label index with a null key")
+
+ # take a nan indexer and return the values
+ ref_locs = self._set_ref_locs(do_refs='force')
+ b, loc = ref_locs[indexer]
+ return b.iget(loc)
+
def get_scalar(self, tup):
"""
Retrieve single item
diff --git a/pandas/core/series.py b/pandas/core/series.py
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1053,10 +1053,10 @@ def __setitem__(self, key, value):
except TypeError as e:
if isinstance(key, tuple) and not isinstance(self.index, MultiIndex):
raise ValueError("Can only tuple-index with a MultiIndex")
+
# python 3 type errors should be raised
if 'unorderable' in str(e): # pragma: no cover
raise IndexError(key)
- # Could not hash item
if _is_bool_indexer(key):
key = _check_bool_indexer(self.index, key)
diff --git a/pandas/hashtable.pyx b/pandas/hashtable.pyx
--- a/pandas/hashtable.pyx
+++ b/pandas/hashtable.pyx
@@ -643,6 +643,8 @@ cdef class Float64HashTable(HashTable):
return uniques.to_array()
+na_sentinel = object
+
cdef class PyObjectHashTable(HashTable):
# cdef kh_pymap_t *table
@@ -660,6 +662,8 @@ cdef class PyObjectHashTable(HashTable):
def __contains__(self, object key):
cdef khiter_t k
hash(key)
+ if key != key or key is None:
+ key = na_sentinel
k = kh_get_pymap(self.table, <PyObject*>key)
return k != self.table.n_buckets
@@ -669,6 +673,8 @@ cdef class PyObjectHashTable(HashTable):
cpdef get_item(self, object val):
cdef khiter_t k
+ if val != val or val is None:
+ val = na_sentinel
k = kh_get_pymap(self.table, <PyObject*>val)
if k != self.table.n_buckets:
return self.table.vals[k]
@@ -677,6 +683,8 @@ cdef class PyObjectHashTable(HashTable):
def get_iter_test(self, object key, Py_ssize_t iterations):
cdef Py_ssize_t i, val
+ if key != key or key is None:
+ key = na_sentinel
for i in range(iterations):
k = kh_get_pymap(self.table, <PyObject*>key)
if k != self.table.n_buckets:
@@ -689,6 +697,8 @@ cdef class PyObjectHashTable(HashTable):
char* buf
hash(key)
+ if key != key or key is None:
+ key = na_sentinel
k = kh_put_pymap(self.table, <PyObject*>key, &ret)
# self.table.keys[k] = key
if kh_exist_pymap(self.table, k):
@@ -706,6 +716,9 @@ cdef class PyObjectHashTable(HashTable):
for i in range(n):
val = values[i]
hash(val)
+ if val != val or val is None:
+ val = na_sentinel
+
k = kh_put_pymap(self.table, <PyObject*>val, &ret)
self.table.vals[k] = i
@@ -720,6 +733,9 @@ cdef class PyObjectHashTable(HashTable):
for i in range(n):
val = values[i]
hash(val)
+ if val != val or val is None:
+ val = na_sentinel
+
k = kh_get_pymap(self.table, <PyObject*>val)
if k != self.table.n_buckets:
locs[i] = self.table.vals[k]
| NaN in columns produces TypeError
``` python
>>> pd.DataFrame([[1,2,3]], columns=[1.1,2.2,np.nan])
Traceback (most recent call last):
...
TypeError: ("'NoneType' object is not iterable", u'occurred at index 2.2')
```
Is this a bug or a feature? I don't see anything about that in the `v0.13.0.txt`, and it worked in 0.12:
``` python
>>> pd.DataFrame([[1,2,3]], columns=[1.1,2.2,np.nan])
1.1 2.2 NaN
0 1 2 3
```
| This is quite a tricky and non-trivial bug. It didn't work before, it just didn't error. Not really sure it should even be allowed (but it is supported in some places).
It's not very useful; you end up defeating name-based indexing, e.g. `df[np.nan]` doesn't even make sense
@jreback I found this when I tried to reproduce the issue #4987 in 0.13.
``` python
>>> df = pd.read_html(
'http://en.wikipedia.org/wiki/Vancouver',
match='Municipality', header=0
)[0]
>>> df
0 Country Municipality NaN
1 Ukraine Odessa 1944
2 Japan Yokohama 1965
3 United Kingdom Edinburgh[198][199] 1978
4 China Guangzhou[200] 1985
5 United States Los Angeles 1986
6 South Korea Seoul 2007
```
This is the result in 0.12; in 0.13 it just breaks... Maybe this should be a bug on the `read_html` function then?
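For anyone hitting this through a constructor rather than `read_html`, one workaround is to swap the NaN labels for sentinels before building the frame. A minimal sketch (the `unnamed_%d` placeholder name is my own invention, not a pandas convention):
``` python
import numpy as np
import pandas as pd

cols = [1.1, 2.2, np.nan]
# NaN != NaN, so `c != c` picks out the missing labels
safe = ['unnamed_%d' % i if c != c else c for i, c in enumerate(cols)]
df = pd.DataFrame([[1, 2, 3]], columns=safe)
```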
| 2013-09-28T16:19:45Z | [] | [] |
Traceback (most recent call last):
...
TypeError: ("'NoneType' object is not iterable", u'occurred at index 2.2')
| 14,938 |
|||
pandas-dev/pandas | pandas-dev__pandas-5177 | d1478811e45b24e4f42fdb74a9843ba218aff08c | diff --git a/pandas/core/common.py b/pandas/core/common.py
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -1053,7 +1053,7 @@ def _possibly_downcast_to_dtype(result, dtype):
# do a test on the first element, if it fails then we are done
r = result.ravel()
arr = np.array([ r[0] ])
- if (arr != arr.astype(dtype)).item():
+ if not np.allclose(arr,arr.astype(dtype)):
return result
# a comparable, e.g. a Decimal may slip in here
@@ -1062,8 +1062,14 @@ def _possibly_downcast_to_dtype(result, dtype):
if issubclass(result.dtype.type, (np.object_,np.number)) and notnull(result).all():
new_result = result.astype(dtype)
- if (new_result == result).all():
- return new_result
+ try:
+ if np.allclose(new_result,result):
+ return new_result
+ except:
+
+ # comparison of an object dtype with a number type could hit here
+ if (new_result == result).all():
+ return new_result
except:
pass
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -376,10 +376,10 @@ def downcast(self, dtypes=None):
dtype = dtypes.get(item, self._downcast_dtype)
if dtype is None:
- nv = _block_shape(values[i])
+ nv = _block_shape(values[i],ndim=self.ndim)
else:
nv = _possibly_downcast_to_dtype(values[i], dtype)
- nv = _block_shape(nv)
+ nv = _block_shape(nv,ndim=self.ndim)
blocks.append(make_block(nv, Index([item]), self.ref_items, ndim=self.ndim, fastpath=True))
| Failing interpolation test
```
$ nosetests pandas/tests/test_generic.py:TestSeries.test_interp_quad
F
======================================================================
FAIL: test_interp_quad (pandas.tests.test_generic.TestSeries)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/phillip/Documents/code/py/pandas/pandas/tests/test_generic.py", line 339, in test_interp_quad
assert_series_equal(result, expected)
File "/home/phillip/Documents/code/py/pandas/pandas/util/testing.py", line 452, in assert_series_equal
assert_attr_equal('dtype', left, right)
File "/home/phillip/Documents/code/py/pandas/pandas/util/testing.py", line 369, in assert_attr_equal
assert_equal(left_attr,right_attr,"attr is not equal [{0}]" .format(attr))
File "/home/phillip/Documents/code/py/pandas/pandas/util/testing.py", line 354, in assert_equal
assert a == b, "%s: %r != %r" % (msg.format(a,b), a, b)
AssertionError: attr is not equal [dtype]: dtype('int64') != dtype('float64')
----------------------------------------------------------------------
Ran 1 test in 0.041s
FAILED (failures=1)
```
| cc @TomAugspurger
Does this make any sense? A float block with array of items with int dtype?
```
ipdb> self
SingleBlockManager
Items: Int64Index([1, 2, 3, 4], dtype=int64)
FloatBlock: 4 dtype: float64
```
I'm in `core.internals.apply`
items are the 'index'...so that is right
@cpcloud where do you see this failing? I can't repro on 64 or 32-bit
spoke too soon!
@TomAugspurger that test just needs to have the expected be `int64` otherwise looks fine. as an FYI, maybe need
some tests that don't infer dtypes (e.g. set `downcast=False` to have no inferring of the results)
So are you saying to change expected to `expected = Series([1, 4, 9, 16], index=[1, 2, 3, 4])` (int type)? Because that fails for me. I'm trying to figure out why the result [1., 4., 9., 16.] doesn't get downcast for me right now.
@TomAugspurger you may also want to put some more creative logic in there for inference. Since we know that we are going to only float/int coming in, you could always infer the ints so that you will get ints if possible and the floats will stay floats.
@TomAugspurger the result IS downcast (to int64), it's the expected that is float64
> @TomAugspurger the result IS downcast (to int64), it's the expected that is float64
Not for me:
``` python
In [5]: result = sq.interpolate(method='quadratic')
In [6]: result
Out[6]:
1 1
2 4
3 9
4 16
dtype: float64
```
---
Can you clear this up for me? I think this is where things aren't going the same way. `b` is the float block with the nan interpolated.
``` python
ipdb> !b
FloatBlock: 4 dtype: float64
ipdb> !b.values
array([ 1., 4., 9., 16.])
ipdb> !b.downcast(downcast)[0].values # should be ints?
array([ 1., 4., 9., 16.])
ipdb> downcast
'infer'
```
That's in `pandas/core/internals.py(337)_maybe_downcast()`
I'll dig a bit deeper.
umm... in `/pandas/core/common.py(1064)_possibly_downcast_to_dtype()`:
``` python
ipdb> result
array([ 1., 4., 9., 16.])
ipdb> result.astype(dtype)
array([ 1, 4, 8, 16])
ipdb> dtype
dtype('int64')
ipdb>
```
but back in the interpreter:
``` python
In [10]: a.astype(np.int64)
Out[10]: array([ 1, 4, 9, 16])
In [11]: a = np.array([1., 4., 9., 16.])
In [12]: a.astype(np.int64)
Out[12]: array([ 1, 4, 9, 16])
```
This is a precision issue
```
array([ 1., 4., 9., 16.])
(Pdb) p result[0]
1.0
(Pdb) p result[1]
4.0
(Pdb) p result[2]
9.0000000000000036
(Pdb) p result[3]
16.0
```
thus this array is NOT equal to array([1,4,9,16])
thus it should not be downcast (though you can make a case that it's close 'enough') to be....
```
(Pdb) result == new_result
array([ True, True, False, True], dtype=bool)
(Pdb) result.round(8) == new_result
array([ True, True, True, True], dtype=bool)
```
should we round when trying to downcast to int?
I think I should just do `allclose` with the default tolerances (1e-5,1e-8).....
Fair enough. And users can override that with `s.interpolate(…, infer=False)` right? Where would the necessary changes need to be made?
Or were you saying `allclose` for the test?
yes...they can specify `infer=False` to turn off downcasting; I am going to put up a PR to basically use `allclose` to figure out if the values are downcastable, so going to change your test.
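For reference, a minimal sketch of what an `allclose`-based downcast check can look like; this illustrates the idea rather than the exact code that went into `_possibly_downcast_to_dtype`:
``` python
import numpy as np

def can_downcast(values, dtype):
    # only downcast when the round-trip stays within allclose's
    # default tolerances (rtol=1e-05, atol=1e-08)
    return np.allclose(values, values.astype(dtype))

can_downcast(np.array([1.0, 4.0, 9.0000000000000036, 16.0]), np.int64)  # True
can_downcast(np.array([1.5, 2.5]), np.int64)                            # False
```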
| 2013-10-10T18:45:36Z | [] | [] |
Traceback (most recent call last):
File "/home/phillip/Documents/code/py/pandas/pandas/tests/test_generic.py", line 339, in test_interp_quad
assert_series_equal(result, expected)
File "/home/phillip/Documents/code/py/pandas/pandas/util/testing.py", line 452, in assert_series_equal
assert_attr_equal('dtype', left, right)
File "/home/phillip/Documents/code/py/pandas/pandas/util/testing.py", line 369, in assert_attr_equal
assert_equal(left_attr,right_attr,"attr is not equal [{0}]" .format(attr))
File "/home/phillip/Documents/code/py/pandas/pandas/util/testing.py", line 354, in assert_equal
assert a == b, "%s: %r != %r" % (msg.format(a,b), a, b)
AssertionError: attr is not equal [dtype]: dtype('int64') != dtype('float64')
| 14,971 |
|||
pandas-dev/pandas | pandas-dev__pandas-5362 | da8983417ddc69b86189779014ffbd35e9b327ce | Failing interpolation test
```
$ nosetests pandas/tests/test_generic.py:TestSeries.test_interp_quad
F
======================================================================
FAIL: test_interp_quad (pandas.tests.test_generic.TestSeries)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/phillip/Documents/code/py/pandas/pandas/tests/test_generic.py", line 339, in test_interp_quad
assert_series_equal(result, expected)
File "/home/phillip/Documents/code/py/pandas/pandas/util/testing.py", line 452, in assert_series_equal
assert_attr_equal('dtype', left, right)
File "/home/phillip/Documents/code/py/pandas/pandas/util/testing.py", line 369, in assert_attr_equal
assert_equal(left_attr,right_attr,"attr is not equal [{0}]" .format(attr))
File "/home/phillip/Documents/code/py/pandas/pandas/util/testing.py", line 354, in assert_equal
assert a == b, "%s: %r != %r" % (msg.format(a,b), a, b)
AssertionError: attr is not equal [dtype]: dtype('int64') != dtype('float64')
----------------------------------------------------------------------
Ran 1 test in 0.041s
FAILED (failures=1)
```
| cc @TomAugspurger
Does this make any sense? A float block with array of items with int dtype?
```
ipdb> self
SingleBlockManager
Items: Int64Index([1, 2, 3, 4], dtype=int64)
FloatBlock: 4 dtype: float64
```
I'm in `core.internals.apply`
items are the 'index'...so that is right
@cpcloud where do you see this failing? I can't repro on 64 or 32-bit
spoke too soon!
@TomAugspurger that test just needs to have the expected be `int64` otherwise looks fine. as an FYI, maybe need
some tests that don't infer dtypes (e.g. set `downcast=False` to have no inferring of the results)
So are you saying to change expected to `expected = Series([1, 4, 9, 16], index=[1, 2, 3, 4])` (int type)? Because that fails for me. I'm trying to figure out why the result [1., 4., 9., 16.] doesn't get downcast for me right now.
@TomAugspurger you may also want to put some more creative logic in there for inference. Since we know that we are going to only float/int coming in, you could always infer the ints so that you will get ints if possible and the floats will stay floats.
@TomAugspurger the result IS downcast (to int64), it's the expected that is float64
> @TomAugspurger the result IS downcast (to int64), it's the expected that is float64
Not for me:
``` python
In [5]: result = sq.interpolate(method='quadratic')
In [6]: result
Out[6]:
1 1
2 4
3 9
4 16
dtype: float64
```
---
Can you clear this up for me? I think this is where things aren't going the same way. `b` is the float block with the nan interpolated.
``` python
ipdb> !b
FloatBlock: 4 dtype: float64
ipdb> !b.values
array([ 1., 4., 9., 16.])
ipdb> !b.downcast(downcast)[0].values # should be ints?
array([ 1., 4., 9., 16.])
ipdb> downcast
'infer'
```
That's in `pandas/core/internals.py(337)_maybe_downcast()`
I'll dig a bit deeper.
umm... in `/pandas/core/common.py(1064)_possibly_downcast_to_dtype()`:
``` python
ipdb> result
array([ 1., 4., 9., 16.])
ipdb> result.astype(dtype)
array([ 1, 4, 8, 16])
ipdb> dtype
dtype('int64')
ipdb>
```
but back in the interpreter:
``` python
In [10]: a.astype(np.int64)
Out[10]: array([ 1, 4, 9, 16])
In [11]: a = np.array([1., 4., 9., 16.])
In [12]: a.astype(np.int64)
Out[12]: array([ 1, 4, 9, 16])
```
This is a precision issue
```
array([ 1., 4., 9., 16.])
(Pdb) p result[0]
1.0
(Pdb) p result[1]
4.0
(Pdb) p result[2]
9.0000000000000036
(Pdb) p result[3]
16.0
```
thus this array is NOT equal to array([1,4,9,16])
thus it should not be downcast (though you can make a case that it's close 'enough') to be....
```
(Pdb) result == new_result
array([ True, True, False, True], dtype=bool)
(Pdb) result.round(8) == new_result
array([ True, True, True, True], dtype=bool)
```
should we round when trying to downcast to int?
I think I should just do `allclose` with the default tolerances (1e-5,1e-8).....
Fair enough. And users can override that with `s.interpolate(…, infer=False)` right? Where would the necessary changes need to be made?
Or were you saying `allclose` for the test?
yes...they can specify `infer=False` to turn off downcasting; I am going to put up a PR to basically use `allclose` to figure out if the values are downcastable, so going to change your test.
see #5177 I think that should do it
@TomAugspurger see if you think you need tests with `infer=False` (you may not)....
just did on v0.12.0-993-gda89834
```
======================================================================
FAIL: test_interp_quad (pandas.tests.test_generic.TestSeries)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/yoh/deb/gits/pkg-exppsy/pandas/pandas/tests/test_generic.py", line 483, in test_interp_quad
assert_series_equal(result, expected)
File "/home/yoh/deb/gits/pkg-exppsy/pandas/pandas/util/testing.py", line 416, in assert_series_equal
assert_attr_equal('dtype', left, right)
File "/home/yoh/deb/gits/pkg-exppsy/pandas/pandas/util/testing.py", line 399, in assert_attr_equal
assert_equal(left_attr,right_attr,"attr is not equal [{0}]" .format(attr))
File "/home/yoh/deb/gits/pkg-exppsy/pandas/pandas/util/testing.py", line 382, in assert_equal
assert a == b, "%s: %r != %r" % (msg.format(a,b), a, b)
AssertionError: attr is not equal [dtype]: dtype('float64') != dtype('int64')
```
@yarikoptic can you show ci/print_versions?
```
$> ci/print_versions.py
INSTALLED VERSIONS
------------------
Python: 2.7.5.final.0
OS: Linux 3.9-1-amd64 #1 SMP Debian 3.9.8-1 x86_64
byteorder: little
LC_ALL: None
LANG: en_US
pandas: 0.12.0.dev-09e62f5
Cython: 0.19.1
Numpy: 1.7.1
Scipy: 0.12.0
statsmodels: 0.6.0.dev-d11bf99
patsy: 0.1.0+dev
scikits.timeseries: Not installed
dateutil: 1.5
pytz: 2012c
bottleneck: Not installed
PyTables: 2.4.0
numexpr: 2.0.1
matplotlib: 1.3.1
openpyxl: 1.6.1
xlrd: 0.9.2
xlwt: 0.7.4
xlsxwriter: Not installed
sqlalchemy: 0.8.2
lxml: 3.2.0
bs4: 4.2.1
html5lib: 0.95-dev
bigquery: Not installed
apiclient: 1.2
```
I've seen this consistently on OSX for the last week and a half as well.
what kind of machine is this/linux kernel?
@jtratner can you see what this does?
```
In [11]: np.allclose(np.array([9.0005]),np.array([9.]))
Out[11]: False
In [12]: np.allclose(np.array([9.00005]),np.array([9.]))
Out[12]: True
```
maybe need to put an argument there
yes, won't have access until tonight.
@jreback
``` python
In [1]: np.allclose(np.array([9.0005]),np.array([9.]))
Out[1]: False
In [2]: np.allclose(np.array([9.00005]),np.array([9.]))
Out[2]: True
```
But I haven't sorted out my failing scipy tests due to precision errors, so I'm not sure how reliable my results are.
@TomAugspurger are you showing this failure as well?
Yep. The way I had it written originally (with expected as a float) passed on my system.
Should I change expected to a float and set infer=False for this test?
can you step through and see why it's not coercing? (it does on my system and on Travis)
put a breakpoint in `com._possibly_downcast_to_dtype`; it SHOULD coerce to int64 in the existing test, lmk where it returns
I think that's what I posted [up here](https://github.com/pydata/pandas/issues/5174#issuecomment-26074991). When it tried to downcast the result, the 9 got flipped to an 8.
Let me know if you were asking for something different.
@TomAugspurger ahh...I see....that is very odd....why would numpy flip the 9 float to an 8...(and only on Mac)...
I guess let's just change the test, e.g. `infer=False` and compare vs float....can you do a quick PR for that?
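The flip is consistent with plain truncation in `astype`: if the interpolated value lands just below the integer on that platform, truncating drops it a whole unit. A quick sketch of the effect (the second value is illustrative, chosen to sit just under 9):
``` python
import numpy as np

np.array([9.0000000000000036]).astype(np.int64)  # array([9]): just above 9
np.array([8.9999999999999964]).astype(np.int64)  # array([8]): just below 9
```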
| 2013-10-28T18:26:15Z | [] | [] |
Traceback (most recent call last):
File "/home/phillip/Documents/code/py/pandas/pandas/tests/test_generic.py", line 339, in test_interp_quad
assert_series_equal(result, expected)
File "/home/phillip/Documents/code/py/pandas/pandas/util/testing.py", line 452, in assert_series_equal
assert_attr_equal('dtype', left, right)
File "/home/phillip/Documents/code/py/pandas/pandas/util/testing.py", line 369, in assert_attr_equal
assert_equal(left_attr,right_attr,"attr is not equal [{0}]" .format(attr))
File "/home/phillip/Documents/code/py/pandas/pandas/util/testing.py", line 354, in assert_equal
assert a == b, "%s: %r != %r" % (msg.format(a,b), a, b)
AssertionError: attr is not equal [dtype]: dtype('int64') != dtype('float64')
| 15,012 |
||||
pandas-dev/pandas | pandas-dev__pandas-5432 | 2d2e8b5146024a90001f96862cdd0172adb4d1b8 | diff --git a/pandas/tseries/resample.py b/pandas/tseries/resample.py
--- a/pandas/tseries/resample.py
+++ b/pandas/tseries/resample.py
@@ -192,7 +192,9 @@ def _get_time_period_bins(self, axis):
labels = binner = PeriodIndex(start=axis[0], end=axis[-1],
freq=self.freq)
- end_stamps = (labels + 1).asfreq('D', 's').to_timestamp()
+ end_stamps = (labels + 1).asfreq(self.freq, 's').to_timestamp()
+ if axis.tzinfo:
+ end_stamps = end_stamps.tz_localize(axis.tzinfo)
bins = axis.searchsorted(end_stamps, side='left')
return binner, bins, labels
| "AssertionError: Index length did not match values" when resampling with kind='period'
Should raise that `kind='period'` is not accepted for `DatetimeIndex` when resampling
Possible issue with period index resampling hanging (see @cpcloud example below)
version = 0.12.0.dev-f61d7e3
This bug also exists in 0.11.
## The bug
``` Python
In [20]: s.resample('T', kind='period')
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-79-c290c0578332> in <module>()
----> 1 s.resample('T', kind='period')
/home/dk3810/workspace/python/pda/scripts/src/pandas/pandas/core/generic.py in resample(self, rule, how, axis, fill_method, closed, label, convention, kind, loffset, limit, base)
255 fill_method=fill_method, convention=convention,
256 limit=limit, base=base)
--> 257 return sampler.resample(self)
258
259 def first(self, offset):
/home/dk3810/workspace/python/pda/scripts/src/pandas/pandas/tseries/resample.py in resample(self, obj)
81
82 if isinstance(axis, DatetimeIndex):
---> 83 rs = self._resample_timestamps(obj)
84 elif isinstance(axis, PeriodIndex):
85 offset = to_offset(self.freq)
/home/dk3810/workspace/python/pda/scripts/src/pandas/pandas/tseries/resample.py in _resample_timestamps(self, obj)
224 # Irregular data, have to use groupby
225 grouped = obj.groupby(grouper, axis=self.axis)
--> 226 result = grouped.aggregate(self._agg_method)
227
228 if self.fill_method is not None:
/home/dk3810/workspace/python/pda/scripts/src/pandas/pandas/core/groupby.py in aggregate(self, func_or_funcs, *args, **kwargs)
1410 if isinstance(func_or_funcs, basestring):
-> 1411 return getattr(self, func_or_funcs)(*args, **kwargs)
1412
1413 if hasattr(func_or_funcs, '__iter__'):
/home/dk3810/workspace/python/pda/scripts/src/pandas/pandas/core/groupby.py in mean(self)
356 except Exception: # pragma: no cover
357 f = lambda x: x.mean(axis=self.axis)
--> 358 return self._python_agg_general(f)
359
360 def median(self):
/home/dk3810/workspace/python/pda/scripts/src/pandas/pandas/core/groupby.py in _python_agg_general(self, func, *args, **kwargs)
498 output[name] = self._try_cast(values[mask],result)
499
--> 500 return self._wrap_aggregated_output(output)
501
502 def _wrap_applied_output(self, *args, **kwargs):
/home/dk3810/workspace/python/pda/scripts/src/pandas/pandas/core/groupby.py in _wrap_aggregated_output(self, output, names)
1473 return DataFrame(output, index=index, columns=names)
1474 else:
-> 1475 return Series(output, index=index, name=self.name)
1476
1477 def _wrap_applied_output(self, keys, values, not_indexed_same=False):
/home/dk3810/workspace/python/pda/scripts/src/pandas/pandas/core/series.py in __new__(cls, data, index, dtype, name, copy)
494 else:
495 subarr = subarr.view(Series)
--> 496 subarr.index = index
497 subarr.name = name
498
/home/dk3810/workspace/python/pda/scripts/src/pandas/pandas/lib.so in pandas.lib.SeriesIndex.__set__ (pandas/lib.c:29775)()
AssertionError: Index length did not match values
```
## A workaround / expected behaviour
``` Python
In [81]: s.resample('T').to_period()
Out[81]:
2013-04-12 19:15 325.000000
2013-04-12 19:16 326.899994
...
2013-04-12 22:58 305.600006
2013-04-12 22:59 320.444458
Freq: T, Length: 225, dtype: float32
```
## More information
``` Python
In [83]: s
Out[83]:
2013-04-12 19:15:25 323
2013-04-12 19:15:28 NaN
...
2013-04-12 22:59:55 319
2013-04-12 22:59:56 NaN
2013-04-12 22:59:57 NaN
2013-04-12 22:59:58 NaN
2013-04-12 22:59:59 NaN
Name: aggregate, Length: 13034, dtype: float32
In [76]: s.index
Out[76]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-04-12 19:15:25, ..., 2013-04-12 22:59:59]
Length: 13034, Freq: None, Timezone: None
In [77]: s.head()
Out[77]:
2013-04-12 19:15:25 323
2013-04-12 19:15:28 NaN
2013-04-12 19:15:29 NaN
2013-04-12 19:15:30 NaN
2013-04-12 19:15:31 327
Name: aggregate, dtype: float32
In [78]: s.resample('T')
Out[78]:
2013-04-12 19:15:00 325.000000
2013-04-12 19:16:00 326.899994
...
2013-04-12 22:58:00 305.600006
2013-04-12 22:59:00 320.444458
Freq: T, Length: 225, dtype: float32
In [80]: pd.__version__
Out[80]: '0.12.0.dev-f61d7e3'
In [84]: type(s)
Out[84]: pandas.core.series.TimeSeries
```
(Please let me know if you need more info! I'm using Ubuntu 13.04. It's entirely possible that this isn't a bug but instead I am doing something stupid. Oh, and let me take this opportunity to thank the Pandas dev team! Pandas is awesome!!! THANK YOU!)
More informative exception when trying to use ``MS`` as period frequency
Closes #5332
| This is not supported. As you indicated, you can resample then `to_period`, or
`s.to_period().resample('T', kind='period')` will also work
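Spelled out as runnable code, the two workarounds look like this (a sketch; `s` is just an example second-frequency Series, not data from this issue):
``` python
import numpy as np
import pandas as pd

s = pd.Series(np.arange(10.0),
              index=pd.date_range('2013-01-01', periods=10, freq='s'))

s.resample('T').to_period()                  # resample timestamps, then convert
s.to_period().resample('T', kind='period')   # or convert first, then resample
```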
I'll make this an enhancement/bug request, because it should raise a helpful message (or be implemented)
thanks
wow, thanks for the very swift response ;)
I actually don't get an error here. It just hangs.
@cpcloud can you post what you did?
@jreback Yeah, sorry to hit and run like that :)
``` python
dind = period_range('1/1/2001', '1/1/2002').to_timestamp()
s = Series(randn(dind.size), dind)
s.resample('T', kind='period') # hangs here
```
The other ways of doing this (from above) work fine. Doesn't hang (throws the above error) for the simple case of
``` python
dind = period_range('1/1/2001', '1/2/2001').to_timestamp()
s = Series(randn(dind.size), dind)
s.resample('T', kind='period') # raises the AssertionError here rather than hanging
```
and starts to hang for `dind.size > 2`.
hmm...that might be something else
I replicated what @JackKelly was doing with this:
```
In [16]: s = Series(range(100),index=date_range('20130101',freq='s',periods=100),dtype='float')
In [17]: s[10:30] = np.nan
In [18]: s.to_period().resample('T',kind='period')
Out[18]:
2013-01-01 00:00 34.5
2013-01-01 00:01 79.5
Freq: T, dtype: float64
In [19]: s.resample('T',kind='period')
AssertionError: Index length did not match values
```
Yeah that works.
should I open an issue for the above? Seems to be a day-frequency issue.
yeh....(I put it in the header) so just ref this issue too
@jreback I would like to clear this up, since clearing it up would actually close 3 issues: this (#3609), #3612, and #3899. What's the original reason for not supporting this? My current fix loses the last element when resampling from datetimes to periods, so I'm guessing that might be one issue...but that's because datetime resampling lets you choose whether to include the start/end point, which periods don't have.
One issue is that the two-element case is not handled.
now i'm getting a segfault when i try to use `sum` ... joy
move back to 0.13 then?
I'm not convinced this is reasonable to do. I don't get a `KeyError` when using "MS" - I think it's just a backwards-compatibility thing where MS is always made into milliseconds (which is what "L" means too).
@jtratner I thought you had originally been seeing the `KeyError`? When I use pandas v0.12, I definitely am getting a `KeyError`.
The ability to use MS to mean milliseconds is inconsistent. There are parts of the Pandas code that explicitly prevent this and others that do not.
@cancan101 seems like the underlying problem is that this fails poorly:
``` python
pd.Period('2013').asfreq(freq='L')
```
Also, Wakari version of pandas gets a `KeyError` on `'MS'`, guess I'm just special.
I actually think there are two issues here. One is that MS, in my opinion, incorrectly maps to L. The second problem is that L is not recognized as a valid period frequency.
I'd prefer to address both. Also, what should MS map to, if anything?
Actually, what I'd really prefer is that passing an unknown frequency raises `ValueError("Unknown frequency: %s")` vs. `KeyError`. It's okay to separate out `'L'` /`'MS'`, etc. But definitely should warn instead of just removing it.
Initially, I thought that `MS` should map to `MonthBegin` but in thinking about that, it doesn't make sense for a period. This is because a period encompasses both the start and end time stamps so Month (i.e. `M`) should be all that is needed to describe a "month" period. I think that `MS` should not map to anything. As for `ValueError`, that is what my new tests checks for: https://github.com/pydata/pandas/pull/5340/files#diff-f9b276c4aa39a6161726d8b43ce62516R496
| 2013-11-04T20:11:40Z | [] | [] |
Traceback (most recent call last)
<ipython-input-79-c290c0578332> in <module>()
----> 1 s.resample('T', kind='period')
/home/dk3810/workspace/python/pda/scripts/src/pandas/pandas/core/generic.py in resample(self, rule, how, axis, fill_method, closed, label, convention, kind, loffset, limit, base)
255 fill_method=fill_method, convention=convention,
256 limit=limit, base=base)
--> 257 return sampler.resample(self)
258
259 def first(self, offset):
/home/dk3810/workspace/python/pda/scripts/src/pandas/pandas/tseries/resample.py in resample(self, obj)
81
82 if isinstance(axis, DatetimeIndex):
---> 83 rs = self._resample_timestamps(obj)
84 elif isinstance(axis, PeriodIndex):
85 offset = to_offset(self.freq)
/home/dk3810/workspace/python/pda/scripts/src/pandas/pandas/tseries/resample.py in _resample_timestamps(self, obj)
224 # Irregular data, have to use groupby
225 grouped = obj.groupby(grouper, axis=self.axis)
--> 226 result = grouped.aggregate(self._agg_method)
227
228 if self.fill_method is not None:
/home/dk3810/workspace/python/pda/scripts/src/pandas/pandas/core/groupby.py in aggregate(self, func_or_funcs, *args, **kwargs)
1410 if isinstance(func_or_funcs, basestring):
-> 1411 return getattr(self, func_or_funcs)(*args, **kwargs)
1412
1413 if hasattr(func_or_funcs, '__iter__'):
/home/dk3810/workspace/python/pda/scripts/src/pandas/pandas/core/groupby.py in mean(self)
356 except Exception: # pragma: no cover
357 f = lambda x: x.mean(axis=self.axis)
--> 358 return self._python_agg_general(f)
359
360 def median(self):
/home/dk3810/workspace/python/pda/scripts/src/pandas/pandas/core/groupby.py in _python_agg_general(self, func, *args, **kwargs)
498 output[name] = self._try_cast(values[mask],result)
499
--> 500 return self._wrap_aggregated_output(output)
501
502 def _wrap_applied_output(self, *args, **kwargs):
/home/dk3810/workspace/python/pda/scripts/src/pandas/pandas/core/groupby.py in _wrap_aggregated_output(self, output, names)
1473 return DataFrame(output, index=index, columns=names)
1474 else:
-> 1475 return Series(output, index=index, name=self.name)
1476
1477 def _wrap_applied_output(self, keys, values, not_indexed_same=False):
/home/dk3810/workspace/python/pda/scripts/src/pandas/pandas/core/series.py in __new__(cls, data, index, dtype, name, copy)
494 else:
495 subarr = subarr.view(Series)
--> 496 subarr.index = index
497 subarr.name = name
498
/home/dk3810/workspace/python/pda/scripts/src/pandas/pandas/lib.so in pandas.lib.SeriesIndex.__set__ (pandas/lib.c:29775)()
AssertionError: Index length did not match values
| 15,025 |
|||
pandas-dev/pandas | pandas-dev__pandas-5474 | f4b8e70e76c8eba3a26d571bae8acf17b9047e7a | diff --git a/pandas/core/panel.py b/pandas/core/panel.py
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -537,7 +537,7 @@ def __setitem__(self, key, value):
if value.shape != shape[1:]:
raise ValueError('shape of value must be {0}, shape of given '
'object was {1}'.format(shape[1:],
- value.shape))
+ tuple(map(int, value.shape))))
mat = np.asarray(value)
elif np.isscalar(value):
dtype, value = _infer_dtype_from_scalar(value)
diff --git a/pandas/src/inference.pyx b/pandas/src/inference.pyx
--- a/pandas/src/inference.pyx
+++ b/pandas/src/inference.pyx
@@ -16,15 +16,14 @@ _TYPE_MAP = {
np.string_: 'string',
np.unicode_: 'unicode',
np.bool_: 'boolean',
- np.datetime64 : 'datetime64'
+ np.datetime64 : 'datetime64',
+ np.timedelta64 : 'timedelta64'
}
try:
_TYPE_MAP[np.float128] = 'floating'
_TYPE_MAP[np.complex256] = 'complex'
_TYPE_MAP[np.float16] = 'floating'
- _TYPE_MAP[np.datetime64] = 'datetime64'
- _TYPE_MAP[np.timedelta64] = 'timedelta64'
except AttributeError:
pass
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -177,8 +177,14 @@ def get_locales(prefix=None, normalize=True,
For example::
locale.setlocale(locale.LC_ALL, locale_string)
+
+ On error will return None (no locale available, e.g. Windows)
+
"""
- raw_locales = locale_getter()
+ try:
+ raw_locales = locale_getter()
+ except:
+ return None
try:
raw_locales = str(raw_locales, encoding=pd.options.display.encoding)
| Error in test suite for timedelta operations
This is on the VM running pandas windows builds.
Windows XP
Python 2.7 32-bit
numpy 1.7.1
```
======================================================================
ERROR: test_timedelta_ops_with_missing_values (pandas.tseries.tests.test_timedeltas.TestTimedeltas)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\workspace\pandas-windows-test-py27\pandas\tseries\tests\test_timedeltas.py", line 247, in test_timedelta_ops_with_missing_values
actual = s1 + timedelta_NaT
File "C:\workspace\pandas-windows-test-py27\pandas\core\ops.py", line 461, in wrapper
time_converted = _TimeOp.maybe_convert_for_time_op(left, right, name)
File "C:\workspace\pandas-windows-test-py27\pandas\core\ops.py", line 428, in maybe_convert_for_time_op
return cls(left, right, name)
File "C:\workspace\pandas-windows-test-py27\pandas\core\ops.py", line 246, in __init__
rvalues = self._convert_to_array(right, name=name, other=lvalues)
File "C:\workspace\pandas-windows-test-py27\pandas\core\ops.py", line 363, in _convert_to_array
" operation".format(pa.array(values).dtype))
TypeError: incompatible type [timedelta64[ns]] for a datetime/timedelta operation
```
| @changhiskhan can you post / email how you set up this VM, so I can replicate?
i'll shoot you an email
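For reference, a minimal sketch of the operation that test exercises (my reconstruction of the setup, using `pd.to_timedelta`):
``` python
import numpy as np
import pandas as pd

s1 = pd.Series(pd.to_timedelta(['1 days', '2 days']))
timedelta_NaT = np.timedelta64('NaT')

# with np.timedelta64 mapped unconditionally in the inference _TYPE_MAP,
# the scalar is recognized as a timedelta instead of raising the TypeError
s1 + timedelta_NaT
```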
| 2013-11-08T16:04:19Z | [] | [] |
Traceback (most recent call last):
File "C:\workspace\pandas-windows-test-py27\pandas\tseries\tests\test_timedeltas.py", line 247, in test_timedelta_ops_with_missing_values
actual = s1 + timedelta_NaT
File "C:\workspace\pandas-windows-test-py27\pandas\core\ops.py", line 461, in wrapper
time_converted = _TimeOp.maybe_convert_for_time_op(left, right, name)
File "C:\workspace\pandas-windows-test-py27\pandas\core\ops.py", line 428, in maybe_convert_for_time_op
return cls(left, right, name)
File "C:\workspace\pandas-windows-test-py27\pandas\core\ops.py", line 246, in __init__
rvalues = self._convert_to_array(right, name=name, other=lvalues)
File "C:\workspace\pandas-windows-test-py27\pandas\core\ops.py", line 363, in _convert_to_array
" operation".format(pa.array(values).dtype))
TypeError: incompatible type [timedelta64[ns]] for a datetime/timedelta operation
| 15,032 |
|||
pandas-dev/pandas | pandas-dev__pandas-5477 | e684bdc6c279d576a7dfe5c39ab184c908b73f6e | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -74,6 +74,7 @@ class NDFrame(PandasObject):
'_data', 'name', '_cacher', '_is_copy', '_subtyp', '_index', '_default_kind', '_default_fill_value']
_internal_names_set = set(_internal_names)
_metadata = []
+ _is_copy = None
def __init__(self, data, axes=None, copy=False, dtype=None, fastpath=False):
| missing _is_copy causes strangeness
Note the duplication of the assignment is not a copy/paste error: the first one works, the second one fails, presumably because a different path is taken if the column already exists than if it doesn't.
```
>>> pd.__version__
'0.12.0-1081-ge684bdc'
>>> df = pd.DataFrame({"A": [1,2]})
>>> df._is_copy
False
>>> df.to_pickle("tmp.pk")
>>> df2 = pd.read_pickle("tmp.pk")
>>> hasattr(df2, "_is_copy")
False
>>> df2["B"] = df2["A"]
>>> df2["B"] = df2["A"]
Traceback (most recent call last):
File "<ipython-input-155-e1fb2db534a8>", line 1, in <module>
df2["B"] = df2["A"]
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1081_ge684bdc-py2.7-linux-i686.egg/pandas/core/frame.py", line 1841, in __setitem__
self._set_item(key, value)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1081_ge684bdc-py2.7-linux-i686.egg/pandas/core/frame.py", line 1907, in _set_item
self._check_setitem_copy()
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1081_ge684bdc-py2.7-linux-i686.egg/pandas/core/generic.py", line 1001, in _check_setitem_copy
if self._is_copy:
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1081_ge684bdc-py2.7-linux-i686.egg/pandas/core/generic.py", line 1525, in __getattr__
(type(self).__name__, name))
AttributeError: 'DataFrame' object has no attribute '_is_copy'
```
| easy fix
unpickling doesn't hit the normal creation machinery
thanks for the report
Why not just define '_is_copy=None' on NDFrame?
That way, the worst that happens is that we accidentally don't mark something as a copy.
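The class-level default works because unpickling restores the instance `__dict__` directly, without ever calling `__init__`; any attribute missing from the pickled dict then falls back to the class. A standalone sketch of the mechanism:
``` python
import pickle

class Frame(object):
    _is_copy = None              # class-level fallback

    def __init__(self):
        self._is_copy = False    # normally set per instance

f = Frame()
del f.__dict__['_is_copy']       # simulate a pickle that predates the attribute
f2 = pickle.loads(pickle.dumps(f))
print(f2._is_copy)               # None via the class attribute, no AttributeError
```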
| 2013-11-09T01:43:33Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-155-e1fb2db534a8>", line 1, in <module>
df2["B"] = df2["A"]
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1081_ge684bdc-py2.7-linux-i686.egg/pandas/core/frame.py", line 1841, in __setitem__
self._set_item(key, value)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1081_ge684bdc-py2.7-linux-i686.egg/pandas/core/frame.py", line 1907, in _set_item
self._check_setitem_copy()
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1081_ge684bdc-py2.7-linux-i686.egg/pandas/core/generic.py", line 1001, in _check_setitem_copy
if self._is_copy:
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1081_ge684bdc-py2.7-linux-i686.egg/pandas/core/generic.py", line 1525, in __getattr__
(type(self).__name__, name))
AttributeError: 'DataFrame' object has no attribute '_is_copy'
| 15,033 |
|||
pandas-dev/pandas | pandas-dev__pandas-5538 | d250d64a21c685d95ab61bf4761e81c4e71168d9 | tools.tests.test_util:TestLocaleUtils.test_set_locale fails spuriously on FreeBSD
```
$ nosetests pandas.tools.tests.test_util:TestLocaleUtils.test_set_locale
F
======================================================================
FAIL: test_set_locale (pandas.tools.tests.test_util.TestLocaleUtils)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/obrienjw/src/github/pandas/pandas/tools/tests/test_util.py", line 71, in test_set_locale
pass
File "/home/obrienjw/src/github/pandas/pandas/util/testing.py", line 1308, in __exit__
raise AssertionError("{0} not raised.".format(name))
AssertionError: Error not raised.
----------------------------------------------------------------------
Ran 1 test in 0.044s
FAILED (failures=1)
```
My environment:
```
$ git describe
v0.12.0-1110-gd250d64
$ ci/print_versions.py
INSTALLED VERSIONS
------------------
Python: 2.7.5.final.0
OS: FreeBSD 9.2-PRERELEASE FreeBSD 9.2-PRERELEASE #0 r254955: Tue Aug 27 11:01:00 EDT 2013 root@drivel.saltant.net:/usr/obj/usr/src/sys/NARB amd64
byteorder: little
LC_ALL: None
LANG: None
pandas: Not installed
Cython: 0.19.1
Numpy: 1.7.0
Scipy: 0.12.1
statsmodels: Not installed
patsy: Not installed
scikits.timeseries: Not installed
dateutil: 2.1
pytz: 2013.8
bottleneck: Not installed
PyTables: Not Installed
numexpr: 2.2.2
matplotlib: 1.2.0
openpyxl: Not installed
xlrd: Not installed
xlwt: Not installed
xlsxwriter: Not installed
sqlalchemy: Not installed
lxml: Not installed
bs4: Not installed
html5lib: Not installed
bigquery: Not installed
apiclient: Not installed
```
I will submit a pull request for this shortly.
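One way to make the test platform-tolerant is to probe whether a locale can actually be set rather than assuming `setlocale` will raise. A sketch of such a helper (not necessarily what the pull request will contain):
``` python
import locale

def can_set_locale(lc):
    """Return True if this platform accepts the given locale string."""
    saved = locale.setlocale(locale.LC_ALL)   # remember the current locale
    try:
        locale.setlocale(locale.LC_ALL, lc)
        return True
    except (ValueError, locale.Error):
        return False
    finally:
        locale.setlocale(locale.LC_ALL, saved)
```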
| 2013-11-17T14:24:40Z | [] | [] |
Traceback (most recent call last):
File "/home/obrienjw/src/github/pandas/pandas/tools/tests/test_util.py", line 71, in test_set_locale
pass
File "/home/obrienjw/src/github/pandas/pandas/util/testing.py", line 1308, in __exit__
raise AssertionError("{0} not raised.".format(name))
AssertionError: Error not raised.
| 15,042 |
|||||
pandas-dev/pandas | pandas-dev__pandas-5558 | e6aaf5e88470af1ed8c43ecaff0f59838ce03b65 | diff --git a/pandas/core/series.py b/pandas/core/series.py
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -870,12 +870,12 @@ def _tidy_repr(self, max_vals=20):
Internal function, should always return unicode string
"""
num = max_vals // 2
- head = self[:num]._get_repr(print_header=True, length=False,
- dtype=False, name=False)
- tail = self[-(max_vals - num):]._get_repr(print_header=False,
- length=False,
- name=False,
- dtype=False)
+ head = self.iloc[:num]._get_repr(print_header=True, length=False,
+ dtype=False, name=False)
+ tail = self.iloc[-(max_vals - num):]._get_repr(print_header=False,
+ length=False,
+ name=False,
+ dtype=False)
result = head + '\n...\n' + tail
result = '%s\n%s' % (result, self._repr_footer())
| BUG: KeyErrors during value_counts
I'm getting occasional KeyErrors during `.value_counts()` on trunk. It's pretty flaky, and this is the smallest example I had at hand (had one with only ~60 but lost it.)
```
>>> import pandas as pd
>>> print pd.__version__
0.12.0-1128-ge6aaf5e
>>>
>>> ser = {256: 2321.0, 1: 78.0, 2: 2716.0, 3: 0.0, 4: 369.0, 5: 0.0, 6: 269.0, 7: 0.0, 8: 0.0, 9: 0.0, 10: 3536.0, 11: 0.0, 12: 24.0, 13: 0.0, 14: 931.0, 15: 0.0, 16: 101.0, 17: 78.0, 18: 9643.0, 19: 0.0, 20: 0.0, 21: 0.0, 22: 63761.0, 23: 0.0, 24: 446.0, 25: 0.0, 26: 34773.0, 27: 0.0, 28: 729.0, 29: 78.0, 30: 0.0, 31: 0.0, 32: 3374.0, 33: 0.0, 34: 1391.0, 35: 0.0, 36: 361.0, 37: 0.0, 38: 61808.0, 39: 0.0, 40: 0.0, 41: 0.0, 42: 6677.0, 43: 0.0, 44: 802.0, 45: 0.0, 46: 2691.0, 47: 0.0, 48: 3582.0, 49: 0.0, 50: 734.0, 51: 0.0, 52: 627.0, 53: 70.0, 54: 2584.0, 55: 0.0, 56: 324.0, 57: 0.0, 58: 605.0, 59: 0.0, 60: 0.0, 61: 0.0, 62: 3989.0, 63: 10.0, 64: 42.0, 65: 0.0, 66: 904.0, 67: 0.0, 68: 88.0, 69: 70.0, 70: 8172.0, 71: 0.0, 72: 0.0, 73: 0.0, 74: 64902.0, 75: 0.0, 76: 347.0, 77: 0.0, 78: 36605.0, 79: 0.0, 80: 379.0, 81: 70.0, 82: 0.0, 83: 0.0, 84: 3001.0, 85: 0.0, 86: 1630.0, 87: 7.0, 88: 364.0, 89: 0.0, 90: 67404.0, 91: 9.0, 92: 0.0, 93: 0.0, 94: 7685.0, 95: 0.0, 96: 1017.0, 97: 0.0, 98: 2831.0, 99: 0.0, 100: 2963.0, 101: 0.0, 102: 854.0, 103: 0.0, 104: 0.0, 105: 0.0, 106: 0.0, 107: 0.0, 108: 0.0, 109: 0.0, 110: 0.0, 111: 0.0, 112: 0.0, 113: 0.0, 114: 0.0, 115: 0.0, 116: 0.0, 117: 0.0, 118: 0.0, 119: 0.0, 120: 0.0, 121: 0.0, 122: 0.0, 123: 0.0, 124: 0.0, 125: 0.0, 126: 67744.0, 127: 22.0, 128: 264.0, 129: 0.0, 260: 197.0, 268: 0.0, 265: 0.0, 269: 0.0, 261: 0.0, 266: 1198.0, 267: 0.0, 262: 2629.0, 258: 775.0, 257: 0.0, 263: 0.0, 259: 0.0, 264: 163.0, 250: 10326.0, 251: 0.0, 252: 1228.0, 253: 0.0, 254: 2769.0, 255: 0.0}
>>> s = pd.Series(ser)
>>> s.value_counts()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/base.py", line 55, in __repr__
return str(self)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/base.py", line 35, in __str__
return self.__bytes__()
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/base.py", line 47, in __bytes__
return self.__unicode__().encode(encoding, 'replace')
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/series.py", line 854, in __unicode__
result = self._tidy_repr(min(30, max_rows - 4))
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/series.py", line 873, in _tidy_repr
head = self[:num]._get_repr(print_header=True, length=False,
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/series.py", line 698, in __getslice__
return self._slice(slobj)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/series.py", line 478, in _slice
slobj = self.index._convert_slice_indexer(slobj, typ=typ or 'getitem')
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/index.py", line 1811, in _convert_slice_indexer
return self.slice_indexer(key.start, key.stop, key.step)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/index.py", line 1493, in slice_indexer
start_slice, end_slice = self.slice_locs(start, end)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/index.py", line 1555, in slice_locs
end_slice = self.get_loc(end)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/index.py", line 1017, in get_loc
return self._engine.get_loc(_values_from_object(key))
File "index.pyx", line 129, in pandas.index.IndexEngine.get_loc (pandas/index.c:3548)
File "index.pyx", line 149, in pandas.index.IndexEngine.get_loc (pandas/index.c:3428)
File "hashtable.pyx", line 674, in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:10784)
File "hashtable.pyx", line 682, in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:10737)
KeyError: 15
>>> 15 in s
True
>>> s[15]
0.0
```
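The repr path sliced with plain `self[:num]`, which on an integer index goes through label-based `slice_locs` and can raise when the index is unsorted, which is exactly what `value_counts()` output looks like. `.iloc` always slices by position instead; a small sketch of the difference:
``` python
import pandas as pd

s = pd.Series([3, 1, 2], index=[15, 4, 256])  # unsorted integer index

s.iloc[:2]    # purely positional: first two rows, labels never consulted
# s.loc[:10]  # label-based: raises KeyError on a non-monotonic index
```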
| 2013-11-20T19:52:12Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/base.py", line 55, in __repr__
return str(self)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/base.py", line 35, in __str__
return self.__bytes__()
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/base.py", line 47, in __bytes__
return self.__unicode__().encode(encoding, 'replace')
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/series.py", line 854, in __unicode__
result = self._tidy_repr(min(30, max_rows - 4))
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/series.py", line 873, in _tidy_repr
head = self[:num]._get_repr(print_header=True, length=False,
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/series.py", line 698, in __getslice__
return self._slice(slobj)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/series.py", line 478, in _slice
slobj = self.index._convert_slice_indexer(slobj, typ=typ or 'getitem')
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/index.py", line 1811, in _convert_slice_indexer
return self.slice_indexer(key.start, key.stop, key.step)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/index.py", line 1493, in slice_indexer
start_slice, end_slice = self.slice_locs(start, end)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/index.py", line 1555, in slice_locs
end_slice = self.get_loc(end)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.12.0_1128_ge6aaf5e-py2.7-linux-i686.egg/pandas/core/index.py", line 1017, in get_loc
return self._engine.get_loc(_values_from_object(key))
File "index.pyx", line 129, in pandas.index.IndexEngine.get_loc (pandas/index.c:3548)
File "index.pyx", line 149, in pandas.index.IndexEngine.get_loc (pandas/index.c:3428)
File "hashtable.pyx", line 674, in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:10784)
File "hashtable.pyx", line 682, in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:10737)
KeyError: 15
| 15,046 |
||||
pandas-dev/pandas | pandas-dev__pandas-5598 | d5ef4eba961fa6d2f53ce43f59accc16877dddb3 | Test failures on windows
The included error message is on 32-bit windows for python 2.6 but it's the same for 64-bit (waiting for other python versions to finish)
this is a blocker for 0.13 rc1
```
======================================================================
FAIL: test_groupby_return_type (pandas.tests.test_groupby.TestGroupBy)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\workspace\pandas-windows-test-py26\pandas\tests\test_groupby.py", line 331, in test_groupby_return_type
assert_frame_equal(result,expected)
File "C:\workspace\pandas-windows-test-py26\pandas\util\testing.py", line 475, in assert_frame_equal
check_less_precise=check_less_precise)
File "C:\workspace\pandas-windows-test-py26\pandas\util\testing.py", line 423, in assert_series_equal
assert_attr_equal('dtype', left, right)
File "C:\workspace\pandas-windows-test-py26\pandas\util\testing.py", line 407, in assert_attr_equal
assert_equal(left_attr,right_attr,"attr is not equal [{0}]" .format(attr))
File "C:\workspace\pandas-windows-test-py26\pandas\util\testing.py", line 390, in assert_equal
assert a == b, "%s: %r != %r" % (msg.format(a,b), a, b)
AssertionError: attr is not equal [dtype]: dtype('int64') != dtype('int32')
```
| cc @jreback
sure thanks
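For context: the default integer dtype follows the platform's C `long`, which is 32-bit on Windows builds of the NumPy of this era (even 64-bit Windows) and 64-bit on 64-bit Linux/OSX. Pinning the dtype on the expected value keeps such a test platform-independent; a sketch:
``` python
import numpy as np
import pandas as pd

np.array([1, 2, 3]).dtype   # int32 on Windows, int64 on 64-bit Linux

# construct the expected result with an explicit dtype instead
expected = pd.Series([1, 2, 3], dtype=np.int64)
```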
| 2013-11-27T12:55:23Z | [] | [] |
Traceback (most recent call last):
File "C:\workspace\pandas-windows-test-py26\pandas\tests\test_groupby.py", line 331, in test_groupby_return_type
assert_frame_equal(result,expected)
File "C:\workspace\pandas-windows-test-py26\pandas\util\testing.py", line 475, in assert_frame_equal
check_less_precise=check_less_precise)
File "C:\workspace\pandas-windows-test-py26\pandas\util\testing.py", line 423, in assert_series_equal
assert_attr_equal('dtype', left, right)
File "C:\workspace\pandas-windows-test-py26\pandas\util\testing.py", line 407, in assert_attr_equal
assert_equal(left_attr,right_attr,"attr is not equal [{0}]" .format(attr))
File "C:\workspace\pandas-windows-test-py26\pandas\util\testing.py", line 390, in assert_equal
assert a == b, "%s: %r != %r" % (msg.format(a,b), a, b)
AssertionError: attr is not equal [dtype]: dtype('int64') != dtype('int32')
| 15,050 |
||||
pandas-dev/pandas | pandas-dev__pandas-5723 | 39a12efef1809d9253264c7736006efcd881f420 | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -246,7 +246,7 @@ API Changes
(:issue:`4390`)
- allow ``ix/loc`` for Series/DataFrame/Panel to set on any axis even when
the single-key is not currently contained in the index for that axis
- (:issue:`2578`, :issue:`5226`, :issue:`5632`)
+ (:issue:`2578`, :issue:`5226`, :issue:`5632`, :issue:`5720`)
- Default export for ``to_clipboard`` is now csv with a sep of `\t` for
compat (:issue:`3368`)
- ``at`` now will enlarge the object inplace (and return the same)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1904,16 +1904,17 @@ def _ensure_valid_index(self, value):
if not len(self.index):
# GH5632, make sure that we are a Series convertible
- try:
- value = Series(value)
- except:
- pass
+ if is_list_like(value):
+ try:
+ value = Series(value)
+ except:
+ pass
- if not isinstance(value, Series):
- raise ValueError('Cannot set a frame with no defined index '
- 'and a value that cannot be converted to a '
- 'Series')
- self._data.set_axis(1, value.index.copy(), check_axis=False)
+ if not isinstance(value, Series):
+ raise ValueError('Cannot set a frame with no defined index '
+ 'and a value that cannot be converted to a '
+ 'Series')
+ self._data.set_axis(1, value.index.copy(), check_axis=False)
def _set_item(self, key, value):
"""
| Empty dataframe corrupted by column add (0.13rc1)
Noticed the following difference in behavior between 0.12 and 0.13rc1 when adding a column to an empty dataframe. Obviously, a weird case, and can be worked around easily.
```
df = pd.DataFrame({"A": [1, 2, 3], "B": [1.2, 4.2, 5.2]})
y = df[df.A > 5]
y['New'] = np.nan
print y
print y.values
```
```
(pandas-0.12)$ python test.py
Empty DataFrame
Columns: [A, B, New]
Index: []
[]
```
```
(pandas-master)$ python test.py
A B New
0 NaN NaN NaN
[1 rows x 3 columns]
Traceback (most recent call last):
File "do_fail.py", line 8, in <module>
print y.values
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0rc1_82_g66934c2-py2.7-linux-i686.egg/pandas/core/generic.py", line 1705, in values
return self.as_matrix()
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0rc1_82_g66934c2-py2.7-linux-i686.egg/pandas/core/generic.py", line 1697, in as_matrix
self._consolidate_inplace()
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0rc1_82_g66934c2-py2.7-linux-i686.egg/pandas/core/generic.py", line 1622, in _consolidate_inplace
self._data = self._protect_consolidate(f)
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0rc1_82_g66934c2-py2.7-linux-i686.egg/pandas/core/generic.py", line 1660, in _protect_consolidate
result = f()
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0rc1_82_g66934c2-py2.7-linux-i686.egg/pandas/core/generic.py", line 1621, in <lambda>
f = lambda: self._data.consolidate()
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0rc1_82_g66934c2-py2.7-linux-i686.egg/pandas/core/internals.py", line 2727, in consolidate
bm = self.__class__(self.blocks, self.axes)
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0rc1_82_g66934c2-py2.7-linux-i686.egg/pandas/core/internals.py", line 1945, in __init__
self._verify_integrity()
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0rc1_82_g66934c2-py2.7-linux-i686.egg/pandas/core/internals.py", line 2227, in _verify_integrity
tot_items, block.values.shape[1:], self.axes)
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0rc1_82_g66934c2-py2.7-linux-i686.egg/pandas/core/internals.py", line 3561, in construction_error
tuple(map(int, [len(ax) for ax in axes]))))
ValueError: Shape of passed values is (3, 0), indices imply (3, 1)
```
| this was fixed by #5633 (after 0.13rc1)
pls confirm with current master
thanks
confirmed this also fails with current master.
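With the fix in place the 0.12 behaviour is back: a scalar assigned into an empty selection adds an empty column rather than conjuring a row. A quick check sketch:
``` python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [1.2, 4.2, 5.2]})
y = df[df.A > 5]       # empty selection: zero rows

y['New'] = np.nan      # scalar, so is_list_like(value) is False
assert len(y) == 0 and 'New' in y.columns
assert y.values.shape == (0, 3)
```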
| 2013-12-17T20:36:18Z | [] | [] |
Traceback (most recent call last):
File "do_fail.py", line 8, in <module>
print y.values
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0rc1_82_g66934c2-py2.7-linux-i686.egg/pandas/core/generic.py", line 1705, in values
return self.as_matrix()
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0rc1_82_g66934c2-py2.7-linux-i686.egg/pandas/core/generic.py", line 1697, in as_matrix
self._consolidate_inplace()
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0rc1_82_g66934c2-py2.7-linux-i686.egg/pandas/core/generic.py", line 1622, in _consolidate_inplace
self._data = self._protect_consolidate(f)
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0rc1_82_g66934c2-py2.7-linux-i686.egg/pandas/core/generic.py", line 1660, in _protect_consolidate
result = f()
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0rc1_82_g66934c2-py2.7-linux-i686.egg/pandas/core/generic.py", line 1621, in <lambda>
f = lambda: self._data.consolidate()
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0rc1_82_g66934c2-py2.7-linux-i686.egg/pandas/core/internals.py", line 2727, in consolidate
bm = self.__class__(self.blocks, self.axes)
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0rc1_82_g66934c2-py2.7-linux-i686.egg/pandas/core/internals.py", line 1945, in __init__
self._verify_integrity()
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0rc1_82_g66934c2-py2.7-linux-i686.egg/pandas/core/internals.py", line 2227, in _verify_integrity
tot_items, block.values.shape[1:], self.axes)
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0rc1_82_g66934c2-py2.7-linux-i686.egg/pandas/core/internals.py", line 3561, in construction_error
tuple(map(int, [len(ax) for ax in axes]))))
ValueError: Shape of passed values is (3, 0), indices imply (3, 1)
| 15,073 |
|||
pandas-dev/pandas | pandas-dev__pandas-5772 | 62c41537061fa77be3ee59f07eaf5dcf5a54913c | diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -420,7 +420,10 @@ def can_do_equal_len():
self.obj._maybe_update_cacher(clear=True)
def _align_series(self, indexer, ser):
- # indexer to assign Series can be tuple or scalar
+ # indexer to assign Series can be tuple, slice, scalar
+ if isinstance(indexer, slice):
+ indexer = tuple([indexer])
+
if isinstance(indexer, tuple):
aligners = [not _is_null_slice(idx) for idx in indexer]
| Cannot increment after loc indexing
``` python
import pandas as pd
a = pd.Series(index=[4,5,6], data=0)
print a.loc[4:5]
a.loc[4:5] += 1
```
Yields:
```
4 0
5 0
Traceback (most recent call last):
File "temp1.py", line 9, in <module>
dtype: int64
a.loc[4:5] += 1
File "lib\site-packages\pandas\core\indexing.py", line 88, in __setitem__
self._setitem_with_indexer(indexer, value)
File "lib\site-packages\pandas\core\indexing.py", line 177, in _setitem_with_indexer
value = self._align_series(indexer, value)
File "lib\site-packages\pandas\core\indexing.py", line 206, in _align_series
raise ValueError('Incompatible indexer with Series')
ValueError: Incompatible indexer with Series
```
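A minimal sketch of a workaround available at the time, assuming (from the traceback) that the failure is specific to re-aligning a Series value; assigning a plain ndarray sidesteps `_align_series`:
``` python
import pandas as pd

a = pd.Series(0, index=[4, 5, 6])
# hypothetical workaround: materialize the right-hand side as an ndarray
# so no Series alignment is attempted during the setitem
a.loc[4:5] = a.loc[4:5].values + 1
```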
| 2013-12-25T12:54:50Z | [] | [] |
Traceback (most recent call last):
File "temp1.py", line 9, in <module>
dtype: int64
| 15,082 |
||||
pandas-dev/pandas | pandas-dev__pandas-5791 | 7b255724dab4637b7064c4116f954f266c9a036c | test_aggregate_item_by_item fails on SPARC py3
```
======================================================================
FAIL: test_aggregate_item_by_item (pandas.tests.test_groupby.TestGroupBy)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/buildslave/nd-bb-slave-sparc-wheezy/pandas-py3_x-wheezy-sparc/build/venv/lib/python3.2/site-packages/pandas-0.13.0rc1_128_g375a66e-py3.2-linux-sparc64.egg/pandas/tests/test_groupby.py", line 556, in test_aggregate_item_by_item
assert_almost_equal(result.xs('foo'), [foo] * K)
File "testing.pyx", line 58, in pandas._testing.assert_almost_equal (pandas/src/testing.c:2554)
File "testing.pyx", line 93, in pandas._testing.assert_almost_equal (pandas/src/testing.c:1796)
File "testing.pyx", line 113, in pandas._testing.assert_almost_equal (pandas/src/testing.c:2077)
AssertionError: First object is numeric, second is not: 5.0 != 5
```
http://nipy.bic.berkeley.edu/builders/pandas-py3.x-wheezy-sparc
http://nipy.bic.berkeley.edu/builders/pandas-py3.x-wheezy-sparc/builds/330/steps/shell_4/logs/stdio
I've got a patch for a separate unicode issue awaiting travis, with it and this fixed (cc @jreback), sparc should be green across the board.
| cc @yarikoptic
In the build environment I schroot into, python 2.7 and 3.3 work fine
3.2 seems to be installed as well, but numpy is not installed. Can you put 1.7.1 (or 1.8 is OK too) on that as well?
other deps seem ok.
this error passed on 3.3 (and the report is on 3.2)
| 2013-12-29T23:34:43Z | [] | [] |
Traceback (most recent call last):
File "/home/buildslave/nd-bb-slave-sparc-wheezy/pandas-py3_x-wheezy-sparc/build/venv/lib/python3.2/site-packages/pandas-0.13.0rc1_128_g375a66e-py3.2-linux-sparc64.egg/pandas/tests/test_groupby.py", line 556, in test_aggregate_item_by_item
assert_almost_equal(result.xs('foo'), [foo] * K)
File "testing.pyx", line 58, in pandas._testing.assert_almost_equal (pandas/src/testing.c:2554)
File "testing.pyx", line 93, in pandas._testing.assert_almost_equal (pandas/src/testing.c:1796)
File "testing.pyx", line 113, in pandas._testing.assert_almost_equal (pandas/src/testing.c:2077)
AssertionError: First object is numeric, second is not: 5.0 != 5
| 15,086 |
||||
pandas-dev/pandas | pandas-dev__pandas-5844 | 3881f03f20e6626be260140a45343793b175a4d8 | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -486,7 +486,8 @@ def pxd(name):
msgpack_ext = Extension('pandas.msgpack',
sources = [srcpath('msgpack',
- suffix=suffix, subdir='')],
+ suffix=suffix if suffix == '.pyx' else '.cpp',
+ subdir='')],
language='c++',
include_dirs=common_include,
define_macros=macros)
@@ -499,7 +500,7 @@ def pxd(name):
if suffix == '.pyx' and 'setuptools' in sys.modules:
# undo dumb setuptools bug clobbering .pyx sources back to .c
for ext in extensions:
- if ext.sources[0].endswith('.c'):
+ if ext.sources[0].endswith(('.c','.cpp')):
root, _ = os.path.splitext(ext.sources[0])
ext.sources[0] = root + suffix
| python3.2 difficulty to use pre-generated pandas/msgpack.cpp
on all aged Debian/Ubuntus (e.g. wheezy) where I build using pre-cythonized sources, python3.2 seems to become blind to the .cpp file and tries only to look for msgpack.c, not msgpack.cpp:
```
~/pandas-0.13.0# python3.2 setup.py build_ext
running build_ext
Traceback (most recent call last):
File "setup.py", line 583, in <module>
**setuptools_kwargs)
File "/usr/lib/python3.2/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.2/distutils/dist.py", line 917, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.2/distutils/dist.py", line 936, in run_command
cmd_obj.run()
File "/usr/lib/python3.2/distutils/command/build_ext.py", line 344, in run
self.build_extensions()
File "setup.py", line 346, in build_extensions
self.check_cython_extensions(self.extensions)
File "setup.py", line 343, in check_cython_extensions
""" % src)
Exception: Cython-generated file 'pandas/msgpack.c' not found.
Cython is required to compile pandas from a development branch.
Please install Cython or download a release package of pandas.
~/pandas-0.13.0# ls -l pandas/msgpack.cpp
-rw-r--r-- 1 pbuilder pbuilder 570047 Jan 3 03:44 pandas/msgpack.cpp
```
I wonder if anyone ran into this misbehavior (before I start patching left and right ;-) )
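For orientation, the gist of the eventual setup.py change (a simplified sketch of the patch above, not the literal code): when building from pre-cythonized sources, the msgpack extension must point at the shipped `.cpp` file because it is a C++ extension:
``` python
suffix = '.c'  # the value used for non-Cython builds
# pick the pre-generated source; msgpack is C++, so it ships as .cpp
msgpack_src = 'pandas/msgpack' + (suffix if suffix == '.pyx' else '.cpp')
```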
| this has a specific entry in setup.py to check for the cpp, but @wesm would know more
your line numbers are different that what I see in current master. is this a current setup.py file?
this one I believe is just a symptom of broken setuptools or distutils with python3.2
there were changes in setup.py after the 0.13.0 release; that is why the line numbers might be different?
@yarikoptic sorry.. there were some slight changes (in formatting)
can you work around this?
I've never had to deal with this myself.
| 2014-01-04T03:28:50Z | [] | [] |
Traceback (most recent call last):
File "setup.py", line 583, in <module>
**setuptools_kwargs)
File "/usr/lib/python3.2/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.2/distutils/dist.py", line 917, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.2/distutils/dist.py", line 936, in run_command
cmd_obj.run()
File "/usr/lib/python3.2/distutils/command/build_ext.py", line 344, in run
self.build_extensions()
File "setup.py", line 346, in build_extensions
self.check_cython_extensions(self.extensions)
File "setup.py", line 343, in check_cython_extensions
""" % src)
Exception: Cython-generated file 'pandas/msgpack.c' not found.
| 15,091 |
|||
pandas-dev/pandas | pandas-dev__pandas-5894 | caec432c705cffb9e8c4fd2bae2d93bd37e00367 | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -83,6 +83,7 @@ Bug Fixes
- Bug in groupby dtype conversion with datetimelike (:issue:`5869`)
- Regresssion in handling of empty Series as indexers to Series (:issue:`5877`)
- Bug in internal caching, related to (:issue:`5727`)
+ - Testing bug in reading json/msgpack from a non-filepath on windows under py3 (:issue:`5874`)
pandas 0.13.0
-------------
diff --git a/pandas/io/json.py b/pandas/io/json.py
--- a/pandas/io/json.py
+++ b/pandas/io/json.py
@@ -173,7 +173,15 @@ def read_json(path_or_buf=None, orient=None, typ='frame', dtype=True,
filepath_or_buffer, _ = get_filepath_or_buffer(path_or_buf)
if isinstance(filepath_or_buffer, compat.string_types):
- if os.path.exists(filepath_or_buffer):
+ try:
+ exists = os.path.exists(filepath_or_buffer)
+
+ # if the filepath is too long will raise here
+ # 5874
+ except (TypeError,ValueError):
+ exists = False
+
+ if exists:
with open(filepath_or_buffer, 'r') as fh:
json = fh.read()
else:
diff --git a/pandas/io/packers.py b/pandas/io/packers.py
--- a/pandas/io/packers.py
+++ b/pandas/io/packers.py
@@ -147,11 +147,11 @@ def read(fh):
if isinstance(path_or_buf, compat.string_types):
try:
- path_exists = os.path.exists(path_or_buf)
- except (TypeError):
- path_exists = False
+ exists = os.path.exists(path_or_buf)
+ except (TypeError,ValueError):
+ exists = False
- if path_exists:
+ if exists:
with open(path_or_buf, 'rb') as fh:
return read(fh)
| test_json:test_round_trip_exception_ fails on Python 3 on Windows
The API design issue discussed in #5655 again. pd.read_json guesses that a 150KB long string is a filename.
``` python
======================================================================
ERROR: test_round_trip_exception_ (pandas.io.tests.test_json.test_pandas.TestPandasContainer)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\workspace\pandas_tests@2\BITS\64\PYTHONVER\33\pandas\util\testing.py", line 1057, in network_wrapper
return t(*args, **kwargs)
File "C:\workspace\pandas_tests@2\BITS\64\PYTHONVER\33\pandas\io\tests\test_json\test_pandas.py", line 593, in test_round_trip_exception_
result = pd.read_json(s)
File "C:\workspace\pandas_tests@2\BITS\64\PYTHONVER\33\pandas\io\json.py", line 176, in read_json
if os.path.exists(filepath_or_buffer):
File "c:\envs\33-64\lib\genericpath.py", line 18, in exists
os.stat(path)
nose.proxy.ValueError: ValueError: path too long for Windows
-------------------- >> begin captured stdout << ---------------------
Failed: ValueError('path too long for Windows',)
--------------------- >> end captured stdout << ----------------------
```
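The patch above amounts to wrapping the existence check; a standalone sketch of that guard:
``` python
import os

def _exists_safely(path_or_buf):
    # mirrors the patched read_json: on Windows a very long string makes
    # os.path.exists raise instead of returning False, so treat
    # TypeError/ValueError as "not a path"
    try:
        return os.path.exists(path_or_buf)
    except (TypeError, ValueError):
        return False
```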
| 2014-01-09T16:38:56Z | [] | [] |
Traceback (most recent call last):
File "C:\workspace\pandas_tests@2\BITS\64\PYTHONVER\33\pandas\util\testing.py", line 1057, in network_wrapper
return t(*args, **kwargs)
File "C:\workspace\pandas_tests@2\BITS\64\PYTHONVER\33\pandas\io\tests\test_json\test_pandas.py", line 593, in test_round_trip_exception_
result = pd.read_json(s)
File "C:\workspace\pandas_tests@2\BITS\64\PYTHONVER\33\pandas\io\json.py", line 176, in read_json
if os.path.exists(filepath_or_buffer):
File "c:\envs\33-64\lib\genericpath.py", line 18, in exists
os.stat(path)
nose.proxy.ValueError: ValueError: path too long for Windows
| 15,101 |
||||
pandas-dev/pandas | pandas-dev__pandas-5979 | 46e1f146419f199b3a1fa37f837c9c4aaf6e2e64 | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1444,7 +1444,7 @@ def info(self, verbose=True, buf=None, max_cols=None):
count= ""
if show_counts:
- count = counts[i]
+ count = counts.iloc[i]
lines.append(_put_str(col, space) +
tmpl % (count, dtype))
diff --git a/vb_suite/parser.py b/vb_suite/parser_vb.py
similarity index 100%
rename from vb_suite/parser.py
rename to vb_suite/parser_vb.py
diff --git a/vb_suite/suite.py b/vb_suite/suite.py
--- a/vb_suite/suite.py
+++ b/vb_suite/suite.py
@@ -17,12 +17,12 @@
'miscellaneous',
'panel_ctor',
'packers',
- 'parser',
+ 'parser_vb',
'plotting',
'reindex',
'replace',
'sparse',
- 'strings',
+ 'strings',
'reshape',
'stat_ops',
'timeseries',
| vb_suite parser.py can shadow builtin parser module
This is identical to Tom Augspurger's issue [here](https://groups.google.com/d/msg/pydata/pxFe_ISulyA/Upyjnx96P-UJ).
Under some circumstances, you can't run the performance tests:
```
~/sys/pandas$ ./test_perf.sh
Traceback (most recent call last):
File "/home/mcneil/sys/pandas/vb_suite/test_perf.py", line 55, in <module>
import pandas as pd
File "/usr/local/lib/python2.7/dist-packages/pandas-0.13.0_75_g7d9e9fa-py2.7-linux-i686.egg/pandas/__init__.py", line 37, in <module>
import pandas.core.config_init
File "/usr/local/lib/python2.7/dist-packages/pandas-0.13.0_75_g7d9e9fa-py2.7-linux-i686.egg/pandas/core/config_init.py", line 17, in <module>
from pandas.core.format import detect_console_encoding
File "/usr/local/lib/python2.7/dist-packages/pandas-0.13.0_75_g7d9e9fa-py2.7-linux-i686.egg/pandas/core/format.py", line 9, in <module>
from pandas.core.index import Index, MultiIndex, _ensure_index
File "/usr/local/lib/python2.7/dist-packages/pandas-0.13.0_75_g7d9e9fa-py2.7-linux-i686.egg/pandas/core/index.py", line 11, in <module>
import pandas.index as _index
File "index.pyx", line 34, in init pandas.index (pandas/index.c:16227)
File "/usr/local/lib/python2.7/dist-packages/pytz/__init__.py", line 29, in <module>
from pkg_resources import resource_stream
File "/usr/local/lib/python2.7/dist-packages/pkg_resources.py", line 76, in <module>
import parser
File "/home/mcneil/sys/pandas/vb_suite/parser.py", line 1, in <module>
from vbench.api import Benchmark
File "/usr/local/lib/python2.7/dist-packages/vbench/__init__.py", line 2, in <module>
import vbench.log
File "/usr/local/lib/python2.7/dist-packages/vbench/log.py", line 34, in <module>
from vbench.config import is_interactive
File "/usr/local/lib/python2.7/dist-packages/vbench/config.py", line 4, in <module>
TIME_ZONE = pytz.timezone('US/Eastern')
AttributeError: 'module' object has no attribute 'timezone'
```
This happens because pytz does
```
from pkg_resources import resource_stream
```
which itself calls
```
import parser
```
which is picking up `pandas/vb_suite/parser.py` instead of the builtin parser module. Net result: the `pytz` import never manages to get to the point where `timezone` is defined.
As expected, renaming `parser.py` solves the problem (and restoring the original name creates it again).
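A minimal illustration of the shadowing mechanism (hypothetical session; the key point is that the script's directory is searched before the standard library):
``` python
# run from within pandas/vb_suite/, where the local parser.py lives
import sys
print(sys.path[0])  # the script's directory, ahead of the stdlib
import parser       # resolves to ./parser.py, not the builtin module,
                    # which is what breaks pkg_resources (via pytz)
```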
| 2014-01-16T19:38:24Z | [] | [] |
Traceback (most recent call last):
File "/home/mcneil/sys/pandas/vb_suite/test_perf.py", line 55, in <module>
import pandas as pd
File "/usr/local/lib/python2.7/dist-packages/pandas-0.13.0_75_g7d9e9fa-py2.7-linux-i686.egg/pandas/__init__.py", line 37, in <module>
import pandas.core.config_init
File "/usr/local/lib/python2.7/dist-packages/pandas-0.13.0_75_g7d9e9fa-py2.7-linux-i686.egg/pandas/core/config_init.py", line 17, in <module>
from pandas.core.format import detect_console_encoding
File "/usr/local/lib/python2.7/dist-packages/pandas-0.13.0_75_g7d9e9fa-py2.7-linux-i686.egg/pandas/core/format.py", line 9, in <module>
from pandas.core.index import Index, MultiIndex, _ensure_index
File "/usr/local/lib/python2.7/dist-packages/pandas-0.13.0_75_g7d9e9fa-py2.7-linux-i686.egg/pandas/core/index.py", line 11, in <module>
import pandas.index as _index
File "index.pyx", line 34, in init pandas.index (pandas/index.c:16227)
File "/usr/local/lib/python2.7/dist-packages/pytz/__init__.py", line 29, in <module>
from pkg_resources import resource_stream
File "/usr/local/lib/python2.7/dist-packages/pkg_resources.py", line 76, in <module>
import parser
File "/home/mcneil/sys/pandas/vb_suite/parser.py", line 1, in <module>
from vbench.api import Benchmark
File "/usr/local/lib/python2.7/dist-packages/vbench/__init__.py", line 2, in <module>
import vbench.log
File "/usr/local/lib/python2.7/dist-packages/vbench/log.py", line 34, in <module>
from vbench.config import is_interactive
File "/usr/local/lib/python2.7/dist-packages/vbench/config.py", line 4, in <module>
TIME_ZONE = pytz.timezone('US/Eastern')
AttributeError: 'module' object has no attribute 'timezone'
| 15,118 |
||||
pandas-dev/pandas | pandas-dev__pandas-6004 | bf2ca400484385ac1dde0a2faa83b99a75e60bb2 | diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -568,7 +568,10 @@ def flush(self, fsync=False):
if self._handle is not None:
self._handle.flush()
if fsync:
- os.fsync(self._handle.fileno())
+ try:
+ os.fsync(self._handle.fileno())
+ except:
+ pass
def get(self, key):
"""
| s390 test failure: file descriptor cannot be a negative integer
full log https://buildd.debian.org/status/fetch.php?pkg=pandas&arch=s390x&ver=0.13.0-2&stamp=1390095775&file=log
```
======================================================================
ERROR: test_flush (pandas.io.tests.test_pytables.TestHDFStore)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/«PKGBUILDDIR»/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_pytables.py", line 523, in test_flush
store.flush(fsync=True)
File "/«PKGBUILDDIR»/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 571, in flush
os.fsync(self._handle.fileno())
ValueError: file descriptor cannot be a negative integer (-1)
```
may be due to a recent update to pytables 3.0.0-2
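The patch above simply swallows the failure; a sketch of the guarded flush, assuming (per the log) that this PyTables build reports a `fileno()` of -1:
``` python
import os

def flush(handle, fsync=False):
    handle.flush()
    if fsync:
        try:
            os.fsync(handle.fileno())  # can be -1 on some builds
        except Exception:              # the patch uses a bare except
            pass
```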
| 2014-01-19T18:44:36Z | [] | [] |
Traceback (most recent call last):
File "/«PKGBUILDDIR»/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_pytables.py", line 523, in test_flush
store.flush(fsync=True)
File "/«PKGBUILDDIR»/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 571, in flush
os.fsync(self._handle.fileno())
ValueError: file descriptor cannot be a negative integer (-1)
| 15,122 |
||||
pandas-dev/pandas | pandas-dev__pandas-6022 | 2f24ff210d3488217a7b79ce469908f489797e8d | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -126,8 +126,10 @@ Bug Fixes
of pandas in QTConsole, now fixed. If you're using an older version and
need to supress the warnings, see (:issue:`5922`).
- Bug in merging ``timedelta`` dtypes (:issue:`5695`)
- - Bug in plotting.scatter_matrix function. Wrong alignment among diagonal
+ - Bug in plotting.scatter_matrix function. Wrong alignment among diagonal
and off-diagonal plots, see (:issue:`5497`).
+ - Regression in Series with a multi-index via ix (:issue:`6018`)
+ - Bug in Series.xs with a multi-index (:issue:`6018`)
pandas 0.13.0
-------------
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2037,145 +2037,6 @@ def _sanitize_column(self, key, value):
def _series(self):
return self._data.get_series_dict()
- def xs(self, key, axis=0, level=None, copy=True, drop_level=True):
- """
- Returns a cross-section (row(s) or column(s)) from the DataFrame.
- Defaults to cross-section on the rows (axis=0).
-
- Parameters
- ----------
- key : object
- Some label contained in the index, or partially in a MultiIndex
- axis : int, default 0
- Axis to retrieve cross-section on
- level : object, defaults to first n levels (n=1 or len(key))
- In case of a key partially contained in a MultiIndex, indicate
- which levels are used. Levels can be referred by label or position.
- copy : boolean, default True
- Whether to make a copy of the data
- drop_level : boolean, default True
- If False, returns object with same levels as self.
-
- Examples
- --------
- >>> df
- A B C
- a 4 5 2
- b 4 0 9
- c 9 7 3
- >>> df.xs('a')
- A 4
- B 5
- C 2
- Name: a
- >>> df.xs('C', axis=1)
- a 2
- b 9
- c 3
- Name: C
- >>> s = df.xs('a', copy=False)
- >>> s['A'] = 100
- >>> df
- A B C
- a 100 5 2
- b 4 0 9
- c 9 7 3
-
-
- >>> df
- A B C D
- first second third
- bar one 1 4 1 8 9
- two 1 7 5 5 0
- baz one 1 6 6 8 0
- three 2 5 3 5 3
- >>> df.xs(('baz', 'three'))
- A B C D
- third
- 2 5 3 5 3
- >>> df.xs('one', level=1)
- A B C D
- first third
- bar 1 4 1 8 9
- baz 1 6 6 8 0
- >>> df.xs(('baz', 2), level=[0, 'third'])
- A B C D
- second
- three 5 3 5 3
-
- Returns
- -------
- xs : Series or DataFrame
-
- """
- axis = self._get_axis_number(axis)
- labels = self._get_axis(axis)
- if level is not None:
- loc, new_ax = labels.get_loc_level(key, level=level,
- drop_level=drop_level)
-
- if not copy and not isinstance(loc, slice):
- raise ValueError('Cannot retrieve view (copy=False)')
-
- # level = 0
- loc_is_slice = isinstance(loc, slice)
- if not loc_is_slice:
- indexer = [slice(None)] * 2
- indexer[axis] = loc
- indexer = tuple(indexer)
- else:
- indexer = loc
- lev_num = labels._get_level_number(level)
- if labels.levels[lev_num].inferred_type == 'integer':
- indexer = self.index[loc]
-
- # select on the correct axis
- if axis == 1 and loc_is_slice:
- indexer = slice(None), indexer
- result = self.ix[indexer]
- setattr(result, result._get_axis_name(axis), new_ax)
- return result
-
- if axis == 1:
- data = self[key]
- if copy:
- data = data.copy()
- return data
-
- self._consolidate_inplace()
-
- index = self.index
- if isinstance(index, MultiIndex):
- loc, new_index = self.index.get_loc_level(key,
- drop_level=drop_level)
- else:
- loc = self.index.get_loc(key)
-
- if isinstance(loc, np.ndarray):
- if loc.dtype == np.bool_:
- inds, = loc.nonzero()
- return self.take(inds, axis=axis, convert=False)
- else:
- return self.take(loc, axis=axis, convert=True)
-
- if not np.isscalar(loc):
- new_index = self.index[loc]
-
- if np.isscalar(loc):
-
- new_values, copy = self._data.fast_2d_xs(loc, copy=copy)
- result = Series(new_values, index=self.columns,
- name=self.index[loc])
- result.is_copy=True
-
- else:
- result = self[loc]
- result.index = new_index
-
- return result
-
- _xs = xs
-
def lookup(self, row_labels, col_labels):
"""Label-based "fancy indexing" function for DataFrame.
Given equal-length arrays of row and column labels, return an
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1133,6 +1133,145 @@ def take(self, indices, axis=0, convert=True, is_copy=True):
return result
+ def xs(self, key, axis=0, level=None, copy=True, drop_level=True):
+ """
+ Returns a cross-section (row(s) or column(s)) from the Series/DataFrame.
+ Defaults to cross-section on the rows (axis=0).
+
+ Parameters
+ ----------
+ key : object
+ Some label contained in the index, or partially in a MultiIndex
+ axis : int, default 0
+ Axis to retrieve cross-section on
+ level : object, defaults to first n levels (n=1 or len(key))
+ In case of a key partially contained in a MultiIndex, indicate
+ which levels are used. Levels can be referred by label or position.
+ copy : boolean, default True
+ Whether to make a copy of the data
+ drop_level : boolean, default True
+ If False, returns object with same levels as self.
+
+ Examples
+ --------
+ >>> df
+ A B C
+ a 4 5 2
+ b 4 0 9
+ c 9 7 3
+ >>> df.xs('a')
+ A 4
+ B 5
+ C 2
+ Name: a
+ >>> df.xs('C', axis=1)
+ a 2
+ b 9
+ c 3
+ Name: C
+ >>> s = df.xs('a', copy=False)
+ >>> s['A'] = 100
+ >>> df
+ A B C
+ a 100 5 2
+ b 4 0 9
+ c 9 7 3
+
+
+ >>> df
+ A B C D
+ first second third
+ bar one 1 4 1 8 9
+ two 1 7 5 5 0
+ baz one 1 6 6 8 0
+ three 2 5 3 5 3
+ >>> df.xs(('baz', 'three'))
+ A B C D
+ third
+ 2 5 3 5 3
+ >>> df.xs('one', level=1)
+ A B C D
+ first third
+ bar 1 4 1 8 9
+ baz 1 6 6 8 0
+ >>> df.xs(('baz', 2), level=[0, 'third'])
+ A B C D
+ second
+ three 5 3 5 3
+
+ Returns
+ -------
+ xs : Series or DataFrame
+
+ """
+ axis = self._get_axis_number(axis)
+ labels = self._get_axis(axis)
+ if level is not None:
+ loc, new_ax = labels.get_loc_level(key, level=level,
+ drop_level=drop_level)
+
+ if not copy and not isinstance(loc, slice):
+ raise ValueError('Cannot retrieve view (copy=False)')
+
+ # level = 0
+ loc_is_slice = isinstance(loc, slice)
+ if not loc_is_slice:
+ indexer = [slice(None)] * self.ndim
+ indexer[axis] = loc
+ indexer = tuple(indexer)
+ else:
+ indexer = loc
+ lev_num = labels._get_level_number(level)
+ if labels.levels[lev_num].inferred_type == 'integer':
+ indexer = self.index[loc]
+
+ # select on the correct axis
+ if axis == 1 and loc_is_slice:
+ indexer = slice(None), indexer
+ result = self.ix[indexer]
+ setattr(result, result._get_axis_name(axis), new_ax)
+ return result
+
+ if axis == 1:
+ data = self[key]
+ if copy:
+ data = data.copy()
+ return data
+
+ self._consolidate_inplace()
+
+ index = self.index
+ if isinstance(index, MultiIndex):
+ loc, new_index = self.index.get_loc_level(key,
+ drop_level=drop_level)
+ else:
+ loc = self.index.get_loc(key)
+
+ if isinstance(loc, np.ndarray):
+ if loc.dtype == np.bool_:
+ inds, = loc.nonzero()
+ return self.take(inds, axis=axis, convert=False)
+ else:
+ return self.take(loc, axis=axis, convert=True)
+
+ if not np.isscalar(loc):
+ new_index = self.index[loc]
+
+ if np.isscalar(loc):
+ from pandas import Series
+ new_values, copy = self._data.fast_2d_xs(loc, copy=copy)
+ result = Series(new_values, index=self.columns,
+ name=self.index[loc])
+ result.is_copy=True
+
+ else:
+ result = self[loc]
+ result.index = new_index
+
+ return result
+
+ _xs = xs
+
# TODO: Check if this was clearer in 0.12
def select(self, crit, axis=0):
"""
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -57,7 +57,9 @@ def __getitem__(self, key):
def _get_label(self, label, axis=0):
# ueber-hack
- if (isinstance(label, tuple) and
+ if self.ndim == 1:
+ return self.obj[label]
+ elif (isinstance(label, tuple) and
isinstance(label[axis], slice)):
raise IndexingError('no slices here')
@@ -1364,46 +1366,6 @@ def _crit(v):
return not both_none and (_crit(obj.start) and _crit(obj.stop))
-class _SeriesIndexer(_IXIndexer):
-
- """
- Class to support fancy indexing, potentially using labels
-
- Notes
- -----
- Indexing based on labels is INCLUSIVE
- Slicing uses PYTHON SEMANTICS (endpoint is excluded)
-
- If Index contains int labels, these will be used rather than the locations,
- so be very careful (ambiguous).
-
- Examples
- --------
- >>> ts.ix[5:10] # equivalent to ts[5:10]
- >>> ts.ix[[date1, date2, date3]]
- >>> ts.ix[date1:date2] = 0
- """
-
- def _get_label(self, key, axis=0):
- return self.obj[key]
-
- def _get_loc(self, key, axis=0):
- return self.obj.values[key]
-
- def _slice(self, indexer, axis=0, typ=None):
- return self.obj._get_values(indexer)
-
- def _setitem_with_indexer(self, indexer, value):
-
- # need to delegate to the super setter
- if isinstance(indexer, dict):
- return super(_SeriesIndexer, self)._setitem_with_indexer(indexer,
- value)
-
- # fast access
- self.obj._set_values(indexer, value)
-
-
def _check_bool_indexer(ax, key):
# boolean indexing, need to check that the data are aligned, otherwise
# disallowed
diff --git a/pandas/core/series.py b/pandas/core/series.py
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -28,7 +28,7 @@
from pandas.core.index import (Index, MultiIndex, InvalidIndexError,
_ensure_index, _handle_legacy_indexes)
from pandas.core.indexing import (
- _SeriesIndexer, _check_bool_indexer, _check_slice_bounds,
+ _check_bool_indexer, _check_slice_bounds,
_is_index_slice, _maybe_convert_indices)
from pandas.core import generic
from pandas.core.internals import SingleBlockManager
@@ -445,11 +445,6 @@ def _maybe_box(self, values):
return values
- def _xs(self, key, axis=0, level=None, copy=True):
- return self.__getitem__(key)
-
- xs = _xs
-
def _ixs(self, i, axis=0):
"""
Return the i-th value or values in the Series by location
@@ -2473,10 +2468,6 @@ def to_period(self, freq=None, copy=True):
Series._add_numeric_operations()
_INDEX_TYPES = ndarray, Index, list, tuple
-# reinstall the SeriesIndexer
-# defined in indexing.py; pylint: disable=E0203
-Series._create_indexer('ix', _SeriesIndexer)
-
#------------------------------------------------------------------------------
# Supplementary functions
| Multi-Index Slice Selection in Series
Slicing in a Series with a MultiIndex does not seem to work on master with .ix or .loc. This is a regression from previous versions.
``` python
In [54]:
s = pd.Series([1,2,3])
s.index = pd.MultiIndex.from_tuples([(0,0),(1,1), (2,1)])
s[:,1]
Out[55]:
1 2
2 3
In [56]:
s.ix[:,1]
IndexingError
Traceback (most recent call last)
<ipython-input-56-3163789d3245> in <module>()
----> 1 s.ix[:,1]
/cellar/users/agross/anaconda2/lib/python2.7/site-packages/pandas-0.13.0_120_gdd89ce4-py2.7-linux-x86_64.egg/pandas/core/indexing.pyc in __getitem__(self, key)
52 pass
53
---> 54 return self._getitem_tuple(key)
55 else:
56 return self._getitem_axis(key, axis=0)
/cellar/users/agross/anaconda2/lib/python2.7/site-packages/pandas-0.13.0_120_gdd89ce4-py2.7-linux-x86_64.egg/pandas/core/indexing.pyc in _getitem_tuple(self, tup)
593
594 # no multi-index, so validate all of the indexers
--> 595 self._has_valid_tuple(tup)
596
597 # ugly hack for GH #836
/cellar/users/agross/anaconda2/lib/python2.7/site-packages/pandas-0.13.0_120_gdd89ce4-py2.7-linux-x86_64.egg/pandas/core/indexing.pyc in _has_valid_tuple(self, key)
103 for i, k in enumerate(key):
104 if i >= self.obj.ndim:
--> 105 raise IndexingError('Too many indexers')
106 if not self._has_valid_type(k, i):
107 raise ValueError("Location based indexing can only have [%s] "
IndexingError: Too many indexers
```
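While `.ix`/`.loc` are broken here, plain `__getitem__` still cross-sections correctly (as the first example above shows), so it can serve as a stopgap:
``` python
import pandas as pd

s = pd.Series([1, 2, 3],
              index=pd.MultiIndex.from_tuples([(0, 0), (1, 1), (2, 1)]))
s[:, 1]  # entries whose second index level equals 1, i.e. values 2 and 3
```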
| 2014-01-21T14:11:10Z | [] | [] |
Traceback (most recent call last)
<ipython-input-56-3163789d3245> in <module>()
----> 1 s.ix[:,1]
/cellar/users/agross/anaconda2/lib/python2.7/site-packages/pandas-0.13.0_120_gdd89ce4-py2.7-linux-x86_64.egg/pandas/core/indexing.pyc in __getitem__(self, key)
52 pass
53
---> 54 return self._getitem_tuple(key)
55 else:
56 return self._getitem_axis(key, axis=0)
/cellar/users/agross/anaconda2/lib/python2.7/site-packages/pandas-0.13.0_120_gdd89ce4-py2.7-linux-x86_64.egg/pandas/core/indexing.pyc in _getitem_tuple(self, tup)
593
594 # no multi-index, so validate all of the indexers
--> 595 self._has_valid_tuple(tup)
596
597 # ugly hack for GH #836
/cellar/users/agross/anaconda2/lib/python2.7/site-packages/pandas-0.13.0_120_gdd89ce4-py2.7-linux-x86_64.egg/pandas/core/indexing.pyc in _has_valid_tuple(self, key)
103 for i, k in enumerate(key):
104 if i >= self.obj.ndim:
--> 105 raise IndexingError('Too many indexers')
106 if not self._has_valid_type(k, i):
107 raise ValueError("Location based indexing can only have [%s] "
IndexingError: Too many indexers
| 15,126 |
||||
pandas-dev/pandas | pandas-dev__pandas-6097 | 07e9e20a22f9919a8238548c12310555f34b3b34 | wonky test on travis TestHDFStore:test_append_with_data_columns
```
======================================================================
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.2_with_system_site_packages/lib/python3.2/site-packages/pandas/io/tests/test_pytables.py", line 1330, in test_append_with_data_columns
tm.assert_frame_equal(result, expected)
File "/home/travis/virtualenv/python3.2_with_system_site_packages/lib/python3.2/site-packages/pandas/util/testing.py", line 479, in assert_frame_equal
assert_index_equal(left.index, right.index)
File "/home/travis/virtualenv/python3.2_with_system_site_packages/lib/python3.2/site-packages/pandas/util/testing.py", line 423, in assert_index_equal
right.dtype))
AssertionError: [index] left [datetime64[ns] <class 'pandas.tseries.index.DatetimeIndex'>
[2000-01-24, ..., 2000-01-28]
Length: 5, Freq: B, Timezone: None], right [<class 'pandas.tseries.index.DatetimeIndex'>
[2000-01-24, ..., 2000-01-28]
Length: 3, Freq: None, Timezone: None datetime64[ns]]
```
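A sketch of the index mismatch the assertion reports (a hypothetical reconstruction from the message: the expected index keeps its 'B' freq, the result comes back shorter with the freq lost):
``` python
import pandas as pd

left = pd.date_range('2000-01-24', periods=5, freq='B')
right = pd.DatetimeIndex(list(left[:3]))  # freq degrades to None
# assert_index_equal(left, right) fails on both length and freq
```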
| yep
let me just skip it
Ok, but skipped tests are no tests.
ScatterCI should allow us to run some stats on the usual suspects in a little bit; it saves everything.
ok
I have looked at it before
let me see
| 2014-01-25T17:29:33Z | [] | [] |
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.2_with_system_site_packages/lib/python3.2/site-packages/pandas/io/tests/test_pytables.py", line 1330, in test_append_with_data_columns
tm.assert_frame_equal(result, expected)
| 15,139 |
||||
pandas-dev/pandas | pandas-dev__pandas-6122 | a323cc9a71369503252f2ad9e10fda2d635317c4 | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -157,6 +157,8 @@ Bug Fixes
- Bug with insert of strings into DatetimeIndex (:issue:`5818`)
- Fixed unicode bug in to_html/HTML repr (:issue:`6098`)
- Fixed missing arg validation in get_options_data (:issue:`6105`)
+ - Bug in assignment with duplicate columns in a frame where the locations
+ are a slice (e.g. next to each other) (:issue:`6120`)
pandas 0.13.0
-------------
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -2969,7 +2969,7 @@ def _set_item(item, arr):
# we are inserting one by one, so the index can go from unique
# to non-unique during the loop, need to have _ref_locs defined
# at all times
- if np.isscalar(item) and com.is_list_like(loc):
+ if np.isscalar(item) and (com.is_list_like(loc) or isinstance(loc, slice)):
# first delete from all blocks
self.delete(item)
| Assign to df with repeated column fails
If you have a DataFrame with a repeated or non-unique column, then some assignments fail.
``` python
df2 = pd.DataFrame(np.random.randn(10,2), columns=['that', 'that'])
df2
Out[10]:
that that
0 1 1
1 1 1
2 1 1
3 1 1
4 1 1
5 1 1
6 1 1
7 1 1
8 1 1
9 1 1
[10 rows x 2 columns]
```
This is float data and the following works:
``` python
df2['that'] = 1.0
```
However, this fails with an error and breaks the dataframe (e.g. a subsequent repr will also fail.)
``` python
df2['that'] = 1
Traceback (most recent call last):
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/ipython-1.1.0_1_ahl1-py2.7.egg/IPython/core/interactiveshell.py", line 2830, in run_code
exec code_obj in self.user_global_ns, self.user_ns
File "<ipython-input-11-8701f5b0efe4>", line 1, in <module>
df2['that'] = 1
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 1879, in __setitem__
self._set_item(key, value)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 1960, in _set_item
NDFrame._set_item(self, key, value)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/generic.py", line 1057, in _set_item
self._data.set(key, value)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/internals.py", line 2968, in set
_set_item(item, arr[None, :])
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/internals.py", line 2927, in _set_item
self._add_new_block(item, arr, loc=None)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/internals.py", line 3108, in _add_new_block
new_block = make_block(value, self.items[loc:loc + 1].copy(),
TypeError: unsupported operand type(s) for +: 'slice' and 'int'
```
I stepped through the code and it looked like most places handle repeated columns OK, except the code that reallocates arrays when the dtype changes.
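Given that, a hypothetical stopgap is to avoid the dtype-changing assignment entirely and keep the value in the block's existing dtype:
``` python
import numpy as np
import pandas as pd

df2 = pd.DataFrame(np.random.randn(10, 2), columns=['that', 'that'])
df2['that'] = 1.0       # float into a float block: works per the report
df2['that'] = float(1)  # cast integers up front instead of assigning 1
```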
I've tested this against pandas 0.13.0 and the latest master. Here's the output of installed versions when running on the master:
commit: None
python: 2.7.3.final.0
python-bits: 64
OS: Linux
OS-release: 2.6.18-308.el5
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB
pandas: 0.13.0-292-g4dcecb0
Cython: 0.16
numpy: 1.7.1
scipy: 0.9.0
statsmodels: None
patsy: None
scikits.timeseries: None
dateutil: 1.5
pytz: None
bottleneck: 0.6.0
tables: 2.3.1-1
numexpr: 2.0.1
matplotlib: 1.1.1
openpyxl: None
xlrd: 0.8.0
xlwt: None
xlsxwriter: None
sqlalchemy: None
lxml: 2.3.6
bs4: None
html5lib: None
bq: None
apiclient: None
| looks like an untested case; I'll take a look
| 2014-01-27T12:41:28Z | [] | [] |
Traceback (most recent call last):
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/ipython-1.1.0_1_ahl1-py2.7.egg/IPython/core/interactiveshell.py", line 2830, in run_code
exec code_obj in self.user_global_ns, self.user_ns
File "<ipython-input-11-8701f5b0efe4>", line 1, in <module>
df2['that'] = 1
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 1879, in __setitem__
self._set_item(key, value)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 1960, in _set_item
NDFrame._set_item(self, key, value)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/generic.py", line 1057, in _set_item
self._data.set(key, value)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/internals.py", line 2968, in set
_set_item(item, arr[None, :])
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/internals.py", line 2927, in _set_item
self._add_new_block(item, arr, loc=None)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/internals.py", line 3108, in _add_new_block
new_block = make_block(value, self.items[loc:loc + 1].copy(),
TypeError: unsupported operand type(s) for +: 'slice' and 'int'
| 15,147 |
|||
pandas-dev/pandas | pandas-dev__pandas-6123 | ebe46412938bcfedfdeb1b200162cb8747b292c0 | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -159,6 +159,8 @@ Bug Fixes
- Fixed missing arg validation in get_options_data (:issue:`6105`)
- Bug in assignment with duplicate columns in a frame where the locations
are a slice (e.g. next to each other) (:issue:`6120`)
+ - Bug in propogating _ref_locs during construction of a DataFrame with dups
+ index/columns (:issue:`6121`)
pandas 0.13.0
-------------
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -116,6 +116,25 @@ def ref_locs(self):
self._ref_locs = indexer
return self._ref_locs
+ def take_ref_locs(self, indexer):
+ """
+ need to preserve the ref_locs and just shift them
+ return None if ref_locs is None
+
+ see GH6509
+ """
+
+ ref_locs = self._ref_locs
+ if ref_locs is None:
+ return None
+
+ tindexer = np.ones(len(ref_locs),dtype=bool)
+ tindexer[indexer] = False
+ tindexer = tindexer.astype(int).cumsum()[indexer]
+ ref_locs = ref_locs[indexer]
+ ref_locs -= tindexer
+ return ref_locs
+
def reset_ref_locs(self):
""" reset the block ref_locs """
self._ref_locs = np.empty(len(self.items), dtype='int64')
@@ -866,13 +885,20 @@ def func(x):
ndim=self.ndim, klass=self.__class__, fastpath=True)]
return self._maybe_downcast(blocks, downcast)
- def take(self, indexer, ref_items, axis=1):
+ def take(self, indexer, ref_items, new_axis, axis=1):
if axis < 1:
raise AssertionError('axis must be at least 1, got %d' % axis)
new_values = com.take_nd(self.values, indexer, axis=axis,
allow_fill=False)
+
+ # need to preserve the ref_locs and just shift them
+ # GH6121
+ ref_locs = None
+ if not new_axis.is_unique:
+ ref_locs = self._ref_locs
+
return [make_block(new_values, self.items, ref_items, ndim=self.ndim,
- klass=self.__class__, fastpath=True)]
+ klass=self.__class__, placement=ref_locs, fastpath=True)]
def get_values(self, dtype=None):
return self.values
@@ -1820,7 +1846,7 @@ def shift(self, indexer, periods, axis=0):
new_values[periods:] = fill_value
return [self.make_block(new_values)]
- def take(self, indexer, ref_items, axis=1):
+ def take(self, indexer, ref_items, new_axis, axis=1):
""" going to take our items
along the long dimension"""
if axis < 1:
@@ -2601,18 +2627,7 @@ def get_slice(self, slobj, axis=0, raise_on_error=False):
if len(self.blocks) == 1:
blk = self.blocks[0]
-
- # see GH 6059
- ref_locs = blk._ref_locs
- if ref_locs is not None:
-
- # need to preserve the ref_locs and just shift them
- indexer = np.ones(len(ref_locs),dtype=bool)
- indexer[slobj] = False
- indexer = indexer.astype(int).cumsum()[slobj]
- ref_locs = ref_locs[slobj]
- ref_locs -= indexer
-
+ ref_locs = blk.take_ref_locs(slobj)
newb = make_block(blk._slice(slobj), new_items, new_items,
klass=blk.__class__, fastpath=True,
placement=ref_locs)
@@ -3371,6 +3386,7 @@ def take(self, indexer, new_index=None, axis=1, verify=True):
if axis < 1:
raise AssertionError('axis must be at least 1, got %d' % axis)
+ self._consolidate_inplace()
if isinstance(indexer, list):
indexer = np.array(indexer)
@@ -3388,8 +3404,12 @@ def take(self, indexer, new_index=None, axis=1, verify=True):
new_index = self.axes[axis].take(indexer)
new_axes[axis] = new_index
- return self.apply('take', axes=new_axes, indexer=indexer,
- ref_items=new_axes[0], axis=axis)
+ return self.apply('take',
+ axes=new_axes,
+ indexer=indexer,
+ ref_items=new_axes[0],
+ new_axis=new_axes[axis],
+ axis=axis)
def merge(self, other, lsuffix=None, rsuffix=None):
if not self._is_indexed_like(other):
| Slice by column then by index fails if columns/rows are repeated.
We've found a problem where repeating a row and a column in a DataFrame fails with a "Cannot create BlockManager._ref_locs" assertion error.
The dataframe is very simple:
``` python
df = pd.DataFrame(np.arange(25.).reshape(5,5),
index=['a', 'b', 'c', 'd', 'e'],
columns=['a', 'b', 'c', 'd', 'e'])
```
And we pull the data out like this:
``` python
z = df[['a', 'c', 'a']]
z.ix[['a', 'c', 'a']]
Traceback (most recent call last):
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/ipython-1.1.0_1_ahl1-py2.7.egg/IPython/core/interactiveshell.py", line 2830, in run_code
exec code_obj in self.user_global_ns, self.user_ns
File "<ipython-input-87-3bdc0aacc4b5>", line 1, in <module>
z.ix[['a', 'c', 'a']]
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/indexing.py", line 56, in __getitem__
return self._getitem_axis(key, axis=0)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/indexing.py", line 744, in _getitem_axis
return self._getitem_iterable(key, axis=axis)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/indexing.py", line 816, in _getitem_iterable
convert=False)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/generic.py", line 1164, in take
new_data = self._data.take(indices, axis=baxis)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/internals.py", line 3366, in take
ref_items=new_axes[0], axis=axis)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/internals.py", line 2337, in apply
do_integrity_check=do_integrity_check)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/internals.py", line 1990, in __init__
self._set_ref_locs(do_refs=True)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/internals.py", line 2130, in _set_ref_locs
'have _ref_locs set' % (block, labels))
AssertionError: Cannot create BlockManager._ref_locs because block [FloatBlock: [a], 1 x 3, dtype: float64] with duplicate items [Index([u'a', u'c', u'a'], dtype='object')] does not have _ref_locs set
```
If instead we take a copy of the intermediate step, then it works:
``` python
z = df[['a', 'c', 'a']].copy()
z.ix[['a', 'c', 'a']]
Out[89]:
a c a
a 0 2 0
c 10 12 10
a 0 2 0
[3 rows x 3 columns]
```
This means that if you have several functions which each do a part of the data processing, you need to know the history of an object to know whether what you're doing will work. I think .ix should _always_ succeed on a DataFrame or Series, regardless of how it was constructed.
(I've read the discussion at https://github.com/pydata/pandas/issues/6056 about chained operations - but it's not something you can avoid if you have a pipeline of small steps instead of one big step).
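For reference, the heart of the eventual fix (the `take_ref_locs` helper in the patch above) just shifts the stored duplicate-item locations to account for entries dropped by the indexer; a standalone sketch:
``` python
import numpy as np

def take_ref_locs(ref_locs, indexer):
    # simplified from the patch: count how many positions before each
    # kept entry were dropped, and subtract that from the old locations
    dropped = np.ones(len(ref_locs), dtype=bool)
    dropped[indexer] = False
    shift = dropped.astype(int).cumsum()[indexer]
    return ref_locs[indexer] - shift
```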
This wasn't an issue in 0.11.0 but is failing in 0.13.0 and the latest master. Here's the output of installed versions when running on the master:
commit: None
python: 2.7.3.final.0
python-bits: 64
OS: Linux
OS-release: 2.6.18-308.el5
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB
pandas: 0.13.0-292-g4dcecb0
Cython: 0.16
numpy: 1.7.1
scipy: 0.9.0
statsmodels: None
patsy: None
scikits.timeseries: None
dateutil: 1.5
pytz: None
bottleneck: 0.6.0
tables: 2.3.1-1
numexpr: 2.0.1
matplotlib: 1.1.1
openpyxl: None
xlrd: 0.8.0
xlwt: None
xlsxwriter: None
sqlalchemy: None
lxml: 2.3.6
bs4: None
html5lib: None
bq: None
apiclient: None
| 2014-01-27T13:54:50Z | [] | [] |
Traceback (most recent call last):
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/ipython-1.1.0_1_ahl1-py2.7.egg/IPython/core/interactiveshell.py", line 2830, in run_code
exec code_obj in self.user_global_ns, self.user_ns
File "<ipython-input-87-3bdc0aacc4b5>", line 1, in <module>
z.ix[['a', 'c', 'a']]
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/indexing.py", line 56, in __getitem__
return self._getitem_axis(key, axis=0)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/indexing.py", line 744, in _getitem_axis
return self._getitem_iterable(key, axis=axis)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/indexing.py", line 816, in _getitem_iterable
convert=False)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/generic.py", line 1164, in take
new_data = self._data.take(indices, axis=baxis)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/internals.py", line 3366, in take
ref_items=new_axes[0], axis=axis)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/internals.py", line 2337, in apply
do_integrity_check=do_integrity_check)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/internals.py", line 1990, in __init__
self._set_ref_locs(do_refs=True)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_292_g4dcecb0-py2.7-linux-x86_64.egg/pandas/core/internals.py", line 2130, in _set_ref_locs
'have _ref_locs set' % (block, labels))
AssertionError: Cannot create BlockManager._ref_locs because block [FloatBlock: [a], 1 x 3, dtype: float64] with duplicate items [Index([u'a', u'c', u'a'], dtype='object')] does not have _ref_locs set
| 15,148 |
||||
pandas-dev/pandas | pandas-dev__pandas-6142 | 76fadb157bb75752adc8846a95e490b80583bfb9 | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -164,6 +164,7 @@ Bug Fixes
index/columns (:issue:`6121`)
- Bug in ``DataFrame.apply`` when using mixed datelike reductions (:issue:`6125`)
- Bug in ``DataFrame.append`` when appending a row with different columns (:issue:`6129`)
+ - Bug in DataFrame construction with recarray and non-ns datetime dtype (:issue:`6140`)
pandas 0.13.0
-------------
diff --git a/pandas/core/common.py b/pandas/core/common.py
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -41,9 +41,7 @@ class AmbiguousIndexError(PandasError, KeyError):
_POSSIBLY_CAST_DTYPES = set([np.dtype(t).name
- for t in ['M8[ns]', '>M8[ns]', '<M8[ns]',
- 'm8[ns]', '>m8[ns]', '<m8[ns]',
- 'O', 'int8',
+ for t in ['O', 'int8',
'uint8', 'int16', 'uint16', 'int32',
'uint32', 'int64', 'uint64']])
@@ -1612,6 +1610,14 @@ def _possibly_convert_objects(values, convert_dates=True,
def _possibly_castable(arr):
+ # return False to force a non-fastpath
+
+ # check datetime64[ns]/timedelta64[ns] are valid
+ # otherwise try to coerce
+ kind = arr.dtype.kind
+ if kind == 'M' or kind == 'm':
+ return arr.dtype in _DATELIKE_DTYPES
+
return arr.dtype.name not in _POSSIBLY_CAST_DTYPES
@@ -1681,12 +1687,30 @@ def _possibly_cast_to_datetime(value, dtype, coerce=False):
else:
+ is_array = isinstance(value, np.ndarray)
+
+ # catch a datetime/timedelta that is not of ns variety
+ # and no coercion specified
+ if (is_array and value.dtype.kind in ['M','m']):
+ dtype = value.dtype
+
+ if dtype.kind == 'M' and dtype != _NS_DTYPE:
+ try:
+ value = tslib.array_to_datetime(value)
+ except:
+ raise
+
+ elif dtype.kind == 'm' and dtype != _TD_DTYPE:
+ from pandas.tseries.timedeltas import \
+ _possibly_cast_to_timedelta
+ value = _possibly_cast_to_timedelta(value, coerce='compat')
+
# only do this if we have an array and the dtype of the array is not
# setup already we are not an integer/object, so don't bother with this
# conversion
- if (isinstance(value, np.ndarray) and not
- (issubclass(value.dtype.type, np.integer) or
- value.dtype == np.object_)):
+ elif (is_array and not (
+ issubclass(value.dtype.type, np.integer) or
+ value.dtype == np.object_)):
pass
else:
diff --git a/pandas/core/series.py b/pandas/core/series.py
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2536,7 +2536,10 @@ def _try_cast(arr, take_fast_path):
else:
subarr = _try_cast(data, True)
else:
- subarr = _try_cast(data, True)
+ # don't coerce Index types
+ # e.g. indexes can have different conversions (so don't fast path them)
+ # GH 6140
+ subarr = _try_cast(data, not isinstance(data, Index))
if copy:
subarr = data.copy()
| DataFrame.from_records doesn't handle missing dates (None)
When you construct a DataFrame from a numpy recarray with datetime data and None for missing dates you get an error.
``` python
arrdata = [np.array([datetime.datetime(2005, 3, 1, 0, 0), None])]
dtypes = [('EXPIRY', '<M8[m]')]
recarray = np.core.records.fromarrays(arrdata, dtype=dtypes)
df = pd.DataFrame.from_records(recarray)
Traceback (most recent call last):
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/ipython-1.1.0_1_ahl1-py2.7.egg/IPython/core/interactiveshell.py", line 2830, in run_code
exec code_obj in self.user_global_ns, self.user_ns
File "<ipython-input-33-ae01f48c3b82>", line 1, in <module>
df = pd.DataFrame.from_records(recarray)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_408_g464c1f9-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 841, in from_records
columns)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_408_g464c1f9-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 4473, in _arrays_to_mgr
return create_block_manager_from_arrays(arrays, arr_names, axes)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_408_g464c1f9-py2.7-linux-x86_64.egg/pandas/core/internals.py", line 3748, in create_block_manager_from_arrays
construction_error(len(arrays), arrays[0].shape[1:], axes, e)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_408_g464c1f9-py2.7-linux-x86_64.egg/pandas/core/internals.py", line 3720, in construction_error
passed,implied))
ValueError: Shape of passed values is (1,), indices imply (1, 2)
```
This is a regression from 0.11.0. Stepping through the code, it looks like the initial error is raised in tslib.cast_to_nanoseconds and then caught and re-raised in create_block_manager_from_arrays.
Incidentally, construction does work from a simple array instead of a recarray:
``` python
pd.DataFrame(np.array([datetime.datetime(2005, 3, 1, 0, 0), None]))
Out[36]:
0
0 2005-03-01
1 NaT
[2 rows x 1 columns]
```
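The patch above makes construction coerce non-nanosecond datetime64 arrays rather than fast-pathing them; a sketch of that coercion in isolation (assuming a numpy that parses 'NaT'):
``` python
import numpy as np

arr = np.array(['2005-03-01T00:00', 'NaT'], dtype='M8[m]')
arr_ns = arr.astype('M8[ns]')  # roughly what the fixed path produces,
                               # keeping NaT for the missing date
```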
| hmm prob not testing this case
on master too?
Yes, this is on master.
| 2014-01-28T13:20:58Z | [] | [] |
Traceback (most recent call last):
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/ipython-1.1.0_1_ahl1-py2.7.egg/IPython/core/interactiveshell.py", line 2830, in run_code
exec code_obj in self.user_global_ns, self.user_ns
File "<ipython-input-33-ae01f48c3b82>", line 1, in <module>
df = pd.DataFrame.from_records(recarray)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_408_g464c1f9-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 841, in from_records
columns)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_408_g464c1f9-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 4473, in _arrays_to_mgr
return create_block_manager_from_arrays(arrays, arr_names, axes)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_408_g464c1f9-py2.7-linux-x86_64.egg/pandas/core/internals.py", line 3748, in create_block_manager_from_arrays
construction_error(len(arrays), arrays[0].shape[1:], axes, e)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.0_408_g464c1f9-py2.7-linux-x86_64.egg/pandas/core/internals.py", line 3720, in construction_error
passed,implied))
ValueError: Shape of passed values is (1,), indices imply (1, 2)
| 15,155 |
|||
pandas-dev/pandas | pandas-dev__pandas-6204 | a2d5e53a2dcb7d1fb5c981900f1b8beecc7d4d1d | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -177,6 +177,8 @@ Bug Fixes
- Consistency with dtypes in setting an empty DataFrame (:issue:`6171`)
- Bug in selecting on a multi-index ``HDFStore`` even in the prescence of under
specificed column spec (:issue:`6169`)
+ - Bug in ``nanops.var`` with ``ddof=1`` and 1 elements would sometimes return ``inf``
+ rather than ``nan`` on some platforms (:issue:`6136`)
pandas 0.13.0
-------------
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -302,14 +302,25 @@ def nanvar(values, axis=None, skipna=True, ddof=1):
else:
count = float(values.size - mask.sum())
+ d = count-ddof
if skipna:
values = values.copy()
np.putmask(values, mask, 0)
+ # always return NaN, never inf
+ if np.isscalar(count):
+ if count <= ddof:
+ count = np.nan
+ d = np.nan
+ else:
+ mask = count <= ddof
+ if mask.any():
+ np.putmask(d, mask, np.nan)
+ np.putmask(count, mask, np.nan)
+
X = _ensure_numeric(values.sum(axis))
XX = _ensure_numeric((values ** 2).sum(axis))
- return np.fabs((XX - X ** 2 / count) / (count - ddof))
-
+ return np.fabs((XX - X ** 2 / count) / d)
@bottleneck_switch()
def nanmin(values, axis=None, skipna=True):
| Failing test on win/py2.6/64, TestResample:test_how_lambda_functions
This has failed intermittently a couple of times in the last week or two.
```
======================================================================
FAILURE: test_how_lambda_functions (pandas.tseries.tests.test_resample.TestResample)
------------------------------------------------------------------------------------
Traceback (most recent call last):
File "c:\Python27-AMD64\Lib\unittest\case.py", line 331, in run
testMethod()
File "C:\workspace\pandas_tests\BITS\64\PYTHONVER\27\pandas\tseries\tests\test_resample.py", line 640, in test_how_lambda_functions
tm.assert_series_equal(result['bar'], bar_exp)
File "C:\workspace\pandas_tests\BITS\64\PYTHONVER\27\pandas\util\testing.py", line 448, in assert_series_equal
assert_almost_equal(left.values, right.values, check_less_precise)
File "testing.pyx", line 58, in pandas._testing.assert_almost_equal (pandas\src\testing.c:2561)
File "testing.pyx", line 93, in pandas._testing.assert_almost_equal (pandas\src\testing.c:1803)
File "testing.pyx", line 107, in pandas._testing.assert_almost_equal (pandas\src\testing.c:2023)
AssertionError: First object is not null, second is null: inf != nan
```
| I'll have a look tomorrow
@jreback , I've been chasing this down, I think it's an actual bug.
So far i've only seen it on windows, on py2.6.6/64 bit, with numpy 1.8.0.
I was able to reproduce and capture a pickle of the data that causes the problem.
I don't have python2.6 on this linux box, but the problem doesn't appear
on 2.7 with the same numpy.
I can't inline the float64 data because the repr roundtrip changes the data,
presumably roundoff. I'll mail you the pickle.
Reproduce:
``` python
import pandas as pd
from pandas import *
ts = pd.read_pickle('GH6136.pickle')
# on win/py2.6.6 np1.8.0 vs linux/py2.7.5 np1.8.0
print ts.resample('M', how={'bar': lambda x: x.std(ddof=1)})
# similarly, on linux above and below produce the same result
# on window/... they are different see below. That's the cause
# of the failing test
ts.resample('M', how='std')
```
```
INSTALLED VERSIONS
------------------
commit: 1775ba10518d7026bde95d9d77a1aab52f025033
python: 2.6.6.final.0
python-bits: 64
OS: Windows
OS-release: 7
Cython: 0.19.2
numpy: 1.8.0
bottleneck: None
numexpr: None
```
gives
``` python
In [71]: ts.resample('M', how={'bar': lambda x: x.std(ddof=1)})
Out[71]:
bar
2000-01-31 0.896238
2000-02-29 0.944715
2000-03-31 1.149277
2000-04-30 1.#INF
[4 rows x 1 columns]
In [72]: ts.resample('M', how='std')
Out[72]:
2000-01-31 0.896238
2000-02-29 0.944715
2000-03-31 1.149277
2000-04-30 NaN
Freq: M, dtype: float64
In [73]: np.version.git_revision
Out[73]: 'a60b3901cd635d28bef8328e83bafd35ce631e08'
```
while
```
INSTALLED VERSIONS
------------------
commit: b20fc1540c51ad4d69ac7b71c8d53ee696c52a57
python: 2.7.5.final.0
python-bits: 64
OS: Linux
numpy: 1.8.0
bottleneck: None
numexpr: None
```
gives
``` python
In [24]: ts.resample('M', how={'bar': lambda x: x.std(ddof=1)})
Out[24]:
bar
2000-01-31 0.896238
2000-02-29 0.944715
2000-03-31 1.149277
2000-04-30 NaN
[4 rows x 1 columns]
In [25]: ts.resample('M', how='std')
Out[25]:
2000-01-31 0.896238
2000-02-29 0.944715
2000-03-31 1.149277
2000-04-30 NaN
Freq: M, dtype: float64
>>> np.version.git_revision
'a60b3901cd635d28bef8328e83bafd35ce631e08'
```
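To make the failure concrete, here is a minimal sketch (a hypothetical simplification of `nanops.nanvar`; it ignores masks and `skipna`): with `count <= ddof` the divisor is non-positive, and floating-point residue in `XX - X**2/count` can turn the intended `0/0` (`nan`) into `x/0` (`inf`) on some platforms, which is why the patch forces `nan` up front.

``` python
import numpy as np

def nanvar_sketch(values, ddof=1):
    # with count <= ddof the divisor is <= 0, and floating-point residue
    # in XX - X**2/count can turn the intended 0/0 (nan) into x/0 (inf)
    count = float(values.size)
    if count <= ddof:           # the guard the patch adds
        return np.nan
    X = values.sum()
    XX = (values ** 2).sum()
    return np.fabs((XX - X ** 2 / count) / (count - ddof))

nanvar_sketch(np.array([0.5]))  # nan on every platform, never inf
```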
moved to 0.13.1, push to 0.14 if we must.
Damn, we're doing that mindread thing again.
detailed!
I have a py2.6/64-bit Windows box so I can try to reproduce
| 2014-01-31T14:49:40Z | [] | [] |
Traceback (most recent call last):
File "c:\Python27-AMD64\Lib\unittest\case.py", line 331, in run
testMethod()
File "C:\workspace\pandas_tests\BITS\64\PYTHONVER\27\pandas\tseries\tests\test_resample.py", line 640, in test_how_lambda_functions
tm.assert_series_equal(result['bar'], bar_exp)
File "C:\workspace\pandas_tests\BITS\64\PYTHONVER\27\pandas\util\testing.py", line 448, in assert_series_equal
assert_almost_equal(left.values, right.values, check_less_precise)
File "testing.pyx", line 58, in pandas._testing.assert_almost_equal (pandas\src\testing.c:2561)
File "testing.pyx", line 93, in pandas._testing.assert_almost_equal (pandas\src\testing.c:1803)
File "testing.pyx", line 107, in pandas._testing.assert_almost_equal (pandas\src\testing.c:2023)
AssertionError: First object is not null, second is null: inf != nan
| 15,165 |
|||
pandas-dev/pandas | pandas-dev__pandas-6221 | 847bf59cb680537db9c8bb9b1392d6b7d82d4141 | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -180,6 +180,8 @@ Bug Fixes
- Bug in ``nanops.var`` with ``ddof=1`` and 1 elements would sometimes return ``inf``
rather than ``nan`` on some platforms (:issue:`6136`)
- Bug in Series and DataFrame bar plots ignoring the ``use_index`` keyword (:issue:`6209`)
+ - Disabled clipboard tests until release time (run locally with ``nosetests
+ -A disabled``) (:issue:`6048`).
pandas 0.13.0
-------------
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -996,7 +996,7 @@ def dec(f):
60, # urllib.error.URLError: [Errno 60] Connection timed out
)
-# Both of the above shouldn't mask reasl issues such as 404's
+# Both of the above shouldn't mask real issues such as 404's
# or refused connections (changed DNS).
# But some tests (test_data yahoo) contact incredibly flakey
# servers.
@@ -1396,3 +1396,8 @@ def skip_if_no_ne(engine='numexpr'):
if ne.__version__ < LooseVersion('2.0'):
raise nose.SkipTest("numexpr version too low: "
"%s" % ne.__version__)
+
+
+def disabled(t):
+ t.disabled = True
+ return t
| TestClipboard test failure on osx-64
I see the following:
```
======================================================================
ERROR: test_round_trip_frame (pandas.io.tests.test_clipboard.TestClipboard)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/tests/test_clipboard.py", line 67, in test_round_trip_frame
self.check_round_trip_frame(dt)
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/tests/test_clipboard.py", line 54, in check_round_trip_frame
result = read_clipboard()
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/clipboard.py", line 32, in read_clipboard
return read_table(StringIO(text), **kwargs)
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/parsers.py", line 420, in parser_f
return _read(filepath_or_buffer, kwds)
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/parsers.py", line 218, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/parsers.py", line 502, in __init__
self._make_engine(self.engine)
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/parsers.py", line 616, in _make_engine
self._engine = klass(self.f, **self.options)
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/parsers.py", line 1303, in __init__
self.columns, self.num_original_columns = self._infer_columns()
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/parsers.py", line 1516, in _infer_columns
line = self._buffered_line()
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/parsers.py", line 1643, in _buffered_line
return self._next_line()
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/parsers.py", line 1659, in _next_line
line = next(self.data)
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/parsers.py", line 1420, in _read
line = next(f)
StopIteration
```
With pandas 0.13.1 on os x (python 2.7.3, numpy 1.8.0, cython 0.19.2).
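For reference, a sketch of how the decorator added by this patch is meant to be used (hedged: the import path follows the patched `pandas/util/testing.py`, and the nose invocation comes from the release note):

``` python
from pandas.util.testing import disabled

@disabled
def test_round_trip_frame():
    pass  # tagged with .disabled = True, so attribute-filtered runs skip it

# opt back in locally via nose's attrib plugin:
#   nosetests -A disabled pandas/io/tests/test_clipboard.py
```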
| 2014-02-01T15:14:40Z | [] | [] |
Traceback (most recent call last):
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/tests/test_clipboard.py", line 67, in test_round_trip_frame
self.check_round_trip_frame(dt)
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/tests/test_clipboard.py", line 54, in check_round_trip_frame
result = read_clipboard()
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/clipboard.py", line 32, in read_clipboard
return read_table(StringIO(text), **kwargs)
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/parsers.py", line 420, in parser_f
return _read(filepath_or_buffer, kwds)
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/parsers.py", line 218, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/parsers.py", line 502, in __init__
self._make_engine(self.engine)
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/parsers.py", line 616, in _make_engine
self._engine = klass(self.f, **self.options)
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/parsers.py", line 1303, in __init__
self.columns, self.num_original_columns = self._infer_columns()
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/parsers.py", line 1516, in _infer_columns
line = self._buffered_line()
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/parsers.py", line 1643, in _buffered_line
return self._next_line()
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/parsers.py", line 1659, in _next_line
line = next(self.data)
File "/Users/vagrant/src/master-env/lib/python2.7/site-packages/pandas/io/parsers.py", line 1420, in _read
line = next(f)
StopIteration
| 15,166 |
||||
pandas-dev/pandas | pandas-dev__pandas-6373 | 83936513c5165d7da342b5fcaa439ce2553b23f8 | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -111,6 +111,7 @@ Bug Fixes
- Bug in ``pd.eval`` when parsing strings with possible tokens like ``'&'``
(:issue:`6351`)
- Bug correctly handle placements of ``-inf`` in Panels when dividing by integer 0 (:issue:`6178`)
+- ``DataFrame.shift`` with ``axis=1`` was raising (:issue:`6371`)
pandas 0.13.1
-------------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3170,7 +3170,7 @@ def shift(self, periods=1, freq=None, axis=0, **kwds):
if freq is None and not len(kwds):
block_axis = self._get_block_manager_axis(axis)
- indexer = com._shift_indexer(len(self), periods)
+ indexer = com._shift_indexer(len(self._get_axis(axis)), periods)
new_data = self._data.shift(indexer=indexer, periods=periods, axis=block_axis)
else:
return self.tshift(periods, freq, **kwds)
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -934,19 +934,14 @@ def shift(self, indexer, periods, axis=0):
# that, handle boolean etc also
new_values, fill_value = com._maybe_upcast(new_values)
- # 1-d
- if self.ndim == 1:
- if periods > 0:
- new_values[:periods] = fill_value
- else:
- new_values[periods:] = fill_value
-
- # 2-d
+ axis_indexer = [ slice(None) ] * self.ndim
+ if periods > 0:
+ axis_indexer[axis] = slice(None,periods)
else:
- if periods > 0:
- new_values[:, :periods] = fill_value
- else:
- new_values[:, periods:] = fill_value
+ axis_indexer = [ slice(None) ] * self.ndim
+ axis_indexer[axis] = slice(periods,None)
+ new_values[tuple(axis_indexer)] = fill_value
+
return [make_block(new_values, self.items, self.ref_items,
ndim=self.ndim, fastpath=True)]
| DataFrame Shift with axis=1 gives error
I was playing with axis 0 and 1 in DataFrame shift. I get the following error:
```
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.rand(10,5))
df.shift(1,axis=1)
```
This gives me an error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "c:\WinPython-32bit-2.7.6.3\python-2.7.6\lib\site-packages\pandas\core\generic.py", line 3175, in shift
new_data = self._data.shift(indexer, periods, axis=block_axis)
File "c:\WinPython-32bit-2.7.6.3\python-2.7.6\lib\site-packages\pandas\core\internals.py", line 2407, in shift
return self.apply('shift', *args, **kwargs)
File "c:\WinPython-32bit-2.7.6.3\python-2.7.6\lib\site-packages\pandas\core\internals.py", line 2375, in apply
applied = getattr(blk, f)(*args, **kwargs)
File "c:\WinPython-32bit-2.7.6.3\python-2.7.6\lib\site-packages\pandas\core\internals.py", line 918, in shift
new_values = self.values.take(indexer, axis=axis)
IndexError: index 5 is out of bounds for size 5
```
I am using pandas 0.13.1
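The patch replaces the hard-coded 1-d/2-d fill with a slice tuple that works along any axis; a standalone sketch of that idea (illustrative, not the pandas internals):

``` python
import numpy as np

def fill_shifted(values, periods, axis, fill_value=np.nan):
    # build a per-axis indexer covering the region the shift vacated
    idx = [slice(None)] * values.ndim
    idx[axis] = slice(None, periods) if periods > 0 else slice(periods, None)
    values[tuple(idx)] = fill_value
    return values

fill_shifted(np.arange(25.).reshape(5, 5), periods=1, axis=1)  # first column -> nan
```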
| 2014-02-16T17:16:03Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "c:\WinPython-32bit-2.7.6.3\python-2.7.6\lib\site-packages\pandas\core\generic.py", line 3175, in shift
new_data = self._data.shift(indexer, periods, axis=block_axis)
File "c:\WinPython-32bit-2.7.6.3\python-2.7.6\lib\site-packages\pandas\core\internals.py", line 2407, in shift
return self.apply('shift', *args, **kwargs)
File "c:\WinPython-32bit-2.7.6.3\python-2.7.6\lib\site-packages\pandas\core\internals.py", line 2375, in apply
applied = getattr(blk, f)(*args, **kwargs)
File "c:\WinPython-32bit-2.7.6.3\python-2.7.6\lib\site-packages\pandas\core\internals.py", line 918, in shift
new_values = self.values.take(indexer, axis=axis)
IndexError: index 5 is out of bounds for size 5
| 15,188 |
||||
pandas-dev/pandas | pandas-dev__pandas-6396 | 7cd9496200c90d97b3e1fc0ff32c75c81d798c9e | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -135,6 +135,7 @@ Bug Fixes
- Bug in Series.get, was using a buggy access method (:issue:`6383`)
- Bug in hdfstore queries of the form ``where=[('date', '>=', datetime(2013,1,1)), ('date', '<=', datetime(2014,1,1))]`` (:issue:`6313`)
- Bug in DataFrame.dropna with duplicate indices (:issue:`6355`)
+- Regression in chained getitem indexing with embedded list-like from 0.12 (:issue:`6394`)
pandas 0.13.1
-------------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1312,7 +1312,11 @@ def xs(self, key, axis=0, level=None, copy=True, drop_level=True):
new_values, copy = self._data.fast_xs(loc, copy=copy)
# may need to box a datelike-scalar
- if not is_list_like(new_values):
+ #
+ # if we encounter an array-like and we only have 1 dim
+ # that means that there are list/ndarrays inside the Series!
+ # so just return them (GH 6394)
+ if not is_list_like(new_values) or self.ndim == 1:
return _maybe_box_datetimelike(new_values)
result = Series(new_values, index=self.columns,
diff --git a/pandas/core/series.py b/pandas/core/series.py
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -477,8 +477,15 @@ def _slice(self, slobj, axis=0, raise_on_error=False, typ=None):
def __getitem__(self, key):
try:
result = self.index.get_value(self, key)
- if isinstance(result, np.ndarray):
- return self._constructor(result,index=[key]*len(result)).__finalize__(self)
+
+ if not np.isscalar(result):
+ if is_list_like(result) and not isinstance(result, Series):
+
+ # we need to box if we have a non-unique index here
+ # otherwise have inline ndarray/lists
+ if not self.index.is_unique:
+ result = self._constructor(result,index=[key]*len(result)).__finalize__(self)
+
return result
except InvalidIndexError:
pass
| Indexing Regression in 0.13.0
In pandas 0.12 the order you indexed a DataFrame didn't matter, which I think is the correct behaviour:
``` python
In [6]: df = pd.DataFrame({'A': 5*[np.zeros(3)], 'B':5*[np.ones(3)]})
In [7]: df
Out[7]:
A B
0 [0.0, 0.0, 0.0] [1.0, 1.0, 1.0]
1 [0.0, 0.0, 0.0] [1.0, 1.0, 1.0]
2 [0.0, 0.0, 0.0] [1.0, 1.0, 1.0]
3 [0.0, 0.0, 0.0] [1.0, 1.0, 1.0]
4 [0.0, 0.0, 0.0] [1.0, 1.0, 1.0]
In [8]: df['A'].iloc[2]
Out[8]: array([ 0., 0., 0.])
In [9]: df.iloc[2]['A']
Out[9]: array([ 0., 0., 0.])
In [10]: pd.__version__
Out[10]: '0.12.0'
In [11]: assert type(df.ix[2, 'A']) == type(df['A'].iloc[2]) == type(df.iloc[2]['A'])
In [12]:
```
In pandas 0.13 if you index in a different order you can get a different type out which can be problematic for code expecting an array, especially because of the difference between array indexing and label indexing.
``` python
In [1]: df = pd.DataFrame({'A': 5*[np.zeros(3)], 'B':5*[np.ones(3)]})
In [2]: df
Out[2]:
A B
0 [0.0, 0.0, 0.0] [1.0, 1.0, 1.0]
1 [0.0, 0.0, 0.0] [1.0, 1.0, 1.0]
2 [0.0, 0.0, 0.0] [1.0, 1.0, 1.0]
3 [0.0, 0.0, 0.0] [1.0, 1.0, 1.0]
4 [0.0, 0.0, 0.0] [1.0, 1.0, 1.0]
5 rows × 2 columns
In [3]: df['A'].iloc[2]
Out[3]: array([ 0., 0., 0.])
In [4]: df.iloc[2]['A']
Out[4]:
A 0
A 0
A 0
Name: 2, dtype: float64
In [5]: pd.__version__
Out[5]: '0.13.1'
In [6]: assert type(df.ix[2, 'A']) == type(df['A'].iloc[2]) == type(df.iloc[2]['A'])
Traceback (most recent call last):
File "<ipython-input-11-946e15564ee1>", line 1, in <module>
assert type(df.ix[2, 'A']) == type(df['A'].iloc[2]) == type(df.iloc[2]['A'])
AssertionError
```
| Storing lists of numpy arrays is not efficient nor really supported.
Chained indexing is to blame, see here:
http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
which exposes how numpy creates / does not create a view
don't do it
I know it is inefficient, I said as much in my post on the mailing list.
I don't care whether I am returned a view or a copy - I'm not trying to assign to the data.
Returning a different type dependent on the order of chaining is never a desirable outcome and hence is a bug. It's certainly a regression since the example shown above worked perfectly well in pandas 0.12.
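A small check of the expectation the patch restores (hedged: this assumes the unique-index path, where embedded arrays are no longer boxed into a Series):

``` python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': 5 * [np.zeros(3)], 'B': 5 * [np.ones(3)]})
# both chaining orders should again yield the raw ndarray, as in 0.12
assert type(df.ix[2, 'A']) == type(df['A'].iloc[2]) == type(df.iloc[2]['A'])
```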
| 2014-02-18T13:54:20Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-11-946e15564ee1>", line 1, in <module>
assert type(df.ix[2, 'A']) == type(df['A'].iloc[2]) == type(df.iloc[2]['A'])
AssertionError
| 15,195 |
|||
pandas-dev/pandas | pandas-dev__pandas-6408 | 3d9ef5a96b0b374f9b1a918c7f0443ce94d2852b | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -80,6 +80,8 @@ API Changes
- ``week,dayofweek,dayofyear,quarter``
- ``microsecond,nanosecond,qyear``
- ``min(),max()``
+ - ``pd.infer_freq()``
+- ``pd.infer_freq()`` will now raise a ``TypeError`` if given an invalid ``Series/Index`` type (:issue:`6407`)
Experimental Features
~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/v0.14.0.txt b/doc/source/v0.14.0.txt
--- a/doc/source/v0.14.0.txt
+++ b/doc/source/v0.14.0.txt
@@ -42,6 +42,7 @@ API changes
- ``week,dayofweek,dayofyear,quarter``
- ``microsecond,nanosecond,qyear``
- ``min(),max()``
+ - ``pd.infer_freq()``
.. ipython:: python
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -13,7 +13,6 @@
import pandas.lib as lib
import pandas.tslib as tslib
-
class FreqGroup(object):
FR_ANN = 1000
FR_QTR = 2000
@@ -637,22 +636,28 @@ def infer_freq(index, warn=True):
Parameters
----------
index : DatetimeIndex
+ if passed a Series will use the values of the series (NOT THE INDEX)
warn : boolean, default True
Returns
-------
freq : string or None
None if no discernible frequency
+ TypeError if the index is not datetime-like
"""
- from pandas.tseries.index import DatetimeIndex
-
- if not isinstance(index, DatetimeIndex):
- from pandas.tseries.period import PeriodIndex
- if isinstance(index, PeriodIndex):
- raise ValueError("PeriodIndex given. Check the `freq` attribute "
- "instead of using infer_freq.")
- index = DatetimeIndex(index)
-
+ import pandas as pd
+
+ if isinstance(index, com.ABCSeries):
+ values = index.values
+ if not (com.is_datetime64_dtype(index.values) or values.dtype == object):
+ raise TypeError("cannot infer freq from a non-convertible dtype on a Series of {0}".format(index.dtype))
+ index = values
+ if isinstance(index, pd.PeriodIndex):
+ raise TypeError("PeriodIndex given. Check the `freq` attribute "
+ "instead of using infer_freq.")
+ if not isinstance(index, pd.DatetimeIndex) and isinstance(index, pd.Index):
+ raise TypeError("cannot infer freq from a non-convertible index type {0}".format(type(index)))
+ index = pd.DatetimeIndex(index)
inferer = _FrequencyInferer(index, warn=warn)
return inferer.get_freq()
| infer_freq broken in 0.13.1
``` python
In [3]: dates = pd.date_range('01-Jan-2015', '01-Jan-2016', freq='MS')
...: s = pd.TimeSeries(1, dates)
...: pd.infer_freq(s)
...:
Traceback (most recent call last):
File "<ipython-input-3-a7de7a7e9245>", line 3, in <module>
pd.infer_freq(s)
File "C:\dev\bin\Anaconda\lib\site-packages\pandas\tseries\frequencies.py", line 656, in infer_freq
inferer = _FrequencyInferer(index, warn=warn)
File "C:\dev\bin\Anaconda\lib\site-packages\pandas\tseries\frequencies.py", line 680, in __init__
self.is_monotonic = self.index.is_monotonic
File "C:\dev\bin\Anaconda\lib\site-packages\pandas\core\generic.py", line 1815, in __getattr__
(type(self).__name__, name))
AttributeError: 'Series' object has no attribute 'is_monotonic'
In [4]: pd.__version__
Out[4]: '0.13.1'
```
| This is actually an enhancement; it didn't work in 0.12 (it just gave a wrong answer!).
It takes a DatetimeIndex only:
```
In [10]: pd.infer_freq?
Type: function
String Form:<function infer_freq at 0x3b207d0>
File: /mnt/home/jreback/pandas/pandas/tseries/frequencies.py
Definition: pd.infer_freq(index, warn=True)
Docstring:
Infer the most likely frequency given the input index. If the frequency is
uncertain, a warning will be printed
Parameters
----------
index : DatetimeIndex
warn : boolean, default True
Returns
-------
freq : string or None
None if no discernible frequency
```
works correctly on the index
```
In [12]: pd.infer_freq(s.index)
Out[12]: 'MS'
```
out of curiosity, what is your usecase for this?
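For the record, a sketch of the contract the patch settles on (hedged, post-patch behavior per its updated docstring):

``` python
import pandas as pd

dates = pd.date_range('2015-01-01', periods=12, freq='MS')
pd.infer_freq(dates)              # 'MS'
pd.infer_freq(pd.Series(dates))   # inferred from the *values*, not the index
# pd.infer_freq(pd.Series([1, 2]))  # TypeError: non-convertible dtype
```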
| 2014-02-19T13:01:14Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-3-a7de7a7e9245>", line 3, in <module>
pd.infer_freq(s)
File "C:\dev\bin\Anaconda\lib\site-packages\pandas\tseries\frequencies.py", line 656, in infer_freq
inferer = _FrequencyInferer(index, warn=warn)
File "C:\dev\bin\Anaconda\lib\site-packages\pandas\tseries\frequencies.py", line 680, in __init__
self.is_monotonic = self.index.is_monotonic
File "C:\dev\bin\Anaconda\lib\site-packages\pandas\core\generic.py", line 1815, in __getattr__
(type(self).__name__, name))
AttributeError: 'Series' object has no attribute 'is_monotonic'
| 15,198 |
|||
pandas-dev/pandas | pandas-dev__pandas-6438 | 2983b691c5b70de028301495c9ca3eea3d97ad7d | diff --git a/doc/source/merging.rst b/doc/source/merging.rst
--- a/doc/source/merging.rst
+++ b/doc/source/merging.rst
@@ -213,6 +213,33 @@ This is also a valid argument to ``DataFrame.append``:
df1.append(df2, ignore_index=True)
+.. _merging.mixed_ndims:
+
+Concatenating with mixed ndims
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You can concatenate a mix of Series and DataFrames. The
+Series will be transformed to DataFrames with the column name as
+the name of the Series.
+
+.. ipython:: python
+
+ df1 = DataFrame(randn(6, 4), columns=['A', 'B', 'C', 'D'])
+ s1 = Series(randn(6), name='foo')
+ concat([df1, s1],axis=1)
+
+If unnamed Series are passed they will be numbered consecutively.
+
+.. ipython:: python
+
+ s2 = Series(randn(6))
+ concat([df1, s2, s2, s2],axis=1)
+
+Passing ``ignore_index=True`` will drop all name references.
+
+.. ipython:: python
+
+ concat([df1, s1],axis=1,ignore_index=True)
More concatenating with group keys
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -98,6 +98,8 @@ API Changes
- The top-level :func:`pandas.eval` function does not allow you use the
``'@'`` prefix and provides you with an error message telling you so.
- ``NameResolutionError`` was removed because it isn't necessary anymore.
+- ``concat`` will now concatenate mixed Series and DataFrames using the Series name
+ or numbering columns as needed (:issue:`2385`)
Experimental Features
~~~~~~~~~~~~~~~~~~~~~
@@ -166,6 +168,7 @@ Bug Fixes
- Bug in ``Series.reindex`` when specifying a ``method`` with some nan values was inconsistent (noted on a resample) (:issue:`6418`)
- Bug in :meth:`DataFrame.replace` where nested dicts were erroneously
depending on the order of dictionary keys and values (:issue:`5338`).
+- Perf issue in concatting with empty objects (:issue:`3259`)
pandas 0.13.1
-------------
diff --git a/doc/source/v0.14.0.txt b/doc/source/v0.14.0.txt
--- a/doc/source/v0.14.0.txt
+++ b/doc/source/v0.14.0.txt
@@ -66,6 +66,8 @@ API changes
- The top-level :func:`pandas.eval` function does not allow you use the
``'@'`` prefix and provides you with an error message telling you so.
- ``NameResolutionError`` was removed because it isn't necessary anymore.
+- ``concat`` will now concatenate mixed Series and DataFrames using the Series name
+ or numbering columns as needed (:issue:`2385`). See :ref:`the docs <merging.mixed_ndims>`
MultiIndexing Using Slicers
~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -2209,10 +2209,7 @@ def _wrap_applied_output(self, keys, values, not_indexed_same=False):
# make Nones an empty object
if com._count_not_none(*values) != len(values):
- v = None
- for v in values:
- if v is not None:
- break
+ v = next(v for v in values if v is not None)
if v is None:
return DataFrame()
elif isinstance(v, NDFrame):
diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -957,7 +957,7 @@ def __init__(self, objs, axis=0, join='outer', join_axes=None,
objs = [objs[k] for k in keys]
if keys is None:
- objs = [obj for obj in objs if obj is not None]
+ objs = [obj for obj in objs if obj is not None ]
else:
# #1649
clean_keys = []
@@ -973,16 +973,43 @@ def __init__(self, objs, axis=0, join='outer', join_axes=None,
if len(objs) == 0:
raise Exception('All objects passed were None')
- # consolidate data
+ # consolidate data & figure out what our result ndim is going to be
+ ndims = set()
for obj in objs:
- if isinstance(obj, NDFrame):
- obj.consolidate(inplace=True)
- self.objs = objs
+ if not isinstance(obj, NDFrame):
+ raise TypeError("cannot concatenate a non-NDFrame object")
+
+ # consolidate
+ obj.consolidate(inplace=True)
+ ndims.add(obj.ndim)
+
+ # get the sample
+ # want the highest ndim that we have, and must be non-empty
+ # unless all objs are empty
+ sample = None
+ if len(ndims) > 1:
+ max_ndim = max(ndims)
+ for obj in objs:
+ if obj.ndim == max_ndim and np.sum(obj.shape):
+ sample = obj
+ break
- sample = objs[0]
+ else:
+ # filter out the empties
+ # if we have no multi-index possibilities
+ df = DataFrame([ obj.shape for obj in objs ]).sum(1)
+ non_empties = df[df!=0]
+ if len(non_empties) and (keys is None and names is None and levels is None and join_axes is None):
+ objs = [ objs[i] for i in non_empties.index ]
+ sample = objs[0]
+
+ if sample is None:
+ sample = objs[0]
+ self.objs = objs
# Need to flip BlockManager axis in the DataFrame special case
- if isinstance(sample, DataFrame):
+ self._is_frame = isinstance(sample, DataFrame)
+ if self._is_frame:
axis = 1 if axis == 0 else 0
self._is_series = isinstance(sample, ABCSeries)
@@ -990,11 +1017,39 @@ def __init__(self, objs, axis=0, join='outer', join_axes=None,
raise AssertionError("axis must be between 0 and {0}, "
"input was {1}".format(sample.ndim, axis))
+ # if we have mixed ndims, then convert to highest ndim
+ # creating column numbers as needed
+ if len(ndims) > 1:
+ current_column = 0
+ max_ndim = sample.ndim
+ self.objs, objs = [], self.objs
+ for obj in objs:
+
+ ndim = obj.ndim
+ if ndim == max_ndim:
+ pass
+
+ elif ndim != max_ndim-1:
+ raise ValueError("cannot concatenate unaligned mixed "
+ "dimensional NDFrame objects")
+
+ else:
+ name = getattr(obj,'name',None)
+ if ignore_index or name is None:
+ name = current_column
+ current_column += 1
+
+ # doing a row-wise concatenation so need everything
+ # to line up
+ if self._is_frame and axis == 1:
+ name = 0
+ obj = sample._constructor({ name : obj })
+
+ self.objs.append(obj)
+
# note: this is the BlockManager axis (since DataFrame is transposed)
self.axis = axis
-
self.join_axes = join_axes
-
self.keys = keys
self.names = names
self.levels = levels
diff --git a/vb_suite/join_merge.py b/vb_suite/join_merge.py
--- a/vb_suite/join_merge.py
+++ b/vb_suite/join_merge.py
@@ -186,6 +186,21 @@ def sample(values, k):
concat_small_frames = Benchmark('concat([df] * 1000)', setup,
start_date=datetime(2012, 1, 1))
+
+#----------------------------------------------------------------------
+# Concat empty
+
+setup = common_setup + """
+df = DataFrame(dict(A = range(10000)),index=date_range('20130101',periods=10000,freq='s'))
+empty = DataFrame()
+"""
+
+concat_empty_frames1 = Benchmark('concat([df,empty])', setup,
+ start_date=datetime(2012, 1, 1))
+concat_empty_frames2 = Benchmark('concat([empty,df])', setup,
+ start_date=datetime(2012, 1, 1))
+
+
#----------------------------------------------------------------------
# Ordered merge
| concat doesn't work with mixed Series/DataFrames
concat throws an `AssertionError` when passed a collection of mixed Series and DataFrames, e.g.
``` python
import numpy as np
import pandas as pd
s1 = pd.TimeSeries(np.sin(np.linspace(0, 2*np.pi, 100)),
                   index=pd.date_range('01-Jan-2013',
                                       periods=100, freq='H'))
s2 = pd.TimeSeries(np.cos(np.linspace(0, 2*np.pi, 100)),
                   index=pd.date_range('01-Jan-2013',
                                       periods=100, freq='H'))
df = pd.DataFrame(np.cos(np.linspace(0, 2*np.pi, 100)).reshape(-1, 1),
                  index=pd.date_range('01-Jan-2013',
                                      periods=100, freq='H'))
In [23]: pd.concat([df,df], axis=1).shape
Out[23]: (100, 2)
In [24]: pd.concat([s1,s2], axis=1).shape
Out[24]: (100, 2)
In [25]: pd.concat([s1,s2,s1], axis=1).shape
Out[25]: (100, 3)
In [26]: pd.concat([s1,df,s2], axis=1).shape
Traceback (most recent call last):
File "<ipython-input-9-4512588e71f2>", line 1, in <module>
pd.concat([s1,df,s2], axis=1).shape
File "c:\dev\code\pandas\pandas\tools\merge.py", line 881, in concat
return op.get_result()
File "c:\dev\code\pandas\pandas\tools\merge.py", line 960, in get_result
columns=self.new_axes[1])
File "c:\dev\code\pandas\pandas\core\frame.py", line 376, in __init__
mgr = self._init_dict(data, index, columns, dtype=dtype)
File "c:\dev\code\pandas\pandas\core\frame.py", line 505, in _init_dict
dtype=dtype)
File "c:\dev\code\pandas\pandas\core\frame.py", line 5181, in _arrays_to_mgr
mgr = BlockManager(blocks, axes)
File "c:\dev\code\pandas\pandas\core\internals.py", line 499, in __init__
self._verify_integrity()
File "c:\dev\code\pandas\pandas\core\internals.py", line 584, in _verify_integrity
raise AssertionError('Block shape incompatible with manager')
AssertionError: Block shape incompatible with manager
In [27]: pd.__version__
Out[27]: '0.10.0.dev-fbd77d5'
```
| Well that's not very helpful. Should be improved at some point
Am I right in saying these give better errors in dev?
```
pd.concat([s1,df,s2])
ValueError: arrays must have same number of dimensions
pd.concat([s1,df,s2], axis=1)
ValueError: Shape of passed values is (3,), indices imply (3, 100)
```
I like the first one, prob should catch the error in concat and put up a better message for the 2nd one though....
Agreed.
ok...let's move to 0.12 then...
It's not the incorrect error which is the problem, but the fact that an error is thrown at all. This makes it difficult/tedious to write generic code that works whether or not the input is a DataFrame or a TimeSeries.
In this case the TimeSeries should be treated as a DataFrame with one column where the column label is taken from the TimeSeries name. If the axis parameter were 0 then the TimeSeries should be treated as a DataFrame with one row where the row label is taken from the TimeSeries name.
@dhirschfeld I think we will have a detailed look at this, but nothing is stopping you from pre-conditioning, e.g.
```
pd.concat([ pd.DataFrame(x) for x in [s1,df,s2] ], axis=1).shape
```
The DataFrame constructor does exactly what you want for a Series with axis=1 when it is passed through a DataFrame
(for axis=0 you would need to pass the orient so it transposes, I think); a sketch for both axes follows.
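A sketch of that pre-conditioning for both axes (illustrative only):

``` python
import pandas as pd

s = pd.Series([5, 6], name='c')
df = pd.DataFrame([[1, 2], [3, 4]])

pd.concat([df, pd.DataFrame(s)], axis=1)  # Series -> one-column frame
pd.concat([df, pd.DataFrame(s).T])        # axis=0: transpose into a one-row frame
```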
I also stumbled on this, but I saw there is already this issue for it.
I think in current master it is still very confusing. An example (with the series being a column from a dataframe, full example: http://nbviewer.ipython.org/5868420/bug_pandas-concat-series.ipynb ):
- concat with `[df1, df2[col]]`: `IndexError: list index out of range`
- concat with `[df1[col], df2]`: `AttributeError: 'DataFrame' object has no attribute 'name'`
I think at the moment it is not clear from the docs that this is not allowed (mixing dataframes with series/columns), and when you do it, you don't get a very informative error message.
So should work like this:
```
In [1]: df = pd.DataFrame([[1, 2], [3, 4]])
In [2]: s = pd.Series([5, 6], name=3)
In [3]: pd.concat([df, pd.DataFrame(s).T]) # but with [df, s]
Out[3]:
0 1
0 1 2
1 3 4
3 5 6
In [4]: pd.concat([df, pd.DataFrame(s)], axis=1) # but with [df. s]
Out[4]:
0 1 3
0 1 2 5
1 3 4 6
# also both with [s, df]
```
series must have a name FYI (otherwise needs to raise)
atm it's a bit sketchy with name:
```
In [5]: s1 = pd.Series([5, 6])
In [6]: pd.concat([s, s1], axis=1) # loses name of s
Out[6]:
0 1
0 5 5
1 6 6
```
Not sure it should always raise w/o a name.
@hayd that's the problem, it "works" but really should not (as it's assigned as if the column index doesn't exist). I would just be careful about it, maybe only allowing it if `as_index=False` or the series has a name
not exactly the same situation, but `append` raises if the appended series doesn't have a name and axis=0, so certainly precedent for it.
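Putting the resolved behavior together (a sketch based on the documentation added by this patch; the names here are illustrative):

``` python
import numpy as np
import pandas as pd

df1 = pd.DataFrame(np.random.randn(6, 4), columns=list('ABCD'))
s1 = pd.Series(np.random.randn(6), name='foo')
s2 = pd.Series(np.random.randn(6))        # unnamed

# named Series keep their name as the column label,
# unnamed ones are numbered consecutively
pd.concat([df1, s1, s2, s2], axis=1)      # columns: A B C D foo 0 1
```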
| 2014-02-21T22:20:49Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-9-4512588e71f2>", line 1, in <module>
pd.concat([s1,df,s2], axis=1).shape
File "c:\dev\code\pandas\pandas\tools\merge.py", line 881, in concat
return op.get_result()
File "c:\dev\code\pandas\pandas\tools\merge.py", line 960, in get_result
columns=self.new_axes[1])
File "c:\dev\code\pandas\pandas\core\frame.py", line 376, in __init__
mgr = self._init_dict(data, index, columns, dtype=dtype)
File "c:\dev\code\pandas\pandas\core\frame.py", line 505, in _init_dict
dtype=dtype)
File "c:\dev\code\pandas\pandas\core\frame.py", line 5181, in _arrays_to_mgr
mgr = BlockManager(blocks, axes)
File "c:\dev\code\pandas\pandas\core\internals.py", line 499, in __init__
self._verify_integrity()
File "c:\dev\code\pandas\pandas\core\internals.py", line 584, in _verify_integrity
raise AssertionError('Block shape incompatible with manager')
AssertionError: Block shape incompatible with manager
| 15,202 |
|||
pandas-dev/pandas | pandas-dev__pandas-6551 | 549a3902e2311c1c9cc3b065effe1f67d510475e | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -207,6 +207,7 @@ Bug Fixes
- Bug in ``pd.read_stata`` which would use the wrong data types and missing values (:issue:`6327`)
- Bug in ``DataFrame.to_stata`` that lead to data loss in certain cases, and could exported using the
wrong data types and missing values (:issue:`6335`)
+- Bug in indexing: empty list lookup caused ``IndexError`` exceptions (:issue:`6536`, :issue:`6551`)
pandas 0.13.1
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1555,6 +1555,10 @@ def _maybe_convert_indices(indices, n):
"""
if isinstance(indices, list):
indices = np.array(indices)
+ if len(indices) == 0:
+ # If list is empty, np.array will return float and cause indexing
+ # errors.
+ return np.empty(0, dtype=np.int_)
mask = indices < 0
if mask.any():
| iloc with empty list raises IndexError
When you slice a dataframe with an empty list you get an IndexError instead of an empty dataframe. Slicing with a non-empty list works fine. Tested in pandas 0.13.1. This is a regression from pandas 0.11.0.
``` python
df = pd.DataFrame(np.arange(25.0).reshape((5,5)), columns=list('abcde'))
```
Slice with non-empty list.
``` python
df.iloc[[1,2,3]]
Out[34]:
a b c d e
1 5 6 7 8 9
2 10 11 12 13 14
3 15 16 17 18 19
[3 rows x 5 columns]
```
Slice with empty list.
``` python
df.iloc[[]]
Traceback (most recent call last):
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/ipython-1.1.0_1_ahl1-py2.7.egg/IPython/core/interactiveshell.py", line 2830, in run_code
exec code_obj in self.user_global_ns, self.user_ns
File "<ipython-input-35-8806137779f8>", line 1, in <module>
df.iloc[[]]
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.1-py2.7-linux-x86_64.egg/pandas/core/indexing.py", line 1028, in __getitem__
return self._getitem_axis(key, axis=0)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.1-py2.7-linux-x86_64.egg/pandas/core/indexing.py", line 1238, in _getitem_axis
return self._get_loc(key, axis=axis)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.1-py2.7-linux-x86_64.egg/pandas/core/indexing.py", line 73, in _get_loc
return self.obj._ixs(key, axis=axis)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.1-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 1588, in _ixs
result = self.reindex(i, takeable=True)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.1-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 2162, in reindex
**kwargs)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.1-py2.7-linux-x86_64.egg/pandas/core/generic.py", line 1565, in reindex
takeable=takeable).__finalize__(self)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.1-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 2117, in _reindex_axes
fill_value, limit, takeable=takeable)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.1-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 2126, in _reindex_index
takeable=takeable)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.1-py2.7-linux-x86_64.egg/pandas/core/index.py", line 1233, in reindex
return self[target], target
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.1-py2.7-linux-x86_64.egg/pandas/core/index.py", line 624, in __getitem__
result = arr_idx[key]
IndexError: arrays used as indices must be of integer (or boolean) type
```
| Forgot to say, this works fine with .ix.
thanks...this is a bug
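The root cause, sketched: `np.array` on an empty list defaults to float64, which numpy refuses as an index; the fix substitutes an empty integer array:

``` python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(25.0).reshape((5, 5)), columns=list('abcde'))

np.array([]).dtype                    # dtype('float64') -- invalid as an indexer
df.iloc[np.empty(0, dtype=np.int_)]   # empty frame: what df.iloc[[]] should return
```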
| 2014-03-05T12:03:51Z | [] | [] |
Traceback (most recent call last):
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/ipython-1.1.0_1_ahl1-py2.7.egg/IPython/core/interactiveshell.py", line 2830, in run_code
exec code_obj in self.user_global_ns, self.user_ns
File "<ipython-input-35-8806137779f8>", line 1, in <module>
df.iloc[[]]
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.1-py2.7-linux-x86_64.egg/pandas/core/indexing.py", line 1028, in __getitem__
return self._getitem_axis(key, axis=0)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.1-py2.7-linux-x86_64.egg/pandas/core/indexing.py", line 1238, in _getitem_axis
return self._get_loc(key, axis=axis)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.1-py2.7-linux-x86_64.egg/pandas/core/indexing.py", line 73, in _get_loc
return self.obj._ixs(key, axis=axis)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.1-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 1588, in _ixs
result = self.reindex(i, takeable=True)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.1-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 2162, in reindex
**kwargs)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.1-py2.7-linux-x86_64.egg/pandas/core/generic.py", line 1565, in reindex
takeable=takeable).__finalize__(self)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.1-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 2117, in _reindex_axes
fill_value, limit, takeable=takeable)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.1-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 2126, in _reindex_index
takeable=takeable)
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.1-py2.7-linux-x86_64.egg/pandas/core/index.py", line 1233, in reindex
return self[target], target
File "/users/is/dbew/pyenvs/timeseries/lib/python2.7/site-packages/pandas-0.13.1-py2.7-linux-x86_64.egg/pandas/core/index.py", line 624, in __getitem__
result = arr_idx[key]
IndexError: arrays used as indices must be of integer (or boolean) type
| 15,225 |
|||
pandas-dev/pandas | pandas-dev__pandas-6560 | 170377d892b8154d8fa3067145dc07b3cb5011f9 | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -218,6 +218,7 @@ Bug Fixes
- Bug in ``DataFrame.to_stata`` that lead to data loss in certain cases, and could exported using the
wrong data types and missing values (:issue:`6335`)
- Inconsistent types in Timestamp addition/subtraction (:issue:`6543`)
+- Bug in preserving frequency across Timestamp addition/subtraction (:issue:`4547`)
- Bug in indexing: empty list lookup caused ``IndexError`` exceptions (:issue:`6536`, :issue:`6551`)
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -681,17 +681,17 @@ cdef class _Timestamp(datetime):
if is_timedelta64_object(other):
other_int = other.astype('timedelta64[ns]').astype(int)
- return Timestamp(self.value + other_int, tz=self.tzinfo)
+ return Timestamp(self.value + other_int, tz=self.tzinfo, offset=self.offset)
if is_integer_object(other):
if self.offset is None:
raise ValueError("Cannot add integral value to Timestamp "
"without offset.")
- return Timestamp((self.offset * other).apply(self))
+ return Timestamp((self.offset * other).apply(self), offset=self.offset)
if isinstance(other, timedelta) or hasattr(other, 'delta'):
nanos = _delta_to_nanoseconds(other)
- return Timestamp(self.value + nanos, tz=self.tzinfo)
+ return Timestamp(self.value + nanos, tz=self.tzinfo, offset=self.offset)
result = datetime.__add__(self, other)
if isinstance(result, datetime):
| BUG: TimeStamp looses frequency info on arithmetic ops
Running the following code generates an error using pandas 0.11 (with winpython 2.7.5.2 64 bit):
import pandas as pd
ts1 = pd.date_range('1/1/2000', periods=1, freq='Q')[0]
ts1 - 1 - 1
Error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "tslib.pyx", line 566, in pandas.tslib._Timestamp.__sub__ (pandas\tslib.c:10775)
File "tslib.pyx", line 550, in pandas.tslib._Timestamp.__add__ (pandas\tslib.c:10477)
ValueError: Cannot add integral value to Timestamp without offset.
Reason:
`ts1.freq` has the value
`<1 QuarterEnd: startingMonth=12, kwds={'startingMonth': 12}, offset=<3 MonthEnds>>`
but
`(ts1 - 1).freq` has no value
| You are selecting a single timestamp out, by definition it does not have a frequency; try selecting with a slice (even a len 1 slice)
```
In [30]: ts1=pd.date_range('1/1/2000',periods=1,freq='Q')
In [31]: ts1
Out[31]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2000-03-31 00:00:00]
Length: 1, Freq: Q-DEC, Timezone: None
In [32]: ts1[0:1]
Out[32]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2000-03-31 00:00:00]
Length: 1, Freq: Q-DEC, Timezone: None
```
Thnx for the reply. I understand what you mean. However, a Timestamp does seem to have a frequency.
For instance, define ts1 as:
`ts1 = pd.date_range('1/1/2000', periods=1, freq='Q')[0]`
Now ts1 is a Timestamp, and
`ts1 - 1`
gives 30/09/1999.
Furthermore, `ts1 - 2` gives 30/06/1999.
But `ts1 - 1 - 1` does not work.
Also, if you ask me, it makes sense to increment timestamps with +1 and -1, since that allows you to easily index DatetimeIndex objects
what does `ts1-1-1` mean?
ts1-1-1=(ts1-1)-1
That is, subtract 1 from the timestamp ts1 to obtain 30/09/1999 and then subtract 1 again to obtain 30/06/1999.
However, the frequency information is lost along the way and this generates an error.
A single key selection simply has different semantics (across all of pandas) than a slice.
The following does what you want (note that I leave the index as is), then selects the element at the end:
```
In [15]: ts1=pd.date_range('1/1/2000',periods=1,freq='Q')
In [16]: (ts1-1)-1
Out[16]:
<class 'pandas.tseries.index.DatetimeIndex'>
[1999-09-30 00:00:00]
Length: 1, Freq: Q-DEC, Timezone: None
In [17]: ((ts1-1)-1)[0]
Out[17]: Timestamp('1999-09-30 00:00:00', tz=None)
```
Thnx for the solution; I did something like that as a "dirty hack" to fix the code.
However, still, TimeStamp does have a freq attribute. See for example:
`ts1 = pd.date_range('1/1/2000', periods=1, freq='Q')[0]`
`print ts1.freq`
And incremental operators are defined for Timestamps, that is,
`ts1 - 1`
works. So frequency information should not be lost.
The whole purpose of having timestamps with a frequency attribute is to use incremental operators on them, right?
Ok this is a bug (but not where I thought);
The timestamp has the freq when it's taken from the datetimeindex (it's not printing it, which may be a separate issue), but it's there
```
In [1]: ts1=pd.date_range('1/1/2000',periods=1,freq='Q')[0]
In [2]: ts1.freq
Out[2]: <1 QuarterEnd: startingMonth=12, kwds={'startingMonth': 12}, offset=<3 MonthEnds>>
```
The `__sub__` operation on a Timestamp is treated correctly, but the new freq is not propagated to the new date
```
In [12]: ts1.__sub__(1).freq
<This returns None>
```
I completely agree! :)
this is a very straightforward change in `pandas/tslib.pyx/Timestamp/__add__`, just need to pass the offset when creating the new Timestamp objects; but will need some tests (and MAYBE modification of some existing tests)...
up for a PR?
Good to hear that the change is straightforward!
What's a PR? :)
PR == [pull request](https://help.github.com/articles/using-pull-requests)
have a look here http://pandas.pydata.org/developers.html
Thanks for submitting your pull request so quickly. If I understand you correctly, you want me to pull your pull request and then we can discuss whether the issue is solved, right?
I'm not really used to working with git or any other version control system, so I might need some time to get it working correctly. I'm going on vacation for several days as of Thursday. So, it might take until the weekend of the 24th until we can discuss your pull request. Would that be ok?
that is all fine
we r here to help
it is pretty nice when u find a bug then submit a patch in order to fix it!
Oh, i just reread your comment. I would like you to actually do the patch (by submitting a pull request), and add tests as needed. It helps you understand the codebase and even how to use pandas better!
Sure, I'll try. I think it will work out, hardest part will be getting used to the conventions and git interface ;-)
i'm happy to help you with `git`. i used to be pretty terrible at it, now i'm less terrible at it. i know it can seem like a mountain of complexity but once you get the hang of it you'll wonder why anyone would ever use anything else especially once you get in the habit of branching all the time..
also see https://github.com/pydata/pandas/issues/3156 for some tools that will make git/github integration easier, esp. `hub`.
@Martin31415926 doing a PR for this?
Hi jreback, I would like to... but I'm finishing my PhD thesis at the moment, so unfortunately I can't make time for it right now. Sorry :(
You can assign this to me - I'm in this code already looking at another minor Timestamp addition/subtraction issue. (which I haven't created an issue for, I figured I would just submit the PR)
gr8! thanks @rosnfeld
Design question: what if somebody adds/subtracts a timedelta (or timedelta64) that is not in keeping with the Timestamp's frequency?
``` python
timestamp = date_range('2014-03-05', periods=10, freq='D')[0]
new_timestamp = timestamp + datetime.timedelta(seconds=1)
```
Now it's a little weird to "mix" new_timestamp and timestamp, as they are no longer "aligned". One could say they have the same frequency or period, but different "phase".
Maybe we just copy the frequency over to the new object and trust that users know what they are doing? Or we don't support copying frequency with anything but integer addition/subtraction. I'm inclined to do the former. I feel we can do better than just not copying it, but doing any checking is going to be expensive; I'm not aware of a way to check that a new timestamp "fits" a given frequency-driven series of Timestamps other than by constructing something like a DatetimeIndex and testing for inclusion.
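A sketch of the inclusion test alluded to above (illustrative; per the comment, the patch does not do this check, it just copies the offset over and trusts the caller):

``` python
import datetime
import pandas as pd

ts = pd.date_range('2014-03-05', periods=10, freq='D')[0]
new_ts = ts + datetime.timedelta(seconds=1)

# membership in a freq-driven index is the only "phase" check available,
# and building the index makes it too expensive to run on every addition
new_ts in pd.date_range(ts, periods=3, freq='D')   # False -> out of phase
```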
| 2014-03-06T00:23:30Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "tslib.pyx", line 566, in pandas.tslib._Timestamp.__sub__ (pandas\tslib.c:10775)
File "tslib.pyx", line 550, in pandas.tslib._Timestamp.__add__ (pandas\tslib.c:10477)
ValueError: Cannot add integral value to Timestamp without offset.
| 15,228 |
|||
pandas-dev/pandas | pandas-dev__pandas-6567 | 1bab0a2d806dccf1b5bf6bf501d38b5f1a9b3bc7 | diff --git a/ci/install.sh b/ci/install.sh
--- a/ci/install.sh
+++ b/ci/install.sh
@@ -35,7 +35,8 @@ pip install -I -U setuptools
pip install wheel
# comment this line to disable the fetching of wheel files
-base_url=http://cache27diy-cpycloud.rhcloud.com
+base_url=http://pandas.pydata.org/pandas-build/dev/wheels
+
wheel_box=${TRAVIS_PYTHON_VERSION}${JOB_TAG}
PIP_ARGS+=" -I --use-wheel --find-links=$base_url/$wheel_box/ --allow-external --allow-insecure"
diff --git a/ci/requirements-2.7.txt b/ci/requirements-2.7.txt
--- a/ci/requirements-2.7.txt
+++ b/ci/requirements-2.7.txt
@@ -13,7 +13,6 @@ xlrd==0.9.2
patsy==0.1.0
html5lib==1.0b2
lxml==3.2.1
-scikits.timeseries==0.91.3
scipy==0.10.0
beautifulsoup4==4.2.1
statsmodels==0.5.0
diff --git a/ci/requirements-2.7_NUMPY_DEV_master.txt b/ci/requirements-2.7_NUMPY_DEV_master.txt
--- a/ci/requirements-2.7_NUMPY_DEV_master.txt
+++ b/ci/requirements-2.7_NUMPY_DEV_master.txt
@@ -1,3 +1,3 @@
python-dateutil
-pytz==2013b
+pytz
cython==0.19.1
| TST: TravisCI failures
It looks like Travis is failing to install the dependencies recently.
In `ci/install.sh`, we run
```
pip install $PIP_ARGS -r ci/requirements-${wheel_box}.txt
```
where `PIP_ARGS` (I think) is something like `-I --use-wheel --find-links=http://cache27diy-cpycloud.rhcloud.com/2.7/ --allow-external --allow-insecure` depending on which version.
Which produces this error (sometimes):
``` python
Downloading/unpacking python-dateutil==2.1 (from -r ci/requirements-2.7.txt (line 1))
http://cache27diy-cpycloud.rhcloud.com/2.7/ uses an insecure transport scheme (http). Consider using https if cache27diy-cpycloud.rhcloud.com has it available
Downloading python_dateutil-2.1-py27-none-any.whl (unknown size): 118kB downloaded
Cleaning up...
Exception:
Traceback (most recent call last):
File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/pip/commands/install.py", line 274, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/pip/req.py", line 1173, in prepare_files
self.unpack_url(url, location, self.is_download)
File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/pip/req.py", line 1320, in unpack_url
retval = unpack_http_url(link, location, self.download_cache, self.download_dir, self.session)
File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/pip/download.py", line 587, in unpack_http_url
unpack_file(temp_location, location, content_type, link)
File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/pip/util.py", line 621, in unpack_file
unzip_file(filename, location, flatten=not filename.endswith(('.pybundle', '.whl')))
File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/pip/util.py", line 491, in unzip_file
zip = zipfile.ZipFile(zipfp)
File "/usr/lib/python2.7/zipfile.py", line 714, in __init__
self._GetContents()
File "/usr/lib/python2.7/zipfile.py", line 748, in _GetContents
self._RealGetContents()
File "/usr/lib/python2.7/zipfile.py", line 763, in _RealGetContents
raise BadZipfile, "File is not a zip file"
BadZipfile: File is not a zip file
```
I've reproduced this locally.
Can we verify that the wheels hosted at `cache27diy-cpycloud.rhcloud.com/2.7/` were built correctly? I'm guessing `cpcloud` refers to @cpcloud?
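One way to triage this locally (a hedged sketch, reusing the wheel name from the log above): the `BadZipfile` suggests the download is not a wheel at all, often an HTML error page saved under a `.whl` name.

``` python
import zipfile

# a wheel is just a zip archive; a corrupt/HTML download fails this check
zipfile.is_zipfile('python_dateutil-2.1-py27-none-any.whl')  # False if corrupt
```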
| 2014-03-06T22:01:25Z | [] | [] |
Traceback (most recent call last):
File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/pip/commands/install.py", line 274, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/pip/req.py", line 1173, in prepare_files
self.unpack_url(url, location, self.is_download)
File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/pip/req.py", line 1320, in unpack_url
retval = unpack_http_url(link, location, self.download_cache, self.download_dir, self.session)
File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/pip/download.py", line 587, in unpack_http_url
unpack_file(temp_location, location, content_type, link)
File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/pip/util.py", line 621, in unpack_file
unzip_file(filename, location, flatten=not filename.endswith(('.pybundle', '.whl')))
File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/pip/util.py", line 491, in unzip_file
zip = zipfile.ZipFile(zipfp)
File "/usr/lib/python2.7/zipfile.py", line 714, in __init__
self._GetContents()
File "/usr/lib/python2.7/zipfile.py", line 748, in _GetContents
self._RealGetContents()
File "/usr/lib/python2.7/zipfile.py", line 763, in _RealGetContents
raise BadZipfile, "File is not a zip file"
BadZipfile: File is not a zip file
| 15,229 |
||||
pandas-dev/pandas | pandas-dev__pandas-6639 | dcbbc5929f2a3c867536d0976ca9176e5bffc5f8 | diff --git a/doc/source/basics.rst b/doc/source/basics.rst
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -1287,9 +1287,18 @@ Some other sorting notes / nuances:
* ``Series.sort`` sorts a Series by value in-place. This is to provide
compatibility with NumPy methods which expect the ``ndarray.sort``
behavior.
- * ``DataFrame.sort`` takes a ``column`` argument instead of ``by``. This
- method will likely be deprecated in a future release in favor of just using
- ``sort_index``.
+
+Sorting by a multi-index column
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You must be explicit about sorting when the column is a multi-index, and fully specify
+all levels to ``by``.
+
+.. ipython:: python
+
+ df1.columns = MultiIndex.from_tuples([('a','one'),('a','two'),('b','three')])
+ df1.sort_index(by=('a','two'))
+
Copying
-------
diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -125,6 +125,17 @@ API Changes
``DataFrame.stack`` operations where the name of the column index is used as
the name of the inserted column containing the pivoted data.
+- A tuple passed to ``DataFame.sort_index`` will be interpreted as the levels of
+ the index, rather than requiring a list of tuple (:issue:`4370`)
+
+Deprecations
+~~~~~~~~~~~~
+
+Prior Version Deprecations/Changes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- Remove ``column`` keyword from ``DataFrame.sort`` (:issue:`4370`)
+
Experimental Features
~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/v0.14.0.txt b/doc/source/v0.14.0.txt
--- a/doc/source/v0.14.0.txt
+++ b/doc/source/v0.14.0.txt
@@ -276,7 +276,9 @@ You can use a right-hand-side of an alignable object as well.
Prior Version Deprecations/Changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-There are no announced changes in 0.13.1 or prior that are taking effect as of 0.14.0
+These are prior version deprecations that are taking effect as of 0.14.0.
+
+- Remove ``column`` keyword from ``DataFrame.sort`` (:issue:`4370`)
Deprecations
~~~~~~~~~~~~
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2530,7 +2530,7 @@ def _m8_to_i8(x):
#----------------------------------------------------------------------
# Sorting
- def sort(self, columns=None, column=None, axis=0, ascending=True,
+ def sort(self, columns=None, axis=0, ascending=True,
inplace=False):
"""
Sort DataFrame either by labels (along either axis) or by the values in
@@ -2539,8 +2539,9 @@ def sort(self, columns=None, column=None, axis=0, ascending=True,
Parameters
----------
columns : object
- Column name(s) in frame. Accepts a column name or a list or tuple
- for a nested sort.
+ Column name(s) in frame. Accepts a column name or a list
+ for a nested sort. A tuple will be interpreted as the
+ levels of a multi-index.
ascending : boolean or list, default True
Sort ascending vs. descending. Specify list for multiple sort
orders
@@ -2557,9 +2558,6 @@ def sort(self, columns=None, column=None, axis=0, ascending=True,
-------
sorted : DataFrame
"""
- if column is not None: # pragma: no cover
- warnings.warn("column is deprecated, use columns", FutureWarning)
- columns = column
return self.sort_index(by=columns, axis=axis, ascending=ascending,
inplace=inplace)
@@ -2574,8 +2572,9 @@ def sort_index(self, axis=0, by=None, ascending=True, inplace=False,
axis : {0, 1}
Sort index/rows versus columns
by : object
- Column name(s) in frame. Accepts a column name or a list or tuple
- for a nested sort.
+ Column name(s) in frame. Accepts a column name or a list
+ for a nested sort. A tuple will be interpreted as the
+ levels of a multi-index.
ascending : boolean or list, default True
Sort ascending vs. descending. Specify list for multiple sort
orders
@@ -2602,7 +2601,7 @@ def sort_index(self, axis=0, by=None, ascending=True, inplace=False,
if axis != 0:
raise ValueError('When sorting by column, axis must be 0 '
'(rows)')
- if not isinstance(by, (tuple, list)):
+ if not isinstance(by, list):
by = [by]
if com._is_sequence(ascending) and len(by) != len(ascending):
raise ValueError('Length of ascending (%d) != length of by'
@@ -2629,6 +2628,13 @@ def trans(v):
by = by[0]
k = self[by].values
if k.ndim == 2:
+
+ # try to be helpful
+ if isinstance(self.columns, MultiIndex):
+ raise ValueError('Cannot sort by column %s in a multi-index'
+ ' you need to explicitly provide all the levels'
+ % str(by))
+
raise ValueError('Cannot sort by duplicate column %s'
% str(by))
if isinstance(ascending, (tuple, list)):
| ER/DOC: Sorting in multi-index columns: misleading error message, unclear docs
related #739
Have a look at this example:
``` python
import pandas as pd
import numpy as np
from StringIO import StringIO
print "Pandas version %s\n\n" % pd.__version__
data1 = """idx,metric
0,2.1
1,2.5
2,3"""
data2 = """idx,metric
0,2.7
1,2.2
2,2.8"""
df1 = pd.read_csv(StringIO(data1))
df2 = pd.read_csv(StringIO(data2))
concatenated = pd.concat([df1, df2], ignore_index=True)
merged = concatenated.groupby("idx").agg([np.mean, np.std])
print merged
print merged.sort('metric')
```
and its output:
```
$ python test.py
Pandas version 0.11.0
metric
mean std
idx
0 2.40 0.424264
1 2.35 0.212132
2 2.90 0.141421
Traceback (most recent call last):
File "test.py", line 22, in <module>
print merged.sort('metric')
File "/***/Python-2.7.3/lib/python2.7/site-packages/pandas/core/frame.py", line 3098, in sort
inplace=inplace)
File "/***/Python-2.7.3/lib/python2.7/site-packages/pandas/core/frame.py", line 3153, in sort_index
% str(by))
ValueError: Cannot sort by duplicate column metric
```
The problem here is not that there is a duplicate column `metric` as stated by the error message. The problem is that there are still two sub-levels. The solution in this case is to use
``` python
merged.sort([('metric', 'mean')])
```
for sorting by the mean of the metric. It took me quite a while to figure this out. First of all, the error message should be clearer in this case. Second, maybe I simply missed it, but I could not find the solution in the docs, only in a thread on StackOverflow. The error message above looks like the result of an over-generalized condition around https://github.com/pydata/pandas/blob/v0.12.0rc1/pandas/core/frame.py#L3269
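A minimal sketch of the behavior after this change (pandas >= 0.14; column names assumed for illustration): a tuple passed to ``by`` names one fully-specified MultiIndex column, so the list wrapper is no longer required.
``` python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(3, 3))
df.columns = pd.MultiIndex.from_tuples(
    [('metric', 'mean'), ('metric', 'std'), ('other', 'x')])

df.sort_index(by=('metric', 'mean'))    # tuple: all levels given
df.sort_index(by=[('metric', 'mean')])  # equivalent list-of-tuples form
```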
| yep....docs/error msg are unclear
| 2014-03-14T21:38:21Z | [] | [] |
Traceback (most recent call last):
File "test.py", line 22, in <module>
print merged.sort('metric')
File "/***/Python-2.7.3/lib/python2.7/site-packages/pandas/core/frame.py", line 3098, in sort
inplace=inplace)
File "/***/Python-2.7.3/lib/python2.7/site-packages/pandas/core/frame.py", line 3153, in sort_index
% str(by))
ValueError: Cannot sort by duplicate column metric
| 15,244 |
|||
pandas-dev/pandas | pandas-dev__pandas-6790 | 0dfa1935a9e194d3bbcdf54c3012704a6e68668c | diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1040,6 +1040,10 @@ def _convert_to_indexer(self, obj, axis=0, is_setter=False):
level = 0
_, indexer = labels.reindex(objarr, level=level)
+ # take all
+ if indexer is None:
+ indexer = np.arange(len(labels))
+
check = labels.levels[0].get_indexer(objarr)
else:
level = None
| BUG: Slicing multi Index with one column by column errors out
The following code errors out. Seems like it should just return the original dataframe.
pandas: 0.13.1-552-g8120a59
``` python
import pandas as pd
import numpy as np
from itertools import product
attributes = ['Attribute' + str(i) for i in range(1)]
attribute_values = ['Value' + str(i) for i in range(1000)]
index = pd.MultiIndex.from_tuples(list(product(attributes, attribute_values)))
df = 0.1 * np.random.randn(10, 1 * 1000) + 0.5
df = pd.DataFrame(df, columns=index)
df[attributes]
```
Stacktrace
```
(pandas)pandas git:master ❯ python example_failure.py ✭
Traceback (most recent call last):
File "example_failure.py", line 12, in <module>
df[attributes]
File "PATH/pandas/pandas/core/frame.py", line 1672, in __getitem__
return self._getitem_array(key)
File "PATH/pandas/pandas/core/frame.py", line 1717, in _getitem_array
return self.take(indexer, axis=1, convert=True)
File "PATH/pandas/pandas/core/generic.py", line 1224, in take
indices, len(self._get_axis(axis)))
File "PATH/pandas/pandas/core/indexing.py", line 1564, in _maybe_convert_indices
if mask.any():
AttributeError: 'bool' object has no attribute 'any'
```
| :+1:
@rdooley What version of Python are you running?
```
>>> pandas.show_versions()
INSTALLED VERSIONS
------------------
commit: 8120a5955f912186e6b3913283ffe635474b2581
python: 2.7.3.final.0
python-bits: 32
OS: Linux
OS-release: 3.5.0-48-generic
machine: i686
processor: i686
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
pandas: 0.13.1-552-g8120a59
Cython: 0.20.1
numpy: 1.8.1
scipy: None
statsmodels: None
IPython: None
sphinx: None
patsy: None
scikits.timeseries: None
dateutil: 2.2
pytz: 2014.2
bottleneck: None
tables: None
numexpr: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
bq: None
apiclient: None
rpy2: None
sqlalchemy: None
pymysql: None
psycopg2: None
```
This works fine; what would you expect? You are only specifying 1 level
```
In [39]: df['Attribute0']
Out[39]:
Value0 Value1 Value2 Value3 Value4 Value5 Value6 Value7 Value8 Value9 Value10 Value11 Value12 Value13 Value14 Value15 Value16 Value17 Value18 Value19 Value20 Value21 Value22 Value23 Value24 Value25 Value26 Value27 Value28 \
0 0.428597 0.474725 0.329974 0.339155 0.510907 0.381153 0.494624 0.459862 0.426424 0.593362 0.477724 0.580565 0.654136 0.402909 0.536322 0.519228 0.603902 0.635411 0.560371 0.372337 0.520917 0.706165 0.529375 0.434004 0.402526 0.403120 0.477703 0.332731 0.515974
1 0.240495 0.416920 0.439150 0.622330 0.708689 0.438536 0.445845 0.449184 0.312858 0.601184 0.562577 0.375231 0.730649 0.526096 0.597879 0.413970 0.376763 0.275275 0.490993 0.703520 0.563456 0.563306 0.629091 0.471302 0.551040 0.466567 0.431064 0.714665 0.486743
2 0.331542 0.653771 0.552438 0.589690 0.526086 0.425176 0.434082 0.529349 0.449824 0.505634 0.518366 0.357662 0.497151 0.493554 0.575063 0.512020 0.483671 0.465662 0.466157 0.463028 0.624075 0.504504 0.423229 0.734069 0.651364 0.630627 0.468450 0.328970 0.324258
3 0.514077 0.427319 0.466188 0.605764 0.428764 0.413647 0.400183 0.485824 0.376856 0.417513 0.464381 0.528675 0.541136 0.433951 0.470221 0.381789 0.327926 0.492118 0.574851 0.589681 0.552885 0.568272 0.367919 0.260071 0.483927 0.521802 0.468524 0.299048 0.647669
4 0.395979 0.400110 0.409704 0.448312 0.351965 0.609942 0.696743 0.279321 0.461898 0.377576 0.579921 0.420840 0.648152 0.498430 0.407052 0.488687 0.569436 0.381394 0.767291 0.410772 0.470019 0.753922 0.516066 0.419900 0.398174 0.715835 0.323378 0.596044 0.470373
5 0.526009 0.348792 0.405698 0.404723 0.612659 0.462875 0.473429 0.592034 0.416211 0.534280 0.410594 0.434802 0.675097 0.509855 0.388789 0.438791 0.525251 0.617305 0.391869 0.752799 0.615893 0.557166 0.390778 0.471838 0.581625 0.443952 0.755009 0.551362 0.394231
6 0.774323 0.472731 0.538965 0.498537 0.460677 0.459545 0.486819 0.597577 0.453321 0.613500 0.602001 0.410832 0.584744 0.458095 0.368004 0.483783 0.423427 0.503754 0.300216 0.540073 0.329496 0.562550 0.380913 0.478411 0.554229 0.369369 0.607680 0.581758 0.513870
7 0.419491 0.487092 0.406473 0.397357 0.441859 0.511982 0.532354 0.492218 0.443784 0.369556 0.606099 0.478516 0.743081 0.476748 0.491992 0.398343 0.552451 0.477788 0.397714 0.454639 0.304476 0.569097 0.419266 0.300703 0.435237 0.410379 0.625846 0.616154 0.557484
8 0.394619 0.477231 0.496815 0.360447 0.603958 0.494705 0.483516 0.452920 0.683313 0.393272 0.500985 0.531819 0.441863 0.505411 0.443088 0.651859 0.550723 0.352809 0.490095 0.500648 0.594268 0.481356 0.456229 0.445136 0.376424 0.569474 0.515270 0.546426 0.578490
9 0.632996 0.444865 0.620693 0.519118 0.526293 0.709014 0.409693 0.669661 0.377818 0.403557 0.490237 0.728029 0.578740 0.384762 0.469213 0.480383 0.669422 0.453744 0.355138 0.565972 0.722325 0.288859 0.492174 0.435719 0.643250 0.521973 0.332801 0.560180 0.559778
Value29 Value30 Value31 Value32 Value33 Value34 Value35 Value36 Value37 Value38 Value39 Value40 Value41 Value42 Value43 Value44 Value45 Value46 Value47 Value48 Value49
0 0.321760 0.592699 0.539722 0.796589 0.422642 0.422434 0.511535 0.401984 0.642769 0.591690 0.618729 0.495701 0.499338 0.563066 0.416714 0.446868 0.452054 0.582336 0.548836 0.447153 0.472968 ...
1 0.210142 0.449109 0.485753 0.591893 0.385420 0.461945 0.542475 0.384641 0.504033 0.536906 0.306892 0.306667 0.331088 0.580116 0.487420 0.551604 0.541859 0.571688 0.521816 0.406157 0.502571 ...
2 0.547881 0.360305 0.655494 0.542029 0.566243 0.387138 0.618194 0.450937 0.376648 0.588018 0.585651 0.444006 0.458362 0.350833 0.641674 0.756018 0.433026 0.561242 0.589356 0.451257 0.552501 ...
3 0.478743 0.547547 0.469112 0.491888 0.594351 0.422536 0.524435 0.482052 0.578267 0.478766 0.349651 0.287340 0.579013 0.392594 0.346863 0.582621 0.501879 0.566547 0.534843 0.463062 0.494592 ...
4 0.651505 0.492600 0.473807 0.613098 0.424803 0.490984 0.443187 0.622319 0.403338 0.558826 0.483970 0.344612 0.488133 0.415431 0.377388 0.560634 0.538000 0.617297 0.462177 0.496063 0.560822 ...
5 0.416146 0.448544 0.514729 0.472275 0.347323 0.631208 0.606431 0.442708 0.425733 0.306117 0.323116 0.562519 0.547103 0.681616 0.508262 0.568052 0.583510 0.509279 0.390760 0.499313 0.544170 ...
6 0.558502 0.396221 0.485368 0.561663 0.541815 0.439916 0.434909 0.557702 0.452810 0.500489 0.394362 0.399809 0.616072 0.409369 0.524609 0.510435 0.431721 0.592443 0.540598 0.571026 0.585635 ...
7 0.640528 0.442698 0.765788 0.467680 0.334216 0.663389 0.577547 0.415244 0.539227 0.434006 0.363006 0.501987 0.475661 0.423875 0.600613 0.266944 0.297475 0.459295 0.392350 0.732349 0.461073 ...
8 0.504471 0.473215 0.555433 0.426638 0.356168 0.382980 0.496851 0.459538 0.519070 0.608182 0.489673 0.542196 0.489839 0.679011 0.361691 0.548929 0.703401 0.420543 0.642336 0.446213 0.732407 ...
9 0.544913 0.439394 0.466272 0.377234 0.515162 0.582319 0.618151 0.552965 0.500832 0.583877 0.365398 0.410493 0.634820 0.674711 0.606125 0.671604 0.531618 0.623165 0.528496 0.637803 0.459427 ...
[10 rows x 1000 columns]
```
@jreback I'm getting the same error as @rdooley on Python 2.7.5 with `pandas==0.13.1` and `numpy==1.8.0` installed.
you can't pass a list if you only specify a single level of the multi-index.
what are you trying to do?
@jreback That is indeed the intention. In a utility function, multiple levels are specified under most circumstances; however, an edge case occurred where only one level was specified (and the MultiIndex itself had only one level).
ok...take that back...this is a bug....didn't have a test for it.
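A hedged sketch of the fixed behavior (shapes and names assumed): selecting with a list that names only the first level should return all matching columns instead of raising `AttributeError`.
``` python
import numpy as np
import pandas as pd

cols = pd.MultiIndex.from_tuples([('Attribute0', 'Value0'),
                                  ('Attribute0', 'Value1')])
df = pd.DataFrame(np.random.randn(3, 2), columns=cols)

assert df[['Attribute0']].shape == (3, 2)  # previously raised
```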
| 2014-04-03T21:46:40Z | [] | [] |
Traceback (most recent call last):
File "example_failure.py", line 12, in <module>
df[attributes]
File "PATH/pandas/pandas/core/frame.py", line 1672, in __getitem__
return self._getitem_array(key)
File "PATH/pandas/pandas/core/frame.py", line 1717, in _getitem_array
return self.take(indexer, axis=1, convert=True)
File "PATH/pandas/pandas/core/generic.py", line 1224, in take
indices, len(self._get_axis(axis)))
File "PATH/pandas/pandas/core/indexing.py", line 1564, in _maybe_convert_indices
if mask.any():
AttributeError: 'bool' object has no attribute 'any'
| 15,269 |
|||
pandas-dev/pandas | pandas-dev__pandas-6810 | d2e1abff612a9dc4e5894ddaae706cb6fa18a8bc | diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -172,6 +172,10 @@ API Changes
(and numpy defaults)
- add ``inplace`` keyword to ``Series.order/sort`` to make them inverses (:issue:`6859`)
+- Replace ``pandas.compat.scipy.scoreatpercentile`` with ``numpy.percentile`` (:issue:`6810`)
+- ``.quantile`` on a ``datetime[ns]`` series now returns ``Timestamp`` instead
+ of ``np.datetime64`` objects (:issue:`6810`)
+
Deprecations
~~~~~~~~~~~~
diff --git a/pandas/compat/scipy.py b/pandas/compat/scipy.py
--- a/pandas/compat/scipy.py
+++ b/pandas/compat/scipy.py
@@ -6,88 +6,6 @@
import numpy as np
-def scoreatpercentile(a, per, limit=(), interpolation_method='fraction'):
- """Calculate the score at the given `per` percentile of the sequence `a`.
-
- For example, the score at `per=50` is the median. If the desired quantile
- lies between two data points, we interpolate between them, according to
- the value of `interpolation`. If the parameter `limit` is provided, it
- should be a tuple (lower, upper) of two values. Values of `a` outside
- this (closed) interval will be ignored.
-
- The `interpolation_method` parameter supports three values, namely
- `fraction` (default), `lower` and `higher`. Interpolation is done only,
- if the desired quantile lies between two data points `i` and `j`. For
- `fraction`, the result is an interpolated value between `i` and `j`;
- for `lower`, the result is `i`, for `higher` the result is `j`.
-
- Parameters
- ----------
- a : ndarray
- Values from which to extract score.
- per : scalar
- Percentile at which to extract score.
- limit : tuple, optional
- Tuple of two scalars, the lower and upper limits within which to
- compute the percentile.
- interpolation_method : {'fraction', 'lower', 'higher'}, optional
- This optional parameter specifies the interpolation method to use,
- when the desired quantile lies between two data points `i` and `j`:
-
- - fraction: `i + (j - i)*fraction`, where `fraction` is the
- fractional part of the index surrounded by `i` and `j`.
- - lower: `i`.
- - higher: `j`.
-
- Returns
- -------
- score : float
- Score at percentile.
-
- See Also
- --------
- percentileofscore
-
- Examples
- --------
- >>> from scipy import stats
- >>> a = np.arange(100)
- >>> stats.scoreatpercentile(a, 50)
- 49.5
-
- """
- # TODO: this should be a simple wrapper around a well-written quantile
- # function. GNU R provides 9 quantile algorithms (!), with differing
- # behaviour at, for example, discontinuities.
- values = np.sort(a, axis=0)
- if limit:
- values = values[(limit[0] <= values) & (values <= limit[1])]
-
- idx = per / 100. * (values.shape[0] - 1)
- if idx % 1 == 0:
- score = values[idx]
- else:
- if interpolation_method == 'fraction':
- score = _interpolate(values[int(idx)], values[int(idx) + 1],
- idx % 1)
- elif interpolation_method == 'lower':
- score = values[np.floor(idx)]
- elif interpolation_method == 'higher':
- score = values[np.ceil(idx)]
- else:
- raise ValueError("interpolation_method can only be 'fraction', "
- "'lower' or 'higher'")
-
- return score
-
-
-def _interpolate(a, b, fraction):
- """Returns the point at the given fraction between a and b, where
- 'fraction' must be between 0 and 1.
- """
- return a + (b - a) * fraction
-
-
def rankdata(a):
"""
Ranks the data, dealing with ties appropriately.
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -38,7 +38,7 @@
import pandas.computation.expressions as expressions
from pandas.computation.eval import eval as _eval
from pandas.computation.scope import _ensure_scope
-from pandas.compat.scipy import scoreatpercentile as _quantile
+from numpy import percentile as _quantile
from pandas.compat import(range, zip, lrange, lmap, lzip, StringIO, u,
OrderedDict, raise_with_traceback)
from pandas import compat
diff --git a/pandas/core/series.py b/pandas/core/series.py
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -52,7 +52,7 @@
import pandas.tslib as tslib
import pandas.index as _index
-from pandas.compat.scipy import scoreatpercentile as _quantile
+from numpy import percentile as _quantile
from pandas.core.config import get_option
__all__ = ['Series']
@@ -1235,10 +1235,11 @@ def quantile(self, q=0.5):
valid_values = self.dropna().values
if len(valid_values) == 0:
return pa.NA
- result = _quantile(valid_values, q * 100)
- if not np.isscalar and com.is_timedelta64_dtype(result):
- from pandas.tseries.timedeltas import to_timedelta
- return to_timedelta(result)
+ if com.is_datetime64_dtype(self):
+ values = _values_from_object(self).view('i8')
+ result = lib.Timestamp(_quantile(values, q * 100))
+ else:
+ result = _quantile(valid_values, q * 100)
return result
| Numpy 1.8 `DeprecationWarning` in compat/scipy.py
Not sure how pressing this is, but with `DeprecationWarning` enabled, I notice that numpy 1.8 is raising a warning during the following call to `describe()`. [side note: enabled DeprecationWarning in my test suite after learning that it was changed in py2.7 to "ignore" by default.]
```
import pandas as pd
import warnings
warnings.simplefilter("once", DeprecationWarning)
df = pd.DataFrame({"A": [1, 2, 3], "B": [1.2, 4.2, 5.2]})
print df.groupby('A')['B'].describe()
```
stdout:
```
$ python test_fail.py
.../pandas/compat/scipy.py:68: DeprecationWarning: using a non-integer
number instead of an integer will result in an error in the future
score = values[idx]
```
Here's the full traceback with DeprecationWarning escalated to an error (`warnings.simplefilter("error", DeprecationWarning)`):
```
Traceback (most recent call last):
File "test_fail.py", line 6, in <module>
print df.groupby('A')['B'].describe()
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0_29_g97860a1-py2.7-linux-i686.egg/pandas/core/groupby.py", line 343, in wrapper
return self.apply(curried)
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0_29_g97860a1-py2.7-linux-i686.egg/pandas/core/groupby.py", line 424, in apply
return self._python_apply_general(f)
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0_29_g97860a1-py2.7-linux-i686.egg/pandas/core/groupby.py", line 427, in _python_apply_general
keys, values, mutated = self.grouper.apply(f, self.obj, self.axis)
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0_29_g97860a1-py2.7-linux-i686.egg/pandas/core/groupby.py", line 883, in apply
res = f(group)
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0_29_g97860a1-py2.7-linux-i686.egg/pandas/core/groupby.py", line 422, in f
return func(g, *args, **kwargs)
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0_29_g97860a1-py2.7-linux-i686.egg/pandas/core/groupby.py", line 329, in curried
return f(x, *args, **kwargs)
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0_29_g97860a1-py2.7-linux-i686.egg/pandas/core/series.py", line 1386, in describe
lb), self.median(), self.quantile(ub),
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0_29_g97860a1-py2.7-linux-i686.egg/pandas/core/series.py", line 1316, in quantile
result = _quantile(valid_values, q * 100)
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0_29_g97860a1-py2.7-linux-i686.egg/pandas/compat/scipy.py", line 68, in scoreatpercentile
score = values[idx]
IndexError: cannot convert index to integer
```
| can you show what `values` and `idx` are at that point (and what `valid_values` and `q` are coming in?
```
values == array([ 1.2])
idx == 0.0
valid_values == array([ 1.2])
q == 0.25
```
Can repro this with plain numpy as:
```
In [1]: import warnings
In [2]: warnings.simplefilter('error', DeprecationWarning)
In [3]: np.array(range(10))[1.0]
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-3-2e544ff0834f> in <module>()
----> 1 np.array(range(10))[1.0]
IndexError: cannot convert index to integer
In [4]: warnings.simplefilter('ignore', DeprecationWarning)
In [5]: np.array(range(10))[1.0]
Out[5]: 1
```
ahh...I see you are trying to index with floats as the indexer (when it's convertible to an int)....it's 'accepted'...but in general not a good idea.
I am going to warn on this in 0.14 too, see here: http://pandas.pydata.org/pandas-docs/dev/whatsnew.html#float64index-api-change (the very last part).
I think pandas is generating the float index internally in compat/scipy.py. If I call `scipy.stats.scoreatpercentile(np.array([ 1.2]), 25.0)` directly with scipy version 0.13.2, I don't see this warning..
if you have your above example (the top example) fail, can you print out those values (they should be the `valid_values`), prob a float. not sure why that would cause a problem (its a numpy array)
Here are the values again for the original example:
`values == array([ 1.2])`, `idx == 0.0`, `valid_values == array([ 1.2])`, `q == 0.25`
Looks to me like pandas has an [old version of scipy `scoreatpercentile`](https://github.com/pydata/pandas/blob/master/pandas/compat/scipy.py#L9) and needs to copy a [new version](https://github.com/scipy/scipy/blob/master/scipy/stats/stats.py#L1369) to prevent indexing a numpy ndarray with a float, which was deprecated in numpy 1.8.
idx is computed by scipy
afaik pandas is just passing simple stuff
can u see where scipy deprecated this (eg the original issue)
maybe need to call a different routine?
I didn't have scipy installed when I saw the failure, so I think that idx is
computed by a copy of a scipy function that lives in pandas. According to the
compat/scipy.py header, it was copied to avoid a dependency on scipy.
Looks like scipy.stats.scoreatpercentile was last changed in commit
https://github.com/jjhelmus/scipy/commit/1cdd08b62f52fdacb520b18910b8e1a71017ac9c
to accept sequences of percentiles, and also not index an ndarray
with a float..
for reference:
pandas/compat/scipy.py created by https://github.com/pydata/pandas/issues/1092
non-integer ndarray indexing deprecated in https://github.com/numpy/numpy/pull/3243
ahh...I see now...
ok...so basically that module then needs updating....
care to submit a PR (and prob need some tests there)
we have a 'soft' dep on scipy...but this is such a common thing it's fine to have it 'built' in
so will call this a 'bug' then
Sure, I will draft a pr
note that (still unreleased) numpy 1.9 will have a percentile that should be able to replace scipy.scoreatpercentile both in performance and features.
it uses partition instead of sort which is faster and will support extended axes (ext axes is not merged yet but should be soon)
as you are a user of percentile maybe you want to give numpy 1.9.dev a try and see if it works for you.
@gdraps could you submit a PR for this?
(and for numpy 1.9 should take advantage of the changes)....
@gdraps PR for this?
Sent PR #6740 to fix the core issue, though it doesn't take advantage of numpy.percentile, which has been in numpy in a form that appears compatible with pandas's usage since 1.5, best I can tell.
When I tried to simply replace `scoreatpercentile` in core/frame.py and core/series.py with `numpy.percentile`, while using numpy 1.8, two tests below failed.
```
======================================================================
FAIL: test_timedelta_ops (pandas.tseries.tests.test_timedeltas.TestTimedeltas)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/gmd/github_shadow/pandas/pandas/tseries/tests/test_timedeltas.py",
line 203, in test_timedelta_ops
tm.assert_almost_equal(result, expected)
File "testing.pyx", line 58, in pandas._testing.assert_almost_equal
(pandas/srcAssertionError: numpy.timedelta64(2599999999,'ns') !=
numpy.timedelta64(2600000000,'ns')
======================================================================
FAIL: test_quantile (pandas.tests.test_series.TestSeries)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/gmd/github_shadow/pandas/pandas/tests/test_series.py", line 2115,
in test_quantile
self.assertEqual(q, scoreatpercentile(self.ts.valid(), 10))
AssertionError: -1.2926251727455667 != -1.2926251727455669
----------------------------------------------------------------------
```
the way the fractions are computed is not the same in your function and numpy, so you get slight rounding errors; numpy computes:
```
(1 - fraction) * low + fraction * high
```
while your code has one operation less:
```
low + (high - low) * fraction
```
maybe we could change numpy to that method if you expect it causes issues, but relying on exact results for floating point operations is usually not a good idea in high level programs without tight control on the operations and rounding modes
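A small illustration of the two algebraically equivalent interpolation forms (values assumed); in floating point they can disagree in the last bit, which is exactly the kind of difference the failing asserts picked up.
```python
low, high, fraction = -1.2926251727455669, 0.0, 0.1
numpy_form = (1 - fraction) * low + fraction * high   # numpy.percentile's form
compat_form = low + (high - low) * fraction           # pandas.compat.scipy's form
print(numpy_form, compat_form)  # may differ by ~1 ulp
```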
@gdraps I would be happy just dropping entirely `pandas.compat.scipy/_quantile` in favor of using the numpy method. Then just change the `test_quantile` test to compare against the numpy method iteself. (and just remove this part of the scipy dep).
Not sure why this was not done originally. Pls also add a test for using `datetime64[ns]` as I suspect this fails. (look at the `isin` method to see how to do this).
.
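For reference, a hedged sketch of the datetime behavior this PR settles on: `.quantile` on a `datetime64[ns]` Series returns a `Timestamp` rather than a raw `np.datetime64`.
```python
import pandas as pd

s = pd.Series(pd.date_range('2000-01-01', periods=5))
q = s.quantile(0.5)
assert isinstance(q, pd.Timestamp)
```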
| 2014-04-05T14:27:01Z | [] | [] |
Traceback (most recent call last):
File "test_fail.py", line 6, in <module>
print df.groupby('A')['B'].describe()
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0_29_g97860a1-py2.7-linux-i686.egg/pandas/core/groupby.py", line 343, in wrapper
return self.apply(curried)
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0_29_g97860a1-py2.7-linux-i686.egg/pandas/core/groupby.py", line 424, in apply
return self._python_apply_general(f)
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0_29_g97860a1-py2.7-linux-i686.egg/pandas/core/groupby.py", line 427, in _python_apply_general
keys, values, mutated = self.grouper.apply(f, self.obj, self.axis)
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0_29_g97860a1-py2.7-linux-i686.egg/pandas/core/groupby.py", line 883, in apply
res = f(group)
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0_29_g97860a1-py2.7-linux-i686.egg/pandas/core/groupby.py", line 422, in f
return func(g, *args, **kwargs)
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0_29_g97860a1-py2.7-linux-i686.egg/pandas/core/groupby.py", line 329, in curried
return f(x, *args, **kwargs)
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0_29_g97860a1-py2.7-linux-i686.egg/pandas/core/series.py", line 1386, in describe
lb), self.median(), self.quantile(ub),
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0_29_g97860a1-py2.7-linux-i686.egg/pandas/core/series.py", line 1316, in quantile
result = _quantile(valid_values, q * 100)
File "/home/gmd/ENV/pandas-master-2/lib/python2.7/site-packages/pandas-0.13.0_29_g97860a1-py2.7-linux-i686.egg/pandas/compat/scipy.py", line 68, in scoreatpercentile
score = values[idx]
IndexError: cannot convert index to integer
| 15,271 |
|||
pandas-dev/pandas | pandas-dev__pandas-6985 | be29fd2c28d90c8ff3a193ba88a78e20e8b01e45 | diff --git a/pandas/src/testing.pyx b/pandas/src/testing.pyx
--- a/pandas/src/testing.pyx
+++ b/pandas/src/testing.pyx
@@ -121,7 +121,7 @@ cpdef assert_almost_equal(a, b, bint check_less_precise=False):
dtype_a = np.dtype(type(a))
dtype_b = np.dtype(type(b))
if dtype_a.kind == 'f' and dtype_b == 'f':
- if dtype_a.itemsize <= 4 and dtype_b.itemsize <= 4:
+ if dtype_a.itemsize <= 4 or dtype_b.itemsize <= 4:
decimal = 3
if np.isinf(a):
| Random bad asserts for stat ops when running tests.
```
======================================================================
FAIL: test_sum (pandas.tests.test_frame.TestDataFrame)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/travis/virtualenv/python2.6_with_system_site_packages/lib/python2.6/site-packages/pandas/tests/test_frame.py", line 10590, in test_sum
has_numeric_only=True, check_dtype=False, check_less_precise=True)
File "/home/travis/virtualenv/python2.6_with_system_site_packages/lib/python2.6/site-packages/pandas/tests/test_frame.py", line 10780, in _check_stat_op
check_less_precise=check_less_precise) # HACK: win32
File "/home/travis/virtualenv/python2.6_with_system_site_packages/lib/python2.6/site-packages/pandas/util/testing.py", line 513, in assert_series_equal
assert_almost_equal(left.values, right.values, check_less_precise)
File "testing.pyx", line 58, in pandas._testing.assert_almost_equal (pandas/src/testing.c:2554)
File "testing.pyx", line 93, in pandas._testing.assert_almost_equal (pandas/src/testing.c:1796)
File "testing.pyx", line 140, in pandas._testing.assert_almost_equal (pandas/src/testing.c:2387)
AssertionError: expected 0.00144 but got 0.00144
```
| @jreback
It looks like the issue is that nanops specify the `dtype_max` when calling the ops. https://github.com/pydata/pandas/commit/ff7bb2c1931f875878b349d125dbba30a502474f seems to be the culprit.
In the case that I found, the unit test is checking a `float32` frame vs that same frame up-casted to `float64`.
http://nbviewer.ipython.org/gist/anonymous/11349526
it could be but this error has been around a while actually
the issue is that `np.sum` is used as the comparison which should be passing `.sum(dtype='float32')` in this case
(the actual pandas routines are correct) after the fix above
pls submit a PR for this if you can (you can pass `lambda x: np.sum(x, dtype='float32')` instead of `np.sum`, I think). This is sort of a 'numpy' issue, really, as `np.sum` is doing the wrong thing
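A hedged sketch of the precision gap behind the flaky assert (data assumed): summing float32 values with float32 accumulation versus float64 accumulation gives results that usually differ in the low bits.
```python
import numpy as np

np.random.seed(0)
data = np.random.randn(100000).astype('float32')
f32_sum = np.sum(data)                    # float32 accumulation
f64_sum = np.sum(data, dtype='float64')   # float64 accumulation
print(f32_sum, f64_sum)                   # typically unequal past ~7 digits
```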
| 2014-04-27T19:02:09Z | [] | [] |
Traceback (most recent call last):
File "/home/travis/virtualenv/python2.6_with_system_site_packages/lib/python2.6/site-packages/pandas/tests/test_frame.py", line 10590, in test_sum
has_numeric_only=True, check_dtype=False, check_less_precise=True)
File "/home/travis/virtualenv/python2.6_with_system_site_packages/lib/python2.6/site-packages/pandas/tests/test_frame.py", line 10780, in _check_stat_op
check_less_precise=check_less_precise) # HACK: win32
File "/home/travis/virtualenv/python2.6_with_system_site_packages/lib/python2.6/site-packages/pandas/util/testing.py", line 513, in assert_series_equal
assert_almost_equal(left.values, right.values, check_less_precise)
File "testing.pyx", line 58, in pandas._testing.assert_almost_equal (pandas/src/testing.c:2554)
File "testing.pyx", line 93, in pandas._testing.assert_almost_equal (pandas/src/testing.c:1796)
File "testing.pyx", line 140, in pandas._testing.assert_almost_equal (pandas/src/testing.c:2387)
AssertionError: expected 0.00144 but got 0.00144
| 15,303 |
|||
pandas-dev/pandas | pandas-dev__pandas-6990 | 5824637fe0ff550d1ec53495a9b3f6bc56028165 | diff --git a/.travis.yml b/.travis.yml
--- a/.travis.yml
+++ b/.travis.yml
@@ -3,7 +3,7 @@ language: python
env:
global:
# scatterci API key
- - secure: "Bx5umgo6WjuGY+5XFa004xjCiX/vq0CyMZ/ETzcs7EIBI1BE/0fIDXOoWhoxbY9HPfdPGlDnDgB9nGqr5wArO2s+BavyKBWg6osZ3dmkfuJPMOWeyCa92EeP+sfKw8e5HSU5MizW9e319wHWOF/xkzdHR7T67Qd5erhv91x4DnQ="
+ #- secure: "Bx5umgo6WjuGY+5XFa004xjCiX/vq0CyMZ/ETzcs7EIBI1BE/0fIDXOoWhoxbY9HPfdPGlDnDgB9nGqr5wArO2s+BavyKBWg6osZ3dmkfuJPMOWeyCa92EeP+sfKw8e5HSU5MizW9e319wHWOF/xkzdHR7T67Qd5erhv91x4DnQ="
# ironcache API key
- secure: "e4eEFn9nDQc3Xa5BWYkzfX37jaWVq89XidVX+rcCNEr5OlOImvveeXnF1IzbRXznH4Sv0YsLwUd8RGUWOmyCvkONq/VJeqCHWtTMyfaCIdqSyhIP9Odz8r9ahch+Y0XFepBey92AJHmlnTh+2GjCDgIiqq4fzglojnp56Vg1ojA="
- secure: "CjmYmY5qEu3KrvMtel6zWFEtMq8ORBeS1S1odJHnjQpbwT1KY2YFZRVlLphfyDQXSz6svKUdeRrCNp65baBzs3DQNA8lIuXGIBYFeJxqVGtYAZZs6+TzBPfJJK798sGOj5RshrOJkFG2rdlWNuTq/XphI0JOrN3nPUkRrdQRpAw="
@@ -51,6 +51,7 @@ matrix:
- JOB_NAME: "27_numpy_master"
- JOB_TAG=_NUMPY_DEV_master
- NUMPY_BUILD=master
+ - PANDAS_TESTING_MODE="numpy_deprecate"
allow_failures:
- python: 2.7
env:
@@ -58,6 +59,7 @@ matrix:
- JOB_NAME: "27_numpy_master"
- JOB_TAG=_NUMPY_DEV_master
- NUMPY_BUILD=master
+ - PANDAS_TESTING_MODE="numpy_deprecate"
# allow importing from site-packages,
# so apt-get python-x works for system pythons
diff --git a/ci/after_script.sh b/ci/after_script.sh
--- a/ci/after_script.sh
+++ b/ci/after_script.sh
@@ -1,7 +1,7 @@
#!/bin/bash
-wget https://raw.github.com/y-p/ScatterCI-CLI/master/scatter_cli.py
-chmod u+x scatter_cli.py
+#wget https://raw.github.com/y-p/ScatterCI-CLI/master/scatter_cli.py
+#chmod u+x scatter_cli.py
pip install -I requests==2.1.0
echo "${TRAVIS_PYTHON_VERSION:0:4}"
@@ -12,7 +12,6 @@ fi
# ScatterCI accepts a build log, but currently does nothing with it.
echo '' > /tmp/build.log
-# These should be in the environment, but not in source control
# nor exposed in the build logs
#export SCATTERCI_ACCESS_KEY=
#export SCATTERCI_HOST=
@@ -22,6 +21,6 @@ ci/print_versions.py -j /tmp/env.json
# nose ran using "--with-xunit --xunit-file nosetest.xml" and generated /tmp/nosetest.xml
# Will timeout if server not available, and should not fail the build
-python scatter_cli.py --xunit-file /tmp/nosetests.xml --log-file /tmp/build.log --env-file /tmp/env.json --build-name "$JOB_NAME" --succeed
+#python scatter_cli.py --xunit-file /tmp/nosetests.xml --log-file /tmp/build.log --env-file /tmp/env.json --build-name "$JOB_NAME" --succeed
true # never fail because bad things happened here
diff --git a/ci/script.sh b/ci/script.sh
--- a/ci/script.sh
+++ b/ci/script.sh
@@ -16,13 +16,6 @@ fi
"$TRAVIS_BUILD_DIR"/ci/build_docs.sh 2>&1 > /tmp/doc.log &
# doc build log will be shown after tests
-# export the testing mode
-if [ -n "$NUMPY_BUILD" ]; then
-
- export PANDAS_TESTING_MODE="numpy_deprecate"
-
-fi
-
echo nosetests --exe -w /tmp -A "$NOSE_ARGS" pandas --with-xunit --xunit-file=/tmp/nosetests.xml
nosetests --exe -w /tmp -A "$NOSE_ARGS" pandas --with-xunit --xunit-file=/tmp/nosetests.xml
diff --git a/pandas/src/testing.pyx b/pandas/src/testing.pyx
--- a/pandas/src/testing.pyx
+++ b/pandas/src/testing.pyx
@@ -118,10 +118,7 @@ cpdef assert_almost_equal(a, b, bint check_less_precise=False):
# deal with differing dtypes
if check_less_precise:
- dtype_a = np.dtype(type(a))
- dtype_b = np.dtype(type(b))
- if dtype_a.kind == 'f' and dtype_b == 'f':
- decimal = 3
+ decimal = 3
if np.isinf(a):
assert np.isinf(b), "First object is inf, second isn't"
@@ -132,11 +129,11 @@ cpdef assert_almost_equal(a, b, bint check_less_precise=False):
if abs(fa) < 1e-5:
if not decimal_almost_equal(fa, fb, decimal):
assert False, (
- '(very low values) expected %.5f but got %.5f' % (b, a)
+ '(very low values) expected %.5f but got %.5f, with decimal %d' % (fb, fa, decimal)
)
else:
if not decimal_almost_equal(1, fb / fa, decimal):
- assert False, 'expected %.5f but got %.5f' % (b, a)
+ assert False, 'expected %.5f but got %.5f, with decimal %d' % (fb, fa, decimal)
else:
assert a == b, "%r != %r" % (a, b)
| Random bad asserts for stat ops when running tests.
```
======================================================================
FAIL: test_sum (pandas.tests.test_frame.TestDataFrame)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/travis/virtualenv/python2.6_with_system_site_packages/lib/python2.6/site-packages/pandas/tests/test_frame.py", line 10590, in test_sum
has_numeric_only=True, check_dtype=False, check_less_precise=True)
File "/home/travis/virtualenv/python2.6_with_system_site_packages/lib/python2.6/site-packages/pandas/tests/test_frame.py", line 10780, in _check_stat_op
check_less_precise=check_less_precise) # HACK: win32
File "/home/travis/virtualenv/python2.6_with_system_site_packages/lib/python2.6/site-packages/pandas/util/testing.py", line 513, in assert_series_equal
assert_almost_equal(left.values, right.values, check_less_precise)
File "testing.pyx", line 58, in pandas._testing.assert_almost_equal (pandas/src/testing.c:2554)
File "testing.pyx", line 93, in pandas._testing.assert_almost_equal (pandas/src/testing.c:1796)
File "testing.pyx", line 140, in pandas._testing.assert_almost_equal (pandas/src/testing.c:2387)
AssertionError: expected 0.00144 but got 0.00144
```
| @jreback
It looks like the issue is that nanops specify the `dtype_max` when calling the ops. https://github.com/pydata/pandas/commit/ff7bb2c1931f875878b349d125dbba30a502474f seems to be the culprit.
In the case that I found, the unit test is checking a `float32` frame vs that same frame up-casted to `float64`.
http://nbviewer.ipython.org/gist/anonymous/11349526
it could be but this error has been around a while actually
the issue is that `np.sum` is used as the comparison which should be passing `.sum(dtype='float32')` in this case
(the actual pandas routines are correct) after the fix above
pls submit a PR for this if you can (you can pass `lambda x: np.sum(x, dtype='float32')` instead of `np.sum`, I think). This is sort of a 'numpy' issue, really, as `np.sum` is doing the wrong thing
@dalejung I put up #6985, I *think* this should fix it....can you reproduce reliably?
@dalejung if you notice any more pls lmk
@dalejung not sure my fix actually fixed this....!
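For reference, a minimal sketch (helper reimplemented here, not the cython original) of the relative check the patched `assert_almost_equal` applies once `check_less_precise` forces `decimal = 3`: the ratio of the two values must sit within `0.5 * 10**-3` of 1.
```python
def decimal_almost_equal(desired, actual, decimal):
    # same tolerance rule as numpy.testing's almost-equal checks
    return abs(desired - actual) < 0.5 * 10.0 ** -decimal

fa, fb = 0.0014401, 0.0014400
assert decimal_almost_equal(1, fb / fa, 3)  # ratio ~0.99993 passes
```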
| 2014-04-28T11:59:07Z | [] | [] |
Traceback (most recent call last):
File "/home/travis/virtualenv/python2.6_with_system_site_packages/lib/python2.6/site-packages/pandas/tests/test_frame.py", line 10590, in test_sum
has_numeric_only=True, check_dtype=False, check_less_precise=True)
File "/home/travis/virtualenv/python2.6_with_system_site_packages/lib/python2.6/site-packages/pandas/tests/test_frame.py", line 10780, in _check_stat_op
check_less_precise=check_less_precise) # HACK: win32
File "/home/travis/virtualenv/python2.6_with_system_site_packages/lib/python2.6/site-packages/pandas/util/testing.py", line 513, in assert_series_equal
assert_almost_equal(left.values, right.values, check_less_precise)
File "testing.pyx", line 58, in pandas._testing.assert_almost_equal (pandas/src/testing.c:2554)
File "testing.pyx", line 93, in pandas._testing.assert_almost_equal (pandas/src/testing.c:1796)
File "testing.pyx", line 140, in pandas._testing.assert_almost_equal (pandas/src/testing.c:2387)
AssertionError: expected 0.00144 but got 0.00144
| 15,305 |
|||
pandas-dev/pandas | pandas-dev__pandas-7248 | 879335661b063e750c415748daf3de6fd423ec85 | python 3: test_to_string_truncate_indices UnicodeEncodeError: 'ascii' codec can't encode characters in position 177-178: ordinal not in range(128)
Was trying to build a fresh pkg of current master 0.14.0~rc1+git73-g8793356 and hit a hiccup with
```
======================================================================
ERROR: test_to_string_truncate_indices (pandas.tests.test_format.TestDataFrameFormatting)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/buildd/pandas-0.14.0~rc1+git73-g8793356/debian/tmp/usr/lib/python3/dist-packages/pandas/tests/test_format.py", line 414, in test_to_string_truncate_indices
print(df)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 177-178: ordinal not in range(128)
----------------------------------------------------------------------
Ran 6694 tests in 557.813s
FAILED (SKIP=482, errors=1)
```
please do not "print" in the tests... makes it harder to get through the output to the actual report on failures (and `assert("blah" in str(df))` and `assert("bleh" in repr(df))` would be more functional)
```
> python3 --version
Python 3.3.5
> locale
LANG=C
LANGUAGE=
LC_CTYPE="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_COLLATE="C"
LC_MONETARY="C"
LC_MESSAGES="C"
LC_PAPER="C"
LC_NAME="C"
LC_ADDRESS="C"
LC_TELEPHONE="C"
LC_MEASUREMENT="C"
LC_IDENTIFICATION="C"
LC_ALL=C
```
| yep...these shouldn't be print....PR coming up
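A hedged sketch of the suggested test style: assert on `repr()` output instead of `print()`-ing it, so an ASCII-only console (`LANG=C`) cannot raise `UnicodeEncodeError` mid-test.
```python
import pandas as pd

df = pd.DataFrame({'col': range(3)})
assert 'col' in repr(df)  # functional check, no console encoding involved
```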
| 2014-05-27T16:26:28Z | [] | [] |
Traceback (most recent call last):
File "/tmp/buildd/pandas-0.14.0~rc1+git73-g8793356/debian/tmp/usr/lib/python3/dist-packages/pandas/tests/test_format.py", line 414, in test_to_string_truncate_indices
print(df)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 177-178: ordinal not in range(128)
| 15,355 |
||||
pandas-dev/pandas | pandas-dev__pandas-7260 | 2801bdc42ef87400b6426841a720cd5a105e23a7 | ubuntu 13.04: test_grouped_box_return_type AssertionError: Lists differ: ['A', 'C', 'D', 'E', 'F', 'G',... != ['A', 'B', 'C', 'D', 'E', 'F',
```
======================================================================
FAIL: test_grouped_box_return_type (pandas.tests.test_graphics.TestDataFrameGroupByPlots)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/buildd/pandas-0.14.0~rc1+git79-g1fa5dd4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tests/test_graphics.py", line 2286, in test_grouped_box_return_type
self._check_box_dict(returned, t, klass, categories2)
File "/tmp/buildd/pandas-0.14.0~rc1+git79-g1fa5dd4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tests/test_graphics.py", line 2252, in _check_box_dict
self.assertEqual(sorted(returned.keys()), sorted(expected_keys))
AssertionError: Lists differ: ['A', 'C', 'D', 'E', 'F', 'G',... != ['A', 'B', 'C', 'D', 'E', 'F',...
First differing element 1:
C
B
Second list contains 1 additional elements.
First extra element 9:
J
- ['A', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J']
+ ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J']
? +++++
```
```
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.4.final.0
python-bits: 32
OS: Linux
OS-release: 3.2.0-4-amd64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: C
LANG: C
pandas: 0.14.0.dev
nose: 1.1.2
Cython: 0.17.4
numpy: 1.7.1
scipy: 0.11.0
statsmodels: 0.5.0
IPython: None
sphinx: 1.1.3
patsy: 0.2.1
scikits.timeseries: None
dateutil: 1.5
pytz: 2012c
bottleneck: None
tables: 2.4.0
numexpr: 2.0.1
matplotlib: 1.2.1
openpyxl: 1.7.0
xlrd: 0.6.1
xlwt: 0.7.4
xlsxwriter: None
lxml: None
bs4: 4.1.2
html5lib: 0.95-dev
bq: None
apiclient: None
rpy2: None
sqlalchemy: None
pymysql: None
psycopg2: None
```
| I see similar on travis: https://travis-ci.org/pydata/pandas/jobs/26187573 on #7255.
This is a random issue caused by the test added in #7225. I'll fix today...
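A hedged sketch of the deterministic setup that avoids this kind of random failure (names assumed): build the grouping column so every expected category is guaranteed to appear, instead of sampling it and occasionally dropping one (here, 'B').
```python
import numpy as np
import pandas as pd

categories = list('ABCDEFGHIJ')
df = pd.DataFrame({'value': np.random.randn(30),
                   'category': categories * 3})  # every category appears

assert sorted(df['category'].unique()) == categories  # always holds
```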
| 2014-05-28T13:30:58Z | [] | [] |
Traceback (most recent call last):
File "/tmp/buildd/pandas-0.14.0~rc1+git79-g1fa5dd4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tests/test_graphics.py", line 2286, in test_grouped_box_return_type
self._check_box_dict(returned, t, klass, categories2)
File "/tmp/buildd/pandas-0.14.0~rc1+git79-g1fa5dd4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tests/test_graphics.py", line 2252, in _check_box_dict
self.assertEqual(sorted(returned.keys()), sorted(expected_keys))
AssertionError: Lists differ: ['A', 'C', 'D', 'E', 'F', 'G',... != ['A', 'B', 'C', 'D', 'E', 'F',...
| 15,357 |
||||
pandas-dev/pandas | pandas-dev__pandas-7264 | 357406999eb52b94e909b15bfcee87de733bb97c | diff --git a/doc/source/10min.rst b/doc/source/10min.rst
--- a/doc/source/10min.rst
+++ b/doc/source/10min.rst
@@ -291,12 +291,10 @@ Using the :func:`~Series.isin` method for filtering:
.. ipython:: python
- df['E']=['one', 'one','two','three','four','three']
- df
- good_numbers=['two','four']
- df[df['E'].isin(good_numbers)]
-
- df.drop('E', inplace=True, axis=1)
+ df2 = df.copy()
+ df2['E']=['one', 'one','two','three','four','three']
+ df2
+ df2[df2['E'].isin(['two','four'])]
Setting
~~~~~~~
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -99,7 +99,7 @@ def _stata_elapsed_date_to_datetime(date, fmt):
#TODO: IIRC relative delta doesn't play well with np.datetime?
#TODO: When pandas supports more than datetime64[ns], this should be improved to use correct range, e.g. datetime[Y] for yearly
if np.isnan(date):
- return np.datetime64('nat')
+ return NaT
date = int(date)
stata_epoch = datetime.datetime(1960, 1, 1)
| test_class_ops: ubuntu 14.04 32bit AssertionError: 1401254445 != 1401254446
```
======================================================================
FAIL: test_class_ops (pandas.tseries.tests.test_timeseries.TestTimestamp)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/buildd/pandas-0.14.0~rc1+git79-g1fa5dd4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tseries/tests/test_timeseries.py", line 2711, in test_class_ops
compare(Timestamp.now('UTC'),datetime.now(pytz.timezone('UTC')))
File "/tmp/buildd/pandas-0.14.0~rc1+git79-g1fa5dd4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tseries/tests/test_timeseries.py", line 2708, in compare
self.assertEqual(int(Timestamp(x).value/1e9), int(Timestamp(y).value/1e9))
AssertionError: 1401254445 != 1401254446
----------------------------------------------------------------------
Ran 7152 tests in 664.150s
```
```
commit: None
python: 2.7.6.final.0
python-bits: 32
OS: Linux
OS-release: 3.2.0-4-amd64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: C
LANG: C
pandas: 0.14.0.dev
nose: 1.3.1
Cython: 0.20.1post0
numpy: 1.8.1
scipy: 0.13.3
statsmodels: 0.5.0
IPython: None
sphinx: 1.2.2
patsy: 0.2.1
scikits.timeseries: None
dateutil: 1.5
pytz: 2012c
bottleneck: None
tables: 3.1.0
numexpr: 2.2.2
matplotlib: 1.3.1
openpyxl: 1.7.0
xlrd: 0.9.2
xlwt: 0.7.5
xlsxwriter: None
lxml: None
bs4: 4.2.1
html5lib: 0.999
bq: None
apiclient: None
rpy2: None
sqlalchemy: None
pymysql: None
psycopg2: None
```
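A hedged sketch of a less flaky comparison: the two "now" calls can straddle a second boundary, so compare with a one-second tolerance instead of truncating both values to whole seconds.
```python
import pytz
from datetime import datetime
from pandas import Timestamp

a = Timestamp.now('UTC')
b = Timestamp(datetime.now(pytz.timezone('UTC')))
assert abs(b.value - a.value) < 1e9  # within one second (values are in ns)
```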
| what build is this under? I think I have `schroot` to `wheezy/sid` only?
in a clean 14.04 chroot, which was originally generated by cowbuilder --create
if you have a debian/ubuntu box I could probably give you a few lines to run to create a similar chroot for 14.04. let me know
alternative could be to follow
http://neuro.debian.net/blog/2012/2012-04-14_ndtools.html
and get a full range of chroots for your troubleshooting pleasure ;)
(few additional changes would probably be needed depending on what is
your base system and version of debootstrap)
| 2014-05-28T15:50:21Z | [] | [] |
Traceback (most recent call last):
File "/tmp/buildd/pandas-0.14.0~rc1+git79-g1fa5dd4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tseries/tests/test_timeseries.py", line 2711, in test_class_ops
compare(Timestamp.now('UTC'),datetime.now(pytz.timezone('UTC')))
File "/tmp/buildd/pandas-0.14.0~rc1+git79-g1fa5dd4/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tseries/tests/test_timeseries.py", line 2708, in compare
self.assertEqual(int(Timestamp(x).value/1e9), int(Timestamp(y).value/1e9))
AssertionError: 1401254445 != 1401254446
| 15,358 |
|||
pandas-dev/pandas | pandas-dev__pandas-7266 | 77c6f0e7aa606f1872ac4cc1e9f24a0404bd0ce0 | Big-endians: test_value_counts_unique_nunique AssertionError: False is not true
As here
http://nipy.bic.berkeley.edu/builders/pandas-py2.x-sid-sparc/builds/768/steps/shell_7/logs/stdio
on a sparc box (which I think some of you have access to) or on debian build boxes on powerpc and s390x:
https://buildd.debian.org/status/fetch.php?pkg=pandas&arch=powerpc&ver=0.14.0~rc1%2Bgit79-g1fa5dd4-1&stamp=1401244093
```
======================================================================
FAIL: test_value_counts_unique_nunique (pandas.tests.test_base.TestIndexOps)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/buildslave/nd-bb-slave-sparc-sid/pandas-py2_x-sid-sparc/build/venv/lib/python2.7/site-packages/pandas-0.14.0rc1_79_g1fa5dd4-py2.7-linux-sparc64.egg/pandas/tests/test_base.py", line 271, in test_value_counts_unique_nunique
self.assertTrue(result[0] is pd.NaT)
AssertionError: False is not true
```
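A hedged sketch of an endianness-robust check: `pd.isnull` recognizes `NaT` by value, whereas the `is pd.NaT` identity test depends on boxing that evidently misbehaves on big-endian builds.
```python
import numpy as np
import pandas as pd

v = np.array(['NaT', '2014-01-01'], dtype='M8[ns]')
assert pd.isnull(v[0])       # value-based, byte-order safe
assert not pd.isnull(v[1])
```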
| 2014-05-28T18:18:23Z | [] | [] |
Traceback (most recent call last):
File "/home/buildslave/nd-bb-slave-sparc-sid/pandas-py2_x-sid-sparc/build/venv/lib/python2.7/site-packages/pandas-0.14.0rc1_79_g1fa5dd4-py2.7-linux-sparc64.egg/pandas/tests/test_base.py", line 271, in test_value_counts_unique_nunique
self.assertTrue(result[0] is pd.NaT)
AssertionError: False is not true
| 15,359 |
|||||
pandas-dev/pandas | pandas-dev__pandas-7328 | 97ad7073fd9c125bcc66505d485ca8060f61c929 | diff --git a/doc/source/v0.14.1.txt b/doc/source/v0.14.1.txt
--- a/doc/source/v0.14.1.txt
+++ b/doc/source/v0.14.1.txt
@@ -87,3 +87,4 @@ Bug Fixes
(:issue:`7315`).
- Bug in inferred_freq results in None for eastern hemisphere timezones (:issue:`7310`)
- Bug in ``Easter`` returns incorrect date when offset is negative (:issue:`7195`)
+- Bug in broadcasting with ``.div``, integer dtypes and divide-by-zero (:issue:`7325`)
diff --git a/pandas/core/common.py b/pandas/core/common.py
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -1244,20 +1244,21 @@ def _fill_zeros(result, x, y, name, fill):
if is_integer_dtype(y):
- mask = y.ravel() == 0
- if mask.any():
+ if (y.ravel() == 0).any():
shape = result.shape
result = result.ravel().astype('float64')
+ # GH 7325, mask and nans must be broadcastable
signs = np.sign(result)
- nans = np.isnan(x.ravel())
- np.putmask(result, mask & ~nans, fill)
+ mask = ((y == 0) & ~np.isnan(x)).ravel()
+
+ np.putmask(result, mask, fill)
# if we have a fill of inf, then sign it
# correctly
# GH 6178
if np.isinf(fill):
- np.putmask(result,signs<0 & mask & ~nans,-fill)
+ np.putmask(result,signs<0 & mask, -fill)
result = result.reshape(shape)
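A minimal sketch (not the pandas internals verbatim) of why the mask must be built from the broadcast pair before ravelling: with `x` of shape (3, 2) and `y` of shape (3, 1), `(y == 0)` alone has 3 elements while `result.ravel()` has 6.
```python
import numpy as np

x = np.arange(6, dtype='float64').reshape(3, 2)
y = np.array([0.0, 2.0, 4.0]).reshape(3, 1)

with np.errstate(divide='ignore', invalid='ignore'):
    result = (x / y).ravel()              # shape (6,)

mask = ((y == 0) & ~np.isnan(x)).ravel()  # broadcasts to shape (6,)
np.putmask(result, mask, np.inf)          # lengths now match
```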
| BUG: division misbehaving
Attempting to reduce [this question](http://stackoverflow.com/questions/24024928/division-in-pandas-multiple-columns-by-another-column-of-the-same-dataframe) to a minimal test case produced:
```
>>> df = pd.DataFrame(np.arange(3*2).reshape((3,2)))
>>> df
0 1
0 0 1
1 2 3
2 4 5
>>> df.add(df[0], axis='index')
0 1
0 0 1
1 4 5
2 8 9
>>> df.sub(df[0], axis='index')
0 1
0 0 1
1 0 1
2 0 1
>>> df.mul(df[0], axis='index')
0 1
0 0 0
1 4 6
2 16 20
>>> df.div(df[0], axis='index')
Traceback (most recent call last):
File "<ipython-input-47-5b698f939cc6>", line 1, in <module>
df.div(df[0], axis='index')
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_5_g3b82634-py2.7-linux-x86_64.egg/pandas/core/ops.py", line 763, in f
return self._combine_series(other, na_op, fill_value, axis, level)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_5_g3b82634-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 2830, in _combine_series
return self._combine_match_index(other, func, level=level, fill_value=fill_value)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_5_g3b82634-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 2860, in _combine_match_index
return self._constructor(func(left.values.T, right.values).T,
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_5_g3b82634-py2.7-linux-x86_64.egg/pandas/core/ops.py", line 754, in na_op
result = com._fill_zeros(result, x, y, name, fill_zeros)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_5_g3b82634-py2.7-linux-x86_64.egg/pandas/core/common.py", line 1254, in _fill_zeros
np.putmask(result, mask & ~nans, fill)
ValueError: operands could not be broadcast together with shapes (3,) (6,)
```
For comparison, if we change the types:
```
>>> (df*1.0).div((df*1.0)[0], axis='index')
0 1
0 NaN inf
1 1 1.500000
2 1 1.250000
```
Version info:
```
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.6.final.0
python-bits: 64
OS: Linux
OS-release: 3.13.0-27-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_CA.UTF-8
pandas: 0.14.0-5-g3b82634
nose: 1.3.3
Cython: 0.20.1post0
numpy: 1.8.1
scipy: 0.14.0
statsmodels: 0.5.0
IPython: 1.2.1
sphinx: None
patsy: 0.2.1
scikits.timeseries: None
dateutil: 2.2
pytz: 2014.3
bottleneck: None
tables: None
numexpr: 2.4
matplotlib: 1.3.1
openpyxl: 2.0.3
xlrd: 0.9.3
xlwt: 0.7.5
xlsxwriter: 0.5.5
lxml: 3.3.3
bs4: 4.3.2
html5lib: 0.999
bq: None
apiclient: None
rpy2: None
sqlalchemy: None
pymysql: None
psycopg2: None
```
| This however works:
```
In [3]: df=pd.DataFrame(np.reshape(np.arange(1,7),(3,2)))
In [4]: df
Out[4]:
0 1
0 1 2
1 3 4
2 5 6
[3 rows x 2 columns]
In [5]: print(df.div(df[0], axis='index'))
0 1
0 1 2.000000
1 1 1.333333
2 1 1.200000
[3 rows x 2 columns]
```
What is the expected behavior when trying to divide by zero (in your `df.ix[0,0]`)?
see down a little ways here: http://pandas-docs.github.io/pandas-docs-travis/whatsnew.html#id22
`1/0 == np.inf`
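A hedged sketch of the post-fix behavior: the broadcast no longer raises, and nonzero-over-zero integer cells are filled with `inf`.
```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(6).reshape(3, 2))
out = df.div(df[0], axis='index')  # previously: ValueError on broadcast
assert np.isinf(out.iloc[0, 1])    # 1 / 0 -> inf
```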
| 2014-06-03T23:39:45Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-47-5b698f939cc6>", line 1, in <module>
df.div(df[0], axis='index')
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_5_g3b82634-py2.7-linux-x86_64.egg/pandas/core/ops.py", line 763, in f
return self._combine_series(other, na_op, fill_value, axis, level)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_5_g3b82634-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 2830, in _combine_series
return self._combine_match_index(other, func, level=level, fill_value=fill_value)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_5_g3b82634-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 2860, in _combine_match_index
return self._constructor(func(left.values.T, right.values).T,
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_5_g3b82634-py2.7-linux-x86_64.egg/pandas/core/ops.py", line 754, in na_op
result = com._fill_zeros(result, x, y, name, fill_zeros)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_5_g3b82634-py2.7-linux-x86_64.egg/pandas/core/common.py", line 1254, in _fill_zeros
np.putmask(result, mask & ~nans, fill)
ValueError: operands could not be broadcast together with shapes (3,) (6,)
| 15,372 |
|||
pandas-dev/pandas | pandas-dev__pandas-7343 | 19c29ec5329c945471aaeb8aa39ec2b006d6e4e3 | TST: comparison vs timezone issue for current pytz
Test just needs to use a localized version in the comparison
```
FAIL: test_with_tz (pandas.tseries.tests.test_timezones.TestTimeZoneSupportPytz)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.4.0/lib/python3.4/site-packages/pandas/tseries/tests/test_timezones.py", line 406, in test_with_tz
self.assertIs(central[0].tz, tz)
File "/home/travis/virtualenv/python3.4.0/lib/python3.4/site-packages/pandas/util/testing.py", line 96, in assertIs
assert a is b, "%s: %r is not %r" % (msg.format(a,b), a, b)
AssertionError: : <DstTzInfo 'US/Central' CST-1 day, 18:00:00 STD> is not <DstTzInfo 'US/Central' LMT-1 day, 18:09:00 STD>
----------------------------------------------------------------------
Ran 7104 tests in 558.494s
```
https://travis-ci.org/pydata/pandas/jobs/26755161
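A hedged sketch of the localized comparison the test needs: newer pytz returns the unlocalized LMT instance from `pytz.timezone(...)` until it is attached to a datetime, so an identity check against a localized Timestamp's tz fails; comparing zone names is robust.
```python
import pytz
import pandas as pd

tz = pytz.timezone('US/Central')
idx = pd.date_range('2000-01-01', periods=3, tz='US/Central')

assert idx[0].tz.zone == tz.zone  # robust across pytz versions
```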
| 2014-06-04T18:16:06Z | [] | [] |
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.4.0/lib/python3.4/site-packages/pandas/tseries/tests/test_timezones.py", line 406, in test_with_tz
self.assertIs(central[0].tz, tz)
File "/home/travis/virtualenv/python3.4.0/lib/python3.4/site-packages/pandas/util/testing.py", line 96, in assertIs
assert a is b, "%s: %r is not %r" % (msg.format(a,b), a, b)
AssertionError: : <DstTzInfo 'US/Central' CST-1 day, 18:00:00 STD> is not <DstTzInfo 'US/Central' LMT-1 day, 18:09:00 STD>
| 15,376 |
|||||
pandas-dev/pandas | pandas-dev__pandas-7370 | 255e82a12122c8813ddfa3c46621123494e189f9 | diff --git a/doc/source/v0.14.1.txt b/doc/source/v0.14.1.txt
--- a/doc/source/v0.14.1.txt
+++ b/doc/source/v0.14.1.txt
@@ -135,6 +135,9 @@ Enhancements
- All offsets ``apply``, ``rollforward`` and ``rollback`` can now handle ``np.datetime64``, previously results in ``ApplyTypeError`` (:issue:`7452`)
- ``Period`` and ``PeriodIndex`` can contain ``NaT`` in its values (:issue:`7485`)
+- Support pickling ``Series``, ``DataFrame`` and ``Panel`` objects with
+ non-unique labels along *item* axis (``index``, ``columns`` and ``items``
+ respectively) (:issue:`7370`).
.. _whatsnew_0141.performance:
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -1603,16 +1603,19 @@ class SparseBlock(Block):
def __init__(self, values, placement,
ndim=None, fastpath=False,):
+ # Placement must be converted to BlockPlacement via property setter
+ # before ndim logic, because placement may be a slice which doesn't
+ # have a length.
+ self.mgr_locs = placement
+
# kludgetastic
if ndim is None:
- if len(placement) != 1:
+ if len(self.mgr_locs) != 1:
ndim = 1
else:
ndim = 2
self.ndim = ndim
- self.mgr_locs = placement
-
if not isinstance(values, SparseArray):
raise TypeError("values must be SparseArray")
@@ -2050,26 +2053,44 @@ def __getstate__(self):
block_values = [b.values for b in self.blocks]
block_items = [self.items[b.mgr_locs.indexer] for b in self.blocks]
axes_array = [ax for ax in self.axes]
- return axes_array, block_values, block_items
- def __setstate__(self, state):
- # discard anything after 3rd, support beta pickling format for a little
- # while longer
- ax_arrays, bvalues, bitems = state[:3]
+ extra_state = {
+ '0.14.1': {
+ 'axes': axes_array,
+ 'blocks': [dict(values=b.values,
+ mgr_locs=b.mgr_locs.indexer)
+ for b in self.blocks]
+ }
+ }
- self.axes = [_ensure_index(ax) for ax in ax_arrays]
-
- blocks = []
- for values, items in zip(bvalues, bitems):
+ # First three elements of the state are to maintain forward
+ # compatibility with 0.13.1.
+ return axes_array, block_values, block_items, extra_state
+ def __setstate__(self, state):
+ def unpickle_block(values, mgr_locs):
# numpy < 1.7 pickle compat
if values.dtype == 'M8[us]':
values = values.astype('M8[ns]')
-
- blk = make_block(values,
- placement=self.axes[0].get_indexer(items))
- blocks.append(blk)
- self.blocks = tuple(blocks)
+ return make_block(values, placement=mgr_locs)
+
+ if (isinstance(state, tuple) and len(state) >= 4
+ and '0.14.1' in state[3]):
+ state = state[3]['0.14.1']
+ self.axes = [_ensure_index(ax) for ax in state['axes']]
+ self.blocks = tuple(
+ unpickle_block(b['values'], b['mgr_locs'])
+ for b in state['blocks'])
+ else:
+ # discard anything after 3rd, support beta pickling format for a
+ # little while longer
+ ax_arrays, bvalues, bitems = state[:3]
+
+ self.axes = [_ensure_index(ax) for ax in ax_arrays]
+ self.blocks = tuple(
+ unpickle_block(values,
+ self.axes[0].get_indexer(items))
+ for values, items in zip(bvalues, bitems))
self._post_setstate()
| BUG: v14.0 Error when unpickling DF with non-unique column multiindex
```
>>> d = pandas.Series({('1ab','2'): 3, ('1ab',3):4}, )
>>> d = pandas.concat([d,d])
>>> d = pandas.concat([d,d], axis=1)
>>> pickle.loads(pickle.dumps(d))
0 1
1ab 3 4 4
2 3 3
3 4 4
2 3 3
>>> pickle.loads(pickle.dumps(d.T))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.7/pickle.py", line 1382, in loads
return Unpickler(file).load()
File "/usr/lib64/python2.7/pickle.py", line 858, in load
dispatch[key](self)
File "/usr/lib64/python2.7/pickle.py", line 1217, in load_build
setstate(state)
File "venv/lib/python2.7/site-packages/pandas/core/internals.py", line 2063, in __setstate__
placement=self.axes[0].get_indexer(items))
File "venv/lib/python2.7/site-packages/pandas/core/index.py", line 3200, in get_indexer
raise Exception('Reindexing only valid with uniquely valued Index '
Exception: Reindexing only valid with uniquely valued Index objects
```
| cc @immerrr
The problem is that the pickle only contains block items which are not enough to tell which item must go where if they're non-unique. Luckily, there's no need to share items/ref_items anymore and thus blocks can be pickled/unpickled as usual objects. I hope I'll get some time later tonight to prepare a pull request with new pickle format.
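For illustration, a tiny sketch of that point (hypothetical snippet, not the new pickle format itself) -- labels can't be mapped back to positions once duplicated, while integer `mgr_locs` stay unambiguous:
``` python
import pandas as pd

items = pd.Index(['a', 'a', 'b'])

# Old-style state stored the labels each block owns; recovering their
# positions needs get_indexer, which refuses non-unique indexes:
try:
    items.get_indexer(['a'])
except Exception as e:
    print(e)  # Reindexing only valid with uniquely valued Index objects

# Storing the block's integer locations directly is unambiguous:
mgr_locs = [0, 1]
print(items[mgr_locs])  # Index(['a', 'a'], dtype='object')
```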
@immerrr sounds good
| 2014-06-06T08:13:30Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.7/pickle.py", line 1382, in loads
return Unpickler(file).load()
File "/usr/lib64/python2.7/pickle.py", line 858, in load
dispatch[key](self)
File "/usr/lib64/python2.7/pickle.py", line 1217, in load_build
setstate(state)
File "venv/lib/python2.7/site-packages/pandas/core/internals.py", line 2063, in __setstate__
placement=self.axes[0].get_indexer(items))
File "venv/lib/python2.7/site-packages/pandas/core/index.py", line 3200, in get_indexer
raise Exception('Reindexing only valid with uniquely valued Index '
Exception: Reindexing only valid with uniquely valued Index objects
| 15,381 |
|||
pandas-dev/pandas | pandas-dev__pandas-7392 | aef26532e16d4b518422210fd7049668b687ea18 | diff --git a/pandas/io/data.py b/pandas/io/data.py
--- a/pandas/io/data.py
+++ b/pandas/io/data.py
@@ -664,7 +664,9 @@ def _get_option_data(self, month, year, expiry, table_loc, name):
"element".format(url))
tables = root.xpath('.//table')
ntables = len(tables)
- if table_loc - 1 > ntables:
+ if ntables == 0:
+ raise RemoteDataError("No tables found at {0!r}".format(url))
+ elif table_loc - 1 > ntables:
raise IndexError("Table location {0} invalid, {1} tables"
" found".format(table_loc, ntables))
| TST: option retrieval failures
https://travis-ci.org/pydata/pandas/jobs/26755159
need a better skipping mechanism
```
======================================================================
ERROR: test_get_call_data_warning (pandas.io.tests.test_data.TestOptionsWarnings)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.2.5/lib/python3.2/site-packages/pandas/util/testing.py", line 1281, in wrapper
return t(*args, **kwargs)
File "/home/travis/virtualenv/python3.2.5/lib/python3.2/site-packages/pandas/io/tests/test_data.py", line 341, in test_get_call_data_warning
self.aapl.get_call_data(month=self.month, year=self.year)
File "/home/travis/virtualenv/python3.2.5/lib/python3.2/site-packages/pandas/io/data.py", line 709, in get_call_data
return self._get_option_data(month, year, expiry, 9, 'calls')
File "/home/travis/virtualenv/python3.2.5/lib/python3.2/site-packages/pandas/io/data.py", line 669, in _get_option_data
" found".format(table_loc, ntables))
IndexError: Table location 9 invalid, 0 tables found
----------------------------------------------------------------------
```
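One possible shape for that skipping mechanism (a sketch only -- `RemoteDataError` here is a stand-in for the error the patch above starts raising, and the decorator name is made up):
``` python
import nose

class RemoteDataError(IOError):
    """Stand-in for the error pandas.io.data raises on bad remote pages."""

def skip_on_remote_failure(t):
    # Hypothetical wrapper: remote flakiness becomes a skip, real bugs
    # still fail the test.
    def wrapper(*args, **kwargs):
        try:
            return t(*args, **kwargs)
        except RemoteDataError as e:
            raise nose.SkipTest("remote data unavailable: %s" % e)
    return wrapper
```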
| cc @dstephens99
| 2014-06-07T22:09:31Z | [] | [] |
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.2.5/lib/python3.2/site-packages/pandas/util/testing.py", line 1281, in wrapper
return t(*args, **kwargs)
File "/home/travis/virtualenv/python3.2.5/lib/python3.2/site-packages/pandas/io/tests/test_data.py", line 341, in test_get_call_data_warning
self.aapl.get_call_data(month=self.month, year=self.year)
File "/home/travis/virtualenv/python3.2.5/lib/python3.2/site-packages/pandas/io/data.py", line 709, in get_call_data
return self._get_option_data(month, year, expiry, 9, 'calls')
File "/home/travis/virtualenv/python3.2.5/lib/python3.2/site-packages/pandas/io/data.py", line 669, in _get_option_data
" found".format(table_loc, ntables))
IndexError: Table location 9 invalid, 0 tables found
| 15,387 |
|||
pandas-dev/pandas | pandas-dev__pandas-7456 | 35849777a469b5d7c2db4293ad073af557f68b98 | diff --git a/pandas/io/sql.py b/pandas/io/sql.py
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -567,7 +567,7 @@ def insert(self):
ins = self.insert_statement()
data_list = []
temp = self.insert_data()
- keys = temp.columns
+ keys = list(map(str, temp.columns))
for t in temp.itertuples():
data = dict((k, self.maybe_asscalar(v))
| test_integer_col_names: compatibility with wheezy sqlalchemy?
```
======================================================================
ERROR: test_integer_col_names (pandas.io.tests.test_sql.TestSQLApi)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/buildslave/nd-bb-slave-sparc-wheezy/pandas-py2_7-wheezy-sparc/build/venv/lib/python2.7/site-packages/pandas-0.14.0_45_g1754bb5-py2.7-linux-sparc64.egg/pandas/io/tests/test_sql.py", line 561, in test_integer_col_names
if_exists='replace')
File "/home/buildslave/nd-bb-slave-sparc-wheezy/pandas-py2_7-wheezy-sparc/build/venv/lib/python2.7/site-packages/pandas-0.14.0_45_g1754bb5-py2.7-linux-sparc64.egg/pandas/io/sql.py", line 440, in to_sql
index_label=index_label)
File "/home/buildslave/nd-bb-slave-sparc-wheezy/pandas-py2_7-wheezy-sparc/build/venv/lib/python2.7/site-packages/pandas-0.14.0_45_g1754bb5-py2.7-linux-sparc64.egg/pandas/io/sql.py", line 815, in to_sql
table.insert()
File "/home/buildslave/nd-bb-slave-sparc-wheezy/pandas-py2_7-wheezy-sparc/build/venv/lib/python2.7/site-packages/pandas-0.14.0_45_g1754bb5-py2.7-linux-sparc64.egg/pandas/io/sql.py", line 584, in insert
self.pd_sql.execute(ins, data_list)
File "/home/buildslave/nd-bb-slave-sparc-wheezy/pandas-py2_7-wheezy-sparc/build/venv/lib/python2.7/site-packages/pandas-0.14.0_45_g1754bb5-py2.7-linux-sparc64.egg/pandas/io/sql.py", line 783, in execute
return self.engine.execute(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2447, in execute
return connection.execute(statement, *multiparams, **params)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1449, in execute
params)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1576, in _execute_clauseelement
inline=len(distilled_params) > 1)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py", line 1778, in compile
return self._compiler(dialect, bind=bind, **kw)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py", line 1784, in _compiler
return dialect.statement_compiler(dialect, self, **kw)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", line 277, in __init__
engine.Compiled.__init__(self, dialect, statement, **kwargs)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 705, in __init__
self.string = self.process(self.statement)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 724, in process
return obj._compiler_dispatch(self, **kwargs)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py", line 72, in _compiler_dispatch
return getter(visitor)(self, **kw)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", line 1047, in visit_insert
colparams = self._get_colparams(insert_stmt)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", line 1243, in _get_colparams
for key in self.column_keys
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", line 1245, in <genexpr>
key not in stmt.parameters)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py", line 1396, in _column_as_key
return element.key
AttributeError: 'long' object has no attribute 'key'
```
http://nipy.bic.berkeley.edu/builders/pandas-py2.7-wheezy-sparc/builds/193/steps/shell_7/logs/stdio
```
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.3.final.0
python-bits: 32
OS: Linux
OS-release: 2.6.32-5-sparc64-smp
machine: sparc64
processor:
byteorder: big
LC_ALL: None
LANG: None
pandas: 0.14.0-45-g1754bb5
nose: 1.1.2
Cython: 0.19
numpy: 1.6.2
scipy: 0.10.1
statsmodels: 0.5.0
IPython: 0.13.1
sphinx: 1.1.3
patsy: 0.2.1
scikits.timeseries: None
dateutil: 1.5
pytz: 2012c
bottleneck: None
tables: 2.3.1
numexpr: 2.0.1
matplotlib: 1.1.1rc2
openpyxl: 1.7.0
xlrd: 0.6.1
xlwt: 0.7.4
xlsxwriter: None
lxml: 2.3.2
bs4: 4.1.0
html5lib: 0.95-dev
bq: None
apiclient: None
rpy2: 2.2.6
sqlalchemy: 0.7.8
pymysql: None
psycopg2: None
```
| Related to #6340 and #7022. We saw the same error there.
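The one-line fix above stringifies the column names before they become bind-parameter keys. A quick sketch of the situation (sqlite in-memory, assuming SQLAlchemy is available):
``` python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine('sqlite:///:memory:')
frame = pd.DataFrame([[1, 2], [3, 4]], columns=[0, 1])  # integer column names

# Old SQLAlchemy expects string keys when compiling the INSERT, hence
# keys = list(map(str, temp.columns)) in the patch:
print(list(map(str, frame.columns)))  # ['0', '1']

frame.to_sql('test_frame_int_col_names', engine, if_exists='replace')
```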
| 2014-06-14T10:42:52Z | [] | [] |
Traceback (most recent call last):
File "/home/buildslave/nd-bb-slave-sparc-wheezy/pandas-py2_7-wheezy-sparc/build/venv/lib/python2.7/site-packages/pandas-0.14.0_45_g1754bb5-py2.7-linux-sparc64.egg/pandas/io/tests/test_sql.py", line 561, in test_integer_col_names
if_exists='replace')
File "/home/buildslave/nd-bb-slave-sparc-wheezy/pandas-py2_7-wheezy-sparc/build/venv/lib/python2.7/site-packages/pandas-0.14.0_45_g1754bb5-py2.7-linux-sparc64.egg/pandas/io/sql.py", line 440, in to_sql
index_label=index_label)
File "/home/buildslave/nd-bb-slave-sparc-wheezy/pandas-py2_7-wheezy-sparc/build/venv/lib/python2.7/site-packages/pandas-0.14.0_45_g1754bb5-py2.7-linux-sparc64.egg/pandas/io/sql.py", line 815, in to_sql
table.insert()
File "/home/buildslave/nd-bb-slave-sparc-wheezy/pandas-py2_7-wheezy-sparc/build/venv/lib/python2.7/site-packages/pandas-0.14.0_45_g1754bb5-py2.7-linux-sparc64.egg/pandas/io/sql.py", line 584, in insert
self.pd_sql.execute(ins, data_list)
File "/home/buildslave/nd-bb-slave-sparc-wheezy/pandas-py2_7-wheezy-sparc/build/venv/lib/python2.7/site-packages/pandas-0.14.0_45_g1754bb5-py2.7-linux-sparc64.egg/pandas/io/sql.py", line 783, in execute
return self.engine.execute(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2447, in execute
return connection.execute(statement, *multiparams, **params)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1449, in execute
params)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1576, in _execute_clauseelement
inline=len(distilled_params) > 1)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py", line 1778, in compile
return self._compiler(dialect, bind=bind, **kw)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py", line 1784, in _compiler
return dialect.statement_compiler(dialect, self, **kw)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", line 277, in __init__
engine.Compiled.__init__(self, dialect, statement, **kwargs)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 705, in __init__
self.string = self.process(self.statement)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 724, in process
return obj._compiler_dispatch(self, **kwargs)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py", line 72, in _compiler_dispatch
return getter(visitor)(self, **kw)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", line 1047, in visit_insert
colparams = self._get_colparams(insert_stmt)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", line 1243, in _get_colparams
for key in self.column_keys
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", line 1245, in <genexpr>
key not in stmt.parameters)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py", line 1396, in _column_as_key
return element.key
AttributeError: 'long' object has no attribute 'key'
| 15,403 |
|||
pandas-dev/pandas | pandas-dev__pandas-7479 | 4db22f401db333b38a8bda810b8dd5e0e29f660f | diff --git a/pandas/core/common.py b/pandas/core/common.py
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -1538,6 +1538,14 @@ def _interpolate_scipy_wrapper(x, y, new_x, method, fill_value=None,
terp = interpolate.UnivariateSpline(x, y, k=order)
new_y = terp(new_x)
else:
+ # GH 7295: need to be able to write for some reason
+ # in some circumstances: check all three
+ if not x.flags.writeable:
+ x = x.copy()
+ if not y.flags.writeable:
+ y = y.copy()
+ if not new_x.flags.writeable:
+ new_x = new_x.copy()
method = alt_methods[method]
new_y = method(x, y, new_x)
return new_y
| BUG: test_interp_regression failure
```
test_interp_regression (__main__.TestSeries) ... > /home/dsm/sys/pandas/pandas/tests/stringsource(327)View.MemoryView.memoryview.__cinit__ (scipy/interpolate/_ppoly.c:19922)()
(Pdb) cont
ERROR
======================================================================
ERROR: test_interp_regression (__main__.TestSeries)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_generic.py", line 501, in test_interp_regression
interp_s = ser.reindex(new_index).interpolate(method='pchip')
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_12_g150cb3b-py2.7-linux-i686.egg/pandas/core/generic.py", line 2582, in interpolate
**kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_12_g150cb3b-py2.7-linux-i686.egg/pandas/core/internals.py", line 2197, in interpolate
return self.apply('interpolate', **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_12_g150cb3b-py2.7-linux-i686.egg/pandas/core/internals.py", line 2164, in apply
applied = getattr(b, f)(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_12_g150cb3b-py2.7-linux-i686.egg/pandas/core/internals.py", line 667, in interpolate
**kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_12_g150cb3b-py2.7-linux-i686.egg/pandas/core/internals.py", line 733, in _interpolate
interp_values = np.apply_along_axis(func, axis, data)
File "/usr/local/lib/python2.7/dist-packages/numpy/lib/shape_base.py", line 81, in apply_along_axis
res = func1d(arr[tuple(i.tolist())],*args)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_12_g150cb3b-py2.7-linux-i686.egg/pandas/core/internals.py", line 730, in func
bounds_error=False, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_12_g150cb3b-py2.7-linux-i686.egg/pandas/core/common.py", line 1489, in interpolate_1d
bounds_error=bounds_error, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_12_g150cb3b-py2.7-linux-i686.egg/pandas/core/common.py", line 1541, in _interpolate_scipy_wrapper
new_y = method(x, y, new_x)
File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/_monotone.py", line 221, in pchip_interpolate
return P(x)
File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/_monotone.py", line 98, in __call__
out = self._bpoly(x, der, extrapolate)
File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 689, in __call__
self._evaluate(x, nu, extrapolate, out)
File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 1087, in _evaluate
self.x, x, nu, bool(extrapolate), out, self.c.dtype)
File "_ppoly.pyx", line 846, in scipy.interpolate._ppoly.evaluate_bernstein (scipy/interpolate/_ppoly.c:15014)
File "stringsource", line 622, in View.MemoryView.memoryview_cwrapper (scipy/interpolate/_ppoly.c:23370)
File "stringsource", line 327, in View.MemoryView.memoryview.__cinit__ (scipy/interpolate/_ppoly.c:19922)
ValueError: buffer source array is read-only
----------------------------------------------------------------------
Ran 63 tests in 3.981s
FAILED (SKIP=1, errors=1)
```
Version info:
```
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.5.final.0
python-bits: 32
OS: Linux
OS-release: 3.11.0-20-generic
machine: i686
processor: i686
byteorder: little
LC_ALL: None
LANG: en_CA.UTF-8
pandas: 0.14.0-12-g150cb3b
nose: 1.3.1
Cython: 0.20.1
numpy: 1.9.0.dev-ef7901d
scipy: 0.15.0.dev-93656a8
statsmodels: 0.6.0.dev-985037f
IPython: 2.0.0-dev
sphinx: 1.2.2
patsy: 0.2.1
scikits.timeseries: None
dateutil: 2.2
pytz: 2014.1
bottleneck: 0.8.0
tables: None
numexpr: 2.3.1
matplotlib: 1.4.x
openpyxl: 2.0.2
xlrd: 0.9.3
xlwt: 0.7.5
xlsxwriter: 0.5.5
lxml: 3.3.3
bs4: 4.3.2
html5lib: 1.0b3
bq: None
apiclient: 1.2
rpy2: None
sqlalchemy: 0.9.3
pymysql: None
psycopg2: None
```
| excellent 32-bit scipy testing!
can u try putting a copy before it's passed into interpolate (insert right before the scipy call)
maybe some kind of view corruption
Adding
```
x = x.copy()
y = y.copy()
new_x = new_x.copy()
```
to common.py/interpolate_1d on the alt_methods path seems to prevent the problem. Copying only one of the arrays doesn't seem to.
After some experiments, `y` doesn't seem to need to be copied, only `x` and `new_x`. That makes a little sense, anyway, as right before the copies we have
```
x.flags:
C_CONTIGUOUS : True
F_CONTIGUOUS : True
OWNDATA : False
WRITEABLE : False
ALIGNED : True
UPDATEIFCOPY : False
y.flags:
C_CONTIGUOUS : True
F_CONTIGUOUS : True
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
new_x.flags:
C_CONTIGUOUS : True
F_CONTIGUOUS : True
OWNDATA : False
WRITEABLE : False
ALIGNED : True
UPDATEIFCOPY : False
```
but I admit I don't immediately see why `x` should need to be copied.
cc @TomAugspurger
probably the interpolation routines do some in-place work on these arrays. prob a bug on the scipy side in that they should either explicitly copy if they are messing with it, or provide documentation of the lack of guarantees. care to file an upstream bug as well?
PR to take care of this (I guess we need to explicitly copy) if it's a view.
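In essence, a distilled sketch of the copy-if-read-only guard that ended up in `pandas.core.common._interpolate_scipy_wrapper`:
``` python
import numpy as np

def _writable(arr):
    # Copy only when the buffer is read-only, e.g. a locked view handed
    # to scipy's pchip path (which builds a typed memoryview from it).
    return arr.copy() if not arr.flags.writeable else arr

x = np.arange(5.0)
x.setflags(write=False)
print(_writable(x).flags.writeable)  # True
```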
@dsm054 can you do a PR for this?
Sure, will do.
| 2014-06-17T01:32:57Z | [] | [] |
Traceback (most recent call last):
File "test_generic.py", line 501, in test_interp_regression
interp_s = ser.reindex(new_index).interpolate(method='pchip')
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_12_g150cb3b-py2.7-linux-i686.egg/pandas/core/generic.py", line 2582, in interpolate
**kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_12_g150cb3b-py2.7-linux-i686.egg/pandas/core/internals.py", line 2197, in interpolate
return self.apply('interpolate', **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_12_g150cb3b-py2.7-linux-i686.egg/pandas/core/internals.py", line 2164, in apply
applied = getattr(b, f)(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_12_g150cb3b-py2.7-linux-i686.egg/pandas/core/internals.py", line 667, in interpolate
**kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_12_g150cb3b-py2.7-linux-i686.egg/pandas/core/internals.py", line 733, in _interpolate
interp_values = np.apply_along_axis(func, axis, data)
File "/usr/local/lib/python2.7/dist-packages/numpy/lib/shape_base.py", line 81, in apply_along_axis
res = func1d(arr[tuple(i.tolist())],*args)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_12_g150cb3b-py2.7-linux-i686.egg/pandas/core/internals.py", line 730, in func
bounds_error=False, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_12_g150cb3b-py2.7-linux-i686.egg/pandas/core/common.py", line 1489, in interpolate_1d
bounds_error=bounds_error, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_12_g150cb3b-py2.7-linux-i686.egg/pandas/core/common.py", line 1541, in _interpolate_scipy_wrapper
new_y = method(x, y, new_x)
File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/_monotone.py", line 221, in pchip_interpolate
return P(x)
File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/_monotone.py", line 98, in __call__
out = self._bpoly(x, der, extrapolate)
File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 689, in __call__
self._evaluate(x, nu, extrapolate, out)
File "/usr/local/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 1087, in _evaluate
self.x, x, nu, bool(extrapolate), out, self.c.dtype)
File "_ppoly.pyx", line 846, in scipy.interpolate._ppoly.evaluate_bernstein (scipy/interpolate/_ppoly.c:15014)
File "stringsource", line 622, in View.MemoryView.memoryview_cwrapper (scipy/interpolate/_ppoly.c:23370)
File "stringsource", line 327, in View.MemoryView.memoryview.__cinit__ (scipy/interpolate/_ppoly.c:19922)
ValueError: buffer source array is read-only
| 15,409 |
|||
pandas-dev/pandas | pandas-dev__pandas-7572 | 87e7e27d96a921859c351cdeff609eb45e9b5d68 | diff --git a/doc/source/v0.14.1.txt b/doc/source/v0.14.1.txt
--- a/doc/source/v0.14.1.txt
+++ b/doc/source/v0.14.1.txt
@@ -271,6 +271,8 @@ Bug Fixes
- Bug in non-monotonic ``Index.union`` may preserve ``name`` incorrectly (:issue:`7458`)
- Bug in ``DatetimeIndex.intersection`` doesn't preserve timezone (:issue:`4690`)
+- Bug in ``rolling_var`` where a window larger than the array would raise an error(:issue:`7297`)
+
- Bug with last plotted timeseries dictating ``xlim`` (:issue:`2960`)
- Bug with ``secondary_y`` axis not being considered for timeseries ``xlim`` (:issue:`3490`)
diff --git a/pandas/algos.pyx b/pandas/algos.pyx
--- a/pandas/algos.pyx
+++ b/pandas/algos.pyx
@@ -1173,6 +1173,10 @@ def roll_var(ndarray[double_t] input, int win, int minp, int ddof=1):
minp = _check_minp(win, minp, N)
+ # Check for windows larger than array, addresses #7297
+ win = min(win, N)
+
+ # Over the first window, observations can only be added, never removed
for i from 0 <= i < win:
val = input[i]
@@ -1196,23 +1200,27 @@ def roll_var(ndarray[double_t] input, int win, int minp, int ddof=1):
output[i] = val
+ # After the first window, observations can both be added and removed
for i from win <= i < N:
val = input[i]
prev = input[i - win]
if val == val:
if prev == prev:
+ # Adding one observation and removing another one
delta = val - prev
prev -= mean_x
mean_x += delta / nobs
val -= mean_x
ssqdm_x += (val + prev) * delta
else:
+ # Adding one observation and not removing any
nobs += 1
delta = (val - mean_x)
mean_x += delta / nobs
ssqdm_x += delta * (val - mean_x)
elif prev == prev:
+ # Adding no new observation, but removing one
nobs -= 1
if nobs:
delta = (prev - mean_x)
@@ -1221,6 +1229,7 @@ def roll_var(ndarray[double_t] input, int win, int minp, int ddof=1):
else:
mean_x = 0
ssqdm_x = 0
+ # Variance is unchanged if no observation is added or removed
if nobs >= minp:
#pathological case
| Change in behavior for rolling_var when win > len(arr) for 0.14: now raises error
In 0.13 I could pass a window length greater than the length of the `Series` passed to `rolling_var` (or, of course, `rolling_std`). In 0.14 that raises an error. Behavior is unchanged from 0.13 for other rolling functions:
``` python
data = """
x
0.1
0.5
0.3
0.2
0.7
"""
df = pd.read_csv(StringIO(data),header=True)
>>> pd.rolling_mean(df['x'],window=6,min_periods=2)
0 NaN
1 0.300
2 0.300
3 0.275
4 0.360
dtype: float64
>>> pd.rolling_skew(df['x'],window=6,min_periods=2)
0 NaN
1 NaN
2 3.903128e-15
3 7.528372e-01
4 6.013638e-01
dtype: float64
>>> pd.rolling_skew(df['x'],window=6,min_periods=6)
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
```
Those work, but not `rolling_var`:
``` python
>>> pd.rolling_var(df['x'],window=6,min_periods=2)
Traceback (most recent call last):
File "./foo.py", line 187, in <module>
print pd.rolling_var(df['x'],window=6,min_periods=2)
File "/usr/lib64/python2.7/site-packages/pandas/stats/moments.py", line 594, in f
center=center, how=how, **kwargs)
File "/usr/lib64/python2.7/site-packages/pandas/stats/moments.py", line 346, in _rolling_moment
result = calc(values)
File "/usr/lib64/python2.7/site-packages/pandas/stats/moments.py", line 340, in <lambda>
**kwds)
File "/usr/lib64/python2.7/site-packages/pandas/stats/moments.py", line 592, in call_cython
return func(arg, window, minp, **kwds)
File "algos.pyx", line 1177, in pandas.algos.roll_var (pandas/algos.c:28449)
IndexError: Out of bounds on buffer access (axis 0)
```
If this is the new desired default behavior for the rolling functions, I can work around it. I do like the behavior of `rolling_skew` and `rolling_mean` better. It was nice default behavior for me when I was doing rolling standard deviations for reasonably large financial data panels.
It looks to me like the issue is caused by the fact that the 0.14 algo for rolling variance is implemented such that the initial loop (`roll_var` (algos.pyx)) is the following:
``` python
for i from 0 <= i < win:
```
So it loops to `win` even when `win > N`.
It looks to me like the other rolling functions implement their algos in such a way that the first loop counts over the following:
``` python
for i from 0 <= i < minp - 1:
```
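For reference, a pure-Python sketch of the clamp the fix applies (`win = min(win, N)`); illustrative only, not the production Cython loop:
``` python
import numpy as np

def naive_rolling_var(arr, win, minp):
    n = len(arr)
    win = min(win, n)  # the fix: never index past the end of the array
    out = np.full(n, np.nan)
    for i in range(n):
        window = arr[max(0, i - win + 1):i + 1]
        vals = window[~np.isnan(window)]
        if len(vals) >= max(minp, 2):  # ddof=1 needs two observations
            out[i] = vals.var(ddof=1)
    return out

print(naive_rolling_var(np.array([0.1, 0.5, 0.3, 0.2, 0.7]), win=6, minp=2))
```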
``` python
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.5.final.0
python-bits: 64
OS: Linux
OS-release: 3.13.10-200.fc20.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
pandas: 0.14.0
nose: 1.3.1
Cython: 0.20.1
numpy: 1.8.1
scipy: 0.13.3
statsmodels: 0.6.0.dev-b52bc09
IPython: 2.0.0
sphinx: 1.2.2
patsy: 0.2.1
scikits.timeseries: None
dateutil: 2.2
pytz: 2014.3
bottleneck: 0.8.0
tables: None
numexpr: 2.4
matplotlib: 1.3.1
openpyxl: 1.8.5
xlrd: 0.9.3
xlwt: 0.7.5
xlsxwriter: 0.5.3
lxml: 3.3.5
bs4: 4.3.2
html5lib: 0.999
bq: None
apiclient: None
rpy2: None
sqlalchemy: 0.9.4
pymysql: None
psycopg2: None
```
Karl D.
| this was changed in #6817 to provide numerical stability in rolling_var
cc @jamiefrio
prob just don't have all of the test cases
I don't think this should have changed, nor is it consistent
want to put together some tests for ranges of window lengths (0, less than len of array, equal to, greater than len of array) - so that it systematically tests these (for all of the rolling functions)
?
fix should be easy
@jreback, sure I should be able to put together some tests.
@kdiether PR for this?
@jreback Sorry, I've been particularly busy working on a paper. I should be able to get to it soon.
@jreback,
So looking at the tests in `test_moments.py`, it seems to me I could capture these 'out of bounds' window lengths with something like the following:
``` python
def _check_out_of_bounds(self, func):
arr = np.repeat(np.nan,5)
result = func(arr,6,min_periods=4)
self.assertTrue(isnull(result).all())
result = func(arr,6,min_periods=6)
self.assertTrue(isnull(result).all())
def test_rolling_sum_out_of_bounds(self):
self._check_out_of_bounds(mom.rolling_sum)
def test_rolling_mean_out_of_bounds(self):
self._check_out_of_bounds(mom.rolling_mean)
def test_rolling_var_out_of_bounds(self):
self._check_out_of_bounds(mom.rolling_var)
```
Would you be ok with a structure like that?
sure also check min_ periods of 8 and 0 just for kicks
Got it. So something like the following:
``` python
def _check_out_of_bounds(self, func):
arr = np.repeat(np.nan,5)
result = func(arr,6,min_periods=0)
self.assertTrue(isnull(result).all())
result = func(arr,6,min_periods=4)
self.assertTrue(isnull(result).all())
result = func(arr,6,min_periods=6)
self.assertTrue(isnull(result).all())
self.assertRaises(ValueError,func,arr,6,min_periods=8)
def test_rolling_sum_out_of_bounds(self):
self._check_out_of_bounds(mom.rolling_sum)
def test_rolling_mean_out_of_bounds(self):
self._check_out_of_bounds(mom.rolling_mean)
def test_rolling_var_out_of_bounds(self):
self._check_out_of_bounds(mom.rolling_var)
```
In my pull request, do you want me to include tests for all the rolling functions or should I exclude rolling variance/stdev from the test for now?
this is essentially a smoke test so it can test everything
It looks to me like the default behavior for `rolling_count` is designed to be different from the other rolling functions, because the NaNs from the `rolling_sum` call within `rolling_count` are converted to zero counts (which makes sense for count ... at least to me).
``` python
result[np.isnan(result)] = 0
```
Should I exclude `rolling_count` from these smoke tests or carve out a special test for it?
@kdiether ideally, you would not use `nan` as your value (maybe use 1); of course you then need different results for the different cases. so prob several cases here
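Something like this, then (a sketch with constant real values, assuming the `pandas.stats.moments` API of this era):
``` python
import numpy as np
import pandas.stats.moments as mom

arr = np.ones(5)  # real values, per the comment above, not all-NaN
for minp in (0, 2, 4, 6):
    result = mom.rolling_var(arr, 6, min_periods=minp)
    assert len(result) == len(arr)

# min_periods larger than the window should still raise
try:
    mom.rolling_var(arr, 6, min_periods=8)
except ValueError:
    pass
```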
I missed this thread when it was originally raised, sorry about that. I have added #7572, that fixes the issue. Some comment on why I overlooked this can be found there.
I have added no specific test for this, hoping that @kdiether can finish what he has been working on. Let me know if you don't see yourself finishing it up any time soon, and I'll put something together in that other PR.
@kdiether did you do a pull-request for this?
I didn't yet. Sorry, I'm really hammered by a project.
do you have a branch that is pushed (even if not-working/incomplete)?
I don't have a pushed branch. The only thing I got to was that little code snippet above.
If you are OK with it I'll grab your code and throw it into #7572.
Yes, please do.
| 2014-06-26T04:10:41Z | [] | [] |
Traceback (most recent call last):
File "./foo.py", line 187, in <module>
print pd.rolling_var(df['x'],window=6,min_periods=2)
File "/usr/lib64/python2.7/site-packages/pandas/stats/moments.py", line 594, in f
center=center, how=how, **kwargs)
File "/usr/lib64/python2.7/site-packages/pandas/stats/moments.py", line 346, in _rolling_moment
result = calc(values)
File "/usr/lib64/python2.7/site-packages/pandas/stats/moments.py", line 340, in <lambda>
**kwds)
File "/usr/lib64/python2.7/site-packages/pandas/stats/moments.py", line 592, in call_cython
return func(arg, window, minp, **kwds)
File "algos.pyx", line 1177, in pandas.algos.roll_var (pandas/algos.c:28449)
IndexError: Out of bounds on buffer access (axis 0)
| 15,426 |
|||
pandas-dev/pandas | pandas-dev__pandas-7665 | 03bbed5954aaed70ae2dc9ba2bac69b72fe2bb16 | diff --git a/pandas/io/data.py b/pandas/io/data.py
--- a/pandas/io/data.py
+++ b/pandas/io/data.py
@@ -661,31 +661,35 @@ def get_options_data(self, month=None, year=None, expiry=None):
_OPTIONS_BASE_URL = 'http://finance.yahoo.com/q/op?s={sym}'
- def _get_option_tables(self, month, year, expiry):
+ def _get_option_tables(self, expiry):
+ root = self._get_option_page_from_yahoo(expiry)
+ tables = self._parse_option_page_from_yahoo(root)
+ m1 = _two_char_month(expiry.month)
+ table_name = '_tables' + m1 + str(expiry.year)[-2:]
+ setattr(self, table_name, tables)
+ return tables
- year, month, expiry = self._try_parse_dates(year, month, expiry)
+ def _get_option_page_from_yahoo(self, expiry):
url = self._OPTIONS_BASE_URL.format(sym=self.symbol)
- if month and year: # try to get specified month from yahoo finance
- m1 = _two_char_month(month)
+ m1 = _two_char_month(expiry.month)
- # if this month use other url
- if month == CUR_MONTH and year == CUR_YEAR:
- url += '+Options'
- else:
- url += '&m={year}-{m1}'.format(year=year, m1=m1)
- else: # Default to current month
+ # if this month use other url
+ if expiry.month == CUR_MONTH and expiry.year == CUR_YEAR:
url += '+Options'
+ else:
+ url += '&m={year}-{m1}'.format(year=expiry.year, m1=m1)
root = self._parse_url(url)
+ return root
+
+ def _parse_option_page_from_yahoo(self, root):
+
tables = root.xpath('.//table')
ntables = len(tables)
if ntables == 0:
- raise RemoteDataError("No tables found at {0!r}".format(url))
-
- table_name = '_tables' + m1 + str(year)[-2:]
- setattr(self, table_name, tables)
+ raise RemoteDataError("No tables found")
try:
self.underlying_price, self.quote_time = self._get_underlying_price(root)
@@ -723,7 +727,7 @@ def _get_option_data(self, month, year, expiry, name):
try:
tables = getattr(self, table_name)
except AttributeError:
- tables = self._get_option_tables(month, year, expiry)
+ tables = self._get_option_tables(expiry)
ntables = len(tables)
table_loc = self._TABLE_LOC[name]
@@ -903,13 +907,14 @@ def get_near_stock_price(self, above_below=2, call=True, put=False,
meth_name = 'get_{0}_data'.format(nam[:-1])
df = getattr(self, meth_name)(expiry=expiry)
- start_index = np.where(df.index.get_level_values('Strike')
+ if self.underlying_price:
+ start_index = np.where(df.index.get_level_values('Strike')
> self.underlying_price)[0][0]
- get_range = slice(start_index - above_below,
+ get_range = slice(start_index - above_below,
start_index + above_below + 1)
- chop = df[get_range].dropna(how='all')
- data[nam] = chop
+ chop = df[get_range].dropna(how='all')
+ data[nam] = chop
return concat([data[nam] for nam in to_ret]).sortlevel()
@@ -948,6 +953,8 @@ def _try_parse_dates(year, month, expiry):
year = CUR_YEAR
month = CUR_MONTH
expiry = dt.date(year, month, 1)
+ else:
+ expiry = dt.date(year, month, 1)
return year, month, expiry
@@ -1127,7 +1134,11 @@ def _get_expiry_months(self):
url = 'http://finance.yahoo.com/q/op?s={sym}'.format(sym=self.symbol)
root = self._parse_url(url)
- links = root.xpath('.//*[@id="yfncsumtab"]')[0].xpath('.//a')
+ try:
+ links = root.xpath('.//*[@id="yfncsumtab"]')[0].xpath('.//a')
+ except IndexError:
+ return RemoteDataError('Expiry months not available')
+
month_gen = (element.attrib['href'].split('=')[-1]
for element in links
if '/q/op?s=' in element.attrib['href']
| TST: yahoo retrieve tests failure
cc @dstephens99
prob just need to protect these a bit
here as well: https://travis-ci.org/jreback/pandas/jobs/28958102
```
======================================================================
ERROR: test_get_all_data (pandas.io.tests.test_data.TestYahooOptions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "c:\Users\Jeff Reback\Documents\GitHub\pandas\build\lib.win32-2.7\pandas\util\testing.py", line 1337, in wrapper
return t(*args, **kwargs)
File "c:\Users\Jeff Reback\Documents\GitHub\pandas\build\lib.win32-2.7\pandas\io\tests\test_data.py", line 307, in test_get_all_data
data = self.aapl.get_all_data(put=True)
File "c:\Users\Jeff Reback\Documents\GitHub\pandas\build\lib.win32-2.7\pandas\io\data.py", line 1095, in get_all_data
months = self._get_expiry_months()
File "c:\Users\Jeff Reback\Documents\GitHub\pandas\build\lib.win32-2.7\pandas\io\data.py", line 1130, in _get_expiry_months
links = root.xpath('.//*[@id="yfncsumtab"]')[0].xpath('.//a')
IndexError: list index out of range
----------------------------------------------------------------------
Ran 6897 tests in 353.413s
FAILED (SKIP=315, errors=1)
C:\Users\Jeff Reback\Documents\GitHub\pandas>
```
| 2014-07-04T04:58:46Z | [] | [] |
Traceback (most recent call last):
File "c:\Users\Jeff Reback\Documents\GitHub\pandas\build\lib.win32-2.7\pandas\util\testing.py", line 1337, in wrapper
return t(*args, **kwargs)
File "c:\Users\Jeff Reback\Documents\GitHub\pandas\build\lib.win32-2.7\pandas\io\tests\test_data.py", line 307, in test_get_all_data
data = self.aapl.get_all_data(put=True)
File "c:\Users\Jeff Reback\Documents\GitHub\pandas\build\lib.win32-2.7\pandas\io\data.py", line 1095, in get_all_data
months = self._get_expiry_months()
File "c:\Users\Jeff Reback\Documents\GitHub\pandas\build\lib.win32-2.7\pandas\io\data.py", line 1130, in _get_expiry_months
links = root.xpath('.//*[@id="yfncsumtab"]')[0].xpath('.//a')
IndexError: list index out of range
| 15,446 |
||||
pandas-dev/pandas | pandas-dev__pandas-7675 | d17f1e948de1455e012c8a9ada0f424d67e9ff17 | TST: test_ts_plot_format_coord ValueError: Unknown format code 'f' for object of type 'str'
Only with ubuntu 12.04 i386:
```
======================================================================
ERROR: test_ts_plot_format_coord (pandas.tseries.tests.test_plotting.TestTSPlot)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/buildd/pandas-0.14.0+git345-g8cd3dd6/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tseries/tests/test_plotting.py", line 138, in test_ts_plot_format_coord
check_format_of_first_point(annual.plot(), 't = 2014 y = 1.000000')
File "/tmp/buildd/pandas-0.14.0+git345-g8cd3dd6/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tseries/tests/test_plotting.py", line 135, in check_format_of_first_point
self.assertEqual(expected_string, ax.format_coord(first_x, first_y))
File "/tmp/buildd/pandas-0.14.0+git345-g8cd3dd6/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tseries/plotting.py", line 90, in <lambda>
y))
ValueError: Unknown format code 'f' for object of type 'str'
```
```
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.3.final.0
python-bits: 32
OS: Linux
OS-release: 3.2.0-4-amd64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: C
LANG: C
pandas: 0.14.0.dev
nose: 1.1.2
Cython: 0.15.1
numpy: 1.6.1
scipy: 0.9.0
statsmodels: 0.5.0
IPython: None
sphinx: 1.1.3
patsy: 0.2.1
scikits.timeseries: None
dateutil: 1.5
pytz: 2012c
bottleneck: None
tables: 2.3.1
numexpr: 1.4.2
matplotlib: 1.1.1rc
openpyxl: 1.7.0
xlrd: 0.6.1
xlwt: 0.7.2
xlsxwriter: None
lxml: None
bs4: 4.0.2
html5lib: 0.90
httplib2: None
apiclient: None
rpy2: None
sqlalchemy: None
pymysql: None
psycopg2: None
```
| cc @sinhrks any thoughts on this?
Not sure, but maybe `dtype` is changed in either process? Whichever it is, it may be better to add an error-handling step in `format_coord`.
- Series construction using a single value (may result in `object` dtype?)
- Or matplotlib internal process?
@yarikoptic Can you attach the result of the following code to confirm when `dtype` is changed?
```
import pandas as pd
print('instanciate with value')
annual = pd.Series(1, index=pd.date_range('2014-01-01', periods=3, freq='A-DEC'))
print('dtype', annual.dtypes, annual.values)
ax = annual.plot()
first_line = ax.get_lines()[0]
first_y = first_line.get_ydata()[0]
print('first_y', type(first_y), first_y)
first_y = first_line.get_ydata(orig=True)[0]
print('first_y_orig', type(first_y), first_y)
first_y = first_line.get_ydata(orig=False)[0]
print('first_y_conv', type(first_y), first_y)
print('instanciate with list')
annual = pd.Series([1, 1, 1], index=pd.date_range('2014-01-01', periods=3, freq='A-DEC'))
print('dtype', annual.dtypes, annual.values)
ax = annual.plot()
first_line = ax.get_lines()[0]
first_y = first_line.get_ydata()[0]
print('first_y', type(first_y), first_y)
first_y = first_line.get_ydata(orig=True)[0]
print('first_y_orig', type(first_y), first_y)
first_y = first_line.get_ydata(orig=False)[0]
print('first_y_conv', type(first_y), first_y)
```
here you go sir
```
# PYTHONPATH=/tmp/buildd/pandas-0.14.0+git345-g8cd3dd6/debian/tmp/usr/lib/python2.7/dist-packages/ xvfb-run -a -s "-screen 0 1280x1024x24 -noreset" python /tmp/testcode.py
instanciate with value
('dtype', dtype('int64'), array([1, 1, 1], dtype=int64))
('first_y', <type 'numpy.int64'>, 1)
('first_y_orig', <type 'numpy.int64'>, 1)
instanciate with list
('dtype', dtype('int64'), array([1, 1, 1], dtype=int64))
('first_y', <type 'numpy.int64'>, 1)
('first_y_orig', <type 'numpy.int64'>, 1)
```
most probably it was not a type change but rather some incorrect type checking somewhere inside, and probably the elderly numpy is to blame (since it doesn't appear with newer releases, I guess it was fixed)?
```
(Pdb) print "t = {0} y = {1:8f}".format(Period(ordinal=int(t), freq=ax.freq),y)
*** ValueError: Unknown format code 'f' for object of type 'str'
(Pdb) print y
1
(Pdb) print type(y)
<type 'numpy.int64'>
(Pdb) print np.__version__
1.6.1
```
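A hedged sketch of a defensive `format_coord` (illustrative, not the landed fix): coerce before applying a float format spec, since old numpy's `int64.__format__` can fall back to the str path:
``` python
def format_coord(t, y):
    try:
        return 't = {0} y = {1:8f}'.format(t, float(y))
    except (TypeError, ValueError):
        return 't = {0} y = {1}'.format(t, y)

print(format_coord(2014, 1))  # t = 2014 y = 1.000000
```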
@yarikoptic this works for me on 1.6.1, but using matplotlib 1.1.1 (I don't have 1.1.1rc). Can you update and try with that?
| 2014-07-06T17:43:50Z | [] | [] |
Traceback (most recent call last):
File "/tmp/buildd/pandas-0.14.0+git345-g8cd3dd6/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tseries/tests/test_plotting.py", line 138, in test_ts_plot_format_coord
check_format_of_first_point(annual.plot(), 't = 2014 y = 1.000000')
File "/tmp/buildd/pandas-0.14.0+git345-g8cd3dd6/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tseries/tests/test_plotting.py", line 135, in check_format_of_first_point
self.assertEqual(expected_string, ax.format_coord(first_x, first_y))
File "/tmp/buildd/pandas-0.14.0+git345-g8cd3dd6/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tseries/plotting.py", line 90, in <lambda>
y))
ValueError: Unknown format code 'f' for object of type 'str'
| 15,449 |
||||
pandas-dev/pandas | pandas-dev__pandas-7696 | e131df1a60fb6909f8300664fd7359409f257441 | diff --git a/pandas/util/testing.py b/pandas/util/testing.py
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -41,7 +41,8 @@
from pandas.tseries.index import DatetimeIndex
from pandas.tseries.period import PeriodIndex
-from pandas import _testing
+from pandas import _testing, _np_version_under1p7
+
from pandas.io.common import urlopen
@@ -209,6 +210,12 @@ def setUpClass(cls):
cls.setUpClass = setUpClass
return cls
+def _skip_if_not_numpy17_friendly():
+ # not friendly for < 1.7
+ if _np_version_under1p7:
+ import nose
+ raise nose.SkipTest("numpy >= 1.7 is required")
+
def _skip_if_no_scipy():
try:
import scipy.stats
| test_select_dtypes_not_an_attr_but_still_valid_dtype ValueError: 'timedelta64[ns]' is too specific of a frequency, try passing 'timedelta64'
0.14.0+git393-g959e3e4, building on not-so-recent environments (32-bit builds are failing on that other test while testing python2, so this one is never reached there; it shows up on python3).
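The patch above gates such tests on the numpy version; a standalone approximation of that helper (the real one checks pandas' internal `_np_version_under1p7` flag):
``` python
import nose
import numpy as np
from distutils.version import LooseVersion

def _skip_if_not_numpy17_friendly():
    # numpy < 1.7 cannot round-trip np.dtype('timedelta64[ns]') reliably
    if LooseVersion(np.__version__) < LooseVersion('1.7'):
        raise nose.SkipTest("numpy >= 1.7 is required")
```
The failure itself: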
```
======================================================================
ERROR: test_select_dtypes_not_an_attr_but_still_valid_dtype (pandas.tests.test_frame.TestDataFrame)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/buildd/pandas-0.14.0+git393-g959e3e4/debian/tmp/usr/lib/python3/dist-packages/pandas/tests/test_frame.py", line 13054, in test_select_dtypes_not_an_attr_but_still_valid_dtype
r = df.select_dtypes(include=['i8', 'O', 'timedelta64[ns]'])
File "/tmp/buildd/pandas-0.14.0+git393-g959e3e4/debian/tmp/usr/lib/python3/dist-packages/pandas/core/frame.py", line 1946, in select_dtypes
selection)
File "/tmp/buildd/pandas-0.14.0+git393-g959e3e4/debian/tmp/usr/lib/python3/dist-packages/pandas/core/frame.py", line 1945, in <lambda>
frozenset(map(com._get_dtype_from_object, x)),
File "/tmp/buildd/pandas-0.14.0+git393-g959e3e4/debian/tmp/usr/lib/python3/dist-packages/pandas/core/common.py", line 1650, in _get_dtype_from_object
return _get_dtype_from_object(np.dtype(dtype))
File "/tmp/buildd/pandas-0.14.0+git393-g959e3e4/debian/tmp/usr/lib/python3/dist-packages/pandas/core/common.py", line 1635, in _get_dtype_from_object
_validate_date_like_dtype(dtype)
File "/tmp/buildd/pandas-0.14.0+git393-g959e3e4/debian/tmp/usr/lib/python3/dist-packages/pandas/core/common.py", line 1613, in _validate_date_like_dtype
% (dtype.name, dtype.type.__name__))
ValueError: 'timedelta64[ns]' is too specific of a frequency, try passing 'timedelta64'
```
```
pandas_0.14.0+git393-g959e3e4-1~nd12.04+1_amd64.build:ERROR: test_select_dtypes_not_an_attr_but_still_valid_dtype (pandas.tests.test_frame.TestDataFrame)
pandas_0.14.0+git393-g959e3e4-1~nd12.10+1_amd64.build:ERROR: test_select_dtypes_not_an_attr_but_still_valid_dtype (pandas.tests.test_frame.TestDataFrame)
pandas_0.14.0+git393-g959e3e4-1~nd70+1_amd64.build:ERROR: test_select_dtypes_not_an_attr_but_still_valid_dtype (pandas.tests.test_frame.TestDataFrame)
```
```
INSTALLED VERSIONS
------------------
commit: None
python: 3.2.3.final.0
python-bits: 64
OS: Linux
OS-release: 3.2.0-4-amd64
machine: x86_64
processor:
byteorder: little
LC_ALL: C
LANG: C
pandas: 0.14.0.dev
nose: 1.1.2
Cython: None
numpy: 1.6.2
scipy: 0.10.1
statsmodels: None
IPython: None
sphinx: 1.1.3
patsy: None
scikits.timeseries: None
dateutil: 2.0
pytz: 2012c
bottleneck: None
tables: None
numexpr: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: 4.1.0
html5lib: None
httplib2: None
apiclient: None
rpy2: None
sqlalchemy: None
pymysql: None
psycopg2: None
```
| 2014-07-08T14:15:02Z | [] | [] |
Traceback (most recent call last):
File "/tmp/buildd/pandas-0.14.0+git393-g959e3e4/debian/tmp/usr/lib/python3/dist-packages/pandas/tests/test_frame.py", line 13054, in test_select_dtypes_not_an_attr_but_still_valid_dtype
r = df.select_dtypes(include=['i8', 'O', 'timedelta64[ns]'])
File "/tmp/buildd/pandas-0.14.0+git393-g959e3e4/debian/tmp/usr/lib/python3/dist-packages/pandas/core/frame.py", line 1946, in select_dtypes
selection)
File "/tmp/buildd/pandas-0.14.0+git393-g959e3e4/debian/tmp/usr/lib/python3/dist-packages/pandas/core/frame.py", line 1945, in <lambda>
frozenset(map(com._get_dtype_from_object, x)),
File "/tmp/buildd/pandas-0.14.0+git393-g959e3e4/debian/tmp/usr/lib/python3/dist-packages/pandas/core/common.py", line 1650, in _get_dtype_from_object
return _get_dtype_from_object(np.dtype(dtype))
File "/tmp/buildd/pandas-0.14.0+git393-g959e3e4/debian/tmp/usr/lib/python3/dist-packages/pandas/core/common.py", line 1635, in _get_dtype_from_object
_validate_date_like_dtype(dtype)
File "/tmp/buildd/pandas-0.14.0+git393-g959e3e4/debian/tmp/usr/lib/python3/dist-packages/pandas/core/common.py", line 1613, in _validate_date_like_dtype
% (dtype.name, dtype.type.__name__))
ValueError: 'timedelta64[ns]' is too specific of a frequency, try passing 'timedelta64'
| 15,452 |
||||
pandas-dev/pandas | pandas-dev__pandas-7728 | 5adb0b6ed8fbc18f6e288e777b84feab4969b9d4 | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -152,6 +152,7 @@ There are no experimental changes in 0.15.0
Bug Fixes
~~~~~~~~~
+- Bug in ``get`` where an ``IndexError`` would not cause the default value to be returned (:issue:`7725`)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1038,7 +1038,7 @@ def get(self, key, default=None):
"""
try:
return self[key]
- except (KeyError, ValueError):
+ except (KeyError, ValueError, IndexError):
return default
def __getitem__(self, item):
| BUG(?): get default value doesn't work for Series
`Series` have a `get` method, but using the default value doesn't work for certain type combinations:
```
>>> s = pd.Series([1,2,3], index=["a","b","c"])
>>> s
a 1
b 2
c 3
dtype: int64
>>> s.get("d", 0)
0
>>> s.get(10, 0)
Traceback (most recent call last):
File "<ipython-input-18-26d73ac73179>", line 1, in <module>
s.get(10, 0)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_421_g20dfc6b-py2.7-linux-x86_64.egg/pandas/core/generic.py", line 1040, in get
return self[key]
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_421_g20dfc6b-py2.7-linux-x86_64.egg/pandas/core/series.py", line 484, in __getitem__
result = self.index.get_value(self, key)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_421_g20dfc6b-py2.7-linux-x86_64.egg/pandas/core/index.py", line 1202, in get_value
return tslib.get_value_box(s, key)
File "tslib.pyx", line 540, in pandas.tslib.get_value_box (pandas/tslib.c:11831)
File "tslib.pyx", line 555, in pandas.tslib.get_value_box (pandas/tslib.c:11678)
IndexError: index out of bounds
```
I'm not sure whether it makes the most sense just to teach `.get` to catch IndexErrors as well as KeyErrors and ValueErrors (which is what it does now), or whether a deeper change is warranted.
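The minimal option is what the patch above does -- add `IndexError` to the caught exceptions:
``` python
import pandas as pd

def get(obj, key, default=None):
    # same shape as NDFrame.get after the fix
    try:
        return obj[key]
    except (KeyError, ValueError, IndexError):
        return default

s = pd.Series([1, 2, 3], index=["a", "b", "c"])
print(get(s, 10, 0))  # 0, instead of raising
```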
| I think we could just catch the `IndexError` as well (or you can instead re-raise `TypeError` and return the default on the catch-all). This is de-facto a wrapper around `__getitem__` (which on `TypeError` tries to figure things out).
| 2014-07-11T01:17:23Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-18-26d73ac73179>", line 1, in <module>
s.get(10, 0)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_421_g20dfc6b-py2.7-linux-x86_64.egg/pandas/core/generic.py", line 1040, in get
return self[key]
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_421_g20dfc6b-py2.7-linux-x86_64.egg/pandas/core/series.py", line 484, in __getitem__
result = self.index.get_value(self, key)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0_421_g20dfc6b-py2.7-linux-x86_64.egg/pandas/core/index.py", line 1202, in get_value
return tslib.get_value_box(s, key)
File "tslib.pyx", line 540, in pandas.tslib.get_value_box (pandas/tslib.c:11831)
File "tslib.pyx", line 555, in pandas.tslib.get_value_box (pandas/tslib.c:11678)
IndexError: index out of bounds
| 15,455 |
|||
pandas-dev/pandas | pandas-dev__pandas-7789 | 6e67f0a148914b7c7da2dc18f07d9cf91c9037e6 | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -189,6 +189,7 @@ Bug Fixes
- Bug in ``DataFrame.as_matrix()`` with mixed ``datetime64[ns]`` and ``timedelta64[ns]`` dtypes (:issue:`7778`)
+- Bug in pickles contains ``DateOffset`` may raise ``AttributeError`` when ``normalize`` attribute is reffered internally (:issue:`7748`)
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -130,6 +130,9 @@ def __add__(date):
_cacheable = False
_normalize_cache = True
+ # default for prior pickles
+ normalize = False
+
def __init__(self, n=1, normalize=False, **kwds):
self.n = int(n)
self.normalize = normalize
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -578,6 +578,7 @@ def pxd(name):
'tests/data/legacy_pickle/0.11.0/*.pickle',
'tests/data/legacy_pickle/0.12.0/*.pickle',
'tests/data/legacy_pickle/0.13.0/*.pickle',
+ 'tests/data/legacy_pickle/0.14.0/*.pickle',
'tests/data/*.csv',
'tests/data/*.dta',
'tests/data/*.txt',
| AttributeError: 'Hour' object has no attribute 'normalize' when masking a time series
Hi All,
I have a strange bug when I want to mask values of a time series (that I get from a pickled Panel).
I have the following Series:
``` python
In[30]: ts=pd.read_pickle('df_issue.pkl')['wind_speed']
In [31]:ts
Out[31]:
2013-12-31 16:00:00 NaN
2013-12-31 17:00:00 NaN
2013-12-31 18:00:00 9.845031
2013-12-31 19:00:00 NaN
2013-12-31 20:00:00 NaN
2013-12-31 21:00:00 NaN
2013-12-31 22:00:00 NaN
2013-12-31 23:00:00 NaN
Freq: H, Name: wind_speed, dtype: float64
```
And I have an exception when I try to mask it:
``` python
In [32]: ts<0.
Traceback (most recent call last):
File "<ipython-input-32-534147d368f7>", line 1, in <module>
ts<0.
File "/usr/local/lib/python2.7/dist-packages/pandas/core/ops.py", line 585, in wrapper
res[mask] = masker
File "/usr/local/lib/python2.7/dist-packages/pandas/core/series.py", line 637, in __setitem__
self.where(~key, value, inplace=True)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/generic.py", line 3238, in where
inplace=True)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 2219, in putmask
return self.apply('putmask', **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 2185, in apply
b_items = self.items[b.mgr_locs.indexer]
File "/usr/local/lib/python2.7/dist-packages/pandas/tseries/index.py", line 1387, in __getitem__
new_offset = key.step * self.offset
File "/usr/local/lib/python2.7/dist-packages/pandas/tseries/offsets.py", line 265, in __rmul__
return self.__mul__(someInt)
File "/usr/local/lib/python2.7/dist-packages/pandas/tseries/offsets.py", line 262, in __mul__
return self.__class__(n=someInt * self.n, normalize=self.normalize, **self.kwds)
AttributeError: 'Hour' object has no attribute 'normalize'
```
If I make the same Series manually, it is working:
``` python
ts1=pd.Series(data=[np.nan,np.nan,9.845031]+[np.nan]*5,index=pd.date_range('2013-12-31 16:00:00',periods=8,freq='H'))
In [36]: ts1
Out[36]:
2013-12-31 16:00:00 NaN
2013-12-31 17:00:00 NaN
2013-12-31 18:00:00 9.845031
2013-12-31 19:00:00 NaN
2013-12-31 20:00:00 NaN
2013-12-31 21:00:00 NaN
2013-12-31 22:00:00 NaN
2013-12-31 23:00:00 NaN
Freq: H, dtype: float64
In [37]: ts1<0.
Out[37]:
2013-12-31 16:00:00 False
2013-12-31 17:00:00 False
2013-12-31 18:00:00 False
2013-12-31 19:00:00 False
2013-12-31 20:00:00 False
2013-12-31 21:00:00 False
2013-12-31 22:00:00 False
2013-12-31 23:00:00 False
Freq: H, dtype: bool
```
I really don't understand why...
You can find the pickle file here: http://we.tl/lrsFvmanVl
Thanks,
Greg
Here is the show_version output:
## INSTALLED VERSIONS
commit: None
python: 2.7.3.final.0
python-bits: 64
OS: Linux
OS-release: 3.8.0-37-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
pandas: 0.14.1
nose: 1.3.0
Cython: 0.20.1
numpy: 1.8.1
scipy: 0.13.3
statsmodels: 0.6.0.dev-Unknown
IPython: 1.1.0
sphinx: 1.1.3
patsy: 0.2.1
scikits.timeseries: None
dateutil: 2.2
pytz: 2014.4
bottleneck: 0.6.0
tables: 3.1.1
numexpr: 2.0.1
matplotlib: 1.3.1
openpyxl: 1.7.0
xlrd: 0.9.2
xlwt: 0.7.2
xlsxwriter: None
lxml: 3.0.0
bs4: 4.3.2
html5lib: None
httplib2: 0.7.2
apiclient: None
rpy2: None
sqlalchemy: None
pymysql: None
psycopg2: None
| when/how did you create the pickle? e.g. in a prior version, which one?
df_pickle.pkl is created using pandas 0.14.1. However, the raw data in the pickle file are results of calculations done on data originally stored in HDF5 (created with pandas 0.12).
can you show the original creation of the `df`?
Not really, because this is the output of several steps that are impossible to show here...
FYI, I just downgraded to pandas 0.13.1 and the issue above doesn't appear anymore.
Do you think all this could be due to the raw data loaded from HDF5 generated with pandas 0.12?
prob has to do with the pickling of the `Hour` object from the HDF5. It's an older version. I think you could just do something like
`df.index = df.index.copy()` in 0.14.1 and then it should work.
`df.index = df.index.copy()` doesn't work but doing the following works
```
new_df = pd.DataFrame(df.values,columns=df.columns, index=[i for i in df.index])
new_df['variable'] <0.
```
It is a bit slow due to `[i for i in df.index]` though (the df.index can be quite large). Do you have a faster alternative that may work as well?
you can try this:
`df.index = Index(df.index, freq=df.index.freqstr)` to recreate a new index, and the key is to create a _new_ frequency (rather than using the existing one)
It worked! Thanks a lot!
gr8. this is not a bug per-se, more of a slight change in the compat of frequencies in terms of backward-compat for pickle. Not a whole lot I think we can do about this.
cc @sinhrks, maybe change these to `normalize=getattr(self,'normalize',None)` ?
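The merged fix takes the class-attribute route instead -- a simplified sketch of why that rescues old pickles (`DateOffset` below is a toy stand-in):
``` python
class DateOffset(object):
    normalize = False  # class-level default covers pre-0.14 pickles

    def __init__(self, n=1, normalize=False):
        self.n = n
        self.normalize = normalize

# Unpickling restores __dict__ without calling __init__; simulate an old
# pickle whose state predates the 'normalize' attribute:
stale = DateOffset.__new__(DateOffset)
stale.__dict__.update({'n': 1})
print(stale.normalize)  # False -- falls back to the class attribute
```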
| 2014-07-18T15:58:00Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-32-534147d368f7>", line 1, in <module>
ts<0.
File "/usr/local/lib/python2.7/dist-packages/pandas/core/ops.py", line 585, in wrapper
res[mask] = masker
File "/usr/local/lib/python2.7/dist-packages/pandas/core/series.py", line 637, in __setitem__
self.where(~key, value, inplace=True)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/generic.py", line 3238, in where
inplace=True)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 2219, in putmask
return self.apply('putmask', **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 2185, in apply
b_items = self.items[b.mgr_locs.indexer]
File "/usr/local/lib/python2.7/dist-packages/pandas/tseries/index.py", line 1387, in __getitem__
new_offset = key.step * self.offset
File "/usr/local/lib/python2.7/dist-packages/pandas/tseries/offsets.py", line 265, in __rmul__
return self.__mul__(someInt)
File "/usr/local/lib/python2.7/dist-packages/pandas/tseries/offsets.py", line 262, in __mul__
return self.__class__(n=someInt * self.n, normalize=self.normalize, **self.kwds)
AttributeError: 'Hour' object has no attribute 'normalize'
| 15,462 |
|||
pandas-dev/pandas | pandas-dev__pandas-7810 | 34cecd84d530123bca0824606e9209ca89c0d40c | ValueError: Cannot compare tz-naive and tz-aware timestamps
Trying to generate a time series based on timezone raises exception.
```
>>> import pandas
>>> pandas.date_range('2013-01-01T00:00:00+05:30','2014-03-07T23:59:59+05:30',freq='AS')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/cruiser/work/metroleads/lib/python2.7/site-packages/pandas/tseries/index.py", line 1794, in date_range
closed=closed)
File "/Users/cruiser/work/metroleads/lib/python2.7/site-packages/pandas/tseries/index.py", line 196, in __new__
infer_dst=infer_dst)
File "/Users/cruiser/work/metroleads/lib/python2.7/site-packages/pandas/tseries/index.py", line 406, in _generate
index = _generate_regular_range(start, end, periods, offset)
File "/Users/cruiser/work/metroleads/lib/python2.7/site-packages/pandas/tseries/index.py", line 1750, in _generate_regular_range
dates = list(xdr)
File "/Users/cruiser/work/metroleads/lib/python2.7/site-packages/pandas/tseries/offsets.py", line 1871, in generate_range
if periods is None and end < start:
File "tslib.pyx", line 611, in pandas.tslib._Timestamp.__richcmp__ (pandas/tslib.c:10872)
File "tslib.pyx", line 640, in pandas.tslib._Timestamp._assert_tzawareness_compat (pandas/tslib.c:11186)
ValueError: Cannot compare tz-naive and tz-aware timestamps
```
```
pandas==0.13.1
```
| create this way
```
In [3]: pandas.date_range('2013-01-01T00:00:00','2014-03-07T23:59:59',freq='AS',tz='US/Eastern')
Out[3]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-01-01 00:00:00, 2014-01-01 00:00:00]
Length: 2, Freq: AS-JAN, Timezone: US/Eastern
```
What you are initially suggesting requires a modification; want to do a pull request for this?
Can work on current master, maybe fixed in #7465. Will add test cases for this.
```
pd.date_range('2013-01-01T00:00:00+05:30','2014-03-07T23:59:59+05:30',freq='AS')
# <class 'pandas.tseries.index.DatetimeIndex'>
# [2013-01-01 00:00:00+05:30, 2014-01-01 00:00:00+05:30]
# Length: 2, Freq: AS-JAN, Timezone: tzoffset(None, 19800)
```
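For comparison, a sketch of the equivalent construction via the `tz` keyword (`Asia/Kolkata` is my assumption for the +05:30 offset in the report):

``` python
import pandas as pd

pd.date_range('2013-01-01', '2014-03-07', freq='AS', tz='Asia/Kolkata')
# DatetimeIndex ['2013-01-01 00:00:00+05:30', '2014-01-01 00:00:00+05:30']
```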
| 2014-07-20T14:52:20Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/cruiser/work/metroleads/lib/python2.7/site-packages/pandas/tseries/index.py", line 1794, in date_range
closed=closed)
File "/Users/cruiser/work/metroleads/lib/python2.7/site-packages/pandas/tseries/index.py", line 196, in __new__
infer_dst=infer_dst)
File "/Users/cruiser/work/metroleads/lib/python2.7/site-packages/pandas/tseries/index.py", line 406, in _generate
index = _generate_regular_range(start, end, periods, offset)
File "/Users/cruiser/work/metroleads/lib/python2.7/site-packages/pandas/tseries/index.py", line 1750, in _generate_regular_range
dates = list(xdr)
File "/Users/cruiser/work/metroleads/lib/python2.7/site-packages/pandas/tseries/offsets.py", line 1871, in generate_range
if periods is None and end < start:
File "tslib.pyx", line 611, in pandas.tslib._Timestamp.__richcmp__ (pandas/tslib.c:10872)
File "tslib.pyx", line 640, in pandas.tslib._Timestamp._assert_tzawareness_compat (pandas/tslib.c:11186)
ValueError: Cannot compare tz-naive and tz-aware timestamps
| 15,467 |
||||
pandas-dev/pandas | pandas-dev__pandas-8026 | 534784be2be4d5f03daa208f04d897365798c852 | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -217,7 +217,7 @@ Internal Refactoring
In 0.15.0 ``Index`` has internally been refactored to no longer sub-class ``ndarray``
but instead subclass ``PandasObject``, similarly to the rest of the pandas objects. This change allows very easy sub-classing and creation of new index types. This should be
-a transparent change with only very limited API implications (:issue:`5080`, :issue:`7439`, :issue:`7796`)
+a transparent change with only very limited API implications (:issue:`5080`, :issue:`7439`, :issue:`7796`, :issue:`8024`)
- you may need to unpickle pandas version < 0.15.0 pickles using ``pd.read_pickle`` rather than ``pickle.load``. See :ref:`pickle docs <io.pickle>`
- when plotting with a ``PeriodIndex``. The ``matplotlib`` internal axes will now be arrays of ``Period`` rather than a ``PeriodIndex``. (this is similar to how a ``DatetimeIndex`` passes arrays of ``datetimes`` now)
diff --git a/pandas/core/index.py b/pandas/core/index.py
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -1506,6 +1506,9 @@ def _possibly_promote(self, other):
from pandas.tseries.index import DatetimeIndex
if self.inferred_type == 'date' and isinstance(other, DatetimeIndex):
return DatetimeIndex(self), other
+ elif self.inferred_type == 'boolean':
+ if self.dtype != 'object':
+ return self.astype('object'), other.astype('object')
return self, other
def groupby(self, to_groupby):
| Regression in series.map?
```
import pandas
from statsmodels import datasets
# load the data and clean it a bit
affairs = datasets.fair.load_pandas()
datas = affairs.exog
# any time greater than 0 is cheating
datas['cheated'] = affairs.endog > 0
# sort by the marriage quality and give meaningful name
# [rate_marriage, age, yrs_married, children,
# religious, educ, occupation, occupation_husb]
datas = datas.sort(['rate_marriage', 'religious'])
num_to_desc = {1: 'awful', 2: 'bad', 3: 'intermediate',
4: 'good', 5: 'wonderful'}
datas['rate_marriage'] = datas['rate_marriage'].map(num_to_desc)
num_to_faith = {1: 'non religious', 2: 'poorly religious', 3: 'religious',
4: 'very religious'}
datas['religious'] = datas['religious'].map(num_to_faith)
num_to_cheat = {False: 'faithful', True: 'cheated'}
datas['cheated'] = datas['cheated'].map(num_to_cheat)
```
```

part of the following test that fails on pythonxy Ubuntu testing:

```
======================================================================
ERROR: statsmodels.graphics.tests.test_mosaicplot.test_mosaic
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/lib/python2.7/dist-packages/numpy/testing/decorators.py", line 146, in skipper_func
    return f(*args, **kwargs)
  File "/build/buildd/statsmodels-0.6.0~ppa18~revno/debian/python-statsmodels/usr/lib/python2.7/dist-packages/statsmodels/graphics/tests/test_mosaicplot.py", line 124, in test_mosaic
    datas['cheated'] = datas['cheated'].map(num_to_cheat)
  File "/usr/lib/pymodules/python2.7/pandas/core/series.py", line 1960, in map
    indexer = arg.index.get_indexer(values)
  File "/usr/lib/pymodules/python2.7/pandas/core/index.py", line 1460, in get_indexer
    if not self.is_unique:
  File "properties.pyx", line 34, in pandas.lib.cache_readonly.__get__ (pandas/lib.c:38722)
  File "/usr/lib/pymodules/python2.7/pandas/core/index.py", line 571, in is_unique
    return self._engine.is_unique
  File "index.pyx", line 205, in pandas.index.IndexEngine.is_unique.__get__ (pandas/index.c:4338)
  File "index.pyx", line 234, in pandas.index.IndexEngine._do_unique_check (pandas/index.c:4790)
  File "index.pyx", line 247, in pandas.index.IndexEngine._ensure_mapping_populated (pandas/index.c:4995)
  File "index.pyx", line 253, in pandas.index.IndexEngine.initialize (pandas/index.c:5092)
  File "hashtable.pyx", line 731, in pandas.hashtable.PyObjectHashTable.map_locations (pandas/hashtable.c:12440)
ValueError: Does not understand character buffer dtype format string ('?')
```

This works on '0.13.1' but not on '0.14.1-202-g7d702e9'
| can u post the series right before the map?
so can make a test from this
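A minimal sketch of such a test, inferred from the traceback (a bool-valued Series mapped through a bool-keyed dict):

``` python
import pandas as pd

s = pd.Series([True, False, True, False])
result = s.map({False: 'faithful', True: 'cheated'})
# raised ValueError from the object hashtable on 0.14.1-dev;
# expected: a Series of 'cheated'/'faithful' strings
```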
| 2014-08-14T11:50:31Z | [] | [] |
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/usr/lib/python2.7/dist-packages/numpy/testing/decorators.py",
line 146, in skipper_func
| 15,504 |
|||
pandas-dev/pandas | pandas-dev__pandas-8030 | f8225c087320b46074c583aa7f803f311abfdf21 | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -533,7 +533,11 @@ Bug Fixes
+- Bug in installation where ``html_encoding/*.html`` wasn't installed and
+ therefore some tests were not running correctly (:issue:`7927`).
+- Bug in ``read_html`` where ``bytes`` objects were not tested for in
+ ``_read`` (:issue:`7927`).
diff --git a/pandas/io/html.py b/pandas/io/html.py
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -15,8 +15,8 @@
from pandas.io.common import _is_url, urlopen, parse_url
from pandas.io.parsers import TextParser
-from pandas.compat import (lrange, lmap, u, string_types, iteritems, text_type,
- raise_with_traceback)
+from pandas.compat import (lrange, lmap, u, string_types, iteritems,
+ raise_with_traceback, binary_type)
from pandas.core import common as com
from pandas import Series
@@ -51,6 +51,9 @@
_RE_WHITESPACE = re.compile(r'[\r\n]+|\s{2,}')
+char_types = string_types + (binary_type,)
+
+
def _remove_whitespace(s, regex=_RE_WHITESPACE):
"""Replace extra whitespace inside of a string with a single space.
@@ -114,13 +117,13 @@ def _read(obj):
text = url.read()
elif hasattr(obj, 'read'):
text = obj.read()
- elif isinstance(obj, string_types):
+ elif isinstance(obj, char_types):
text = obj
try:
if os.path.isfile(text):
with open(text, 'rb') as f:
return f.read()
- except TypeError:
+ except (TypeError, ValueError):
pass
else:
raise TypeError("Cannot read object of type %r" % type(obj).__name__)
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -583,6 +583,7 @@ def pxd(name):
'tests/data/*.xlsm',
'tests/data/*.table',
'tests/data/*.html',
+ 'tests/data/html_encoding/*.html',
'tests/test_json/data/*.json'],
'pandas.tools': ['tests/*.csv'],
'pandas.tests': ['data/*.pickle',
| TST: test_encode in test_html
@cpcloud not sure if this is something I did (or didn't do)
I was testing the index sub-class on 3.4 (may have appeared on travis too)
`/mnt/home/jreback/venv/py3.4/index/pandas/io/tests/data/html_encoding/chinese_utf-16.html`
```
======================================================================
ERROR: test_encode (pandas.io.tests.test_html.TestReadHtmlEncoding)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/mnt/home/jreback/venv/py3.4/index/pandas/io/tests/test_html.py", line 627, in test_encode
from_string = self.read_string(f, encoding).pop()
File "/mnt/home/jreback/venv/py3.4/index/pandas/io/tests/test_html.py", line 622, in read_string
return self.read_html(fobj.read(), encoding=encoding, index_col=0)
File "/mnt/home/jreback/venv/py3.4/index/pandas/io/tests/test_html.py", line 610, in read_html
return read_html(*args, **kwargs)
File "/mnt/home/jreback/venv/py3.4/index/pandas/io/html.py", line 843, in read_html
parse_dates, tupleize_cols, thousands, attrs, encoding)
File "/mnt/home/jreback/venv/py3.4/index/pandas/io/html.py", line 709, in _parse
raise_with_traceback(retained)
File "/mnt/home/jreback/venv/py3.4/index/pandas/compat/__init__.py", line 705, in raise_with_traceback
raise exc.with_traceback(traceback)
TypeError: Cannot read object of type 'bytes'
----------------------------------------------------------------------
Ran 64 tests in 61.014s
FAILED (errors=1)
(py3.4)jreback@sheep:~/venv/py3.4/index$ nosetests pandas//io/tests/test_html.py --pdb --pdb-failure^C
(py3.4)jreback@sheep:~/venv/py3.4/index$ python ci/print_versions.py
INSTALLED VERSIONS
------------------
commit: d1c4fbb0d170cfaf920a27907c014e8cc45752d1
python: 3.4.0.beta.3
python-bits: 64
OS: Linux
OS-release: 2.6.32-5-amd64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
pandas: None
nose: 1.3.3
Cython: 0.20.2
numpy: 1.8.0
scipy: 0.13.3
statsmodels: None
IPython: None
sphinx: None
patsy: 0.3.0
scikits.timeseries: None
dateutil: 2.2
pytz: 2014.4
bottleneck: 0.8.0
tables: 3.1.0
numexpr: 2.4
matplotlib: 1.3.1
openpyxl: 2.0.4
xlrd: 0.9.3
xlwt: None
xlsxwriter: 0.5.6
lxml: 3.3.5
bs4: 4.3.2
html5lib: 0.999
httplib2: None
apiclient: None
rpy2: None
sqlalchemy: 0.9.6
pymysql: 0.6.1.None
psycopg2: 2.5.2 (dt dec pq3 ext)
```
| looks like those farm animals are `byte`-ing.
sorry i couldn't resist
jokes aside not sure what's going on here let me take a look
@jreback any reason why this doesn't currently fail?
i can repro locally, but this failure isn't showing up on travis
I think this is using a pretty recent bs4 (4.3.2), not testing on travis with that
but that's not actually where the bug is. it happens because i don't check for bytes and str types in `_read` i only check for str types ... this is way before any parser libraries get called
this line in the tests
``` python
with open(f, 'rb') as fobj:
return self.read_html(fobj.read(), encoding=encoding, index_col=0)
```
should pass in a `bytes` type in py3 and this line
``` python
elif isinstance(obj, string_types):
```
should test `False` (because `string_types == (str,)` in py3 and `obj` is `bytes`) and thus the type error
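the fix (see the patch above) boils down to widening the accepted types, roughly:

``` python
from pandas.compat import string_types, binary_type

char_types = string_types + (binary_type,)  # (str, bytes) on py3

def _accepts(obj):
    # sketch of the isinstance check in _read after the fix
    return hasattr(obj, 'read') or isinstance(obj, char_types)
```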
@jreback don't worry about it i'll figure it out
actually those tests aren't even being run on travis
https://travis-ci.org/cpcloud/pandas/jobs/32559013
really, they don't appear to be skipped?
i know it's strange ... i think there's an installation issue tho not sure
i changed to `python setup.py develop` in the conda builds
i don't see why that should matter tho
ok it turns out that _does_ matter. when i install using the sdist method everything passes and when i run it using `make develop` it doesn't. mind boggling
the test _is_ run tho which is why this is strange
@jreback any reason to use `sdist` vs `make develop` on travis?
no idea
oh duh this is totally a data path issue,
`self.files` is `[]` so nothing is actually iterated over
| 2014-08-14T17:46:50Z | [] | [] |
Traceback (most recent call last):
File "/mnt/home/jreback/venv/py3.4/index/pandas/io/tests/test_html.py", line 627, in test_encode
from_string = self.read_string(f, encoding).pop()
File "/mnt/home/jreback/venv/py3.4/index/pandas/io/tests/test_html.py", line 622, in read_string
return self.read_html(fobj.read(), encoding=encoding, index_col=0)
File "/mnt/home/jreback/venv/py3.4/index/pandas/io/tests/test_html.py", line 610, in read_html
return read_html(*args, **kwargs)
File "/mnt/home/jreback/venv/py3.4/index/pandas/io/html.py", line 843, in read_html
parse_dates, tupleize_cols, thousands, attrs, encoding)
File "/mnt/home/jreback/venv/py3.4/index/pandas/io/html.py", line 709, in _parse
raise_with_traceback(retained)
File "/mnt/home/jreback/venv/py3.4/index/pandas/compat/__init__.py", line 705, in raise_with_traceback
raise exc.with_traceback(traceback)
TypeError: Cannot read object of type 'bytes'
| 15,506 |
|||
pandas-dev/pandas | pandas-dev__pandas-8054 | 3a7d8f1067f47261b59d2cf591bc0363998d7058 | test_dateindex_conversion fails on Python 3.4 / NumPy 1.8.1 / MPL 1.4 master / Ubuntu 12.04
With a recent checkout from master (bfd5348d824a721dd0d896bb06e63e4ad801ba51), running this
```
python3.4 `which nosetests` pandas
```
gives me this:
```
======================================================================
ERROR: test_dateindex_conversion (pandas.tseries.tests.test_converter.TestDateTimeConverter)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/pandas/tseries/tests/test_converter.py", line 77, in test_dateindex_conversion
xp = converter.dates.date2num(dateindex)
File "/usr/local/lib/python3.4/dist-packages/matplotlib/dates.py", line 310, in date2num
return _to_ordinalf_np_vectorized(d)
File "/usr/local/lib/python3.4/dist-packages/numpy/lib/function_base.py", line 1573, in __call__
return self._vectorize_call(func=func, args=vargs)
File "/usr/local/lib/python3.4/dist-packages/numpy/lib/function_base.py", line 1633, in _vectorize_call
ufunc, otypes = self._get_ufunc_and_otypes(func=func, args=args)
File "/usr/local/lib/python3.4/dist-packages/numpy/lib/function_base.py", line 1597, in _get_ufunc_and_otypes
outputs = func(*inputs)
File "/usr/local/lib/python3.4/dist-packages/matplotlib/dates.py", line 204, in _to_ordinalf
base = float(dt.toordinal())
AttributeError: 'numpy.datetime64' object has no attribute 'toordinal'
----------------------------------------------------------------------
Ran 7152 tests in 4107.297s
FAILED (SKIP=381, errors=1)
```
Looks like it's actually matplotlib's fault -- I notice there's a relevant ticket here:
https://github.com/matplotlib/matplotlib/issues/2259
But is this test expected to fail, as a result of that? And if so, should it be marked as skipped?
I'm using a very recent checkout of matplotlib (https://github.com/matplotlib/matplotlib/commit/4b1bd6301d69f856deca9c614af563f5fb4d1e90).
| we currently don't test with dev matplotlib (we do test with numpy dev and several versions of matplotlib released though)
could setup another build to do it
need to model after the numpy dev build
interested in doing a pr to try this (I will have to create some wheels for other deps - but not that hard)
Is it definitely only a problem with dev versions? matplotlib/matplotlib#2259 has been around for nearly a year.
Happy to help out with testing Pandas against MPL dev, if that's useful -- having said that, I'm on a mission to get a first proper release of [Snake Charmer](https://github.com/andrewclegg/snake-charmer) out, which is taking quite a lot of devops-ish attention... So I may not be the fastest.
I think only a problem with matplotlib dev as 1.3.1 tests ok
@TomAugspurger correct me if I am wrong
ok if u have time pls submit a pr - otherwise will prob do it after 0.14.0 is released
snake charmer looks interesting!
FYI u might want to build pip wheels and host from GitHub
this would make your install time extremely fast (we do this for pandas test and install pretty much the whole stack in a couple of minutes - mainly it's download time)
I think this was the relevant [issue](https://github.com/pydata/pandas/issues/6636) and [PR](https://github.com/pydata/pandas/pull/6650)
I recall hitting this problem before. I'll see if I can track down where.
@TomAugspurger ?
mpl dev here.
It is my understanding that mpl has never claimed to support np.datetime64, so we think it is a bug on the pandas side for sending in datetime64
We are in the process of getting 1.4.0 out the door (tagged rc4 today) so we should get this sorted out asap.
cc @sinhrks can u have a look at this
Yep, could reproduce using mpl 1.4.dev.
Because `DatetimeIndex` will not be passed as it is from plotting function, it looks OK to change the test to call `dateindex._mpl_repr()`.
https://github.com/pydata/pandas/blob/master/pandas/tools/plotting.py#L1145
I'll check other tests and send a PR.
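A sketch of the test change being proposed (same `converter` import as the failing test):

``` python
import pandas as pd
from pandas.tseries import converter

dateindex = pd.date_range('2020-01-01', periods=10, freq='D')
# hand matplotlib the array representation instead of the raw DatetimeIndex
xp = converter.dates.date2num(dateindex._mpl_repr())
```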
| 2014-08-18T12:46:26Z | [] | [] |
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/pandas/tseries/tests/test_converter.py", line 77, in test_dateindex_conversion
xp = converter.dates.date2num(dateindex)
File "/usr/local/lib/python3.4/dist-packages/matplotlib/dates.py", line 310, in date2num
return _to_ordinalf_np_vectorized(d)
File "/usr/local/lib/python3.4/dist-packages/numpy/lib/function_base.py", line 1573, in __call__
return self._vectorize_call(func=func, args=vargs)
File "/usr/local/lib/python3.4/dist-packages/numpy/lib/function_base.py", line 1633, in _vectorize_call
ufunc, otypes = self._get_ufunc_and_otypes(func=func, args=args)
File "/usr/local/lib/python3.4/dist-packages/numpy/lib/function_base.py", line 1597, in _get_ufunc_and_otypes
outputs = func(*inputs)
File "/usr/local/lib/python3.4/dist-packages/matplotlib/dates.py", line 204, in _to_ordinalf
base = float(dt.toordinal())
AttributeError: 'numpy.datetime64' object has no attribute 'toordinal'
| 15,512 |
||||
pandas-dev/pandas | pandas-dev__pandas-8090 | a72d95163b4d268012709255d7a52bbe5c1a7eb6 | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -427,6 +427,7 @@ Enhancements
~~~~~~~~~~~~
- Added support for a ``chunksize`` parameter to ``to_sql`` function. This allows DataFrame to be written in chunks and avoid packet-size overflow errors (:issue:`8062`)
+- Added support for writing ``datetime.date`` and ``datetime.time`` object columns with ``to_sql`` (:issue:`6932`).
- Added support for bool, uint8, uint16 and uint32 datatypes in ``to_stata`` (:issue:`7097`, :issue:`7365`)
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -11,6 +11,7 @@
import re
import numpy as np
+import pandas.lib as lib
import pandas.core.common as com
from pandas.compat import lzip, map, zip, raise_with_traceback, string_types
from pandas.core.api import DataFrame, Series
@@ -684,13 +685,14 @@ def _get_column_names_and_types(self, dtype_mapper):
if self.index is not None:
for i, idx_label in enumerate(self.index):
idx_type = dtype_mapper(
- self.frame.index.get_level_values(i).dtype)
+ self.frame.index.get_level_values(i))
column_names_and_types.append((idx_label, idx_type))
- column_names_and_types += zip(
- list(map(str, self.frame.columns)),
- map(dtype_mapper, self.frame.dtypes)
- )
+ column_names_and_types += [
+ (str(self.frame.columns[i]),
+ dtype_mapper(self.frame.iloc[:,i]))
+ for i in range(len(self.frame.columns))
+ ]
return column_names_and_types
def _create_table_statement(self):
@@ -756,30 +758,33 @@ def _harmonize_columns(self, parse_dates=None):
except KeyError:
pass # this column not in results
- def _sqlalchemy_type(self, arr_or_dtype):
+ def _sqlalchemy_type(self, col):
from sqlalchemy.types import (BigInteger, Float, Text, Boolean,
- DateTime, Date, Interval)
+ DateTime, Date, Time, Interval)
- if arr_or_dtype is date:
- return Date
- if com.is_datetime64_dtype(arr_or_dtype):
+ if com.is_datetime64_dtype(col):
try:
- tz = arr_or_dtype.tzinfo
+ tz = col.tzinfo
return DateTime(timezone=True)
except:
return DateTime
- if com.is_timedelta64_dtype(arr_or_dtype):
+ if com.is_timedelta64_dtype(col):
warnings.warn("the 'timedelta' type is not supported, and will be "
"written as integer values (ns frequency) to the "
"database.", UserWarning)
return BigInteger
- elif com.is_float_dtype(arr_or_dtype):
+ elif com.is_float_dtype(col):
return Float
- elif com.is_integer_dtype(arr_or_dtype):
+ elif com.is_integer_dtype(col):
# TODO: Refine integer size.
return BigInteger
- elif com.is_bool_dtype(arr_or_dtype):
+ elif com.is_bool_dtype(col):
return Boolean
+ inferred = lib.infer_dtype(com._ensure_object(col))
+ if inferred == 'date':
+ return Date
+ if inferred == 'time':
+ return Time
return Text
def _numpy_type(self, sqltype):
@@ -908,7 +913,11 @@ def _create_sql_schema(self, frame, table_name):
},
'date': {
'mysql': 'DATE',
- 'sqlite': 'TIMESTAMP',
+ 'sqlite': 'DATE',
+ },
+ 'time': {
+ 'mysql': 'TIME',
+ 'sqlite': 'TIME',
},
'bool': {
'mysql': 'BOOLEAN',
@@ -1014,8 +1023,8 @@ def _create_table_statement(self):
create_statement = template % {'name': self.name, 'columns': columns}
return create_statement
- def _sql_type_name(self, dtype):
- pytype = dtype.type
+ def _sql_type_name(self, col):
+ pytype = col.dtype.type
pytype_name = "text"
if issubclass(pytype, np.floating):
pytype_name = "float"
@@ -1029,10 +1038,14 @@ def _sql_type_name(self, dtype):
elif issubclass(pytype, np.datetime64) or pytype is datetime:
# Caution: np.datetime64 is also a subclass of np.number.
pytype_name = "datetime"
- elif pytype is datetime.date:
- pytype_name = "date"
elif issubclass(pytype, np.bool_):
pytype_name = "bool"
+ elif issubclass(pytype, np.object):
+ pytype = lib.infer_dtype(com._ensure_object(col))
+ if pytype == "date":
+ pytype_name = "date"
+ elif pytype == "time":
+ pytype_name = "time"
return _SQL_TYPES[pytype_name][self.pd_sql.flavor]
| BUG: add support for writing datetime.date and datetime.time columns using to_sql
Hi,
the following commands throw a DataError:
``` python
con = sqlalchemy.create_engine("mssql+pyodbc://server?driver=SQL Server Native Client 11.0")
df = pd.DataFrame([datetime.time(7,10), datetime.time(7,20)], columns=["a"])
sql.to_sql(df, "TBL_TEMP", con, index=False)
```
throws the following error:
``` python
Traceback (most recent call last):
File "<ipython-input-275-80a6d739629c>", line 1, in <module>
sql.to_sql(df, "TBL_TEMP3", con, index=False)
File "N:\Python\sql.py", line 399, in to_sql
index_label=index_label)
File "N:\Python\sql.py", line 774, in to_sql
table.insert()
File "N:\Python\sql.py", line 538, in insert
self.pd_sql.execute(ins, data_list)
File "N:\Python\sql.py", line 734, in execute
return self.engine.execute(*args, **kwargs)
File "C:\WinPython3.3.3.2\python-3.3.3.amd64\lib\site-packages\sqlalchemy\engine\base.py", line 1598, in execute
return connection.execute(statement, *multiparams, **params)
File "C:\WinPython3.3.3.2\python-3.3.3.amd64\lib\site-packages\sqlalchemy\engine\base.py", line 664, in execute
return meth(self, multiparams, params)
File "C:\WinPython3.3.3.2\python-3.3.3.amd64\lib\site-packages\sqlalchemy\sql\elements.py", line 282, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "C:\WinPython3.3.3.2\python-3.3.3.amd64\lib\site-packages\sqlalchemy\engine\base.py", line 761, in _execute_clauseelement
compiled_sql, distilled_params
File "C:\WinPython3.3.3.2\python-3.3.3.amd64\lib\site-packages\sqlalchemy\engine\base.py", line 874, in _execute_context
context)
File "C:\WinPython3.3.3.2\python-3.3.3.amd64\lib\site-packages\sqlalchemy\engine\base.py", line 1023, in _handle_dbapi_exception
exc_info
File "C:\WinPython3.3.3.2\python-3.3.3.amd64\lib\site-packages\sqlalchemy\util\compat.py", line 174, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=exc_value)
File "C:\WinPython3.3.3.2\python-3.3.3.amd64\lib\site-packages\sqlalchemy\util\compat.py", line 167, in reraise
raise value.with_traceback(tb)
File "C:\WinPython3.3.3.2\python-3.3.3.amd64\lib\site-packages\sqlalchemy\engine\base.py", line 856, in _execute_context
context)
File "C:\WinPython3.3.3.2\python-3.3.3.amd64\lib\site-packages\sqlalchemy\engine\default.py", line 385, in do_executemany
cursor.executemany(statement, parameters)
DataError: (DataError) ('22018', '[22018] [Microsoft][SQL Server Native Client 11.0][SQL Server]Operand type clash: time is incompatible with text (206) (SQLExecDirectW)') 'INSERT INTO [TBL_TEMP3] (a) VALUES (?)' ((datetime.time(7, 10),), (datetime.time(7, 20),))
```
I have two columns, one with datetime.date and one with datetime.time, which both exhibited this problem. I force-converted the datetime.date column via pd.to_datetime into a datetimeindex, which to_sql/sqlalchemy correctly formats into an SQL-acceptable date format. However, to_datetime does not work on datetime.date, leaving the pandas datatype as "object" instead of datetime64ns.
Thanks,
| `datetime.date` and `datetime.time` are always `object` formats; these are not force converted, unless you explicty pass a dtype (e.g. `datetime64[ns]`). This is sort of a compatibility issue as people want to use them, so they are represented as object dtype.
@jreback Is there any way to detect the 'real' type of a column when it has dtype `object`? (I suppose just looking at the type of, e.g., the first value, but that is not really robust)
Because, in principle, sqlalchemy has `datetime.date/time` types that map to sql types, and mysql has a `DATE` and `TIME` type, so they could actually be written to the database correctly.
BTW, this line now in the codebase, https://github.com/pydata/pandas/blob/master/pandas/io/sql.py#L655, is, I suppose, useless, as the dtype of a dataframe column can never be `datetime.date`?
yes that line would never be true and is useless
lib.infer_dtype(com._ensure_object(arr))
would return 'date' or 'time' if all elements are date or time
else would return 'mixed'
I updated this issue to also remember that `datetime.date` should be added (there is a line of code for that, but it is not working, and it was never tested).
So adding `datetime.time` and `datetime.date`:
- so if dtype=object, check if it is date or time (https://github.com/pydata/pandas/issues/6932#issuecomment-41147038)
- return the appropriate SQLAlchemy type (see the sketch below)
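A sketch of that detection step, matching the merged patch above:

``` python
import pandas.lib as lib
import pandas.core.common as com
from sqlalchemy.types import Date, Time, Text

def _object_col_sqltype(col):
    # col is an object-dtype Series; infer whether it holds dates or times
    inferred = lib.infer_dtype(com._ensure_object(col))
    if inferred == 'date':
        return Date
    if inferred == 'time':
        return Time
    return Text
```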
| 2014-08-22T09:41:46Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-275-80a6d739629c>", line 1, in <module>
sql.to_sql(df, "TBL_TEMP3", con, index=False)
File "N:\Python\sql.py", line 399, in to_sql
index_label=index_label)
File "N:\Python\sql.py", line 774, in to_sql
table.insert()
File "N:\Python\sql.py", line 538, in insert
self.pd_sql.execute(ins, data_list)
File "N:\Python\sql.py", line 734, in execute
return self.engine.execute(*args, **kwargs)
File "C:\WinPython3.3.3.2\python-3.3.3.amd64\lib\site-packages\sqlalchemy\engine\base.py", line 1598, in execute
return connection.execute(statement, *multiparams, **params)
File "C:\WinPython3.3.3.2\python-3.3.3.amd64\lib\site-packages\sqlalchemy\engine\base.py", line 664, in execute
return meth(self, multiparams, params)
File "C:\WinPython3.3.3.2\python-3.3.3.amd64\lib\site-packages\sqlalchemy\sql\elements.py", line 282, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "C:\WinPython3.3.3.2\python-3.3.3.amd64\lib\site-packages\sqlalchemy\engine\base.py", line 761, in _execute_clauseelement
compiled_sql, distilled_params
File "C:\WinPython3.3.3.2\python-3.3.3.amd64\lib\site-packages\sqlalchemy\engine\base.py", line 874, in _execute_context
context)
File "C:\WinPython3.3.3.2\python-3.3.3.amd64\lib\site-packages\sqlalchemy\engine\base.py", line 1023, in _handle_dbapi_exception
exc_info
File "C:\WinPython3.3.3.2\python-3.3.3.amd64\lib\site-packages\sqlalchemy\util\compat.py", line 174, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=exc_value)
File "C:\WinPython3.3.3.2\python-3.3.3.amd64\lib\site-packages\sqlalchemy\util\compat.py", line 167, in reraise
raise value.with_traceback(tb)
File "C:\WinPython3.3.3.2\python-3.3.3.amd64\lib\site-packages\sqlalchemy\engine\base.py", line 856, in _execute_context
context)
File "C:\WinPython3.3.3.2\python-3.3.3.amd64\lib\site-packages\sqlalchemy\engine\default.py", line 385, in do_executemany
cursor.executemany(statement, parameters)
DataError: (DataError) ('22018', '[22018] [Microsoft][SQL Server Native Client 11.0][SQL Server]Operand type clash: time is incompatible with text (206) (SQLExecDirectW)') 'INSERT INTO [TBL_TEMP3] (a) VALUES (?)' ((datetime.time(7, 10),), (datetime.time(7, 20),))
| 15,516 |
|||
pandas-dev/pandas | pandas-dev__pandas-8331 | 7c319fdb8fe8dce65683da5fe8418bdc3cf71f3e | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -276,6 +276,7 @@ API changes
Index(['a','b','c']).difference(Index(['b','c','d']))
- ``DataFrame.info()`` now ends its output with a newline character (:issue:`8114`)
- add ``copy=True`` argument to ``pd.concat`` to enable pass-thru of complete blocks (:issue:`8252`)
.. _whatsnew_0150.dt:
diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -666,7 +666,7 @@ def _sort_labels(uniques, left, right):
def concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False,
- keys=None, levels=None, names=None, verify_integrity=False):
+ keys=None, levels=None, names=None, verify_integrity=False, copy=True):
"""
Concatenate pandas objects along a particular axis with optional set logic
along the other axes. Can also add a layer of hierarchical indexing on the
@@ -704,6 +704,8 @@ def concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False,
concatenating objects where the concatenation axis does not have
meaningful indexing information. Note the the index values on the other
axes are still respected in the join.
+ copy : boolean, default True
+ If False, do not copy data unnecessarily
Notes
-----
@@ -716,7 +718,8 @@ def concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False,
op = _Concatenator(objs, axis=axis, join_axes=join_axes,
ignore_index=ignore_index, join=join,
keys=keys, levels=levels, names=names,
- verify_integrity=verify_integrity)
+ verify_integrity=verify_integrity,
+ copy=copy)
return op.get_result()
@@ -727,7 +730,7 @@ class _Concatenator(object):
def __init__(self, objs, axis=0, join='outer', join_axes=None,
keys=None, levels=None, names=None,
- ignore_index=False, verify_integrity=False):
+ ignore_index=False, verify_integrity=False, copy=True):
if not isinstance(objs, (list,tuple,types.GeneratorType,dict,TextFileReader)):
raise TypeError('first argument must be a list-like of pandas '
'objects, you passed an object of type '
@@ -846,6 +849,7 @@ def __init__(self, objs, axis=0, join='outer', join_axes=None,
self.ignore_index = ignore_index
self.verify_integrity = verify_integrity
+ self.copy = copy
self.new_axes = self._get_new_axes()
@@ -879,7 +883,9 @@ def get_result(self):
mgrs_indexers.append((obj._data, indexers))
new_data = concatenate_block_managers(
- mgrs_indexers, self.new_axes, concat_axis=self.axis, copy=True)
+ mgrs_indexers, self.new_axes, concat_axis=self.axis, copy=self.copy)
+ if not self.copy:
+ new_data._consolidate_inplace()
return self.objs[0]._from_axes(new_data, self.new_axes).__finalize__(self, method='concat')
| MemoryError with more than 1E9 rows
I have 240GB of RAM. Nothing else is running on the machine. I'm trying to create 1.5E9 rows, which I think should create a data frame of around 100GB, but I'm getting this MemoryError. This works fine with 1E9 but not 1.5E9. I could understand a limit at about 2^31 (2E9) or 2^32 (4E9), but all 240GB seem exhausted (according to htop) somewhere between 1E9 and 1.5E9 rows. Any ideas? Thanks.
``` python
$ python3
Python 3.4.0 (default, Apr 11 2014, 13:05:11)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> import numpy as np
>>> import timeit
>>> pd.__version__
'0.14.1'
>>> def randChar(f, numGrp, N) :
... things = [f%x for x in range(numGrp)]
... return [things[x] for x in np.random.choice(numGrp, N)]
...
>>> def randFloat(numGrp, N) :
... things = [round(100*np.random.random(),4) for x in range(numGrp)]
... return [things[x] for x in np.random.choice(numGrp, N)]
...
>>> N=int(1.5e9) # N=int(1e9) works fine
>>> K=100
>>> DF = pd.DataFrame({
... 'id1' : randChar("id%03d", K, N), # large groups (char)
... 'id2' : randChar("id%03d", K, N), # large groups (char)
... 'id3' : randChar("id%010d", N//K, N), # small groups (char)
... 'id4' : np.random.choice(K, N), # large groups (int)
... 'id5' : np.random.choice(K, N), # large groups (int)
... 'id6' : np.random.choice(N//K, N), # small groups (int)
... 'v1' : np.random.choice(5, N), # int in range [1,5]
... 'v2' : np.random.choice(5, N), # int in range [1,5]
... 'v3' : randFloat(100,N) # numeric e.g. 23.5749
... })
Traceback (most recent call last):
File "<stdin>", line 10, in <module>
File "/usr/lib/python3/dist-packages/pandas/core/frame.py", line 203, in __init__
mgr = self._init_dict(data, index, columns, dtype=dtype)
File "/usr/lib/python3/dist-packages/pandas/core/frame.py", line 327, in _init_dict
dtype=dtype)
File "/usr/lib/python3/dist-packages/pandas/core/frame.py", line 4630, in _arrays_to_mgr
return create_block_manager_from_arrays(arrays, arr_names, axes)
File "/usr/lib/python3/dist-packages/pandas/core/internals.py", line 3235, in create_block_manager_from_arrays
blocks = form_blocks(arrays, names, axes)
File "/usr/lib/python3/dist-packages/pandas/core/internals.py", line 3322, in form_blocks
object_items, np.object_)
File "/usr/lib/python3/dist-packages/pandas/core/internals.py", line 3346, in _simple_blockify
values, placement = _stack_arrays(tuples, dtype)
File "/usr/lib/python3/dist-packages/pandas/core/internals.py", line 3410, in _stack_arrays
stacked = np.empty(shape, dtype=dtype)
MemoryError
```
``` bash
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 62
Stepping: 4
CPU MHz: 2494.070
BogoMIPS: 5054.21
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 25600K
NUMA node0 CPU(s): 0-7,16-23
NUMA node1 CPU(s): 8-15,24-31
$ free -h
total used free shared buffers cached
Mem: 240G 2.3G 237G 364K 66M 632M
-/+ buffers/cache: 1.6G 238G
Swap: 0B 0B 0B
$
```
An earlier question on S.O. is here: http://stackoverflow.com/questions/25631076/is-this-the-fastest-way-to-group-in-pandas
| You can try separately creating Series (with each of the columns first), then putting them into a dict and creating the frame. However you might be having a problem finding contiguous memory.
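A sketch of that column-by-column approach (only a couple of the repro's columns shown):

``` python
import numpy as np
import pandas as pd

N, K = int(1.5e9), 100
cols = {}
cols['id4'] = pd.Series(np.random.choice(K, N))
cols['v1'] = pd.Series(np.random.choice(5, N))
# ...build the remaining columns the same way, one at a time...
df = pd.DataFrame(cols)
```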
| 2014-09-20T16:00:41Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 10, in <module>
File "/usr/lib/python3/dist-packages/pandas/core/frame.py", line 203, in __init__
mgr = self._init_dict(data, index, columns, dtype=dtype)
File "/usr/lib/python3/dist-packages/pandas/core/frame.py", line 327, in _init_dict
dtype=dtype)
File "/usr/lib/python3/dist-packages/pandas/core/frame.py", line 4630, in _arrays_to_mgr
return create_block_manager_from_arrays(arrays, arr_names, axes)
File "/usr/lib/python3/dist-packages/pandas/core/internals.py", line 3235, in create_block_manager_from_arrays
blocks = form_blocks(arrays, names, axes)
File "/usr/lib/python3/dist-packages/pandas/core/internals.py", line 3322, in form_blocks
object_items, np.object_)
File "/usr/lib/python3/dist-packages/pandas/core/internals.py", line 3346, in _simple_blockify
values, placement = _stack_arrays(tuples, dtype)
File "/usr/lib/python3/dist-packages/pandas/core/internals.py", line 3410, in _stack_arrays
stacked = np.empty(shape, dtype=dtype)
MemoryError
| 15,551 |
|||
pandas-dev/pandas | pandas-dev__pandas-8651 | 46c52e20d8418b0668738007564f3a8669275aaf | AttributeError: 'module' object has no attribute 'open_file'
```
======================================================================
ERROR: test_frame_select_complex2 (pandas.io.tests.test_pytables.TestHDFStore)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_pytables.py", line 3686, in test_frame_select_complex2
parms.to_hdf(pp,'df',mode='w',format='table',data_columns=['A'])
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/core/generic.py", line 896, in to_hdf
return pytables.to_hdf(path_or_buf, key, self, **kwargs)
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 293, in to_hdf
complib=complib) as store:
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 274, in get_store
store = HDFStore(path, **kwargs)
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 423, in __init__
self.open(mode=mode, **kwargs)
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 553, in open
self._handle = tables.open_file(self._path, self._mode, **kwargs)
AttributeError: 'module' object has no attribute 'open_file'
```
on the same old ubuntu 13.10
```
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.5.final.0
python-bits: 64
OS: Linux
OS-release: 3.2.0-4-amd64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: C
LANG: C
pandas: 0.15.0
nose: 1.3.0
Cython: 0.19
numpy: 1.7.1
scipy: 0.12.0
statsmodels: 0.5.0
IPython: None
sphinx: 1.1.3
patsy: 0.3.0
dateutil: 1.5
pytz: 2012c
bottleneck: None
tables: 2.4.0
numexpr: 2.0.1
matplotlib: 1.2.1
openpyxl: 1.7.0
xlrd: 0.9.2
xlwt: 0.7.4
xlsxwriter: None
lxml: None
bs4: 4.2.0
html5lib: 0.95-dev
httplib2: None
apiclient: None
rpy2: None
sqlalchemy: None
pymysql: None
psycopg2: None
```
| same here; this PyTables is too old and not supported
Numexpr too
we do a delayed import of tables, so it's possible that the test machinery swallows the error
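`tables.open_file` only exists in PyTables >= 3.0 (2.x had `openFile`), so a version guard along these lines would surface the real problem instead of the AttributeError (a sketch, not the actual pandas check):

``` python
from distutils.version import LooseVersion
import tables

if LooseVersion(tables.__version__) < LooseVersion('3.0.0'):
    raise ImportError("PyTables >= 3.0.0 is required for HDF support; "
                      "found %s" % tables.__version__)
```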
| 2014-10-27T12:09:24Z | [] | [] |
Traceback (most recent call last):
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/tests/test_pytables.py", line 3686, in test_frame_select_complex2
parms.to_hdf(pp,'df',mode='w',format='table',data_columns=['A'])
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/core/generic.py", line 896, in to_hdf
return pytables.to_hdf(path_or_buf, key, self, **kwargs)
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 293, in to_hdf
complib=complib) as store:
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 274, in get_store
store = HDFStore(path, **kwargs)
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 423, in __init__
self.open(mode=mode, **kwargs)
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 553, in open
self._handle = tables.open_file(self._path, self._mode, **kwargs)
AttributeError: 'module' object has no attribute 'open_file'
| 15,593 |
||||
pandas-dev/pandas | pandas-dev__pandas-8699 | fb124fd26fd0ad46cbc80e80735782ff25d24ad0 | diff --git a/doc/source/whatsnew/v0.15.1.txt b/doc/source/whatsnew/v0.15.1.txt
--- a/doc/source/whatsnew/v0.15.1.txt
+++ b/doc/source/whatsnew/v0.15.1.txt
@@ -1,14 +1,12 @@
.. _whatsnew_0151:
-v0.15.1 (November ??, 2014)
------------------------
+v0.15.1 (November 8, 2014)
+--------------------------
-This is a minor release from 0.15.0 and includes a small number of API changes, several new features,
+This is a minor bug-fix release from 0.15.0 and includes a small number of API changes, several new features,
enhancements, and performance improvements along with a large number of bug fixes. We recommend that all
users upgrade to this version.
-- Highlights include:
-
- :ref:`Enhancements <whatsnew_0151.enhancements>`
- :ref:`API Changes <whatsnew_0151.api>`
- :ref:`Performance Improvements <whatsnew_0151.performance>`
@@ -30,10 +28,10 @@ API changes
.. code-block:: python
- # this was underreported and actually took (in < 0.15.1) about 24008 bytes
+ # this was underreported in prior versions
In [1]: dfi.memory_usage(index=True)
Out[1]:
- Index 8000
+ Index 8000 # took about 24008 bytes in < 0.15.1
A 8000
dtype: int64
@@ -178,7 +176,7 @@ Experimental
Bug Fixes
~~~~~~~~~
-
+- Bug in unpickling of a ``CustomBusinessDay`` object (:issue:`8591`)
- Bug in coercing ``Categorical`` to a records array, e.g. ``df.to_records()`` (:issue:`8626`)
- Bug in ``Categorical`` not created properly with ``Series.to_frame()`` (:issue:`8626`)
- Bug in coercing in astype of a ``Categorical`` of a passed ``pd.Categorical`` (this now raises ``TypeError`` correctly), (:issue:`8626`)
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -614,6 +614,14 @@ def __getstate__(self):
"""Return a pickleable state"""
state = self.__dict__.copy()
del state['calendar']
+
+        # we don't want to actually pickle the calendar object
+        # as it's a np.busdaycalendar; we recreate it on deserialization
+ try:
+ state['kwds'].pop('calendar')
+ except:
+ pass
+
return state
def __setstate__(self, state):
| Pandas 15.0: TypeError: can't pickle busdaycalendar objects
Using the documentation's own CustomBusinessDay example, you can no longer save objects with a CustomBusinessDay frequency to an HDF store in Pandas 15.0 Final (but this worked in 14.1):
``` python
import pandas as pd
import numpy as np
from pandas.tseries.offsets import CustomBusinessDay
from datetime import datetime
weekmask_egypt = 'Sun Mon Tue Wed Thu'
holidays = ['2012-05-01', datetime(2013, 5, 1), np.datetime64('2014-05-01')]
bday_egypt = CustomBusinessDay(holidays=holidays, weekmask=weekmask_egypt)
dt = datetime(2013, 4, 30)
dts = pd.date_range(dt, periods=5, freq=bday_egypt)
s = (pd.Series(dts.weekday, dts).map(pd.Series('Mon Tue Wed Thu Fri Sat Sun'.split())))
store = pd.HDFStore('test.hdf')
store.put('test',s)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/pandas/io/pytables.py", line 831, in put
self._write_to_group(key, value, append=append, **kwargs)
File "/usr/local/lib/python2.7/site-packages/pandas/io/pytables.py", line 1280, in _write_to_group
s.write(obj=value, append=append, complib=complib, **kwargs)
File "/usr/local/lib/python2.7/site-packages/pandas/io/pytables.py", line 2557, in write
self.write_index('index', obj.index)
File "/usr/local/lib/python2.7/site-packages/pandas/io/pytables.py", line 2316, in write_index
node._v_attrs.freq = index.freq
File "/usr/local/lib/python2.7/site-packages/tables/attributeset.py", line 455, in __setattr__
self._g__setattr(name, value)
File "/usr/local/lib/python2.7/site-packages/tables/attributeset.py", line 397, in _g__setattr
self._g_setattr(self._v_node, name, stvalue)
File "hdf5extension.pyx", line 700, in tables.hdf5extension.AttributeSet._g_setattr (tables/hdf5extension.c:6623)
File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy_reg.py", line 70, in _reduce_ex
raise TypeError, "can't pickle %s objects" % base.__name__
TypeError: can't pickle busdaycalendar objects
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.8.final.0
python-bits: 64
OS: Darwin
OS-release: 13.4.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
pandas: 0.15.0
nose: 1.3.1
Cython: 0.20.2
numpy: 1.8.1
scipy: 0.14.0
statsmodels: None
IPython: 2.2.0
sphinx: None
patsy: None
dateutil: 2.2
pytz: 2014.4
bottleneck: 0.8.0
tables: 3.1.1
numexpr: 2.4
matplotlib: 1.3.1
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
rpy2: None
sqlalchemy: None
pymysql: None
psycopg2: None
```
| cc @bjonen
this pickles/unpickles just fine, but in this case the `__getstate__` in the CustomBusinessDay offset deletes the calendar, but not from the `kwds` (so the pickle fails). can you do a pr to fix (you can use this test case), and maybe run thru all of the offsets just in case, they are prob not directly tested for picklability.
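A round-trip test along these lines (reusing the repro above) would cover it:

``` python
import pickle
import numpy as np
from datetime import datetime
from pandas.tseries.offsets import CustomBusinessDay

holidays = ['2012-05-01', datetime(2013, 5, 1), np.datetime64('2014-05-01')]
bday_egypt = CustomBusinessDay(holidays=holidays, weekmask='Sun Mon Tue Wed Thu')
unpickled = pickle.loads(pickle.dumps(bday_egypt))
assert unpickled == bday_egypt
```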
@bazeli thanks for reporting
Right now I do not have the time to look into this. It will be a couple of weeks but I'll do it unless someone else wants to take it earlier.
| 2014-10-31T19:34:33Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/pandas/io/pytables.py", line 831, in put
self._write_to_group(key, value, append=append, **kwargs)
File "/usr/local/lib/python2.7/site-packages/pandas/io/pytables.py", line 1280, in _write_to_group
s.write(obj=value, append=append, complib=complib, **kwargs)
File "/usr/local/lib/python2.7/site-packages/pandas/io/pytables.py", line 2557, in write
self.write_index('index', obj.index)
File "/usr/local/lib/python2.7/site-packages/pandas/io/pytables.py", line 2316, in write_index
node._v_attrs.freq = index.freq
File "/usr/local/lib/python2.7/site-packages/tables/attributeset.py", line 455, in __setattr__
self._g__setattr(name, value)
File "/usr/local/lib/python2.7/site-packages/tables/attributeset.py", line 397, in _g__setattr
self._g_setattr(self._v_node, name, stvalue)
File "hdf5extension.pyx", line 700, in tables.hdf5extension.AttributeSet._g_setattr (tables/hdf5extension.c:6623)
File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy_reg.py", line 70, in _reduce_ex
raise TypeError, "can't pickle %s objects" % base.__name__
TypeError: can't pickle busdaycalendar objects
| 15,604 |
|||
pandas-dev/pandas | pandas-dev__pandas-8810 | 4faf62026dfbfafb4f8c3676adc33ad02f465858 | diff --git a/doc/source/whatsnew/v0.15.2.txt b/doc/source/whatsnew/v0.15.2.txt
--- a/doc/source/whatsnew/v0.15.2.txt
+++ b/doc/source/whatsnew/v0.15.2.txt
@@ -63,3 +63,4 @@ Bug Fixes
- Bug in slicing a multi-index with an empty list and at least one boolean indexer (:issue:`8781`)
- ``io.data.Options`` now raises ``RemoteDataError`` when no expiry dates are available from Yahoo (:issue:`8761`).
- ``Timedelta`` kwargs may now be numpy ints and floats (:issue:`8757`).
+- Skip testing of histogram plots for matplotlib <= 1.2 (:issue:`8648`).
| ubuntu 13.10 with mpl : test_hist_df* TypeError: barh() got multiple values for keyword argument 'bottom'
0.15.0
```
======================================================================
ERROR: test_hist_df (pandas.tests.test_graphics.TestDataFramePlots)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tests/test_graphics.py", line 2061, in test_hist_df
axes = df.plot(kind='hist', rot=50, fontsize=8, orientation='horizontal')
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tools/plotting.py", line 2453, in plot_frame
**kwds)
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tools/plotting.py", line 2293, in _plot
plot_obj.generate()
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tools/plotting.py", line 920, in generate
self._make_plot()
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tools/plotting.py", line 1949, in _make_plot
artists = plotf(ax, y, column_num=i, **kwds)
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tools/plotting.py", line 1929, in plotf
bottom=bottom, **kwds)
File "/usr/lib/pymodules/python2.7/matplotlib/axes.py", line 8180, in hist
color=c, bottom=bottom)
TypeError: barh() got multiple values for keyword argument 'bottom'
```
```
NSTALLED VERSIONS
------------------
commit: None
python: 2.7.5.final.0
python-bits: 64
OS: Linux
OS-release: 3.2.0-4-amd64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: C
LANG: C
pandas: 0.15.0
nose: 1.3.0
Cython: 0.19
numpy: 1.7.1
scipy: 0.12.0
statsmodels: 0.5.0
IPython: None
sphinx: 1.1.3
patsy: 0.3.0
dateutil: 1.5
pytz: 2012c
bottleneck: None
tables: 2.4.0
numexpr: 2.0.1
matplotlib: 1.2.1
openpyxl: 1.7.0
xlrd: 0.9.2
xlwt: 0.7.4
xlsxwriter: None
lxml: None
bs4: 4.2.0
html5lib: 0.95-dev
httplib2: None
apiclient: None
rpy2: None
sqlalchemy: None
pymysql: None
psycopg2: None
```
| I'm not sure what the official word is on what versions of matplotlib pandas tries to support, but v1.2.1 is getting pretty old by now. I would try installing a more recent version through pip, e.g.:
``` bash
sudo apt-get build-dep python-matplotlib
sudo pip install matplotlib
```
@onesandzeroes this just needs a skip test for this function (for matplotlib < 1.3)
there are some examples already of how to do this IIRC.
want to do a PR?
we test with 1.3.1 on the 2.7_LOCALE build, so this is not picked up.
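A sketch of the kind of gate the test needs (the exact cutoff/helper used in test_graphics may differ):

``` python
from distutils.version import LooseVersion
import matplotlib
import nose

if LooseVersion(matplotlib.__version__) <= LooseVersion('1.2.1'):
    raise nose.SkipTest("horizontal hist plots need matplotlib > 1.2.1")
```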
| 2014-11-14T01:16:21Z | [] | [] |
Traceback (most recent call last):
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tests/test_graphics.py", line 2061, in test_hist_df
axes = df.plot(kind='hist', rot=50, fontsize=8, orientation='horizontal')
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tools/plotting.py", line 2453, in plot_frame
**kwds)
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tools/plotting.py", line 2293, in _plot
plot_obj.generate()
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tools/plotting.py", line 920, in generate
self._make_plot()
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tools/plotting.py", line 1949, in _make_plot
artists = plotf(ax, y, column_num=i, **kwds)
File "/tmp/buildd/pandas-0.15.0/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tools/plotting.py", line 1929, in plotf
bottom=bottom, **kwds)
File "/usr/lib/pymodules/python2.7/matplotlib/axes.py", line 8180, in hist
color=c, bottom=bottom)
TypeError: barh() got multiple values for keyword argument 'bottom'
| 15,622 |
|||
pandas-dev/pandas | pandas-dev__pandas-8847 | 079bd88dd10a44c8d096f023fc8bd1049df24f6c | diff --git a/doc/source/whatsnew/v0.15.2.txt b/doc/source/whatsnew/v0.15.2.txt
--- a/doc/source/whatsnew/v0.15.2.txt
+++ b/doc/source/whatsnew/v0.15.2.txt
@@ -75,7 +75,7 @@ Bug Fixes
-
+- Defined ``.size`` attribute across ``NDFrame`` objects to provide compat with numpy >= 1.9.1; buggy with ``np.array_split`` (:issue:`8846`)
- Skip testing of histogram plots for matplotlib <= 1.2 (:issue:`8648`).
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -381,6 +381,11 @@ def ndim(self):
"Number of axes / array dimensions"
return self._data.ndim
+ @property
+ def size(self):
+ "number of elements in the NDFrame"
+ return np.prod(self.shape)
+
def _expand_axes(self, key):
new_axes = []
for k, ax in zip(key, self.axes):
| splitting pandas dataframe - np.array_split error
I just noticed that after upgrading to numpy 1.9.0, when I'm trying to split dataframe with pandas 0.15.1 with the code:
```
split_dfs = np.array_split(big_df,8)
```
I get the error:
```
Traceback (most recent call last):
File "./test.py", line 127, in <module>
split_dfs = np.array_split(big_df,8)
File "/usr/lib/python2.7/site-packages/numpy/lib/shape_base.py", line 426, in array_split
if sub_arys[-1].size == 0 and sub_arys[-1].ndim != 1:
File "/usr/lib/python2.7/site-packages/pandas-0.15.1-py2.7-linux-x86_64.egg/pandas /core/generic.py", line 1936, in __getattr__
(type(self).__name__, name))
AttributeError: 'DataFrame' object has no attribute 'size'
```
with pandas 0.15.1 and numpy 1.8.1 it works fine.
I'm using pandas 0.15.1 on arch linux and python2.7
| This is due to a change in numpy: https://github.com/numpy/numpy/pull/4102 (`.size` is used to check if a FutureWarning should be raised or not, introduced in numpy 1.9.0).
But, this is a numpy function, and not really guaranteed to work with pandas dataframes although it did before (or should numpy do an `asarray`? @jreback , or should DataFrame have a `size` attribute?)
For a pandas-native split functionality, see the enhancement request: #7387
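With the `.size` property added in the patch above, numpy's emptiness check works again; a quick demonstration:

``` python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': range(9)})
assert df.size == np.prod(df.shape) == 9
parts = np.array_split(df, 3)  # no AttributeError once .size exists
```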
| 2014-11-18T11:09:54Z | [] | [] |
Traceback (most recent call last):
File "./test.py", line 127, in <module>
split_dfs = np.array_split(big_df,8)
File "/usr/lib/python2.7/site-packages/numpy/lib/shape_base.py", line 426, in array_split
if sub_arys[-1].size == 0 and sub_arys[-1].ndim != 1:
File "/usr/lib/python2.7/site-packages/pandas-0.15.1-py2.7-linux-x86_64.egg/pandas /core/generic.py", line 1936, in __getattr__
(type(self).__name__, name))
AttributeError: 'DataFrame' object has no attribute 'size'
| 15,630 |
|||
pandas-dev/pandas | pandas-dev__pandas-8982 | 4ab540943b0c489f55db2083e39c6c1e2edce066 | diff --git a/pandas/io/stata.py b/pandas/io/stata.py
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -611,9 +611,10 @@ class StataMissingValue(StringMixin):
MISSING_VALUES = {}
bases = (101, 32741, 2147483621)
for b in bases:
- MISSING_VALUES[b] = '.'
+ # Conversion to long to avoid hash issues on 32 bit platforms #8968
+ MISSING_VALUES[compat.long(b)] = '.'
for i in range(1, 27):
- MISSING_VALUES[i + b] = '.' + chr(96 + i)
+ MISSING_VALUES[compat.long(i + b)] = '.' + chr(96 + i)
float32_base = b'\x00\x00\x00\x7f'
increment = struct.unpack('<i', b'\x00\x08\x00\x00')[0]
@@ -643,6 +644,8 @@ class StataMissingValue(StringMixin):
def __init__(self, value):
self._value = value
+ # Conversion to long to avoid hash issues on 32 bit platforms #8968
+ value = compat.long(value) if value < 2147483648 else float(value)
self._str = self.MISSING_VALUES[value]
string = property(lambda self: self._str,
@@ -1375,13 +1378,6 @@ def _pad_bytes(name, length):
return name + "\x00" * (length - len(name))
-def _default_names(nvar):
- """
- Returns default Stata names v1, v2, ... vnvar
- """
- return ["v%d" % i for i in range(1, nvar+1)]
-
-
def _convert_datetime_to_stata_type(fmt):
"""
Converts from one of the stata date formats to a type in TYPE_MAP
| test_missing_value_conversion on ubuntu 13.10 32bit KeyError: 2147483647
```
======================================================================
ERROR: test_missing_value_conversion (pandas.io.tests.test_stata.TestStata)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/buildd/pandas-0.15.1+git125-ge463818/debian/tmp/usr/lib/python3/dist-packages/pandas/io/tests/test_stata.py", line 651, in test_missing_value_conversion
parsed_113 = read_stata(self.dta17_113, convert_missing=True)
File "/tmp/buildd/pandas-0.15.1+git125-ge463818/debian/tmp/usr/lib/python3/dist-packages/pandas/io/stata.py", line 69, in read_stata
order_categoricals)
File "/tmp/buildd/pandas-0.15.1+git125-ge463818/debian/tmp/usr/lib/python3/dist-packages/pandas/io/stata.py", line 1278, in data
missing_value = StataMissingValue(um)
File "/tmp/buildd/pandas-0.15.1+git125-ge463818/debian/tmp/usr/lib/python3/dist-packages/pandas/io/stata.py", line 646, in __init__
self._str = self.MISSING_VALUES[value]
KeyError: 2147483647
```
This doesn't happen on amd64.
```
INSTALLED VERSIONS
------------------
commit: None
python: 3.3.2.final.0
python-bits: 32
OS: Linux
OS-release: 3.2.0-4-amd64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: C
LANG: C
pandas: 0.15.1.dev
nose: 1.3.0
Cython: 0.19
numpy: 1.7.1
scipy: 0.12.0
statsmodels: None
IPython: None
sphinx: 1.1.3
patsy: None
dateutil: 2.0
pytz: 2012c
bottleneck: None
tables: None
numexpr: None
matplotlib: 1.2.1
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: 4.2.0
html5lib: None
httplib2: None
apiclient: None
rpy2: None
sqlalchemy: None
pymysql: None
psycopg2: None
```
| cc @bashtage
Bizarre. This is straight Python:
``` python
MISSING_VALUES = {}
bases = (101, 32741, 2147483621)
for b in bases:
MISSING_VALUES[b] = '.'
for i in range(1, 27):
MISSING_VALUES[i + b] = '.' + chr(96 + i)
```
This certainly appears to create `MISSING_VALUES[2147483621 + 26]` (2147483647).
I don't have any 32-bit machine to test on - I suppose that this is a 32-bit issue, since this is the largest 32-bit integer.
So on 32-bit, the max value is `np.iinfo(np.int32).max` (which is that number).
You cannot add to it (well, you can, but it squelches the overflow) and the result is 'undefined' IIRC.
However, I have seen cases where the max value 'wraps' around, so the effective max is one less than shown here.
Are these bases a Stata thing?
Yes, these are a Stata choice to use the highest integer values of each type to represent missing values. From what I can see, it should be perfectly fine since it is not adding anything to this value.
Just some thoughts - I am guessing that `hash(np.int32(2147483647))` is not the same as `hash(int(2147483647))` - perhaps numpy is treating it as `2147483647L` which might have a different hash on 32 bit. I suppose one way to workaround this would be to explicitly cast any integer values to int when looking up (any missing < 2147483647 must be an int).
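Following that line of thought, here is a minimal sketch of the normalization the patch above adopts (the hash mismatch itself only manifests on the 32-bit builds in question; `compat.long` is pandas' py2/py3 shim, `long` on Python 2 and `int` on Python 3):
```python
import numpy as np
from pandas import compat

# The sentinel unpacked from a .dta file arrives as a numpy integer; on the
# affected 32-bit builds its hash need not match the plain-int dict key, so
# both the keys and the lookup value are normalized to a Python long.
MISSING_VALUES = {compat.long(2147483621 + 26): '.z'}  # largest int32

value = np.int32(2147483647)  # as read from the file
assert MISSING_VALUES[compat.long(value)] == '.z'
```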
| 2014-12-03T12:40:22Z | [] | [] |
Traceback (most recent call last):
File "/tmp/buildd/pandas-0.15.1+git125-ge463818/debian/tmp/usr/lib/python3/dist-packages/pandas/io/tests/test_stata.py", line 651, in test_missing_value_conversion
parsed_113 = read_stata(self.dta17_113, convert_missing=True)
File "/tmp/buildd/pandas-0.15.1+git125-ge463818/debian/tmp/usr/lib/python3/dist-packages/pandas/io/stata.py", line 69, in read_stata
order_categoricals)
File "/tmp/buildd/pandas-0.15.1+git125-ge463818/debian/tmp/usr/lib/python3/dist-packages/pandas/io/stata.py", line 1278, in data
missing_value = StataMissingValue(um)
File "/tmp/buildd/pandas-0.15.1+git125-ge463818/debian/tmp/usr/lib/python3/dist-packages/pandas/io/stata.py", line 646, in __init__
self._str = self.MISSING_VALUES[value]
KeyError: 2147483647
| 15,648 |
|||
pandas-dev/pandas | pandas-dev__pandas-8988 | 12d71f0d89e55293ebc09961de50a2c55e0d5495 | diff --git a/pandas/io/sql.py b/pandas/io/sql.py
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -19,6 +19,7 @@
from pandas.core.common import isnull
from pandas.core.base import PandasObject
from pandas.tseries.tools import to_datetime
+from pandas.util.decorators import Appender
from contextlib import contextmanager
@@ -1533,6 +1534,7 @@ def get_schema(frame, name, flavor='sqlite', keys=None, con=None):
# legacy names, with depreciation warnings and copied docs
+@Appender(read_sql.__doc__, join='\n')
def read_frame(*args, **kwargs):
"""DEPRECATED - use read_sql
"""
@@ -1540,6 +1542,7 @@ def read_frame(*args, **kwargs):
return read_sql(*args, **kwargs)
+@Appender(read_sql.__doc__, join='\n')
def frame_query(*args, **kwargs):
"""DEPRECATED - use read_sql
"""
@@ -1587,8 +1590,3 @@ def write_frame(frame, name, con, flavor='sqlite', if_exists='fail', **kwargs):
index = kwargs.pop('index', False)
return to_sql(frame, name, con, flavor=flavor, if_exists=if_exists,
index=index, **kwargs)
-
-
-# Append wrapped function docstrings
-read_frame.__doc__ += read_sql.__doc__
-frame_query.__doc__ += read_sql.__doc__
| Can't run in optimized mode on Ubuntu
If I start python with a -O flag, I get an error as soon as I import pandas
``` python
python -O
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/username/.virtualenvs/thm/local/lib/python2.7/site-packages/pandas/__init__.py", line 45, in <module>
from pandas.io.api import *
File "/home/username/.virtualenvs/thm/local/lib/python2.7/site-packages/pandas/io/api.py", line 11, in <module>
from pandas.io.sql import read_sql, read_sql_table, read_sql_query
File "/home/username/.virtualenvs/thm/local/lib/python2.7/site-packages/pandas/io/sql.py", line 1243, in <module>
read_frame.__doc__ += read_sql.__doc__
TypeError: unsupported operand type(s) for +=: 'NoneType' and 'NoneType'
```
The lines causing the issue are:
``` python
# Append wrapped function docstrings
read_frame.__doc__ += read_sql.__doc__
frame_query.__doc__ += read_sql.__doc__
```
It appears as if python is stripping the docstrings, which then causes issues. Commenting the lines out fixes the problem.
I'm on Ubuntu 14.04.1 LTS, my version info is:
``` python
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.6.final.0
python-bits: 64
OS: Linux
OS-release: 3.13.0-29-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
pandas: 0.14.1
nose: 1.3.0
Cython: None
numpy: 1.8.0
scipy: 0.9.0
statsmodels: None
IPython: None
sphinx: None
patsy: None
scikits.timeseries: None
dateutil: 2.2
pytz: 2013.9
bottleneck: 0.8.0
tables: None
numexpr: 2.4
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
rpy2: None
sqlalchemy: 0.9.7
pymysql: None
psycopg2: None
```
| I'm on OS X and have the same issue with python3.
## INSTALLED VERSIONS
commit: None
python: 3.4.2.final.0
python-bits: 64
OS: Darwin
OS-release: 14.0.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
pandas: 0.15.1
nose: None
Cython: None
numpy: 1.9.1
scipy: 0.14.0
statsmodels: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.2
pytz: 2014.9
bottleneck: 0.8.0
tables: None
numexpr: 2.4
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
rpy2: None
sqlalchemy: None
pymysql: None
psycopg2: None
I suppose these docstrings should use the Appender decorator, which deals with this issue, IIRC.
pull requests welcome!
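For reference, a minimal sketch of the idea behind the `Appender` decorator used in the patch above; the `append_doc` helper here is illustrative, not pandas' actual API:
```python
def append_doc(source, join="\n"):
    """Append source's docstring to the decorated function's docstring,
    tolerating docstrings the interpreter has stripped to None."""
    def decorate(func):
        parts = [func.__doc__ or "", source.__doc__ or ""]
        func.__doc__ = join.join(p for p in parts if p) or None
        return func
    return decorate


def read_sql(*args, **kwargs):
    """Read SQL query or database table into a DataFrame."""


@append_doc(read_sql)
def read_frame(*args, **kwargs):
    """DEPRECATED - use read_sql"""
    return read_sql(*args, **kwargs)
```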
| 2014-12-03T22:51:12Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/username/.virtualenvs/thm/local/lib/python2.7/site-packages/pandas/__init__.py", line 45, in <module>
from pandas.io.api import *
File "/home/username/.virtualenvs/thm/local/lib/python2.7/site-packages/pandas/io/api.py", line 11, in <module>
from pandas.io.sql import read_sql, read_sql_table, read_sql_query
File "/home/username/.virtualenvs/thm/local/lib/python2.7/site-packages/pandas/io/sql.py", line 1243, in <module>
read_frame.__doc__ += read_sql.__doc__
TypeError: unsupported operand type(s) for +=: 'NoneType' and 'NoneType'
| 15,650 |
|||
pandas-dev/pandas | pandas-dev__pandas-9137 | ab20769cef69dd842010f6e35e207ce0ba2df226 | TST: wb test failing
cc @jnmclarty
can you take a look?
```
FSS.S
======================================================================
FAIL: test_wdi_download (pandas.io.tests.test_wb.TestWB)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/jreback/pandas/pandas/util/testing.py", line 1441, in wrapper
return t(*args, **kwargs)
File "/Users/jreback/pandas/pandas/io/tests/test_wb.py", line 48, in test_wdi_download
assert_frame_equal(result, pandas.DataFrame(expected))
File "/Users/jreback/pandas/pandas/util/testing.py", line 730, in assert_frame_equal
check_exact=check_exact)
File "/Users/jreback/pandas/pandas/util/testing.py", line 674, in assert_series_equal
assert_almost_equal(left.values, right.values, check_less_precise)
File "das/src/testing.pyx", line 58, in pandas._testing.assert_almost_equal (pandas/src/testing.c:2740)
File "das/src/testing.pyx", line 93, in pandas._testing.assert_almost_equal (pandas/src/testing.c:1825)
File "das/src/testing.pyx", line 140, in pandas._testing.assert_almost_equal (pandas/src/testing.c:2572)
AssertionError: expected 39682.47225 but got 39677.30176, with decimal 5
----------------------------------------------------------------------
Ran 5 tests in 7.469s
```
| Will do, late tonight, or tomorrow. I should have just overhauled the initial author's tests. Pandas should be testing that data comes back. We don't need to make sure that the right data comes back.
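A minimal sketch of that direction, asserting only that data comes back at all (the indicator and countries here are illustrative, not necessarily what the real test pins down):
```python
from pandas.io import wb

# Assert the download succeeds and is non-empty, rather than comparing
# against exact GDP figures that the World Bank periodically revises.
result = wb.download(indicator="NY.GDP.PCAP.KD",
                     country=["US", "CA", "MX"], start=2003, end=2004)
assert not result.empty
```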
| 2014-12-23T01:41:05Z | [] | [] |
Traceback (most recent call last):
File "/Users/jreback/pandas/pandas/util/testing.py", line 1441, in wrapper
return t(*args, **kwargs)
File "/Users/jreback/pandas/pandas/io/tests/test_wb.py", line 48, in test_wdi_download
assert_frame_equal(result, pandas.DataFrame(expected))
File "/Users/jreback/pandas/pandas/util/testing.py", line 730, in assert_frame_equal
check_exact=check_exact)
File "/Users/jreback/pandas/pandas/util/testing.py", line 674, in assert_series_equal
assert_almost_equal(left.values, right.values, check_less_precise)
File "das/src/testing.pyx", line 58, in pandas._testing.assert_almost_equal (pandas/src/testing.c:2740)
File "das/src/testing.pyx", line 93, in pandas._testing.assert_almost_equal (pandas/src/testing.c:1825)
File "das/src/testing.pyx", line 140, in pandas._testing.assert_almost_equal (pandas/src/testing.c:2572)
AssertionError: expected 39682.47225 but got 39677.30176, with decimal 5
| 15,669 |
||||
pandas-dev/pandas | pandas-dev__pandas-9289 | fda50121453f76142b00ff57b017b8a3ef692f69 | diff --git a/doc/source/whatsnew/v0.16.0.txt b/doc/source/whatsnew/v0.16.0.txt
--- a/doc/source/whatsnew/v0.16.0.txt
+++ b/doc/source/whatsnew/v0.16.0.txt
@@ -101,6 +101,7 @@ Enhancements
- Added ``Timestamp.to_datetime64()`` to complement ``Timedelta.to_timedelta64()`` (:issue:`9255`)
- ``tseries.frequencies.to_offset()`` now accepts ``Timedelta`` as input (:issue:`9064`)
+- ``Timedelta`` will now accept nanoseconds keyword in constructor (:issue:`9273`)
Performance
~~~~~~~~~~~
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -1722,10 +1722,11 @@ class Timedelta(_Timedelta):
kwargs = dict([ (k, _to_py_int_float(v)) for k, v in iteritems(kwargs) ])
try:
- value = timedelta(**kwargs)
+ nano = kwargs.pop('nanoseconds',0)
+ value = convert_to_timedelta64(timedelta(**kwargs),'ns',False) + nano
except TypeError as e:
raise ValueError("cannot construct a TimeDelta from the passed arguments, allowed keywords are "
- "[days, seconds, microseconds, milliseconds, minutes, hours, weeks]")
+ "[weeks, days, hours, minutes, seconds, milliseconds, microseconds, nanoseconds]")
if isinstance(value, Timedelta):
value = value.value
| pd.Timedelta constructor should accept nanoseconds attribute in the arguments
xref #9226
Noticing an inconsistency: pd.Timedelta doesn't allow nanoseconds in its constructor, but its components list includes nanoseconds. Here is what I can reproduce (using the then-current master on OS X 10.10.1).
Example:
```
Python 2.7.9 (v2.7.9:648dcafa7e5f, Dec 10 2014, 10:10:46)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from pandas.tslib import Timedelta
>>> td = Timedelta(nanoseconds=1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/tslib.pyx", line 1723, in pandas.tslib.Timedelta.__new__ (pandas/tslib.c:29743)
raise ValueError("cannot construct a TimeDelta from the passed arguments, allowed keywords
are " [days, seconds, microseconds, milliseconds, minutes, hours, weeks]
>>> td=Timedelta(seconds=1)
>>> td.components._fields
('days', 'hours', 'minutes', 'seconds', 'milliseconds', 'microseconds', 'nanoseconds')
```
| This is ONLY for an actual passed argument of 'nanoseconds'. String parsing/interp is good.
```
In [3]: pd.Timedelta('1ns').components
Out[3]: Components(days=0, hours=0, minutes=0, seconds=0, milliseconds=0, microseconds=0, nanoseconds=1)
```
@jreback I have changed the title accordingly. The Timedelta constructor fails to accept nanoseconds when supplied as a keyword argument, but otherwise pd.Timedelta does support nanoseconds. Also, the default resolution in the constructor is nanoseconds. For example:
In [7]: t4 = Timedelta(100)
In [8]: t4.components
Out[8]: Components(days=0, hours=0, minutes=0, seconds=0, milliseconds=0, microseconds=0, nanoseconds=100)
I am testing a fix for this. will submit a PR
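With the patch above applied, a quick check of the new behavior:
```python
from pandas import Timedelta

# `nanoseconds` is accepted alongside the timedelta() keywords, and is added
# on top of the nanosecond-resolution value built from the rest.
td = Timedelta(days=1, nanoseconds=1)
assert td.components.nanoseconds == 1
assert Timedelta(nanoseconds=1) == Timedelta("1ns")
```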
Moved DateOffset nano related issue to a separate one here #9284 as it seems like a separate issue and not related to this one.
| 2015-01-18T05:56:43Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/tslib.pyx", line 1723, in pandas.tslib.Timedelta.__new__ (pandas/tslib.c:29743)
| 15,683 |
|||
pandas-dev/pandas | pandas-dev__pandas-9358 | 224a66d708579a081cf139659217e88b576f8b47 | diff --git a/pandas/io/data.py b/pandas/io/data.py
--- a/pandas/io/data.py
+++ b/pandas/io/data.py
@@ -171,7 +171,15 @@ def _retry_read_url(url, retry_count, pause, name):
# return 2 rows for the most recent business day
if len(rs) > 2 and rs.index[-1] == rs.index[-2]: # pragma: no cover
rs = rs[:-1]
- return rs
+
+ #Get rid of unicode characters in index name.
+ try:
+ rs.index.name = rs.index.name.decode('unicode_escape').encode('ascii', 'ignore')
+ except AttributeError:
+ #Python 3 string has no decode method.
+ rs.index.name = rs.index.name.encode('ascii', 'ignore').decode()
+
+ return rs
raise IOError("after %d tries, %s did not "
"return a 200 for url %r" % (retry_count, name, url))
@@ -686,7 +694,7 @@ def _option_frames_from_url(self, url):
if not hasattr(self, 'underlying_price'):
try:
- self.underlying_price, self.quote_time = self._get_underlying_price(url)
+ self.underlying_price, self.quote_time = self._underlying_price_and_time_from_url(url)
except IndexError:
self.underlying_price, self.quote_time = np.nan, np.nan
@@ -701,23 +709,38 @@ def _option_frames_from_url(self, url):
return {'calls': calls, 'puts': puts}
- def _get_underlying_price(self, url):
+ def _underlying_price_and_time_from_url(self, url):
root = self._parse_url(url)
- underlying_price = float(root.xpath('.//*[@class="time_rtq_ticker Fz-30 Fw-b"]')[0]\
- .getchildren()[0].text)
+ underlying_price = self._underlying_price_from_root(root)
+ quote_time = self._quote_time_from_root(root)
+ return underlying_price, quote_time
+
+ @staticmethod
+ def _underlying_price_from_root(root):
+ underlying_price = root.xpath('.//*[@class="time_rtq_ticker Fz-30 Fw-b"]')[0]\
+ .getchildren()[0].text
+ underlying_price = underlying_price.replace(',', '') #GH11
+ try:
+ underlying_price = float(underlying_price)
+ except ValueError:
+ underlying_price = np.nan
+
+ return underlying_price
+
+ @staticmethod
+ def _quote_time_from_root(root):
#Gets the time of the quote, note this is actually the time of the underlying price.
try:
quote_time_text = root.xpath('.//*[@class="time_rtq Fz-m"]')[0].getchildren()[1].getchildren()[0].text
##TODO: Enable timezone matching when strptime can match EST with %Z
quote_time_text = quote_time_text.split(' ')[0]
quote_time = dt.datetime.strptime(quote_time_text, "%I:%M%p")
-
quote_time = quote_time.replace(year=CUR_YEAR, month=CUR_MONTH, day=CUR_DAY)
except ValueError:
quote_time = np.nan
- return underlying_price, quote_time
+ return quote_time
def _get_option_data(self, expiry, name):
frame_name = '_frames' + self._expiry_to_string(expiry)
| Yahoo options DataReader can't parse underlying prices with commas
I ran into this problem retrieving option data for the SPX settlement index:
```
>>> options_object = web.Options('^spxpm', 'yahoo')
>>> options_object.get_call_data( expiry=datetime.date(2014, 12, 20) )
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/pollyp/anaconda/lib/python2.7/site-packages/pandas/io/data.py", line 785, in get_call_data
return self._get_data_in_date_range(expiry, call=True, put=False)
File "/Users/pollyp/anaconda/lib/python2.7/site-packages/pandas/io/data.py", line 1098, in _get_data_in_date_range
frame = self._get_option_data(expiry=expiry_date, name=name)
File "/Users/pollyp/anaconda/lib/python2.7/site-packages/pandas/io/data.py", line 717, in _get_option_data
frames = self._get_option_frames_from_yahoo(expiry)
File "/Users/pollyp/anaconda/lib/python2.7/site-packages/pandas/io/data.py", line 655, in _get_option_frames_from_yahoo
option_frames = self._option_frames_from_url(url)
File "/Users/pollyp/anaconda/lib/python2.7/site-packages/pandas/io/data.py", line 684, in _option_frames_from_url
self.underlying_price, self.quote_time = self._get_underlying_price(url)
File "/Users/pollyp/anaconda/lib/python2.7/site-packages/pandas/io/data.py", line 696, in _get_underlying_price
.getchildren()[0].text)
ValueError: invalid literal for float(): 2,071.92
```
Here's my show_versions output:
```
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.8.final.0
python-bits: 64
OS: Darwin
OS-release: 13.4.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: None
pandas: 0.15.1
nose: 1.3.4
Cython: 0.21
numpy: 1.9.0
scipy: 0.14.0
statsmodels: 0.5.0
IPython: 2.2.0
sphinx: 1.2.3
patsy: 0.3.0
dateutil: 2.2
pytz: 2014.7
bottleneck: None
tables: 3.1.1
numexpr: 2.3.1
matplotlib: 1.4.0
openpyxl: 1.8.5
xlrd: 0.9.3
xlwt: 0.7.5
xlsxwriter: 0.5.7
lxml: 3.4.0
bs4: 4.3.2
html5lib: 0.9999-dev
httplib2: None
apiclient: None
rpy2: None
sqlalchemy: 0.9.7
pymysql: None
psycopg2: None
```
| pull-requests to fix are welcome!
Excellent! I'll get it in this weekend.
Experiencing the same issue also on the S&P.
```
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.8.final.0
python-bits: 64
OS: Linux
OS-release: 3.16.0-28-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
pandas: 0.15.2
nose: 1.3.4
Cython: None
numpy: 1.8.2
scipy: 0.14.0
statsmodels: None
IPython: 2.3.0
sphinx: 1.2.2
patsy: None
dateutil: 2.2
pytz: 2014.7
bottleneck: None
tables: None
numexpr: None
matplotlib: 1.4.2
openpyxl: None
xlrd: None
xlwt: 0.7.5
xlsxwriter: None
lxml: 3.3.6
bs4: 4.3.2
html5lib: 0.999
httplib2: 0.9
apiclient: None
rpy2: 2.4.3
sqlalchemy: None
pymysql: None
psycopg2: None
```
| 2015-01-26T16:03:18Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/pollyp/anaconda/lib/python2.7/site-packages/pandas/io/data.py", line 785, in get_call_data
return self._get_data_in_date_range(expiry, call=True, put=False)
File "/Users/pollyp/anaconda/lib/python2.7/site-packages/pandas/io/data.py", line 1098, in _get_data_in_date_range
frame = self._get_option_data(expiry=expiry_date, name=name)
File "/Users/pollyp/anaconda/lib/python2.7/site-packages/pandas/io/data.py", line 717, in _get_option_data
frames = self._get_option_frames_from_yahoo(expiry)
File "/Users/pollyp/anaconda/lib/python2.7/site-packages/pandas/io/data.py", line 655, in _get_option_frames_from_yahoo
option_frames = self._option_frames_from_url(url)
File "/Users/pollyp/anaconda/lib/python2.7/site-packages/pandas/io/data.py", line 684, in _option_frames_from_url
self.underlying_price, self.quote_time = self._get_underlying_price(url)
File "/Users/pollyp/anaconda/lib/python2.7/site-packages/pandas/io/data.py", line 696, in _get_underlying_price
.getchildren()[0].text)
ValueError: invalid literal for float(): 2,071.92
| 15,692 |
|||
pandas-dev/pandas | pandas-dev__pandas-9597 | 1fab6fc4d0242a97c51f0edd7e769087e35899e2 | diff --git a/doc/source/whatsnew/v0.16.0.txt b/doc/source/whatsnew/v0.16.0.txt
--- a/doc/source/whatsnew/v0.16.0.txt
+++ b/doc/source/whatsnew/v0.16.0.txt
@@ -521,7 +521,7 @@ Bug Fixes
- ``SparseSeries`` and ``SparsePanel`` now accept zero argument constructors (same as their non-sparse counterparts) (:issue:`9272`).
-
+- Regression in merging Categoricals and object dtypes (:issue:`9426`)
- Bug in ``read_csv`` with buffer overflows with certain malformed input files (:issue:`9205`)
- Bug in groupby MultiIndex with missing pair (:issue:`9049`, :issue:`9344`)
- Fixed bug in ``Series.groupby`` where grouping on ``MultiIndex`` levels would ignore the sort argument (:issue:`9444`)
diff --git a/pandas/core/common.py b/pandas/core/common.py
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -1146,7 +1146,9 @@ def _maybe_promote(dtype, fill_value=np.nan):
dtype = np.object_
# in case we have a string that looked like a number
- if issubclass(np.dtype(dtype).type, compat.string_types):
+ if is_categorical_dtype(dtype):
+ dtype = dtype
+ elif issubclass(np.dtype(dtype).type, compat.string_types):
dtype = np.object_
return dtype, fill_value
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -4327,8 +4327,9 @@ def dtype(self):
if not self.needs_filling:
return self.block.dtype
else:
- return np.dtype(com._maybe_promote(self.block.dtype,
- self.block.fill_value)[0])
+ return com._get_dtype(com._maybe_promote(self.block.dtype,
+ self.block.fill_value)[0])
+
return self._dtype
@cache_readonly
| Merge fails when dataframe contains categoricals
Trying to perform a left merge between two dataframes using a column of type object. If I include categoricals in the right dataframe, I get the following error. Trying to reproduce with a toy dataset but no luck so far.
``` python
out = pd.merge(left, right, how='left', left_on='left_id', right_on='right_id')
Traceback (most recent call last):
File ".../pandas/tools/merge.py", line 39, in merge return op.get_result()
File ".../pandas/tools/merge.py", line 201, in get_result concat_axis=0, copy=self.copy)
File ".../pandas/core/internals.py", line 4046, in concatenate_block_managers for placement, join_units in concat_plan]
File ".../pandas/core/internals.py", line 4135, in concatenate_join_units empty_dtype, upcasted_na = get_empty_dtype_and_na(join_units)
File ".../pandas/core/internals.py", line 4074, in get_empty_dtype_and_na dtypes[i] = unit.dtype
File ".../pandas/src/properties.pyx", line 34, in pandas.lib.cache_readonly.__get__ (pandas/lib.c:40664)
File ".../pandas/core/internals.py", line 4349, in dtype self.block.fill_value)[0])
File ".../pandas/core/common.py", line 1128, in _maybe_promote if issubclass(np.dtype(dtype).type, compat.string_types):
TypeError: data type not understood
```
| pd.show_versions()
df.info() and df.head() for each frame
``` python
df = pd.merge(left, right, how='left', left_on='b', right_on='c')
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.9.final.0
python-bits: 64
OS: Linux
OS-release: 3.13.0-45-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
pandas: 0.15.2
nose: 1.3.4
Cython: 0.21
numpy: 1.9.1
scipy: 0.15.1
statsmodels: None
IPython: 2.3.1
sphinx: 1.2.3
patsy: 0.3.0
dateutil: 1.5
pytz: 2014.9
bottleneck: None
tables: 3.1.1
numexpr: 2.3.1
matplotlib: 1.4.2
openpyxl: 1.8.5
xlrd: 0.9.3
xlwt: 0.7.5
xlsxwriter: 0.5.7
lxml: 3.4.0
bs4: 4.3.2
html5lib: None
httplib2: None
apiclient: None
rpy2: 2.4.4
sqlalchemy: 0.9.7
pymysql: None
psycopg2: None
None
print left.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 29040 entries, 0 to 29039
Data columns (total 2 columns):
a 29040 non-null object
b 29040 non-null object
dtypes: object(2)
memory usage: 680.6+ KB
None
print left.head()
a b
0 00640000008PbqmAAC 0013000000CBGKbAAP
1 00640000008PbqmAAC 0013000000CBGKbAAP
2 00640000008PbqmAAC 0013000000CBGKbAAP
3 00640000008PbqmAAC 0013000000CBGKbAAP
4 00640000008PbqmAAC 0013000000CBGKbAAP
print right.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 2952 entries, 0 to 2951
Data columns (total 2 columns):
c 2952 non-null object
d 2952 non-null category
dtypes: category(1), object(1)
memory usage: 49.2+ KB
None
print right.head()
c d
0 0014000000G3eszAAB null
1 0014000000G3TTVAA3 null
2 0014000000G4H6yAAF null
3 0014000000G4HpmAAF null
4 0014000000G4IR8AAN null
```
and you merging in the categorical column? iirc I think we allow this kind of object/cat merging (as the merge column) but would need a specifc example to see what the issue is
I'm merging on an object column and merging in a category column.
I have a reproducible example now:
``` python
right = pd.DataFrame({'c': {0: 'a', 1: 'b', 2: 'c', 3: 'd', 4: 'e'},
'd': {0: 'null', 1: 'null', 2: 'null', 3: 'null', 4: 'null'}})
right['d'] = right['d'].astype('category')
left = pd.DataFrame({'a': {0: 'f', 1: 'f', 2: 'f', 3: 'f', 4: 'f'},
'b': {0: 'g', 1: 'g', 2: 'g', 3: 'g', 4: 'g'}})
df = pd.merge(left, right, how='left', left_on='b', right_on='c')
```
hmm I don't think this is tested (only with concat). ok, marking as a bug. I think pretty easy to resolve though. You are welcome to dig in if you'd like.
I just ran into this in production code. Any hints on how this could be fixed? I'd gladly try.
FYI, I don't get this bug in 0.15.1
@lminer: Confirmed here, downgrading helped.
@lminer, thanks! Confirmed as well, working fine with 0.15.1.
| 2015-03-06T00:32:55Z | [] | [] |
Traceback (most recent call last):
File ".../pandas/tools/merge.py", line 39, in merge return op.get_result()
File ".../pandas/tools/merge.py", line 201, in get_result concat_axis=0, copy=self.copy)
File ".../pandas/core/internals.py", line 4046, in concatenate_block_managers for placement, join_units in concat_plan]
File ".../pandas/core/internals.py", line 4135, in concatenate_join_units empty_dtype, upcasted_na = get_empty_dtype_and_na(join_units)
File ".../pandas/core/internals.py", line 4074, in get_empty_dtype_and_na dtypes[i] = unit.dtype
File ".../pandas/src/properties.pyx", line 34, in pandas.lib.cache_readonly.__get__ (pandas/lib.c:40664)
File ".../pandas/core/internals.py", line 4349, in dtype self.block.fill_value)[0])
File ".../pandas/core/common.py", line 1128, in _maybe_promote if issubclass(np.dtype(dtype).type, compat.string_types):
TypeError: data type not understood
| 15,710 |
|||
pantsbuild/pants | pantsbuild__pants-10035 | 7268985491bb935c6c9845726ff91981211910fe | diff --git a/src/python/pants/bin/daemon_pants_runner.py b/src/python/pants/bin/daemon_pants_runner.py
--- a/src/python/pants/bin/daemon_pants_runner.py
+++ b/src/python/pants/bin/daemon_pants_runner.py
@@ -20,7 +20,7 @@
)
from pants.init.util import clean_global_runtime_state
from pants.option.options_bootstrapper import OptionsBootstrapper
-from pants.pantsd.service.scheduler_service import SchedulerService
+from pants.pantsd.pants_daemon_core import PantsDaemonCore
from pants.util.contextutil import argv_as, hermetic_environment_as, stdio_as
logger = logging.getLogger(__name__)
@@ -33,9 +33,9 @@ class ExclusiveRequestTimeout(Exception):
class DaemonPantsRunner(RawFdRunner):
"""A RawFdRunner (callable) that will be called for each client request to Pantsd."""
- def __init__(self, scheduler_service: SchedulerService) -> None:
+ def __init__(self, core: PantsDaemonCore) -> None:
super().__init__()
- self._scheduler_service = scheduler_service
+ self._core = core
self._run_lock = Lock()
@staticmethod
@@ -139,7 +139,7 @@ def _run(self, working_dir: str) -> ExitCode:
# Run using the pre-warmed Session.
with self._stderr_logging(global_bootstrap_options):
try:
- scheduler = self._scheduler_service.prepare()
+ scheduler = self._core.prepare_scheduler(options_bootstrapper)
runner = LocalPantsRunner.create(
os.environ, options_bootstrapper, scheduler=scheduler
)
diff --git a/src/python/pants/bin/local_pants_runner.py b/src/python/pants/bin/local_pants_runner.py
--- a/src/python/pants/bin/local_pants_runner.py
+++ b/src/python/pants/bin/local_pants_runner.py
@@ -83,7 +83,7 @@ def _init_graph_session(
native = Native()
native.set_panic_handler()
graph_scheduler_helper = scheduler or EngineInitializer.setup_legacy_graph(
- native, options_bootstrapper, build_config
+ options_bootstrapper, build_config
)
global_scope = options.for_global_scope()
diff --git a/src/python/pants/engine/internals/native.py b/src/python/pants/engine/internals/native.py
--- a/src/python/pants/engine/internals/native.py
+++ b/src/python/pants/engine/internals/native.py
@@ -122,6 +122,7 @@ class Native(metaclass=SingletonMetaclass):
def __init__(self):
self.externs = Externs(self.lib)
self.lib.externs_set(self.externs)
+ self._executor = self.lib.PyExecutor()
class BinaryLocationError(Exception):
pass
@@ -188,11 +189,20 @@ def override_thread_logging_destination_to_just_stderr(self):
def match_path_globs(self, path_globs: PathGlobs, paths: Iterable[str]) -> bool:
return cast(bool, self.lib.match_path_globs(path_globs, tuple(paths)))
- def nailgun_server_await_bound(self, scheduler, nailgun_server) -> int:
- return cast(int, self.lib.nailgun_server_await_bound(scheduler, nailgun_server))
+ def nailgun_server_await_bound(self, nailgun_server) -> int:
+ """Blocks until the server has bound a port, and then returns the port.
- def new_nailgun_server(self, scheduler, port: int, runner: RawFdRunner):
- return self.lib.nailgun_server_create(scheduler, port, runner)
+ Returns the actual port the server has successfully bound to, or raises an exception if the
+ server has exited.
+ """
+ return cast(int, self.lib.nailgun_server_await_bound(self._executor, nailgun_server))
+
+ def new_nailgun_server(self, port: int, runner: RawFdRunner):
+ """Creates a nailgun server with a requested port.
+
+ Returns the server and the actual port it bound to.
+ """
+ return self.lib.nailgun_server_create(self._executor, port, runner)
def new_tasks(self):
return self.lib.PyTasks()
@@ -263,6 +273,7 @@ def new_scheduler(
)
return self.lib.scheduler_create(
+ self._executor,
tasks,
engine_types,
# Project tree.
diff --git a/src/python/pants/engine/internals/scheduler.py b/src/python/pants/engine/internals/scheduler.py
--- a/src/python/pants/engine/internals/scheduler.py
+++ b/src/python/pants/engine/internals/scheduler.py
@@ -19,7 +19,6 @@
PathGlobsAndRoot,
)
from pants.engine.interactive_runner import InteractiveProcessRequest, InteractiveProcessResult
-from pants.engine.internals.native import RawFdRunner
from pants.engine.internals.nodes import Return, Throw
from pants.engine.rules import Rule, RuleIndex, TaskRule
from pants.engine.selectors import Params
@@ -286,21 +285,6 @@ def lease_files_in_graph(self, session):
def garbage_collect_store(self):
self._native.lib.garbage_collect_store(self._scheduler)
- def nailgun_server_await_bound(self, nailgun_server) -> int:
- """Blocks until the server has bound a port, and then returns the port.
-
- Returns the actual port the server has successfully bound to, or raises an exception if the
- server has exited.
- """
- return cast(int, self._native.nailgun_server_await_bound(self._scheduler, nailgun_server))
-
- def new_nailgun_server(self, port_requested: int, runner: RawFdRunner):
- """Creates a nailgun server with a requested port.
-
- Returns the server and the actual port it bound to.
- """
- return self._native.new_nailgun_server(self._scheduler, port_requested, runner)
-
def new_session(
self,
zipkin_trace_v2: bool,
diff --git a/src/python/pants/init/engine_initializer.py b/src/python/pants/init/engine_initializer.py
--- a/src/python/pants/init/engine_initializer.py
+++ b/src/python/pants/init/engine_initializer.py
@@ -302,11 +302,10 @@ def _make_goal_map_from_rules(rules):
@staticmethod
def setup_legacy_graph(
- native: Native,
- options_bootstrapper: OptionsBootstrapper,
- build_configuration: BuildConfiguration,
+ options_bootstrapper: OptionsBootstrapper, build_configuration: BuildConfiguration,
) -> LegacyGraphScheduler:
"""Construct and return the components necessary for LegacyBuildGraph construction."""
+ native = Native()
build_root = get_buildroot()
bootstrap_options = options_bootstrapper.bootstrap_options.for_global_scope()
use_gitignore = bootstrap_options.pants_ignore_use_gitignore
diff --git a/src/python/pants/init/options_initializer.py b/src/python/pants/init/options_initializer.py
--- a/src/python/pants/init/options_initializer.py
+++ b/src/python/pants/init/options_initializer.py
@@ -130,7 +130,7 @@ def add(absolute_path, include=False):
return pants_ignore
@staticmethod
- def compute_pantsd_invalidation_globs(buildroot, bootstrap_options):
+ def compute_pantsd_invalidation_globs(buildroot, bootstrap_options, absolute_pidfile):
"""Computes the merged value of the `--pantsd-invalidation-globs` option.
Combines --pythonpath and --pants-config-files files that are in {buildroot} dir with those
@@ -141,6 +141,7 @@ def compute_pantsd_invalidation_globs(buildroot, bootstrap_options):
# Globs calculated from the sys.path and other file-like configuration need to be sanitized
# to relative globs (where possible).
potentially_absolute_globs = (
+ absolute_pidfile,
*sys.path,
*bootstrap_options.pythonpath,
*bootstrap_options.pants_config_files,
diff --git a/src/python/pants/option/global_options.py b/src/python/pants/option/global_options.py
--- a/src/python/pants/option/global_options.py
+++ b/src/python/pants/option/global_options.py
@@ -152,9 +152,8 @@ class GlobalOptions(Subsystem):
def register_bootstrap_options(cls, register):
"""Register bootstrap options.
- "Bootstrap options" are a small set of options whose values are useful when registering other
- options. Therefore we must bootstrap them early, before other options are registered, let
- alone parsed.
+ "Bootstrap options" are the set of options necessary to create a Scheduler. If an option is
+ not consumed during creation of a Scheduler, it should be in `register_options` instead.
Bootstrap option values can be interpolated into the config file, and can be referenced
programmatically in registration code, e.g., as register.bootstrap.pants_workdir.
@@ -223,6 +222,7 @@ def register_bootstrap_options(cls, register):
"--pants-version",
advanced=True,
default=pants_version(),
+ daemon=True,
help="Use this pants version. Note Pants code only uses this to verify that you are "
"using the requested version, as Pants cannot dynamically change the version it "
"is using once the program is already running. This option is useful to set in "
@@ -323,6 +323,7 @@ def register_bootstrap_options(cls, register):
advanced=True,
metavar="<dir>",
default=os.path.join(buildroot, ".pants.d"),
+ daemon=True,
help="Write intermediate output files to this dir.",
)
register(
@@ -330,6 +331,7 @@ def register_bootstrap_options(cls, register):
advanced=True,
metavar="<dir>",
default=None,
+ daemon=True,
help="When set, a base directory in which to store `--pants-workdir` contents. "
"If this option is a set, the workdir will be created as symlink into a "
"per-workspace subdirectory.",
@@ -352,6 +354,7 @@ def register_bootstrap_options(cls, register):
"--pants-subprocessdir",
advanced=True,
default=os.path.join(buildroot, ".pids"),
+ daemon=True,
help="The directory to use for tracking subprocess metadata, if any. This should "
"live outside of the dir used by `--pants-workdir` to allow for tracking "
"subprocesses that outlive the workdir data (e.g. `./pants server`).",
@@ -360,19 +363,30 @@ def register_bootstrap_options(cls, register):
"--pants-config-files",
advanced=True,
type=list,
- daemon=False,
+ # NB: We don't fingerprint the list of config files, because the content of the config
+ # files independently affects fingerprints.
+ fingerprint=False,
default=[get_default_pants_config_file()],
help="Paths to Pants config files.",
)
# TODO: Deprecate the --pantsrc/--pantsrc-files options? This would require being able
# to set extra config file locations in an initial bootstrap config file.
- register("--pantsrc", advanced=True, type=bool, default=True, help="Use pantsrc files.")
+ register(
+ "--pantsrc",
+ advanced=True,
+ type=bool,
+ default=True,
+ # NB: See `--pants-config-files`.
+ fingerprint=False,
+ help="Use pantsrc files.",
+ )
register(
"--pantsrc-files",
advanced=True,
type=list,
metavar="<path>",
- daemon=False,
+ # NB: See `--pants-config-files`.
+ fingerprint=False,
default=["/etc/pantsrc", "~/.pants.rc"],
help=(
"Override config with values from these files, using syntax matching that of "
@@ -389,7 +403,8 @@ def register_bootstrap_options(cls, register):
"--spec-file",
type=list,
dest="spec_files",
- daemon=False,
+ # NB: See `--pants-config-files`.
+ fingerprint=False,
help="Read additional specs from this file (e.g. target addresses or file names). "
"Each spec should be one per line.",
)
@@ -397,7 +412,6 @@ def register_bootstrap_options(cls, register):
"--verify-config",
type=bool,
default=True,
- daemon=False,
advanced=True,
help="Verify that all config file values correspond to known options.",
)
@@ -457,7 +471,6 @@ def register_bootstrap_options(cls, register):
advanced=True,
type=list,
default=[],
- daemon=False,
metavar="<regexp>",
help="Exclude target roots that match these regexes.",
)
@@ -477,6 +490,7 @@ def register_bootstrap_options(cls, register):
"--logdir",
advanced=True,
metavar="<dir>",
+ daemon=True,
help="Write logs to files under this directory.",
)
@@ -485,6 +499,7 @@ def register_bootstrap_options(cls, register):
advanced=True,
type=bool,
default=True,
+ daemon=True,
help=(
"Enables use of the pants daemon (pantsd). pantsd can significantly improve "
"runtime performance by lowering per-run startup cost, and by caching filesystem "
@@ -500,7 +515,6 @@ def register_bootstrap_options(cls, register):
advanced=True,
type=bool,
default=False,
- daemon=False,
help="Enable concurrent runs of pants. Without this enabled, pants will "
"start up all concurrent invocations (e.g. in other terminals) without pantsd. "
"Enabling this option requires parallel pants invocations to block on the first",
@@ -527,7 +541,6 @@ def register_bootstrap_options(cls, register):
advanced=True,
type=float,
default=60.0,
- daemon=False,
help="The maximum amount of time to wait for the invocation to start until "
"raising a timeout exception. "
"Because pantsd currently does not support parallel runs, "
@@ -551,7 +564,6 @@ def register_bootstrap_options(cls, register):
advanced=True,
default=None,
type=dir_option,
- daemon=False,
help="A directory to write execution and rule graphs to as `dot` files. The contents "
"of the directory will be overwritten if any filenames collide.",
)
@@ -559,6 +571,7 @@ def register_bootstrap_options(cls, register):
"--print-exception-stacktrace",
advanced=True,
type=bool,
+ fingerprint=False,
help="Print to console the full exception stack trace if encountered.",
)
@@ -576,7 +589,6 @@ def register_bootstrap_options(cls, register):
type=int,
default=30,
advanced=True,
- daemon=False,
help="Timeout in seconds for URL reads when fetching binary tools from the "
"repos specified by --baseurls.",
)
@@ -607,6 +619,7 @@ def register_bootstrap_options(cls, register):
advanced=True,
type=int,
default=0,
+ daemon=True,
help="The port to bind the pants nailgun server to. Defaults to a random port.",
)
# TODO(#7514): Make this default to 1.0 seconds if stdin is a tty!
@@ -622,6 +635,7 @@ def register_bootstrap_options(cls, register):
"--pantsd-log-dir",
advanced=True,
default=None,
+ daemon=True,
help="The directory to log pantsd output to.",
)
register(
@@ -629,6 +643,7 @@ def register_bootstrap_options(cls, register):
advanced=True,
type=list,
default=[],
+ daemon=True,
help="Filesystem events matching any of these globs will trigger a daemon restart. "
"Pants' own code, plugins, and `--pants-config-files` are inherently invalidated.",
)
@@ -927,7 +942,6 @@ def register_options(cls, register):
type=bool,
default=sys.stdout.isatty(),
recursive=True,
- daemon=False,
help="Set whether log messages are displayed in color.",
)
@@ -943,7 +957,6 @@ def register_options(cls, register):
"--dynamic-ui",
type=bool,
default=sys.stderr.isatty(),
- daemon=False,
help="Display a dynamically-updating console UI as pants runs.",
)
@@ -951,7 +964,6 @@ def register_options(cls, register):
"--v2-ui",
default=False,
type=bool,
- daemon=False,
removal_version="1.31.0.dev0",
removal_hint="Use --dynamic-ui instead.",
help="Whether to show v2 engine execution progress.",
@@ -1003,7 +1015,6 @@ def register_options(cls, register):
"--quiet",
type=bool,
recursive=True,
- daemon=False,
passive=no_v1,
help="Squelches most console output. NOTE: Some tasks default to behaving quietly: "
"inverting this option supports making them noisier than they would be otherwise.",
diff --git a/src/python/pants/option/options.py b/src/python/pants/option/options.py
--- a/src/python/pants/option/options.py
+++ b/src/python/pants/option/options.py
@@ -534,7 +534,11 @@ def for_scope(
return values
def get_fingerprintable_for_scope(
- self, bottom_scope, include_passthru=None, fingerprint_key=None, invert=False
+ self,
+ bottom_scope: str,
+ include_passthru: Optional[bool] = None,
+ fingerprint_key: str = "fingerprint",
+ invert: bool = False,
):
"""Returns a list of fingerprintable (option type, option value) pairs for the given scope.
@@ -544,11 +548,11 @@ def get_fingerprintable_for_scope(
This method also searches enclosing options scopes of `bottom_scope` to determine the set of
fingerprintable pairs.
- :param str bottom_scope: The scope to gather fingerprintable options for.
- :param bool include_passthru: Whether to include passthru args captured by `bottom_scope` in the
- fingerprintable options.
- :param string fingerprint_key: The option kwarg to match against (defaults to 'fingerprint').
- :param bool invert: Whether or not to invert the boolean check for the fingerprint_key value.
+ :param bottom_scope: The scope to gather fingerprintable options for.
+ :param include_passthru: Whether to include passthru args captured by `bottom_scope` in the
+ fingerprintable options.
+ :param fingerprint_key: The option kwarg to match against (defaults to 'fingerprint').
+ :param invert: Whether or not to invert the boolean check for the fingerprint_key value.
:API: public
"""
@@ -562,7 +566,6 @@ def get_fingerprintable_for_scope(
),
)
- fingerprint_key = fingerprint_key or "fingerprint"
fingerprint_default = bool(invert)
pairs = []
@@ -575,7 +578,7 @@ def get_fingerprintable_for_scope(
for (_, kwargs) in sorted(parser.option_registrations_iter()):
if kwargs.get("recursive", False) and not kwargs.get("recursive_root", False):
continue # We only need to fprint recursive options once.
- if kwargs.get(fingerprint_key, fingerprint_default) is not True:
+ if not kwargs.get(fingerprint_key, fingerprint_default):
continue
# Note that we read the value from scope, even if the registration was on an enclosing
# scope, to get the right value for recursive options (and because this mirrors what
diff --git a/src/python/pants/option/options_fingerprinter.py b/src/python/pants/option/options_fingerprinter.py
--- a/src/python/pants/option/options_fingerprinter.py
+++ b/src/python/pants/option/options_fingerprinter.py
@@ -34,7 +34,9 @@ class OptionsFingerprinter:
"""
@classmethod
- def combined_options_fingerprint_for_scope(cls, scope, options, build_graph=None, **kwargs):
+ def combined_options_fingerprint_for_scope(
+ cls, scope, options, build_graph=None, **kwargs
+ ) -> str:
"""Given options and a scope, compute a combined fingerprint for the scope.
:param string scope: The scope to fingerprint.
diff --git a/src/python/pants/pantsd/pants_daemon.py b/src/python/pants/pantsd/pants_daemon.py
--- a/src/python/pants/pantsd/pants_daemon.py
+++ b/src/python/pants/pantsd/pants_daemon.py
@@ -4,30 +4,28 @@
import logging
import os
import sys
-import threading
+import time
from contextlib import contextmanager
-from typing import IO, Iterator
+from typing import IO, Any, Iterator
from setproctitle import setproctitle as set_process_title
from pants.base.build_environment import get_buildroot
-from pants.base.exception_sink import ExceptionSink, SignalHandler
+from pants.base.exception_sink import ExceptionSink
from pants.bin.daemon_pants_runner import DaemonPantsRunner
from pants.engine.internals.native import Native
-from pants.engine.unions import UnionMembership
-from pants.init.engine_initializer import EngineInitializer, LegacyGraphScheduler
+from pants.init.engine_initializer import LegacyGraphScheduler
from pants.init.logging import clear_logging_handlers, init_rust_logger, setup_logging_to_file
-from pants.init.options_initializer import BuildConfigInitializer, OptionsInitializer
+from pants.init.options_initializer import OptionsInitializer
from pants.option.option_value_container import OptionValueContainer
from pants.option.options import Options
from pants.option.options_bootstrapper import OptionsBootstrapper
+from pants.pantsd.pants_daemon_core import PantsDaemonCore
from pants.pantsd.process_manager import PantsDaemonProcessManager
from pants.pantsd.service.fs_event_service import FSEventService
-from pants.pantsd.service.pailgun_service import PailgunService
from pants.pantsd.service.pants_service import PantsServices
from pants.pantsd.service.scheduler_service import SchedulerService
from pants.pantsd.service.store_gc_service import StoreGCService
-from pants.pantsd.watchman import Watchman
from pants.pantsd.watchman_launcher import WatchmanLauncher
from pants.util.contextutil import stdio_as
from pants.util.logging import LogLevel
@@ -77,15 +75,6 @@ def buffer(self):
return self
-class PantsDaemonSignalHandler(SignalHandler):
- def __init__(self, daemon):
- super().__init__()
- self._daemon = daemon
-
- def handle_sigint(self, signum, _frame):
- self._daemon.terminate(include_watchman=False)
-
-
class PantsDaemon(PantsDaemonProcessManager):
"""A daemon that manages PantsService instances."""
@@ -108,52 +97,42 @@ def create(cls, options_bootstrapper) -> "PantsDaemon":
initialize the engine). See the impl of `maybe_launch` for an example
of the intended usage.
"""
+ native = Native()
+ native.override_thread_logging_destination_to_just_pantsd()
+
bootstrap_options = options_bootstrapper.bootstrap_options
bootstrap_options_values = bootstrap_options.for_global_scope()
- build_root = get_buildroot()
- native = Native()
- build_config = BuildConfigInitializer.get(options_bootstrapper)
- legacy_graph_scheduler = EngineInitializer.setup_legacy_graph(
- native, options_bootstrapper, build_config
- )
- # TODO: https://github.com/pantsbuild/pants/issues/3479
- watchman_launcher = WatchmanLauncher.create(bootstrap_options_values)
- watchman_launcher.maybe_launch()
- watchman = watchman_launcher.watchman
- services = cls._setup_services(
- build_root,
- bootstrap_options_values,
- legacy_graph_scheduler,
- native,
- watchman,
- union_membership=UnionMembership(build_config.union_rules()),
+ core = PantsDaemonCore(cls._setup_services)
+
+ server = native.new_nailgun_server(
+ bootstrap_options_values.pantsd_pailgun_port, DaemonPantsRunner(core),
)
return PantsDaemon(
native=native,
- build_root=build_root,
work_dir=bootstrap_options_values.pants_workdir,
log_level=bootstrap_options_values.level,
- services=services,
+ server=server,
+ core=core,
metadata_base_dir=bootstrap_options_values.pants_subprocessdir,
bootstrap_options=bootstrap_options,
)
@staticmethod
def _setup_services(
- build_root: str,
- bootstrap_options: OptionValueContainer,
- legacy_graph_scheduler: LegacyGraphScheduler,
- native: Native,
- watchman: Watchman,
- union_membership: UnionMembership,
+ bootstrap_options: OptionValueContainer, legacy_graph_scheduler: LegacyGraphScheduler,
):
"""Initialize pantsd services.
:returns: A PantsServices instance.
"""
- native.override_thread_logging_destination_to_just_pantsd()
+ build_root = get_buildroot()
+
+ # TODO: https://github.com/pantsbuild/pants/issues/3479
+ watchman_launcher = WatchmanLauncher.create(bootstrap_options)
+ watchman_launcher.maybe_launch()
+ watchman = watchman_launcher.watchman
fs_event_service = (
FSEventService(
watchman, scheduler=legacy_graph_scheduler.scheduler, build_root=build_root
@@ -163,7 +142,9 @@ def _setup_services(
)
invalidation_globs = OptionsInitializer.compute_pantsd_invalidation_globs(
- build_root, bootstrap_options
+ build_root,
+ bootstrap_options,
+ PantsDaemon.metadata_file_path("pantsd", "pid", bootstrap_options.pants_subprocessdir),
)
scheduler_service = SchedulerService(
@@ -171,39 +152,27 @@ def _setup_services(
legacy_graph_scheduler=legacy_graph_scheduler,
build_root=build_root,
invalidation_globs=invalidation_globs,
- union_membership=union_membership,
+ max_memory_usage_pid=os.getpid(),
max_memory_usage_in_bytes=bootstrap_options.pantsd_max_memory_usage,
)
- pailgun_service = PailgunService(
- bootstrap_options.pantsd_pailgun_port,
- DaemonPantsRunner(scheduler_service),
- scheduler_service,
- )
-
store_gc_service = StoreGCService(legacy_graph_scheduler.scheduler)
return PantsServices(
services=tuple(
service
- for service in (
- fs_event_service,
- scheduler_service,
- pailgun_service,
- store_gc_service,
- )
+ for service in (fs_event_service, scheduler_service, store_gc_service,)
if service is not None
),
- port_map=dict(pailgun=pailgun_service.pailgun_port()),
)
def __init__(
self,
native: Native,
- build_root: str,
work_dir: str,
log_level: LogLevel,
- services: PantsServices,
+ server: Any,
+ core: PantsDaemonCore,
metadata_base_dir: str,
bootstrap_options: Options,
):
@@ -211,19 +180,20 @@ def __init__(
NB: A PantsDaemon instance is generally instantiated via `create`.
:param native: A `Native` instance.
- :param build_root: The pants build root.
:param work_dir: The pants work directory.
:param log_level: The log level to use for daemon logging.
- :param services: A registry of services to use in this run.
+ :param server: A native PyNailgunServer instance (not currently a nameable type).
+ :param core: A PantsDaemonCore.
:param metadata_base_dir: The ProcessManager metadata base dir.
:param bootstrap_options: The bootstrap options.
"""
super().__init__(bootstrap_options, daemon_entrypoint=__name__)
self._native = native
- self._build_root = build_root
+ self._build_root = get_buildroot()
self._work_dir = work_dir
self._log_level = log_level
- self._services = services
+ self._server = server
+ self._core = core
self._bootstrap_options = bootstrap_options
self._log_show_rust_3rdparty = (
bootstrap_options.for_global_scope().log_show_rust_3rdparty
@@ -232,31 +202,6 @@ def __init__(
)
self._logger = logging.getLogger(__name__)
- # N.B. This Event is used as nothing more than a convenient atomic flag - nothing waits on it.
- self._kill_switch = threading.Event()
-
- @property
- def is_killed(self):
- return self._kill_switch.is_set()
-
- def shutdown(self, service_thread_map):
- """Gracefully terminate all services and kill the main PantsDaemon loop."""
- with self._services.lifecycle_lock:
- for service, service_thread in service_thread_map.items():
- self._logger.info(f"terminating pantsd service: {service}")
- service.terminate()
- service_thread.join(self.JOIN_TIMEOUT_SECONDS)
- self._logger.info("terminating pantsd")
- self._kill_switch.set()
-
- def terminate(self, include_watchman=True):
- """Terminates pantsd and watchman.
-
- N.B. This should always be called under care of the `lifecycle_lock`.
- """
- super().terminate()
- if include_watchman:
- self.watchman_launcher.terminate()
@staticmethod
def _close_stdio():
@@ -308,71 +253,15 @@ def _pantsd_logging(self) -> Iterator[IO[str]]:
self._logger.debug("logging initialized")
yield log_handler.stream
- @staticmethod
- def _make_thread(service):
- name = f"{service.__class__.__name__}Thread"
-
- def target():
- Native().override_thread_logging_destination_to_just_pantsd()
- service.run()
-
- t = threading.Thread(target=target, name=name)
- t.daemon = True
- return t
-
- def _run_services(self, pants_services):
- """Service runner main loop."""
- if not pants_services.services:
- self._logger.critical("no services to run, bailing!")
- return
-
- for service in pants_services.services:
- self._logger.info(f"setting up service {service}")
- service.setup(self._services)
-
- service_thread_map = {
- service: self._make_thread(service) for service in pants_services.services
- }
-
- # Start services.
- for service, service_thread in service_thread_map.items():
- self._logger.info(f"starting service {service}")
- try:
- service_thread.start()
- except (RuntimeError, FSEventService.ServiceError):
- self.shutdown(service_thread_map)
- raise PantsDaemon.StartupFailure(
- f"service {service} failed to start, shutting down!"
- )
-
- # Once all services are started, write our pid and notify the SchedulerService to start
- # watching it.
- self._initialize_pid()
-
- # Monitor services.
- while not self.is_killed:
- for service, service_thread in service_thread_map.items():
- if not service_thread.is_alive():
- self.shutdown(service_thread_map)
- raise PantsDaemon.RuntimeFailure(
- f"service failure for {service}, shutting down!"
- )
- else:
- # Avoid excessive CPU utilization.
- service_thread.join(self.JOIN_TIMEOUT_SECONDS)
-
- def _write_named_sockets(self, socket_map):
- """Write multiple named sockets using a socket mapping."""
- for socket_name, socket_info in socket_map.items():
- self.write_named_socket(socket_name, socket_info)
+ def _write_nailgun_port(self):
+ """Write the nailgun port to a well known file."""
+ self.write_socket(self._native.nailgun_server_await_bound(self._server))
def _initialize_pid(self):
- """Writes out our pid and metadata, and begin watching it for validity.
+ """Writes out our pid and metadata.
- Once written and watched, does a one-time read of the pid to confirm that we haven't raced
- another process starting.
-
- All services must already have been initialized before this is called.
+ Once written, does a one-time read of the pid to confirm that we haven't raced another
+ process starting.
"""
# Write the pidfile.
@@ -381,23 +270,7 @@ def _initialize_pid(self):
self.write_metadata_by_name(
"pantsd", self.FINGERPRINT_KEY, ensure_text(self.options_fingerprint)
)
- scheduler_services = [s for s in self._services.services if isinstance(s, SchedulerService)]
- for scheduler_service in scheduler_services:
- scheduler_service.begin_monitoring_memory_usage(pid)
-
- # If we can, add the pidfile to watching via the scheduler.
pidfile_absolute = self._metadata_file_path("pantsd", "pid")
- if pidfile_absolute.startswith(self._build_root):
- for scheduler_service in scheduler_services:
- scheduler_service.add_invalidation_glob(
- os.path.relpath(pidfile_absolute, self._build_root)
- )
- else:
- logging.getLogger(__name__).warning(
- "Not watching pantsd pidfile because subprocessdir is outside of buildroot. Having "
- "subprocessdir be a child of buildroot (as it is by default) may help avoid stray "
- "pantsd processes."
- )
# Finally, once watched, confirm that we didn't race another process.
try:
@@ -440,11 +313,19 @@ def run_sync(self):
# Set the process name in ps output to 'pantsd' vs './pants compile src/etc:: -ldebug'.
set_process_title(f"pantsd [{self._build_root}]")
- # Write service socket information to .pids.
- self._write_named_sockets(self._services.port_map)
+ # Write our pid and the server's port to .pids. Order matters a bit here, because
+ # technically all that is necessary to connect is the port, and Services are lazily
+ # initialized by the core when a connection is established. Our pid needs to be on
+ # disk before that happens.
+ self._initialize_pid()
+ self._write_nailgun_port()
+
+ # Check periodically whether the core is valid, and exit if it is not.
+ while self._core.is_valid():
+ time.sleep(self.JOIN_TIMEOUT_SECONDS)
- # Enter the main service runner loop.
- self._run_services(self._services)
+ # We're exiting: join the server to avoid interrupting ongoing runs.
+ # TODO: This will happen via #8200.
def launch():
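The new `run_sync` flow above reduces the daemon's main thread to three steps: write the pid, write the port, then poll the core for validity. A minimal standalone sketch of that pattern (the `TinyDaemon` class, `core.port()` accessor, and metadata layout are hypothetical, not the actual pantsd code):

```
import os
import time
from pathlib import Path


class TinyDaemon:
    JOIN_TIMEOUT_SECONDS = 1

    def __init__(self, core, metadata_dir: str):
        self._core = core  # assumed to expose is_valid() -> bool and port() -> int
        self._metadata_dir = Path(metadata_dir)

    def run_sync(self) -> None:
        # Order matters: the pid must be on disk before a client connects,
        # because services are lazily initialized on first connection.
        (self._metadata_dir / "pid").write_text(str(os.getpid()))
        (self._metadata_dir / "socket").write_text(str(self._core.port()))

        # Park the main thread, periodically checking core validity.
        while self._core.is_valid():
            time.sleep(self.JOIN_TIMEOUT_SECONDS)
```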
diff --git a/src/python/pants/pantsd/pants_daemon_client.py b/src/python/pants/pantsd/pants_daemon_client.py
--- a/src/python/pants/pantsd/pants_daemon_client.py
+++ b/src/python/pants/pantsd/pants_daemon_client.py
@@ -36,9 +36,7 @@ def maybe_launch(self) -> "PantsDaemonClient.Handle":
else:
# We're already launched.
return PantsDaemonClient.Handle(
- self.await_pid(10),
- self.read_named_socket("pailgun", int),
- self._metadata_base_dir,
+ self.await_pid(10), self.await_socket(10), self._metadata_base_dir,
)
def restart(self) -> "PantsDaemonClient.Handle":
@@ -55,8 +53,8 @@ def _launch(self) -> "PantsDaemonClient.Handle":
self.terminate()
self._logger.debug("launching pantsd")
self.daemon_spawn()
- # Wait up to 60 seconds for pantsd to write its pidfile.
+ # Wait up to 60 seconds each for pantsd to write its pidfile and open its socket.
pantsd_pid = self.await_pid(60)
- listening_port = self.read_named_socket("pailgun", int)
+ listening_port = self.await_socket(60)
self._logger.debug(f"pantsd is running at pid {self.pid}, pailgun port is {listening_port}")
return self.Handle(pantsd_pid, listening_port, self._metadata_base_dir)
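Both client branches now rely on the same deadline-based polling for the pid and socket metadata files. Roughly, such an "await" helper could look like the sketch below (`await_metadata` is a hypothetical stand-in; the real `ProcessManager.await_pid`/`await_socket` are more involved):

```
import time
from pathlib import Path


def await_metadata(path: Path, timeout: float, interval: float = 0.1) -> str:
    """Poll until a metadata file (pid or socket) appears, or raise on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if path.exists():
            content = path.read_text().strip()
            if content:  # ignore a file that exists but has not been written yet
                return content
        time.sleep(interval)
    raise TimeoutError(f"timed out after {timeout}s waiting for {path}")


# Mirroring _launch(): wait up to 60 seconds each for the pidfile, then the port.
# pid = int(await_metadata(pids_dir / "pantsd" / "pid", 60))
# port = int(await_metadata(pids_dir / "pantsd" / "socket", 60))
```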
diff --git a/src/python/pants/pantsd/pants_daemon_core.py b/src/python/pants/pantsd/pants_daemon_core.py
new file mode 100644
--- /dev/null
+++ b/src/python/pants/pantsd/pants_daemon_core.py
@@ -0,0 +1,107 @@
+# Copyright 2020 Pants project contributors (see CONTRIBUTORS.md).
+# Licensed under the Apache License, Version 2.0 (see LICENSE).
+
+import logging
+import threading
+from typing import Optional
+
+from typing_extensions import Protocol
+
+from pants.init.engine_initializer import EngineInitializer, LegacyGraphScheduler
+from pants.init.options_initializer import BuildConfigInitializer
+from pants.option.option_value_container import OptionValueContainer
+from pants.option.options_bootstrapper import OptionsBootstrapper
+from pants.option.options_fingerprinter import OptionsFingerprinter
+from pants.option.scope import GLOBAL_SCOPE
+from pants.pantsd.service.pants_service import PantsServices
+
+logger = logging.getLogger(__name__)
+
+
+class PantsServicesConstructor(Protocol):
+ def __call__(
+ self, bootstrap_options: OptionValueContainer, legacy_graph_scheduler: LegacyGraphScheduler,
+ ) -> PantsServices:
+ ...
+
+
+class PantsDaemonCore:
+ """A container for the state of a PantsDaemon that is affected by the bootstrap options.
+
+ This class also serves to avoid a reference cycle between DaemonPantsRunner and PantsDaemon,
+ which both have a reference to the core, and use it to get access to the Scheduler and current
+ PantsServices.
+ """
+
+ def __init__(self, services_constructor: PantsServicesConstructor):
+ self._services_constructor = services_constructor
+ self._lifecycle_lock = threading.RLock()
+ # N.B. This Event is used as nothing more than an atomic flag - nothing waits on it.
+ self._kill_switch = threading.Event()
+
+ self._scheduler: Optional[LegacyGraphScheduler] = None
+ self._services: Optional[PantsServices] = None
+ self._fingerprint: Optional[str] = None
+
+ def is_valid(self) -> bool:
+ """Return true if the core is valid.
+
+ This mostly means confirming that if any services have been started, that they are still
+ alive.
+ """
+ if self._kill_switch.is_set():
+ logger.error("Client failed to create a Scheduler: shutting down.")
+ return False
+ with self._lifecycle_lock:
+ if self._services is None:
+ return True
+ return self._services.are_all_alive()
+
+ def _init_scheduler(
+ self, options_fingerprint: str, options_bootstrapper: OptionsBootstrapper
+ ) -> None:
+ """(Re-)Initialize the scheduler.
+
+ Must be called under the lifecycle lock.
+ """
+ try:
+ if self._scheduler:
+ logger.info("initialization options changed: reinitializing pantsd...")
+ else:
+ logger.info("initializing pantsd...")
+ if self._services:
+ self._services.shutdown()
+ build_config = BuildConfigInitializer.get(options_bootstrapper)
+ self._scheduler = EngineInitializer.setup_legacy_graph(
+ options_bootstrapper, build_config
+ )
+ bootstrap_options_values = options_bootstrapper.bootstrap_options.for_global_scope()
+ self._services = self._services_constructor(bootstrap_options_values, self._scheduler)
+ self._fingerprint = options_fingerprint
+ logger.info("pantsd initialized.")
+ except Exception as e:
+ self._kill_switch.set()
+ self._scheduler = None
+ raise e
+
+ def prepare_scheduler(self, options_bootstrapper: OptionsBootstrapper) -> LegacyGraphScheduler:
+ """Get a scheduler for the given options_bootstrapper.
+
+ Runs in a client context (generally in DaemonPantsRunner) so logging is sent to the client.
+ """
+
+ # Compute the fingerprint of the bootstrap options. Note that unlike
+ # PantsDaemonProcessManager (which fingerprints only `daemon=True` options), this
+ # fingerprints all fingerprintable options in the bootstrap options, which are
+ # all used to construct a Scheduler.
+ options_fingerprint = OptionsFingerprinter.combined_options_fingerprint_for_scope(
+ GLOBAL_SCOPE, options_bootstrapper.bootstrap_options, invert=True,
+ )
+
+ with self._lifecycle_lock:
+ if self._scheduler is None or options_fingerprint != self._fingerprint:
+ # The fingerprint mismatches, either because this is the first run (and there is no
+ # fingerprint) or because relevant options have changed. Create a new scheduler and services.
+ self._init_scheduler(options_fingerprint, options_bootstrapper)
+ assert self._scheduler is not None
+ return self._scheduler
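`prepare_scheduler` is essentially a lock-guarded, fingerprint-keyed lazy initializer. Stripped of the engine details, the pattern looks like this sketch (`LazyCore` and `build` are hypothetical names, not the real API):

```
import threading
from typing import Callable, Optional, Tuple


class LazyCore:
    def __init__(self, build: Callable[[], object]):
        self._build = build  # the expensive scheduler/services constructor
        self._lock = threading.RLock()
        self._state: Optional[Tuple[str, object]] = None  # (fingerprint, scheduler)

    def prepare(self, fingerprint: str) -> object:
        with self._lock:
            if self._state is None or self._state[0] != fingerprint:
                # First run (no fingerprint yet), or relevant options changed.
                self._state = (fingerprint, self._build())
            return self._state[1]
```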
diff --git a/src/python/pants/pantsd/process_manager.py b/src/python/pants/pantsd/process_manager.py
--- a/src/python/pants/pantsd/process_manager.py
+++ b/src/python/pants/pantsd/process_manager.py
@@ -416,14 +416,6 @@ def write_socket(self, socket_info):
"""Write the local processes socket information (TCP port or UNIX socket)."""
self.write_metadata_by_name(self._name, "socket", str(socket_info))
- def write_named_socket(self, socket_name, socket_info):
- """A multi-tenant, named alternative to ProcessManager.write_socket()."""
- self.write_metadata_by_name(self._name, "socket_{}".format(socket_name), str(socket_info))
-
- def read_named_socket(self, socket_name, socket_type):
- """A multi-tenant, named alternative to ProcessManager.socket."""
- return self.read_metadata_by_name(self._name, "socket_{}".format(socket_name), socket_type)
-
def _as_process(self):
"""Returns a psutil `Process` object wrapping our pid.
@@ -712,8 +704,18 @@ def __init__(self, bootstrap_options: Options, daemon_entrypoint: str):
@property
def options_fingerprint(self):
+ """Returns the options fingerprint for the pantsd process.
+
+ This should cover all options consumed by the pantsd process itself in order to start: also
+ known as the "micro-bootstrap" options. These options are marked `daemon=True` in the global
+ options.
+
+ The `daemon=True` options are a small subset of the bootstrap options. Independently, the
+ PantsDaemonCore fingerprints the entire set of bootstrap options to identify when the
+        Scheduler needs to be re-initialized.
+ """
return OptionsFingerprinter.combined_options_fingerprint_for_scope(
- GLOBAL_SCOPE, self._bootstrap_options, fingerprint_key="daemon", invert=True
+ GLOBAL_SCOPE, self._bootstrap_options, fingerprint_key="daemon"
)
def needs_restart(self, option_fingerprint):
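The docstring above distinguishes two fingerprints: the narrow `daemon=True` subset that forces a full pantsd restart, and the entire bootstrap set that merely re-creates the Scheduler. A toy illustration of subset fingerprinting (the helper and key names are hypothetical, not `OptionsFingerprinter` itself):

```
import hashlib
import json
from typing import Any, Dict


def fingerprint_options(options: Dict[str, Any], daemon_only: bool) -> str:
    # daemon_only=True models the `daemon=True` "micro-bootstrap" subset that
    # forces a pantsd restart; daemon_only=False models the broader bootstrap
    # fingerprint that only triggers Scheduler re-initialization.
    daemon_keys = {"pants_subprocessdir", "level"}  # hypothetical subset
    selected = {k: v for k, v in options.items() if not daemon_only or k in daemon_keys}
    return hashlib.sha1(json.dumps(selected, sort_keys=True).encode()).hexdigest()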
diff --git a/src/python/pants/pantsd/service/pailgun_service.py b/src/python/pants/pantsd/service/pailgun_service.py
deleted file mode 100644
--- a/src/python/pants/pantsd/service/pailgun_service.py
+++ /dev/null
@@ -1,59 +0,0 @@
-# Copyright 2020 Pants project contributors (see CONTRIBUTORS.md).
-# Licensed under the Apache License, Version 2.0 (see LICENSE).
-
-import logging
-import time
-
-from pants.engine.internals.native import RawFdRunner
-from pants.pantsd.service.pants_service import PantsService
-from pants.pantsd.service.scheduler_service import SchedulerService
-
-logger = logging.getLogger(__name__)
-
-
-class PailgunService(PantsService):
- """A service that runs the Pailgun server."""
-
- def __init__(
- self, port_requested: int, runner: RawFdRunner, scheduler_service: SchedulerService,
- ):
- """
- :param port_requested: A port to bind the service to, or 0 to choose a random port (which
- will be exposed by `pailgun_port`).
- :param runner: A runner for inbound requests. Generally this will be a method of
- `DaemonPantsRunner`.
- :param scheduler_service: The SchedulerService instance for access to the resident scheduler.
- """
- super().__init__()
-
- self._scheduler = scheduler_service._scheduler
- self._server = self._setup_server(port_requested, runner)
-
- def _setup_server(self, port_requested, runner):
- return self._scheduler.new_nailgun_server(port_requested, runner)
-
- def pailgun_port(self):
- return self._scheduler.nailgun_server_await_bound(self._server)
-
- def run(self):
- """Main service entrypoint.
-
- Called via Thread.start() via PantsDaemon.run().
- """
- try:
- logger.info("started pailgun server on port {}".format(self.pailgun_port()))
- while not self._state.is_terminating:
- # Once the server has started, `await_bound` will return quickly with an error if it
- # has exited.
- self.pailgun_port()
- time.sleep(0.5)
- except BaseException:
- logger.error("pailgun service shutting down due to an error", exc_info=True)
- self.terminate()
- finally:
- logger.info("pailgun service on shutting down")
-
- def terminate(self):
- """Override of PantsService.terminate() that drops the server when terminated."""
- self._server = None
- super().terminate()
diff --git a/src/python/pants/pantsd/service/pants_service.py b/src/python/pants/pantsd/service/pants_service.py
--- a/src/python/pants/pantsd/service/pants_service.py
+++ b/src/python/pants/pantsd/service/pants_service.py
@@ -1,14 +1,18 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
+import logging
import threading
import time
from abc import ABC, abstractmethod
from dataclasses import dataclass
-from typing import Any, Dict, Optional, Tuple
+from typing import Dict, KeysView, Tuple
+from pants.engine.internals.native import Native
from pants.util.meta import frozen_after_init
+logger = logging.getLogger(__name__)
+
class PantsService(ABC):
"""Pants daemon service base class.
@@ -33,11 +37,8 @@ def __init__(self):
self.name = self.__class__.__name__
self._state = _ServiceState()
- def setup(self, services):
- """Called before `run` to allow for service->service or other side-effecting setup.
-
- :param PantsServices services: A registry of all services within this run.
- """
+ def setup(self, services: Tuple["PantsService", ...]):
+ """Called before `run` to allow for service->service or other side-effecting setup."""
self.services = services
@abstractmethod
@@ -200,27 +201,63 @@ def is_terminating(self):
@frozen_after_init
@dataclass(unsafe_hash=True)
class PantsServices:
- """A registry of PantsServices instances."""
-
- services: Tuple[PantsService, ...]
- port_map: Dict
- lifecycle_lock: Any
-
- def __init__(
- self,
- services: Optional[Tuple[PantsService, ...]] = None,
- port_map: Optional[Dict] = None,
- lifecycle_lock=None,
- ) -> None:
- """
- :param port_map: A dict of (port_name -> port_info) for named ports hosted by the services.
- :param lifecycle_lock: A lock to guard lifecycle changes for the services. This can be used by
- individual services to safeguard daemon-synchronous sections that should
- be protected from abrupt teardown. Notably, this lock is currently
- acquired for an entire pailgun request (by PailgunServer). NB: This is a
- `threading.RLock` instance, but the constructor for RLock is an alias for
- a native function, rather than an actual type.
+ """A collection of running PantsServices threads."""
+
+ JOIN_TIMEOUT_SECONDS = 1
+
+ _service_threads: Dict[PantsService, threading.Thread]
+
+ def __init__(self, services: Tuple[PantsService, ...] = ()) -> None:
+ self._service_threads = self._start(services)
+
+ @classmethod
+ def _make_thread(cls, service):
+ name = f"{service.__class__.__name__}Thread"
+
+ def target():
+ Native().override_thread_logging_destination_to_just_pantsd()
+ service.run()
+
+ t = threading.Thread(target=target, name=name)
+ t.daemon = True
+ return t
+
+ @classmethod
+ def _start(cls, services: Tuple[PantsService, ...]) -> Dict[PantsService, threading.Thread]:
+ """Launch a thread per service."""
+
+ for service in services:
+ logger.debug(f"setting up service {service}")
+ service.setup(services)
+
+ service_thread_map = {service: cls._make_thread(service) for service in services}
+
+ for service, service_thread in service_thread_map.items():
+ logger.debug(f"starting service {service}")
+ service_thread.start()
+
+ return service_thread_map
+
+ @property
+ def services(self) -> KeysView[PantsService]:
+ return self._service_threads.keys()
+
+ def are_all_alive(self) -> bool:
+ """Return true if all services threads are still alive, and false if any have died.
+
+ This method does not have sideeffects: if one service thread has died, the rest should be
+ killed and joined via `self.shutdown()`.
"""
- self.services = services or tuple()
- self.port_map = port_map or dict()
- self.lifecycle_lock = lifecycle_lock or threading.RLock()
+ for service, service_thread in self._service_threads.items():
+ if not service_thread.is_alive():
+ logger.error(f"service failure for {service}.")
+ return False
+ return True
+
+ def shutdown(self) -> None:
+ """Shut down and join all service threads."""
+ for service, service_thread in self._service_threads.items():
+ service.terminate()
+ for service, service_thread in self._service_threads.items():
+ logger.debug(f"terminating pantsd service: {service}")
+ service_thread.join(self.JOIN_TIMEOUT_SECONDS)
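The new `PantsServices` collapses thread creation, liveness checking, and shutdown into one object. A condensed sketch of that lifecycle (with a hypothetical `Service` stand-in for `PantsService`, assumed to expose `run()` and `terminate()`):

```
import threading
from typing import Dict, Tuple


class Services:
    JOIN_TIMEOUT_SECONDS = 1

    def __init__(self, services: Tuple[object, ...]):
        self._threads: Dict[object, threading.Thread] = {}
        for service in services:
            t = threading.Thread(target=service.run, name=type(service).__name__)
            t.daemon = True  # daemon threads never block process exit
            t.start()
            self._threads[service] = t

    def are_all_alive(self) -> bool:
        return all(t.is_alive() for t in self._threads.values())

    def shutdown(self) -> None:
        for service in self._threads:
            service.terminate()
        for t in self._threads.values():
            t.join(self.JOIN_TIMEOUT_SECONDS)
```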
diff --git a/src/python/pants/pantsd/service/scheduler_service.py b/src/python/pants/pantsd/service/scheduler_service.py
--- a/src/python/pants/pantsd/service/scheduler_service.py
+++ b/src/python/pants/pantsd/service/scheduler_service.py
@@ -8,7 +8,6 @@
from pants.engine.fs import PathGlobs, Snapshot
from pants.engine.internals.scheduler import ExecutionTimeoutError
-from pants.engine.unions import UnionMembership
from pants.init.engine_initializer import LegacyGraphScheduler
from pants.pantsd.service.fs_event_service import FSEventService
from pants.pantsd.service.pants_service import PantsService
@@ -32,7 +31,7 @@ def __init__(
legacy_graph_scheduler: LegacyGraphScheduler,
build_root: str,
invalidation_globs: List[str],
- union_membership: UnionMembership,
+ max_memory_usage_pid: int,
max_memory_usage_in_bytes: int,
) -> None:
"""
@@ -41,14 +40,14 @@ def __init__(
:param build_root: The current build root.
:param invalidation_globs: A list of `globs` that when encountered in filesystem event
subscriptions will tear down the daemon.
- :param max_memory_usage_in_bytes: The maximum memory usage of the process, which is
- monitored after startup.
+ :param max_memory_usage_pid: A pid to monitor the memory usage of (generally our own!).
+ :param max_memory_usage_in_bytes: The maximum memory usage of the process: the service will
+ shut down if it observes more than this amount in use.
"""
super().__init__()
self._fs_event_service = fs_event_service
self._graph_helper = legacy_graph_scheduler
self._build_root = build_root
- self._union_membership = union_membership
self._scheduler = legacy_graph_scheduler.scheduler
# This session is only used for checking whether any invalidation globs have been invalidated.
@@ -58,15 +57,14 @@ def __init__(
)
self._logger = logging.getLogger(__name__)
- # NB: We declare these as a single field so that they can be changed atomically
- # by add_invalidation_glob.
+ # NB: We declare these as a single field so that they can be changed atomically.
self._invalidation_globs_and_snapshot: Tuple[Tuple[str, ...], Optional[Snapshot]] = (
tuple(invalidation_globs),
None,
)
+ self._max_memory_usage_pid = max_memory_usage_pid
self._max_memory_usage_in_bytes = max_memory_usage_in_bytes
- self._monitored_pantsd_pid: Optional[int] = None
def _get_snapshot(self, globs: Tuple[str, ...], poll: bool) -> Optional[Snapshot]:
"""Returns a Snapshot of the input globs.
@@ -85,44 +83,10 @@ def _get_snapshot(self, globs: Tuple[str, ...], poll: bool) -> Optional[Snapshot
return None
raise
- def setup(self, services):
- """Service setup."""
- super().setup(services)
-
- # N.B. We compute the invalidating fileset eagerly at launch with an assumption that files
- # that exist at startup are the only ones that can affect the running daemon.
- globs, _ = self._invalidation_globs_and_snapshot
- self._invalidation_globs_and_snapshot = (globs, self._get_snapshot(globs, poll=False))
- self._logger.info("watching invalidation patterns: {}".format(globs))
-
- def begin_monitoring_memory_usage(self, pantsd_pid: int):
- """After pantsd has started, we monitor its memory usage relative to a configured value."""
- self._monitored_pantsd_pid = pantsd_pid
-
- def add_invalidation_glob(self, glob: str):
- """Add an invalidation glob to monitoring after startup.
-
- NB: This exists effectively entirely because pantsd needs to be fully started before writing
- its pid file: all other globs should be passed via the constructor.
- """
- self._logger.info("adding invalidation pattern: {}".format(glob))
-
- # Check one more time synchronously with our current set of globs.
- self._check_invalidation_globs(poll=False)
-
- # Synchronously invalidate the path on disk to prevent races with async invalidation, which
- # might otherwise take time to notice that the file had been created.
- self._scheduler.invalidate_files([glob])
-
- # Swap out the globs and snapshot.
- globs, _ = self._invalidation_globs_and_snapshot
- globs = globs + (glob,)
- self._invalidation_globs_and_snapshot = (globs, self._get_snapshot(globs, poll=False))
-
def _check_invalidation_globs(self, poll: bool):
"""Check the digest of our invalidation Snapshot and exit if it has changed."""
globs, invalidation_snapshot = self._invalidation_globs_and_snapshot
- assert invalidation_snapshot is not None, "Service.setup was not called."
+ assert invalidation_snapshot is not None, "Should have been eagerly initialized in run."
snapshot = self._get_snapshot(globs, poll=poll)
if snapshot is None or snapshot.digest == invalidation_snapshot.digest:
@@ -142,14 +106,11 @@ def _check_invalidation_globs(self, poll: bool):
self.terminate()
def _check_memory_usage(self):
- if self._monitored_pantsd_pid is None:
- return
-
try:
- memory_usage_in_bytes = psutil.Process(self._monitored_pantsd_pid).memory_info()[0]
+ memory_usage_in_bytes = psutil.Process(self._max_memory_usage_pid).memory_info()[0]
if memory_usage_in_bytes > self._max_memory_usage_in_bytes:
raise Exception(
- f"pantsd process {self._monitored_pantsd_pid} was using "
+ f"pantsd process {self._max_memory_usage_pid} was using "
f"{memory_usage_in_bytes} bytes of memory (above the limit of "
f"{self._max_memory_usage_in_bytes} bytes)."
)
@@ -166,18 +127,14 @@ def _check_invalidation_watcher_liveness(self):
self._logger.critical(f"The scheduler was invalidated: {e!r}")
self.terminate()
- def prepare(self) -> LegacyGraphScheduler:
- # If any nodes exist in the product graph, wait for the initial watchman event to avoid
- # racing watchman startup vs invalidation events.
- if self._fs_event_service is not None and self._scheduler.graph_len() > 0:
- self._logger.debug(
- "fs event service is running and graph_len > 0: waiting for initial watchman event"
- )
- self._fs_event_service.await_started()
- return self._graph_helper
-
def run(self):
"""Main service entrypoint."""
+ # N.B. We compute the invalidating fileset eagerly at launch with an assumption that files
+ # that exist at startup are the only ones that can affect the running daemon.
+ globs, _ = self._invalidation_globs_and_snapshot
+ self._invalidation_globs_and_snapshot = (globs, self._get_snapshot(globs, poll=False))
+ self._logger.debug("watching invalidation patterns: {}".format(globs))
+
while not self._state.is_terminating:
self._state.maybe_pause()
self._check_invalidation_watcher_liveness()
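For reference, the memory check that `SchedulerService._check_memory_usage` performs against the fixed `max_memory_usage_pid` boils down to a single psutil call; a standalone sketch (illustrative function name, and a plain `RuntimeError` in place of service teardown):

```
import psutil


def check_memory(pid: int, max_bytes: int) -> None:
    rss = psutil.Process(pid).memory_info().rss  # resident set size in bytes
    if rss > max_bytes:
        raise RuntimeError(
            f"process {pid} was using {rss} bytes of memory "
            f"(above the limit of {max_bytes} bytes)"
        )
```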
| `TestPantsDaemonIntegration.test_pantsd_lifecycle_invalidation` CI failure
This would appear to be rare, but as seen in the wild:
```
==================== FAILURES ====================
TestPantsDaemonIntegration.test_pantsd_lifecycle_invalidation
self = <pants_test.pantsd.test_pantsd_integration.TestPantsDaemonIntegration testMethod=test_pantsd_lifecycle_invalidation>
def test_pantsd_lifecycle_invalidation(self):
"""Runs pants commands with pantsd enabled, in a loop, alternating between options that
should invalidate pantsd and incur a restart and then asserts for pid consistency.
"""
with self.pantsd_successful_run_context() as (pantsd_run, checker, _, _):
variants = (
['debug', 'help'],
['info', 'help']
)
last_pid = None
for cmd in itertools.chain(*itertools.repeat(variants, 3)):
# Run with a CLI flag.
> pantsd_run(['-l{}'.format(cmd[0]), cmd[1]])
.pants.d/pyprep/sources/fb7306e3a97ef92b3cc5872e1eb3041fcf84d2b4/pants_test/pantsd/test_pantsd_integration.py:266:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.pants.d/pyprep/sources/fb7306e3a97ef92b3cc5872e1eb3041fcf84d2b4/pants_test/pantsd/test_pantsd_integration.py:173: in assert_success_runner
runs_created,
E AssertionError: Expected 1 RunTracker run to be created per pantsd run: was 0
-------------- Captured stdout call --------------
pantsd log is /home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pantsd/pantsd.log
>>> config:
{u'GLOBAL': {u'watchman_socket_path': u'/tmp/watchman.10384.sock', u'pants_subprocessdir': u'/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.pids', u'level': u'info', u'enable_pantsd': True}}
running: ./pants kill-pantsd (config={u'GLOBAL': {u'watchman_socket_path': u'/tmp/watchman.10384.sock', u'pants_subprocessdir': u'/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.pids', u'level': u'info', u'enable_pantsd': True}}) (extra_env={})
completed in 3.103058815 seconds
running: ./pants -ldebug help (config={u'GLOBAL': {u'watchman_socket_path': u'/tmp/watchman.10384.sock', u'pants_subprocessdir': u'/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.pids', u'level': u'info', u'enable_pantsd': True}}) (extra_env=None)
completed in 2.9121260643 seconds
PantsDaemonMonitor: pid is 11083 is_alive=True
running: ./pants help (config={u'GLOBAL': {u'watchman_socket_path': u'/tmp/watchman.10384.sock', u'pants_subprocessdir': u'/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.pids', u'level': u'debug', u'enable_pantsd': True}}) (extra_env=None)
completed in 1.26216602325 seconds
PantsDaemonMonitor: pid is 11083 is_alive=True
running: ./pants -linfo help (config={u'GLOBAL': {u'watchman_socket_path': u'/tmp/watchman.10384.sock', u'pants_subprocessdir': u'/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.pids', u'level': u'debug', u'enable_pantsd': True}}) (extra_env=None)
completed in 2.80230903625 seconds
PantsDaemonMonitor: pid is 11241 is_alive=True
running: ./pants help (config={u'GLOBAL': {u'watchman_socket_path': u'/tmp/watchman.10384.sock', u'pants_subprocessdir': u'/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.pids', u'level': u'info', u'enable_pantsd': True}}) (extra_env=None)
completed in 1.24617195129 seconds
PantsDaemonMonitor: pid is 11241 is_alive=True
running: ./pants -ldebug help (config={u'GLOBAL': {u'watchman_socket_path': u'/tmp/watchman.10384.sock', u'pants_subprocessdir': u'/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.pids', u'level': u'info', u'enable_pantsd': True}}) (extra_env=None)
completed in 2.71507883072 seconds
PantsDaemonMonitor: pid is 11396 is_alive=True
running: ./pants help (config={u'GLOBAL': {u'watchman_socket_path': u'/tmp/watchman.10384.sock', u'pants_subprocessdir': u'/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.pids', u'level': u'debug', u'enable_pantsd': True}}) (extra_env=None)
completed in 1.2286400795 seconds
PantsDaemonMonitor: pid is 11396 is_alive=True
running: ./pants -linfo help (config={u'GLOBAL': {u'watchman_socket_path': u'/tmp/watchman.10384.sock', u'pants_subprocessdir': u'/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.pids', u'level': u'debug', u'enable_pantsd': True}}) (extra_env=None)
completed in 2.66272878647 seconds
PantsDaemonMonitor: pid is 11551 is_alive=True
running: ./pants help (config={u'GLOBAL': {u'watchman_socket_path': u'/tmp/watchman.10384.sock', u'pants_subprocessdir': u'/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.pids', u'level': u'info', u'enable_pantsd': True}}) (extra_env=None)
completed in 1.20683908463 seconds
PantsDaemonMonitor: pid is 11551 is_alive=True
running: ./pants -ldebug help (config={u'GLOBAL': {u'watchman_socket_path': u'/tmp/watchman.10384.sock', u'pants_subprocessdir': u'/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.pids', u'level': u'info', u'enable_pantsd': True}}) (extra_env=None)
completed in 1.06143808365 seconds
===============================================================
- BEGIN pantsd.log --------------------------------------------
===============================================================
I0713 17:01:07.157752 10967 pants_daemon.py:360] pantsd starting, log level is INFO
I0713 17:01:07.158081 10967 pants_daemon.py:309] setting up service <pants.pantsd.service.fs_event_service.FSEventService object at 0x7f09787fbb10>
I0713 17:01:07.158202 10967 pants_daemon.py:309] setting up service <pants.pantsd.service.scheduler_service.SchedulerService object at 0x7f097c26a450>
I0713 17:01:07.158286 10967 scheduler_service.py:74] watching invalidating files: set([])
I0713 17:01:07.158359 10967 pants_daemon.py:309] setting up service <pants.pantsd.service.pailgun_service.PailgunService object at 0x7f09787a6d10>
I0713 17:01:07.158416 10967 pants_daemon.py:309] setting up service <pants.pantsd.service.store_gc_service.StoreGCService object at 0x7f09787a6f90>
I0713 17:01:07.158575 10967 pants_daemon.py:328] starting service <pants.pantsd.service.fs_event_service.FSEventService object at 0x7f09787fbb10>
I0713 17:01:07.158945 10967 pants_daemon.py:328] starting service <pants.pantsd.service.store_gc_service.StoreGCService object at 0x7f09787a6f90>
I0713 17:01:07.159451 10967 pants_daemon.py:328] starting service <pants.pantsd.service.scheduler_service.SchedulerService object at 0x7f097c26a450>
I0713 17:01:07.161123 10967 pants_daemon.py:328] starting service <pants.pantsd.service.pailgun_service.PailgunService object at 0x7f09787a6d10>
I0713 17:01:07.161972 10967 pailgun_service.py:102] starting pailgun server on port 46870
I0713 17:01:07.211781 10967 pailgun_server.py:72] handling pailgun request: `./pants --no-pantsrc --pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d --kill-nailguns --print-exception-stacktrace=True --pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini kill-pantsd`
I0713 17:01:07.425451 10967 pailgun_server.py:81] pailgun request completed: `./pants --no-pantsrc --pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d --kill-nailguns --print-exception-stacktrace=True --pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini kill-pantsd`
I0713 17:01:07.829041 10967 watchman.py:184] confirmed watchman subscription: {u'subscribe': 'all_files', u'version': u'4.9.1', u'clock': u'c:1531501266:10948:1:3'}
I0713 17:01:07.835637 10967 scheduler_service.py:82] enqueuing 10596 changes for subscription all_files
I0713 17:01:07.877067 10967 watchman.py:184] confirmed watchman subscription: {u'subscribe': 'pantsd_pid', u'version': u'4.9.1', u'clock': u'c:1531501266:10948:1:8'}
I0713 17:01:07.878284 10967 scheduler_service.py:82] enqueuing 1 changes for subscription pantsd_pid
D0713 17:01:10.226243 11083 pants_daemon.py:302] logging initialized
I0713 17:01:10.226421 11083 pants_daemon.py:360] pantsd starting, log level is DEBUG
I0713 17:01:10.226687 11083 pants_daemon.py:309] setting up service <pants.pantsd.service.fs_event_service.FSEventService object at 0x7efc7d704b10>
I0713 17:01:10.226804 11083 pants_daemon.py:309] setting up service <pants.pantsd.service.scheduler_service.SchedulerService object at 0x7efc81172450>
I0713 17:01:10.226883 11083 scheduler_service.py:74] watching invalidating files: set([])
I0713 17:01:10.226955 11083 pants_daemon.py:309] setting up service <pants.pantsd.service.pailgun_service.PailgunService object at 0x7efc7d693750>
I0713 17:01:10.227013 11083 pants_daemon.py:309] setting up service <pants.pantsd.service.store_gc_service.StoreGCService object at 0x7efc7d6aec90>
I0713 17:01:10.227164 11083 pants_daemon.py:328] starting service <pants.pantsd.service.fs_event_service.FSEventService object at 0x7efc7d704b10>
I0713 17:01:10.227474 11083 pants_daemon.py:328] starting service <pants.pantsd.service.store_gc_service.StoreGCService object at 0x7efc7d6aec90>
I0713 17:01:10.228490 11083 pants_daemon.py:328] starting service <pants.pantsd.service.scheduler_service.SchedulerService object at 0x7efc81172450>
I0713 17:01:10.228949 11083 pants_daemon.py:328] starting service <pants.pantsd.service.pailgun_service.PailgunService object at 0x7efc7d693750>
I0713 17:01:10.229293 11083 pailgun_service.py:102] starting pailgun server on port 38800
D0713 17:01:10.229176 11083 store_gc_service.py:40] Extending leases
D0713 17:01:10.230290 11083 store_gc_service.py:42] Done extending leases
D0713 17:01:10.230699 11083 watchman.py:68] setting initial watchman timeout to 30.0
I0713 17:01:10.296534 11083 pailgun_server.py:72] handling pailgun request: `./pants --no-pantsrc --pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d --kill-nailguns --print-exception-stacktrace=True --pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini -ldebug help`
D0713 17:01:10.296696 11083 pailgun_server.py:73] pailgun request environment: {u'NAILGUN_PATHSEPARATOR': u':', u'TRAVIS_PULL_REQUEST_BRANCH': u'python/resolve/fix-current-platform-handling', u'rvm_version': u'1.29.3 (latest)', u'LC_CTYPE': u'en_US.UTF-8', u'TRAVIS': u'true', u'NAILGUN_TTY_2': u'0', u'NAILGUN_TTY_0': u'0', u'NAILGUN_TTY_1': u'0', u'TRAVIS_REPO_SLUG': u'pantsbuild/pants', u'PANTSD_RUNTRACKER_CLIENT_START_TIME': u'1531501268.91', u'TRAVIS_STACK_LANGUAGES': u'__garnet__ c c++ clojure cplusplus cpp default go groovy java node_js php pure_java python ruby scala', u'JRUBY_OPTS': u' --client -J-XX:+TieredCompilation -J-XX:TieredStopAtLevel=1 -J-Xss2m -Xcompile.invokedynamic=false', u'VIRTUAL_ENV': u'/home/travis/build/pantsbuild/pants/build-support/pants_dev_deps.venv', u'SHELL': u'/bin/bash', u'TRAVIS_UID': u'2000', u'PYENV_SHELL': u'bash', u'TRAVIS_BRANCH': u'master', u'TRAVIS_PULL_REQUEST_SLUG': u'jsirois/pants', u'HISTSIZE': u'1000', u'NVM_BIN': u'/home/travis/.nvm/versions/node/v8.9.1/bin', u'RBENV_SHELL': u'bash', u'MANPATH': u'/home/travis/.nvm/versions/node/v8.9.1/share/man:/home/travis/.kiex/elixirs/elixir-1.4.5/man:/home/travis/.rvm/rubies/ruby-2.4.1/share/man:/usr/local/man:/usr/local/cmake-3.9.2/man:/usr/local/clang-5.0.0/share/man:/usr/local/share/man:/usr/share/man:/home/travis/.rvm/man', u'JAVA_HOME': u'/usr/lib/jvm/java-8-oracle', u'XDG_RUNTIME_DIR': u'/run/user/2000', u'PYTHONPATH': u'/home/travis/build/pantsbuild/pants/src/python:', u'_system_type': u'Linux', u'TRAVIS_SECURE_ENV_VARS': u'false', u'MY_RUBY_HOME': u'/home/travis/.rvm/rubies/ruby-2.4.1', u'XDG_SESSION_ID': u'2', u'TRAVIS_DIST': u'trusty', u'RUBY_VERSION': u'ruby-2.4.1', u'CXX': u'g++', u'PIP_DISABLE_PIP_VERSION_CHECK': u'1', u'_system_version': u'14.04', u'TRAVIS_COMMIT_RANGE': u'2e171f73d1cc32256cf93d5b04246fda2ccb58f3...fe2907e76d98458a78ee066377965df4ca16ee9e', u'MAIL': u'/var/mail/travis', u'SSH_CONNECTION': u'10.10.4.33 36036 10.20.0.218 22', u'GOPATH': u'/home/travis/gopath', u'CONTINUOUS_INTEGRATION': u'true', u'GOROOT': u'/home/travis/.gimme/versions/go1.7.4.linux.amd64', u'TRAVIS_STACK_TIMESTAMP': u'2017-12-05 19:33:09 UTC', u'RACK_ENV': u'test', u'USER': u'travis', u'PYTHONUNBUFFERED': u'1', u'PS1': u'(pants_dev_deps.venv) ', u'PS4': u'+', u'SHLVL': u'3', u'TRAVIS_PULL_REQUEST_SHA': u'fe2907e76d98458a78ee066377965df4ca16ee9e', u'SHARD': u'Python integration tests for pants - shard 3', u'MERB_ENV': u'test', u'JDK_SWITCHER_DEFAULT': u'oraclejdk8', u'GIT_ASKPASS': u'echo', u'GEM_PATH': u'/home/travis/.rvm/gems/ruby-2.4.1:/home/travis/.rvm/gems/ruby-2.4.1@global', u'HAS_ANTARES_THREE_LITTLE_FRONZIES_BADGE': u'true', u'TRAVIS_EVENT_TYPE': u'pull_request', u'TRAVIS_TAG': u'', u'NAILGUN_FILESEPARATOR': u'/', u'TRAVIS_BUILD_NUMBER': u'18440', u'PYENV_ROOT': u'/opt/pyenv', u'TRAVIS_STACK_FEATURES': u'basic cassandra chromium couchdb disabled-ipv6 docker docker-compose elasticsearch firefox go-toolchain google-chrome jdk memcached mongodb mysql neo4j nodejs_interpreter perl_interpreter perlbrew phantomjs postgresql python_interpreter rabbitmq redis riak ruby_interpreter sqlite xserver', u'_system_name': u'Ubuntu', u'PAGER': u'cat', u'PYTEST_PASSTHRU_ARGS': u'-v --duration=3', u'TRAVIS_SUDO': u'true', u'MIX_ARCHIVES': u'/home/travis/.kiex/mix/elixir-1.4.5', u'TRAVIS_BUILD_ID': u'403626566', u'PANTS_CONFIG_FILES': u'/home/travis/build/pantsbuild/pants/pants.travis-ci.ini', u'NVM_DIR': u'/home/travis/.nvm', u'TRAVIS_STACK_NAME': u'garnet', u'HOME': u'/home/travis', u'TRAVIS_PULL_REQUEST': u'6104', 
u'LANG': u'en_US.UTF-8', u'TRAVIS_COMMIT': u'a6f28670c5c10199bd4a2bd1f34d8181f721326e', u'TRAVIS_STACK_JOB_BOARD_REGISTER': u'/.job-board-register.yml', u'_system_arch': u'x86_64', u'MYSQL_UNIX_PORT': u'/var/run/mysqld/mysqld.sock', u'CI': u'true', u'rvm_prefix': u'/home/travis', u'DEBIAN_FRONTEND': u'noninteractive', u'TRAVIS_PRE_CHEF_BOOTSTRAP_TIME': u'2017-12-05T19:32:55', u'TRAVIS_COMMIT_MESSAGE': u'Merge fe2907e76d98458a78ee066377965df4ca16ee9e into 2e171f73d1cc32256cf93d5b04246fda2ccb58f3', u'IRBRC': u'/home/travis/.rvm/rubies/ruby-2.4.1/.irbrc', u'rvm_path': u'/home/travis/.rvm', u'CASHER_DIR': u'/home/travis/.casher', u'COLUMNS': u'50', u'TRAVIS_STACK_NODE_ATTRIBUTES': u'/.node-attributes.yml', u'SSH_TTY': u'/dev/pts/0', u'PERLBREW_HOME': u'/home/travis/.perlbrew', u'GEM_HOME': u'/home/travis/.rvm/gems/ruby-2.4.1', u'HAS_JOSH_K_SEAL_OF_APPROVAL': u'true', u'PYTHON_CFLAGS': u'-g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security', u'COMPOSER_NO_INTERACTION': u'1', u'NVM_CD_FLAGS': u'', u'TRAVIS_BUILD_STAGE_NAME': u'Test pants', u'SSH_CLIENT': u'10.10.4.33 36036 22', u'PERLBREW_BASHRC_VERSION': u'0.80', u'LOGNAME': u'travis', u'TRAVIS_INIT': u'upstart', u'PATH': u'/home/travis/build/pantsbuild/pants/build-support/pants_dev_deps.venv/bin:/home/travis/.rvm/gems/ruby-2.4.1/bin:/home/travis/.rvm/gems/ruby-2.4.1@global/bin:/home/travis/.rvm/rubies/ruby-2.4.1/bin:/home/travis/.rvm/bin:/home/travis/virtualenv/python2.7.13/bin:/home/travis/bin:/home/travis/.local/bin:/opt/pyenv/shims:/home/travis/.phpenv/shims:/home/travis/perl5/perlbrew/bin:/home/travis/.nvm/versions/node/v8.9.1/bin:/home/travis/.kiex/elixirs/elixir-1.4.5/bin:/home/travis/.kiex/bin:/home/travis/gopath/bin:/home/travis/.gimme/versions/go1.7.4.linux.amd64/bin:/usr/local/phantomjs/bin:/usr/local/phantomjs:/usr/local/neo4j-3.2.7/bin:/usr/local/maven-3.5.2/bin:/usr/local/cmake-3.9.2/bin:/usr/local/clang-5.0.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/travis/.phpenv/bin:/opt/pyenv/bin:/home/travis/.yarn/bin', u'TRAVIS_ALLOW_FAILURE': u'false', u'elapsed_start_time': u'1531498647', u'TERM': u'xterm', u'TZ': u'UTC', u'HISTFILESIZE': u'2000', u'TRAVIS_OSX_IMAGE': u'', u'rvm_bin_path': u'/home/travis/.rvm/bin', u'RAILS_ENV': u'test', u'PERLBREW_ROOT': u'/home/travis/perl5/perlbrew', u'TRAVIS_JOB_NUMBER': u'18440.10', u'PYTHON_CONFIGURE_OPTS': u'--enable-unicode=ucs4 --with-wide-unicode --enable-shared --enable-ipv6 --enable-loadable-sqlite-extensions --with-computed-gotos', u'LC_ALL': u'en_US.UTF-8', u'TRAVIS_JOB_ID': u'403626576', u'PYTEST_CURRENT_TEST': u'../tests/python/pants_test/pantsd/test_pantsd_integration.py::TestPantsDaemonIntegration::test_pantsd_lifecycle_invalidation (call)', u'TRAVIS_PYTHON_VERSION': u'2.7.13', u'TRAVIS_LANGUAGE': u'python', u'TRAVIS_BUILD_DIR': u'/home/travis/build/pantsbuild/pants', u'HISTCONTROL': u'ignoredups:ignorespace', u'PWD': u'/home/travis/build/pantsbuild/pants', u'TRAVIS_OS_NAME': u'linux', u'ELIXIR_VERSION': u'1.4.5', u'rvm_pretty_print_flag': u'auto'}
D0713 17:01:10.297046 11083 pailgun_service.py:60] execution commandline: [u'./pants', u'--no-pantsrc', u'--pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d', u'--kill-nailguns', u'--print-exception-stacktrace=True', u'--pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini', u'-ldebug', u'help']
D0713 17:01:10.478298 11083 pailgun_service.py:67] warming the product graph via <pants.pantsd.service.scheduler_service.SchedulerService object at 0x7efc81172450>
D0713 17:01:10.491489 11083 target_roots_calculator.py:145] spec_roots are: None
D0713 17:01:10.491966 11083 target_roots_calculator.py:146] changed_request is: ChangedRequest(changes_since=None, diffspec=None, include_dependees=none, fast=False)
D0713 17:01:10.492297 11083 target_roots_calculator.py:147] owned_files are: []
D0713 17:01:10.500983 11083 git.py:294] Executing: git --git-dir=/home/travis/build/pantsbuild/pants/.git --work-tree=/home/travis/build/pantsbuild/pants rev-parse --abbrev-ref HEAD
D0713 17:01:10.506392 11083 build_environment.py:83] Detected git repository at /home/travis/build/pantsbuild/pants on branch None
D0713 17:01:10.506901 11083 engine_initializer.py:155] warming target_roots for: TargetRoots(specs=None)
D0713 17:01:10.507821 11083 scheduler.py:496] computed 0 nodes in 0.000370 seconds. there are 0 total nodes.
D0713 17:01:10.509743 11083 process_manager.py:213] purging metadata directory: /home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.pids/pantsd-run-2018-07-13t17_01_10_508043
D0713 17:01:10.510088 11083 process_manager.py:460] forking <pants.bin.daemon_pants_runner.DaemonPantsRunner object at 0x7efc82464210>
I0713 17:01:10.517646 11083 pailgun_server.py:81] pailgun request completed: `./pants --no-pantsrc --pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d --kill-nailguns --print-exception-stacktrace=True --pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini -ldebug help`
D0713 17:01:10.842865 11083 watchman.py:152] set post-startup watchman timeout to 5.0
D0713 17:01:10.843183 11083 watchman.py:177] watchman command_list is: [[u'subscribe', '/home/travis/build/pantsbuild/pants', u'all_files', {'fields': [u'name'], 'expression': [u'allof', [u'not', [u'dirname', u'dist', [u'depth', u'eq', 0]]], [u'not', [u'pcre', u'^\\..*', u'wholename']], [u'not', [u'match', u'*.pyc']]]}], [u'subscribe', '/home/travis/build/pantsbuild/pants', u'pantsd_pid', {'fields': [u'name'], 'expression': [u'allof', [u'dirname', u'.pants.d/tmp/tmpsYhrVh.pants.d/.pids/pantsd'], [u'name', u'pid']]}]]
I0713 17:01:10.893743 11083 watchman.py:184] confirmed watchman subscription: {u'subscribe': 'all_files', u'version': u'4.9.1', u'clock': u'c:1531501269:11064:1:6'}
I0713 17:01:10.902504 11083 scheduler_service.py:82] enqueuing 10596 changes for subscription all_files
I0713 17:01:10.934089 11083 watchman.py:184] confirmed watchman subscription: {u'subscribe': 'pantsd_pid', u'version': u'4.9.1', u'clock': u'c:1531501269:11064:1:8'}
D0713 17:01:10.934344 11083 fs_event_service.py:161] callback ID 1 for all_files succeeded
I0713 17:01:10.935307 11083 scheduler_service.py:82] enqueuing 1 changes for subscription pantsd_pid
D0713 17:01:10.935621 11083 fs_event_service.py:161] callback ID 2 for pantsd_pid succeeded
D0713 17:01:10.959840 11083 scheduler_service.py:138] processing 10596 files for subscription all_files (first_event=True)
D0713 17:01:10.961242 11083 scheduler_service.py:138] processing 1 files for subscription pantsd_pid (first_event=True)
I0713 17:01:11.853085 11083 pailgun_server.py:72] handling pailgun request: `./pants --no-pantsrc --pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d --kill-nailguns --print-exception-stacktrace=True --pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini help`
D0713 17:01:11.853262 11083 pailgun_server.py:73] pailgun request environment: [essentially identical to the environment dump logged above; elided]
D0713 17:01:11.853672 11083 pailgun_service.py:60] execution commandline: [u'./pants', u'--no-pantsrc', u'--pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d', u'--kill-nailguns', u'--print-exception-stacktrace=True', u'--pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini', u'help']
D0713 17:01:11.991677 11083 pailgun_service.py:67] warming the product graph via <pants.pantsd.service.scheduler_service.SchedulerService object at 0x7efc81172450>
D0713 17:01:11.998593 11083 target_roots_calculator.py:145] spec_roots are: None
D0713 17:01:11.998742 11083 target_roots_calculator.py:146] changed_request is: ChangedRequest(changes_since=None, diffspec=None, include_dependees=none, fast=False)
D0713 17:01:11.998908 11083 target_roots_calculator.py:147] owned_files are: []
D0713 17:01:11.999090 11083 engine_initializer.py:155] warming target_roots for: TargetRoots(specs=None)
D0713 17:01:11.999563 11083 scheduler.py:496] computed 0 nodes in 0.000279 seconds. there are 0 total nodes.
D0713 17:01:11.999991 11083 process_manager.py:213] purging metadata directory: /home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.pids/pantsd-run-2018-07-13t17_01_11_999732
D0713 17:01:12.000281 11083 process_manager.py:460] forking <pants.bin.daemon_pants_runner.DaemonPantsRunner object at 0x7efc7d6bdb10>
I0713 17:01:12.008246 11083 pailgun_server.py:81] pailgun request completed: `./pants --no-pantsrc --pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d --kill-nailguns --print-exception-stacktrace=True --pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini help`
I0713 17:01:14.511209 11241 pants_daemon.py:360] pantsd starting, log level is INFO
I0713 17:01:14.511677 11241 pants_daemon.py:309] setting up service <pants.pantsd.service.fs_event_service.FSEventService object at 0x7f8083eb1b10>
I0713 17:01:14.511878 11241 pants_daemon.py:309] setting up service <pants.pantsd.service.scheduler_service.SchedulerService object at 0x7f8087920450>
I0713 17:01:14.512000 11241 scheduler_service.py:74] watching invalidating files: set([])
I0713 17:01:14.512159 11241 pants_daemon.py:309] setting up service <pants.pantsd.service.pailgun_service.PailgunService object at 0x7f8083e5dfd0>
I0713 17:01:14.512275 11241 pants_daemon.py:309] setting up service <pants.pantsd.service.store_gc_service.StoreGCService object at 0x7f8083e5ddd0>
I0713 17:01:14.512548 11241 pants_daemon.py:328] starting service <pants.pantsd.service.fs_event_service.FSEventService object at 0x7f8083eb1b10>
I0713 17:01:14.513623 11241 pants_daemon.py:328] starting service <pants.pantsd.service.store_gc_service.StoreGCService object at 0x7f8083e5ddd0>
I0713 17:01:14.514419 11241 pants_daemon.py:328] starting service <pants.pantsd.service.scheduler_service.SchedulerService object at 0x7f8087920450>
I0713 17:01:14.514931 11241 pants_daemon.py:328] starting service <pants.pantsd.service.pailgun_service.PailgunService object at 0x7f8083e5dfd0>
I0713 17:01:14.515222 11241 pailgun_service.py:102] starting pailgun server on port 38478
I0713 17:01:14.564955 11241 watchman.py:184] confirmed watchman subscription: {u'subscribe': 'all_files', u'version': u'4.9.1', u'clock': u'c:1531501269:11064:1:105'}
I0713 17:01:14.566349 11241 pailgun_server.py:72] handling pailgun request: `./pants --no-pantsrc --pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d --kill-nailguns --print-exception-stacktrace=True --pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini -linfo help`
I0713 17:01:14.568869 11241 scheduler_service.py:82] enqueuing 10596 changes for subscription all_files
I0713 17:01:14.600944 11241 watchman.py:184] confirmed watchman subscription: {u'subscribe': 'pantsd_pid', u'version': u'4.9.1', u'clock': u'c:1531501269:11064:1:108'}
I0713 17:01:14.602884 11241 scheduler_service.py:82] enqueuing 1 changes for subscription pantsd_pid
I0713 17:01:14.747867 11241 pailgun_server.py:81] pailgun request completed: `./pants --no-pantsrc --pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d --kill-nailguns --print-exception-stacktrace=True --pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini -linfo help`
I0713 17:01:15.908832 11241 pailgun_server.py:72] handling pailgun request: `./pants --no-pantsrc --pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d --kill-nailguns --print-exception-stacktrace=True --pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini help`
I0713 17:01:16.073143 11241 pailgun_server.py:81] pailgun request completed: `./pants --no-pantsrc --pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d --kill-nailguns --print-exception-stacktrace=True --pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini help`
D0713 17:01:18.465053 11396 pants_daemon.py:302] logging initialized
I0713 17:01:18.465276 11396 pants_daemon.py:360] pantsd starting, log level is DEBUG
I0713 17:01:18.465723 11396 pants_daemon.py:309] setting up service <pants.pantsd.service.fs_event_service.FSEventService object at 0x7fd2427cab10>
I0713 17:01:18.465912 11396 pants_daemon.py:309] setting up service <pants.pantsd.service.scheduler_service.SchedulerService object at 0x7fd246238450>
I0713 17:01:18.466047 11396 scheduler_service.py:74] watching invalidating files: set([])
I0713 17:01:18.466186 11396 pants_daemon.py:309] setting up service <pants.pantsd.service.pailgun_service.PailgunService object at 0x7fd242774d10>
I0713 17:01:18.466299 11396 pants_daemon.py:309] setting up service <pants.pantsd.service.store_gc_service.StoreGCService object at 0x7fd242774f50>
I0713 17:01:18.466587 11396 pants_daemon.py:328] starting service <pants.pantsd.service.fs_event_service.FSEventService object at 0x7fd2427cab10>
I0713 17:01:18.467003 11396 pants_daemon.py:328] starting service <pants.pantsd.service.store_gc_service.StoreGCService object at 0x7fd242774f50>
I0713 17:01:18.467569 11396 pants_daemon.py:328] starting service <pants.pantsd.service.scheduler_service.SchedulerService object at 0x7fd246238450>
D0713 17:01:18.467767 11396 store_gc_service.py:40] Extending leases
D0713 17:01:18.468694 11396 store_gc_service.py:42] Done extending leases
I0713 17:01:18.469058 11396 pants_daemon.py:328] starting service <pants.pantsd.service.pailgun_service.PailgunService object at 0x7fd242774d10>
I0713 17:01:18.469566 11396 pailgun_service.py:102] starting pailgun server on port 33411
D0713 17:01:18.470180 11396 watchman.py:68] setting initial watchman timeout to 30.0
D0713 17:01:18.486447 11396 watchman.py:152] set post-startup watchman timeout to 5.0
D0713 17:01:18.486927 11396 watchman.py:177] watchman command_list is: [[u'subscribe', '/home/travis/build/pantsbuild/pants', u'all_files', {'fields': [u'name'], 'expression': [u'allof', [u'not', [u'dirname', u'dist', [u'depth', u'eq', 0]]], [u'not', [u'pcre', u'^\\..*', u'wholename']], [u'not', [u'match', u'*.pyc']]]}], [u'subscribe', '/home/travis/build/pantsbuild/pants', u'pantsd_pid', {'fields': [u'name'], 'expression': [u'allof', [u'dirname', u'.pants.d/tmp/tmpsYhrVh.pants.d/.pids/pantsd'], [u'name', u'pid']]}]]
I0713 17:01:18.517385 11396 watchman.py:184] confirmed watchman subscription: {u'subscribe': 'all_files', u'version': u'4.9.1', u'clock': u'c:1531501269:11064:1:209'}
I0713 17:01:18.521579 11396 scheduler_service.py:82] enqueuing 10596 changes for subscription all_files
D0713 17:01:18.561434 11396 scheduler_service.py:138] processing 10596 files for subscription all_files (first_event=True)
I0713 17:01:18.561686 11396 watchman.py:184] confirmed watchman subscription: {u'subscribe': 'pantsd_pid', u'version': u'4.9.1', u'clock': u'c:1531501269:11064:1:210'}
D0713 17:01:18.563321 11396 fs_event_service.py:161] callback ID 1 for all_files succeeded
I0713 17:01:18.563710 11396 scheduler_service.py:82] enqueuing 1 changes for subscription pantsd_pid
D0713 17:01:18.564069 11396 scheduler_service.py:138] processing 1 files for subscription pantsd_pid (first_event=True)
D0713 17:01:18.564371 11396 fs_event_service.py:161] callback ID 2 for pantsd_pid succeeded
I0713 17:01:18.565150 11396 pailgun_server.py:72] handling pailgun request: `./pants --no-pantsrc --pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d --kill-nailguns --print-exception-stacktrace=True --pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini -ldebug help`
D0713 17:01:18.565243 11396 pailgun_server.py:73] pailgun request environment: {u'NAILGUN_PATHSEPARATOR': u':', u'TRAVIS_PULL_REQUEST_BRANCH': u'python/resolve/fix-current-platform-handling', u'rvm_version': u'1.29.3 (latest)', u'LC_CTYPE': u'en_US.UTF-8', u'TRAVIS': u'true', u'NAILGUN_TTY_2': u'0', u'NAILGUN_TTY_0': u'0', u'NAILGUN_TTY_1': u'0', u'TRAVIS_REPO_SLUG': u'pantsbuild/pants', u'PANTSD_RUNTRACKER_CLIENT_START_TIME': u'1531501277.17', u'TRAVIS_STACK_LANGUAGES': u'__garnet__ c c++ clojure cplusplus cpp default go groovy java node_js php pure_java python ruby scala', u'JRUBY_OPTS': u' --client -J-XX:+TieredCompilation -J-XX:TieredStopAtLevel=1 -J-Xss2m -Xcompile.invokedynamic=false', u'VIRTUAL_ENV': u'/home/travis/build/pantsbuild/pants/build-support/pants_dev_deps.venv', u'SHELL': u'/bin/bash', u'TRAVIS_UID': u'2000', u'PYENV_SHELL': u'bash', u'TRAVIS_BRANCH': u'master', u'TRAVIS_PULL_REQUEST_SLUG': u'jsirois/pants', u'HISTSIZE': u'1000', u'NVM_BIN': u'/home/travis/.nvm/versions/node/v8.9.1/bin', u'RBENV_SHELL': u'bash', u'MANPATH': u'/home/travis/.nvm/versions/node/v8.9.1/share/man:/home/travis/.kiex/elixirs/elixir-1.4.5/man:/home/travis/.rvm/rubies/ruby-2.4.1/share/man:/usr/local/man:/usr/local/cmake-3.9.2/man:/usr/local/clang-5.0.0/share/man:/usr/local/share/man:/usr/share/man:/home/travis/.rvm/man', u'JAVA_HOME': u'/usr/lib/jvm/java-8-oracle', u'XDG_RUNTIME_DIR': u'/run/user/2000', u'PYTHONPATH': u'/home/travis/build/pantsbuild/pants/src/python:', u'_system_type': u'Linux', u'TRAVIS_SECURE_ENV_VARS': u'false', u'MY_RUBY_HOME': u'/home/travis/.rvm/rubies/ruby-2.4.1', u'XDG_SESSION_ID': u'2', u'TRAVIS_DIST': u'trusty', u'RUBY_VERSION': u'ruby-2.4.1', u'CXX': u'g++', u'PIP_DISABLE_PIP_VERSION_CHECK': u'1', u'_system_version': u'14.04', u'TRAVIS_COMMIT_RANGE': u'2e171f73d1cc32256cf93d5b04246fda2ccb58f3...fe2907e76d98458a78ee066377965df4ca16ee9e', u'MAIL': u'/var/mail/travis', u'SSH_CONNECTION': u'10.10.4.33 36036 10.20.0.218 22', u'GOPATH': u'/home/travis/gopath', u'CONTINUOUS_INTEGRATION': u'true', u'GOROOT': u'/home/travis/.gimme/versions/go1.7.4.linux.amd64', u'TRAVIS_STACK_TIMESTAMP': u'2017-12-05 19:33:09 UTC', u'RACK_ENV': u'test', u'USER': u'travis', u'PYTHONUNBUFFERED': u'1', u'PS1': u'(pants_dev_deps.venv) ', u'PS4': u'+', u'SHLVL': u'3', u'TRAVIS_PULL_REQUEST_SHA': u'fe2907e76d98458a78ee066377965df4ca16ee9e', u'SHARD': u'Python integration tests for pants - shard 3', u'MERB_ENV': u'test', u'JDK_SWITCHER_DEFAULT': u'oraclejdk8', u'GIT_ASKPASS': u'echo', u'GEM_PATH': u'/home/travis/.rvm/gems/ruby-2.4.1:/home/travis/.rvm/gems/ruby-2.4.1@global', u'HAS_ANTARES_THREE_LITTLE_FRONZIES_BADGE': u'true', u'TRAVIS_EVENT_TYPE': u'pull_request', u'TRAVIS_TAG': u'', u'NAILGUN_FILESEPARATOR': u'/', u'TRAVIS_BUILD_NUMBER': u'18440', u'PYENV_ROOT': u'/opt/pyenv', u'TRAVIS_STACK_FEATURES': u'basic cassandra chromium couchdb disabled-ipv6 docker docker-compose elasticsearch firefox go-toolchain google-chrome jdk memcached mongodb mysql neo4j nodejs_interpreter perl_interpreter perlbrew phantomjs postgresql python_interpreter rabbitmq redis riak ruby_interpreter sqlite xserver', u'_system_name': u'Ubuntu', u'PAGER': u'cat', u'PYTEST_PASSTHRU_ARGS': u'-v --duration=3', u'TRAVIS_SUDO': u'true', u'MIX_ARCHIVES': u'/home/travis/.kiex/mix/elixir-1.4.5', u'TRAVIS_BUILD_ID': u'403626566', u'PANTS_CONFIG_FILES': u'/home/travis/build/pantsbuild/pants/pants.travis-ci.ini', u'NVM_DIR': u'/home/travis/.nvm', u'TRAVIS_STACK_NAME': u'garnet', u'HOME': u'/home/travis', u'TRAVIS_PULL_REQUEST': u'6104', 
u'LANG': u'en_US.UTF-8', u'TRAVIS_COMMIT': u'a6f28670c5c10199bd4a2bd1f34d8181f721326e', u'TRAVIS_STACK_JOB_BOARD_REGISTER': u'/.job-board-register.yml', u'_system_arch': u'x86_64', u'MYSQL_UNIX_PORT': u'/var/run/mysqld/mysqld.sock', u'CI': u'true', u'rvm_prefix': u'/home/travis', u'DEBIAN_FRONTEND': u'noninteractive', u'TRAVIS_PRE_CHEF_BOOTSTRAP_TIME': u'2017-12-05T19:32:55', u'TRAVIS_COMMIT_MESSAGE': u'Merge fe2907e76d98458a78ee066377965df4ca16ee9e into 2e171f73d1cc32256cf93d5b04246fda2ccb58f3', u'IRBRC': u'/home/travis/.rvm/rubies/ruby-2.4.1/.irbrc', u'rvm_path': u'/home/travis/.rvm', u'CASHER_DIR': u'/home/travis/.casher', u'COLUMNS': u'50', u'TRAVIS_STACK_NODE_ATTRIBUTES': u'/.node-attributes.yml', u'SSH_TTY': u'/dev/pts/0', u'PERLBREW_HOME': u'/home/travis/.perlbrew', u'GEM_HOME': u'/home/travis/.rvm/gems/ruby-2.4.1', u'HAS_JOSH_K_SEAL_OF_APPROVAL': u'true', u'PYTHON_CFLAGS': u'-g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security', u'COMPOSER_NO_INTERACTION': u'1', u'NVM_CD_FLAGS': u'', u'TRAVIS_BUILD_STAGE_NAME': u'Test pants', u'SSH_CLIENT': u'10.10.4.33 36036 22', u'PERLBREW_BASHRC_VERSION': u'0.80', u'LOGNAME': u'travis', u'TRAVIS_INIT': u'upstart', u'PATH': u'/home/travis/build/pantsbuild/pants/build-support/pants_dev_deps.venv/bin:/home/travis/.rvm/gems/ruby-2.4.1/bin:/home/travis/.rvm/gems/ruby-2.4.1@global/bin:/home/travis/.rvm/rubies/ruby-2.4.1/bin:/home/travis/.rvm/bin:/home/travis/virtualenv/python2.7.13/bin:/home/travis/bin:/home/travis/.local/bin:/opt/pyenv/shims:/home/travis/.phpenv/shims:/home/travis/perl5/perlbrew/bin:/home/travis/.nvm/versions/node/v8.9.1/bin:/home/travis/.kiex/elixirs/elixir-1.4.5/bin:/home/travis/.kiex/bin:/home/travis/gopath/bin:/home/travis/.gimme/versions/go1.7.4.linux.amd64/bin:/usr/local/phantomjs/bin:/usr/local/phantomjs:/usr/local/neo4j-3.2.7/bin:/usr/local/maven-3.5.2/bin:/usr/local/cmake-3.9.2/bin:/usr/local/clang-5.0.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/travis/.phpenv/bin:/opt/pyenv/bin:/home/travis/.yarn/bin', u'TRAVIS_ALLOW_FAILURE': u'false', u'elapsed_start_time': u'1531498647', u'TERM': u'xterm', u'TZ': u'UTC', u'HISTFILESIZE': u'2000', u'TRAVIS_OSX_IMAGE': u'', u'rvm_bin_path': u'/home/travis/.rvm/bin', u'RAILS_ENV': u'test', u'PERLBREW_ROOT': u'/home/travis/perl5/perlbrew', u'TRAVIS_JOB_NUMBER': u'18440.10', u'PYTHON_CONFIGURE_OPTS': u'--enable-unicode=ucs4 --with-wide-unicode --enable-shared --enable-ipv6 --enable-loadable-sqlite-extensions --with-computed-gotos', u'LC_ALL': u'en_US.UTF-8', u'TRAVIS_JOB_ID': u'403626576', u'PYTEST_CURRENT_TEST': u'../tests/python/pants_test/pantsd/test_pantsd_integration.py::TestPantsDaemonIntegration::test_pantsd_lifecycle_invalidation (call)', u'TRAVIS_PYTHON_VERSION': u'2.7.13', u'TRAVIS_LANGUAGE': u'python', u'TRAVIS_BUILD_DIR': u'/home/travis/build/pantsbuild/pants', u'HISTCONTROL': u'ignoredups:ignorespace', u'PWD': u'/home/travis/build/pantsbuild/pants', u'TRAVIS_OS_NAME': u'linux', u'ELIXIR_VERSION': u'1.4.5', u'rvm_pretty_print_flag': u'auto'}
D0713 17:01:18.565757 11396 pailgun_service.py:60] execution commandline: [u'./pants', u'--no-pantsrc', u'--pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d', u'--kill-nailguns', u'--print-exception-stacktrace=True', u'--pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini', u'-ldebug', u'help']
D0713 17:01:18.687866 11396 pailgun_service.py:67] warming the product graph via <pants.pantsd.service.scheduler_service.SchedulerService object at 0x7fd246238450>
D0713 17:01:18.694708 11396 target_roots_calculator.py:145] spec_roots are: None
D0713 17:01:18.694891 11396 target_roots_calculator.py:146] changed_request is: ChangedRequest(changes_since=None, diffspec=None, include_dependees=none, fast=False)
D0713 17:01:18.695102 11396 target_roots_calculator.py:147] owned_files are: []
D0713 17:01:18.699723 11396 git.py:294] Executing: git --git-dir=/home/travis/build/pantsbuild/pants/.git --work-tree=/home/travis/build/pantsbuild/pants rev-parse --abbrev-ref HEAD
D0713 17:01:18.703603 11396 build_environment.py:83] Detected git repository at /home/travis/build/pantsbuild/pants on branch None
D0713 17:01:18.703967 11396 engine_initializer.py:155] warming target_roots for: TargetRoots(specs=None)
D0713 17:01:18.704945 11396 scheduler.py:496] computed 0 nodes in 0.000494 seconds. there are 0 total nodes.
D0713 17:01:18.706116 11396 process_manager.py:213] purging metadata directory: /home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.pids/pantsd-run-2018-07-13t17_01_18_705112
D0713 17:01:18.706414 11396 process_manager.py:460] forking <pants.bin.daemon_pants_runner.DaemonPantsRunner object at 0x7fd24752a210>
I0713 17:01:18.711800 11396 pailgun_server.py:81] pailgun request completed: `./pants --no-pantsrc --pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d --kill-nailguns --print-exception-stacktrace=True --pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini -ldebug help`
I0713 17:01:19.862438 11396 pailgun_server.py:72] handling pailgun request: `./pants --no-pantsrc --pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d --kill-nailguns --print-exception-stacktrace=True --pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini help`
D0713 17:01:19.862610 11396 pailgun_server.py:73] pailgun request environment: {u'NAILGUN_PATHSEPARATOR': u':', u'TRAVIS_PULL_REQUEST_BRANCH': u'python/resolve/fix-current-platform-handling', u'rvm_version': u'1.29.3 (latest)', u'LC_CTYPE': u'en_US.UTF-8', u'TRAVIS': u'true', u'NAILGUN_TTY_2': u'0', u'NAILGUN_TTY_0': u'0', u'NAILGUN_TTY_1': u'0', u'TRAVIS_REPO_SLUG': u'pantsbuild/pants', u'PANTSD_RUNTRACKER_CLIENT_START_TIME': u'1531501279.85', u'TRAVIS_STACK_LANGUAGES': u'__garnet__ c c++ clojure cplusplus cpp default go groovy java node_js php pure_java python ruby scala', u'JRUBY_OPTS': u' --client -J-XX:+TieredCompilation -J-XX:TieredStopAtLevel=1 -J-Xss2m -Xcompile.invokedynamic=false', u'VIRTUAL_ENV': u'/home/travis/build/pantsbuild/pants/build-support/pants_dev_deps.venv', u'SHELL': u'/bin/bash', u'TRAVIS_UID': u'2000', u'PYENV_SHELL': u'bash', u'TRAVIS_BRANCH': u'master', u'TRAVIS_PULL_REQUEST_SLUG': u'jsirois/pants', u'HISTSIZE': u'1000', u'NVM_BIN': u'/home/travis/.nvm/versions/node/v8.9.1/bin', u'RBENV_SHELL': u'bash', u'MANPATH': u'/home/travis/.nvm/versions/node/v8.9.1/share/man:/home/travis/.kiex/elixirs/elixir-1.4.5/man:/home/travis/.rvm/rubies/ruby-2.4.1/share/man:/usr/local/man:/usr/local/cmake-3.9.2/man:/usr/local/clang-5.0.0/share/man:/usr/local/share/man:/usr/share/man:/home/travis/.rvm/man', u'JAVA_HOME': u'/usr/lib/jvm/java-8-oracle', u'XDG_RUNTIME_DIR': u'/run/user/2000', u'PYTHONPATH': u'/home/travis/build/pantsbuild/pants/src/python:', u'_system_type': u'Linux', u'TRAVIS_SECURE_ENV_VARS': u'false', u'MY_RUBY_HOME': u'/home/travis/.rvm/rubies/ruby-2.4.1', u'XDG_SESSION_ID': u'2', u'TRAVIS_DIST': u'trusty', u'RUBY_VERSION': u'ruby-2.4.1', u'CXX': u'g++', u'PIP_DISABLE_PIP_VERSION_CHECK': u'1', u'_system_version': u'14.04', u'TRAVIS_COMMIT_RANGE': u'2e171f73d1cc32256cf93d5b04246fda2ccb58f3...fe2907e76d98458a78ee066377965df4ca16ee9e', u'MAIL': u'/var/mail/travis', u'SSH_CONNECTION': u'10.10.4.33 36036 10.20.0.218 22', u'GOPATH': u'/home/travis/gopath', u'CONTINUOUS_INTEGRATION': u'true', u'GOROOT': u'/home/travis/.gimme/versions/go1.7.4.linux.amd64', u'TRAVIS_STACK_TIMESTAMP': u'2017-12-05 19:33:09 UTC', u'RACK_ENV': u'test', u'USER': u'travis', u'PYTHONUNBUFFERED': u'1', u'PS1': u'(pants_dev_deps.venv) ', u'PS4': u'+', u'SHLVL': u'3', u'TRAVIS_PULL_REQUEST_SHA': u'fe2907e76d98458a78ee066377965df4ca16ee9e', u'SHARD': u'Python integration tests for pants - shard 3', u'MERB_ENV': u'test', u'JDK_SWITCHER_DEFAULT': u'oraclejdk8', u'GIT_ASKPASS': u'echo', u'GEM_PATH': u'/home/travis/.rvm/gems/ruby-2.4.1:/home/travis/.rvm/gems/ruby-2.4.1@global', u'HAS_ANTARES_THREE_LITTLE_FRONZIES_BADGE': u'true', u'TRAVIS_EVENT_TYPE': u'pull_request', u'TRAVIS_TAG': u'', u'NAILGUN_FILESEPARATOR': u'/', u'TRAVIS_BUILD_NUMBER': u'18440', u'PYENV_ROOT': u'/opt/pyenv', u'TRAVIS_STACK_FEATURES': u'basic cassandra chromium couchdb disabled-ipv6 docker docker-compose elasticsearch firefox go-toolchain google-chrome jdk memcached mongodb mysql neo4j nodejs_interpreter perl_interpreter perlbrew phantomjs postgresql python_interpreter rabbitmq redis riak ruby_interpreter sqlite xserver', u'_system_name': u'Ubuntu', u'PAGER': u'cat', u'PYTEST_PASSTHRU_ARGS': u'-v --duration=3', u'TRAVIS_SUDO': u'true', u'MIX_ARCHIVES': u'/home/travis/.kiex/mix/elixir-1.4.5', u'TRAVIS_BUILD_ID': u'403626566', u'PANTS_CONFIG_FILES': u'/home/travis/build/pantsbuild/pants/pants.travis-ci.ini', u'NVM_DIR': u'/home/travis/.nvm', u'TRAVIS_STACK_NAME': u'garnet', u'HOME': u'/home/travis', u'TRAVIS_PULL_REQUEST': u'6104', 
u'LANG': u'en_US.UTF-8', u'TRAVIS_COMMIT': u'a6f28670c5c10199bd4a2bd1f34d8181f721326e', u'TRAVIS_STACK_JOB_BOARD_REGISTER': u'/.job-board-register.yml', u'_system_arch': u'x86_64', u'MYSQL_UNIX_PORT': u'/var/run/mysqld/mysqld.sock', u'CI': u'true', u'rvm_prefix': u'/home/travis', u'DEBIAN_FRONTEND': u'noninteractive', u'TRAVIS_PRE_CHEF_BOOTSTRAP_TIME': u'2017-12-05T19:32:55', u'TRAVIS_COMMIT_MESSAGE': u'Merge fe2907e76d98458a78ee066377965df4ca16ee9e into 2e171f73d1cc32256cf93d5b04246fda2ccb58f3', u'IRBRC': u'/home/travis/.rvm/rubies/ruby-2.4.1/.irbrc', u'rvm_path': u'/home/travis/.rvm', u'CASHER_DIR': u'/home/travis/.casher', u'COLUMNS': u'50', u'TRAVIS_STACK_NODE_ATTRIBUTES': u'/.node-attributes.yml', u'SSH_TTY': u'/dev/pts/0', u'PERLBREW_HOME': u'/home/travis/.perlbrew', u'GEM_HOME': u'/home/travis/.rvm/gems/ruby-2.4.1', u'HAS_JOSH_K_SEAL_OF_APPROVAL': u'true', u'PYTHON_CFLAGS': u'-g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security', u'COMPOSER_NO_INTERACTION': u'1', u'NVM_CD_FLAGS': u'', u'TRAVIS_BUILD_STAGE_NAME': u'Test pants', u'SSH_CLIENT': u'10.10.4.33 36036 22', u'PERLBREW_BASHRC_VERSION': u'0.80', u'LOGNAME': u'travis', u'TRAVIS_INIT': u'upstart', u'PATH': u'/home/travis/build/pantsbuild/pants/build-support/pants_dev_deps.venv/bin:/home/travis/.rvm/gems/ruby-2.4.1/bin:/home/travis/.rvm/gems/ruby-2.4.1@global/bin:/home/travis/.rvm/rubies/ruby-2.4.1/bin:/home/travis/.rvm/bin:/home/travis/virtualenv/python2.7.13/bin:/home/travis/bin:/home/travis/.local/bin:/opt/pyenv/shims:/home/travis/.phpenv/shims:/home/travis/perl5/perlbrew/bin:/home/travis/.nvm/versions/node/v8.9.1/bin:/home/travis/.kiex/elixirs/elixir-1.4.5/bin:/home/travis/.kiex/bin:/home/travis/gopath/bin:/home/travis/.gimme/versions/go1.7.4.linux.amd64/bin:/usr/local/phantomjs/bin:/usr/local/phantomjs:/usr/local/neo4j-3.2.7/bin:/usr/local/maven-3.5.2/bin:/usr/local/cmake-3.9.2/bin:/usr/local/clang-5.0.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/travis/.phpenv/bin:/opt/pyenv/bin:/home/travis/.yarn/bin', u'TRAVIS_ALLOW_FAILURE': u'false', u'elapsed_start_time': u'1531498647', u'TERM': u'xterm', u'TZ': u'UTC', u'HISTFILESIZE': u'2000', u'TRAVIS_OSX_IMAGE': u'', u'rvm_bin_path': u'/home/travis/.rvm/bin', u'RAILS_ENV': u'test', u'PERLBREW_ROOT': u'/home/travis/perl5/perlbrew', u'TRAVIS_JOB_NUMBER': u'18440.10', u'PYTHON_CONFIGURE_OPTS': u'--enable-unicode=ucs4 --with-wide-unicode --enable-shared --enable-ipv6 --enable-loadable-sqlite-extensions --with-computed-gotos', u'LC_ALL': u'en_US.UTF-8', u'TRAVIS_JOB_ID': u'403626576', u'PYTEST_CURRENT_TEST': u'../tests/python/pants_test/pantsd/test_pantsd_integration.py::TestPantsDaemonIntegration::test_pantsd_lifecycle_invalidation (call)', u'TRAVIS_PYTHON_VERSION': u'2.7.13', u'TRAVIS_LANGUAGE': u'python', u'TRAVIS_BUILD_DIR': u'/home/travis/build/pantsbuild/pants', u'HISTCONTROL': u'ignoredups:ignorespace', u'PWD': u'/home/travis/build/pantsbuild/pants', u'TRAVIS_OS_NAME': u'linux', u'ELIXIR_VERSION': u'1.4.5', u'rvm_pretty_print_flag': u'auto'}
D0713 17:01:19.862945 11396 pailgun_service.py:60] execution commandline: [u'./pants', u'--no-pantsrc', u'--pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d', u'--kill-nailguns', u'--print-exception-stacktrace=True', u'--pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini', u'help']
D0713 17:01:19.993228 11396 pailgun_service.py:67] warming the product graph via <pants.pantsd.service.scheduler_service.SchedulerService object at 0x7fd246238450>
D0713 17:01:20.000976 11396 target_roots_calculator.py:145] spec_roots are: None
D0713 17:01:20.001117 11396 target_roots_calculator.py:146] changed_request is: ChangedRequest(changes_since=None, diffspec=None, include_dependees=none, fast=False)
D0713 17:01:20.001271 11396 target_roots_calculator.py:147] owned_files are: []
D0713 17:01:20.001456 11396 engine_initializer.py:155] warming target_roots for: TargetRoots(specs=None)
D0713 17:01:20.001965 11396 scheduler.py:496] computed 0 nodes in 0.000332 seconds. there are 0 total nodes.
D0713 17:01:20.002372 11396 process_manager.py:213] purging metadata directory: /home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.pids/pantsd-run-2018-07-13t17_01_20_002125
D0713 17:01:20.002584 11396 process_manager.py:460] forking <pants.bin.daemon_pants_runner.DaemonPantsRunner object at 0x7fd242721bd0>
I0713 17:01:20.008790 11396 pailgun_server.py:81] pailgun request completed: `./pants --no-pantsrc --pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d --kill-nailguns --print-exception-stacktrace=True --pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini help`
I0713 17:01:22.379790 11551 pants_daemon.py:360] pantsd starting, log level is INFO
I0713 17:01:22.380280 11551 pants_daemon.py:309] setting up service <pants.pantsd.service.fs_event_service.FSEventService object at 0x7fd473fbfb10>
I0713 17:01:22.380458 11551 pants_daemon.py:309] setting up service <pants.pantsd.service.scheduler_service.SchedulerService object at 0x7fd477a2e450>
I0713 17:01:22.380594 11551 scheduler_service.py:74] watching invalidating files: set([])
I0713 17:01:22.380728 11551 pants_daemon.py:309] setting up service <pants.pantsd.service.pailgun_service.PailgunService object at 0x7fd473f4fed0>
I0713 17:01:22.380857 11551 pants_daemon.py:309] setting up service <pants.pantsd.service.store_gc_service.StoreGCService object at 0x7fd473f6bc50>
I0713 17:01:22.381115 11551 pants_daemon.py:328] starting service <pants.pantsd.service.fs_event_service.FSEventService object at 0x7fd473fbfb10>
I0713 17:01:22.381432 11551 pants_daemon.py:328] starting service <pants.pantsd.service.store_gc_service.StoreGCService object at 0x7fd473f6bc50>
I0713 17:01:22.382055 11551 pants_daemon.py:328] starting service <pants.pantsd.service.scheduler_service.SchedulerService object at 0x7fd477a2e450>
I0713 17:01:22.382555 11551 pants_daemon.py:328] starting service <pants.pantsd.service.pailgun_service.PailgunService object at 0x7fd473f4fed0>
I0713 17:01:22.382962 11551 pailgun_service.py:102] starting pailgun server on port 45248
I0713 17:01:22.418956 11551 watchman.py:184] confirmed watchman subscription: {u'subscribe': 'all_files', u'version': u'4.9.1', u'clock': u'c:1531501269:11064:1:334'}
I0713 17:01:22.423099 11551 scheduler_service.py:82] enqueuing 10596 changes for subscription all_files
I0713 17:01:22.450211 11551 watchman.py:184] confirmed watchman subscription: {u'subscribe': 'pantsd_pid', u'version': u'4.9.1', u'clock': u'c:1531501269:11064:1:337'}
I0713 17:01:22.451726 11551 scheduler_service.py:82] enqueuing 1 changes for subscription pantsd_pid
I0713 17:01:22.452696 11551 pailgun_server.py:72] handling pailgun request: `./pants --no-pantsrc --pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d --kill-nailguns --print-exception-stacktrace=True --pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini -linfo help`
I0713 17:01:22.611726 11551 pailgun_server.py:81] pailgun request completed: `./pants --no-pantsrc --pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d --kill-nailguns --print-exception-stacktrace=True --pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini -linfo help`
I0713 17:01:23.761673 11551 pailgun_server.py:72] handling pailgun request: `./pants --no-pantsrc --pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d --kill-nailguns --print-exception-stacktrace=True --pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini help`
I0713 17:01:23.907471 11551 pailgun_server.py:81] pailgun request completed: `./pants --no-pantsrc --pants-workdir=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d --kill-nailguns --print-exception-stacktrace=True --pants-config-files=/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.workdir.pants.d/pants.ini help`
===============================================================
- END pantsd.log ----------------------------------------------
===============================================================
running: ./pants kill-pantsd (config={u'GLOBAL': {u'watchman_socket_path': u'/tmp/watchman.10384.sock', u'pants_subprocessdir': u'/home/travis/build/pantsbuild/pants/.pants.d/tmp/tmpsYhrVh.pants.d/.pids', u'level': u'info', u'enable_pantsd': True}}) (extra_env={})
completed in 2.86789488792 seconds
PantsDaemonMonitor: pid is 11551 is_alive=False
generated xml file: /home/travis/build/pantsbuild/pants/.pants.d/test/pytest/tests.python.pants_test.pantsd.pantsd_integration/junitxml/TEST-tests.python.pants_test.pantsd.pantsd_integration.xml
============ slowest 3 test durations ============
23.09s call ../tests/python/pants_test/pantsd/test_pantsd_integration.py::TestPantsDaemonIntegration::test_pantsd_lifecycle_invalidation
21.48s call ../tests/python/pants_test/pantsd/test_pantsd_integration.py::TestPantsDaemonIntegration::test_pantsd_run
12.54s call ../tests/python/pants_test/pantsd/test_pantsd_integration.py::TestPantsDaemonIntegration::test_pantsd_client_env_var_is_inherited_by_pantsd_runner_children
====== 1 failed, 2 passed in 57.27 seconds =======
```
Maybe a resurrection of #5056?
[pantsd] Pants does not eagerly fail when the rule graph is invalid
Apply this diff:
```diff
diff --git a/src/python/pants/backend/project_info/dependees.py b/src/python/pants/backend/project_info/dependees.py
index a8e877274..46f893d11 100644
--- a/src/python/pants/backend/project_info/dependees.py
+++ b/src/python/pants/backend/project_info/dependees.py
@@ -82,7 +82,7 @@ async def dependees_goal(
specified_addresses: Addresses, options: DependeesOptions, console: Console
) -> Dependees:
# Get every target in the project so that we can iterate over them to find their dependencies.
- all_targets = await Get[Targets](AddressSpecs([DescendantAddresses("")]))
+ all_targets = await Get[Targets](DependeesOptions([DescendantAddresses("")]))
dependencies_per_target = await MultiGet(
Get[Addresses](DependenciesRequest(tgt.get(Dependencies))) for tgt in all_targets
)
```
Then run `./v2`:
```
▶ ./v2
Rules with errors: 1
@goal_rule(pants.backend.project_info.dependees:80:dependees_goal(Addresses, DependeesOptions, Console) -> Dependees, gets=[Get[Targets](DependeesOptions), Get[Addresses](DependenciesRequest)]):
Ambiguous rules to compute Addresses with parameter types (Console, DependenciesRequest, FilesystemSpecs, OptionsBootstrapper):
@rule(
DependenciesRequest,
UnionMembership,
GlobalOptions,
) -> Addresses,
gets=[
Get[WrappedTarget](Address)
Get[InferredDependencies](InferPythonDependencies),
]
pants.engine.target:1678:resolve_dependencies
for (DependenciesRequest, OptionsBootstrapper)
@rule(AddressesWithOrigins) -> Addresses
pants.engine.internals.build_files:307:strip_address_origins
for (FilesystemSpecs, OptionsBootstrapper)
Ambiguous rules to compute Addresses with parameter types (DependenciesRequest, FilesystemSpecs, OptionsBootstrapper):
@rule(
DependenciesRequest,
UnionMembership,
GlobalOptions,
) -> Addresses,
gets=[
Get[WrappedTarget](Address)
Get[InferredDependencies](InferPythonDependencies),
]
pants.engine.target:1678:resolve_dependencies
for (DependenciesRequest, OptionsBootstrapper)
@rule(AddressesWithOrigins) -> Addresses
pants.engine.internals.build_files:307:strip_address_origins
for (FilesystemSpecs, OptionsBootstrapper)
NoneType: None
18:06:21 [INFO] waiting for pantsd to start...
18:06:26 [INFO] waiting for pantsd to start...
18:06:31 [INFO] waiting for pantsd to start...
18:06:36 [INFO] waiting for pantsd to start...
18:06:41 [INFO] waiting for pantsd to start...
18:06:46 [INFO] waiting for pantsd to start...
18:06:51 [INFO] waiting for pantsd to start...
18:06:56 [INFO] waiting for pantsd to start...
18:07:01 [INFO] waiting for pantsd to start...
18:07:06 [INFO] waiting for pantsd to start...
18:07:11 [INFO] waiting for pantsd to start...
18:07:16 [ERROR] exceeded timeout of 60 seconds while waiting for pantsd to start
Traceback (most recent call last):
File "/Users/eric/DocsLocal/code/projects/pants/src/python/pants/bin/pants_exe.py", line 36, in main
exit_code = runner.run(start_time)
File "/Users/eric/DocsLocal/code/projects/pants/src/python/pants/bin/pants_runner.py", line 86, in run
return RemotePantsRunner(self.args, self.env, options_bootstrapper).run(start_time)
File "/Users/eric/DocsLocal/code/projects/pants/src/python/pants/bin/remote_pants_runner.py", line 209, in run
return self._run_pants_with_retry(self._client.maybe_launch())
File "/Users/eric/DocsLocal/code/projects/pants/src/python/pants/pantsd/pants_daemon_client.py", line 35, in maybe_launch
return self._launch()
File "/Users/eric/DocsLocal/code/projects/pants/src/python/pants/pantsd/pants_daemon_client.py", line 59, in _launch
pantsd_pid = self.await_pid(60)
File "/Users/eric/DocsLocal/code/projects/pants/src/python/pants/pantsd/process_manager.py", line 396, in await_pid
caster=int,
File "/Users/eric/DocsLocal/code/projects/pants/src/python/pants/pantsd/process_manager.py", line 245, in await_metadata_by_name
self._wait_for_file(file_path, ongoing_msg, completed_msg, timeout=timeout)
File "/Users/eric/DocsLocal/code/projects/pants/src/python/pants/pantsd/process_manager.py", line 184, in _wait_for_file
return cls._deadline_until(file_waiter, ongoing_msg, completed_msg, timeout=timeout)
File "/Users/eric/DocsLocal/code/projects/pants/src/python/pants/pantsd/process_manager.py", line 158, in _deadline_until
timeout, ongoing_msg
pants.pantsd.process_manager.ProcessMetadataManager.Timeout: exceeded timeout of 60 seconds while waiting for pantsd to start
```
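(Editorial illustration: the following is a minimal, hypothetical sketch — none of these names are Pants' real internals — of the eager-failure behavior the issue title asks for: validate the rule graph before advertising the daemon, so a waiting client gets an immediate non-zero exit instead of the 60-second timeout shown above.)
```python
# Hypothetical sketch only; RuleGraphError / validate_rule_graph / start_daemon
# are invented names for illustration, not Pants' actual API.

class RuleGraphError(Exception):
    """Raised when the rule graph is ambiguous or unsatisfiable."""


def validate_rule_graph(rules):
    # Stand-in for the real solver: the real check computes a unique rule for
    # each (product, param-set) pair, as in the "Ambiguous rules" output above.
    ambiguous = [r for r in rules if getattr(r, "ambiguous", False)]
    if ambiguous:
        raise RuleGraphError(f"Rules with errors: {len(ambiguous)}")


def start_daemon(rules):
    # Validate *before* writing the pid file or opening the pailgun port, so
    # the client's "waiting for pantsd to start..." loop fails fast.
    try:
        validate_rule_graph(rules)
    except RuleGraphError as exc:
        print(exc)
        raise SystemExit(1)
    # ... only now start services and advertise the pid ...
```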
| @kwlzn you already had a peek I think, but this is here in case you need to reference the details.
Likely related test that timed out:
```
tests/python/pants_test/pantsd/test_pantsd_integration.py::TestPantsDaemonIntegration::test_pantsd_client_env_var_is_inherited_by_pantsd_runner_children <- pyprep/sources/687f782195d1805e96992ecb028ae0cc6c8a31f1/pants_test/pantsd/test_pantsd_integration.py
No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself.
Check the details on how to adjust your build configuration on: https://docs.travis-ci.com/user/common-build-problems/#Build-times-out-because-no-output-was-received
The build has been terminated
```
| 2020-06-11T23:32:49Z | [] | [] |
Traceback (most recent call last):
File "/Users/eric/DocsLocal/code/projects/pants/src/python/pants/bin/pants_exe.py", line 36, in main
exit_code = runner.run(start_time)
File "/Users/eric/DocsLocal/code/projects/pants/src/python/pants/bin/pants_runner.py", line 86, in run
return RemotePantsRunner(self.args, self.env, options_bootstrapper).run(start_time)
File "/Users/eric/DocsLocal/code/projects/pants/src/python/pants/bin/remote_pants_runner.py", line 209, in run
return self._run_pants_with_retry(self._client.maybe_launch())
File "/Users/eric/DocsLocal/code/projects/pants/src/python/pants/pantsd/pants_daemon_client.py", line 35, in maybe_launch
return self._launch()
File "/Users/eric/DocsLocal/code/projects/pants/src/python/pants/pantsd/pants_daemon_client.py", line 59, in _launch
pantsd_pid = self.await_pid(60)
File "/Users/eric/DocsLocal/code/projects/pants/src/python/pants/pantsd/process_manager.py", line 396, in await_pid
caster=int,
File "/Users/eric/DocsLocal/code/projects/pants/src/python/pants/pantsd/process_manager.py", line 245, in await_metadata_by_name
self._wait_for_file(file_path, ongoing_msg, completed_msg, timeout=timeout)
File "/Users/eric/DocsLocal/code/projects/pants/src/python/pants/pantsd/process_manager.py", line 184, in _wait_for_file
return cls._deadline_until(file_waiter, ongoing_msg, completed_msg, timeout=timeout)
File "/Users/eric/DocsLocal/code/projects/pants/src/python/pants/pantsd/process_manager.py", line 158, in _deadline_until
timeout, ongoing_msg
pants.pantsd.process_manager.ProcessMetadataManager.Timeout: exceeded timeout of 60 seconds while waiting for pantsd to start
| 15,752 |
|||
pantsbuild/pants | pantsbuild__pants-10789 | d58a5f2b30b6f39bd0b1f6e126e1193591e03a95 | diff --git a/src/python/pants/engine/internals/scheduler.py b/src/python/pants/engine/internals/scheduler.py
--- a/src/python/pants/engine/internals/scheduler.py
+++ b/src/python/pants/engine/internals/scheduler.py
@@ -346,8 +346,8 @@ def python_signal() -> bool:
def lease_files_in_graph(self, session):
self._native.lib.lease_files_in_graph(self._scheduler, session)
- def garbage_collect_store(self):
- self._native.lib.garbage_collect_store(self._scheduler)
+ def garbage_collect_store(self, target_size_bytes: int) -> None:
+ self._native.lib.garbage_collect_store(self._scheduler, target_size_bytes)
def new_session(
self,
@@ -659,5 +659,5 @@ def write_digest(self, digest: Digest, *, path_prefix: Optional[str] = None) ->
def lease_files_in_graph(self):
self._scheduler.lease_files_in_graph(self._session)
- def garbage_collect_store(self):
- self._scheduler.garbage_collect_store()
+ def garbage_collect_store(self, target_size_bytes: int) -> None:
+ self._scheduler.garbage_collect_store(target_size_bytes)
diff --git a/src/python/pants/pantsd/service/store_gc_service.py b/src/python/pants/pantsd/service/store_gc_service.py
--- a/src/python/pants/pantsd/service/store_gc_service.py
+++ b/src/python/pants/pantsd/service/store_gc_service.py
@@ -13,14 +13,19 @@ class StoreGCService(PantsService):
This service both ensures that in-use files continue to be present in the engine's Store, and
performs occasional garbage collection to bound the size of the engine's Store.
+
+ NB: The lease extension interval should be significantly less than the rust-side
+ sharded_lmdb::DEFAULT_LEASE_TIME to ensure that valid leases are extended well before they
+ might expire.
"""
def __init__(
self,
scheduler: Scheduler,
period_secs=10,
- lease_extension_interval_secs=(30 * 60),
- gc_interval_secs=(4 * 60 * 60),
+ lease_extension_interval_secs=(15 * 60),
+ gc_interval_secs=(1 * 60 * 60),
+ target_size_bytes=(4 * 1024 * 1024 * 1024),
):
super().__init__()
self._scheduler_session = scheduler.new_session(build_id="store_gc_service_session")
@@ -29,6 +34,7 @@ def __init__(
self._period_secs = period_secs
self._lease_extension_interval_secs = lease_extension_interval_secs
self._gc_interval_secs = gc_interval_secs
+ self._target_size_bytes = target_size_bytes
self._set_next_gc()
self._set_next_lease_extension()
@@ -51,7 +57,7 @@ def _maybe_garbage_collect(self):
if time.time() < self._next_gc:
return
self._logger.info("Garbage collecting store")
- self._scheduler_session.garbage_collect_store()
+ self._scheduler_session.garbage_collect_store(self._target_size_bytes)
self._logger.info("Done garbage collecting store")
self._set_next_gc()
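(Editorial note: a usage sketch based only on the constructor shown in this diff; the `scheduler` value is assumed to come from Pants' normal initialization, and the keyword values simply restate the patch's new defaults.)
```python
from pants.pantsd.service.store_gc_service import StoreGCService

# `scheduler` is assumed to be an already-initialized Scheduler instance.
service = StoreGCService(
    scheduler,
    period_secs=10,                            # how often the service loop wakes
    lease_extension_interval_secs=15 * 60,     # well under the rust-side lease time
    gc_interval_secs=1 * 60 * 60,              # hourly GC pass
    target_size_bytes=4 * 1024 * 1024 * 1024,  # shrink the store toward 4 GiB
)
```
Per the docstring added above, the lease-extension interval is deliberately much shorter than the rust-side `sharded_lmdb::DEFAULT_LEASE_TIME`, so valid leases are renewed well before they can expire.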
| Digest does not exist when switching repos with pantsd off in one, on in the other
```
18:08:49.83 [WARN] <unknown>:882: DeprecationWarning: invalid escape sequence \d
18:08:52.21 [WARN] Completed: Find PEX Python - No bootstrap Python executable could be found from the option `interpreter_search_paths` in the `[python-setup]` scope. Will attempt to run PEXes directly.
18:08:53.11 [WARN] /data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pants/base/exception_sink.py:359: DeprecationWarning: PY_SSIZE_T_CLEAN will be required for '#' formats
process_title=setproctitle.getproctitle(),
18:08:53.11 [ERROR] 1 Exception encountered:
Engine traceback:
in select
in `binary` goal
in pants.backend.python.rules.create_python_binary.create_python_binary
in pants.backend.python.rules.pex.two_step_create_pex
in pants.backend.python.rules.pex.create_pex
Traceback (no traceback):
<pants native internals>
Exception: String("Digest Digest(Fingerprint<97adba2ad1bfef3ba1b37d6b119e15498ed2a60392241b5ea7c28c602826dd6c>, 97) did not exist in the Store.")
Traceback (most recent call last):
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pants/bin/local_pants_runner.py", line 255, in run
engine_result = self._run_v2()
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pants/bin/local_pants_runner.py", line 166, in _run_v2
return self._maybe_run_v2_body(goals, poll=False)
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pants/bin/local_pants_runner.py", line 183, in _maybe_run_v2_body
return self.graph_session.run_goal_rules(
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pants/init/engine_initializer.py", line 130, in run_goal_rules
exit_code = self.scheduler_session.run_goal_rule(
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pants/engine/internals/scheduler.py", line 561, in run_goal_rule
self._raise_on_error([t for _, t in throws])
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pants/engine/internals/scheduler.py", line 520, in _raise_on_error
raise ExecutionError(
pants.engine.internals.scheduler.ExecutionError: 1 Exception encountered:
Engine traceback:
in select
in `binary` goal
in pants.backend.python.rules.create_python_binary.create_python_binary
in pants.backend.python.rules.pex.two_step_create_pex
in pants.backend.python.rules.pex.create_pex
Traceback (no traceback):
<pants native internals>
Exception: String("Digest Digest(Fingerprint<97adba2ad1bfef3ba1b37d6b119e15498ed2a60392241b5ea7c28c602826dd6c>, 97) did not exist in the Store.")
```
| Sometimes I see this extra error info:
```
in pants.backend.python.rules.create_python_binary.create_python_binary
in pants.backend.python.rules.pex.two_step_create_pex
in pants.backend.python.rules.pex.create_pex
Traceback (no traceback):
<pants native internals>
Exception: String("Digest Digest(Fingerprint<3dcdb1a3bd62e17cef0250301a85627cdd844605c1e1a514076b429cd39e6870>, 97) did not exist in the Store.")
Fatal Python error: This thread state must be current when releasing
Python runtime state: finalizing (tstate=0x556a5a059f10)
Thread 0x00007f9609246700 (most recent call first):
<no Python frame>
Thread 0x00007f9603fff700 (most recent call first):
<no Python frame>
Current thread 0x00007f9609045700 (most recent call first):
<no Python frame>
Thread 0x00007f9608e44700 (most recent call first):
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pkg_resources/_vendor/pyparsing.py", line 4269 in postParse
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pkg_resources/_vendor/pyparsing.py", line 1408 in _parseNoCache
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pkg_resources/_vendor/pyparsing.py", line 3552 in parseImpl
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pkg_resources/_vendor/pyparsing.py", line 1402 in _parseNoCache
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pkg_resources/_vendor/pyparsing.py", line 4005 in parseImpl
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pkg_resources/_vendor/pyparsing.py", line 1402 in _parseNoCache
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pkg_resources/_vendor/pyparsing.py", line 3417 in parseImpl
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pkg_resources/_vendor/pyparsing.py", line 1402 in _parseNoCache
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pkg_resources/_vendor/pyparsing.py", line 3400 in parseImpl
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pkg_resources/_vendor/pyparsing.py", line 1402 in _parseNoCache
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pkg_resources/_vendor/pyparsing.py", line 3552 in parseImpl
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pkg_resources/_vendor/pyparsing.py", line 1402 in _parseNoCache
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pkg_resources/_vendor/pyparsing.py", line 3417 in parseImpl
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pkg_resources/_vendor/pyparsing.py", line 1402 in _parseNoCache
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pkg_resources/_vendor/pyparsing.py", line 1644 in parseString
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pkg_resources/_vendor/packaging/requirements.py", line 98 in __init__
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pkg_resources/__init__.py", line 3119 in __init__
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pkg_resources/__init__.py", line 3109 in parse_requirements
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pants/backend/python/rules/pex_from_targets.py", line 209 in pex_from_targets
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pants/engine/internals/native.py", line 67 in generator_send
Thread 0x00007f9611881080 (most recent call first):
<no Python frame>
Aborted (core dumped)
```
@stuhood @gshuflin
@asherf mentioned that this occurs on a machine where some repositories are using `pantsd` and some aren't. That should be accounted for in the lease extension code, but it might still have a gap. Determining whether the missing digest is an "inner" node (below a "root" digest that was kept alive) would likely be interesting.
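(Editorial illustration: a purely hypothetical sketch of the "inner node" check suggested above — `load_directory` and the `.directories` field are invented stand-ins for whatever the store actually exposes.)
```python
# Hypothetical helpers, not Pants' real Store API. `load_directory(digest)` is
# assumed to return a directory listing of child digests, or None when the
# digest's bytes are missing from the store.

def find_missing_below(root_digest, load_directory):
    """Return digests under `root_digest` whose bytes are gone.

    If the root loads but a descendant does not, the missing entry is an
    "inner" node that garbage collection removed from under a live root,
    which would indicate a gap in lease extension.
    """
    missing, stack = [], [root_digest]
    while stack:
        digest = stack.pop()
        directory = load_directory(digest)
        if directory is None:
            missing.append(digest)
            continue
        stack.extend(child.digest for child in directory.directories)
    return missing
```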
Added to the `2.0.x` milestone.
I got this when running in the TC codebase (no pantsd) after having just run in Pants's codebase (with pantsd):
```
Engine traceback:
in select
in `typecheck` goal
in Typecheck using MyPy
in pants.backend.python.util_rules.pex.create_pex
in pants.backend.python.util_rules.pex_cli.setup_pex_cli_process
in Find PEX Python
in Find binary path
in pants.engine.process.remove_platform_information
Traceback (no traceback):
<pants native internals>
Exception: Bytes from stdout Digest Digest(Fingerprint<55fd2022c440089e0812a6a9dc5affeba6ba3d30715595dfde9092125d93909b>, 93) not found in store
```
 | 2020-09-16T01:16:31Z | [] | [] |
Traceback (most recent call last):
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pants/bin/local_pants_runner.py", line 255, in run
engine_result = self._run_v2()
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pants/bin/local_pants_runner.py", line 166, in _run_v2
return self._maybe_run_v2_body(goals, poll=False)
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pants/bin/local_pants_runner.py", line 183, in _maybe_run_v2_body
return self.graph_session.run_goal_rules(
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pants/init/engine_initializer.py", line 130, in run_goal_rules
exit_code = self.scheduler_session.run_goal_rule(
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pants/engine/internals/scheduler.py", line 561, in run_goal_rule
self._raise_on_error([t for _, t in throws])
File "/data/home/asher/.cache/pants/setup/bootstrap-Linux-x86_64/2.0.0a1_py38/lib/python3.8/site-packages/pants/engine/internals/scheduler.py", line 520, in _raise_on_error
raise ExecutionError(
pants.engine.internals.scheduler.ExecutionError: 1 Exception encountered:
| 15,769 |