repo (stringclasses, 32 values) | instance_id (stringlengths, 13-37) | base_commit (stringlengths, 40) | patch (stringlengths, 1-1.89M) | test_patch (stringclasses, 1 value) | problem_statement (stringlengths, 304-69k) | hints_text (stringlengths, 0-246k) | created_at (stringlengths, 20) | version (stringclasses, 1 value) | FAIL_TO_PASS (stringclasses, 1 value) | PASS_TO_PASS (stringclasses, 1 value) | environment_setup_commit (stringclasses, 1 value) | traceback (stringlengths, 64-23.4k) | __index_level_0__ (int64, 29-19k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
numpy/numpy | numpy__numpy-24299 | b4c50e4862dd23ffcb27fe688d5e79e0456b2715 | diff --git a/numpy/f2py/crackfortran.py b/numpy/f2py/crackfortran.py
--- a/numpy/f2py/crackfortran.py
+++ b/numpy/f2py/crackfortran.py
@@ -1742,6 +1742,23 @@ def updatevars(typespec, selector, attrspec, entitydecl):
else:
del d1[k]
+ if 'len' in d1 and 'array' in d1:
+ if d1['len'] == '':
+ d1['len'] = d1['array']
+ del d1['array']
+ elif typespec == 'character':
+ if ('charselector' not in edecl) or (not edecl['charselector']):
+ edecl['charselector'] = {}
+ if 'len' in edecl['charselector']:
+ del edecl['charselector']['len']
+ edecl['charselector']['*'] = d1['len']
+ del d1['len']
+ else:
+ d1['array'] = d1['array'] + ',' + d1['len']
+ del d1['len']
+ errmess('updatevars: "%s %s" is mapped to "%s %s(%s)"\n' % (
+ typespec, e, typespec, ename, d1['array']))
+
if 'len' in d1:
if typespec in ['complex', 'integer', 'logical', 'real']:
if ('kindselector' not in edecl) or (not edecl['kindselector']):
@@ -1763,16 +1780,6 @@ def updatevars(typespec, selector, attrspec, entitydecl):
else:
edecl['='] = d1['init']
- if 'len' in d1 and 'array' in d1:
- if d1['len'] == '':
- d1['len'] = d1['array']
- del d1['array']
- else:
- d1['array'] = d1['array'] + ',' + d1['len']
- del d1['len']
- errmess('updatevars: "%s %s" is mapped to "%s %s(%s)"\n' % (
- typespec, e, typespec, ename, d1['array']))
-
if 'array' in d1:
dm = 'dimension(%s)' % d1['array']
if 'attrspec' not in edecl or (not edecl['attrspec']):
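For readers skimming the flattened diff above: below is a minimal, standalone Python sketch of the dictionary remapping the patched block performs (the `d1`/`edecl` names follow crackfortran's conventions, but the function and the example values are invented for illustration). When a `character` entity carries both a `len` and an `array` entry, the length is moved into `charselector['*']` instead of being folded into the dimension list.

```python
# Sketch only -- mirrors the logic added by the patch above, not the real crackfortran code.
def remap_len_and_array(typespec, d1, edecl):
    if 'len' in d1 and 'array' in d1:
        if d1['len'] == '':
            # Empty length: treat the parenthesised part as the length itself.
            d1['len'] = d1['array']
            del d1['array']
        elif typespec == 'character':
            # "character name(n)"-style declarations: n is a string length,
            # so it belongs in charselector['*'], not in the dimensions.
            if not edecl.get('charselector'):
                edecl['charselector'] = {}
            edecl['charselector'].pop('len', None)
            edecl['charselector']['*'] = d1['len']
            del d1['len']
        else:
            # Non-character entities: append the trailing length to the dimensions
            # (the real code also emits an errmess noting the remapping).
            d1['array'] = d1['array'] + ',' + d1['len']
            del d1['len']
    return d1, edecl

# Hypothetical example: an assumed-length character array argument.
print(remap_len_and_array('character', {'len': '*', 'array': '*'}, {}))
# -> ({'array': '*'}, {'charselector': {'*': '*'}})
```

This only illustrates the control flow; the empty `{'*': ''}` charselector visible in the build log further down is the state the patch is guarding against.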
| BUG: f2py cannot compile files it used to be able to compile
### Describe the issue:
I'm attempting to use f2py to compile some fortran code. I'm able to do this using the following numpy versions:
- 1.21.0
- 1.22.0
- 1.23.0
- 1.24.0
- 1.24.2
Starting with v1.24.3, the same compilation code no longer works using f2py. Strangely enough, if I precompile one of the files (LAPACK.f), the compilation can work with v1.24.3.
### Reproduce the code example:
```python
# I don't see how I can produce runnable code using f2py, as it requires the fortran source code
# This code produces a .so file in my project's home directory on numpy <= 1.24.2 but doesn't work on numpy >= 1.24.3
from pathlib import Path
from numpy import f2py
project_path = Path(__file__).resolve().parent
disort_directory = project_path.joinpath('disort4.0.99')
module_name = 'disort'
fortran_source_filenames = ['BDREF.f', 'DISOBRDF.f', 'ERRPACK.f', 'LAPACK.f', 'LINPAK.f', 'RDI1MACH.f']
fortran_paths = [disort_directory.joinpath(f) for f in fortran_source_filenames]
with open(disort_directory.joinpath('DISORT.f')) as disort_module:
f2py.compile(disort_module.read(), modulename=module_name, extra_args=fortran_paths)
# If I precompile LAPACK.f using:
# /usr/bin/gfortran -Wall -g -ffixed-form -fno-second-underscore -g -fno-second-underscore -fPIC -O3 -funroll-loops -c LAPACK.f
# then it works using numpy = 1.24.3. Note the only difference in the code is the LAPACK.f is now LAPACK.o
project_path = Path(__file__).resolve().parent
disort_directory = project_path.joinpath('disort4.0.99')
module_name = 'disort'
fortran_source_filenames = ['BDREF.f', 'DISOBRDF.f', 'ERRPACK.f', 'LAPACK.o', 'LINPAK.f', 'RDI1MACH.f']
fortran_paths = [disort_directory.joinpath(f) for f in fortran_source_filenames]
with open(disort_directory.joinpath('DISORT.f')) as disort_module:
f2py.compile(disort_module.read(), modulename=module_name, extra_args=fortran_paths)
```
### Error message:
```shell
There is no error message. f2py simply stops right before where it prints this line:
INFO: compiling Fortran sources
```
### Runtime information:
Line 1 output:
1.24.0
3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0]
(note that I have no idea why it says 1.24.0 when Pycharm assures me I'm using 1.24.2)
Line 2 output:
Exception ignored on calling ctypes callback function: <function ThreadpoolController._find_libraries_with_dl_iterate_phdr.<locals>.match_library_callback at 0x7f592f9c00d0>
Traceback (most recent call last):
File "~/repos/pyRT_DISORT/venv/lib/python3.10/site-packages/threadpoolctl.py", line 584, in match_library_callback
self._make_controller_from_path(filepath)
File "~/repos/pyRT_DISORT/venv/lib/python3.10/site-packages/threadpoolctl.py", line 725, in _make_controller_from_path
lib_controller = lib_controller_class(
File "~/pyRT_DISORT/venv/lib/python3.10/site-packages/threadpoolctl.py", line 842, in __init__
super().__init__(**kwargs)
File "~/repos/pyRT_DISORT/venv/lib/python3.10/site-packages/threadpoolctl.py", line 810, in __init__
self._dynlib = ctypes.CDLL(filepath, mode=_RTLD_NOLOAD)
File "/usr/lib/python3.10/ctypes/__init__.py", line 374, in __init__
self._handle = _dlopen(self._name, mode)
OSError: dlopen() error
[{'simd_extensions': {'baseline': ['SSE', 'SSE2', 'SSE3'],
'found': ['SSSE3',
'SSE41',
'POPCNT',
'SSE42',
'AVX',
'F16C',
'FMA3',
'AVX2'],
'not_found': ['AVX512F',
'AVX512CD',
'AVX512_KNL',
'AVX512_KNM',
'AVX512_SKX',
'AVX512_CLX',
'AVX512_CNL',
'AVX512_ICL']}}]
None
### Context for the issue:
I believe that LAPACK.f is one of the more widely used fortran codes. If it cannot compile in conjunction with other code, that could potentially disrupt a good number of users.
| Ping @HaoZeke, also to confirm, can you check if the issue persists on 1.25.0?
@seberg I can confirm that the issue persists on 1.25.0. It's actually why I noticed it in the first place. My code that used to run failed and then I tracked it down to the version number described above, in hopes that someone might have a better idea what caused it.
On `1.24.4` the error is:
```bash
❯ f2py -c --f90flags='-O3' -m disort BDREF.f DISOBRDF.f ERRPACK.f LAPACK.f LINPAK.f RDI1MACH.f
running build
running config_cc
INFO: unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
INFO: unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
INFO: build_src
INFO: building extension "disort" sources
INFO: f2py options: []
INFO: f2py:> /tmp/tmpuul0531u/src.linux-x86_64-3.9/disortmodule.c
creating /tmp/tmpuul0531u/src.linux-x86_64-3.9
Reading fortran codes...
Reading file 'BDREF.f' (format:fix,strict)
Reading file 'DISOBRDF.f' (format:fix,strict)
rmbadname1: Replacing "float" with "float_bn".
rmbadname1: Replacing "len" with "len_bn".
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "float" with "float_bn".
Reading file 'ERRPACK.f' (format:fix,strict)
Reading file 'LAPACK.f' (format:fix,strict)
rmbadname1: Replacing "max" with "max_bn".
Line #229 in LAPACK.f:" PARAMETER (ONE=1.0D+0,ZERO=0.0D+0)"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
rmbadname1: Replacing "max" with "max_bn".
Line #745 in LAPACK.f:" PARAMETER ( ONE = 1.0D+0, ZERO = 0.0D+0 )"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "min" with "min_bn".
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "min" with "min_bn".
rmbadname1: Replacing "max" with "max_bn".
Line #1363 in LAPACK.f:" PARAMETER ( ONE = 1.0D+0, ZERO = 0.0D+0 )"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
rmbadname1: Replacing "max" with "max_bn".
Line #2115 in LAPACK.f:" PARAMETER (ONE=1.0D+0,ZERO=0.0D+0)"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
rmbadname1: Replacing "char" with "char_bn".
rmbadname1: Replacing "int" with "int_bn".
rmbadname1: Replacing "min" with "min_bn".
rmbadname1: Replacing "max" with "max_bn".
Line #3999 in LAPACK.f:" PARAMETER ( ONE = 1.0E+0, ZERO = 0.0E+0 )"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "min" with "min_bn".
Line #4273 in LAPACK.f:" PARAMETER ( ONE = 1.0E+0, ZERO = 0.0E+0 )"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Line #4277 in LAPACK.f:" PARAMETER ( NBMAX = 64, LDWORK = NBMAX+1 )"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Line #4277 in LAPACK.f:" PARAMETER ( NBMAX = 64, LDWORK = NBMAX+1 )"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Line #4277 in LAPACK.f:" PARAMETER ( NBMAX = 64, LDWORK = NBMAX+1 )"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Line #4277 in LAPACK.f:" PARAMETER ( NBMAX = 64, LDWORK = NBMAX+1 )"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "min" with "min_bn".
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "min" with "min_bn".
rmbadname1: Replacing "max" with "max_bn".
Line #5124 in LAPACK.f:" PARAMETER (ONE=1.0E+0,ZERO=0.0E+0)"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Line #5462 in LAPACK.f:" PARAMETER (ONE=1.0E+0,ZERO=0.0E+0)"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "min" with "min_bn".
rmbadname1: Replacing "max" with "max_bn".
Line #6657 in LAPACK.f:" PARAMETER (ONE=1.0E+0,ZERO=0.0E+0)"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Reading file 'LINPAK.f' (format:fix,strict)
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "min" with "min_bn".
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "min" with "min_bn".
rmbadname1: Replacing "min" with "min_bn".
rmbadname1: Replacing "max" with "max_bn".
Reading file 'RDI1MACH.f' (format:fix,strict)
Post-processing...
Block: disort
Block: bdref
Block: brdf_hapke
Block: brdf_rpv
Block: brdf_rossli
Block: oceabrdf2
Block: shadow_eta
Block: disobrdf
{}
In: :disort:DISOBRDF.f:surfac2
vars2fortran: No typespec for argument "nazz".
Block: surfac2
Block: qgausn2
Block: zeroit2
Block: errmsg
Block: wrtbad
Block: wrtdim
Block: tstbad
Block: dgemm
In: :disort:LAPACK.f:dgemm
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
In: :disort:LAPACK.f:dgemm
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: dger
In: :disort:LAPACK.f:dger
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: dgetf2
In: :disort:LAPACK.f:dgetf2
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
In: :disort:LAPACK.f:dgetf2
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: dgetrf
In: :disort:LAPACK.f:dgetrf
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: dgetrs
In: :disort:LAPACK.f:dgetrs
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: dlamch
In: :disort:LAPACK.f:dlamch
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
In: :disort:LAPACK.f:dlamch
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: dlamc3
Block: dlaswp
Block: dscal
Block: dswap
Block: dtrsm
In: :disort:LAPACK.f:dtrsm
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
In: :disort:LAPACK.f:dtrsm
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: idamax
Block: ieeeck
Block: ilaenv
Block: iparmq
In: :disort:LAPACK.f:iparmq
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: lsame
Block: scopy
Block: sgbtf2
In: :disort:LAPACK.f:sgbtf2
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
In: :disort:LAPACK.f:sgbtf2
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: sgbtrf
In: :disort:LAPACK.f:sgbtrf
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
In: :disort:LAPACK.f:sgbtrf
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: sgbtrs
In: :disort:LAPACK.f:sgbtrs
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: sgemm
In: :disort:LAPACK.f:sgemm
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
In: :disort:LAPACK.f:sgemm
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: sgemv
In: :disort:LAPACK.f:sgemv
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
In: :disort:LAPACK.f:sgemv
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: sger
In: :disort:LAPACK.f:sger
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: slaswp
Block: stbsv
In: :disort:LAPACK.f:stbsv
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: strsm
In: :disort:LAPACK.f:strsm
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
In: :disort:LAPACK.f:strsm
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: xerbla
Block: sgbco
Block: sgbfa
Block: sgbsl
Block: sgeco
Block: sgefa
Block: sgesl
Block: sasum
Block: saxpy
Block: sdot
Block: sscal
Block: sswap
Block: isamax
Block: r1mach
Block: d1mach
Block: i1mach
Applying post-processing hooks...
character_backward_compatibility_hook
Post-processing (stage 2)...
Building modules...
Building module "disort"...
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "bdref"("bdref")...
Constructing wrapper function "bdref"...
bdref = bdref(mu,mup,dphi,brdf_type,brdf_arg)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "brdf_hapke"...
brdf_hapke(mup,mu,dphi,b0,hh,w,pi,brdf)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "brdf_rpv"...
brdf_rpv(mu_i,mu_r,dphi,rho0,kappa,g_hg,h0,brdf)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "brdf_rossli"...
brdf_rossli(mu_i,mu_r,dphi,k_iso,k_vol,k_geo,alpha0,brdf)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "oceabrdf2"...
oceabrdf2(do_shadow,refrac_index,ws,mu_i,mu_r,dphi,brdf)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "shadow_eta"("shadow_eta")...
Constructing wrapper function "shadow_eta"...
shadow_eta = shadow_eta(cos_theta,sigma_sq,pi)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "disobrdf"...
rhoq,rhou,emust,bemst,bdr_beam_analytic = disobrdf(usrang,umu,fbeam,umu0,lamber,albedo,onlyfl,rhoq,rhou,emust,bemst,debug,phi,phi0,bdr_beam_analytic,brdf_type,brdf_arg,nmug,[nstr,numu,nphi])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "surfac2"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
surfac2(albedo,delm0,cmu,fbeam,lamber,mazim,onlyfl,pi,umu,umu0,usrang,bdr,emu,bem,rmu,rhoq,rhou,emust,bemst,debug,gmu,gwt,cosmp,brdf_type,brdf_arg,[mi,mxumu,nn,numu,nazz,nstr,nmug])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "qgausn2"...
qgausn2(gmu,gwt,[m])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "zeroit2"...
zeroit2(a,[length])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "errmsg"...
errmsg(messag,fatal)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "wrtbad"("wrtbad")...
Constructing wrapper function "wrtbad"...
wrtbad = wrtbad(varnam)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "wrtdim"("wrtdim")...
Constructing wrapper function "wrtdim"...
wrtdim = wrtdim(dimnam,minval)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "tstbad"("tstbad")...
Constructing wrapper function "tstbad"...
tstbad = tstbad(varnam,relerr)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "dgemm"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
dgemm(transa,transb,m,n,k,alpha,a,b,beta,c,[lda,ldb,ldc])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "dger"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
dger(m,n,alpha,x,incx,y,incy,a,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "dgetf2"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
dgetf2(m,n,a,ipiv,info,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "dgetrf"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
dgetrf(m,n,a,ipiv,info,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "dgetrs"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
dgetrs(trans,n,nrhs,a,ipiv,b,info,[lda,ldb])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "dlamch"("dlamch")...
Constructing wrapper function "dlamch"...
dlamch = dlamch(cmach)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "dlamc3"("dlamc3")...
Constructing wrapper function "dlamc3"...
dlamc3 = dlamc3(a,b)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "dlaswp"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
dlaswp(n,a,k1,k2,ipiv,incx,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "dscal"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
dscal(n,da,dx,incx)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "dswap"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
dswap(n,dx,incx,dy,incy)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "dtrsm"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
dtrsm(side,uplo,transa,diag,m,n,alpha,a,b,[lda,ldb])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "idamax"("idamax")...
Constructing wrapper function "idamax"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
idamax = idamax(n,dx,incx)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "ieeeck"("ieeeck")...
Constructing wrapper function "ieeeck"...
ieeeck = ieeeck(ispec,zero,one)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "ilaenv"("ilaenv")...
Constructing wrapper function "ilaenv"...
ilaenv = ilaenv(ispec,name,opts,n1,n2,n3,n4)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "iparmq"("iparmq")...
Constructing wrapper function "iparmq"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getstrlength: expected a signature of a string but got: {'typespec': 'character', 'charselector': {'*': ''}, 'attrspec': [], 'dimension': ['*']}
getarrdims:warning: assumed shape array, using 0 instead of '*'
getstrlength: expected a signature of a string but got: {'typespec': 'character', 'charselector': {'*': ''}, 'attrspec': [], 'dimension': ['*']}
iparmq = iparmq(ispec,name,opts,n,ilo,ihi,lwork)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "lsame"("lsame")...
Constructing wrapper function "lsame"...
lsame = lsame(ca,cb)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "scopy"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
scopy(n,sx,incx,sy,incy)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgbtf2"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgbtf2(m,n,kl,ku,ab,ipiv,info,[ldab])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgbtrf"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgbtrf(m,n,kl,ku,ab,ipiv,info,[ldab])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgbtrs"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgbtrs(trans,n,kl,ku,nrhs,ab,ipiv,b,info,[ldab,ldb])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgemm"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgemm(transa,transb,m,n,k,alpha,a,b,beta,c,[lda,ldb,ldc])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgemv"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgemv(trans,m,n,alpha,a,x,incx,beta,y,incy,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sger"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sger(m,n,alpha,x,incx,y,incy,a,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "slaswp"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
slaswp(n,a,k1,k2,ipiv,incx,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "stbsv"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
stbsv(uplo,trans,diag,n,k,a,x,incx,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "strsm"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
strsm(side,uplo,transa,diag,m,n,alpha,a,b,[lda,ldb])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "xerbla"...
xerbla(srname,info)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgbco"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgbco(abd,n,ml,mu,ipvt,rcond,z,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgbfa"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgbfa(abd,n,ml,mu,ipvt,info,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgbsl"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgbsl(abd,n,ml,mu,ipvt,b,job,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgeco"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgeco(a,n,ipvt,rcond,z,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgefa"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgefa(a,n,ipvt,info,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgesl"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgesl(a,n,ipvt,b,job,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "sasum"("sasum")...
Constructing wrapper function "sasum"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
sasum = sasum(n,sx,incx)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "saxpy"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
saxpy(n,sa,sx,incx,sy,incy)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "sdot"("sdot")...
Constructing wrapper function "sdot"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sdot = sdot(n,sx,incx,sy,incy)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sscal"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
sscal(n,sa,sx,incx)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sswap"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sswap(n,sx,incx,sy,incy)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "isamax"("isamax")...
Constructing wrapper function "isamax"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
isamax = isamax(n,sx,incx)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "r1mach"("r1mach")...
Constructing wrapper function "r1mach"...
r1mach = r1mach(i)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "d1mach"("d1mach")...
Constructing wrapper function "d1mach"...
d1mach = d1mach(i)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "i1mach"("i1mach")...
Constructing wrapper function "i1mach"...
i1mach = i1mach(i)
Wrote C/API module "disort" to file "/tmp/tmpuul0531u/src.linux-x86_64-3.9/disortmodule.c"
Fortran 77 wrappers are saved to "/tmp/tmpuul0531u/src.linux-x86_64-3.9/disort-f2pywrappers.f"
INFO: adding '/tmp/tmpuul0531u/src.linux-x86_64-3.9/fortranobject.c' to sources.
INFO: adding '/tmp/tmpuul0531u/src.linux-x86_64-3.9' to include_dirs.
copying /home/rgoswami/Git/Github/Quansight/f2py_envs/numpy/numpy/f2py/src/fortranobject.c -> /tmp/tmpuul0531u/src.linux-x86_64-3.9
copying /home/rgoswami/Git/Github/Quansight/f2py_envs/numpy/numpy/f2py/src/fortranobject.h -> /tmp/tmpuul0531u/src.linux-x86_64-3.9
INFO: adding '/tmp/tmpuul0531u/src.linux-x86_64-3.9/disort-f2pywrappers.f' to sources.
INFO: build_src: building npy-pkg config files
running build_ext
INFO: customize UnixCCompiler
INFO: C compiler: /home/rgoswami/.micromamba/envs/numpy-dev/bin/x86_64-conda-linux-gnu-cc -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -I/home/rgoswami/.micromamba/envs/numpy-dev/include -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -fPIC
creating /tmp/tmp27l9smg9/tmp
creating /tmp/tmp27l9smg9/tmp/tmp27l9smg9
INFO: compile options: '-MMD -MF /tmp/tmp27l9smg9/file.c.d -c'
INFO: x86_64-conda-linux-gnu-cc: /tmp/tmp27l9smg9/file.c
INFO: customize UnixCCompiler using build_ext
INFO: get_default_fcompiler: matching types: '['arm', 'gnu95', 'intel', 'lahey', 'pg', 'nv', 'absoft', 'nag', 'vast', 'compaq', 'intele', 'intelem', 'gnu', 'g95', 'pathf95', 'nagfor', 'fujitsu']'
INFO: customize ArmFlangCompiler
INFO: Found executable /home/rgoswami/.micromamba/envs/numpy-dev/bin/x86_64-conda-linux-gnu-gfortran
WARN: Could not locate executable armflang
INFO: Found executable /home/rgoswami/.micromamba/envs/numpy-dev/bin/x86_64-conda-linux-gnu-ar
INFO: customize Gnu95FCompiler
INFO: Found executable /home/rgoswami/.micromamba/envs/numpy-dev/bin/x86_64-conda-linux-gnu-ld
INFO: Found executable /home/rgoswami/.micromamba/envs/numpy-dev/bin/x86_64-conda-linux-gnu-ranlib
INFO: customize Gnu95FCompiler
INFO: customize Gnu95FCompiler using build_ext
INFO: building 'disort' extension
INFO: compiling C sources
INFO: C compiler: /home/rgoswami/.micromamba/envs/numpy-dev/bin/x86_64-conda-linux-gnu-cc -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -I/home/rgoswami/.micromamba/envs/numpy-dev/include -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -fPIC
creating /tmp/tmpuul0531u/tmp
creating /tmp/tmpuul0531u/tmp/tmpuul0531u
creating /tmp/tmpuul0531u/tmp/tmpuul0531u/src.linux-x86_64-3.9
INFO: compile options: '-DNPY_DISABLE_OPTIMIZATION=1 -I/tmp/tmpuul0531u/src.linux-x86_64-3.9 -I/home/rgoswami/Git/Github/Quansight/f2py_envs/numpy/numpy/core/include -I/home/rgoswami/.micromamba/envs/numpy-dev/include/python3.9 -c'
INFO: x86_64-conda-linux-gnu-cc: /tmp/tmpuul0531u/src.linux-x86_64-3.9/disortmodule.c
INFO: x86_64-conda-linux-gnu-cc: /tmp/tmpuul0531u/src.linux-x86_64-3.9/fortranobject.c
/tmp/tmpuul0531u/src.linux-x86_64-3.9/disortmodule.c: In function 'f2py_rout_disort_iparmq':
/tmp/tmpuul0531u/src.linux-x86_64-3.9/disortmodule.c:4772:58: error: expected expression before ',' token
4772 | capi_name_as_array = ndarray_from_pyobj( NPY_STRING,,name_Dims,name_Rank, capi_name_intent,name_capi,capi_errmess);
| ^
/tmp/tmpuul0531u/src.linux-x86_64-3.9/disortmodule.c:4787:58: error: expected expression before ',' token
4787 | capi_opts_as_array = ndarray_from_pyobj( NPY_STRING,,opts_Dims,opts_Rank, capi_opts_intent,opts_capi,capi_errmess);
| ^
error: Command "/home/rgoswami/.micromamba/envs/numpy-dev/bin/x86_64-conda-linux-gnu-cc -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -I/home/rgoswami/.micromamba/envs/numpy-dev/include -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -fPIC -DNPY_DISABLE_OPTIMIZATION=1 -I/tmp/tmpuul0531u/src.linux-x86_64-3.9 -I/home/rgoswami/Git/Github/Quansight/f2py_envs/numpy/numpy/core/include -I/home/rgoswami/.micromamba/envs/numpy-dev/include/python3.9 -c /tmp/tmpuul0531u/src.linux-x86_64-3.9/disortmodule.c -o /tmp/tmpuul0531u/tmp/tmpuul0531u/src.linux-x86_64-3.9/disortmodule.o -MMD -MF /tmp/tmpuul0531u/tmp/tmpuul0531u/src.linux-x86_64-3.9/disortmodule.o.d" failed with exit status 1
```
Where it seems that the relevant error is from changes to character handling:
```bash
getstrlength: expected a signature of a string but got: {'typespec': 'character', 'charselector': {'*': ''}, 'attrspec': [], 'dimension': ['*']}
getstrlength: expected a signature of a string but got: {'typespec': 'character', 'charselector': {'*': ''}, 'attrspec': [], 'dimension': ['*']}
/tmp/tmp5l8spqw9/src.linux-x86_64-3.9/disortmodule.c: In function 'f2py_rout_disort_iparmq':
/tmp/tmp5l8spqw9/src.linux-x86_64-3.9/disortmodule.c:4772:58: error: expected expression before ',' token
4772 | capi_name_as_array = ndarray_from_pyobj( NPY_STRING,,name_Dims,name_Rank, capi_name_intent,name_capi,capi_errmess);
| ^
/tmp/tmp5l8spqw9/src.linux-x86_64-3.9/disortmodule.c:4787:58: error: expected expression before ',' token
4787 | capi_opts_as_array = ndarray_from_pyobj( NPY_STRING,,opts_Dims,opts_Rank, capi_opts_intent,opts_capi,capi_errmess);
|
```
OTOH I would suspect something in the character handling of https://github.com/numpy/numpy/issues/23356 or https://github.com/numpy/numpy/pull/23194
Diffing the outputs between `1.24.3` and `1.24.4` only shows:
```bash
Creating wrapper for Fortran function "iparmq"("iparmq")...
Constructing wrapper function "iparmq"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
```
Indeed, `f2py -c --f90flags='-O3' -m disort BDREF.f DISOBRDF.f ERRPACK.f LAPACK.f LINPAK.f RDI1MACH.f skip: iparmq` does seem to compile as well.
Will investigate ASAP. P.S. @kconnour the code being tested is [Pythonic-Distort](https://github.com/LDEO-CREW/Pythonic-DISORT/tree/main/disort4.0.99_f2py) right?
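Since the `skip: iparmq` workaround above is reported to build, here is a small hedged sketch of driving that exact command from Python via `subprocess` (the file list and flags are copied from the command in the log; the sources are assumed to sit in the current working directory):

```python
# Sketch: run the f2py command quoted above, including the iparmq skip,
# from Python. Assumes the .f files are in the current working directory.
import subprocess

cmd = [
    "f2py", "-c", "--f90flags=-O3", "-m", "disort",
    "BDREF.f", "DISOBRDF.f", "ERRPACK.f", "LAPACK.f", "LINPAK.f", "RDI1MACH.f",
    "skip:", "iparmq",
]
result = subprocess.run(cmd, capture_output=True, text=True)
print("exit code:", result.returncode)
print(result.stdout[-1000:])  # tail of the f2py build log
```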
Hi @HaoZeke, thanks for investigating! I'm really glad to see someone else is getting a similar error.
Actually, the code I'm testing is on the api branch of [my repo](https://github.com/kconnour/pyRT_DISORT) but it should be extremely similar to the code in the repo you linked. We're both apparently trying to make a front-end to a popular open-source fortran algorithm. Note that I coded a workaround to this problem in pyproject.toml, where I force it to install numpy==1.24.0 in order to circumvent this issue... so if you ran the installation script from my repo, it shouldn't encounter this error.
I'm happy to provide any additional info to help diagnose this issue!
We ran into similar issues with our code, and after testing, it comes down to having a decimal inside a PARAMETER declaration. Once that happens, everything later stops processing properly. The same seems to be happening in the examples above.
If you look at the error log posted at https://github.com/numpy/numpy/issues/24008#issuecomment-1601586519, you'll see the first error message is the one attached below.
```
Line #229 in LAPACK.f:" PARAMETER (ONE=1.0D+0,ZERO=0.0D+0)"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
```
You could confirm whether this also works for your code by modifying that parameter to just be ONE=1, ZERO=0, and recompiling. If it gets past that point, that's the problem.
We worked around this by reverting crackfortran.py to prior to the changes introduced here: https://github.com/numpy/numpy/pull/23637/files
As to speculation: I think that the changes to the if blocks in https://github.com/numpy/numpy/pull/23637/commits caused an issue that assumes the input is an integer. Even more speculative - I see the kind selector is in that portion, and the kind selector requires integers.
(https://numpy.org/doc/stable/f2py/advanced.html#dealing-with-kind-specifiers) . But I'm not sure - I didn't spend enough time on understanding crackfortran.py once we discovered that reverting crackfortran.py fixed our problem. | 2023-07-31T06:34:32Z | [] | [] |
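To check the decimal-in-PARAMETER diagnosis in isolation, here is a hedged sketch that runs crackfortran over a tiny fixed-form file containing the same kind of line flagged in the log (`PARAMETER (ONE=1.0D+0, ZERO=0.0D+0)`). The subroutine and file name are invented, and whether the `get_parameters` warning actually appears depends on the installed NumPy version:

```python
# Sketch: feed a minimal fixed-form source with decimal PARAMETER constants to
# crackfortran and watch stderr for the get_parameters warning quoted above.
import pathlib
import tempfile

from numpy.f2py import crackfortran

SRC = (
    "      SUBROUTINE FOO(X)\n"
    "      DOUBLE PRECISION X, ONE, ZERO\n"
    "      PARAMETER (ONE=1.0D+0, ZERO=0.0D+0)\n"
    "      X = ONE + ZERO\n"
    "      END\n"
)

with tempfile.TemporaryDirectory() as tmp:
    path = pathlib.Path(tmp) / "foo.f"
    path.write_text(SRC)
    # On affected versions this prints, among the usual output:
    #   get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
    blocks = crackfortran.crackfortran([str(path)])
    print(len(blocks), "top-level block(s) parsed")
```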
Traceback (most recent call last):
File "~/repos/pyRT_DISORT/venv/lib/python3.10/site-packages/threadpoolctl.py", line 584, in match_library_callback
self._make_controller_from_path(filepath)
File "~/repos/pyRT_DISORT/venv/lib/python3.10/site-packages/threadpoolctl.py", line 725, in _make_controller_from_path
lib_controller = lib_controller_class(
File "~/pyRT_DISORT/venv/lib/python3.10/site-packages/threadpoolctl.py", line 842, in __init__
super().__init__(**kwargs)
File "~/repos/pyRT_DISORT/venv/lib/python3.10/site-packages/threadpoolctl.py", line 810, in __init__
self._dynlib = ctypes.CDLL(filepath, mode=_RTLD_NOLOAD)
File "/usr/lib/python3.10/ctypes/__init__.py", line 374, in __init__
self._handle = _dlopen(self._name, mode)
OSError: dlopen() error
| 10,207 |
|||
numpy/numpy | numpy__numpy-24511 | 688f86133c78a3be569981a121d77166527d5b38 | diff --git a/numpy/_build_utils/gitversion.py b/numpy/_build_utils/gitversion.py
--- a/numpy/_build_utils/gitversion.py
+++ b/numpy/_build_utils/gitversion.py
@@ -24,6 +24,7 @@ def git_version(version):
import subprocess
import os.path
+ git_hash = ''
try:
p = subprocess.Popen(
['git', 'log', '-1', '--format="%H %aI"'],
@@ -48,8 +49,6 @@ def git_version(version):
# Only attach git tag to development versions
if 'dev' in version:
version += f'+git{git_date}.{git_hash[:7]}'
- else:
- git_hash = ''
return version, git_hash
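Reconstructed from the diff above (indentation is lost in the flattened patch, so this is a simplified sketch rather than the actual `gitversion.py`): before the fix, `git_hash` was only assigned on the code path where the `git` subprocess actually ran, so building from an sdist with no `git` binary reached the final `return` with the name unbound, which is exactly the `UnboundLocalError` in the report below. Initializing `git_hash = ''` up front, as the one-line patch does, makes every path bind it.

```python
# Simplified sketch of the failing control flow (not the real gitversion.py;
# date handling and output parsing are omitted).
import subprocess


def git_version(version, fixed=True):
    if fixed:
        git_hash = ''  # the initialization added by the patch
    try:
        p = subprocess.Popen(['git', 'log', '-1', '--format=%H'],
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    except FileNotFoundError:
        # No `git` binary (e.g. a minimal sdist build environment):
        # nothing below runs, so the unfixed version never binds git_hash.
        pass
    else:
        out, _ = p.communicate()
        if p.returncode == 0:
            git_hash = out.decode().strip()
            if 'dev' in version:
                version += f'+git.{git_hash[:7]}'
    # UnboundLocalError here when fixed=False and git is missing.
    return version, git_hash


print(git_version('1.26.0b1'))
```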
| BUG: numpy 1.26.0b1 fails to build from sdist when no git is present
### Describe the issue:
----
`<edit>`: (@mattip) Adding the root cause:
The version is missing from the sdist for 1.26b1, resulting in an attempt to get it via `git`. But that does not work because ...
`<edit>`
----
The `gitversion.py` script introduced in #24196 is broken when no `git` binary is available.
### Reproduce the code example:
```python
$ python3 numpy/_build_utils/gitversion.py
```
### Error message:
```shell
$ python3 numpy/_build_utils/gitversion.py
Traceback (most recent call last):
File "/sage/local/var/lib/sage/venv-python3.11/var/tmp/sage/build/numpy-1.26.0b1/src/numpy/_build_utils/gitversion.py", line 68, in <module>
version, git_hash = git_version(init_version())
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sage/local/var/lib/sage/venv-python3.11/var/tmp/sage/build/numpy-1.26.0b1/src/numpy/_build_utils/gitversion.py", line 53, in git_version
return version, git_hash
^^^^^^^^
UnboundLocalError: cannot access local variable 'git_hash' where it is not associated with a value
```
### Runtime information:
N/A
### Context for the issue:
https://github.com/sagemath/sage/pull/36123
| Thanks for the report @mkoeppe. That's a bug indeed. The `gitversion.py` script should check for a file containing the needed git hash (as was done with `if fs.exists('_version_meson.py')` before gh-24196). Building from an sdist should work when `git` is not installed.
@stefanv can you please have a look at this? | 2023-08-23T13:02:33Z | [] | [] |
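For context on the suggested fix direction, here is a hedged sketch of the "pregenerated version file first, git second, static version last" lookup order described above. The `_version_meson.py` name comes from the comment; assuming it defines a `version` variable is mine, as is every other detail:

```python
# Illustrative sketch of the fallback order suggested in the comment above
# (not the actual numpy build code).
import os.path
import subprocess


def resolve_version(static_version):
    here = os.path.dirname(os.path.abspath(__file__))
    pregenerated = os.path.join(here, '_version_meson.py')
    # 1) An sdist can ship a pregenerated version file, so no git is needed.
    if os.path.exists(pregenerated):
        scope = {}
        with open(pregenerated) as fh:
            exec(fh.read(), scope)  # assumed to define `version`
        return scope.get('version', static_version)
    # 2) In a git checkout, derive the suffix from the repository.
    try:
        out = subprocess.run(['git', 'rev-parse', '--short', 'HEAD'],
                             capture_output=True, text=True)
        if out.returncode == 0:
            return f'{static_version}+git.{out.stdout.strip()}'
    except FileNotFoundError:
        pass
    # 3) Fall back to the static version string.
    return static_version
```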
Traceback (most recent call last):
File "/sage/local/var/lib/sage/venv-python3.11/var/tmp/sage/build/numpy-1.26.0b1/src/numpy/_build_utils/gitversion.py", line 68, in <module>
version, git_hash = git_version(init_version())
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sage/local/var/lib/sage/venv-python3.11/var/tmp/sage/build/numpy-1.26.0b1/src/numpy/_build_utils/gitversion.py", line 53, in git_version
return version, git_hash
^^^^^^^^
UnboundLocalError: cannot access local variable 'git_hash' where it is not associated with a value
| 10,212 |
|||
numpy/numpy | numpy__numpy-24522 | ef94040e9a7db18620d825782b8a14650f7c3346 | diff --git a/numpy/_build_utils/gitversion.py b/numpy/_build_utils/gitversion.py
--- a/numpy/_build_utils/gitversion.py
+++ b/numpy/_build_utils/gitversion.py
@@ -24,6 +24,7 @@ def git_version(version):
import subprocess
import os.path
+ git_hash = ''
try:
p = subprocess.Popen(
['git', 'log', '-1', '--format="%H %aI"'],
@@ -48,8 +49,6 @@ def git_version(version):
# Only attach git tag to development versions
if 'dev' in version:
version += f'+git{git_date}.{git_hash[:7]}'
- else:
- git_hash = ''
return version, git_hash
| BUG: numpy 1.26.0b1 fails to build from sdist when no git is present
### Describe the issue:
----
`<edit>`: (@mattip) Adding the root cause:
The version is missing from the sdist for 1.26b1, resulting in an attempt to get it via `git`. But that does not work because ...
`<edit>`
----
The `gitversion.py` script introduced in #24196 is broken when no `git` binary is available.
### Reproduce the code example:
```python
$ python3 numpy/_build_utils/gitversion.py
```
### Error message:
```shell
$ python3 numpy/_build_utils/gitversion.py
Traceback (most recent call last):
File "/sage/local/var/lib/sage/venv-python3.11/var/tmp/sage/build/numpy-1.26.0b1/src/numpy/_build_utils/gitversion.py", line 68, in <module>
version, git_hash = git_version(init_version())
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sage/local/var/lib/sage/venv-python3.11/var/tmp/sage/build/numpy-1.26.0b1/src/numpy/_build_utils/gitversion.py", line 53, in git_version
return version, git_hash
^^^^^^^^
UnboundLocalError: cannot access local variable 'git_hash' where it is not associated with a value
```
### Runtime information:
N/A
### Context for the issue:
https://github.com/sagemath/sage/pull/36123
| Thanks for the report @mkoeppe. That's a bug indeed. The `gitversion.py` script should check for a file containing the needed git hash (as was done with `if fs.exists('_version_meson.py')` before gh-24196). Building from an sdist should work when `git` is not installed.
@stefanv can you please have a look at this? | 2023-08-24T16:53:29Z | [] | [] |
Traceback (most recent call last):
File "/sage/local/var/lib/sage/venv-python3.11/var/tmp/sage/build/numpy-1.26.0b1/src/numpy/_build_utils/gitversion.py", line 68, in <module>
version, git_hash = git_version(init_version())
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sage/local/var/lib/sage/venv-python3.11/var/tmp/sage/build/numpy-1.26.0b1/src/numpy/_build_utils/gitversion.py", line 53, in git_version
return version, git_hash
^^^^^^^^
UnboundLocalError: cannot access local variable 'git_hash' where it is not associated with a value
| 10,213 |
|||
numpy/numpy | numpy__numpy-24542 | 9d64cc2bbe44292c46af768fb5da9b7968c2c6cb | diff --git a/numpy/f2py/crackfortran.py b/numpy/f2py/crackfortran.py
--- a/numpy/f2py/crackfortran.py
+++ b/numpy/f2py/crackfortran.py
@@ -1742,6 +1742,23 @@ def updatevars(typespec, selector, attrspec, entitydecl):
else:
del d1[k]
+ if 'len' in d1 and 'array' in d1:
+ if d1['len'] == '':
+ d1['len'] = d1['array']
+ del d1['array']
+ elif typespec == 'character':
+ if ('charselector' not in edecl) or (not edecl['charselector']):
+ edecl['charselector'] = {}
+ if 'len' in edecl['charselector']:
+ del edecl['charselector']['len']
+ edecl['charselector']['*'] = d1['len']
+ del d1['len']
+ else:
+ d1['array'] = d1['array'] + ',' + d1['len']
+ del d1['len']
+ errmess('updatevars: "%s %s" is mapped to "%s %s(%s)"\n' % (
+ typespec, e, typespec, ename, d1['array']))
+
if 'len' in d1:
if typespec in ['complex', 'integer', 'logical', 'real']:
if ('kindselector' not in edecl) or (not edecl['kindselector']):
@@ -1763,16 +1780,6 @@ def updatevars(typespec, selector, attrspec, entitydecl):
else:
edecl['='] = d1['init']
- if 'len' in d1 and 'array' in d1:
- if d1['len'] == '':
- d1['len'] = d1['array']
- del d1['array']
- else:
- d1['array'] = d1['array'] + ',' + d1['len']
- del d1['len']
- errmess('updatevars: "%s %s" is mapped to "%s %s(%s)"\n' % (
- typespec, e, typespec, ename, d1['array']))
-
if 'array' in d1:
dm = 'dimension(%s)' % d1['array']
if 'attrspec' not in edecl or (not edecl['attrspec']):
| BUG: f2py cannot compile files it used to be able to compile
### Describe the issue:
I'm attempting to use f2py to compile some fortran code. I'm able to do this using the following numpy versions:
- 1.21.0
- 1.22.0
- 1.23.0
- 1.24.0
- 1.24.2
Starting with v1.24.3, the same compilation code no longer works using f2py. Strangely enough, if I precompile one of the files (LAPACK.f), the compilation can work with v1.24.3.
### Reproduce the code example:
```python
# I don't see how I can produce runnable code using f2py, as it requires the fortran source code
# This code produces a .so file in my project's home directory on numpy <= 1.24.2 but doesn't work on numpy >= 1.24.3
from pathlib import Path
from numpy import f2py
project_path = Path(__file__).resolve().parent
disort_directory = project_path.joinpath('disort4.0.99')
module_name = 'disort'
fortran_source_filenames = ['BDREF.f', 'DISOBRDF.f', 'ERRPACK.f', 'LAPACK.f', 'LINPAK.f', 'RDI1MACH.f']
fortran_paths = [disort_directory.joinpath(f) for f in fortran_source_filenames]
with open(disort_directory.joinpath('DISORT.f')) as disort_module:
f2py.compile(disort_module.read(), modulename=module_name, extra_args=fortran_paths)
# If I precompile LAPACK.f using:
# /usr/bin/gfortran -Wall -g -ffixed-form -fno-second-underscore -g -fno-second-underscore -fPIC -O3 -funroll-loops -c LAPACK.f
# then it works using numpy = 1.24.3. Note the only difference in the code is the LAPACK.f is now LAPACK.o
project_path = Path(__file__).resolve().parent
disort_directory = project_path.joinpath('disort4.0.99')
module_name = 'disort'
fortran_source_filenames = ['BDREF.f', 'DISOBRDF.f', 'ERRPACK.f', 'LAPACK.o', 'LINPAK.f', 'RDI1MACH.f']
fortran_paths = [disort_directory.joinpath(f) for f in fortran_source_filenames]
with open(disort_directory.joinpath('DISORT.f')) as disort_module:
f2py.compile(disort_module.read(), modulename=module_name, extra_args=fortran_paths)
```
### Error message:
```shell
There is no error message. f2py simply stops right before where it prints this line:
INFO: compiling Fortran sources
```
### Runtime information:
Line 1 output:
1.24.0
3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0]
(note that I have no idea why it says 1.24.0 when Pycharm assures me I'm using 1.24.2)
Line 2 output:
Exception ignored on calling ctypes callback function: <function ThreadpoolController._find_libraries_with_dl_iterate_phdr.<locals>.match_library_callback at 0x7f592f9c00d0>
Traceback (most recent call last):
File "~/repos/pyRT_DISORT/venv/lib/python3.10/site-packages/threadpoolctl.py", line 584, in match_library_callback
self._make_controller_from_path(filepath)
File "~/repos/pyRT_DISORT/venv/lib/python3.10/site-packages/threadpoolctl.py", line 725, in _make_controller_from_path
lib_controller = lib_controller_class(
File "~/pyRT_DISORT/venv/lib/python3.10/site-packages/threadpoolctl.py", line 842, in __init__
super().__init__(**kwargs)
File "~/repos/pyRT_DISORT/venv/lib/python3.10/site-packages/threadpoolctl.py", line 810, in __init__
self._dynlib = ctypes.CDLL(filepath, mode=_RTLD_NOLOAD)
File "/usr/lib/python3.10/ctypes/__init__.py", line 374, in __init__
self._handle = _dlopen(self._name, mode)
OSError: dlopen() error
[{'simd_extensions': {'baseline': ['SSE', 'SSE2', 'SSE3'],
'found': ['SSSE3',
'SSE41',
'POPCNT',
'SSE42',
'AVX',
'F16C',
'FMA3',
'AVX2'],
'not_found': ['AVX512F',
'AVX512CD',
'AVX512_KNL',
'AVX512_KNM',
'AVX512_SKX',
'AVX512_CLX',
'AVX512_CNL',
'AVX512_ICL']}}]
None
### Context for the issue:
I believe that LAPACK.f is one of the more widely used fortran codes. If it cannot compile in conjunction with other code, that could potentially disrupt a good number of users.
| Ping @HaoZeke, also to confirm, can you check if the issue persists on 1.25.0?
@seberg I can confirm that the issue persists on 1.25.0. It's actually why I noticed it in the first place. My code that used to run failed and then I tracked it down to the version number described above, in hopes that someone might have a better idea what caused it.
On `1.24.4` the error is:
```bash
❯ f2py -c --f90flags='-O3' -m disort BDREF.f DISOBRDF.f ERRPACK.f LAPACK.f LINPAK.f RDI1MACH.f
running build
running config_cc
INFO: unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
INFO: unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
INFO: build_src
INFO: building extension "disort" sources
INFO: f2py options: []
INFO: f2py:> /tmp/tmpuul0531u/src.linux-x86_64-3.9/disortmodule.c
creating /tmp/tmpuul0531u/src.linux-x86_64-3.9
Reading fortran codes...
Reading file 'BDREF.f' (format:fix,strict)
Reading file 'DISOBRDF.f' (format:fix,strict)
rmbadname1: Replacing "float" with "float_bn".
rmbadname1: Replacing "len" with "len_bn".
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "float" with "float_bn".
Reading file 'ERRPACK.f' (format:fix,strict)
Reading file 'LAPACK.f' (format:fix,strict)
rmbadname1: Replacing "max" with "max_bn".
Line #229 in LAPACK.f:" PARAMETER (ONE=1.0D+0,ZERO=0.0D+0)"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
rmbadname1: Replacing "max" with "max_bn".
Line #745 in LAPACK.f:" PARAMETER ( ONE = 1.0D+0, ZERO = 0.0D+0 )"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "min" with "min_bn".
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "min" with "min_bn".
rmbadname1: Replacing "max" with "max_bn".
Line #1363 in LAPACK.f:" PARAMETER ( ONE = 1.0D+0, ZERO = 0.0D+0 )"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
rmbadname1: Replacing "max" with "max_bn".
Line #2115 in LAPACK.f:" PARAMETER (ONE=1.0D+0,ZERO=0.0D+0)"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
rmbadname1: Replacing "char" with "char_bn".
rmbadname1: Replacing "int" with "int_bn".
rmbadname1: Replacing "min" with "min_bn".
rmbadname1: Replacing "max" with "max_bn".
Line #3999 in LAPACK.f:" PARAMETER ( ONE = 1.0E+0, ZERO = 0.0E+0 )"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "min" with "min_bn".
Line #4273 in LAPACK.f:" PARAMETER ( ONE = 1.0E+0, ZERO = 0.0E+0 )"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Line #4277 in LAPACK.f:" PARAMETER ( NBMAX = 64, LDWORK = NBMAX+1 )"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Line #4277 in LAPACK.f:" PARAMETER ( NBMAX = 64, LDWORK = NBMAX+1 )"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Line #4277 in LAPACK.f:" PARAMETER ( NBMAX = 64, LDWORK = NBMAX+1 )"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Line #4277 in LAPACK.f:" PARAMETER ( NBMAX = 64, LDWORK = NBMAX+1 )"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "min" with "min_bn".
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "min" with "min_bn".
rmbadname1: Replacing "max" with "max_bn".
Line #5124 in LAPACK.f:" PARAMETER (ONE=1.0E+0,ZERO=0.0E+0)"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Line #5462 in LAPACK.f:" PARAMETER (ONE=1.0E+0,ZERO=0.0E+0)"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "min" with "min_bn".
rmbadname1: Replacing "max" with "max_bn".
Line #6657 in LAPACK.f:" PARAMETER (ONE=1.0E+0,ZERO=0.0E+0)"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Reading file 'LINPAK.f' (format:fix,strict)
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "min" with "min_bn".
rmbadname1: Replacing "max" with "max_bn".
rmbadname1: Replacing "min" with "min_bn".
rmbadname1: Replacing "min" with "min_bn".
rmbadname1: Replacing "max" with "max_bn".
Reading file 'RDI1MACH.f' (format:fix,strict)
Post-processing...
Block: disort
Block: bdref
Block: brdf_hapke
Block: brdf_rpv
Block: brdf_rossli
Block: oceabrdf2
Block: shadow_eta
Block: disobrdf
{}
In: :disort:DISOBRDF.f:surfac2
vars2fortran: No typespec for argument "nazz".
Block: surfac2
Block: qgausn2
Block: zeroit2
Block: errmsg
Block: wrtbad
Block: wrtdim
Block: tstbad
Block: dgemm
In: :disort:LAPACK.f:dgemm
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
In: :disort:LAPACK.f:dgemm
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: dger
In: :disort:LAPACK.f:dger
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: dgetf2
In: :disort:LAPACK.f:dgetf2
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
In: :disort:LAPACK.f:dgetf2
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: dgetrf
In: :disort:LAPACK.f:dgetrf
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: dgetrs
In: :disort:LAPACK.f:dgetrs
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: dlamch
In: :disort:LAPACK.f:dlamch
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
In: :disort:LAPACK.f:dlamch
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: dlamc3
Block: dlaswp
Block: dscal
Block: dswap
Block: dtrsm
In: :disort:LAPACK.f:dtrsm
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
In: :disort:LAPACK.f:dtrsm
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: idamax
Block: ieeeck
Block: ilaenv
Block: iparmq
In: :disort:LAPACK.f:iparmq
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: lsame
Block: scopy
Block: sgbtf2
In: :disort:LAPACK.f:sgbtf2
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
In: :disort:LAPACK.f:sgbtf2
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: sgbtrf
In: :disort:LAPACK.f:sgbtrf
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
In: :disort:LAPACK.f:sgbtrf
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: sgbtrs
In: :disort:LAPACK.f:sgbtrs
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: sgemm
In: :disort:LAPACK.f:sgemm
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
In: :disort:LAPACK.f:sgemm
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: sgemv
In: :disort:LAPACK.f:sgemv
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
In: :disort:LAPACK.f:sgemv
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: sger
In: :disort:LAPACK.f:sger
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: slaswp
Block: stbsv
In: :disort:LAPACK.f:stbsv
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: strsm
In: :disort:LAPACK.f:strsm
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
In: :disort:LAPACK.f:strsm
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
Block: xerbla
Block: sgbco
Block: sgbfa
Block: sgbsl
Block: sgeco
Block: sgefa
Block: sgesl
Block: sasum
Block: saxpy
Block: sdot
Block: sscal
Block: sswap
Block: isamax
Block: r1mach
Block: d1mach
Block: i1mach
Applying post-processing hooks...
character_backward_compatibility_hook
Post-processing (stage 2)...
Building modules...
Building module "disort"...
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "bdref"("bdref")...
Constructing wrapper function "bdref"...
bdref = bdref(mu,mup,dphi,brdf_type,brdf_arg)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "brdf_hapke"...
brdf_hapke(mup,mu,dphi,b0,hh,w,pi,brdf)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "brdf_rpv"...
brdf_rpv(mu_i,mu_r,dphi,rho0,kappa,g_hg,h0,brdf)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "brdf_rossli"...
brdf_rossli(mu_i,mu_r,dphi,k_iso,k_vol,k_geo,alpha0,brdf)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "oceabrdf2"...
oceabrdf2(do_shadow,refrac_index,ws,mu_i,mu_r,dphi,brdf)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "shadow_eta"("shadow_eta")...
Constructing wrapper function "shadow_eta"...
shadow_eta = shadow_eta(cos_theta,sigma_sq,pi)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "disobrdf"...
rhoq,rhou,emust,bemst,bdr_beam_analytic = disobrdf(usrang,umu,fbeam,umu0,lamber,albedo,onlyfl,rhoq,rhou,emust,bemst,debug,phi,phi0,bdr_beam_analytic,brdf_type,brdf_arg,nmug,[nstr,numu,nphi])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "surfac2"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
surfac2(albedo,delm0,cmu,fbeam,lamber,mazim,onlyfl,pi,umu,umu0,usrang,bdr,emu,bem,rmu,rhoq,rhou,emust,bemst,debug,gmu,gwt,cosmp,brdf_type,brdf_arg,[mi,mxumu,nn,numu,nazz,nstr,nmug])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "qgausn2"...
qgausn2(gmu,gwt,[m])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "zeroit2"...
zeroit2(a,[length])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "errmsg"...
errmsg(messag,fatal)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "wrtbad"("wrtbad")...
Constructing wrapper function "wrtbad"...
wrtbad = wrtbad(varnam)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "wrtdim"("wrtdim")...
Constructing wrapper function "wrtdim"...
wrtdim = wrtdim(dimnam,minval)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "tstbad"("tstbad")...
Constructing wrapper function "tstbad"...
tstbad = tstbad(varnam,relerr)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "dgemm"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
dgemm(transa,transb,m,n,k,alpha,a,b,beta,c,[lda,ldb,ldc])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "dger"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
dger(m,n,alpha,x,incx,y,incy,a,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "dgetf2"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
dgetf2(m,n,a,ipiv,info,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "dgetrf"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
dgetrf(m,n,a,ipiv,info,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "dgetrs"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
dgetrs(trans,n,nrhs,a,ipiv,b,info,[lda,ldb])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "dlamch"("dlamch")...
Constructing wrapper function "dlamch"...
dlamch = dlamch(cmach)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "dlamc3"("dlamc3")...
Constructing wrapper function "dlamc3"...
dlamc3 = dlamc3(a,b)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "dlaswp"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
dlaswp(n,a,k1,k2,ipiv,incx,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "dscal"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
dscal(n,da,dx,incx)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "dswap"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
dswap(n,dx,incx,dy,incy)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "dtrsm"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
dtrsm(side,uplo,transa,diag,m,n,alpha,a,b,[lda,ldb])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "idamax"("idamax")...
Constructing wrapper function "idamax"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
idamax = idamax(n,dx,incx)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "ieeeck"("ieeeck")...
Constructing wrapper function "ieeeck"...
ieeeck = ieeeck(ispec,zero,one)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "ilaenv"("ilaenv")...
Constructing wrapper function "ilaenv"...
ilaenv = ilaenv(ispec,name,opts,n1,n2,n3,n4)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "iparmq"("iparmq")...
Constructing wrapper function "iparmq"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getstrlength: expected a signature of a string but got: {'typespec': 'character', 'charselector': {'*': ''}, 'attrspec': [], 'dimension': ['*']}
getarrdims:warning: assumed shape array, using 0 instead of '*'
getstrlength: expected a signature of a string but got: {'typespec': 'character', 'charselector': {'*': ''}, 'attrspec': [], 'dimension': ['*']}
iparmq = iparmq(ispec,name,opts,n,ilo,ihi,lwork)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "lsame"("lsame")...
Constructing wrapper function "lsame"...
lsame = lsame(ca,cb)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "scopy"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
scopy(n,sx,incx,sy,incy)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgbtf2"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgbtf2(m,n,kl,ku,ab,ipiv,info,[ldab])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgbtrf"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgbtrf(m,n,kl,ku,ab,ipiv,info,[ldab])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgbtrs"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgbtrs(trans,n,kl,ku,nrhs,ab,ipiv,b,info,[ldab,ldb])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgemm"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgemm(transa,transb,m,n,k,alpha,a,b,beta,c,[lda,ldb,ldc])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgemv"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgemv(trans,m,n,alpha,a,x,incx,beta,y,incy,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sger"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sger(m,n,alpha,x,incx,y,incy,a,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "slaswp"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
slaswp(n,a,k1,k2,ipiv,incx,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "stbsv"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
stbsv(uplo,trans,diag,n,k,a,x,incx,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "strsm"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
strsm(side,uplo,transa,diag,m,n,alpha,a,b,[lda,ldb])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "xerbla"...
xerbla(srname,info)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgbco"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgbco(abd,n,ml,mu,ipvt,rcond,z,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgbfa"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgbfa(abd,n,ml,mu,ipvt,info,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgbsl"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgbsl(abd,n,ml,mu,ipvt,b,job,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgeco"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgeco(a,n,ipvt,rcond,z,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgefa"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgefa(a,n,ipvt,info,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sgesl"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sgesl(a,n,ipvt,b,job,[lda])
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "sasum"("sasum")...
Constructing wrapper function "sasum"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
sasum = sasum(n,sx,incx)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "saxpy"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
saxpy(n,sa,sx,incx,sy,incy)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "sdot"("sdot")...
Constructing wrapper function "sdot"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sdot = sdot(n,sx,incx,sy,incy)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sscal"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
sscal(n,sa,sx,incx)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Constructing wrapper function "sswap"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
sswap(n,sx,incx,sy,incy)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "isamax"("isamax")...
Constructing wrapper function "isamax"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
isamax = isamax(n,sx,incx)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "r1mach"("r1mach")...
Constructing wrapper function "r1mach"...
r1mach = r1mach(i)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "d1mach"("d1mach")...
Constructing wrapper function "d1mach"...
d1mach = d1mach(i)
Generating possibly empty wrappers"
Maybe empty "disort-f2pywrappers.f"
Creating wrapper for Fortran function "i1mach"("i1mach")...
Constructing wrapper function "i1mach"...
i1mach = i1mach(i)
Wrote C/API module "disort" to file "/tmp/tmpuul0531u/src.linux-x86_64-3.9/disortmodule.c"
Fortran 77 wrappers are saved to "/tmp/tmpuul0531u/src.linux-x86_64-3.9/disort-f2pywrappers.f"
INFO: adding '/tmp/tmpuul0531u/src.linux-x86_64-3.9/fortranobject.c' to sources.
INFO: adding '/tmp/tmpuul0531u/src.linux-x86_64-3.9' to include_dirs.
copying /home/rgoswami/Git/Github/Quansight/f2py_envs/numpy/numpy/f2py/src/fortranobject.c -> /tmp/tmpuul0531u/src.linux-x86_64-3.9
copying /home/rgoswami/Git/Github/Quansight/f2py_envs/numpy/numpy/f2py/src/fortranobject.h -> /tmp/tmpuul0531u/src.linux-x86_64-3.9
INFO: adding '/tmp/tmpuul0531u/src.linux-x86_64-3.9/disort-f2pywrappers.f' to sources.
INFO: build_src: building npy-pkg config files
running build_ext
INFO: customize UnixCCompiler
INFO: C compiler: /home/rgoswami/.micromamba/envs/numpy-dev/bin/x86_64-conda-linux-gnu-cc -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -I/home/rgoswami/.micromamba/envs/numpy-dev/include -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -fPIC
creating /tmp/tmp27l9smg9/tmp
creating /tmp/tmp27l9smg9/tmp/tmp27l9smg9
INFO: compile options: '-MMD -MF /tmp/tmp27l9smg9/file.c.d -c'
INFO: x86_64-conda-linux-gnu-cc: /tmp/tmp27l9smg9/file.c
INFO: customize UnixCCompiler using build_ext
INFO: get_default_fcompiler: matching types: '['arm', 'gnu95', 'intel', 'lahey', 'pg', 'nv', 'absoft', 'nag', 'vast', 'compaq', 'intele', 'intelem', 'gnu', 'g95', 'pathf95', 'nagfor', 'fujitsu']'
INFO: customize ArmFlangCompiler
INFO: Found executable /home/rgoswami/.micromamba/envs/numpy-dev/bin/x86_64-conda-linux-gnu-gfortran
WARN: Could not locate executable armflang
INFO: Found executable /home/rgoswami/.micromamba/envs/numpy-dev/bin/x86_64-conda-linux-gnu-ar
INFO: customize Gnu95FCompiler
INFO: Found executable /home/rgoswami/.micromamba/envs/numpy-dev/bin/x86_64-conda-linux-gnu-ld
INFO: Found executable /home/rgoswami/.micromamba/envs/numpy-dev/bin/x86_64-conda-linux-gnu-ranlib
INFO: customize Gnu95FCompiler
INFO: customize Gnu95FCompiler using build_ext
INFO: building 'disort' extension
INFO: compiling C sources
INFO: C compiler: /home/rgoswami/.micromamba/envs/numpy-dev/bin/x86_64-conda-linux-gnu-cc -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -I/home/rgoswami/.micromamba/envs/numpy-dev/include -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -fPIC
creating /tmp/tmpuul0531u/tmp
creating /tmp/tmpuul0531u/tmp/tmpuul0531u
creating /tmp/tmpuul0531u/tmp/tmpuul0531u/src.linux-x86_64-3.9
INFO: compile options: '-DNPY_DISABLE_OPTIMIZATION=1 -I/tmp/tmpuul0531u/src.linux-x86_64-3.9 -I/home/rgoswami/Git/Github/Quansight/f2py_envs/numpy/numpy/core/include -I/home/rgoswami/.micromamba/envs/numpy-dev/include/python3.9 -c'
INFO: x86_64-conda-linux-gnu-cc: /tmp/tmpuul0531u/src.linux-x86_64-3.9/disortmodule.c
INFO: x86_64-conda-linux-gnu-cc: /tmp/tmpuul0531u/src.linux-x86_64-3.9/fortranobject.c
/tmp/tmpuul0531u/src.linux-x86_64-3.9/disortmodule.c: In function 'f2py_rout_disort_iparmq':
/tmp/tmpuul0531u/src.linux-x86_64-3.9/disortmodule.c:4772:58: error: expected expression before ',' token
4772 | capi_name_as_array = ndarray_from_pyobj( NPY_STRING,,name_Dims,name_Rank, capi_name_intent,name_capi,capi_errmess);
| ^
/tmp/tmpuul0531u/src.linux-x86_64-3.9/disortmodule.c:4787:58: error: expected expression before ',' token
4787 | capi_opts_as_array = ndarray_from_pyobj( NPY_STRING,,opts_Dims,opts_Rank, capi_opts_intent,opts_capi,capi_errmess);
| ^
error: Command "/home/rgoswami/.micromamba/envs/numpy-dev/bin/x86_64-conda-linux-gnu-cc -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -I/home/rgoswami/.micromamba/envs/numpy-dev/include -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /home/rgoswami/.micromamba/envs/numpy-dev/include -fPIC -DNPY_DISABLE_OPTIMIZATION=1 -I/tmp/tmpuul0531u/src.linux-x86_64-3.9 -I/home/rgoswami/Git/Github/Quansight/f2py_envs/numpy/numpy/core/include -I/home/rgoswami/.micromamba/envs/numpy-dev/include/python3.9 -c /tmp/tmpuul0531u/src.linux-x86_64-3.9/disortmodule.c -o /tmp/tmpuul0531u/tmp/tmpuul0531u/src.linux-x86_64-3.9/disortmodule.o -MMD -MF /tmp/tmpuul0531u/tmp/tmpuul0531u/src.linux-x86_64-3.9/disortmodule.o.d" failed with exit status 1
```
Where it seems that the relevant error is from changes to character handling:
```bash
getstrlength: expected a signature of a string but got: {'typespec': 'character', 'charselector': {'*': ''}, 'attrspec': [], 'dimension': ['*']}
getstrlength: expected a signature of a string but got: {'typespec': 'character', 'charselector': {'*': ''}, 'attrspec': [], 'dimension': ['*']}
/tmp/tmp5l8spqw9/src.linux-x86_64-3.9/disortmodule.c: In function 'f2py_rout_disort_iparmq':
/tmp/tmp5l8spqw9/src.linux-x86_64-3.9/disortmodule.c:4772:58: error: expected expression before ',' token
4772 | capi_name_as_array = ndarray_from_pyobj( NPY_STRING,,name_Dims,name_Rank, capi_name_intent,name_capi,capi_errmess);
| ^
/tmp/tmp5l8spqw9/src.linux-x86_64-3.9/disortmodule.c:4787:58: error: expected expression before ',' token
4787 | capi_opts_as_array = ndarray_from_pyobj( NPY_STRING,,opts_Dims,opts_Rank, capi_opts_intent,opts_capi,capi_errmess);
|
```
OTOH I would suspect something in the character handling of https://github.com/numpy/numpy/issues/23356 or https://github.com/numpy/numpy/pull/23194
Diffing the outputs between `1.24.3` and `1.24.4` only shows:
```bash
Creating wrapper for Fortran function "iparmq"("iparmq")...
Constructing wrapper function "iparmq"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
```
Indeed, `f2py -c --f90flags='-O3' -m disort BDREF.f DISOBRDF.f ERRPACK.f LAPACK.f LINPAK.f RDI1MACH.f skip: iparmq` does seem to compile as well.
Will investigate ASAP. P.S. @kconnour the code being tested is [Pythonic-DISORT](https://github.com/LDEO-CREW/Pythonic-DISORT/tree/main/disort4.0.99_f2py), right?
Hi @HaoZeke, thanks for investigating! I'm really glad to see someone else is getting a similar error.
Actually, the code I'm testing is on the api branch of [my repo](https://github.com/kconnour/pyRT_DISORT) but it should be extremely similar to the code in the repo you linked. We're both apparently trying to make a front-end to a popular open-source fortran algorithm. Note that I coded a workaround to this problem in pyproject.toml, where I force it to install numpy==1.24.0 in order to circumvent this issue... so if you ran the installation script from my repo, it shouldn't encounter this error.
I'm happy to provide any additional info to help diagnose this issue!
We ran into similar issues with our code, and after testing, it's because of having a decimal inside of a parameter declaration. Once that happens, everything later stops processing properly. The same seems to be happening in the examples above.
If you look at the error log posted, https://github.com/numpy/numpy/issues/24008#issuecomment-1601586519, you'll see that the first error message is the one shown below.
```
Line #229 in LAPACK.f:" PARAMETER (ONE=1.0D+0,ZERO=0.0D+0)"
get_parameters: got "eval() arg 1 must be a string, bytes or code object" on 4
```
You could confirm whether this also works for your code by modifying that parameter to just be ONE=1, ZERO=0, and recompiling. If it gets past that point, that's the problem.
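A quick way to run that kind of check without doing a full build is to feed a small fixed-form file straight to the parser. The sketch below is ours (the file handling, the source text, and the expectation about the diagnostic follow the decimal-in-PARAMETER hypothesis above, so treat it as an assumption rather than a confirmed reproduction):
```python
import tempfile
from numpy.f2py import crackfortran

# Hypothetical minimal source: real literals inside a PARAMETER statement,
# mirroring the LAPACK.f lines quoted above.
src = (
    "      SUBROUTINE DEMO(X)\n"
    "      DOUBLE PRECISION X, ONE, ZERO\n"
    "      PARAMETER (ONE=1.0D+0, ZERO=0.0D+0)\n"
    "      X = ONE + ZERO\n"
    "      END\n"
)
with tempfile.NamedTemporaryFile("w", suffix=".f", delete=False) as f:
    f.write(src)

# On affected versions this should print the
# 'get_parameters: got "eval() arg 1 must be a string..."' message;
# changing the constants to ONE=1, ZERO=0 should make it disappear.
crackfortran.crackfortran([f.name])
```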
We worked around this by reverting crackfortran.py to prior to the changes introduced here: https://github.com/numpy/numpy/pull/23637/files
As to speculation: I think that the changes to the if blocks in https://github.com/numpy/numpy/pull/23637/commits caused an issue that assumes the input is an integer. Even more speculative - I see the kind selector is in that portion, and the kind selector requires integers.
(https://numpy.org/doc/stable/f2py/advanced.html#dealing-with-kind-specifiers) . But I'm not sure - I didn't spend enough time on understanding crackfortran.py once we discovered that reverting crackfortran.py fixed our problem. | 2023-08-25T14:39:05Z | [] | [] |
Traceback (most recent call last):
File "~/repos/pyRT_DISORT/venv/lib/python3.10/site-packages/threadpoolctl.py", line 584, in match_library_callback
self._make_controller_from_path(filepath)
File "~/repos/pyRT_DISORT/venv/lib/python3.10/site-packages/threadpoolctl.py", line 725, in _make_controller_from_path
lib_controller = lib_controller_class(
File "~/pyRT_DISORT/venv/lib/python3.10/site-packages/threadpoolctl.py", line 842, in __init__
super().__init__(**kwargs)
File "~/repos/pyRT_DISORT/venv/lib/python3.10/site-packages/threadpoolctl.py", line 810, in __init__
self._dynlib = ctypes.CDLL(filepath, mode=_RTLD_NOLOAD)
File "/usr/lib/python3.10/ctypes/__init__.py", line 374, in __init__
self._handle = _dlopen(self._name, mode)
OSError: dlopen() error
| 10,215 |
|||
numpy/numpy | numpy__numpy-258 | b452014f0f9e8e6a69c6b95a62d70e3d99b9c0f9 | diff --git a/numpy/distutils/ccompiler.py b/numpy/distutils/ccompiler.py
--- a/numpy/distutils/ccompiler.py
+++ b/numpy/distutils/ccompiler.py
@@ -58,7 +58,11 @@ def CCompiler_spawn(self, cmd, display=None):
if s:
if is_sequence(cmd):
cmd = ' '.join(list(cmd))
- print(o)
+ try:
+ print(o)
+ except UnicodeError:
+ # When installing through pip, `o` can contain non-ascii chars
+ pass
if re.search('Too many open files', o):
msg = '\nTry rerunning setup command until build succeeds.'
else:
diff --git a/numpy/distutils/command/build_clib.py b/numpy/distutils/command/build_clib.py
--- a/numpy/distutils/command/build_clib.py
+++ b/numpy/distutils/command/build_clib.py
@@ -81,21 +81,23 @@ def run(self):
if self.have_f_sources():
from numpy.distutils.fcompiler import new_fcompiler
- self.fcompiler = new_fcompiler(compiler=self.fcompiler,
- verbose=self.verbose,
- dry_run=self.dry_run,
- force=self.force,
- requiref90='f90' in languages,
- c_compiler=self.compiler)
- if self.fcompiler is not None:
- self.fcompiler.customize(self.distribution)
+ self._f_compiler = new_fcompiler(compiler=self.fcompiler,
+ verbose=self.verbose,
+ dry_run=self.dry_run,
+ force=self.force,
+ requiref90='f90' in languages,
+ c_compiler=self.compiler)
+ if self._f_compiler is not None:
+ self._f_compiler.customize(self.distribution)
libraries = self.libraries
self.libraries = None
- self.fcompiler.customize_cmd(self)
+ self._f_compiler.customize_cmd(self)
self.libraries = libraries
- self.fcompiler.show_customization()
+ self._f_compiler.show_customization()
+ else:
+ self._f_compiler = None
self.build_libraries(self.libraries)
@@ -121,7 +123,7 @@ def build_libraries(self, libraries):
def build_a_library(self, build_info, lib_name, libraries):
# default compilers
compiler = self.compiler
- fcompiler = self.fcompiler
+ fcompiler = self._f_compiler
sources = build_info.get('sources')
if sources is None or not is_sequence(sources):
@@ -233,7 +235,7 @@ def build_a_library(self, build_info, lib_name, libraries):
debug=self.debug,
extra_postargs=extra_postargs)
- if requiref90 and self.fcompiler.module_dir_switch is None:
+ if requiref90 and self._f_compiler.module_dir_switch is None:
# move new compiled F90 module files to module_build_dir
for f in glob('*.mod'):
if f in existing_modules:
diff --git a/numpy/distutils/fcompiler/hpux.py b/numpy/distutils/fcompiler/hpux.py
--- a/numpy/distutils/fcompiler/hpux.py
+++ b/numpy/distutils/fcompiler/hpux.py
@@ -9,17 +9,17 @@ class HPUXFCompiler(FCompiler):
version_pattern = r'HP F90 (?P<version>[^\s*,]*)'
executables = {
- 'version_cmd' : ["<F90>", "+version"],
+ 'version_cmd' : ["f90", "+version"],
'compiler_f77' : ["f90"],
'compiler_fix' : ["f90"],
'compiler_f90' : ["f90"],
- 'linker_so' : None,
+ 'linker_so' : ["ld", "-b"],
'archiver' : ["ar", "-cr"],
'ranlib' : ["ranlib"]
}
module_dir_switch = None #XXX: fix me
module_include_switch = None #XXX: fix me
- pic_flags = ['+pic=long']
+ pic_flags = ['+Z']
def get_flags(self):
return self.pic_flags + ['+ppu', '+DD64']
def get_flags_opt(self):
diff --git a/numpy/distutils/fcompiler/ibm.py b/numpy/distutils/fcompiler/ibm.py
--- a/numpy/distutils/fcompiler/ibm.py
+++ b/numpy/distutils/fcompiler/ibm.py
@@ -12,7 +12,7 @@
class IBMFCompiler(FCompiler):
compiler_type = 'ibm'
description = 'IBM XL Fortran Compiler'
- version_pattern = r'(xlf\(1\)\s*|)IBM XL Fortran ((Advanced Edition |)Version |Enterprise Edition V)(?P<version>[^\s*]*)'
+ version_pattern = r'(xlf\(1\)\s*|)IBM XL Fortran ((Advanced Edition |)Version |Enterprise Edition V|for AIX, V)(?P<version>[^\s*]*)'
#IBM XL Fortran Enterprise Edition V10.1 for AIX \nVersion: 10.01.0000.0004
executables = {
@@ -86,7 +86,7 @@ def get_flags_linker_so(self):
return opt
def get_flags_opt(self):
- return ['-O5']
+ return ['-O3']
if __name__ == '__main__':
log.set_verbosity(2)
diff --git a/numpy/distutils/fcompiler/pg.py b/numpy/distutils/fcompiler/pg.py
--- a/numpy/distutils/fcompiler/pg.py
+++ b/numpy/distutils/fcompiler/pg.py
@@ -10,14 +10,14 @@ class PGroupFCompiler(FCompiler):
compiler_type = 'pg'
description = 'Portland Group Fortran Compiler'
- version_pattern = r'\s*pg(f77|f90|hpf) (?P<version>[\d.-]+).*'
+ version_pattern = r'\s*pg(f77|f90|hpf|fortran) (?P<version>[\d.-]+).*'
if platform == 'darwin':
executables = {
- 'version_cmd' : ["<F77>", "-V 2>/dev/null"],
- 'compiler_f77' : ["pgf77", "-dynamiclib"],
- 'compiler_fix' : ["pgf90", "-Mfixed", "-dynamiclib"],
- 'compiler_f90' : ["pgf90", "-dynamiclib"],
+ 'version_cmd' : ["<F77>", "-V"],
+ 'compiler_f77' : ["pgfortran", "-dynamiclib"],
+ 'compiler_fix' : ["pgfortran", "-Mfixed", "-dynamiclib"],
+ 'compiler_f90' : ["pgfortran", "-dynamiclib"],
'linker_so' : ["libtool"],
'archiver' : ["ar", "-cr"],
'ranlib' : ["ranlib"]
@@ -25,11 +25,11 @@ class PGroupFCompiler(FCompiler):
pic_flags = ['']
else:
executables = {
- 'version_cmd' : ["<F77>", "-V 2>/dev/null"],
- 'compiler_f77' : ["pgf77"],
- 'compiler_fix' : ["pgf90", "-Mfixed"],
- 'compiler_f90' : ["pgf90"],
- 'linker_so' : ["pgf90","-shared","-fpic"],
+ 'version_cmd' : ["<F77>", "-V"],
+ 'compiler_f77' : ["pgfortran"],
+ 'compiler_fix' : ["pgfortran", "-Mfixed"],
+ 'compiler_f90' : ["pgfortran"],
+ 'linker_so' : ["pgfortran","-shared","-fpic"],
'archiver' : ["ar", "-cr"],
'ranlib' : ["ranlib"]
}
diff --git a/numpy/distutils/system_info.py b/numpy/distutils/system_info.py
--- a/numpy/distutils/system_info.py
+++ b/numpy/distutils/system_info.py
@@ -125,6 +125,7 @@
from distutils.dist import Distribution
import distutils.sysconfig
from distutils import log
+from distutils.util import get_platform
from numpy.distutils.exec_command import \
find_executable, exec_command, get_pythonexe
@@ -193,14 +194,23 @@ def libpaths(paths,bits):
'/opt/local/lib','/sw/lib'], platform_bits)
default_include_dirs = ['/usr/local/include',
'/opt/include', '/usr/include',
+ # path of umfpack under macports
+ '/opt/local/include/ufsparse',
'/opt/local/include', '/sw/include',
'/usr/include/suitesparse']
default_src_dirs = ['.','/usr/local/src', '/opt/src','/sw/src']
-
default_x11_lib_dirs = libpaths(['/usr/X11R6/lib','/usr/X11/lib',
'/usr/lib'], platform_bits)
default_x11_include_dirs = ['/usr/X11R6/include','/usr/X11/include',
'/usr/include']
+ if os.path.exists('/usr/lib/X11'):
+ globbed_x11_dir = glob('/usr/lib/*/libX11.so')
+ if globbed_x11_dir:
+ x11_so_dir = os.path.split(globbed_x11_dir[0])[0]
+ default_x11_lib_dirs.extend([x11_so_dir, '/usr/lib/X11'])
+ default_x11_include_dirs.extend(['/usr/lib/X11/include',
+ '/usr/include/X11'])
+
if os.path.join(sys.prefix, 'lib') not in default_lib_dirs:
default_lib_dirs.insert(0,os.path.join(sys.prefix, 'lib'))
@@ -1273,7 +1283,6 @@ def get_atlas_version(**config):
result = _cached_atlas_version[key] = atlas_version, info
return result
-from distutils.util import get_platform
class lapack_opt_info(system_info):
@@ -1284,7 +1293,8 @@ def calc_info(self):
if sys.platform=='darwin' and not os.environ.get('ATLAS',None):
args = []
link_args = []
- if get_platform()[-4:] == 'i386':
+ if get_platform()[-4:] == 'i386' or 'intel' in get_platform() or \
+ 'i386' in platform.platform():
intel = 1
else:
intel = 0
@@ -1371,7 +1381,8 @@ def calc_info(self):
if sys.platform=='darwin' and not os.environ.get('ATLAS',None):
args = []
link_args = []
- if get_platform()[-4:] == 'i386':
+ if get_platform()[-4:] == 'i386' or 'intel' in get_platform() or \
+ 'i386' in platform.platform():
intel = 1
else:
intel = 0
diff --git a/numpy/distutils/unixccompiler.py b/numpy/distutils/unixccompiler.py
--- a/numpy/distutils/unixccompiler.py
+++ b/numpy/distutils/unixccompiler.py
@@ -17,6 +17,18 @@
# Note that UnixCCompiler._compile appeared in Python 2.3
def UnixCCompiler__compile(self, obj, src, ext, cc_args, extra_postargs, pp_opts):
"""Compile a single source files with a Unix-style compiler."""
+ # HP ad-hoc fix, see ticket 1383
+ ccomp = self.compiler_so
+ if ccomp[0] == 'aCC':
+ # remove flags that will trigger ANSI-C mode for aCC
+ if '-Ae' in ccomp:
+ ccomp.remove('-Ae')
+ if '-Aa' in ccomp:
+ ccomp.remove('-Aa')
+ # add flags for (almost) sane C++ handling
+ ccomp += ['-AA']
+ self.compiler_so = ccomp
+
display = '%s: %s' % (os.path.basename(self.compiler_so[0]),src)
try:
self.spawn(self.compiler_so + cc_args + [src, '-o', obj] +
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -114,7 +114,12 @@ def write_version_py(filename='numpy/version.py'):
GIT_REVISION = git_version()
elif os.path.exists('numpy/version.py'):
# must be a source distribution, use existing version file
- from numpy.version import git_revision as GIT_REVISION
+ try:
+ from numpy.version import git_revision as GIT_REVISION
+ except ImportError:
+ raise ImportError("Unable to import git_revision. Try removing " \
+ "numpy/version.py and the build directory " \
+ "before building.")
else:
GIT_REVISION = "Unknown"
@@ -162,6 +167,19 @@ def setup_package():
if os.path.isfile(site_cfg):
shutil.copy(site_cfg, src_path)
+ # Ugly hack to make pip work with Python 3, see #1857.
+ # Explanation: pip messes with __file__ which interacts badly with the
+ # change in directory due to the 2to3 conversion. Therefore we restore
+ # __file__ to what it would have been otherwise.
+ global __file__
+ __file__ = os.path.join(os.curdir, os.path.basename(__file__))
+ if '--egg-base' in sys.argv:
+ # Change pip-egg-info entry to absolute path, so pip can find it
+ # after changing directory.
+ idx = sys.argv.index('--egg-base')
+ if sys.argv[idx + 1] == 'pip-egg-info':
+ sys.argv[idx + 1] = os.path.join(local_path, 'pip-egg-info')
+
old_path = os.getcwd()
os.chdir(src_path)
sys.path.insert(0, src_path)
| Error in linalg.norm() (Trac #785)
_Original ticket http://projects.scipy.org/numpy/ticket/785 on 2008-05-09 by trac user nick, assigned to unknown._
While working on a unit test for linalg.norm() (see ticket #1361), I discovered that if a vector is passed into the norm() method with 'fro' as the argument for the ord parameter, an error occurs. Example:
```
>>> from numpy import linalg
>>> a = [1,2,3,4]
>>> linalg.norm(a,'fro')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/tmp/lib/python2.5/site-packages/numpy/linalg/linalg.py", line 1262, in norm
return ((abs(x)**ord).sum())**(1.0/ord)
TypeError: unsupported operand type(s) for ** or pow(): 'numpy.ndarray' and 'str'
>>>
```
Performing the same test but omitting the parameter 'fro' allows the method to execute normally. A test case that exposes this is available in ticket #1361.
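Until the argument handling itself is fixed, a caller-side guard avoids the crash. This is only a sketch; the `safe_norm` helper is ours, not part of NumPy:
```python
import numpy as np

def safe_norm(x, ord=None):
    """Forward 'fro' only for 2-D inputs; for vectors fall back to the 2-norm."""
    x = np.asarray(x, dtype=float)
    if ord == 'fro' and x.ndim != 2:
        ord = None  # the Frobenius norm of a vector equals its 2-norm
    return np.linalg.norm(x, ord)

print(safe_norm([1, 2, 3, 4], ord='fro'))   # 5.477..., no TypeError
print(safe_norm(np.eye(2), ord='fro'))      # 1.414..., matrix case unchanged
```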
| 2012-04-22T11:18:04Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/tmp/lib/python2.5/site-packages/numpy/linalg/linalg.py", line 1262, in norm
return ((abs(x)**ord).sum())**(1.0/ord)
TypeError: unsupported operand type(s) for ** or pow(): 'numpy.ndarray' and 'str'
| 10,219 |
||||
numpy/numpy | numpy__numpy-3447 | 2a5c2c8227b600654f31ed346c73cce77bef554d | diff --git a/numpy/ma/core.py b/numpy/ma/core.py
--- a/numpy/ma/core.py
+++ b/numpy/ma/core.py
@@ -5643,6 +5643,9 @@ def __iter__(self):
else:
yield d
+ def __len__(self):
+ return self._data.__len__()
+
def filled(self, fill_value=None):
"""
Return a copy with masked fields filled with a given value.
| MaskedArray record: TypeError: len() of unsized object (Trac #2116)
_Original ticket http://projects.scipy.org/numpy/ticket/2116 on 2012-04-26 by trac user mwtoews, assigned to @pierregm._
I'm getting an inconsistent error while trying to get the length of a record from a masked array, but only when the mask is enabled. See the example:
```
import numpy as np
my_dtype = [('a','i'),('b','f')]
mar = np.ma.zeros(5, my_dtype)
# Length of the first record; no error raised
assert len(mar[0]) == 2
# Change mask for one of the fields
print(mar[0]) # (0, 0.0)
mar.mask[0][0] = True
print(mar[0]) # (--, 0.0)
# Repeat same command as above to reveal this bug
assert len(mar[0]) == 2
```
Raises the error:
```
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
TypeError: len() of unsized object
```
I am using NumPy version 1.6.1 obtained from http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy and the version of Python is 2.5.1 [MSC v.1310 32 bit (Intel)].
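Until the fix lands, the field count can be read from the underlying data record or from the dtype instead. A small workaround sketch (rebuilding the example above):
```python
import numpy as np

my_dtype = [('a', 'i'), ('b', 'f')]
mar = np.ma.zeros(5, my_dtype)
mar.mask[0][0] = True
# Once a field is masked, mar[0] becomes an mvoid without a working len()
# on affected versions, but the plain data record and the dtype still
# report the field count.
assert len(mar.data[0]) == 2        # the unmasked void record keeps len()
assert len(mar.dtype.names) == 2    # number of fields in the dtype
```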
| 2013-06-16T01:41:13Z | [] | [] |
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
TypeError: len() of unsized object
| 10,263 |
||||
numpy/numpy | numpy__numpy-3452 | 5ba9fead691439697c6d1c3768d4205cbe822bc2 | diff --git a/numpy/ma/core.py b/numpy/ma/core.py
--- a/numpy/ma/core.py
+++ b/numpy/ma/core.py
@@ -5593,6 +5593,9 @@ def __iter__(self):
else:
yield d
+ def __len__(self):
+ return self._data.__len__()
+
def filled(self, fill_value=None):
"""
Return a copy with masked fields filled with a given value.
@@ -5930,9 +5933,10 @@ class _frommethod:
Name of the method to transform.
"""
- def __init__(self, methodname):
+ def __init__(self, methodname, reversed=False):
self.__name__ = methodname
self.__doc__ = self.getdoc()
+ self.reversed = reversed
#
def getdoc(self):
"Return the doc of the function (from the doc of the method)."
@@ -5944,6 +5948,11 @@ def getdoc(self):
return doc
#
def __call__(self, a, *args, **params):
+ if self.reversed:
+ args = list(args)
+ arr = args[0]
+ args[0] = a
+ a = arr
# Get the method from the array (if possible)
method_name = self.__name__
method = getattr(a, method_name, None)
@@ -5960,7 +5969,7 @@ def __call__(self, a, *args, **params):
all = _frommethod('all')
anomalies = anom = _frommethod('anom')
any = _frommethod('any')
-compress = _frommethod('compress')
+compress = _frommethod('compress', reversed=True)
cumprod = _frommethod('cumprod')
cumsum = _frommethod('cumsum')
copy = _frommethod('copy')
| MaskedArray record: TypeError: len() of unsized object (Trac #2116)
_Original ticket http://projects.scipy.org/numpy/ticket/2116 on 2012-04-26 by trac user mwtoews, assigned to @pierregm._
I'm getting an inconsistent error while trying to get the length of a record from a masked array, but only when the mask is enabled. See the example:
```
import numpy as np
my_dtype = [('a','i'),('b','f')]
mar = np.ma.zeros(5, my_dtype)
# Length of the first record; no error raised
assert len(mar[0]) == 2
# Change mask for one of the fields
print(mar[0]) # (0, 0.0)
mar.mask[0][0] = True
print(mar[0]) # (--, 0.0)
# Repeat same command as above to reveal this bug
assert len(mar[0]) == 2
```
Raises the error:
```
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
TypeError: len() of unsized object
```
I am using NumPy version 1.6.1 obtained from http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy and the version of Python is 2.5.1 [MSC v.1310 32 bit (Intel)].
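The patch above resolves this by giving `mvoid` a `__len__` that delegates to the wrapped data record. For installs that cannot upgrade, the same idea can be applied at runtime; this is only a hedged sketch, not an officially supported workaround:
```python
import numpy as np
import numpy.ma.core as ma_core

# Runtime sketch of the fix from the patch above: let mvoid delegate len()
# to its underlying data record (only relevant on affected versions).
ma_core.mvoid.__len__ = lambda self: len(self._data)

mar = np.ma.zeros(5, [('a', 'i'), ('b', 'f')])
mar.mask[0][0] = True
assert len(mar[0]) == 2   # the originally failing assertion now passes
```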
| 2013-06-17T01:40:29Z | [] | [] |
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
TypeError: len() of unsized object
| 10,267 |
||||
numpy/numpy | numpy__numpy-3854 | 106306dc9b0e3aaf0543d78c7c20761ef03f0213 | diff --git a/numpy/linalg/linalg.py b/numpy/linalg/linalg.py
--- a/numpy/linalg/linalg.py
+++ b/numpy/linalg/linalg.py
@@ -368,10 +368,11 @@ def solve(a, b):
gufunc = _umath_linalg.solve1
else:
- if a.shape[-1] == 0 and b.shape[-2] == 0:
- a = a.reshape(a.shape[:-1] + (1,))
- bc = broadcast(a, b)
- return wrap(empty(bc.shape, dtype=result_t))
+ if b.size == 0:
+ if (a.shape[-1] == 0 and b.shape[-2] == 0) or b.shape[-1] == 0:
+ a = a[:,:1].reshape(a.shape[:-1] + (1,))
+ bc = broadcast(a, b)
+ return wrap(empty(bc.shape, dtype=result_t))
gufunc = _umath_linalg.solve
| solve() fails on 0-sized axis
With current 1.8, 5a0d09c:
```
import numpy as np
np.linalg.solve(np.eye(3), np.zeros((3, 0)))
```
gives
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/njs/src/numpy/.tox/py27/local/lib/python2.7/site-packages/numpy/linalg/linalg.py", line 380, in solve
r = gufunc(a, b, signature=signature, extobj=extobj)
ValueError: cannot remove a zero-sized axis from an iterator
```
The result should be an array with shape `(3, 0)`.
This is a regression from 1.7, hence marking as a blocker.
Thanks to Jens Jørgen Mortensen for the original report: http://mail.scipy.org/pipermail/numpy-discussion/2013-October/067898.html
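Until the fix is in, callers can short-circuit the empty right-hand side themselves; a minimal sketch (the `solve_allow_empty` helper is ours, and the empty branch simply returns the expected empty result):
```python
import numpy as np

def solve_allow_empty(a, b):
    """Caller-side guard: return the expected empty result instead of
    calling solve() when b has no columns."""
    b = np.asarray(b, dtype=float)
    if b.size == 0:
        return np.empty_like(b)
    return np.linalg.solve(a, b)

x = solve_allow_empty(np.eye(3), np.zeros((3, 0)))
assert x.shape == (3, 0)
```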
| 2013-10-02T16:46:10Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/njs/src/numpy/.tox/py27/local/lib/python2.7/site-packages/numpy/linalg/linalg.py", line 380, in solve
r = gufunc(a, b, signature=signature, extobj=extobj)
ValueError: cannot remove a zero-sized axis from an iterator
| 10,284 |
||||
numpy/numpy | numpy__numpy-4276 | bd5894b29b897f16da8a3d64e0df94e93d6b2d4a | diff --git a/numpy/core/_methods.py b/numpy/core/_methods.py
--- a/numpy/core/_methods.py
+++ b/numpy/core/_methods.py
@@ -63,8 +63,10 @@ def _mean(a, axis=None, dtype=None, out=None, keepdims=False):
if isinstance(ret, mu.ndarray):
ret = um.true_divide(
ret, rcount, out=ret, casting='unsafe', subok=False)
- else:
+ elif hasattr(ret, 'dtype'):
ret = ret.dtype.type(ret / rcount)
+ else:
+ ret = ret / rcount
return ret
@@ -107,8 +109,10 @@ def _var(a, axis=None, dtype=None, out=None, ddof=0, keepdims=False):
if isinstance(ret, mu.ndarray):
ret = um.true_divide(
ret, rcount, out=ret, casting='unsafe', subok=False)
- else:
+ elif hasattr(ret, 'dtype'):
ret = ret.dtype.type(ret / rcount)
+ else:
+ ret = ret / rcount
return ret
@@ -118,7 +122,9 @@ def _std(a, axis=None, dtype=None, out=None, ddof=0, keepdims=False):
if isinstance(ret, mu.ndarray):
ret = um.sqrt(ret, out=ret)
- else:
+ elif hasattr(ret, 'dtype'):
ret = ret.dtype.type(um.sqrt(ret))
+ else:
+ ret = um.sqrt(ret)
return ret
| mean() (and median()) should work with "object" arrays
With NumPy 1.8, `mean()` started to break when calculating the (global) mean of an array that contains objects (arrays with an object `dtype`). This also breaks `median()` on such arrays. Here is an example:
```
>>> numpy.arange(10).astype(object).mean()
Traceback (most recent call last):
File "<ipython-input-11-782b7c0104c3>", line 1, in <module>
numpy.arange(10).astype(object).mean()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/_methods.py", line 67, in _mean
ret = ret.dtype.type(ret / rcount)
AttributeError: 'int' object has no attribute 'dtype'
```
Another example is the case of numbers with uncertainties from the uncertainties package (https://github.com/lebigot/uncertainties/issues/22).
I think that it would be better if NumPy did not assume that scalar results have a `dtype`, since arrays can contain objects that have a meaningful mean. I believe that such objects should not be forced to have a `dtype`, which is obviously NumPy specific (they even can't, for Python scalars like floats). Furthermore, a `dtype` is in principle not necessary for the calculation of the mean of such objects, so it would look strange if they had to have one.
The problem is that `numpy.mean()` assumes that the intermediate result obtained has a `dtype` (with a `type` attribute).
Therefore, I suggest that NumPy's `mean()` also handle arrays of objects that are not of the standard NumPy types (their `dtype` is object, and they contain objects that have a meaningful mean, like ints, floats, numbers with uncertainties, etc.).
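The guard eventually adopted (see the `_methods.py` patch at the top of this entry) does exactly that: keep the in-place path for arrays, preserve the scalar type when a `dtype` is available, and otherwise just divide the plain Python object. A condensed sketch of `_mean`'s tail, with `ret` the summed result and `rcount` the element count:
```python
import numpy as np

def finish_mean(ret, rcount):
    # Condensed sketch of the branch added in the patch above.
    if isinstance(ret, np.ndarray):
        return np.true_divide(ret, rcount, out=ret, casting='unsafe', subok=False)
    elif hasattr(ret, 'dtype'):          # NumPy scalar: keep its dtype
        return ret.dtype.type(ret / rcount)
    else:                                # plain Python object (object arrays)
        return ret / rcount

assert finish_mean(np.arange(10).astype(object).sum(), 10) == 4.5
```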
| introduced in f16b12e87 by @charris
Hmmm, this is annoying. It seems to me like the only way to fix this is probably to see if `dtype` is given, and then use `np.dtype(dtype).type(...)` and otherwise just do the plain operation?
Yeah, I was thinking along the same lines.
I would be curious to see what the issue that prompted the change was, to see if I can come up with any kind of better suggestion (who knows): what was the problem, exactly?
The scalar returns didn't preserve type, i.e., float32 would go to float64. That was on account of type precedence between scalars being different than type precedence between scalars and arrays.
@charris do we care even about that? Or is it enough if the passed in dtype actually gets honored?
I honestly have trouble figuring out a good method of preserving the type quite right for the scalar result. I now think we may have to just check for object dtype input (or a passed-in dtype). The most secure method I can think of would be a new keyword argument to the ufuncs to skip PyArray_Return (it would probably be slower though), but unless that is useful elsewhere it is not worth the trouble either.
Any progress on this one? It cost me an hour of debugging today. If doing this properly is hard, please consider fixing the error message so it is obvious what's wrong.
@charris do you have time to have a look at this?
I also think we accumulated enough fixes to warrant a 1.8.1 release if we add this and the C99 windows fix.
thoughts?
I'll get it done today sometime. Agree on 1.8.1, I came to that conclusion this morning. We should also fix the `divide` and `true_divide` ufuncs when the `dtype` is given.
I need to think about this a bit more before putting up a fix.
| 2014-02-10T05:05:28Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-11-782b7c0104c3>", line 1, in <module>
numpy.arange(10).astype(object).mean()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/_methods.py", line 67, in _mean
ret = ret.dtype.type(ret / rcount)
AttributeError: 'int' object has no attribute 'dtype'
| 10,290 |
|||
numpy/numpy | numpy__numpy-4297 | 95f7a469b1e9ce460e31c41e1bd897ceff396f6b | diff --git a/numpy/core/_methods.py b/numpy/core/_methods.py
--- a/numpy/core/_methods.py
+++ b/numpy/core/_methods.py
@@ -63,8 +63,10 @@ def _mean(a, axis=None, dtype=None, out=None, keepdims=False):
if isinstance(ret, mu.ndarray):
ret = um.true_divide(
ret, rcount, out=ret, casting='unsafe', subok=False)
- else:
+ elif hasattr(ret, 'dtype'):
ret = ret.dtype.type(ret / rcount)
+ else:
+ ret = ret / rcount
return ret
@@ -107,8 +109,10 @@ def _var(a, axis=None, dtype=None, out=None, ddof=0, keepdims=False):
if isinstance(ret, mu.ndarray):
ret = um.true_divide(
ret, rcount, out=ret, casting='unsafe', subok=False)
- else:
+ elif hasattr(ret, 'dtype'):
ret = ret.dtype.type(ret / rcount)
+ else:
+ ret = ret / rcount
return ret
@@ -118,7 +122,9 @@ def _std(a, axis=None, dtype=None, out=None, ddof=0, keepdims=False):
if isinstance(ret, mu.ndarray):
ret = um.sqrt(ret, out=ret)
- else:
+ elif hasattr(ret, 'dtype'):
ret = ret.dtype.type(um.sqrt(ret))
+ else:
+ ret = um.sqrt(ret)
return ret
diff --git a/numpy/distutils/fcompiler/gnu.py b/numpy/distutils/fcompiler/gnu.py
--- a/numpy/distutils/fcompiler/gnu.py
+++ b/numpy/distutils/fcompiler/gnu.py
@@ -35,13 +35,13 @@ class GnuFCompiler(FCompiler):
def gnu_version_match(self, version_string):
"""Handle the different versions of GNU fortran compilers"""
- m = re.match(r'GNU Fortran', version_string)
+ m = re.search(r'GNU Fortran', version_string)
if not m:
return None
- m = re.match(r'GNU Fortran\s+95.*?([0-9-.]+)', version_string)
+ m = re.search(r'GNU Fortran\s+95.*?([0-9-.]+)', version_string)
if m:
return ('gfortran', m.group(1))
- m = re.match(r'GNU Fortran.*?\-?([0-9-.]+)', version_string)
+ m = re.search(r'GNU Fortran.*?\-?([0-9-.]+)', version_string)
if m:
v = m.group(1)
if v.startswith('0') or v.startswith('2') or v.startswith('3'):
diff --git a/numpy/f2py/__init__.py b/numpy/f2py/__init__.py
--- a/numpy/f2py/__init__.py
+++ b/numpy/f2py/__init__.py
@@ -28,20 +28,20 @@ def compile(source,
from numpy.distutils.exec_command import exec_command
import tempfile
if source_fn is None:
- fname = os.path.join(tempfile.mktemp()+'.f')
+ f = tempfile.NamedTemporaryFile(suffix='.f')
else:
- fname = source_fn
-
- f = open(fname, 'w')
- f.write(source)
- f.close()
-
- args = ' -c -m %s %s %s'%(modulename, fname, extra_args)
- c = '%s -c "import numpy.f2py as f2py2e;f2py2e.main()" %s' %(sys.executable, args)
- s, o = exec_command(c)
- if source_fn is None:
- try: os.remove(fname)
- except OSError: pass
+ f = open(source_fn, 'w')
+
+ try:
+ f.write(source)
+ f.flush()
+
+ args = ' -c -m %s %s %s'%(modulename, f.name, extra_args)
+ c = '%s -c "import numpy.f2py as f2py2e;f2py2e.main()" %s' % \
+ (sys.executable, args)
+ s, o = exec_command(c)
+ finally:
+ f.close()
return s
from numpy.testing import Tester
diff --git a/numpy/f2py/f2py2e.py b/numpy/f2py/f2py2e.py
--- a/numpy/f2py/f2py2e.py
+++ b/numpy/f2py/f2py2e.py
@@ -91,7 +91,7 @@
--lower is assumed with -h key, and --no-lower without -h key.
--build-dir <dirname> All f2py generated files are created in <dirname>.
- Default is tempfile.mktemp().
+ Default is tempfile.mkdtemp().
--overwrite-signature Overwrite existing signature file.
@@ -428,7 +428,7 @@ def run_compile():
del sys.argv[i]
else:
remove_build_dir = 1
- build_dir = os.path.join(tempfile.mktemp())
+ build_dir = tempfile.mkdtemp()
_reg1 = re.compile(r'[-][-]link[-]')
sysinfo_flags = [_m for _m in sys.argv[1:] if _reg1.match(_m)]
diff --git a/numpy/lib/financial.py b/numpy/lib/financial.py
--- a/numpy/lib/financial.py
+++ b/numpy/lib/financial.py
@@ -628,21 +628,29 @@ def irr(values):
Examples
--------
- >>> print round(np.irr([-100, 39, 59, 55, 20]), 5)
+ >>> round(irr([-100, 39, 59, 55, 20]), 5)
0.28095
+ >>> round(irr([-100, 0, 0, 74]), 5)
+ -0.0955
+ >>> round(irr([-100, 100, 0, -7]), 5)
+ -0.0833
+ >>> round(irr([-100, 100, 0, 7]), 5)
+ 0.06206
+ >>> round(irr([-5, 10.5, 1, -8, 1]), 5)
+ 0.0886
(Compare with the Example given for numpy.lib.financial.npv)
"""
res = np.roots(values[::-1])
- # Find the root(s) between 0 and 1
- mask = (res.imag == 0) & (res.real > 0) & (res.real <= 1)
- res = res[mask].real
+ mask = (res.imag == 0) & (res.real > 0)
if res.size == 0:
return np.nan
+ res = res[mask].real
+ # NPV(rate) = 0 can have more than one solution so we return
+ # only the solution closest to zero.
rate = 1.0/res - 1
- if rate.size == 1:
- rate = rate.item()
+ rate = rate.item(np.argmin(np.abs(rate)))
return rate
def npv(rate, values):
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,11 +18,10 @@
DOCLINES = __doc__.split("\n")
import os
-import shutil
import sys
-import re
import subprocess
+
if sys.version_info[:2] < (2, 6) or (3, 0) <= sys.version_info[0:2] < (3, 2):
raise RuntimeError("Python version 2.6, 2.7 or >= 3.2 required.")
@@ -31,6 +30,7 @@
else:
import __builtin__ as builtins
+
CLASSIFIERS = """\
Development Status :: 5 - Production/Stable
Intended Audience :: Science/Research
@@ -47,24 +47,13 @@
Operating System :: MacOS
"""
-NAME = 'numpy'
-MAINTAINER = "NumPy Developers"
-MAINTAINER_EMAIL = "numpy-discussion@scipy.org"
-DESCRIPTION = DOCLINES[0]
-LONG_DESCRIPTION = "\n".join(DOCLINES[2:])
-URL = "http://www.numpy.org"
-DOWNLOAD_URL = "http://sourceforge.net/projects/numpy/files/NumPy/"
-LICENSE = 'BSD'
-CLASSIFIERS = [_f for _f in CLASSIFIERS.split('\n') if _f]
-AUTHOR = "Travis E. Oliphant et al."
-AUTHOR_EMAIL = "oliphant@enthought.com"
-PLATFORMS = ["Windows", "Linux", "Solaris", "Mac OS-X", "Unix"]
MAJOR = 1
MINOR = 8
MICRO = 0
ISRELEASED = False
VERSION = '%d.%d.%d' % (MAJOR, MINOR, MICRO)
+
# Return the git revision as a string
def git_version():
def _minimal_ext_cmd(cmd):
@@ -100,18 +89,7 @@ def _minimal_ext_cmd(cmd):
builtins.__NUMPY_SETUP__ = True
-def write_version_py(filename='numpy/version.py'):
- cnt = """
-# THIS FILE IS GENERATED FROM NUMPY SETUP.PY
-short_version = '%(version)s'
-version = '%(version)s'
-full_version = '%(full_version)s'
-git_revision = '%(git_revision)s'
-release = %(isrelease)s
-
-if not release:
- version = full_version
-"""
+def get_version_info():
# Adding the git rev number needs to be done inside write_version_py(),
# otherwise the import of numpy.version messes up the build under Python 3.
FULLVERSION = VERSION
@@ -131,6 +109,23 @@ def write_version_py(filename='numpy/version.py'):
if not ISRELEASED:
FULLVERSION += '.dev-' + GIT_REVISION[:7]
+ return FULLVERSION, GIT_REVISION
+
+
+def write_version_py(filename='numpy/version.py'):
+ cnt = """
+# THIS FILE IS GENERATED FROM NUMPY SETUP.PY
+short_version = '%(version)s'
+version = '%(version)s'
+full_version = '%(full_version)s'
+git_revision = '%(git_revision)s'
+release = %(isrelease)s
+
+if not release:
+ version = full_version
+"""
+ FULLVERSION, GIT_REVISION = get_version_info()
+
a = open(filename, 'w')
try:
a.write(cnt % {'version': VERSION,
@@ -140,6 +135,7 @@ def write_version_py(filename='numpy/version.py'):
finally:
a.close()
+
def configuration(parent_package='',top_path=None):
from numpy.distutils.misc_util import Configuration
@@ -155,8 +151,36 @@ def configuration(parent_package='',top_path=None):
return config
-def setup_package():
+def check_submodules():
+ """ verify that the submodules are checked out and clean
+ use `git submodule update --init`; on failure
+ """
+ if not os.path.exists('.git'):
+ return
+ with open('.gitmodules') as f:
+ for l in f:
+ if 'path' in l:
+ p = l.split('=')[-1].strip()
+ if not os.path.exists(p):
+ raise ValueError('Submodule %s missing' % p)
+
+
+ proc = subprocess.Popen(['git', 'submodule', 'status'],
+ stdout=subprocess.PIPE)
+ status, _ = proc.communicate()
+ status = status.decode("ascii", "replace")
+ for line in status.splitlines():
+ if line.startswith('-') or line.startswith('+'):
+ raise ValueError('Submodule not clean: %s' % line)
+
+from distutils.command.sdist import sdist
+class sdist_checked(sdist):
+ """ check submodules on sdist to prevent incomplete tarballs """
+ def run(self):
+ check_submodules()
+ sdist.run(self)
+def setup_package():
src_path = os.path.dirname(os.path.abspath(sys.argv[0]))
old_path = os.getcwd()
os.chdir(src_path)
@@ -165,28 +189,51 @@ def setup_package():
# Rewrite the version file everytime
write_version_py()
+ metadata = dict(
+ name = 'numpy',
+ maintainer = "NumPy Developers",
+ maintainer_email = "numpy-discussion@scipy.org",
+ description = DOCLINES[0],
+ long_description = "\n".join(DOCLINES[2:]),
+ url = "http://www.numpy.org",
+ author = "Travis E. Oliphant et al.",
+ download_url = "http://sourceforge.net/projects/numpy/files/NumPy/",
+ license = 'BSD',
+ classifiers=[_f for _f in CLASSIFIERS.split('\n') if _f],
+ platforms = ["Windows", "Linux", "Solaris", "Mac OS-X", "Unix"],
+ test_suite='nose.collector',
+ cmdclass={"sdist": sdist_checked},
+ )
+
# Run build
- from numpy.distutils.core import setup
+ if len(sys.argv) >= 2 and ('--help' in sys.argv[1:] or
+ sys.argv[1] in ('--help-commands', 'egg_info', '--version',
+ 'clean')):
+ # Use setuptools for these commands (they don't work well or at all
+ # with distutils). For normal builds use distutils.
+ try:
+ from setuptools import setup
+ except ImportError:
+ from distutils.core import setup
+
+ FULLVERSION, GIT_REVISION = get_version_info()
+ metadata['version'] = FULLVERSION
+ elif len(sys.argv) >= 2 and sys.argv[1] == 'bdist_wheel':
+ # bdist_wheel needs setuptools
+ import setuptools
+ from numpy.distutils.core import setup
+ metadata['configuration'] = configuration
+ else:
+ from numpy.distutils.core import setup
+ metadata['configuration'] = configuration
try:
- setup(
- name=NAME,
- maintainer=MAINTAINER,
- maintainer_email=MAINTAINER_EMAIL,
- description=DESCRIPTION,
- long_description=LONG_DESCRIPTION,
- url=URL,
- download_url=DOWNLOAD_URL,
- license=LICENSE,
- classifiers=CLASSIFIERS,
- author=AUTHOR,
- author_email=AUTHOR_EMAIL,
- platforms=PLATFORMS,
- configuration=configuration )
+ setup(**metadata)
finally:
del sys.path[0]
os.chdir(old_path)
return
+
if __name__ == '__main__':
setup_package()
| mean() (and median()) should work with "object" arrays
With NumPy 1.8, `mean()` started to break when calculating the (global) mean of an array that contains objects (arrays with an object `dtype`). This also breaks `median()` on such arrays. Here is an example:
```
>>> numpy.arange(10).astype(object).mean()
Traceback (most recent call last):
File "<ipython-input-11-782b7c0104c3>", line 1, in <module>
numpy.arange(10).astype(object).mean()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/_methods.py", line 67, in _mean
ret = ret.dtype.type(ret / rcount)
AttributeError: 'int' object has no attribute 'dtype'
```
Another example is the case of numbers with uncertainties from the uncertainties package (https://github.com/lebigot/uncertainties/issues/22).
I think that it would be better if NumPy did not assume that scalar results have a `dtype`, since arrays can contain objects that have a meaningful mean. I believe that such objects should not be forced to have a `dtype`, which is obviously NumPy specific (Python scalars like floats cannot even have one). Furthermore, a `dtype` is in principle not necessary for calculating the mean of such objects, so it would look strange if they had to have one.
The problem is that `numpy.mean()` assumes that the intermediate result obtained has a `dtype` (with a `type` attribute).
Therefore, I suggest that NumPy's `mean()` also handle arrays of objects that are not of the standard NumPy types (their `dtype` is object, and they contain objects that have a meaningful mean, like ints, floats, numbers with uncertainties, etc.).
| introduced in f16b12e87 by @charris
Hmmm, this is annoying. It seems to me like the only way to fix this is probably to see if `dtype` is given, and then use `np.dtype(dtype).type(...)` and otherwise just do the plain operation?
Yeah, I was thinking along the same lines.
I would be curious to see what the issue that prompted the change was, to see if I can come up with any kind of better suggestion (who knows): what was the problem, exactly?
The scalar returns didn't preserve type, i.e., float32 would go to float64. That was on account of type precedence between scalars being different than type precedence between scalars and arrays.
@charris do we even care about that? Or is it enough if the passed-in dtype actually gets honored?
I honestly have trouble figuring out a good method of preserving the type quite right for the scalar result. I now think we may have to just check for object dtype input (or a passed-in dtype). The most robust method I can think of would be a new keyword argument to the ufuncs to skip PyArray_Return (it would probably be slower, though), but unless that is useful elsewhere it is not worth the trouble either.
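To make the proposal above concrete, here is a minimal sketch (a hypothetical helper, not the actual `numpy/core/_methods.py` code) of casting the scalar result back only when the intermediate value actually carries a `dtype`:
``` python
import numpy as np

def _mean_sketch(arr):
    # Hypothetical illustration of the suggested fix, not library code.
    rcount = arr.size
    ret = np.add.reduce(arr, axis=None)
    if hasattr(ret, 'dtype'):       # NumPy scalar: preserve its type as before
        ret = ret.dtype.type(ret / rcount)
    else:                           # plain Python object (object dtype input)
        ret = ret / rcount
    return ret

print(_mean_sketch(np.arange(10).astype(object)))     # 4.5 instead of AttributeError
print(_mean_sketch(np.arange(10, dtype=np.float32)))  # result stays float32
```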
| 2014-02-15T17:25:25Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-11-782b7c0104c3>", line 1, in <module>
numpy.arange(10).astype(object).mean()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/_methods.py", line 67, in _mean
ret = ret.dtype.type(ret / rcount)
AttributeError: 'int' object has no attribute 'dtype'
| 10,291 |
|||
numpy/numpy | numpy__numpy-4304 | 68ae04741f73953ef5680cb80fbb3dde5f160c90 | diff --git a/numpy/lib/twodim_base.py b/numpy/lib/twodim_base.py
--- a/numpy/lib/twodim_base.py
+++ b/numpy/lib/twodim_base.py
@@ -25,7 +25,7 @@ def fliplr(m):
Parameters
----------
m : array_like
- Input array.
+ Input array, must be at least 2-D.
Returns
-------
@@ -40,8 +40,7 @@ def fliplr(m):
Notes
-----
- Equivalent to A[:,::-1]. Does not require the array to be
- two-dimensional.
+ Equivalent to A[:,::-1]. Requires the array to be at least 2-D.
Examples
--------
| fliplr documentation incorrectly states 2-d *not* required
[`fliplr` documentation](http://docs.scipy.org/doc/numpy/reference/generated/numpy.fliplr.html) incorrectly states that a 2-D array is _not_ required, but it is. I think it is a typo, since `fliplr` was probably copied from `flipud`, which _really_ doesn't require a 2-D array.
```
>>> import numpy as np
>>> a = np.array([1,2,3,4])
>>> np.fliplr(a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\numpy\lib\twodim_base.py", line 61, in fliplr
raise ValueError("Input must be >= 2-d.")
ValueError: Input must be >= 2-d.
```
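For comparison, a quick usage sketch of the corrected claim: `flipud` is the function that accepts 1-D input, while `fliplr` (and its documented equivalent `A[:, ::-1]`) needs at least two dimensions.
```
>>> import numpy as np
>>> np.flipud(np.array([1, 2, 3, 4]))      # 1-D input is fine for flipud
array([4, 3, 2, 1])
>>> np.fliplr(np.array([[1, 2, 3, 4]]))    # fliplr needs at least 2-D
array([[4, 3, 2, 1]])
```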
| 2014-02-16T20:32:54Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\numpy\lib\twodim_base.py", line 61, in fliplr
raise ValueError("Input must be >= 2-d.")
ValueError: Input must be >= 2-d.
| 10,293 |
||||
numpy/numpy | numpy__numpy-4305 | 68ae04741f73953ef5680cb80fbb3dde5f160c90 | diff --git a/numpy/f2py/cfuncs.py b/numpy/f2py/cfuncs.py
--- a/numpy/f2py/cfuncs.py
+++ b/numpy/f2py/cfuncs.py
@@ -312,7 +312,7 @@
needs['pyobj_from_string1']=['string']
cppmacros['pyobj_from_string1']='#define pyobj_from_string1(v) (PyString_FromString((char *)v))'
needs['pyobj_from_string1size']=['string']
-cppmacros['pyobj_from_string1size']='#define pyobj_from_string1size(v,len) (PyString_FromStringAndSize((char *)v, len))'
+cppmacros['pyobj_from_string1size']='#define pyobj_from_string1size(v,len) (PyUString_FromStringAndSize((char *)v, len))'
needs['TRYPYARRAYTEMPLATE']=['PRINTPYOBJERR']
cppmacros['TRYPYARRAYTEMPLATE']="""\
/* New SciPy */
| f2py sometimes generates python3-incompatible C wrappings
f2py uses Python-2-only CPython functions (notably `PyString_FromStringAndSize`, which no longer exists in Python 3) when wrapping callbacks with string parameters. An example is the FORTRAN-77 subroutine
``` fortran
SUBROUTINE BUG(CALLBACK, A)
IMPLICIT NONE
EXTERNAL CALLBACK
DOUBLE PRECISION CALLBACK
DOUBLE PRECISION A
CHARACTER*1 R
R = 'R'
!f2py intent(out) A
A = CALLBACK(R)
END SUBROUTINE
```
compiled with
```
f2py -c -m bug test.f # on Arch Linux, this is the python3 version of f2py
```
with the Python wrapping
``` python
#!/usr/bin/env python
from __future__ import print_function
import bug
def callback(code):
if code == 'F':
return 2.0
else:
return 3.0
print(bug.bug(callback))
```
Calling the Python2 version of `f2py` and running the script under Python 2.7 yields the correct answer. However, under Python3 we get
```
~/Documents/Code/Fortran/f2pyBug $ python3 test.py
Traceback (most recent call last):
File "test.py", line 4, in <module>
import bug
ImportError: /home/drwells/Documents/Code/Fortran/f2pyBug/bug.cpython-33m.so: undefined symbol: PyString_FromStringAndSize
```
I tested this under Arch Linux (fully updated) with Python 3.3.3 and Python 2.7.6. In both cases I used Numpy 1.8. I also tested this under Ubuntu (Quantal) with Python 3.2.3 and Python 2.7.3. In both cases I used Numpy 1.6.2.
| Fyi, unfortunately f2py is not regularly maintained at this point, so you
may or may not find someone stepping up to fix it. Your best bet might be
to see if you can figure out a patch yourself - we'd definitely be
interested in merging such a thing.
These are likely easy to fix by adding the missing defines to
numpy/f2py/src/fortranobject.h
| 2014-02-16T22:17:50Z | [] | [] |
Traceback (most recent call last):
File "test.py", line 4, in <module>
import bug
ImportError: /home/drwells/Documents/Code/Fortran/f2pyBug/bug.cpython-33m.so: undefined symbol: PyString_FromStringAndSize
| 10,294 |
|||
numpy/numpy | numpy__numpy-4479 | 42be275997e56b7e21d3adab5d5a5142876db9f0 | diff --git a/numpy/lib/function_base.py b/numpy/lib/function_base.py
--- a/numpy/lib/function_base.py
+++ b/numpy/lib/function_base.py
@@ -1593,6 +1593,7 @@ def __init__(self, pyfunc, otypes='', doc=None, excluded=None,
cache=False):
self.pyfunc = pyfunc
self.cache = cache
+ self._ufunc = None # Caching to improve default performance
if doc is None:
self.__doc__ = pyfunc.__doc__
@@ -1616,9 +1617,6 @@ def __init__(self, pyfunc, otypes='', doc=None, excluded=None,
excluded = set()
self.excluded = set(excluded)
- if self.otypes and not self.excluded:
- self._ufunc = None # Caching to improve default performance
-
def __call__(self, *args, **kwargs):
"""
Return arrays with the results of `pyfunc` broadcast (vectorized) over
@@ -1652,7 +1650,8 @@ def func(*vargs):
def _get_ufunc_and_otypes(self, func, args):
"""Return (ufunc, otypes)."""
# frompyfunc will fail if args is empty
- assert args
+ if not args:
+ raise ValueError('args can not be empty')
if self.otypes:
otypes = self.otypes
| _vectorize_call error when otype attribute is set after vectorize object creation
If a vectorize object is created without an `otypes` argument, an attempt to set the attribute later causes an error:
``` python
v = numpy.vectorize( lambda x: x )
v.otypes='i'
v( [1,2] )
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.7/site-packages/numpy/lib/function_base.py", line 1873, in __call__
return self._vectorize_call(func=func, args=vargs)
File "/usr/lib64/python2.7/site-packages/numpy/lib/function_base.py", line 1933, in _vectorize_call
ufunc, otypes = self._get_ufunc_and_otypes(func=func, args=args)
File "/usr/lib64/python2.7/site-packages/numpy/lib/function_base.py", line 1886, in _get_ufunc_and_otypes
if func is self.pyfunc and self._ufunc is not None:
AttributeError: 'vectorize' object has no attribute '_ufunc'
```
| Still in 1.9-devel. `_get_ufunc_and_otypes` is buggy in other ways: it uses `assert` for flow control.
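A small sketch of both points, assuming a numpy that includes the patch: the reported symptom is gone, and the empty-args check no longer relies on `assert`, which Python strips under `-O`, so it now fails loudly with an explicit `ValueError` instead.
``` python
import numpy as np

v = np.vectorize(lambda x: x)
v.otypes = 'i'
print(v([1, 2]))   # [1 2] -- no AttributeError about `_ufunc` any more
```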
| 2014-03-11T08:20:58Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.7/site-packages/numpy/lib/function_base.py", line 1873, in __call__
return self._vectorize_call(func=func, args=vargs)
File "/usr/lib64/python2.7/site-packages/numpy/lib/function_base.py", line 1933, in _vectorize_call
ufunc, otypes = self._get_ufunc_and_otypes(func=func, args=args)
File "/usr/lib64/python2.7/site-packages/numpy/lib/function_base.py", line 1886, in _get_ufunc_and_otypes
if func is self.pyfunc and self._ufunc is not None:
AttributeError: 'vectorize' object has no attribute '_ufunc'
| 10,310 |
|||
numpy/numpy | numpy__numpy-4588 | a0794f63d548e688e2eed76a9dc4e8df0ea33846 | diff --git a/numpy/lib/npyio.py b/numpy/lib/npyio.py
--- a/numpy/lib/npyio.py
+++ b/numpy/lib/npyio.py
@@ -845,6 +845,11 @@ def split_line(line):
continue
if usecols:
vals = [vals[i] for i in usecols]
+ if len(vals) != N:
+ line_num = i + skiprows + 1
+ raise ValueError("Wrong number of columns at line %d"
+ % line_num)
+
# Convert each value according to its column and store
items = [conv(val) for (conv, val) in zip(converters, vals)]
# Then pack it according to the dtype's nesting
| loadtxt should give line numbers of problems (Trac #1998)
_Original ticket http://projects.scipy.org/numpy/ticket/1998 on 2011-12-15 by @samtygier, assigned to unknown._
Currently, input like
```
1 2 3
4 5 6
78
```
will cause loadtxt to give the following error
```
Traceback (most recent call last):
File "./np_lt.py", line 5, in <module>
numpy.loadtxt(sys.argv[1])
File "/usr/lib/python2.7/site-packages/numpy/lib/npyio.py", line 804, in loadtxt
X = np.array(X, dtype)
ValueError: setting an array element with a sequence.
```
This would be far more useful if it contained some information about where the problem was.
This is similar to bug #1810, but for the loadtxt function.
I attach a patch that gives the following message:
```
ValueError: Wrong number of columns at line 3
```
It also provides a test that the input raises the correct error. Inconveniently, it is the same error type as before, so the test is not very useful. However, I think ValueError is probably the most appropriate in this case. Maybe there is a way to check the exception message.
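On that last question, one way to check the exception message itself is sketched below; the expected wording assumes the "Wrong number of columns at line 3" message added by the attached patch, so treat it as an illustration rather than the official test.
``` python
import numpy as np
from io import StringIO

bad = StringIO(u"1 2 3\n4 5 6\n78\n")
try:
    np.loadtxt(bad)
except ValueError as exc:
    # assumes the patched wording that mentions the offending line
    assert "line 3" in str(exc)
```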
| Attachment added by @samtygier on 2011-12-15: [numpy_loadtxt.diff](http://projects.scipy.org/numpy/attachment/ticket/1998/numpy_loadtxt.diff)
Good idea.
| 2014-04-05T05:03:53Z | [] | [] |
Traceback (most recent call last):
File "./np_lt.py", line 5, in <module>
numpy.loadtxt(sys.argv[1])
File "/usr/lib/python2.7/site-packages/numpy/lib/npyio.py", line 804, in loadtxt
X = np.array(X, dtype)
ValueError: setting an array element with a sequence.
| 10,314 |
|||
numpy/numpy | numpy__numpy-4677 | 8e0ac440329188b959520a0a7ce41ef60b2fb3c2 | diff --git a/numpy/ma/core.py b/numpy/ma/core.py
--- a/numpy/ma/core.py
+++ b/numpy/ma/core.py
@@ -7018,8 +7018,6 @@ def asarray(a, dtype=None, order=None):
<class 'numpy.ma.core.MaskedArray'>
"""
- if dtype is None and type(a) is MaskedArray:
- return a
return masked_array(a, dtype=dtype, copy=False, keep_mask=True, subok=False)
def asanyarray(a, dtype=None):
@@ -7065,8 +7063,6 @@ def asanyarray(a, dtype=None):
<class 'numpy.ma.core.MaskedArray'>
"""
- if dtype is None and isinstance(a, MaskedArray):
- return a
return masked_array(a, dtype=dtype, copy=False, keep_mask=True, subok=True)
| Test error for scipy 0.15.0.dev-5d197ed
The first bad commit is d8fd28389adb491e24b7cdc25cd1b20f539310c3, the isolated error is
```
Traceback (most recent call last):
File "/home/charris/scipy-test-fail.py", line 53, in test_trim
assert_equal(trimx._mask.ravel(), expected)
File "/home/charris/.local/lib/python2.7/site-packages/numpy/ma/testutils.py", line 123, in assert_equal
return assert_array_equal(actual, desired, err_msg)
File "/home/charris/.local/lib/python2.7/site-packages/numpy/ma/testutils.py", line 196, in assert_array_equal
header='Arrays are not equal')
File "/home/charris/.local/lib/python2.7/site-packages/numpy/ma/testutils.py", line 189, in assert_array_compare
verbose=verbose, header=header)
File "/home/charris/.local/lib/python2.7/site-packages/numpy/testing/utils.py", line 651, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Arrays are not equal
(mismatch 9.09090909091%)
x: array([ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, False, False, False, False, False, False, False,...
y: array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,...
----------------------------------------------------------------------
Ran 1 test in 0.005s
```
| Looks like the problem is returning a view rather than a copy.
Or more precisely, not making a copy and returning the original array. Probably the easiest fix is to revert this change, at least for numpy 1.9.
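A small sketch of the semantics at stake (checked conceptually against the patch above, which drops the early `return a`): `np.ma.asarray` should hand back a fresh `MaskedArray` wrapper, so masking done by a consumer such as scipy's trim cannot leak back into the caller's array, while the underlying data is still shared rather than copied.
``` python
import numpy as np

m = np.ma.arange(5)
wrapped = np.ma.asarray(m)
print(wrapped is m)                   # False: a new wrapper object
print(np.shares_memory(wrapped, m))   # True: the data itself is not copied
```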
| 2014-05-05T17:28:43Z | [] | [] |
Traceback (most recent call last):
File "/home/charris/scipy-test-fail.py", line 53, in test_trim
assert_equal(trimx._mask.ravel(), expected)
File "/home/charris/.local/lib/python2.7/site-packages/numpy/ma/testutils.py", line 123, in assert_equal
return assert_array_equal(actual, desired, err_msg)
File "/home/charris/.local/lib/python2.7/site-packages/numpy/ma/testutils.py", line 196, in assert_array_equal
header='Arrays are not equal')
File "/home/charris/.local/lib/python2.7/site-packages/numpy/ma/testutils.py", line 189, in assert_array_compare
verbose=verbose, header=header)
File "/home/charris/.local/lib/python2.7/site-packages/numpy/testing/utils.py", line 651, in assert_array_compare
raise AssertionError(msg)
AssertionError:
| 10,318 |
|||
numpy/numpy | numpy__numpy-4792 | db710cefeecf51d6253e421712726c1506a6f65b | diff --git a/numpy/lib/function_base.py b/numpy/lib/function_base.py
--- a/numpy/lib/function_base.py
+++ b/numpy/lib/function_base.py
@@ -651,7 +651,7 @@ def piecewise(x, condlist, funclist, *args, **kw):
The output is the same shape and type as x and is found by
calling the functions in `funclist` on the appropriate portions of `x`,
as defined by the boolean arrays in `condlist`. Portions not covered
- by any condition have undefined values.
+ by any condition have a default value of 0.
See Also
@@ -693,32 +693,24 @@ def piecewise(x, condlist, funclist, *args, **kw):
if (isscalar(condlist) or not (isinstance(condlist[0], list) or
isinstance(condlist[0], ndarray))):
condlist = [condlist]
- condlist = [asarray(c, dtype=bool) for c in condlist]
+ condlist = array(condlist, dtype=bool)
n = len(condlist)
- if n == n2 - 1: # compute the "otherwise" condition.
- totlist = condlist[0]
- for k in range(1, n):
- totlist |= condlist[k]
- condlist.append(~totlist)
- n += 1
- if (n != n2):
- raise ValueError(
- "function list and condition list must be the same")
- zerod = False
# This is a hack to work around problems with NumPy's
# handling of 0-d arrays and boolean indexing with
# numpy.bool_ scalars
+ zerod = False
if x.ndim == 0:
x = x[None]
zerod = True
- newcondlist = []
- for k in range(n):
- if condlist[k].ndim == 0:
- condition = condlist[k][None]
- else:
- condition = condlist[k]
- newcondlist.append(condition)
- condlist = newcondlist
+ if condlist.shape[-1] != 1:
+ condlist = condlist.T
+ if n == n2 - 1: # compute the "otherwise" condition.
+ totlist = np.logical_or.reduce(condlist, axis=0)
+ condlist = np.vstack([condlist, ~totlist])
+ n += 1
+ if (n != n2):
+ raise ValueError(
+ "function list and condition list must be the same")
y = zeros(x.shape, x.dtype)
for k in range(n):
| Fixed bug in numpy.piecewise() for 0-d array handling
Updated numpy/lib/function_base.py to fix bug in numpy.piecewise() for 0-d array handling and boolean indexing with scalars.
Bug test case:
```
>>> numpy.piecewise(5, [True, False], [1, 0])
Traceback (most recent call last):
[...]
y[condlist[k]] = item
ValueError: boolean index array has too many values
```
After fix:
```
>>> numpy.piecewise(5, [True, False], [1, 0])
array(1)
```
| This pull request [fails](http://travis-ci.org/numpy/numpy/builds/1788960) (merged e553e1b4 into 731cf3aa).
The current version of this pull request appears to fail the existing tests (e.g., this build reports failures: http://travis-ci.org/#!/numpy/numpy/jobs/1788964), so that would need to be fixed.
Also, some sort of test needs to be added to make sure that the bug is fixed (and stays that way).
Hi njsmith, thanks. I've fixed both my change to numpy.piecewise() as well as the test cases to cover both my bug case above as well as making sure that the tests are all logical.
This pull request [passes](http://travis-ci.org/numpy/numpy/builds/1804356) (merged e2ad1a72 into 731cf3aa).
I guess you rearranged the order of all the tests in the source? It makes the diff unreadable unfortunately -- I can't tell at all what you've actually changed without reverse-engineering it by hand, and I'm afraid I'm too lazy for that. Can you revert the no-op changes, or split them into a separate commit, or at least describe what changes were actually made?
Is this issue (i.e. this pull request) going to be fixed anytime soon? It's a bit annoying to construct a 1d array with one element each time I want to call this function with a scalar.
I guess it will be fixed as soon as someone does the work :-)
Hm, got it :) Should an issue be filed first?
The main point of filing an issue is to make sure that something doesn't get forgotten, and this PR is already sort of doing that (i.e., it shows up on the list of open issues if anyone goes to look at it). But feel free to file one if you like. If you're thinking about fixing the bug yourself, then we're pretty laid back about these things -- you can just go ahead and submit a new PR without filing an issue if you want. We'd rather get the fix than worry about the process :-).
The main thing that needs to be done is just to rewrite the unit tests. In my own testing and usage (and when I've tested with the original unit tests), the code works fine, but I just haven't had the time to sit down and rewrite them to be more similar to the original set and still be complete and systematic. I may get to it next week sometime, but until then I'm swamped.
-- Eric
That's one of the things I was about to ask: is it OK to rewrite the tests? Because right now, IMHO, they don't seem very meaningful for the purpose of the function. I'd like to try it myself this weekend, or the next one at the latest.
It's OK to make any changes whatsoever, the requirements are just that you
should have a reason, and we should be able to look at your changes and
figure out if they're good :-). How exactly to do that depends on the
change. Easily readable diffs, English explanations of what you did and
why... whatever works.
What is the status of this?
Well, actually I didn't get to it on any of those weekends, as you can notice :( But anyway, now that you ask about it: if I recall correctly there was some discussion recently about rank-0 arrays on the mailing list, but I am not able to find it. Are there any constraints on how to do this? Other than that, it would be my first contribution to NumPy and I am a bit unsure, but I am willing to give it a try.
Yes, I was referring to this:
http://www.mail-archive.com/numpy-discussion@scipy.org/msg39968.html
My question was whether there are any near-future plans for rank-0 arrays that relate to this and other issues in such a way that they require changing low-level code, or if we can address this individually.
No, I don't think there will be any near-future changes in how 0d arrays
work that will affect this bug.
Sorry for the delay of months. Finishing up my degree and job hunting, so I've been...occupied. I reverted most of the unit test changes so that it's clearer what I changed. Maybe I'll re-up the unit test changes at some point, but this particular commit should be easier to understand when you diff between this state and the state 3 commits ago.
@njsmith Can this go in?
Ping Travisbot just because it was a long time ago.
@ericsuh Made some comments. The commits should also be squashed into one.
I've put up a cleaned version at https://github.com/charris/numpy in the branch gh-331
Needs finishing up.
I seem to have deleted my cleanup somewhere along the line.
In the docstring:
"Each boolean array in `condlist` selects a piece of `x`, and should therefore be of the same shape as `x`."
But actually even the tests do not respect this (and still everything works):
https://github.com/numpy/numpy/blob/b1c69df01b673cc086065112da6780d8548a0dfa/numpy/lib/tests/test_function_base.py#L1468
There are other bugs too. For instance, there's `test_0d`, but even though `np.piecewise(x, x > 3, [4, 0])` passes, `np.piecewise(x, [x > 3, x <= 3], [4, 0])` fails (same error as originally reported). I don't think this behaviour is consistent - that's what I actually meant a year and a half ago. The precondition stated in the docstring makes sense to me and should somehow be enforced - is anybody against it? I cannot promise anything, but I may try to solve this at once.
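A quick check of the inconsistency described above, run against a numpy that includes the 0-d fix; both the single-condition and the explicit two-condition forms should now accept a 0-d input:
``` python
import numpy as np

x = np.array(5)
print(np.piecewise(x, x > 3, [4, 0]))            # 4
print(np.piecewise(x, [x > 3, x <= 3], [4, 0]))  # 4
```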
| 2014-06-07T20:44:26Z | [] | [] |
Traceback (most recent call last):
[...]
y[condlist[k]] = item
ValueError: boolean index array has too many values
| 10,321 |
|||
numpy/numpy | numpy__numpy-5149 | 58350f4608a22f4b4b66795f51eaefc206bd02b8 | diff --git a/numpy/ma/extras.py b/numpy/ma/extras.py
--- a/numpy/ma/extras.py
+++ b/numpy/ma/extras.py
@@ -434,8 +434,10 @@ def apply_over_axes(func, a, axes):
raise ValueError("function is not returning "
"an array of the correct shape")
return val
-apply_over_axes.__doc__ = np.apply_over_axes.__doc__[
- :np.apply_over_axes.__doc__.find('Notes')].rstrip() + \
+
+if apply_over_axes.__doc__ is not None:
+ apply_over_axes.__doc__ = np.apply_over_axes.__doc__[
+ :np.apply_over_axes.__doc__.find('Notes')].rstrip() + \
"""
Examples
@@ -462,7 +464,7 @@ def apply_over_axes(func, a, axes):
[[[46]
[--]
[124]]]
-"""
+ """
def average(a, axis=None, weights=None, returned=False):
| Numpy crashes with -OO
Hello, I ran `$ python -OO -c 'import numpy'` on a fresh miniconda install (after `$ conda install numpy`) and I got the following output:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/obernardo/miniconda/lib/python2.7/site-packages/numpy/__init__.py", line 191, in <module>
from . import ma
File "/Users/obernardo/miniconda/lib/python2.7/site-packages/numpy/ma/__init__.py", line 49, in <module>
from . import extras
File "/Users/obernardo/miniconda/lib/python2.7/site-packages/numpy/ma/extras.py", line 438, in <module>
:np.apply_over_axes.__doc__.find('Notes')].rstrip() + \
AttributeError: 'NoneType' object has no attribute 'find'
```
Running `$ python` gives me the following output (I am running OS X Mavericks on a Dec-2010 MacBook Pro):
```
Python 2.7.8 |Continuum Analytics, Inc.| (default, Aug 21 2014, 15:21:46)
[GCC 4.2.1 (Apple Inc. build 5577)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://binstar.org
```
Finally, running `python -c 'import numpy'` works normally. I hope this is enough information for you all, but please let me know if you need more. (It's the second bug report I submit in my life, and the first one was about a typo in documentation!)
Thanks a lot for making numpy available to us, it is a very, very useful tool that works greatly for me.
| 2014-10-04T02:12:20Z | [] | [] |
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/obernardo/miniconda/lib/python2.7/site-packages/numpy/__init__.py", line 191, in <module>
from . import ma
File "/Users/obernardo/miniconda/lib/python2.7/site-packages/numpy/ma/__init__.py", line 49, in <module>
from . import extras
File "/Users/obernardo/miniconda/lib/python2.7/site-packages/numpy/ma/extras.py", line 438, in <module>
:np.apply_over_axes.__doc__.find('Notes')].rstrip() + \
AttributeError: 'NoneType' object has no attribute 'find'
| 10,330 |
||||
numpy/numpy | numpy__numpy-5455 | 14445500bdf67600f926c6426bad55977441dca0 | diff --git a/numpy/ma/core.py b/numpy/ma/core.py
--- a/numpy/ma/core.py
+++ b/numpy/ma/core.py
@@ -145,10 +145,15 @@ class MaskError(MAError):
'S' : 'N/A',
'u' : 999999,
'V' : '???',
- 'U' : 'N/A',
- 'M8[D]' : np.datetime64('NaT', 'D'),
- 'M8[us]' : np.datetime64('NaT', 'us')
+ 'U' : 'N/A'
}
+
+# Add datetime64 and timedelta64 types
+for v in ["Y", "M", "W", "D", "h", "m", "s", "ms", "us", "ns", "ps",
+ "fs", "as"]:
+ default_filler["M8[" + v + "]"] = np.datetime64("NaT", v)
+ default_filler["m8[" + v + "]"] = np.timedelta64("NaT", v)
+
max_filler = ntypes._minvals
max_filler.update([(k, -np.inf) for k in [np.float32, np.float64]])
min_filler = ntypes._maxvals
@@ -194,7 +199,7 @@ def default_fill_value(obj):
999999
>>> np.ma.default_fill_value(np.array([1.1, 2., np.pi]))
1e+20
- >>> np.ma.default_fill_value(np.dtype(complex))
+ >>> np.ma.default_fill_value(np.dtype(complex))
(1e+20+0j)
"""
@@ -203,7 +208,7 @@ def default_fill_value(obj):
elif isinstance(obj, np.dtype):
if obj.subdtype:
defval = default_filler.get(obj.subdtype[0].kind, '?')
- elif obj.kind == 'M':
+ elif obj.kind in 'Mm':
defval = default_filler.get(obj.str[1:], '?')
else:
defval = default_filler.get(obj.kind, '?')
| Masked array view fails if structured dtype has datetime component
A view as `numpy.ma.MaskedArray` fails if the array has a structured dtype that includes at least one `datetime64` field, as follows:
```
$ python3.3
Python 3.3.2+ (default, Feb 28 2014, 00:52:16)
[GCC 4.8.1] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import numpy, numpy.ma, numpy.version
>>> numpy.version.version
'1.8.0'
>>> A = numpy.empty(shape=(5,), dtype=[("A", "<f4"), ("B", "datetime64[ms]")])
>>> A.view(numpy.ma.MaskedArray)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/gerrit/venv/python-3.3/lib/python3.3/site-packages/numpy/ma/core.py", line 2800, in __array_finalize__
self._fill_value = _check_fill_value(None, self.dtype)
File "/home/gerrit/venv/python-3.3/lib/python3.3/site-packages/numpy/ma/core.py", line 402, in _check_fill_value
dtype=ndtype,)
ValueError: Error parsing datetime string "?" at position 0
```
| A simpler `numpy.ma.MaskedArray([], shape=(0,), dtype=[("A", "<f4"), ("B", "datetime64[ms]")])` fails with the same `ValueError`.
Just tested with the latest git repository. The problem still exists.
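With the default fillers added by the patch above, datetime and timedelta fields get `NaT` as their default fill value, so the failing view from the report goes through. A quick check (output abbreviated):
``` python
import numpy as np

print(np.ma.default_fill_value(np.dtype("M8[ms]")))   # NaT
print(np.ma.default_fill_value(np.dtype("m8[s]")))    # NaT
A = np.empty(shape=(5,), dtype=[("A", "<f4"), ("B", "datetime64[ms]")])
print(A.view(np.ma.MaskedArray).fill_value)           # (1e+20, NaT)
```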
| 2015-01-15T00:07:57Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/gerrit/venv/python-3.3/lib/python3.3/site-packages/numpy/ma/core.py", line 2800, in __array_finalize__
self._fill_value = _check_fill_value(None, self.dtype)
File "/home/gerrit/venv/python-3.3/lib/python3.3/site-packages/numpy/ma/core.py", line 402, in _check_fill_value
dtype=ndtype,)
ValueError: Error parsing datetime string "?" at position 0
| 10,346 |
|||
numpy/numpy | numpy__numpy-5498 | 54ebec335384ec1d6f8e65bdd35c4f359797dc0b | diff --git a/numpy/add_newdocs.py b/numpy/add_newdocs.py
--- a/numpy/add_newdocs.py
+++ b/numpy/add_newdocs.py
@@ -885,7 +885,7 @@ def luf(lamdaexpr, *args, **kwargs):
>>> np.zeros(5)
array([ 0., 0., 0., 0., 0.])
- >>> np.zeros((5,), dtype=numpy.int)
+ >>> np.zeros((5,), dtype=np.int)
array([0, 0, 0, 0, 0])
>>> np.zeros((2, 1))
| Error in help documentation of numpy.zeros
The help documentation for the numpy `zeros` function (`import numpy; help(numpy.zeros)`) shows an example like this:
``` python
>>> np.zeros((5,), dtype=numpy.int)
array([0, 0, 0, 0, 0])
```
The documentation throughout the `multiarray` module assumes that `numpy` is imported as `np`, but the value for the `dtype` keyword is passed as `numpy.int`. While the above snippet gets the point across, it will throw a NameError if someone actually tries to execute it:
``` python
>>> import numpy as np
>>> np.zeros((5,), dtype=numpy.int)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'numpy' is not defined
```
This can be fixed by consistently using either `numpy` or `np`.
| 2015-01-24T09:50:00Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'numpy' is not defined
| 10,351 |
||||
numpy/numpy | numpy__numpy-5500 | 1cac77b27d10217aaeb468abb71ed49f241ce469 | diff --git a/numpy/lib/npyio.py b/numpy/lib/npyio.py
--- a/numpy/lib/npyio.py
+++ b/numpy/lib/npyio.py
@@ -717,7 +717,8 @@ def loadtxt(fname, dtype=float, comments='#', delimiter=None,
"""
# Type conversions for Py3 convenience
- comments = asbytes(comments)
+ if comments is not None:
+ comments = asbytes(comments)
user_converters = converters
if delimiter is not None:
delimiter = asbytes(delimiter)
@@ -790,7 +791,10 @@ def pack_items(items, packing):
def split_line(line):
"""Chop off comments, strip, and split at delimiter."""
- line = asbytes(line).split(comments)[0].strip(asbytes('\r\n'))
+ if comments is None:
+ line = asbytes(line).strip(asbytes('\r\n'))
+ else:
+ line = asbytes(line).split(comments)[0].strip(asbytes('\r\n'))
if line:
return line.split(delimiter)
else:
diff --git a/numpy/ma/core.py b/numpy/ma/core.py
--- a/numpy/ma/core.py
+++ b/numpy/ma/core.py
@@ -145,10 +145,15 @@ class MaskError(MAError):
'S' : 'N/A',
'u' : 999999,
'V' : '???',
- 'U' : 'N/A',
- 'M8[D]' : np.datetime64('NaT', 'D'),
- 'M8[us]' : np.datetime64('NaT', 'us')
+ 'U' : 'N/A'
}
+
+# Add datetime64 and timedelta64 types
+for v in ["Y", "M", "W", "D", "h", "m", "s", "ms", "us", "ns", "ps",
+ "fs", "as"]:
+ default_filler["M8[" + v + "]"] = np.datetime64("NaT", v)
+ default_filler["m8[" + v + "]"] = np.timedelta64("NaT", v)
+
max_filler = ntypes._minvals
max_filler.update([(k, -np.inf) for k in [np.float32, np.float64]])
min_filler = ntypes._maxvals
@@ -194,7 +199,7 @@ def default_fill_value(obj):
999999
>>> np.ma.default_fill_value(np.array([1.1, 2., np.pi]))
1e+20
- >>> np.ma.default_fill_value(np.dtype(complex))
+ >>> np.ma.default_fill_value(np.dtype(complex))
(1e+20+0j)
"""
@@ -203,7 +208,7 @@ def default_fill_value(obj):
elif isinstance(obj, np.dtype):
if obj.subdtype:
defval = default_filler.get(obj.subdtype[0].kind, '?')
- elif obj.kind == 'M':
+ elif obj.kind in 'Mm':
defval = default_filler.get(obj.str[1:], '?')
else:
defval = default_filler.get(obj.kind, '?')
| loadtxt(comments=None) considers the string 'None' as a comment symbol
Numpy 1.8.2
`numpy.loadtxt(comments=None)` considers the string `'None'` as a comment symbol.
Expected behaviour:
`comments=None` should indicate that there are no comment symbols; it should certainly not cast `None` to a string. What is this, JavaScript?
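For reference, the intended behaviour after the fix, as a small sketch: passing `comments=None` disables comment handling entirely instead of treating the literal string `"None"` as a marker.
``` python
import numpy as np
from io import StringIO

data = StringIO(u"1 2\n3 4\n")
print(np.loadtxt(data, comments=None))
# [[ 1.  2.]
#  [ 3.  4.]]
```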
Masked array view fails if structured dtype has datetime component
A view as `numpy.ma.MaskedArray` fails if the array has a structured dtype that includes at least one `datetime64` field, as follows:
```
$ python3.3
Python 3.3.2+ (default, Feb 28 2014, 00:52:16)
[GCC 4.8.1] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import numpy, numpy.ma, numpy.version
>>> numpy.version.version
'1.8.0'
>>> A = numpy.empty(shape=(5,), dtype=[("A", "<f4"), ("B", "datetime64[ms]")])
>>> A.view(numpy.ma.MaskedArray)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/gerrit/venv/python-3.3/lib/python3.3/site-packages/numpy/ma/core.py", line 2800, in __array_finalize__
self._fill_value = _check_fill_value(None, self.dtype)
File "/home/gerrit/venv/python-3.3/lib/python3.3/site-packages/numpy/ma/core.py", line 402, in _check_fill_value
dtype=ndtype,)
ValueError: Error parsing datetime string "?" at position 0
```
|
A simpler `numpy.ma.MaskedArray([], shape=(0,), dtype=[("A", "<f4"), ("B", "datetime64[ms]")])` fails with the same `ValueError`.
Just tested with the latest git repository. The problem still exists.
| 2015-01-25T16:09:27Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/gerrit/venv/python-3.3/lib/python3.3/site-packages/numpy/ma/core.py", line 2800, in __array_finalize__
self._fill_value = _check_fill_value(None, self.dtype)
File "/home/gerrit/venv/python-3.3/lib/python3.3/site-packages/numpy/ma/core.py", line 402, in _check_fill_value
dtype=ndtype,)
ValueError: Error parsing datetime string "?" at position 0
| 10,352 |
|||
numpy/numpy | numpy__numpy-5548 | 30d755d8737505717d54ed32501261bb94130a7f | diff --git a/numpy/core/_internal.py b/numpy/core/_internal.py
--- a/numpy/core/_internal.py
+++ b/numpy/core/_internal.py
@@ -305,6 +305,174 @@ def _index_fields(ary, fields):
copy_dtype = {'names':view_dtype['names'], 'formats':view_dtype['formats']}
return array(view, dtype=copy_dtype, copy=True)
+def _get_all_field_offsets(dtype, base_offset=0):
+ """ Returns the types and offsets of all fields in a (possibly structured)
+ data type, including nested fields and subarrays.
+
+ Parameters
+ ----------
+ dtype : data-type
+ Data type to extract fields from.
+ base_offset : int, optional
+ Additional offset to add to all field offsets.
+
+ Returns
+ -------
+ fields : list of (data-type, int) pairs
+ A flat list of (dtype, byte offset) pairs.
+
+ """
+ fields = []
+ if dtype.fields is not None:
+ for name in dtype.names:
+ sub_dtype = dtype.fields[name][0]
+ sub_offset = dtype.fields[name][1] + base_offset
+ fields.extend(_get_all_field_offsets(sub_dtype, sub_offset))
+ else:
+ if dtype.shape:
+ sub_offsets = _get_all_field_offsets(dtype.base, base_offset)
+ count = 1
+ for dim in dtype.shape:
+ count *= dim
+ fields.extend((typ, off + dtype.base.itemsize*j)
+ for j in range(count) for (typ, off) in sub_offsets)
+ else:
+ fields.append((dtype, base_offset))
+ return fields
+
+def _check_field_overlap(new_fields, old_fields):
+ """ Perform object memory overlap tests for two data-types (see
+ _view_is_safe).
+
+ This function checks that new fields only access memory contained in old
+ fields, and that non-object fields are not interpreted as objects and vice
+ versa.
+
+ Parameters
+ ----------
+ new_fields : list of (data-type, int) pairs
+ Flat list of (dtype, byte offset) pairs for the new data type, as
+ returned by _get_all_field_offsets.
+ old_fields: list of (data-type, int) pairs
+ Flat list of (dtype, byte offset) pairs for the old data type, as
+ returned by _get_all_field_offsets.
+
+ Raises
+ ------
+ TypeError
+ If the new fields are incompatible with the old fields
+
+ """
+ from .numerictypes import object_
+ from .multiarray import dtype
+
+ #first go byte by byte and check we do not access bytes not in old_fields
+ new_bytes = set()
+ for tp, off in new_fields:
+ new_bytes.update(set(range(off, off+tp.itemsize)))
+ old_bytes = set()
+ for tp, off in old_fields:
+ old_bytes.update(set(range(off, off+tp.itemsize)))
+ if new_bytes.difference(old_bytes):
+ raise TypeError("view would access data parent array doesn't own")
+
+ #next check that we do not interpret non-Objects as Objects, and vv
+ obj_offsets = [off for (tp, off) in old_fields if tp.type is object_]
+ obj_size = dtype(object_).itemsize
+
+ for fld_dtype, fld_offset in new_fields:
+ if fld_dtype.type is object_:
+ # check we do not create object views where
+ # there are no objects.
+ if fld_offset not in obj_offsets:
+ raise TypeError("cannot view non-Object data as Object type")
+ else:
+ # next check we do not create non-object views
+ # where there are already objects.
+ # see validate_object_field_overlap for a similar computation.
+ for obj_offset in obj_offsets:
+ if (fld_offset < obj_offset + obj_size and
+ obj_offset < fld_offset + fld_dtype.itemsize):
+ raise TypeError("cannot view Object as non-Object type")
+
+def _getfield_is_safe(oldtype, newtype, offset):
+ """ Checks safety of getfield for object arrays.
+
+ As in _view_is_safe, we need to check that memory containing objects is not
+ reinterpreted as a non-object datatype and vice versa.
+
+ Parameters
+ ----------
+ oldtype : data-type
+ Data type of the original ndarray.
+ newtype : data-type
+ Data type of the field being accessed by ndarray.getfield
+ offset : int
+ Offset of the field being accessed by ndarray.getfield
+
+ Raises
+ ------
+ TypeError
+ If the field access is invalid
+
+ """
+ new_fields = _get_all_field_offsets(newtype, offset)
+ old_fields = _get_all_field_offsets(oldtype)
+ # raises if there is a problem
+ _check_field_overlap(new_fields, old_fields)
+
+def _view_is_safe(oldtype, newtype):
+ """ Checks safety of a view involving object arrays, for example when
+ doing::
+
+ np.zeros(10, dtype=oldtype).view(newtype)
+
+ We need to check that
+ 1) No memory that is not an object will be interpreted as a object,
+ 2) No memory containing an object will be interpreted as an arbitrary type.
+ Both cases can cause segfaults, eg in the case the view is written to.
+ Strategy here is to also disallow views where newtype has any field in a
+ place oldtype doesn't.
+
+ Parameters
+ ----------
+ oldtype : data-type
+ Data type of original ndarray
+ newtype : data-type
+ Data type of the view
+
+ Raises
+ ------
+ TypeError
+ If the new type is incompatible with the old type.
+
+ """
+ new_fields = _get_all_field_offsets(newtype)
+ new_size = newtype.itemsize
+
+ old_fields = _get_all_field_offsets(oldtype)
+ old_size = oldtype.itemsize
+
+ # if the itemsizes are not equal, we need to check that all the
+ # 'tiled positions' of the object match up. Here, we allow
+ # for arbirary itemsizes (even those possibly disallowed
+ # due to stride/data length issues).
+ if old_size == new_size:
+ new_num = old_num = 1
+ else:
+ gcd_new_old = _gcd(new_size, old_size)
+ new_num = old_size // gcd_new_old
+ old_num = new_size // gcd_new_old
+
+ # get position of fields within the tiling
+ new_fieldtile = [(tp, off + new_size*j)
+ for j in range(new_num) for (tp, off) in new_fields]
+ old_fieldtile = [(tp, off + old_size*j)
+ for j in range(old_num) for (tp, off) in old_fields]
+
+ # raises if there is a problem
+ _check_field_overlap(new_fieldtile, old_fieldtile)
+
# Given a string containing a PEP 3118 format specifier,
# construct a Numpy dtype
| TypeError: Cannot change data-type for object array
If I try and read in the array contained in this npy file:
https://gist.github.com/astrofrog/8c2d188005f31e0bba36/raw/3065c8fa220a6eaccbff20565d0d520c07e5e7e6/test.npy
then try and print out the array, so:
``` python
import numpy as np
array = np.load('test.npy')
print(array)
```
I get:
```
Traceback (most recent call last):
File "test2.py", line 5, in <module>
print(array)
File "/Volumes/Raptor/miniconda3/envs/dev/lib/python3.4/site-packages/numpy/core/numeric.py", line 1767, in array_str
return array2string(a, max_line_width, precision, suppress_small, ' ', "", str)
File "/Volumes/Raptor/miniconda3/envs/dev/lib/python3.4/site-packages/numpy/core/arrayprint.py", line 459, in array2string
separator, prefix, formatter=formatter)
File "/Volumes/Raptor/miniconda3/envs/dev/lib/python3.4/site-packages/numpy/core/arrayprint.py", line 329, in _array2string
_summaryEdgeItems, summary_insert)[:-1]
File "/Volumes/Raptor/miniconda3/envs/dev/lib/python3.4/site-packages/numpy/core/arrayprint.py", line 526, in _formatArray
s += _formatArray(a[-i], format_function, rank-1, max_line_len,
File "/Volumes/Raptor/miniconda3/envs/dev/lib/python3.4/site-packages/numpy/core/records.py", line 481, in __getitem__
return obj.view(dtype=(self.dtype.type, obj.dtype.descr))
File "/Volumes/Raptor/miniconda3/envs/dev/lib/python3.4/site-packages/numpy/core/records.py", line 540, in view
return ndarray.view(self, dtype)
File "/Volumes/Raptor/miniconda3/envs/dev/lib/python3.4/site-packages/numpy/core/records.py", line 457, in __setattr__
raise exctype(value)
TypeError: Cannot change data-type for object array.
```
This is with the latest developer version of Numpy (3c5409e4e38e6034d69d0042bf2a3bc854ef2e53) and Python 3.4 on MacOS X.
The dtype can be printed, as can individual columns, but the array as a whole can't. This doesn't occur in the latest stable release, so it may be a regression?
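The patch in this record adds view-safety checks for object fields. A short sketch of the rule they enforce (not the reporter's original data): reinterpreting memory that holds object pointers as plain bytes must be rejected, while ordinary field access stays allowed.
``` python
import numpy as np

a = np.zeros(4, dtype=[("x", object), ("y", "i8")])
try:
    a.view("i8")              # would expose raw object pointers as integers
except TypeError as exc:
    print("rejected:", exc)
print(a["y"].dtype)           # int64 -- normal field access is still fine
```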
| 2015-02-09T05:49:41Z | [] | [] |
Traceback (most recent call last):
File "test2.py", line 5, in <module>
print(array)
File "/Volumes/Raptor/miniconda3/envs/dev/lib/python3.4/site-packages/numpy/core/numeric.py", line 1767, in array_str
return array2string(a, max_line_width, precision, suppress_small, ' ', "", str)
File "/Volumes/Raptor/miniconda3/envs/dev/lib/python3.4/site-packages/numpy/core/arrayprint.py", line 459, in array2string
separator, prefix, formatter=formatter)
File "/Volumes/Raptor/miniconda3/envs/dev/lib/python3.4/site-packages/numpy/core/arrayprint.py", line 329, in _array2string
_summaryEdgeItems, summary_insert)[:-1]
File "/Volumes/Raptor/miniconda3/envs/dev/lib/python3.4/site-packages/numpy/core/arrayprint.py", line 526, in _formatArray
s += _formatArray(a[-i], format_function, rank-1, max_line_len,
File "/Volumes/Raptor/miniconda3/envs/dev/lib/python3.4/site-packages/numpy/core/records.py", line 481, in __getitem__
return obj.view(dtype=(self.dtype.type, obj.dtype.descr))
File "/Volumes/Raptor/miniconda3/envs/dev/lib/python3.4/site-packages/numpy/core/records.py", line 540, in view
return ndarray.view(self, dtype)
File "/Volumes/Raptor/miniconda3/envs/dev/lib/python3.4/site-packages/numpy/core/records.py", line 457, in __setattr__
raise exctype(value)
TypeError: Cannot change data-type for object array.
| 10,357 |
||||
numpy/numpy | numpy__numpy-5584 | 2e016ac65aceab4e08217794d6be7b365793976a | diff --git a/numpy/core/fromnumeric.py b/numpy/core/fromnumeric.py
--- a/numpy/core/fromnumeric.py
+++ b/numpy/core/fromnumeric.py
@@ -691,8 +691,16 @@ def argpartition(a, kth, axis=-1, kind='introselect', order=None):
>>> x[np.argpartition(x, (1, 3))]
array([1, 2, 3, 4])
+ >>> x = [3, 4, 2, 1]
+ >>> np.array(x)[np.argpartition(x, 3)]
+ array([2, 1, 3, 4])
+
"""
- return a.argpartition(kth, axis, kind=kind, order=order)
+ try:
+ argpartition = a.argpartition
+ except AttributeError:
+ return _wrapit(a, 'argpartition',kth, axis, kind, order)
+ return argpartition(kth, axis, kind=kind, order=order)
def sort(a, axis=-1, kind='quicksort', order=None):
| argpartition fails on non-ndarray array-likes
While `partition` works, as advertised, on array-like inputs such as lists:
```
>>> np.partition([5, 4, 3, 2, 1], 2)
array([1, 2, 3, 4, 5])
```
`argpartition` raises an error if its first argument is not an `ndarray`:
```
>>> np.argpartition([5, 4, 3, 2, 1], 2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\WinPython-32bit-sp-dev\python-2.7.5\lib\site-packages\numpy\core\fromnumeric.py", line 680, in argpartition
return a.argpartition(kth, axis, kind=kind, order=order)
AttributeError: 'list' object has no attribute 'argpartition'
```
| If any array-like inputs are to be accepted, then as in `argsort`, this can be implemented:
```
try:
argpartition = a.argpartition
except AttributeError:
return _wrapit(a, 'argpartition', kth, axis, kind, order)
return argpartition(kth, axis, kind=kind, order=order)
```
else it can be like `sort` and `partition`:
```
if axis is None:
a = asanyarray(a).flatten()
axis = 0
else:
a = asanyarray(a).copy(order="K")
return a.argpartition(kth, axis, kind=kind, order=order)
```
Just asking out of curiosity: the following example is shown in the documentation:
```
>>> x = np.array([3, 4, 2, 1])
>>> x[np.argpartition(x, 3)]
array([2, 1, 3, 4])
```
Now, if any array-like inputs are allowed, then the following behavior is needed.
```
>>> x = [3, 4, 2, 1]
>>>np.array(x)[np.argpartition(x, 3)]
array([2, 1, 3, 4])
>>>x[np.argpartition(x, 3)]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: only integer arrays with one element can be converted to an index
```
If this is the procedure to be followed, as in the case of `argsort`, I think it should be documented. Please correct me if no such use is needed in general scenarios.
It probably should be like argsort. I don't remember why it's not; maybe it was just a mistake, as I can't remember there being a reason. Compared to sort, partition can't be overridden from the C side, but that shouldn't matter here.
I think the `partition` and `sort` C API functions are defined in `item_selection.c`. Can you please clarify what `compared to sort, partition can't be overridden from the C side` means? Could I have a go at fixing this?
No, that is kind of intentional: the C sort functions are part of a public struct, so adding partition would break the ABI and just add more stuff that should not be in that place.
So I went with an internal lookup table that could get hooks for user functions. But I don't think it's really worthwhile; I don't think anybody would use it.
@juliantaylor Thanks for the clarification. I am new to working with this kind of project and wasn't aware of the nature of public struct functions and the way in which they are maintained.
| 2015-02-19T13:56:08Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\WinPython-32bit-sp-dev\python-2.7.5\lib\site-packages\numpy\core\fromnumeric.py", line 680, in argpartition
return a.argpartition(kth, axis, kind=kind, order=order)
AttributeError: 'list' object has no attribute 'argpartition'
| 10,359 |
|||
numpy/numpy | numpy__numpy-5616 | bf6f80d994a154f25eb5e8beca5babdf31b13eca | diff --git a/numpy/core/fromnumeric.py b/numpy/core/fromnumeric.py
--- a/numpy/core/fromnumeric.py
+++ b/numpy/core/fromnumeric.py
@@ -679,8 +679,16 @@ def argpartition(a, kth, axis=-1, kind='introselect', order=None):
>>> x[np.argpartition(x, (1, 3))]
array([1, 2, 3, 4])
+ >>> x = [3, 4, 2, 1]
+ >>> np.array(x)[np.argpartition(x, 3)]
+ array([2, 1, 3, 4])
+
"""
- return a.argpartition(kth, axis, kind=kind, order=order)
+ try:
+ argpartition = a.argpartition
+ except AttributeError:
+ return _wrapit(a, 'argpartition',kth, axis, kind, order)
+ return argpartition(kth, axis, kind=kind, order=order)
def sort(a, axis=-1, kind='quicksort', order=None):
| argpartition fails on non-ndarray array-likes
While `partition` works, as advertised, on array-like inputs such as lists:
```
>>> np.partition([5, 4, 3, 2, 1], 2)
array([1, 2, 3, 4, 5])
```
`argpartition` raises an error if its first argument is not an `ndarray`:
```
>>> np.argpartition([5, 4, 3, 2, 1], 2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\WinPython-32bit-sp-dev\python-2.7.5\lib\site-packages\numpy\core\fromnumeric.py", line 680, in argpartition
return a.argpartition(kth, axis, kind=kind, order=order)
AttributeError: 'list' object has no attribute 'argpartition'
```
| If any array-like inputs are to be accepted, then as in `argsort`, this can be implemented:
```
try:
argpartition = a.argpartition
except AttributeError:
return _wrapit(a, 'argpartition', kth, axis, kind, order)
return argpartition(kth, axis, kind=kind, order=order)
```
else it can be like `sort` and `partition`:
```
if axis is None:
a = asanyarray(a).flatten()
axis = 0
else:
a = asanyarray(a).copy(order="K")
return a.argpartition(kth, axis, kind=kind, order=order)
```
Just asking out of curiosity: the following example is shown in the documentation:
```
>>> x = np.array([3, 4, 2, 1])
>>> x[np.argpartition(x, 3)]
array([2, 1, 3, 4])
```
Now, if any array-like inputs are allowed, then the following behavior is needed.
```
>>> x = [3, 4, 2, 1]
>>>np.array(x)[np.argpartition(x, 3)]
array([2, 1, 3, 4])
>>>x[np.argpartition(x, 3)]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: only integer arrays with one element can be converted to an index
```
If this is the procedure to be followed, as in the case of `argsort`, I think it should be documented. Please correct me if no such use is needed in general scenarios.
It probably should be like argsort. I don't remember why it's not; maybe it was just a mistake, as I can't remember there being a reason. Compared to sort, partition can't be overridden from the C side, but that shouldn't matter here.
I think the `partition` and `sort` C API functions are defined in `item_selection.c`. Can you please clarify what `compared to sort, partition can't be overridden from the C side` means? Could I have a go at fixing this?
No, that is kind of intentional: the C sort functions are part of a public struct, so adding partition would break the ABI and just add more stuff that should not be in that place.
So I went with an internal lookup table that could get hooks for user functions. But I don't think it's really worthwhile; I don't think anybody would use it.
@juliantaylor Thanks for the clarification. I am new to working with this kind of project and wasn't aware of the nature of public struct functions and the way in which they are maintained.
| 2015-02-28T13:04:47Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\WinPython-32bit-sp-dev\python-2.7.5\lib\site-packages\numpy\core\fromnumeric.py", line 680, in argpartition
return a.argpartition(kth, axis, kind=kind, order=order)
AttributeError: 'list' object has no attribute 'argpartition'
| 10,361 |
|||
numpy/numpy | numpy__numpy-5742 | 147c60f83f401037ff29593826d2c5729a73c2c5 | diff --git a/numpy/lib/type_check.py b/numpy/lib/type_check.py
--- a/numpy/lib/type_check.py
+++ b/numpy/lib/type_check.py
@@ -324,12 +324,13 @@ def nan_to_num(x):
Returns
-------
- out : ndarray, float
- Array with the same shape as `x` and dtype of the element in `x` with
- the greatest precision. NaN is replaced by zero, and infinity
- (-infinity) is replaced by the largest (smallest or most negative)
- floating point value that fits in the output dtype. All finite numbers
- are upcast to the output dtype (default float64).
+ out : ndarray
+ New Array with the same shape as `x` and dtype of the element in
+ `x` with the greatest precision. If `x` is inexact, then NaN is
+ replaced by zero, and infinity (-infinity) is replaced by the
+ largest (smallest or most negative) floating point value that fits
+ in the output dtype. If `x` is not inexact, then a copy of `x` is
+ returned.
See Also
--------
@@ -354,33 +355,22 @@ def nan_to_num(x):
-1.28000000e+002, 1.28000000e+002])
"""
- try:
- t = x.dtype.type
- except AttributeError:
- t = obj2sctype(type(x))
- if issubclass(t, _nx.complexfloating):
- return nan_to_num(x.real) + 1j * nan_to_num(x.imag)
- else:
- try:
- y = x.copy()
- except AttributeError:
- y = array(x)
- if not issubclass(t, _nx.integer):
- if not y.shape:
- y = array([x])
- scalar = True
- else:
- scalar = False
- are_inf = isposinf(y)
- are_neg_inf = isneginf(y)
- are_nan = isnan(y)
- maxf, minf = _getmaxmin(y.dtype.type)
- y[are_nan] = 0
- y[are_inf] = maxf
- y[are_neg_inf] = minf
- if scalar:
- y = y[0]
- return y
+ x = _nx.array(x, subok=True)
+ xtype = x.dtype.type
+ if not issubclass(xtype, _nx.inexact):
+ return x
+
+ iscomplex = issubclass(xtype, _nx.complexfloating)
+ isscalar = (x.ndim == 0)
+
+ x = x[None] if isscalar else x
+ dest = (x.real, x.imag) if iscomplex else (x,)
+ maxf, minf = _getmaxmin(x.real.dtype)
+ for d in dest:
+ _nx.copyto(d, 0.0, where=isnan(d))
+ _nx.copyto(d, maxf, where=isposinf(d))
+ _nx.copyto(d, minf, where=isneginf(d))
+ return x[0] if isscalar else x
#-----------------------------------------------------------------------------
| An error is given if an integer list is passed to nan_to_num (Trac #880)
_Original ticket http://projects.scipy.org/numpy/ticket/880 on 2008-08-06 by @bsouthey, assigned to unknown._
When an integer list is passed to nan_to_num, an error occurs. This does not occur if the list contains floats or if a numpy integer array is used.
```
np.nan_to_num([1.0,3]) # returns: array([ 1., 3.])
n=np.array([1,3])
np.nan_to_num(n) # returns: array([1, 3])
np.nan_to_num([1,3])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.5/site-packages/numpy/lib/type_check.py", line 135, in nan_to_num
maxf, minf = _getmaxmin(y.dtype.type)
File "/usr/lib64/python2.5/site-packages/numpy/lib/type_check.py", line 103, in _getmaxmin
f = getlimits.finfo(t)
File "/usr/lib64/python2.5/site-packages/numpy/lib/getlimits.py", line 46, in __new__
raise ValueError, "data type %r not inexact" % (dtype)
ValueError: data type <type 'numpy.int64'> not inexact
```
| Milestone changed to `1.4.0` by @cournape on 2009-03-09
Milestone changed to `Unscheduled` by @mwiebe on 2011-03-23
Still present in 1.9-devel.
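For illustration, a rough sketch (not the patch itself) of the masked, in-place replacement strategy the new implementation uses — `np.copyto` with boolean `where=` masks, with non-inexact inputs simply returned unchanged:
```python
import numpy as np

x = np.array([np.nan, np.inf, -np.inf, 1.0])
maxf = np.finfo(x.dtype).max
minf = np.finfo(x.dtype).min
np.copyto(x, 0.0, where=np.isnan(x))
np.copyto(x, maxf, where=np.isposinf(x))
np.copyto(x, minf, where=np.isneginf(x))
print(x)  # [0.0, 1.79769313e+308, -1.79769313e+308, 1.0] (values approximate)
```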
| 2015-04-03T17:34:50Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.5/site-packages/numpy/lib/type_check.py", line 135, in nan_to_num
maxf, minf = _getmaxmin(y.dtype.type)
File "/usr/lib64/python2.5/site-packages/numpy/lib/type_check.py", line 103, in _getmaxmin
f = getlimits.finfo(t)
File "/usr/lib64/python2.5/site-packages/numpy/lib/getlimits.py", line 46, in __new__
raise ValueError, "data type %r not inexact" % (dtype)
ValueError: data type <type 'numpy.int64'> not inexact
| 10,367 |
|||
numpy/numpy | numpy__numpy-6556 | 2d899ea2301e155b06acf585020866ca1953bce5 | diff --git a/numpy/lib/stride_tricks.py b/numpy/lib/stride_tricks.py
--- a/numpy/lib/stride_tricks.py
+++ b/numpy/lib/stride_tricks.py
@@ -62,11 +62,14 @@ def _broadcast_to(array, shape, subok, readonly):
if any(size < 0 for size in shape):
raise ValueError('all elements of broadcast shape must be non-'
'negative')
+ needs_writeable = not readonly and array.flags.writeable
+ extras = ['reduce_ok'] if needs_writeable else []
+ op_flag = 'readwrite' if needs_writeable else 'readonly'
broadcast = np.nditer(
- (array,), flags=['multi_index', 'refs_ok', 'zerosize_ok'],
- op_flags=['readonly'], itershape=shape, order='C').itviews[0]
+ (array,), flags=['multi_index', 'refs_ok', 'zerosize_ok'] + extras,
+ op_flags=[op_flag], itershape=shape, order='C').itviews[0]
result = _maybe_view_as_subclass(array, broadcast)
- if not readonly and array.flags.writeable:
+ if needs_writeable and not result.flags.writeable:
result.flags.writeable = True
return result
| Error in broadcasting stride_tricks array
We hit a new test failure when testing against numpy 1.10.1 : https://github.com/nipy/nibabel/pull/358
The test failure boils down to this:
```
import numpy as np
shape = (2,)
strides = [0]
tricky_scalar = np.lib.stride_tricks.as_strided(np.array(0), shape, strides)
other = np.zeros((1,))
first, second = np.broadcast_arrays(tricky_scalar, other)
```
On Linux (not OSX) I get the following error, for numpy 1.10.1:
```
Traceback (most recent call last):
File "broadcast_bug.py", line 7, in <module>
first, second = np.broadcast_arrays(tricky_scalar, other)
File "/home/mb312/.virtualenvs/test/local/lib/python2.7/site-packages/numpy/lib/stride_tricks.py", line 200, in broadcast_arrays
for array in args]
File "/home/mb312/.virtualenvs/test/local/lib/python2.7/site-packages/numpy/lib/stride_tricks.py", line 70, in _broadcast_to
result.flags.writeable = True
ValueError: cannot set WRITEABLE flag to True of this array
```
There is no error for the same code on numpy 1.9.3.
I think this is the same issue, arising in scipy: https://github.com/scipy/scipy/pull/5374#issuecomment-148775998
| Hmm. I wrote the offending line of code, so I can take a look at this.
@shoyer Any progress?
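For reference, a minimal sketch of the approach the fix takes inside `_broadcast_to`: ask `nditer` for a `readwrite` view with `reduce_ok` set (needed because a broadcast view repeats elements), instead of taking a read-only view and flipping the writeable flag afterwards:
```python
import numpy as np

a = np.array(0)
it = np.nditer((a,), flags=['multi_index', 'refs_ok', 'zerosize_ok', 'reduce_ok'],
               op_flags=['readwrite'], itershape=(2,), order='C')
view = it.itviews[0]
print(view.shape, view.strides)  # (2,) (0,) -- a zero-stride broadcast view of `a`
```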
| 2015-10-24T19:01:58Z | [] | [] |
Traceback (most recent call last):
File "broadcast_bug.py", line 7, in <module>
first, second = np.broadcast_arrays(tricky_scalar, other)
File "/home/mb312/.virtualenvs/test/local/lib/python2.7/site-packages/numpy/lib/stride_tricks.py", line 200, in broadcast_arrays
for array in args]
File "/home/mb312/.virtualenvs/test/local/lib/python2.7/site-packages/numpy/lib/stride_tricks.py", line 70, in _broadcast_to
result.flags.writeable = True
ValueError: cannot set WRITEABLE flag to True of this array
| 10,403 |
|||
numpy/numpy | numpy__numpy-6557 | 91093ff8c1ad9ccf8096fadc2e695d4039e529fc | diff --git a/numpy/lib/stride_tricks.py b/numpy/lib/stride_tricks.py
--- a/numpy/lib/stride_tricks.py
+++ b/numpy/lib/stride_tricks.py
@@ -62,11 +62,14 @@ def _broadcast_to(array, shape, subok, readonly):
if any(size < 0 for size in shape):
raise ValueError('all elements of broadcast shape must be non-'
'negative')
+ needs_writeable = not readonly and array.flags.writeable
+ extras = ['reduce_ok'] if needs_writeable else []
+ op_flag = 'readwrite' if needs_writeable else 'readonly'
broadcast = np.nditer(
- (array,), flags=['multi_index', 'refs_ok', 'zerosize_ok'],
- op_flags=['readonly'], itershape=shape, order='C').itviews[0]
+ (array,), flags=['multi_index', 'refs_ok', 'zerosize_ok'] + extras,
+ op_flags=[op_flag], itershape=shape, order='C').itviews[0]
result = _maybe_view_as_subclass(array, broadcast)
- if not readonly and array.flags.writeable:
+ if needs_writeable and not result.flags.writeable:
result.flags.writeable = True
return result
| Error in broadcasting stride_tricks array
We hit a new test failure when testing against numpy 1.10.1 : https://github.com/nipy/nibabel/pull/358
The test failure boils down to this:
```
import numpy as np
shape = (2,)
strides = [0]
tricky_scalar = np.lib.stride_tricks.as_strided(np.array(0), shape, strides)
other = np.zeros((1,))
first, second = np.broadcast_arrays(tricky_scalar, other)
```
On Linux (not OSX) I get the following error, for numpy 1.10.1:
```
Traceback (most recent call last):
File "broadcast_bug.py", line 7, in <module>
first, second = np.broadcast_arrays(tricky_scalar, other)
File "/home/mb312/.virtualenvs/test/local/lib/python2.7/site-packages/numpy/lib/stride_tricks.py", line 200, in broadcast_arrays
for array in args]
File "/home/mb312/.virtualenvs/test/local/lib/python2.7/site-packages/numpy/lib/stride_tricks.py", line 70, in _broadcast_to
result.flags.writeable = True
ValueError: cannot set WRITEABLE flag to True of this array
```
There is no error for the same code on numpy 1.9.3.
I think this is the same issue, arising in scipy: https://github.com/scipy/scipy/pull/5374#issuecomment-148775998
| Hmm. I wrote the offending line of code, so I can take a look at this.
@shoyer Any progress?
| 2015-10-24T20:53:18Z | [] | [] |
Traceback (most recent call last):
File "broadcast_bug.py", line 7, in <module>
first, second = np.broadcast_arrays(tricky_scalar, other)
File "/home/mb312/.virtualenvs/test/local/lib/python2.7/site-packages/numpy/lib/stride_tricks.py", line 200, in broadcast_arrays
for array in args]
File "/home/mb312/.virtualenvs/test/local/lib/python2.7/site-packages/numpy/lib/stride_tricks.py", line 70, in _broadcast_to
result.flags.writeable = True
ValueError: cannot set WRITEABLE flag to True of this array
| 10,404 |
|||
numpy/numpy | numpy__numpy-6905 | e072d79f03610c33e336a9b700882d8905f9c958 | diff --git a/numpy/lib/stride_tricks.py b/numpy/lib/stride_tricks.py
--- a/numpy/lib/stride_tricks.py
+++ b/numpy/lib/stride_tricks.py
@@ -121,9 +121,6 @@ def _broadcast_shape(*args):
"""
if not args:
raise ValueError('must provide at least one argument')
- if len(args) == 1:
- # a single argument does not work with np.broadcast
- return np.asarray(args[0]).shape
# use the old-iterator because np.nditer does not handle size 0 arrays
# consistently
b = np.broadcast(*args[:32])
| Why does `numpy.broadcast` not accept a single input array?
I'm wondering what's the reasoning behind forbidding a single array as input to `numpy.broadcast`. When writing code which is supposed to work in N dimensions with N >= 1, it would be nice to be able to use `broadcast` as-is. A standard use case is the calculation of the shape of an output array when using meshgrids as input for function evaluation:
``` python
>>> coord_vecs = [[1, 2], [3, 4, 5]]
>>> mesh = np.meshgrid(*coord_vecs, indexing='ij', sparse=True) # for large stuff
>>> out_shape = np.broadcast(*mesh).shape
>>> out_shape
(2, 3)
>>> coord_vecs = [[1, 2]]
>>> mesh = np.meshgrid(*coord_vecs, indexing='ij', sparse=True) # still works
>>> out_shape = np.broadcast(*mesh).shape
Traceback (most recent call last):
...
ValueError: Need at least two and fewer than (32) array objects.
```
As noted above, `meshgrid` works and returns a list with one element, as expected. Why does `broadcast` not return a broadcast object with shape `(2,)` in the example above? The other public methods of such an object would probably also make sense with a single input array.
EDIT: Last sentence was wrong in the first version.
| Apparently, there is [one single place](https://github.com/numpy/numpy/blob/004639d07fd161d1394f5dda1b6ed42c777f3c80/numpy/core/src/multiarray/iterators.c#L1606-L1614) where this situation is caught. Without knowing much about internals, the function around that check seems to be completely generic, and simply replacing 2 with 1 could be the only change necessary to allow single-array(-like) input. No idea about side effects of such a change, though.
Well, maybe worth some playing around, then?
I also find this surprising.
Seems to work right away with the proposed (trivial) change:
``` python
>>> import numpy as np
>>> np.__path__ # My local GH copy
['/home/hkohr/Software/numpy/numpy']
>>> a = np.arange(3)
>>> bc = np.broadcast(a)
>>> bc.nd
1
>>> bc.numiter
1
>>> bc.shape
(3,)
>>> bc.size
3
```
I get three errors when running `np.test()`, but they're probably unrelated. I just post them for the record:
```
======================================================================
ERROR: Failure: ImportError (cannot import name ccompiler)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/hkohr/.virtualenvs/numpy-py2.7/lib/python2.7/site-packages/nose/loader.py", line 418, in loadTestsFromName
addr.filename, addr.module)
File "/home/hkohr/.virtualenvs/numpy-py2.7/lib/python2.7/site-packages/nose/importer.py", line 47, in importFromPath
return self.importFromDir(dir_path, fqname)
File "/home/hkohr/.virtualenvs/numpy-py2.7/lib/python2.7/site-packages/nose/importer.py", line 94, in importFromDir
mod = load_module(part_fqname, fh, filename, desc)
File "numpy/distutils/__init__.py", line 8, in <module>
from . import ccompiler
File "numpy/distutils/ccompiler.py", line 8, in <module>
from distutils import ccompiler
File "numpy/distutils/__init__.py", line 8, in <module>
from . import ccompiler
File "numpy/distutils/ccompiler.py", line 8, in <module>
from distutils import ccompiler
ImportError: cannot import name ccompiler
======================================================================
ERROR: test suite for <module 'test_array_from_pyobj' from '/home/hkohr/Software/numpy/numpy/f2py/tests/test_array_from_pyobj.py'>
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/hkohr/.virtualenvs/numpy-py2.7/lib/python2.7/site-packages/nose/suite.py", line 209, in run
self.setUp()
File "/home/hkohr/.virtualenvs/numpy-py2.7/lib/python2.7/site-packages/nose/suite.py", line 292, in setUp
self.setupContext(ancestor)
File "/home/hkohr/.virtualenvs/numpy-py2.7/lib/python2.7/site-packages/nose/suite.py", line 315, in setupContext
try_run(context, names)
File "/home/hkohr/.virtualenvs/numpy-py2.7/lib/python2.7/site-packages/nose/util.py", line 471, in try_run
return func()
File "/home/hkohr/Software/numpy/numpy/f2py/tests/test_array_from_pyobj.py", line 42, in setup
'test_array_from_pyobj_ext')
File "/home/hkohr/Software/numpy/numpy/f2py/tests/util.py", line 78, in wrapper
memo[key] = func(*a, **kw)
File "/home/hkohr/Software/numpy/numpy/f2py/tests/util.py", line 312, in build_module_distutils
__import__(module_name)
ImportError: /tmp/tmpqyEUs_/test_array_from_pyobj_ext.so: failed to map segment from shared object
======================================================================
ERROR: test_callback.TestF77Callback.test_string_callback
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/hkohr/.virtualenvs/numpy-py2.7/lib/python2.7/site-packages/nose/case.py", line 381, in setUp
try_run(self.inst, ('setup', 'setUp'))
File "/home/hkohr/.virtualenvs/numpy-py2.7/lib/python2.7/site-packages/nose/util.py", line 471, in try_run
return func()
File "/home/hkohr/Software/numpy/numpy/f2py/tests/util.py", line 361, in setUp
module_name=self.module_name)
File "/home/hkohr/Software/numpy/numpy/f2py/tests/util.py", line 78, in wrapper
memo[key] = func(*a, **kw)
File "/home/hkohr/Software/numpy/numpy/f2py/tests/util.py", line 169, in build_code
module_name=module_name)
File "/home/hkohr/Software/numpy/numpy/f2py/tests/util.py", line 78, in wrapper
memo[key] = func(*a, **kw)
File "/home/hkohr/Software/numpy/numpy/f2py/tests/util.py", line 149, in build_module
__import__(module_name)
ImportError: /tmp/tmpqyEUs_/_test_ext_module_5403.so: failed to map segment from shared object
----------------------------------------------------------------------
```
+1 on exploring this change: looking at the code it certainly seems like an unnecessary restriction.
Your test errors do not seem related, do you get the same errors if testing master unchanged? Would be good to put a PR together and see what Travis thinks.
| 2015-12-30T12:09:25Z | [] | [] |
Traceback (most recent call last):
...
ValueError: Need at least two and fewer than (32) array objects.
| 10,428 |
|||
numpy/numpy | numpy__numpy-7133 | e2805398f9a63b825f4a2aab22e9f169ff65aae9 | diff --git a/numpy/lib/npyio.py b/numpy/lib/npyio.py
--- a/numpy/lib/npyio.py
+++ b/numpy/lib/npyio.py
@@ -627,7 +627,11 @@ def _savez(file, args, kwds, compress, allow_pickle=True, pickle_kwargs=None):
zipf = zipfile_factory(file, mode="w", compression=compression)
# Stage arrays in a temporary file on disk, before writing to zip.
- fd, tmpfile = tempfile.mkstemp(suffix='-numpy.npy')
+
+ # Since target file might be big enough to exceed capacity of a global
+ # temporary directory, create temp file side-by-side with the target file.
+ file_dir, file_prefix = os.path.split(file) if _is_string_like(file) else (None, 'tmp')
+ fd, tmpfile = tempfile.mkstemp(prefix=file_prefix, dir=file_dir, suffix='-numpy.npy')
os.close(fd)
try:
for key, val in namedict.items():
@@ -640,6 +644,8 @@ def _savez(file, args, kwds, compress, allow_pickle=True, pickle_kwargs=None):
fid.close()
fid = None
zipf.write(tmpfile, arcname=fname)
+ except IOError as exc:
+ raise IOError("Failed to write to %s: %s" % (tmpfile, exc))
finally:
if fid:
fid.close()
| Saving large array fails with savez_compressed() but works with save()
I got this strange problem on a CentOS box, Python 2.6.6, Numpy 1.9.1:
```
[minhle@node069 ~]$ python
Python 2.6.6 (r266:84292, Jan 22 2014, 09:42:36)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.version.version
'1.9.1'
>>> a = np.ones((775890380,))
>>> np.savez_compressed('/home/minhle/scratch/test.npz', a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/minhle/.local/lib/python2.6/site-packages/numpy/lib/npyio.py", line 560, in savez_compressed
_savez(file, args, kwds, True)
File "/home/minhle/.local/lib/python2.6/site-packages/numpy/lib/npyio.py", line 597, in _savez
format.write_array(fid, np.asanyarray(val))
File "/home/minhle/.local/lib/python2.6/site-packages/numpy/lib/format.py", line 562, in write_array
array.tofile(fp)
IOError: 775890380 requested and 233691638 written
>>> np.save('/home/minhle/scratch/test.npy', a)
>>> b = np.load('/home/minhle/scratch/test.npy')
>>> b[:10]
array([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
>>> quit()
[minhle@node069 ~]$ cat /etc/*-release
CentOS release 6.5 (Final)
Cluster Manager v5.2
slave
LSB_VERSION=base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
CentOS release 6.5 (Final)
CentOS release 6.5 (Final)
```
| Got the same error... any solution in sight?
I just got this error too but with `np.savez` as opposed to `np.savez_compressed`.
For future reference, using `np.save` instead of `np.savez[_compressed]` seems to work as @omerlevy's example code indicates.
So, is this an issue with `savez`? Anyone who knows numpy well have any ideas what's happening here?
`savez` needs to store files temporarily on disk, and perhaps you run out of space on `/tmp`.
Try setting `TMPDIR=$HOME/tmp` or so.
@pv Yeah, changing `TMPDIR` did the trick for me. It may also be worth noting that I'm on an NFS filesystem. I'm not sure what sort of effects, if any, that might have. I'm guessing that I'm into subtleties in the way that /tmp is configured.
However, I think numpy should make this issue explicit in the documentation and provide some semblance of an intelligent, approachable error message.
Agreed. I ran into this issue on HPC resources too, and I agree that it would be good if the error / warning message recommended checking to make sure `TMPDIR` was set properly with adequate space.
Ran into the same problem (kudos to ngoldbaum@IRC for pointing me here). I understand that for temp storage TMPDIR is usually the logical location. BUT when dealing with large datasets, there is a reason why we might want to savez to a different location, and /tmp would never be big enough. So why not follow any other downloader's behavior and use a targetfilename + '_tempSMTHRANDOM' suffix instead? This has a higher chance of succeeding since it would use the target location/partition, which should have an adequate amount of storage. Also, the error message should include the location of that temp file to give users sensible feedback.
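A rough sketch of what the patch above does along those lines: stage the temporary `.npy` next to the target file rather than in the global temp directory (the target path below is made up):
```python
import os
import tempfile

target = os.path.join(os.getcwd(), 'test.npz')   # hypothetical target path
file_dir, file_prefix = os.path.split(target)
fd, tmpfile = tempfile.mkstemp(prefix=file_prefix, dir=file_dir, suffix='-numpy.npy')
os.close(fd)
print(tmpfile)  # e.g. .../test.npzXXXXXXXX-numpy.npy, on the same filesystem as the target
os.remove(tmpfile)  # clean up the staging file
```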
| 2016-01-28T03:16:40Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/minhle/.local/lib/python2.6/site-packages/numpy/lib/npyio.py", line 560, in savez_compressed
_savez(file, args, kwds, True)
File "/home/minhle/.local/lib/python2.6/site-packages/numpy/lib/npyio.py", line 597, in _savez
format.write_array(fid, np.asanyarray(val))
File "/home/minhle/.local/lib/python2.6/site-packages/numpy/lib/format.py", line 562, in write_array
array.tofile(fp)
IOError: 775890380 requested and 233691638 written
| 10,431 |
|||
numpy/numpy | numpy__numpy-7152 | 9cfdb218b16cba97800fc2ad0f37e1af780ab664 | diff --git a/numpy/lib/arraypad.py b/numpy/lib/arraypad.py
--- a/numpy/lib/arraypad.py
+++ b/numpy/lib/arraypad.py
@@ -1337,7 +1337,7 @@ def pad(array, pad_width, mode, **kwargs):
'reflect_type': 'even',
}
- if isinstance(mode, str):
+ if isinstance(mode, np.compat.basestring):
# Make sure have allowed kwargs appropriate for mode
for key in kwargs:
if key not in allowedkwargs[mode]:
| from __future__ import unicode_literals breaks numpy.pad
The `pad` function accepts a `mode` parameter which can be a callable or str. [It uses the test `isinstance(mode, str)` to determine which is the case](https://github.com/numpy/numpy/blob/master/numpy/lib/arraypad.py#L1340). Using `from __future__ import unicode_literals` makes this fail, as the `isinstance` call returns `False`, eventually producing a `TypeError: 'unicode' object is not callable` when the `mode` argument is instead called as a function.
For example, the script:
```
from __future__ import unicode_literals
import numpy as np
np.pad([10], 2, mode='constant')
```
Fails using `python2.7` with the error:
```
Traceback (most recent call last):
File "test.py", line 4, in <module>
np.pad([10], 2, mode='constant')
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/lib/arraypad.py", line 1348, in pad
kwargs)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/lib/shape_base.py", line 79, in apply_along_axis
res = func1d(arr[tuple(i.tolist())],*args)
TypeError: 'unicode' object is not callable
```
---
Would using the [`six.string_types`](https://pythonhosted.org/six/#six.string_types) comparison [_a la_ this stackexchange post](http://stackoverflow.com/a/11301392/230468) be a suitable fix?
| Yeah, `np.pad` (and in general anything that takes a string `mode`-style argument) ought to accept unicode strings on py2. I don't know what the very simplest cleanest way to write this check would be, but we'd certainly accept a pull request fixing it.
(Maybe a helper function like
```
def _py2_fixup_unicode(s):
    if this_is_py2 and isinstance(s, unicode):
        try:
            return s.encode("ascii")
        except UnicodeEncodeError:
            pass
    return s
```
and then functions with `mode`-style arguments do `mode = _fixup_py2_unicode(mode)` at the top?)
It sounds like that might be the best, most-general solution. In which case, is `sys.version_info` a reliable way to check the python version?
For `numpy.pad` in particular, since the only options are string and callable, what about using
```
if callable(mode):
    # treat as function....
else:
    # treat as str
```
That could also work, though since the specific thing we're worrying about
is unicode-on-py2 I kinda prefer checking for unicode -- plus in your way,
the code below has to be prepared to handle both unicode and str, while in
mine it only has to work for str. But really there are lots of ways that
would work -- someone just needs to write the patch :-)
`np.compat.basestring` and/or `np.compat.sixu` might help? Anyway, I am not sure whether `np.pad` may be the least of our problems, do array fields and other C-side string arguments work with it? I don't remember fixing it, and I seem to remember it was a problem.
Now that we have dropped Python 3.2 and 3.3, sixu can be replaced by `u"..."`. Unicode literals broke a lot of code, which is why they weren't part of the Python2, Python3 code unification, but it may be that we can do something about that now. I don't recall the exact problems that came up.
The issue here isn't that we need Unicode literals but that np.pad should
respond gracefully if someone passes in a Unicode literal. I agree that
there are probably other places in the code that don't handle this well
either (I think that's @seberg's point?), but I guess we should wait for a
bug to be reported and then fix those too?
Yeah, as far as I understand Chuck, we might now have a better chance to fix things than before. Array fields in dtypes definitely don't work in all regards, at least. But that does not mean we should not just fix this. If we ever find something we cannot fix then so be it, but this should not be hard to fix.
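A small sketch of the check the patch above settles on: `np.compat.basestring` is `basestring` on Python 2 (so both `str` and `unicode` match) and `str` on Python 3:
```python
from __future__ import unicode_literals
import numpy as np

mode = 'constant'  # a unicode literal on Python 2 under unicode_literals
print(isinstance(mode, str))                   # False on Python 2, True on Python 3
print(isinstance(mode, np.compat.basestring))  # True on both
```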
| 2016-01-31T02:13:23Z | [] | [] |
Traceback (most recent call last):
File "test.py", line 4, in <module>
np.pad([10], 2, mode='constant')
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/lib/arraypad.py", line 1348, in pad
kwargs)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/lib/shape_base.py", line 79, in apply_along_axis
res = func1d(arr[tuple(i.tolist())],*args)
TypeError: 'unicode' object is not callable
| 10,432 |
|||
numpy/numpy | numpy__numpy-7587 | 1fc180b4c683e79649e5699303722995ca3e8ef9 | diff --git a/numpy/linalg/linalg.py b/numpy/linalg/linalg.py
--- a/numpy/linalg/linalg.py
+++ b/numpy/linalg/linalg.py
@@ -23,7 +23,7 @@
csingle, cdouble, inexact, complexfloating, newaxis, ravel, all, Inf, dot,
add, multiply, sqrt, maximum, fastCopyAndTranspose, sum, isfinite, size,
finfo, errstate, geterrobj, longdouble, rollaxis, amin, amax, product, abs,
- broadcast, atleast_2d, intp, asanyarray, isscalar
+ broadcast, atleast_2d, intp, asanyarray, isscalar, object_
)
from numpy.lib import triu, asfarray
from numpy.linalg import lapack_lite, _umath_linalg
@@ -2112,7 +2112,7 @@ def norm(x, ord=None, axis=None, keepdims=False):
"""
x = asarray(x)
- if not issubclass(x.dtype.type, inexact):
+ if not issubclass(x.dtype.type, (inexact, object_)):
x = x.astype(float)
# Immediately handle some default, simple, fast, and common cases.
| Regression in linalg.norm() using dtype=object
In NumPy 1.10.1, this works:
``` python
>>> import numpy as np
>>> np.linalg.norm(np.array([np.array([0, 1]), 0, 0], dtype=object))
array([ 0., 1.])
```
In NumPy 1.11.0, however, it raises an exception:
``` python
>>> import numpy as np
>>> np.linalg.norm(np.array([np.array([0, 1]), 0, 0], dtype=object))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/numpy/linalg/linalg.py", line 2116, in norm
x = x.astype(float)
ValueError: setting an array element with a sequence.
```
Probably this
``` python
if not issubclass(x.dtype.type, inexact):
x = x.astype(float)
```
should be changed to
``` python
if not issubclass(x.dtype.type, (inexact, object_)):
x = x.astype(float)
```
?
Does that make sense?
If yes, I can make a PR.
| 2016-04-29T08:51:38Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/numpy/linalg/linalg.py", line 2116, in norm
x = x.astype(float)
ValueError: setting an array element with a sequence.
| 10,456 |
||||
numpy/numpy | numpy__numpy-7608 | 730e2881219f4af79ab0ad5930a3bea5ab60b098 | diff --git a/numpy/linalg/linalg.py b/numpy/linalg/linalg.py
--- a/numpy/linalg/linalg.py
+++ b/numpy/linalg/linalg.py
@@ -23,7 +23,7 @@
csingle, cdouble, inexact, complexfloating, newaxis, ravel, all, Inf, dot,
add, multiply, sqrt, maximum, fastCopyAndTranspose, sum, isfinite, size,
finfo, errstate, geterrobj, longdouble, rollaxis, amin, amax, product, abs,
- broadcast, atleast_2d, intp, asanyarray, isscalar
+ broadcast, atleast_2d, intp, asanyarray, isscalar, object_
)
from numpy.lib import triu, asfarray
from numpy.linalg import lapack_lite, _umath_linalg
@@ -2112,7 +2112,7 @@ def norm(x, ord=None, axis=None, keepdims=False):
"""
x = asarray(x)
- if not issubclass(x.dtype.type, inexact):
+ if not issubclass(x.dtype.type, (inexact, object_)):
x = x.astype(float)
# Immediately handle some default, simple, fast, and common cases.
| Regression in linalg.norm() using dtype=object
In NumPy 1.10.1, this works:
``` python
>>> import numpy as np
>>> np.linalg.norm(np.array([np.array([0, 1]), 0, 0], dtype=object))
array([ 0., 1.])
```
In NumPy 1.11.0, however, it raises an exception:
``` python
>>> import numpy as np
>>> np.linalg.norm(np.array([np.array([0, 1]), 0, 0], dtype=object))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/numpy/linalg/linalg.py", line 2116, in norm
x = x.astype(float)
ValueError: setting an array element with a sequence.
```
Probably this
``` python
if not issubclass(x.dtype.type, inexact):
x = x.astype(float)
```
should be changed to
``` python
if not issubclass(x.dtype.type, (inexact, object_)):
x = x.astype(float)
```
?
Does that make sense?
If yes, I can make a PR.
| 2016-05-07T01:56:25Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/numpy/linalg/linalg.py", line 2116, in norm
x = x.astype(float)
ValueError: setting an array element with a sequence.
| 10,458 |
||||
numpy/numpy | numpy__numpy-8148 | b0a6e5583f2952a040c45d0d50f5e93372dc175b | diff --git a/numpy/compat/py3k.py b/numpy/compat/py3k.py
--- a/numpy/compat/py3k.py
+++ b/numpy/compat/py3k.py
@@ -118,7 +118,7 @@ def npy_load_module(name, fn, info=None):
mod : module
"""
- import importlib
+ import importlib.machinery
return importlib.machinery.SourceFileLoader(name, fn).load_module()
else:
def npy_load_module(name, fn, info=None):
| Configuration.add_subpackage fails on Python 3.4
Using numpy 1.11.2 and Python 3.4, trying to run a `setup.py` script which uses subpackages via numpy.distutils fails when `importlib.machinery` cannot be found in the `npy_load_module` function.
For example using the following `setup.py` and `foo/setup.py` scripts:
``` Python
# setup.py
from numpy.distutils.misc_util import Configuration
from numpy.distutils.core import setup
def configuration():
    config = Configuration(None, '', None)
    config.add_subpackage('foo')
    return config

setup(configuration=configuration)
```
``` Python
# foo/setup.py
def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration('foo', parent_package, top_path)
    return config
```
Attempting to build the package:
```
$ python setup.py build_ext -i
Traceback (most recent call last):
File "setup.py", line 10, in <module>
setup(configuration=configuration)
File "/Users/jhelmus/anaconda/envs/py34/lib/python3.4/site-packages/numpy/distutils/core.py", line 135, in setup
config = configuration()
File "setup.py", line 7, in configuration
config.add_subpackage('foo')
File "/Users/jhelmus/anaconda/envs/py34/lib/python3.4/site-packages/numpy/distutils/misc_util.py", line 1000, in add_subpackage
caller_level = 2)
File "/Users/jhelmus/anaconda/envs/py34/lib/python3.4/site-packages/numpy/distutils/misc_util.py", line 969, in get_subpackage
caller_level = caller_level + 1)
File "/Users/jhelmus/anaconda/envs/py34/lib/python3.4/site-packages/numpy/distutils/misc_util.py", line 882, in _get_configuration_from_setup_py
('.py', 'U', 1))
File "/Users/jhelmus/anaconda/envs/py34/lib/python3.4/site-packages/numpy/compat/py3k.py", line 112, in npy_load_module
return importlib.machinery.SourceFileLoader(name, fn).load_module()
AttributeError: 'module' object has no attribute 'machinery'
```
This works in Python 3.5 and 2.7 and with earlier versions of NumPy prior to the addition of `npy_load_module`.
This bug can likely be fixed by adding `import importlib` to the `npy_load_module`. I will submit a PR shortly.
A workaround is to `import importlib.machinery` somewhere in the root `setup.py` file.
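The underlying Python behaviour, for reference: importing a package does not guarantee that its submodules are bound as attributes, so `importlib.machinery` has to be imported explicitly (which is what the patch does). The loader arguments below are placeholders:
```python
import importlib             # on Python 3.4 this may leave importlib.machinery unbound
import importlib.machinery   # explicitly binds the submodule

# hypothetical arguments, mirroring what npy_load_module does
loader = importlib.machinery.SourceFileLoader('modname', '/path/to/setup.py')
print(loader)
```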
| 2016-10-12T18:13:58Z | [] | [] |
Traceback (most recent call last):
File "setup.py", line 10, in <module>
setup(configuration=configuration)
File "/Users/jhelmus/anaconda/envs/py34/lib/python3.4/site-packages/numpy/distutils/core.py", line 135, in setup
config = configuration()
File "setup.py", line 7, in configuration
config.add_subpackage('foo')
File "/Users/jhelmus/anaconda/envs/py34/lib/python3.4/site-packages/numpy/distutils/misc_util.py", line 1000, in add_subpackage
caller_level = 2)
File "/Users/jhelmus/anaconda/envs/py34/lib/python3.4/site-packages/numpy/distutils/misc_util.py", line 969, in get_subpackage
caller_level = caller_level + 1)
File "/Users/jhelmus/anaconda/envs/py34/lib/python3.4/site-packages/numpy/distutils/misc_util.py", line 882, in _get_configuration_from_setup_py
('.py', 'U', 1))
File "/Users/jhelmus/anaconda/envs/py34/lib/python3.4/site-packages/numpy/compat/py3k.py", line 112, in npy_load_module
return importlib.machinery.SourceFileLoader(name, fn).load_module()
AttributeError: 'module' object has no attribute 'machinery'
| 10,492 |
||||
numpy/numpy | numpy__numpy-8384 | 5f5ccecbfc116284ed8c8d53cd8b203ceef5f7c7 | diff --git a/numpy/core/code_generators/genapi.py b/numpy/core/code_generators/genapi.py
--- a/numpy/core/code_generators/genapi.py
+++ b/numpy/core/code_generators/genapi.py
@@ -469,7 +469,7 @@ def fullapi_hash(api_dicts):
# To parse strings like 'hex = checksum' where hex is e.g. 0x1234567F and
# checksum a 128 bits md5 checksum (hex format as well)
-VERRE = re.compile('(^0x[\da-f]{8})\s*=\s*([\da-f]{32})')
+VERRE = re.compile(r'(^0x[\da-f]{8})\s*=\s*([\da-f]{32})')
def get_versions_hash():
d = []
diff --git a/numpy/lib/function_base.py b/numpy/lib/function_base.py
--- a/numpy/lib/function_base.py
+++ b/numpy/lib/function_base.py
@@ -1644,7 +1644,7 @@ def gradient(f, *varargs, **kwargs):
+ \\left(h_{d}^{2} - h_{s}^{2}\\right)f\\left(x_{i}\\right)
- h_{d}^{2}f\\left(x_{i}-h_{s}\\right)}
{ h_{s}h_{d}\\left(h_{d} + h_{s}\\right)}
- + \mathcal{O}\\left(\\frac{h_{d}h_{s}^{2}
+ + \\mathcal{O}\\left(\\frac{h_{d}h_{s}^{2}
+ h_{s}h_{d}^{2}}{h_{d}
+ h_{s}}\\right)
@@ -1656,7 +1656,7 @@ def gradient(f, *varargs, **kwargs):
\\hat f_{i}^{(1)}=
\\frac{f\\left(x_{i+1}\\right) - f\\left(x_{i-1}\\right)}{2h}
- + \mathcal{O}\\left(h^{2}\\right)
+ + \\mathcal{O}\\left(h^{2}\\right)
With a similar procedure the forward/backward approximations used for
boundaries can be derived.
diff --git a/tools/find_deprecated_escaped_characters.py b/tools/find_deprecated_escaped_characters.py
new file mode 100644
--- /dev/null
+++ b/tools/find_deprecated_escaped_characters.py
@@ -0,0 +1,69 @@
+#! /usr/bin/env python
+"""
+Look for escape sequences deprecated in Python 3.6.
+
+Python 3.6 deprecates a number of non-escape sequences starting with `\` that
+were accepted before. For instance, '\(' was previously accepted but must now
+be written as '\\(' or r'\('.
+
+"""
+from __future__ import division, absolute_import, print_function
+
+import sys
+
+def main(root):
+ """Find deprecated escape sequences.
+
+ Checks for deprecated escape sequences in ``*.py files``. If `root` is a
+ file, that file is checked, if `root` is a directory all ``*.py`` files
+ found in a recursive descent are checked.
+
+ If a deprecated escape sequence is found, the file and line where found is
+ printed. Note that for multiline strings the line where the string ends is
+ printed and the error(s) are somewhere in the body of the string.
+
+ Parameters
+ ----------
+ root : str
+ File or directory to check.
+ Returns
+ -------
+ None
+
+ """
+ count = 0
+
+ if sys.version_info[:2] >= (3, 6):
+ import ast
+ import tokenize
+ import warnings
+ from pathlib import Path
+
+ base = Path(root)
+ paths = base.rglob("*.py") if base.is_dir() else [base]
+ for path in paths:
+ # use tokenize to auto-detect encoding on systems where no
+ # default encoding is defined (e.g. LANG='C')
+ with tokenize.open(str(path)) as f:
+ with warnings.catch_warnings(record=True) as w:
+ warnings.simplefilter('always')
+ tree = ast.parse(f.read())
+ if w:
+ print("file: ", str(path))
+ for e in w:
+ print('line: ', e.lineno, ': ', e.message)
+ print()
+ count += len(w)
+ else:
+ raise RuntimeError("Python version must be >= 3.6")
+
+ print("Errors Found", count)
+
+
+if __name__ == "__main__":
+ from argparse import ArgumentParser
+
+ parser = ArgumentParser(description="Find deprecated escaped characters")
+ parser.add_argument('root', help='directory or file to be checked')
+ args = parser.parse_args()
+ main(args.root)
| `test_warning_calls` error on Python 3.6
As of today, the daily numpy wheel builds show the following error on master for all Python 3.6 builds:
```
======================================================================
ERROR: test_warnings.test_warning_calls
----------------------------------------------------------------------
Traceback (most recent call last):
File "/venv/lib/python3.6/site-packages/nose/case.py", line 198, in runTest
self.test(*self.arg)
File "/venv/lib/python3.6/site-packages/numpy/tests/test_warnings.py", line 81, in test_warning_calls
tree = ast.parse(file.read())
File "/opt/cp36m/lib/python3.6/ast.py", line 35, in parse
return compile(source, filename, mode, PyCF_ONLY_AST)
File "<unknown>", line 1675
SyntaxError: invalid escape sequence \m
```
https://travis-ci.org/MacPython/numpy-wheels/builds/204888912
This is testing commit 5f5ccecbf . I see that the usual travis-ci builds tested the same commit without error : https://travis-ci.org/numpy/numpy/builds/204805650 . Any thoughts?
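For context, the failure comes from Python 3.6 deprecating unrecognised escape sequences in ordinary string literals, which the wheel-build test turns into an error; the fix is to use raw strings, as the patch does for the regex in `genapi.py`. A quick check of the raw-string form (the match string below is made up):
```python
import re

# the regex from genapi.py, now a raw string so '\d' and '\s' reach the regex engine
VERRE = re.compile(r'(^0x[\da-f]{8})\s*=\s*([\da-f]{32})')
print(bool(VERRE.match('0x0123abcd = ' + 'f' * 32)))  # True
```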
| 2016-12-14T20:14:05Z | [] | [] |
Traceback (most recent call last):
File "/venv/lib/python3.6/site-packages/nose/case.py", line 198, in runTest
self.test(*self.arg)
File "/venv/lib/python3.6/site-packages/numpy/tests/test_warnings.py", line 81, in test_warning_calls
tree = ast.parse(file.read())
File "/opt/cp36m/lib/python3.6/ast.py", line 35, in parse
return compile(source, filename, mode, PyCF_ONLY_AST)
File "<unknown>", line 1675
SyntaxError: invalid escape sequence \m
| 10,504 |
||||
numpy/numpy | numpy__numpy-8497 | ee3ab365cb55cce6d0b9b6ed5cfbd8e3ede8cc66 | diff --git a/numpy/matrixlib/defmatrix.py b/numpy/matrixlib/defmatrix.py
--- a/numpy/matrixlib/defmatrix.py
+++ b/numpy/matrixlib/defmatrix.py
@@ -3,49 +3,15 @@
__all__ = ['matrix', 'bmat', 'mat', 'asmatrix']
import sys
+import ast
import numpy.core.numeric as N
from numpy.core.numeric import concatenate, isscalar, binary_repr, identity, asanyarray
from numpy.core.numerictypes import issubdtype
-# make translation table
-_numchars = '0123456789.-+jeEL'
-
-if sys.version_info[0] >= 3:
- class _NumCharTable:
- def __getitem__(self, i):
- if chr(i) in _numchars:
- return chr(i)
- else:
- return None
- _table = _NumCharTable()
- def _eval(astr):
- str_ = astr.translate(_table)
- if not str_:
- raise TypeError("Invalid data string supplied: " + astr)
- else:
- return eval(str_)
-
-else:
- _table = [None]*256
- for k in range(256):
- _table[k] = chr(k)
- _table = ''.join(_table)
-
- _todelete = []
- for k in _table:
- if k not in _numchars:
- _todelete.append(k)
- _todelete = ''.join(_todelete)
- del k
-
- def _eval(astr):
- str_ = astr.translate(_table, _todelete)
- if not str_:
- raise TypeError("Invalid data string supplied: " + astr)
- else:
- return eval(str_)
-
def _convert_from_string(data):
+ for char in '[]':
+ data = data.replace(char, '')
+
rows = data.split(';')
newdata = []
count = 0
@@ -54,7 +20,7 @@ def _convert_from_string(data):
newrow = []
for col in trow:
temp = col.split()
- newrow.extend(map(_eval, temp))
+ newrow.extend(map(ast.literal_eval, temp))
if count == 0:
Ncols = len(newrow)
elif len(newrow) != Ncols:
| np.matrix('True False True') throws error
When trying to create a numpy matrix using the string syntax
`a = np.matrix('1 2; 3 4')`
but with booleans instead of integers
`np.matrix('True True False')`
I see this error
```
>>> np.matrix('True True False')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/numpy/matrixlib/defmatrix.py", line 267, in __new__
data = _convert_from_string(data)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/numpy/matrixlib/defmatrix.py", line 57, in _convert_from_string
newrow.extend(map(_eval, temp))
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/numpy/matrixlib/defmatrix.py", line 26, in _eval
return eval(str_)
File "<string>", line 1, in <module>
NameError: name 'e' is not defined
```
I'm not sure but is this desired behavior?
Thanks!
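For reference, the patch replaces the character-table `eval` with `ast.literal_eval`, which parses Python literals — including booleans — safely:
```python
import ast

print(ast.literal_eval('True'))   # True
print(ast.literal_eval('1.5e3'))  # 1500.0
print(ast.literal_eval('2j'))     # 2j
```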
| 2017-01-18T21:30:07Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/numpy/matrixlib/defmatrix.py", line 267, in __new__
data = _convert_from_string(data)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/numpy/matrixlib/defmatrix.py", line 57, in _convert_from_string
newrow.extend(map(_eval, temp))
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/numpy/matrixlib/defmatrix.py", line 26, in _eval
return eval(str_)
File "<string>", line 1, in <module>
NameError: name 'e' is not defined
| 10,510 |
||||
numpy/numpy | numpy__numpy-8508 | a621a2b700415a5c155546f9cb1f064c6099579e | diff --git a/numpy/ma/core.py b/numpy/ma/core.py
--- a/numpy/ma/core.py
+++ b/numpy/ma/core.py
@@ -6130,8 +6130,11 @@ def __new__(self):
def __array_finalize__(self, obj):
return
- def __array_wrap__(self, obj):
- return self
+ def __array_prepare__(self, obj, context=None):
+ return self.view(MaskedArray).__array_prepare__(obj, context)
+
+ def __array_wrap__(self, obj, context=None):
+ return self.view(MaskedArray).__array_wrap__(obj, context)
def __str__(self):
return str(masked_print_option._display)
| __array_prepare__ produces bad shape for np.ma.masked
A somewhat contrived example
```python
>>> source = np.ma.masked # comes from user function, could be anything
>>> source = np.asanyarray(source) # force to array, so we can try to copy the type
>>> outarr = np.zeros((2, 2)) # allocate a raw ndarray for the result
>>> bad = source.__array_prepare__(outarr) # prepare that raw array for operations
>>> bad.shape
(2, 2)
>>> bad.data.shape
() # uh oh
>>> bad.mask.shape
() # spaghettios
```
Which leads to failures like
```python
>>> res.transpose((1, 0))
Traceback (most recent call last):
File "<pyshell#78>", line 1, in <module>
np.asanyarray(np.ma.masked).__array_prepare__(np.zeros((2, 2))).transpose((1, 0))
File "C:\Program Files\Python 3.5\lib\site-packages\numpy\ma\core.py", line 2509, in wrapped_method
result = getattr(self._data, funcname)(*args, **params)
ValueError: axes don't match array
```
This came up when trying to invoke #8441 on masked arrays.
Is this a bug, or an invalid use of `__array_prepare__`?
| Related:
```python
import numpy as np

def assert_broadcasts(ufunc, a, b):
    expected = np.broadcast(a, b).shape
    actual = ufunc(a, b).shape
    assert expected == actual
assert_broadcasts(np.add, [1, 2, 3], 1)
assert_broadcasts(np.add, [1, 2, 3], np.ma.masked) # AssertionError
``` | 2017-01-20T18:16:20Z | [] | [] |
Traceback (most recent call last):
File "<pyshell#78>", line 1, in <module>
np.asanyarray(np.ma.masked).__array_prepare__(np.zeros((2, 2))).transpose((1, 0))
File "C:\Program Files\Python 3.5\lib\site-packages\numpy\ma\core.py", line 2509, in wrapped_method
result = getattr(self._data, funcname)(*args, **params)
ValueError: axes don't match array
| 10,511 |
|||
numpy/numpy | numpy__numpy-8647 | b097bd7ed4fa8e574414c2a0df8e50ac27bffa6d | diff --git a/numpy/ma/core.py b/numpy/ma/core.py
--- a/numpy/ma/core.py
+++ b/numpy/ma/core.py
@@ -6991,44 +6991,42 @@ def where(condition, x=_NoValue, y=_NoValue):
[6.0 -- 8.0]]
"""
- missing = (x is _NoValue, y is _NoValue).count(True)
+ # handle the single-argument case
+ missing = (x is _NoValue, y is _NoValue).count(True)
if missing == 1:
raise ValueError("Must provide both 'x' and 'y' or neither.")
if missing == 2:
- return filled(condition, 0).nonzero()
-
- # Both x and y are provided
-
- # Get the condition
- fc = filled(condition, 0).astype(MaskType)
- notfc = np.logical_not(fc)
-
- # Get the data
- xv = getdata(x)
- yv = getdata(y)
- if x is masked:
- ndtype = yv.dtype
- elif y is masked:
- ndtype = xv.dtype
- else:
- ndtype = np.find_common_type([xv.dtype, yv.dtype], [])
-
- # Construct an empty array and fill it
- d = np.empty(fc.shape, dtype=ndtype).view(MaskedArray)
- np.copyto(d._data, xv.astype(ndtype), where=fc)
- np.copyto(d._data, yv.astype(ndtype), where=notfc)
-
- # Create an empty mask and fill it
- mask = np.zeros(fc.shape, dtype=MaskType)
- np.copyto(mask, getmask(x), where=fc)
- np.copyto(mask, getmask(y), where=notfc)
- mask |= getmaskarray(condition)
-
- # Use d._mask instead of d.mask to avoid copies
- d._mask = mask if mask.any() else nomask
+ return nonzero(condition)
+
+ # we only care if the condition is true - false or masked pick y
+ cf = filled(condition, False)
+ xd = getdata(x)
+ yd = getdata(y)
+
+ # we need the full arrays here for correct final dimensions
+ cm = getmaskarray(condition)
+ xm = getmaskarray(x)
+ ym = getmaskarray(y)
+
+ # deal with the fact that masked.dtype == float64, but we don't actually
+ # want to treat it as that.
+ if x is masked and y is not masked:
+ xd = np.zeros((), dtype=yd.dtype)
+ xm = np.ones((), dtype=ym.dtype)
+ elif y is masked and x is not masked:
+ yd = np.zeros((), dtype=xd.dtype)
+ ym = np.ones((), dtype=xm.dtype)
+
+ data = np.where(cf, xd, yd)
+ mask = np.where(cf, xm, ym)
+ mask = np.where(cm, np.ones((), dtype=mask.dtype), mask)
+
+ # collapse the mask, for backwards compatibility
+ if mask.dtype == np.bool_ and not mask.any():
+ mask = nomask
- return d
+ return masked_array(data, mask=mask)
def choose(indices, choices, out=None, mode='raise'):
| BUG: np.ma.where does not broadcast correctly
```python
>>> x = np.eye(3)
>>> y = np.eye(3)
>>> np.where([0, 1, 0], x, y)
array([[ 1., 0., 0.],
[ 0., 1., 0.],
[ 0., 0., 1.]])
>>> np.ma.where([0, 1, 0], x, y)
Traceback (most recent call last):
File "<pyshell#24>", line 1, in <module>
np.ma.where([0, 1, 0], x, y)
File "C:\Program Files\Python 3.5\lib\site-packages\numpy\ma\core.py", line 6964, in where
np.copyto(d._data, xv.astype(ndtype), where=fc)
ValueError: could not broadcast input array from shape (3,3) into shape (3)
```
BUG: np.ma.where does not handle structured dtypes correctly
```python
>>> dt = np.dtype([('a', int), ('b', int)])
>>> x = np.array([(1, 2), (3, 4)], dtype=dt)
>>> np.where([0, 1], x, np.array((2, 3), dtype=dt)).dtype
dtype([('a', '<i4'), ('b', '<i4')])
>>> np.ma.where([0, 1], x, np.array((2, 3), dtype=dt)).dtype
dtype('O')
```
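A rough sketch (not the patch itself) of the strategy the fix adopts: let `np.where` do the broadcasting on the data and on the full mask arrays separately, then reassemble a masked array:
```python
import numpy as np
import numpy.ma as ma

c = np.array([False, True, False])
x = ma.array(np.eye(3))
y = ma.array(np.zeros((3, 3)))

data = np.where(c, x.data, y.data)
mask = np.where(c, ma.getmaskarray(x), ma.getmaskarray(y))
print(ma.masked_array(data, mask=mask).shape)  # (3, 3), same broadcasting as np.where
```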
| 2017-02-20T15:20:04Z | [] | [] |
Traceback (most recent call last):
File "<pyshell#24>", line 1, in <module>
np.ma.where([0, 1, 0], x, y)
File "C:\Program Files\Python 3.5\lib\site-packages\numpy\ma\core.py", line 6964, in where
np.copyto(d._data, xv.astype(ndtype), where=fc)
ValueError: could not broadcast input array from shape (3,3) into shape (3)
| 10,522 |
||||
numpy/numpy | numpy__numpy-8665 | 85cd7b7e1ef04a498dbb84fc7d7fb35881a73183 | diff --git a/numpy/ma/core.py b/numpy/ma/core.py
--- a/numpy/ma/core.py
+++ b/numpy/ma/core.py
@@ -6372,21 +6372,16 @@ def getdoc(self):
def __call__(self, a, *args, **params):
if self.reversed:
args = list(args)
- arr = args[0]
- args[0] = a
- a = arr
- # Get the method from the array (if possible)
+ a, args[0] = args[0], a
+
+ marr = asanyarray(a)
method_name = self.__name__
- method = getattr(a, method_name, None)
- if method is not None:
- return method(*args, **params)
- # Still here ? Then a is not a MaskedArray
- method = getattr(MaskedArray, method_name, None)
- if method is not None:
- return method(MaskedArray(a), *args, **params)
- # Still here ? OK, let's call the corresponding np function
- method = getattr(np, method_name)
- return method(a, *args, **params)
+ method = getattr(type(marr), method_name, None)
+ if method is None:
+ # use the corresponding np function
+ method = getattr(np, method_name)
+
+ return method(marr, *args, **params)
all = _frommethod('all')
@@ -6535,9 +6530,7 @@ def compressed(x):
Equivalent method.
"""
- if not isinstance(x, MaskedArray):
- x = asanyarray(x)
- return x.compressed()
+ return asanyarray(x).compressed()
def concatenate(arrays, axis=0):
@@ -7683,6 +7676,10 @@ def asanyarray(a, dtype=None):
<class 'numpy.ma.core.MaskedArray'>
"""
+ # workaround for #8666, to preserve identity. Ideally the bottom line
+ # would handle this for us.
+ if isinstance(a, MaskedArray) and (dtype is None or dtype == a.dtype):
+ return a
return masked_array(a, dtype=dtype, copy=False, keep_mask=True, subok=True)
| np.ma.count and np.ma.copy on list input
`np.ma.count` and `np.ma.copy` behave strangely if the input is a `list` or `tuple` (and probably other Python builtins).
```
>>> np.ma.copy([1,2,3]) # unexpected behaviour
[1, 2, 3]
>>> np.copy([1,2,3]) # expected behaviour
array([1, 2, 3])
>>> np.ma.count([1,2,3])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Programming\Anaconda\envs\test\lib\site-packages\numpy\ma\core.py", line 6389, in __call__
return method(*args, **params)
TypeError: count() takes exactly one argument (0 given)
```
The reason for this behaviour is that `list` **has** `count` and `copy` methods, so `np.ma._frommethod` tries to call those instead of `np.ma.MaskedArray.count` (and `copy`).
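A minimal sketch of what the fix does instead: convert the argument with `np.ma.asanyarray` first and look the method up on the resulting array's type, so a plain `list` can no longer shadow `MaskedArray.count`:
```python
import numpy as np

marr = np.ma.asanyarray([1, 2, 3])
print(type(marr).count(marr))  # 3 -- MaskedArray.count, not list.count
```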
| 2017-02-22T10:22:33Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Programming\Anaconda\envs\test\lib\site-packages\numpy\ma\core.py", line 6389, in __call__
return method(*args, **params)
TypeError: count() takes exactly one argument (0 given)
| 10,524 |
||||
numpy/numpy | numpy__numpy-8750 | 6a3edf3210b439a4d1a51acb4e01bac017697ee6 | diff --git a/numpy/lib/function_base.py b/numpy/lib/function_base.py
--- a/numpy/lib/function_base.py
+++ b/numpy/lib/function_base.py
@@ -1135,7 +1135,7 @@ def average(a, axis=None, weights=None, returned=False):
wgt = wgt.swapaxes(-1, axis)
scl = wgt.sum(axis=axis, dtype=result_dtype)
- if (scl == 0.0).any():
+ if np.any(scl == 0.0):
raise ZeroDivisionError(
"Weights sum to zero, can't be normalized")
| np.average crashes for 1D decimal object array (now only without weights)
I'm running macOS 10.12.3, Python 3.6.0, and NumPy 1.12.0.
When using the decimal package with NumPy, the arrays are of type 'object'. While most operations are performed flawlessly, calling np.average() with one of these arrays throws an error if weights are not provided (`AttributeError: 'decimal.Decimal' object has no attribute 'dtype'`), and a different error if weights are provided (`AttributeError: 'bool' object has no attribute 'any'`). Contrarily, calling np.mean() executes without error.
The source of the problem in both cases appears to be the assumption that the result of folding the input array results in a standard supported dtype, rather than being based on the operations available to the object type.
Therefore, I suspect that this issue will extend to any types which support the requisite numeric operations but are not natively supported by NumPy.
I resolved the weighted problem for my use case by changing the line `if (scl == 0.0).any()` (line 1138 in `lib/function_base.py`) to check if scl were an array first, and if not then removing the `any()` call, though I don't know if that solution is desirable or acceptable for the purposes of NumPy.
Sample input:
```
import numpy as np
import decimal as dc
values = np.array([dc.Decimal(x) for x in range(10)])
weights = np.array([dc.Decimal(x) for x in range(10)])
weights /= weights.sum()
print(np.mean(values))
print(np.average(values, weights=weights))
```
Corresponding output:
```
4.5
Traceback (most recent call last):
File "bugreport.py", line 9, in <module>
print(np.average(values, weights=weights))
File "/usr/local/lib/python3.6/site-packages/numpy/lib/function_base.py", line 1138, in average
if (scl == 0.0).any():
AttributeError: 'bool' object has no attribute 'any'
```
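The essence of the one-line change in the patch above: `np.any` copes with both arrays and plain Python bools, whereas `.any()` only exists on the former. For example:
```python
import numpy as np
from decimal import Decimal

scl = np.array([Decimal(1), Decimal(2)], dtype=object).sum()  # a Decimal scalar
print(scl == 0.0)          # plain bool, so (scl == 0.0).any() raises AttributeError
print(np.any(scl == 0.0))  # works for scalars and arrays alike
```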
| Yes, this seems like a bug and certainly doesn't reflect the intended behavior as described in the docs. Decimal arrays should be acceptable as arguments for this function. | 2017-03-06T20:07:37Z | [] | [] |
Traceback (most recent call last):
File "bugreport.py", line 9, in <module>
print(np.average(values, weights=weights))
File "/usr/local/lib/python3.6/site-packages/numpy/lib/function_base.py", line 1138, in average
if (scl == 0.0).any():
AttributeError: 'bool' object has no attribute 'any'
| 10,531 |
|||
numpy/numpy | numpy__numpy-8762 | 485b099cd4b82d65dc38cb2b28c7119f003c76c4 | diff --git a/numpy/lib/polynomial.py b/numpy/lib/polynomial.py
--- a/numpy/lib/polynomial.py
+++ b/numpy/lib/polynomial.py
@@ -1036,17 +1036,47 @@ class poly1d(object):
poly1d([ 1, -3, 2])
"""
- coeffs = None
- order = None
- variable = None
__hash__ = None
- def __init__(self, c_or_r, r=0, variable=None):
+ @property
+ def coeffs(self):
+ """ The polynomial coefficients """
+ return self._coeffs
+
+ @property
+ def variable(self):
+ """ The name of the polynomial variable """
+ return self._variable
+
+ # calculated attributes
+ @property
+ def order(self):
+ """ The order or degree of the polynomial """
+ return len(self._coeffs) - 1
+
+ @property
+ def roots(self):
+ """ The roots of the polynomial, where self(x) == 0 """
+ return roots(self._coeffs)
+
+ # alias attributes
+ r = roots
+ c = coef = coefficients = coeffs
+ o = order
+
+ def __init__(self, c_or_r, r=False, variable=None):
if isinstance(c_or_r, poly1d):
- for key in c_or_r.__dict__.keys():
- self.__dict__[key] = c_or_r.__dict__[key]
+ self._variable = c_or_r._variable
+ self._coeffs = c_or_r._coeffs
+
+ if set(c_or_r.__dict__) - set(self.__dict__):
+ msg = ("In the future extra properties will not be copied "
+ "across when constructing one poly1d from another")
+ warnings.warn(msg, FutureWarning, stacklevel=2)
+ self.__dict__.update(c_or_r.__dict__)
+
if variable is not None:
- self.__dict__['variable'] = variable
+ self._variable = variable
return
if r:
c_or_r = poly(c_or_r)
@@ -1056,11 +1086,10 @@ def __init__(self, c_or_r, r=0, variable=None):
c_or_r = trim_zeros(c_or_r, trim='f')
if len(c_or_r) == 0:
c_or_r = NX.array([0.])
- self.__dict__['coeffs'] = c_or_r
- self.__dict__['order'] = len(c_or_r) - 1
+ self._coeffs = c_or_r
if variable is None:
variable = 'x'
- self.__dict__['variable'] = variable
+ self._variable = variable
def __array__(self, t=None):
if t:
@@ -1199,29 +1228,17 @@ def __rdiv__(self, other):
__rtruediv__ = __rdiv__
def __eq__(self, other):
+ if not isinstance(other, poly1d):
+ return NotImplemented
if self.coeffs.shape != other.coeffs.shape:
return False
return (self.coeffs == other.coeffs).all()
def __ne__(self, other):
+ if not isinstance(other, poly1d):
+ return NotImplemented
return not self.__eq__(other)
- def __setattr__(self, key, val):
- raise ValueError("Attributes cannot be changed this way.")
-
- def __getattr__(self, key):
- if key in ['r', 'roots']:
- return roots(self.coeffs)
- elif key in ['c', 'coef', 'coefficients']:
- return self.coeffs
- elif key in ['o']:
- return self.order
- else:
- try:
- return self.__dict__[key]
- except KeyError:
- raise AttributeError(
- "'%s' has no attribute '%s'" % (self.__class__, key))
def __getitem__(self, val):
ind = self.order - val
@@ -1237,10 +1254,9 @@ def __setitem__(self, key, val):
raise ValueError("Does not support negative powers.")
if key > self.order:
zr = NX.zeros(key-self.order, self.coeffs.dtype)
- self.__dict__['coeffs'] = NX.concatenate((zr, self.coeffs))
- self.__dict__['order'] = key
+ self._coeffs = NX.concatenate((zr, self.coeffs))
ind = 0
- self.__dict__['coeffs'][ind] = val
+ self._coeffs[ind] = val
return
def __iter__(self):
| numpy.poly1d.__eq__ method fails with AttributeError
I called the `inspect.signature` function on a `numpy.poly1d` object:
```python
import numpy as np
import inspect
poly = np.poly1d([1, 2, 3])
print(inspect.signature(poly))
```
and got an unexpected exception:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/python3.5/inspect.py", line 2987, in signature
return Signature.from_callable(obj, follow_wrapped=follow_wrapped)
File "/python3.5/inspect.py", line 2737, in from_callable
follow_wrapper_chains=follow_wrapped)
File "/python3.5/inspect.py", line 2228, in _signature_from_callable
if _signature_is_builtin(obj):
File "/python3.5/inspect.py", line 1785, in _signature_is_builtin
obj in (type, object))
File "/python3.5/site-packages/numpy/lib/polynomial.py", line 1203, in __eq__
if self.coeffs.shape != other.coeffs.shape:
AttributeError: type object 'type' has no attribute 'coeffs'
```
So `numpy.poly1d` objects can only be compared to objects that expose the same attribute structure.
The current `__eq__` implementation from [v.1.12.x](https://github.com/numpy/numpy/blob/maintenance/1.12.x/numpy/lib/polynomial.py#L1201-L1204):
```python
def __eq__(self, other):
    if self.coeffs.shape != other.coeffs.shape:
        return False
    return (self.coeffs == other.coeffs).all()
```
numpy version 1.12.0
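For illustration, a tiny standalone sketch (hypothetical `Poly` class, not the numpy code) of the guard the fix adds: return `NotImplemented` for non-`poly1d` operands, so comparison against arbitrary objects falls back to Python's default instead of raising:
```python
class Poly:
    def __init__(self, coeffs):
        self.coeffs = coeffs

    def __eq__(self, other):
        if not isinstance(other, Poly):
            return NotImplemented
        return self.coeffs == other.coeffs


print(Poly([1, 2]) == object())      # False, via the NotImplemented fallback
print(Poly([1, 2]) == Poly([1, 2]))  # True
```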
| That should probably have a `if not isinstance(other, poly1d): return NotImplemented`. In fact, a whole bunch of the `__(.*)__` methods should have those lines
I think `if not isinstance(other, poly1d): return False` is better.
With `raise NotImplemented` we still will have the same problem.
Is there any reason to not return False in such cases?
> With raise NotImplemented we still will have the same problem.
@bondarevts: Indeed, that is why I suggested `return NotImplemented` and not `raise NotImplementedError`. `NotImplemented` here just means "I have no idea, ask the other guy", and does not result in an error
> Indeed, that is why I suggested return NotImplemented
@eric-wieser You're right. I read it wrong. Thank you for the clarification. | 2017-03-09T02:57:50Z | [] | [] |
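To make the suggested fix concrete, here is a minimal sketch (not part of the original thread) of how comparisons behave once `__eq__`/`__ne__` return `NotImplemented` for non-`poly1d` operands, assuming a NumPy build that includes the patch above:
```python
import numpy as np

p = np.poly1d([1, 2, 3])

# With __eq__/__ne__ returning NotImplemented for foreign types, Python falls
# back to the default identity comparison instead of raising AttributeError:
print(p == "not a polynomial")    # False
print(p != object())              # True

# Comparisons between poly1d instances still compare coefficients:
print(p == np.poly1d([1, 2, 3]))  # True
```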
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/python3.5/inspect.py", line 2987, in signature
return Signature.from_callable(obj, follow_wrapped=follow_wrapped)
File "/python3.5/inspect.py", line 2737, in from_callable
follow_wrapper_chains=follow_wrapped)
File "/python3.5/inspect.py", line 2228, in _signature_from_callable
if _signature_is_builtin(obj):
File "/python3.5/inspect.py", line 1785, in _signature_is_builtin
obj in (type, object))
File "/python3.5/site-packages/numpy/lib/polynomial.py", line 1203, in __eq__
if self.coeffs.shape != other.coeffs.shape:
AttributeError: type object 'type' has no attribute 'coeffs'
| 10,532 |
numpy/numpy | numpy__numpy-8827 | 03f3789efe4da2c56d2841ed027ef6735ca2f11b | diff --git a/numpy/linalg/linalg.py b/numpy/linalg/linalg.py
--- a/numpy/linalg/linalg.py
+++ b/numpy/linalg/linalg.py
@@ -19,12 +19,13 @@
import warnings
from numpy.core import (
- array, asarray, zeros, empty, empty_like, transpose, intc, single, double,
+ array, asarray, zeros, empty, empty_like, intc, single, double,
csingle, cdouble, inexact, complexfloating, newaxis, ravel, all, Inf, dot,
add, multiply, sqrt, maximum, fastCopyAndTranspose, sum, isfinite, size,
finfo, errstate, geterrobj, longdouble, moveaxis, amin, amax, product, abs,
- broadcast, atleast_2d, intp, asanyarray, isscalar, object_, ones
- )
+ broadcast, atleast_2d, intp, asanyarray, isscalar, object_, ones, matmul,
+ swapaxes, divide)
+
from numpy.core.multiarray import normalize_axis_index
from numpy.lib import triu, asfarray
from numpy.linalg import lapack_lite, _umath_linalg
@@ -223,6 +224,22 @@ def _assertNoEmpty2d(*arrays):
if _isEmpty2d(a):
raise LinAlgError("Arrays cannot be empty")
+def transpose(a):
+ """
+ Transpose each matrix in a stack of matrices.
+
+ Unlike np.transpose, this only swaps the last two axes, rather than all of
+ them
+
+ Parameters
+ ----------
+ a : (...,M,N) array_like
+
+ Returns
+ -------
+ aT : (...,N,M) ndarray
+ """
+ return swapaxes(a, -1, -2)
# Linear equations
@@ -1279,7 +1296,7 @@ def eigh(a, UPLO='L'):
# Singular value decomposition
-def svd(a, full_matrices=1, compute_uv=1):
+def svd(a, full_matrices=True, compute_uv=True):
"""
Singular Value Decomposition.
@@ -1494,15 +1511,21 @@ def matrix_rank(M, tol=None):
Rank of the array is the number of SVD singular values of the array that are
greater than `tol`.
+ .. versionchanged:: 1.14
+ Can now operate on stacks of matrices
+
Parameters
----------
M : {(M,), (..., M, N)} array_like
input vector or stack of matrices
- tol : {None, float}, optional
- threshold below which SVD values are considered zero. If `tol` is
- None, and ``S`` is an array with singular values for `M`, and
- ``eps`` is the epsilon value for datatype of ``S``, then `tol` is
- set to ``S.max() * max(M.shape) * eps``.
+ tol : (...) array_like, float, optional
+ threshold below which SVD values are considered zero. If `tol` is
+ None, and ``S`` is an array with singular values for `M`, and
+ ``eps`` is the epsilon value for datatype of ``S``, then `tol` is
+ set to ``S.max() * max(M.shape) * eps``.
+
+ .. versionchanged:: 1.14
+ Broadcasted against the stack of matrices
Notes
-----
@@ -1569,6 +1592,8 @@ def matrix_rank(M, tol=None):
S = svd(M, compute_uv=False)
if tol is None:
tol = S.max(axis=-1, keepdims=True) * max(M.shape[-2:]) * finfo(S.dtype).eps
+ else:
+ tol = asarray(tol)[...,newaxis]
return (S > tol).sum(axis=-1)
@@ -1582,26 +1607,29 @@ def pinv(a, rcond=1e-15 ):
singular-value decomposition (SVD) and including all
*large* singular values.
+ .. versionchanged:: 1.14
+ Can now operate on stacks of matrices
+
Parameters
----------
- a : (M, N) array_like
- Matrix to be pseudo-inverted.
- rcond : float
- Cutoff for small singular values.
- Singular values smaller (in modulus) than
- `rcond` * largest_singular_value (again, in modulus)
- are set to zero.
+ a : (..., M, N) array_like
+ Matrix or stack of matrices to be pseudo-inverted.
+ rcond : (...) array_like of float
+ Cutoff for small singular values.
+ Singular values smaller (in modulus) than
+ `rcond` * largest_singular_value (again, in modulus)
+ are set to zero. Broadcasts against the stack of matrices
Returns
-------
- B : (N, M) ndarray
- The pseudo-inverse of `a`. If `a` is a `matrix` instance, then so
- is `B`.
+ B : (..., N, M) ndarray
+ The pseudo-inverse of `a`. If `a` is a `matrix` instance, then so
+ is `B`.
Raises
------
LinAlgError
- If the SVD computation does not converge.
+ If the SVD computation does not converge.
Notes
-----
@@ -1638,20 +1666,20 @@ def pinv(a, rcond=1e-15 ):
"""
a, wrap = _makearray(a)
+ rcond = asarray(rcond)
if _isEmpty2d(a):
res = empty(a.shape[:-2] + (a.shape[-1], a.shape[-2]), dtype=a.dtype)
return wrap(res)
a = a.conjugate()
- u, s, vt = svd(a, 0)
- m = u.shape[0]
- n = vt.shape[1]
- cutoff = rcond*maximum.reduce(s)
- for i in range(min(n, m)):
- if s[i] > cutoff:
- s[i] = 1./s[i]
- else:
- s[i] = 0.
- res = dot(transpose(vt), multiply(s[:, newaxis], transpose(u)))
+ u, s, vt = svd(a, full_matrices=False)
+
+ # discard small singular values
+ cutoff = rcond[..., newaxis] * amax(s, axis=-1, keepdims=True)
+ large = s > cutoff
+ s = divide(1, s, where=large, out=s)
+ s[~large] = 0
+
+ res = matmul(transpose(vt), multiply(s[..., newaxis], transpose(u)))
return wrap(res)
# Determinant
@@ -1987,13 +2015,13 @@ def lstsq(a, b, rcond="warn"):
resids = array([sum((ravel(bstar)[n:])**2)],
dtype=result_real_t)
else:
- x = array(transpose(bstar)[:n,:], dtype=result_t, copy=True)
+ x = array(bstar.T[:n,:], dtype=result_t, copy=True)
if results['rank'] == n and m > n:
if isComplexType(t):
- resids = sum(abs(transpose(bstar)[n:,:])**2, axis=0).astype(
+ resids = sum(abs(bstar.T[n:,:])**2, axis=0).astype(
result_real_t, copy=False)
else:
- resids = sum((transpose(bstar)[n:,:])**2, axis=0).astype(
+ resids = sum((bstar.T[n:,:])**2, axis=0).astype(
result_real_t, copy=False)
st = s[:min(n, m)].astype(result_real_t, copy=True)
| BUG: Linalg.pinv fails on stacks of matrices
```python
>>> a = np.stack((np.eye(3),)*4, axis=0)
>>> ai = np.linalg.inv(a)
>>> assert (a == ai).all()
>>> api = np.linalg.pinv(a)
Traceback (most recent call last):
File "<pyshell#9>", line 1, in <module>
np.linalg.pinv(a)
File "C:\Program Files\Python 3.5\lib\site-packages\numpy\linalg\linalg.py", line 1668, in pinv
if s[i] > cutoff:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
| 2017-03-25T00:01:50Z | [] | [] |
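As a minimal sketch of the intended behaviour (mine, not from the report): once `pinv` broadcasts over leading axes the stacked call works directly, and on older NumPy versions a Python loop over the stack is a workaround.
```python
import numpy as np

a = np.stack((np.eye(3),) * 4, axis=0)                # shape (4, 3, 3)

# Workaround on NumPy versions where pinv is not vectorized:
pinv_loop = np.stack([np.linalg.pinv(m) for m in a])

# With the fix above, pinv accepts the stack directly, mirroring np.linalg.inv:
pinv_batched = np.linalg.pinv(a)                      # shape (4, 3, 3)
assert np.allclose(pinv_batched, pinv_loop)
```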
Traceback (most recent call last):
File "<pyshell#9>", line 1, in <module>
np.linalg.pinv(a)
File "C:\Program Files\Python 3.5\lib\site-packages\numpy\linalg\linalg.py", line 1668, in pinv
if s[i] > cutoff:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
| 10,536 |
numpy/numpy | numpy__numpy-9285 | c6533b6c386dc0f4009e5f3c5c545dde4d1b48a4 | diff --git a/numpy/distutils/ccompiler.py b/numpy/distutils/ccompiler.py
--- a/numpy/distutils/ccompiler.py
+++ b/numpy/distutils/ccompiler.py
@@ -80,6 +80,7 @@ def _needs_build(obj, cc_args, extra_postargs, pp_opts):
return False
+
def replace_method(klass, method_name, func):
if sys.version_info[0] < 3:
m = types.MethodType(func, None, klass)
@@ -88,6 +89,25 @@ def replace_method(klass, method_name, func):
m = lambda self, *args, **kw: func(self, *args, **kw)
setattr(klass, method_name, m)
+
+######################################################################
+## Method that subclasses may redefine. But don't call this method,
+## it i private to CCompiler class and may return unexpected
+## results if used elsewhere. So, you have been warned..
+
+def CCompiler_find_executables(self):
+ """
+ Does nothing here, but is called by the get_version method and can be
+ overridden by subclasses. In particular it is redefined in the `FCompiler`
+ class where more documentation can be found.
+
+ """
+ pass
+
+
+replace_method(CCompiler, 'find_executables', CCompiler_find_executables)
+
+
# Using customized CCompiler.spawn.
def CCompiler_spawn(self, cmd, display=None):
"""
diff --git a/numpy/distutils/fcompiler/intel.py b/numpy/distutils/fcompiler/intel.py
--- a/numpy/distutils/fcompiler/intel.py
+++ b/numpy/distutils/fcompiler/intel.py
@@ -57,7 +57,7 @@ def get_flags(self):
def get_flags_opt(self): # Scipy test failures with -O2
v = self.get_version()
- mpopt = 'openmp' if v and int(v.split('.')[0]) < 15 else 'qopenmp'
+ mpopt = 'openmp' if v and v < '15' else 'qopenmp'
return ['-xhost -fp-model strict -O1 -{}'.format(mpopt)]
def get_flags_arch(self):
@@ -123,7 +123,7 @@ def get_flags(self):
def get_flags_opt(self): # Scipy test failures with -O2
v = self.get_version()
- mpopt = 'openmp' if v and int(v.split('.')[0]) < 15 else 'qopenmp'
+ mpopt = 'openmp' if v and v < '15' else 'qopenmp'
return ['-fp-model strict -O1 -{}'.format(mpopt)]
def get_flags_arch(self):
diff --git a/numpy/distutils/intelccompiler.py b/numpy/distutils/intelccompiler.py
--- a/numpy/distutils/intelccompiler.py
+++ b/numpy/distutils/intelccompiler.py
@@ -19,7 +19,7 @@ def __init__(self, verbose=0, dry_run=0, force=0):
UnixCCompiler.__init__(self, verbose, dry_run, force)
v = self.get_version()
- mpopt = 'openmp' if v and int(v.split('.')[0]) < 15 else 'qopenmp'
+ mpopt = 'openmp' if v and v < '15' else 'qopenmp'
self.cc_exe = ('icc -fPIC -fp-model strict -O3 '
'-fomit-frame-pointer -{}').format(mpopt)
compiler = self.cc_exe
@@ -59,7 +59,7 @@ def __init__(self, verbose=0, dry_run=0, force=0):
UnixCCompiler.__init__(self, verbose, dry_run, force)
v = self.get_version()
- mpopt = 'openmp' if v and int(v.split('.')[0]) < 15 else 'qopenmp'
+ mpopt = 'openmp' if v and v < '15' else 'qopenmp'
self.cc_exe = ('icc -m64 -fPIC -fp-model strict -O3 '
'-fomit-frame-pointer -{}').format(mpopt)
compiler = self.cc_exe
| numpy 1.13.0 doesn't build with Intel compilers
I'm running into the problem below when trying to build `numpy` 1.13.0 with Intel compilers; building earlier numpy versions (1.12.1, 1.11.1, 1.10.4, ...) with this approach works fine.
```
$ python setup.py build --compiler=intel --fcompiler=intelem
building library "npymath" sources
Found executable /path/to/icc
Could not locate executable ecc
Traceback (most recent call last):
File "setup.py", line 392, in <module>
setup_package()
File "setup.py", line 384, in setup_package
setup(**metadata)
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/core.py", line 169, in setup
return old_setup(**new_attr)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/core.py", line 151, in setup
dist.run_commands()
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/build.py", line 47, in run
old_build.run(self)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/command/build.py", line 127, in run
self.run_command(cmd_name)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/build_src.py", line 148, in run
self.build_sources()
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/build_src.py", line 159, in build_sources
self.build_library_sources(*libname_info)
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/build_src.py", line 294, in build_library_sources
sources = self.generate_sources(sources, (lib_name, build_info))
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/build_src.py", line 377, in generate_sources
source = func(extension, build_dir)
File "numpy/core/setup.py", line 672, in get_mathlib_info
st = config_cmd.try_link('int main(void) { return 0;}')
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/command/config.py", line 248, in try_link
self._check_compiler()
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/config.py", line 39, in _check_compiler
old_config._check_compiler(self)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/command/config.py", line 102, in _check_compiler
dry_run=self.dry_run, force=1)
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/ccompiler.py", line 713, in new_compiler
compiler = klass(None, dry_run, force)
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/intelccompiler.py", line 21, in __init__
v = self.get_version()
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/ccompiler.py", line 582, in CCompiler_get_version
self.find_executables()
AttributeError: IntelCCompiler instance has no attribute 'find_executables'
```
The `Could not locate executable ecc` seems to be a hint, since this message doesn't occur with older numpy versions where building like this does work:
```
...
building library "npymath" sources
Found executable /path/to/icc
customize IntelEM64TFCompiler
Found executable /path/to/ifort
customize IntelEM64TFCompiler using config
C compiler: icc -fPIC -fp-model strict -O3 -fomit-frame-pointer -openmp
...
```
Why is `numpy` looking for `ecc` when it found `icc`, and not continuing to go and find `ifort` like it did before?
A similar problem was reported by @Eric89GXL in #9101, but there the problem was that `icc` could not be found, that's clearly not the problem here.
Seeing this with Python 2.7.13, Intel compilers 2017.1.132, Intel MKL 2017.1.132.
| I get the same error. Commenting out the line `self.find_executables()` works as a work-around. This method seems to be undefined for CCompiler/IntelCCompiler.
#8961 is the only change from 1.12.1 and I don't see how that would be related to this.
#8961 has added exactly the line causing this (`v = self.get_version()`).
So an alternate fix to #8961 suggested in #8941 was `-fopenmp`, which would not be version dependent. Might want to try that.
Hi everybody,
Regarding this change: in numpy 1.13.0 I found that, using f2py with the Intel compilers, I got the following compilation error:
`mpopt = 'openmp' if v and int(v.split('.')[0]) < 15 else 'qopenmp'
AttributeError: LooseVersion instance has no attribute 'split'`
which can be solved by changing the above line to:
`mpopt = 'openmp' if v and int(v.version[0]) < 15 else 'qopenmp'`
Hope this info can be useful,
ACM
Geez, version is a `LooseVersion` instance. Seems kinda useless given the lack of documentation of the class and the variations of version in the wild. That could be fixed by calling `str` on it I suppose.
So there are two problems:
* `LooseVersion` is not a string
* missing `find_executables`, which only seems to be defined for fortran compilers
@boegel Looks like "intel" is so generic that the Itanium platform is also being searched, weird.
EDIT: I don't think the ecc message is relevant here. | 2017-06-21T15:59:33Z | [] | [] |
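For context, a small sketch (mine) of why `v.split('.')` fails while a plain string comparison works: `get_version()` returns a `LooseVersion`, which supports rich comparison against strings but is not itself a `str`.
```python
from distutils.version import LooseVersion  # deprecated/removed in newer Pythons

v = LooseVersion('14.0.1')

# v.split('.') -> AttributeError: LooseVersion instance has no attribute 'split'
print(v < '15')             # True: the comparison coerces the string for us
print(int(v.version[0]))    # 14: the workaround suggested above
print(str(v).split('.'))    # ['14', '0', '1']: another option
```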
Traceback (most recent call last):
File "setup.py", line 392, in <module>
setup_package()
File "setup.py", line 384, in setup_package
setup(**metadata)
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/core.py", line 169, in setup
return old_setup(**new_attr)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/core.py", line 151, in setup
dist.run_commands()
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/build.py", line 47, in run
old_build.run(self)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/command/build.py", line 127, in run
self.run_command(cmd_name)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/build_src.py", line 148, in run
self.build_sources()
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/build_src.py", line 159, in build_sources
self.build_library_sources(*libname_info)
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/build_src.py", line 294, in build_library_sources
sources = self.generate_sources(sources, (lib_name, build_info))
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/build_src.py", line 377, in generate_sources
source = func(extension, build_dir)
File "numpy/core/setup.py", line 672, in get_mathlib_info
st = config_cmd.try_link('int main(void) { return 0;}')
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/command/config.py", line 248, in try_link
self._check_compiler()
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/config.py", line 39, in _check_compiler
old_config._check_compiler(self)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/command/config.py", line 102, in _check_compiler
dry_run=self.dry_run, force=1)
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/ccompiler.py", line 713, in new_compiler
compiler = klass(None, dry_run, force)
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/intelccompiler.py", line 21, in __init__
v = self.get_version()
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/ccompiler.py", line 582, in CCompiler_get_version
self.find_executables()
AttributeError: IntelCCompiler instance has no attribute 'find_executables'
| 10,556 |
numpy/numpy | numpy__numpy-9299 | 753e148e424e89da60390d7f1e15e65153da5aa0 | diff --git a/numpy/distutils/ccompiler.py b/numpy/distutils/ccompiler.py
--- a/numpy/distutils/ccompiler.py
+++ b/numpy/distutils/ccompiler.py
@@ -80,6 +80,7 @@ def _needs_build(obj, cc_args, extra_postargs, pp_opts):
return False
+
def replace_method(klass, method_name, func):
if sys.version_info[0] < 3:
m = types.MethodType(func, None, klass)
@@ -88,6 +89,25 @@ def replace_method(klass, method_name, func):
m = lambda self, *args, **kw: func(self, *args, **kw)
setattr(klass, method_name, m)
+
+######################################################################
+## Method that subclasses may redefine. But don't call this method,
+## it i private to CCompiler class and may return unexpected
+## results if used elsewhere. So, you have been warned..
+
+def CCompiler_find_executables(self):
+ """
+ Does nothing here, but is called by the get_version method and can be
+ overridden by subclasses. In particular it is redefined in the `FCompiler`
+ class where more documentation can be found.
+
+ """
+ pass
+
+
+replace_method(CCompiler, 'find_executables', CCompiler_find_executables)
+
+
# Using customized CCompiler.spawn.
def CCompiler_spawn(self, cmd, display=None):
"""
diff --git a/numpy/distutils/fcompiler/intel.py b/numpy/distutils/fcompiler/intel.py
--- a/numpy/distutils/fcompiler/intel.py
+++ b/numpy/distutils/fcompiler/intel.py
@@ -57,7 +57,7 @@ def get_flags(self):
def get_flags_opt(self): # Scipy test failures with -O2
v = self.get_version()
- mpopt = 'openmp' if v and int(v.split('.')[0]) < 15 else 'qopenmp'
+ mpopt = 'openmp' if v and v < '15' else 'qopenmp'
return ['-xhost -fp-model strict -O1 -{}'.format(mpopt)]
def get_flags_arch(self):
@@ -123,7 +123,7 @@ def get_flags(self):
def get_flags_opt(self): # Scipy test failures with -O2
v = self.get_version()
- mpopt = 'openmp' if v and int(v.split('.')[0]) < 15 else 'qopenmp'
+ mpopt = 'openmp' if v and v < '15' else 'qopenmp'
return ['-fp-model strict -O1 -{}'.format(mpopt)]
def get_flags_arch(self):
diff --git a/numpy/distutils/intelccompiler.py b/numpy/distutils/intelccompiler.py
--- a/numpy/distutils/intelccompiler.py
+++ b/numpy/distutils/intelccompiler.py
@@ -19,7 +19,7 @@ def __init__(self, verbose=0, dry_run=0, force=0):
UnixCCompiler.__init__(self, verbose, dry_run, force)
v = self.get_version()
- mpopt = 'openmp' if v and int(v.split('.')[0]) < 15 else 'qopenmp'
+ mpopt = 'openmp' if v and v < '15' else 'qopenmp'
self.cc_exe = ('icc -fPIC -fp-model strict -O3 '
'-fomit-frame-pointer -{}').format(mpopt)
compiler = self.cc_exe
@@ -59,7 +59,7 @@ def __init__(self, verbose=0, dry_run=0, force=0):
UnixCCompiler.__init__(self, verbose, dry_run, force)
v = self.get_version()
- mpopt = 'openmp' if v and int(v.split('.')[0]) < 15 else 'qopenmp'
+ mpopt = 'openmp' if v and v < '15' else 'qopenmp'
self.cc_exe = ('icc -m64 -fPIC -fp-model strict -O3 '
'-fomit-frame-pointer -{}').format(mpopt)
compiler = self.cc_exe
| numpy 1.13.0 doesn't build with Intel compilers
I'm running into the problem below when trying to build `numpy` 1.13.0 with Intel compilers; building earlier numpy versions (1.12.1, 1.11.1, 1.10.4, ...) with this approach works fine.
```
$ python setup.py build --compiler=intel --fcompiler=intelem
building library "npymath" sources
Found executable /path/to/icc
Could not locate executable ecc
Traceback (most recent call last):
File "setup.py", line 392, in <module>
setup_package()
File "setup.py", line 384, in setup_package
setup(**metadata)
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/core.py", line 169, in setup
return old_setup(**new_attr)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/core.py", line 151, in setup
dist.run_commands()
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/build.py", line 47, in run
old_build.run(self)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/command/build.py", line 127, in run
self.run_command(cmd_name)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/build_src.py", line 148, in run
self.build_sources()
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/build_src.py", line 159, in build_sources
self.build_library_sources(*libname_info)
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/build_src.py", line 294, in build_library_sources
sources = self.generate_sources(sources, (lib_name, build_info))
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/build_src.py", line 377, in generate_sources
source = func(extension, build_dir)
File "numpy/core/setup.py", line 672, in get_mathlib_info
st = config_cmd.try_link('int main(void) { return 0;}')
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/command/config.py", line 248, in try_link
self._check_compiler()
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/config.py", line 39, in _check_compiler
old_config._check_compiler(self)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/command/config.py", line 102, in _check_compiler
dry_run=self.dry_run, force=1)
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/ccompiler.py", line 713, in new_compiler
compiler = klass(None, dry_run, force)
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/intelccompiler.py", line 21, in __init__
v = self.get_version()
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/ccompiler.py", line 582, in CCompiler_get_version
self.find_executables()
AttributeError: IntelCCompiler instance has no attribute 'find_executables'
```
The `Could not locate executable ecc` seems to be a hint, since this message doesn't occur with older numpy versions where building like this does work:
```
...
building library "npymath" sources
Found executable /path/to/icc
customize IntelEM64TFCompiler
Found executable /path/to/ifort
customize IntelEM64TFCompiler using config
C compiler: icc -fPIC -fp-model strict -O3 -fomit-frame-pointer -openmp
...
```
Why is `numpy` looking for `ecc` when it found `icc`, and not continuing to go and find `ifort` like it did before?
A similar problem was reported by @Eric89GXL in #9101, but there the problem was that `icc` could not be found, that's clearly not the problem here.
Seeing this with Python 2.7.13, Intel compilers 2017.1.132, Intel MKL 2017.1.132.
| I get the same error. Commenting out the line `self.find_executables()` works as a work-around. This method seems to be undefined for CCompiler/IntelCCompiler.
#8961 is the only change from 1.12.1 and I don't see how that would be related to this.
#8961 has added exactly the line causing this (`v = self.get_version()`).
So an alternate fix to #8961 suggested in #8941 was `-fopenmp`, which would not be version dependent. Might want to try that.
Hi everybody,
Regarding this change: in numpy 1.13.0 I found that, using f2py with the Intel compilers, I got the following compilation error:
`mpopt = 'openmp' if v and int(v.split('.')[0]) < 15 else 'qopenmp'
AttributeError: LooseVersion instance has no attribute 'split'`
which can be solved by changing the above line to:
`mpopt = 'openmp' if v and int(v.version[0]) < 15 else 'qopenmp'`
Hope this info can be useful,
ACM
Geez, version is a `LooseVersion` instance. Seems kinda useless given the lack of documentation of the class and the variations of version in the wild. That could be fixed by calling `str` on it I suppose.
So there are two problems:
* `LooseVersion` is not a string
* missing `find_executables`, which only seems to be defined for fortran compilers
@boegel Looks like "intel" is so generic that the Itanium platform is also being searched, weird.
EDIT: I don't think the ecc message is relevant here. | 2017-06-26T17:23:43Z | [] | [] |
Traceback (most recent call last):
File "setup.py", line 392, in <module>
setup_package()
File "setup.py", line 384, in setup_package
setup(**metadata)
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/core.py", line 169, in setup
return old_setup(**new_attr)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/core.py", line 151, in setup
dist.run_commands()
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/build.py", line 47, in run
old_build.run(self)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/command/build.py", line 127, in run
self.run_command(cmd_name)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/build_src.py", line 148, in run
self.build_sources()
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/build_src.py", line 159, in build_sources
self.build_library_sources(*libname_info)
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/build_src.py", line 294, in build_library_sources
sources = self.generate_sources(sources, (lib_name, build_info))
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/build_src.py", line 377, in generate_sources
source = func(extension, build_dir)
File "numpy/core/setup.py", line 672, in get_mathlib_info
st = config_cmd.try_link('int main(void) { return 0;}')
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/command/config.py", line 248, in try_link
self._check_compiler()
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/command/config.py", line 39, in _check_compiler
old_config._check_compiler(self)
File "/user/home/gent/vsc400/vsc40023/eb_phanpyscratch/CO7/haswell-ib/software/Python/2.7.13-intel-2017a/lib/python2.7/distutils/command/config.py", line 102, in _check_compiler
dry_run=self.dry_run, force=1)
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/ccompiler.py", line 713, in new_compiler
compiler = klass(None, dry_run, force)
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/intelccompiler.py", line 21, in __init__
v = self.get_version()
File "/tmp/vsc40023/easybuild_build/numpy/1.13.0/intel-2017a-Python-2.7.13/numpy-1.13.0/numpy/distutils/ccompiler.py", line 582, in CCompiler_get_version
self.find_executables()
AttributeError: IntelCCompiler instance has no attribute 'find_executables'
| 10,557 |
numpy/numpy | numpy__numpy-9552 | ae17d2c93dfac88cca9859d8b49490deb3991f41 | diff --git a/numpy/doc/basics.py b/numpy/doc/basics.py
--- a/numpy/doc/basics.py
+++ b/numpy/doc/basics.py
@@ -158,8 +158,8 @@
numpy provides with ``np.finfo(np.longdouble)``.
NumPy does not provide a dtype with more precision than C
-``long double``\s; in particular, the 128-bit IEEE quad precision
-data type (FORTRAN's ``REAL*16``\) is not available.
+``long double``\\s; in particular, the 128-bit IEEE quad precision
+data type (FORTRAN's ``REAL*16``\\) is not available.
For efficient memory alignment, ``np.longdouble`` is usually stored
padded with zero bits, either to 96 or 128 bits. Which is more efficient
| test_warning_calls "invalid escape sequence \s" on Python 3.6 daily wheel builds.
New errors today (13 August), for Linux and OSX:
```
======================================================================
ERROR: numpy.tests.test_warnings.test_warning_calls
----------------------------------------------------------------------
Traceback (most recent call last):
File "/venv/lib/python3.6/site-packages/nose/case.py", line 198, in runTest
self.test(*self.arg)
File "/venv/lib/python3.6/site-packages/numpy/tests/test_warnings.py", line 79, in test_warning_calls
tree = ast.parse(file.read())
File "/usr/lib/python3.6/ast.py", line 35, in parse
return compile(source, filename, mode, PyCF_ONLY_AST)
File "<unknown>", line 184
SyntaxError: invalid escape sequence \s
```
https://travis-ci.org/MacPython/numpy-wheels/jobs/264023630
https://travis-ci.org/MacPython/numpy-wheels/jobs/264023631
https://travis-ci.org/MacPython/numpy-wheels/jobs/264023635
| Was worried about that, the problem is
```
file: numpy/doc/basics.py
line: 184 : invalid escape sequence \s
```
Came from documentation fixes. Probably need a raw string, although I would prefer making the files genuine rst.
I should add that check to the tests... | 2017-08-13T23:06:02Z | [] | [] |
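A minimal sketch (mine) of the underlying issue: inside a normal string literal `\s` is an invalid escape sequence that newer Pythons reject, while a doubled backslash (what the patch does) or a raw string produces the same text without the warning.
```python
import ast

ok_escaped = "long double\\s"   # literal backslash followed by 's', as in the patch
ok_raw = r"long double\s"       # raw-string alternative, identical content
assert ok_escaped == ok_raw

# ast.parse (used by test_warning_calls) is happy once the backslash is escaped:
ast.parse("x = 'long double\\\\s'")
```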
Traceback (most recent call last):
File "/venv/lib/python3.6/site-packages/nose/case.py", line 198, in runTest
self.test(*self.arg)
File "/venv/lib/python3.6/site-packages/numpy/tests/test_warnings.py", line 79, in test_warning_calls
tree = ast.parse(file.read())
File "/usr/lib/python3.6/ast.py", line 35, in parse
return compile(source, filename, mode, PyCF_ONLY_AST)
File "<unknown>", line 184
SyntaxError: invalid escape sequence \s
| 10,567 |
open-mmlab/mmdetection | open-mmlab__mmdetection-10568 | 8822264e185df57250ac15bdbb86ac5a383e6520 | diff --git a/demo/video_gpuaccel_demo.py b/demo/video_gpuaccel_demo.py
--- a/demo/video_gpuaccel_demo.py
+++ b/demo/video_gpuaccel_demo.py
@@ -52,7 +52,9 @@ def prefetch_batch_input_shape(model: nn.Module, ori_wh: Tuple[int,
test_pipeline = Compose(cfg.test_dataloader.dataset.pipeline)
data = {'img': np.zeros((h, w, 3), dtype=np.uint8), 'img_id': 0}
data = test_pipeline(data)
- _, data_sample = model.data_preprocessor([data], False)
+ data['inputs'] = [data['inputs']]
+ data['data_samples'] = [data['data_samples']]
+ data_sample = model.data_preprocessor(data, False)['data_samples']
batch_input_shape = data_sample[0].batch_input_shape
return batch_input_shape
@@ -69,8 +71,8 @@ def pack_data(frame_resize: np.ndarray, batch_input_shape: Tuple[int, int],
'scale_factor': (batch_input_shape[0] / ori_shape[0],
batch_input_shape[1] / ori_shape[1])
})
- frame_resize = torch.from_numpy(frame_resize).permute((2, 0, 1))
- data = {'inputs': frame_resize, 'data_sample': data_sample}
+ frame_resize = torch.from_numpy(frame_resize).permute((2, 0, 1)).cuda()
+ data = {'inputs': [frame_resize], 'data_samples': [data_sample]}
return data
@@ -112,7 +114,7 @@ def main():
for i, (frame_resize, frame_origin) in enumerate(
zip(track_iter_progress(video_resize), video_origin)):
data = pack_data(frame_resize, batch_input_shape, ori_shape)
- result = model.test_step([data])[0]
+ result = model.test_step(data)[0]
visualizer.add_datasample(
name='video',
| TypeError: list indices must be integers or slices, not str
When I run the demo script **video_gpuaccel_demo.py**, it fails with the following error. How can I solve it? Thanks.
Traceback (most recent call last):
File "demo/video_gpuaccel_demo.py", line 147, in <module>
main()
File "demo/video_gpuaccel_demo.py", line 102, in main
batch_input_shape = prefetch_batch_input_shape(
File "demo/video_gpuaccel_demo.py", line 60, in prefetch_batch_input_shape
_, data_sample = model.data_preprocessor([data], False)
File "C:\Anaconda\Anaconda\envs\mmdetection\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "c:\research\programmar\deeplearning\vehicle_classification\mmdet\models\data_preprocessors\data_preprocessor.py", line 121, in forward
batch_pad_shape = self._get_pad_shape(data)
File "c:\research\programmar\deeplearning\vehicle_classification\mmdet\models\data_preprocessors\data_preprocessor.py", line 154, in _get_pad_shape
_batch_inputs = data['inputs']
TypeError: list indices must be integers or slices, not str
| 2023-06-28T22:44:41Z | [] | [] |
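For reference, a minimal sketch (based on the patch above, not runnable on its own) of the dict-of-lists format the current `data_preprocessor` expects; `model` and `test_pipeline` are assumed to be built exactly as in the demo script.
```python
import numpy as np

# one dummy sample through the test pipeline, as the demo does
data = test_pipeline({'img': np.zeros((224, 224, 3), dtype=np.uint8), 'img_id': 0})

# wrap the single sample into batched lists: the preprocessor takes a dict of
# lists, not a list of per-sample dicts (hence the original TypeError)
data = {'inputs': [data['inputs']], 'data_samples': [data['data_samples']]}

data = model.data_preprocessor(data, False)
batch_input_shape = data['data_samples'][0].batch_input_shape
```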
Traceback (most recent call last):
File "demo/video_gpuaccel_demo.py", line 147, in <module>
main()
File "demo/video_gpuaccel_demo.py", line 102, in main
batch_input_shape = prefetch_batch_input_shape(
File "demo/video_gpuaccel_demo.py", line 60, in prefetch_batch_input_shape
_, data_sample = model.data_preprocessor([data], False)
File "C:\Anaconda\Anaconda\envs\mmdetection\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "c:\research\programmar\deeplearning\vehicle_classification\mmdet\models\data_preprocessors\data_preprocessor.py", line 121, in forward
batch_pad_shape = self._get_pad_shape(data)
File "c:\research\programmar\deeplearning\vehicle_classification\mmdet\models\data_preprocessors\data_preprocessor.py", line 154, in _get_pad_shape
_batch_inputs = data['inputs']
TypeError: list indices must be integers or slices, not str
| 10,594 |
open-mmlab/mmdetection | open-mmlab__mmdetection-1099 | b6712d4a9abe261b34b6a62f89ed3ed1fb88fae1 | diff --git a/mmdet/core/bbox/__init__.py b/mmdet/core/bbox/__init__.py
--- a/mmdet/core/bbox/__init__.py
+++ b/mmdet/core/bbox/__init__.py
@@ -1,4 +1,3 @@
-from .assign_sampling import assign_and_sample, build_assigner, build_sampler
from .assigners import AssignResult, BaseAssigner, MaxIoUAssigner
from .bbox_target import bbox_target
from .geometry import bbox_overlaps
@@ -9,6 +8,9 @@
bbox_mapping, bbox_mapping_back, delta2bbox,
distance2bbox, roi2bbox)
+from .assign_sampling import ( # isort:skip, avoid recursive imports
+ assign_and_sample, build_assigner, build_sampler)
+
__all__ = [
'bbox_overlaps', 'BaseAssigner', 'MaxIoUAssigner', 'AssignResult',
'BaseSampler', 'PseudoSampler', 'RandomSampler',
| ImportError: cannot import name 'build_sampler' from 'mmdet.core.bbox.assign_sampling'
I have successfully installed mmdetection with the command "pip install -v -e .", but I run into the problem below when testing it. Could anyone help me?
(lab) gpuserver@ubuntu:~/ht/labs/mmdetection-master$ python
Python 3.7.3 (default, Mar 27 2019, 22:11:17)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from mmdet.apis import init_detector
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/apis/__init__.py", line 2, in <module>
from .inference import inference_detector, init_detector, show_result
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/apis/inference.py", line 9, in <module>
from mmdet.core import get_classes
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/__init__.py", line 1, in <module>
from .anchor import * # noqa: F401, F403
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/anchor/__init__.py", line 2, in <module>
from .anchor_target import anchor_inside_flags, anchor_target
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/anchor/anchor_target.py", line 3, in <module>
from ..bbox import PseudoSampler, assign_and_sample, bbox2delta, build_assigner
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/bbox/__init__.py", line 1, in <module>
from .assign_sampling import assign_and_sample, build_assigner, build_sampler
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/bbox/assign_sampling.py", line 3, in <module>
from . import assigners, samplers
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/bbox/samplers/__init__.py", line 2, in <module>
from .combined_sampler import CombinedSampler
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/bbox/samplers/combined_sampler.py", line 1, in <module>
from ..assign_sampling import build_sampler
ImportError: cannot import name 'build_sampler' from 'mmdet.core.bbox.assign_sampling' (/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/bbox/assign_sampling.py)
| I encountered the same error today.
same problem here. | 2019-08-01T09:48:27Z | [] | [] |
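For readers unfamiliar with the failure mode, a small sketch (mine, with hypothetical module names) of the circular import that produces this kind of `ImportError`; the patch above avoids it by importing `assign_sampling` only after `assigners`/`samplers` in `__init__.py`:
```python
# pkg/__init__.py
from pkg.assign_sampling import build_sampler   # executes assign_sampling first ...

# pkg/assign_sampling.py
from pkg import samplers                        # ... which imports samplers ...
def build_sampler(cfg):
    ...

# pkg/samplers.py
from pkg.assign_sampling import build_sampler   # ... which asks for a name that
                                                # assign_sampling has not defined yet
# -> ImportError: cannot import name 'build_sampler'
```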
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/apis/__init__.py", line 2, in <module>
from .inference import inference_detector, init_detector, show_result
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/apis/inference.py", line 9, in <module>
from mmdet.core import get_classes
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/__init__.py", line 1, in <module>
from .anchor import * # noqa: F401, F403
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/anchor/__init__.py", line 2, in <module>
from .anchor_target import anchor_inside_flags, anchor_target
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/anchor/anchor_target.py", line 3, in <module>
from ..bbox import PseudoSampler, assign_and_sample, bbox2delta, build_assigner
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/bbox/__init__.py", line 1, in <module>
from .assign_sampling import assign_and_sample, build_assigner, build_sampler
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/bbox/assign_sampling.py", line 3, in <module>
from . import assigners, samplers
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/bbox/samplers/__init__.py", line 2, in <module>
from .combined_sampler import CombinedSampler
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/bbox/samplers/combined_sampler.py", line 1, in <module>
from ..assign_sampling import build_sampler
ImportError: cannot import name 'build_sampler' from 'mmdet.core.bbox.assign_sampling' (/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/bbox/assign_sampling.py)
| 10,596 |
open-mmlab/mmdetection | open-mmlab__mmdetection-1404 | c64beaf1494ea68562b274a723824d7f52fd31e8 | diff --git a/mmdet/datasets/loader/sampler.py b/mmdet/datasets/loader/sampler.py
--- a/mmdet/datasets/loader/sampler.py
+++ b/mmdet/datasets/loader/sampler.py
@@ -132,8 +132,12 @@ def __iter__(self):
math.ceil(
size * 1.0 / self.samples_per_gpu / self.num_replicas)
) * self.samples_per_gpu * self.num_replicas - len(indice)
- indice += indice[:extra]
- indices += indice
+ # pad indice
+ tmp = indice.copy()
+ for _ in range(extra // size):
+ indice.extend(tmp)
+ indice.extend(tmp[:extra % size])
+ indices.extend(indice)
assert len(indices) == self.total_size
| assert len(indices) == self.total_size error during multiple GPU training
I am trying to train my dataset on 8 GPUs. However, after calling `./dist_train.sh`, this assertion error appears:
Traceback (most recent call last):
File "./tools/train.py", line 113, in <module>
main()
File "./tools/train.py", line 109, in main
logger=logger)
File "/mmdetection/mmdet/apis/train.py", line 58, in train_detector
_dist_train(model, dataset, cfg, validate=validate)
File "/mmdetection/mmdet/apis/train.py", line 186, in _dist_train
runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
File "/opt/conda/lib/python3.6/site-packages/mmcv/runner/runner.py", line 358, in run
epoch_runner(data_loaders[i], **kwargs)
File "/opt/conda/lib/python3.6/site-packages/mmcv/runner/runner.py", line 260, in train
for i, data_batch in enumerate(data_loader):
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 193, in __iter__ return _DataLoaderIter(self)
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 493, in __init__
self._put_indices()
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 591, in _put_indices
indices = next(self.sample_iter, None)
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 172, in __iter__
for idx in self.sampler:
File "/mmdetection/mmdet/datasets/loader/sampler.py", line 138, in __iter__
assert len(indices) == self.total_size
...
In the config I tried various values for `imgs_per_gpu` and `workers_per_gpu`; currently it is:
`imgs_per_gpu=2, workers_per_gpu=2`
No setting worked, though. Single-GPU training works well.
What is the meaning of this assert?
Thanks!
| Please follow the `Error report` issue template.
> Please follow the `Error report` issue template.
Here it is, thanks for any help!
**Checklist**
1. I have searched related issues but cannot get the expected help.
yes
2. The bug has not been fixed in the latest version.
yup
**Describe the bug**
I am trying to train my custom dataset on 8 GPU's. However, after calling ./dist_train.sh the error showed below appeares. In the config I tried more values for `imgs_per_gpu` and `workers_per_gpu` (e.g. `imgs_per_gpu=2`, `workers_per_gpu=2`), no settings was working though.
Single-GPU training works well.
What is the meaning of the assert in the Traceback? What does not fit? Thanks!
**Reproduction**
1. What command or script did you run?
```
./tools/dist_train.sh MY_CONFIG 8 --validate
```
2. Did you make any modifications on the code or config? Did you understand what you have modified?
I modified number of classes, workers_per_gpu, imgs_per_gpu, dataset type and paths to the datasets. No changes in code.
3. What dataset did you use?
My own dataset of 8 classes converted to COCO format.
**Environment**
- OS: Ubuntu 18.04
- GCC 5.4.0
- PyTorch version 1.1.0
- I built the docker I use on the official pytorch+cuda docker
- GPU model: 8xV100
- CUDA version: 10.0, CUDNN version: 7.5
**Error traceback**
```
Traceback (most recent call last):
File "./tools/train.py", line 113, in
main()
File "./tools/train.py", line 109, in main
logger=logger)
File "/mmdetection/mmdet/apis/train.py", line 58, in train_detector
_dist_train(model, dataset, cfg, validate=validate)
File "/mmdetection/mmdet/apis/train.py", line 186, in _dist_train
runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
File "/opt/conda/lib/python3.6/site-packages/mmcv/runner/runner.py", line 358, in run
epoch_runner(data_loaders[i], **kwargs)
File "/opt/conda/lib/python3.6/site-packages/mmcv/runner/runner.py", line 260, in train
for i, data_batch in enumerate(data_loader):
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 193, in iter return _DataLoaderIter(self)
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 493, in init
self._put_indices()
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 591, in _put_indices
indices = next(self.sample_iter, None)
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 172, in iter
for idx in self.sampler:
File "/mmdetection/mmdet/datasets/loader/sampler.py", line 138, in iter
assert len(indices) == self.total_size
```
One more detail: when I print `len(indices)` and `self.total_size` right before the failing assert, they are `9308` and `9312`. The size of my training dataset is `9306`.
@hellock Any ideas? Recently I found out that when I set the config file to train with 2 GPUs (2 img/gpu), the training (called with `./dist_train`) initiates well. However, training with any more GPUs results in the mentioned assert error. Seems like a bug to me.
You may count the images with aspect ratio >1 and <1. I suspect that there are only 2 images for one of the two groups.
> You may count the images with aspect ratio >1 and <1. I suspect that there are only 2 images for one of the two groups.
you are right, there are exactly 2 images with height > width
The problem lies in [this line](https://github.com/open-mmlab/mmdetection/blob/master/mmdet/datasets/loader/sampler.py#L135) where `len(indices) < extra`.
I meet the same issue, how to fix it?
@ZhexuanZhou
Before bug fixed, simplest way is make your img_per_gpu as power of 2.
e.g. 2, 4, 8, 16 ...
Also, make your gpu number as power of 2.
This works for me.
> @ZhexuanZhou
> Before bug fixed, simplest way is make your img_per_gpu as power of 2.
> e.g. 2, 4, 8, 16 ...
> Also, make your gpu number as power of 2.
> This works for me.
This fix doesn't work for me, as is already mentioned in the bug description
> **Describe the bug**
> I am trying to train my custom dataset on 8 GPU's. However, after calling ./dist_train.sh the error showed below appeares. In the config I tried more values for `imgs_per_gpu` and `workers_per_gpu` (e.g. `imgs_per_gpu=2`, `workers_per_gpu=2`), no settings was working though.
The easiest work-around for me was to comment out the two asserts in the `sampler.py` :)
@FilipLangr
Yeah, that might cause some samples to be lost, but it is not that harmful.
Hello!
In my dataset there are 6 images with w>h,
33994 images with h>w,
and 2 images with h==w.
I deleted the h==w images and still get AssertionError: assert len(indices) == self.total_size :(
Then I deleted the w>h images and got another error: TypeError: 'NoneType' object is not subscriptable | 2019-09-16T10:01:55Z | [] | [] |
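To make the failure concrete, here is a small sketch (mine, mirroring the patched `GroupSampler` logic) of why a tiny aspect-ratio group breaks the old padding and how repeating the group fixes it:
```python
import math

indice = [0, 1]                          # a group with only 2 images (e.g. w > h)
samples_per_gpu, num_replicas = 2, 8
size = len(indice)
extra = (math.ceil(size * 1.0 / samples_per_gpu / num_replicas)
         * samples_per_gpu * num_replicas) - size          # 14 here

padded_old = indice + indice[:extra]     # only 4 indices -> the assert fails later
tmp = indice.copy()
padded_new = indice + tmp * (extra // size) + tmp[:extra % size]
print(len(padded_old), len(padded_new))  # 4 16
assert len(padded_new) == size + extra
```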
Traceback (most recent call last):
File "./tools/train.py", line 113, in <module>
main()
File "./tools/train.py", line 109, in main
logger=logger)
File "/mmdetection/mmdet/apis/train.py", line 58, in train_detector
_dist_train(model, dataset, cfg, validate=validate)
File "/mmdetection/mmdet/apis/train.py", line 186, in _dist_train
runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
File "/opt/conda/lib/python3.6/site-packages/mmcv/runner/runner.py", line 358, in run
epoch_runner(data_loaders[i], **kwargs)
File "/opt/conda/lib/python3.6/site-packages/mmcv/runner/runner.py", line 260, in train
for i, data_batch in enumerate(data_loader):
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 193, in __iter__ return _DataLoaderIter(self)
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 493, in __init__
self._put_indices()
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 591, in _put_indices
indices = next(self.sample_iter, None)
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 172, in __iter__
for idx in self.sampler:
File "/mmdetection/mmdet/datasets/loader/sampler.py", line 138, in __iter__
assert len(indices) == self.total_size
...
in the config I tried various values for `imgs_per_gpu` and `workers_per_gpu`, currently it is:
| 10,598 |
open-mmlab/mmdetection | open-mmlab__mmdetection-2030 | aade6801e7df66679b1fe9d162da0d03b4742dd4 | diff --git a/mmdet/datasets/pipelines/transforms.py b/mmdet/datasets/pipelines/transforms.py
--- a/mmdet/datasets/pipelines/transforms.py
+++ b/mmdet/datasets/pipelines/transforms.py
@@ -156,7 +156,7 @@ def _resize_masks(self, results):
mmcv.imresize(mask, mask_size, interpolation='nearest')
for mask in results[key]
]
- results[key] = masks
+ results[key] = np.stack(masks)
def _resize_seg(self, results):
for key in results.get('seg_fields', []):
@@ -245,10 +245,10 @@ def __call__(self, results):
results['flip_direction'])
# flip masks
for key in results.get('mask_fields', []):
- results[key] = [
+ results[key] = np.stack([
mmcv.imflip(mask, direction=results['flip_direction'])
for mask in results[key]
- ]
+ ])
# flip segs
for key in results.get('seg_fields', []):
@@ -410,7 +410,7 @@ def __call__(self, results):
gt_mask = results['gt_masks'][i][crop_y1:crop_y2,
crop_x1:crop_x2]
valid_gt_masks.append(gt_mask)
- results['gt_masks'] = valid_gt_masks
+ results['gt_masks'] = np.stack(valid_gt_masks)
return results
@@ -586,7 +586,7 @@ def __call__(self, results):
0).astype(mask.dtype)
expand_mask[top:top + h, left:left + w] = mask
expand_gt_masks.append(expand_mask)
- results['gt_masks'] = expand_gt_masks
+ results['gt_masks'] = np.stack(expand_gt_masks)
# not tested
if 'gt_semantic_seg' in results:
@@ -678,10 +678,10 @@ def __call__(self, results):
results['gt_masks'][i] for i in range(len(mask))
if mask[i]
]
- results['gt_masks'] = [
+ results['gt_masks'] = np.stack([
gt_mask[patch[1]:patch[3], patch[0]:patch[2]]
for gt_mask in valid_masks
- ]
+ ])
# not tested
if 'gt_semantic_seg' in results:
| Stacking of masks done in `Pad`
Hey guys,
I run into this error when commenting out the line
```
dict(type='Pad', size_divisor=32),
```
in `train_pipeline` on custom images. It causes this error:
```
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/pc/.vscode-server/extensions/ms-python.python-2020.1.58038/pythonFiles/lib/python/new_ptvsd/no_wheels/ptvsd/__main__.py", line 45, in <module>
cli.main()
File "/home/pc/.vscode-server/extensions/ms-python.python-2020.1.58038/pythonFiles/lib/python/new_ptvsd/no_wheels/ptvsd/../ptvsd/server/cli.py", line 361, in main
run()
File "/home/pc/.vscode-server/extensions/ms-python.python-2020.1.58038/pythonFiles/lib/python/new_ptvsd/no_wheels/ptvsd/../ptvsd/server/cli.py", line 203, in run_file
runpy.run_path(options.target, run_name="__main__")
File "/usr/lib/python3.6/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/usr/lib/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/pc/dev/mmdetection/tools/train.py", line 124, in <module>
main()
File "/home/pc/dev/mmdetection/tools/train.py", line 120, in main
timestamp=timestamp)
File "/home/pc/dev/mmdetection/mmdet/apis/train.py", line 133, in train_detector
timestamp=timestamp)
File "/home/pc/dev/mmdetection/mmdet/apis/train.py", line 319, in _non_dist_train
runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
File "/home/pc/dev/venvs/mmdetection/lib/python3.6/site-packages/mmcv/runner/runner.py", line 364, in run
epoch_runner(data_loaders[i], **kwargs)
File "/home/pc/dev/venvs/mmdetection/lib/python3.6/site-packages/mmcv/runner/runner.py", line 268, in train
self.model, data_batch, train_mode=True, **kwargs)
File "/home/pc/dev/mmdetection/mmdet/apis/train.py", line 100, in batch_processor
losses = model(**data)
File "/home/pc/dev/venvs/mmdetection/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/pc/dev/venvs/mmdetection/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/pc/dev/venvs/mmdetection/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/pc/dev/mmdetection/mmdet/core/fp16/decorators.py", line 49, in new_func
return old_func(*args, **kwargs)
File "/home/pc/dev/mmdetection/mmdet/models/detectors/base.py", line 138, in forward
return self.forward_train(img, img_meta, **kwargs)
File "/home/pc/dev/mmdetection/mmdet/models/detectors/two_stage.py", line 254, in forward_train
sampling_results, gt_masks, self.train_cfg.rcnn)
File "/home/pc/dev/mmdetection/mmdet/models/mask_heads/fcn_mask_head.py", line 111, in get_target
gt_masks, rcnn_train_cfg)
File "/home/pc/dev/mmdetection/mmdet/core/mask/mask_target.py", line 12, in mask_target
mask_targets = torch.cat(list(mask_targets))
File "/home/pc/dev/mmdetection/mmdet/core/mask/mask_target.py", line 22, in mask_target_single
_, maxh, maxw = gt_masks.shape
AttributeError: 'list' object has no attribute 'shape'
```
I already debugged it, and as far as I can tell it is because the masks are stacked in [`_pad_masks`](https://github.com/open-mmlab/mmdetection/blob/10c82efb0392fc1a5e1c696a53fe9ca7dfc3cdda/mmdet/datasets/pipelines/transforms.py#L304).
If padding is not included in the pipeline, however, then `gt_masks` is a `list` instead of `ndarray` causing the above error.
Shouldn't the stacking be done in [`_load_masks`](https://github.com/open-mmlab/mmdetection/blob/10c82efb0392fc1a5e1c696a53fe9ca7dfc3cdda/mmdet/datasets/pipelines/loading.py#L82) already to have a more flexible pipeline?
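For illustration, a minimal sketch of what stacking at load time could look like (function name and keys are assumptions, not the maintainers' actual fix); with something like this, later transforms would always receive an `ndarray` of shape `(num_instances, H, W)` whether or not `Pad` is in the pipeline:
```python
import numpy as np

def load_masks(results, mask_list):
    """mask_list: hypothetical list of (H, W) uint8 masks for one image."""
    if len(mask_list) > 0:
        results['gt_masks'] = np.stack(mask_list)  # (N, H, W) ndarray
    else:
        h, w = results['img_shape'][:2]
        results['gt_masks'] = np.zeros((0, h, w), dtype=np.uint8)
    return results
```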
| Thanks for the suggestions. It is a known issue and we are preparing a PR to fix it.
Ok, thank you! | 2020-01-31T04:36:49Z | [] | [] |
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/pc/.vscode-server/extensions/ms-python.python-2020.1.58038/pythonFiles/lib/python/new_ptvsd/no_wheels/ptvsd/__main__.py", line 45, in <module>
cli.main()
File "/home/pc/.vscode-server/extensions/ms-python.python-2020.1.58038/pythonFiles/lib/python/new_ptvsd/no_wheels/ptvsd/../ptvsd/server/cli.py", line 361, in main
run()
File "/home/pc/.vscode-server/extensions/ms-python.python-2020.1.58038/pythonFiles/lib/python/new_ptvsd/no_wheels/ptvsd/../ptvsd/server/cli.py", line 203, in run_file
runpy.run_path(options.target, run_name="__main__")
File "/usr/lib/python3.6/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/usr/lib/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/pc/dev/mmdetection/tools/train.py", line 124, in <module>
main()
File "/home/pc/dev/mmdetection/tools/train.py", line 120, in main
timestamp=timestamp)
File "/home/pc/dev/mmdetection/mmdet/apis/train.py", line 133, in train_detector
timestamp=timestamp)
File "/home/pc/dev/mmdetection/mmdet/apis/train.py", line 319, in _non_dist_train
runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
File "/home/pc/dev/venvs/mmdetection/lib/python3.6/site-packages/mmcv/runner/runner.py", line 364, in run
epoch_runner(data_loaders[i], **kwargs)
File "/home/pc/dev/venvs/mmdetection/lib/python3.6/site-packages/mmcv/runner/runner.py", line 268, in train
self.model, data_batch, train_mode=True, **kwargs)
File "/home/pc/dev/mmdetection/mmdet/apis/train.py", line 100, in batch_processor
losses = model(**data)
File "/home/pc/dev/venvs/mmdetection/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/pc/dev/venvs/mmdetection/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/pc/dev/venvs/mmdetection/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/pc/dev/mmdetection/mmdet/core/fp16/decorators.py", line 49, in new_func
return old_func(*args, **kwargs)
File "/home/pc/dev/mmdetection/mmdet/models/detectors/base.py", line 138, in forward
return self.forward_train(img, img_meta, **kwargs)
File "/home/pc/dev/mmdetection/mmdet/models/detectors/two_stage.py", line 254, in forward_train
sampling_results, gt_masks, self.train_cfg.rcnn)
File "/home/pc/dev/mmdetection/mmdet/models/mask_heads/fcn_mask_head.py", line 111, in get_target
gt_masks, rcnn_train_cfg)
File "/home/pc/dev/mmdetection/mmdet/core/mask/mask_target.py", line 12, in mask_target
mask_targets = torch.cat(list(mask_targets))
File "/home/pc/dev/mmdetection/mmdet/core/mask/mask_target.py", line 22, in mask_target_single
_, maxh, maxw = gt_masks.shape
AttributeError: 'list' object has no attribute 'shape'
| 10,603 |
|||
open-mmlab/mmdetection | open-mmlab__mmdetection-2492 | c1ef12df9c9d35f1402734435b23a4ae711f3084 | diff --git a/mmdet/datasets/pipelines/transforms.py b/mmdet/datasets/pipelines/transforms.py
--- a/mmdet/datasets/pipelines/transforms.py
+++ b/mmdet/datasets/pipelines/transforms.py
@@ -490,6 +490,9 @@ def __init__(self,
def __call__(self, results):
img = results['img']
+ assert img.dtype == np.float32, \
+ 'PhotoMetricDistortion needs the input image of dtype np.float32,'\
+ ' please set "to_float32=True" in "LoadImageFromFile" pipeline'
# random brightness
if random.randint(2):
delta = random.uniform(-self.brightness_delta,
| _pickle.PicklingError: Can't pickle <class 'numpy.core._exceptions.UFuncTypeError'>: it's not the same object as numpy.core._exceptions.UFuncTypeError
When I use
dict(type='PhotoMetricDistortion',
brightness_delta=32,
contrast_range=(0.5, 1.5),
saturation_range=(0.5, 1.5),
hue_delta=18)
in the config file for training,
I get the following error:
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/queues.py", line 234, in _feed
obj = _ForkingPickler.dumps(obj)
File "/usr/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class 'numpy.core._exceptions.UFuncTypeError'>: it's not the same object as numpy.core._exceptions.UFuncTypeError
That's all the info I get.
I found that the following code raises the error:
delta = random.uniform(-self.brightness_delta,self.brightness_delta)
img+=delta
in the `PhotoMetricDistortion` class in the `pipelines/transforms.py` file.
Other similar code raises the error too.
I don't know why.
I found that if `delta` is negative or a float, it raises the error; a positive integer is OK.
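For reference, the dtype behaviour described above can be reproduced with plain numpy, independent of mmdetection (a small sketch, not project code):
```python
import numpy as np

img = np.zeros((2, 2, 3), dtype=np.uint8)  # the dtype you get without to_float32
out = img + 1.5   # OK: a new float64 array is returned
img += 2          # OK: a positive integer still fits the uint8 casting rule
try:
    img += 1.5    # the float64 result cannot be cast back into the uint8 array
except TypeError as e:  # numpy raises UFuncTypeError, a TypeError subclass
    print(e)
```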
| When I use `img = img + delta` instead of `img += delta`, it's OK...
Do you use the config [ssd300_coco.py](https://github.com/open-mmlab/mmdetection/blob/master/configs/ssd300_coco.py), I can't reproduce your error with this config. Could you specify which config you use and what modification you apply to? Could you provide more detail information about your environment?
when i use PhotoMetricDistortion in retinanet training, I met exactly same error
> Do you use the config ssd300_coco.py, I can't reproduce your error with this config. Could you specify which config you use and what modification you apply to? Could you provide more detail information about your environment?
I use faster_rcnn_r50_fpn with torch 1.1.0, CUDA 10.1, numpy 1.18.2. Thanks. As mentioned above, when I replace '+=', '*=', and similar operations in PhotoMetricDistortion, the error disappears.
I found that the problem is that I didn't load the image with `to_float32=True`; `cv2.cvtColor` doesn't support float64. That causes this weird error here :)
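In config form, the fix described above is roughly the following (a sketch modelled on the SSD config referenced later in this thread, using the distortion values from this issue):
```python
train_pipeline = [
    dict(type='LoadImageFromFile', to_float32=True),  # load the image as float32
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='PhotoMetricDistortion',
         brightness_delta=32,
         contrast_range=(0.5, 1.5),
         saturation_range=(0.5, 1.5),
         hue_delta=18),
    # ... Resize / RandomFlip / Normalize / Pad / formatting as usual
]
```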
> > Do you use the config ssd300_coco.py, I can't reproduce your error with this config. Could you specify which config you use and what modification you apply to? Could you provide more detail information about your environment?
>
> I use faster_rcnn_r50_fpn with torch 1.1.0, CUDA 10.1, numpy 1.18.2. Thanks. As mentioned above, when I replace '+=', '*=', and similar operations in PhotoMetricDistortion, the error disappears.
@BChunlei Just as @edwardyangxin mentioned, when using the PhotoMetricDistortion transformation it is necessary to convert the image to np.float32 first, just as SSD does; you can refer [here](https://github.com/open-mmlab/mmdetection/blob/365c9302ee1eb5790b8e57f24ea4dfee8f2b88ac/configs/ssd300_coco.py#L51). But it is indeed not very friendly; would you like to create a PR to make it more friendly, i.e., report a reminder that the user needs to set the `to_float32` flag? | 2020-04-20T14:48:29Z | [] | [] |
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/queues.py", line 234, in _feed
obj = _ForkingPickler.dumps(obj)
File "/usr/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class 'numpy.core._exceptions.UFuncTypeError'>: it's not the same object as numpy.core._exceptions.UFuncTypeError
| 10,612 |
|||
open-mmlab/mmdetection | open-mmlab__mmdetection-2824 | 24a43d5060adb1b523a018eeee17d5ca75b2b23a | diff --git a/mmdet/core/mask/__init__.py b/mmdet/core/mask/__init__.py
--- a/mmdet/core/mask/__init__.py
+++ b/mmdet/core/mask/__init__.py
@@ -1,7 +1,8 @@
from .mask_target import mask_target
from .structures import BitmapMasks, PolygonMasks
-from .utils import split_combined_polys
+from .utils import encode_mask_results, split_combined_polys
__all__ = [
- 'split_combined_polys', 'mask_target', 'BitmapMasks', 'PolygonMasks'
+ 'split_combined_polys', 'mask_target', 'BitmapMasks', 'PolygonMasks',
+ 'encode_mask_results'
]
diff --git a/mmdet/core/mask/utils.py b/mmdet/core/mask/utils.py
--- a/mmdet/core/mask/utils.py
+++ b/mmdet/core/mask/utils.py
@@ -1,4 +1,6 @@
import mmcv
+import numpy as np
+import pycocotools.mask as mask_util
def split_combined_polys(polys, poly_lens, polys_per_mask):
@@ -28,3 +30,34 @@ def split_combined_polys(polys, poly_lens, polys_per_mask):
mask_polys = mmcv.slice_list(split_polys, polys_per_mask_single)
mask_polys_list.append(mask_polys)
return mask_polys_list
+
+
+# TODO: move this function to more proper place
+def encode_mask_results(mask_results):
+ """Encode bitmap mask to RLE code.
+
+ Args:
+ mask_results (list | tuple[list]): bitmap mask results.
+ In mask scoring rcnn, mask_results is a tuple of (segm_results,
+ segm_cls_score).
+
+ Returns:
+ list | tuple: RLE encoded mask.
+ """
+ if isinstance(mask_results, tuple): # mask scoring
+ cls_segms, cls_mask_scores = mask_results
+ else:
+ cls_segms = mask_results
+ num_classes = len(cls_segms)
+ encoded_mask_results = [[] for _ in range(num_classes)]
+ for i in range(len(cls_segms)):
+ for cls_segm in cls_segms[i]:
+ encoded_mask_results[i].append(
+ mask_util.encode(
+ np.array(
+ cls_segm[:, :, np.newaxis], order='F',
+ dtype='uint8'))[0]) # encoded with RLE
+ if isinstance(mask_results, tuple):
+ return encoded_mask_results, cls_mask_scores
+ else:
+ return encoded_mask_results
diff --git a/mmdet/models/detectors/base.py b/mmdet/models/detectors/base.py
--- a/mmdet/models/detectors/base.py
+++ b/mmdet/models/detectors/base.py
@@ -3,7 +3,6 @@
import mmcv
import numpy as np
-import pycocotools.mask as maskUtils
import torch.nn as nn
from mmcv.utils import print_log
@@ -210,7 +209,7 @@ def show_result(self,
for i in inds:
i = int(i)
color_mask = color_masks[labels[i]]
- mask = maskUtils.decode(segms[i]).astype(np.bool)
+ mask = segms[i]
img[mask] = img[mask] * 0.5 + color_mask * 0.5
# if out_file specified, do not show image in window
if out_file is not None:
diff --git a/mmdet/models/detectors/cascade_rcnn.py b/mmdet/models/detectors/cascade_rcnn.py
--- a/mmdet/models/detectors/cascade_rcnn.py
+++ b/mmdet/models/detectors/cascade_rcnn.py
@@ -31,4 +31,4 @@ def show_result(self, data, result, **kwargs):
else:
if isinstance(result, dict):
result = result['ensemble']
- super(CascadeRCNN, self).show_result(data, result, **kwargs)
+ return super(CascadeRCNN, self).show_result(data, result, **kwargs)
| IndexError in pycocotools
Thanks for your error report and we appreciate it a lot.
**Describe the bug**
When running the image_demo.py, I get an error about numpy index.
**Reproduction**
```
python demo/image_demo.py demo/demo.jpg configs/mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py checkpoints/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth --device cpu
```
**Environment**
sys.platform: linux
Python: 3.7.4 (default, Aug 13 2019, 20:35:49) [GCC 7.3.0]
CUDA available: True
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 10.0, V10.0.130
GPU 0: GeForce GTX 1080 Ti
GCC: gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
PyTorch: 1.4.0
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- Intel(R) Math Kernel Library Version 2019.0.4 Product Build 20190411 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CUDA Runtime 10.1
 - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
 - CuDNN 7.6.3
- CuDNN 7.6.3
- Magma 2.5.1
- Build settings: BLAS=MKL, BUILD_NAMEDTENSOR=OFF, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Wno-stringop-overflow, DISABLE_NUMA=1, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,
TorchVision: 0.5.0
OpenCV: 4.2.0
MMCV: 0.5.4
MMDetection: 2.0.0+unknown
MMDetection Compiler: GCC 5.4
MMDetection CUDA Compiler: 10.0
**Error traceback**
```
Traceback (most recent call last):
File "demo/image_demo.py", line 26, in <module>
main()
File "demo/image_demo.py", line 22, in main
show_result_pyplot(model, args.img, result, score_thr=args.score_thr)
File "/root/mmdetection-master/mmdet/apis/inference.py", line 146, in show_result_pyplot
img = model.show_result(img, result, score_thr=score_thr, show=False)
File "/root/mmdetection-master/mmdet/models/detectors/base.py", line 211, in show_result
mask = maskUtils.decode(segms[i]).astype(np.bool)
File "/root/anaconda3/lib/python3.7/site-packages/pycocotools-2.0-py3.7-linux-x86_64.egg/pycocotools/mask.py", line 91, in decode
return _mask.decode([rleObjs])[:,:,0]
File "pycocotools/_mask.pyx", line 146, in pycocotools._mask.decode
File "pycocotools/_mask.pyx", line 128, in pycocotools._mask._frString
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
```
| Please try to install pycocotools through pip:
`pip install "git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI"`.
Sometimes the numpy version can also cause problems. If the error still exists after you have installed the right pycocotools, you may try numpy=1.17.
@Johnson-Wang I did install pycocotools from github, and the numpy version is 1.17, but the problem is not solved.
@Johnson-Wang and I also tried lowering the numpy version, it just doesn't work
Maybe try numpy>=1.18 ?
@ZwwWayne Yes, I also tried numpy 1.18, but it still doesn't work.
Did you install pycocotools from pip before you installed it from GitHub using `pip install "git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI"`? It might be because the environment already has one and the new installation does not take effect. In this case you need to uninstall all pycocotools in your environment and re-install a new one.
@ZwwWayne Before installing from GitHub, I uninstalled pycocotools and made sure that there were no pycocotools library files on the system, but it still doesn't work.
Then how about numpy? Did you do the same thing?
Yes, I did everything; I even uninstalled conda and cleaned the whole Python environment, and it still doesn't work.
In object detection, try to rewrite `def show_result(self, data, result, **kwargs)` in your detector.py and add `self.CLASSES = tuple('A','B', ... , 'Z')` before `super(detector_name, self).show_result(data, result, **kwargs)`.
There may be similar labels in segm.
@aimhabo Actually I am doing something about instance segmentation, so I have to call pycocotools; otherwise I would delete the segmentation part. But I don't understand how similar labels could cause an error in numpy?
@mangdian I mean similar ground-truth information, like labels.
In the dataset settings, MS-COCO's information will be loaded automatically when you use another dataset with `dataset_type = 'CocoDataset'`.
In my similar situation, I force the detector's `self.CLASSES` in `show_result()` (forcing it in `__init__()` failed because of some rewriting I haven't tracked down yet).
The problem is that the segmentation returned by the models is numpy boolean arrays instead of undecoded bytes data, so there is no need to decode the result. I finally solved this problem by modifying mmdet/apis/inference.py (deleting the decode part). However, it is very strange that the code works well on another Linux server of mine, with no need to modify inference.py. Anyway, modifying inference.py can be a temporary solution to this problem.
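To make the mismatch concrete, here is a small pycocotools-only sketch (toy arrays, not model output): `decode` expects the RLE dicts produced by `encode`, while the model now hands back masks that are already decoded boolean arrays:
```python
import numpy as np
import pycocotools.mask as mask_util

mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1

rle = mask_util.encode(np.asfortranarray(mask))  # RLE dict with 'size'/'counts'
roundtrip = mask_util.decode(rle)                # fine: decodes back to the array

bool_mask = mask.astype(bool)
# mask_util.decode(bool_mask)  # this is effectively what show_result tried to do;
#                              # a boolean array is already decoded, so use it as-is
```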
Hi @mangdian ,
Thanks for your bug report. Do you use the newest version when you meet the bug? It seems that we did not change inference.py after changing the test logic. Would you like to create a PR to fix that?
@ZwwWayne Yes, I am using the newest version of mmdetection. What confuses me is that I tried the demo on two Linux servers. On one machine the demo code works, while on the other it doesn't. I am not sure if it is a bug, or something related to the numpy version or Cython version (if it is version-related, modifying the code may not be a good choice). But the reason that leads to the error is clear: the segmentation result returned by the model is a numpy boolean array, instead of encoded bytes, so pycocotools fails to decode the results. It seems that some other people also meet this problem. Should I create a PR?
Met the same problem here.
I installed the newest version of mmdetection following the guide; it is very annoying to meet this error while testing `mask rcnn` with the demo code:
```
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
```
What's more, the model download URLs for `aliyun` do not work.
@FantasyJXF Please try my solution:
replace line 213 in mmdet/models/detectors/base.py:
` mask = maskUtils.decode(segms[i]).astype(np.bool)
`
with
`mask = segms[i]`
> @FantasyJXF Please try my solution:
>
> replace line 213 in mmdet/models/detectors/base.py:
> ` mask = maskUtils.decode(segms[i]).astype(np.bool)`
> with
> `mask = segms[i]`
Very useful, thanks for your contribution.
So there is no need to do the extra decode to get the boolean array; the model already does that. Maybe that's because the developers didn't check the segmentation models before the release.
First day using mmdetection, and I almost switched to detectron2.
Thank you again.
@mangdian Thanks for the fix, it will be appreciated if you could create a PR for it.
@hellock OK, I will create a PR for it. Glad to do it.
Two things to fix:
1. Change https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/detectors/base.py#L213 to `mask = segms[i]`.
2. Change https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/detectors/cascade_rcnn.py#L34 to `return xxxx`
#2697
@hellock Hi, another unrelated problem in the [model zoo](https://github.com/open-mmlab/mmdetection/blob/master/docs/model_zoo.md):
It seems the ALIYUN mirror sites do not work, and the AWS mirror URL postfix `open-mmlab` seems to have changed to `mmdetection`.
Yes, the aliyun mirror site has not been updated to host v2.0 models. It will be updated later. | 2020-05-27T02:14:48Z | [] | [] |
Traceback (most recent call last):
File "demo/image_demo.py", line 26, in <module>
main()
File "demo/image_demo.py", line 22, in main
show_result_pyplot(model, args.img, result, score_thr=args.score_thr)
File "/root/mmdetection-master/mmdet/apis/inference.py", line 146, in show_result_pyplot
img = model.show_result(img, result, score_thr=score_thr, show=False)
File "/root/mmdetection-master/mmdet/models/detectors/base.py", line 211, in show_result
mask = maskUtils.decode(segms[i]).astype(np.bool)
File "/root/anaconda3/lib/python3.7/site-packages/pycocotools-2.0-py3.7-linux-x86_64.egg/pycocotools/mask.py", line 91, in decode
return _mask.decode([rleObjs])[:,:,0]
File "pycocotools/_mask.pyx", line 146, in pycocotools._mask.decode
File "pycocotools/_mask.pyx", line 128, in pycocotools._mask._frString
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
| 10,617 |
|||
open-mmlab/mmdetection | open-mmlab__mmdetection-3529 | ae453fa92ffebcbd224b72f6d48e0b8699424450 | diff --git a/tools/fuse_conv_bn.py b/tools/fuse_conv_bn.py
deleted file mode 100644
--- a/tools/fuse_conv_bn.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import argparse
-
-import torch
-import torch.nn as nn
-from mmcv.runner import save_checkpoint
-
-from mmdet.apis import init_detector
-
-
-def fuse_conv_bn(conv, bn):
- """During inference, the functionary of batch norm layers is turned off but
- only the mean and var alone channels are used, which exposes the chance to
- fuse it with the preceding conv layers to save computations and simplify
- network structures."""
- conv_w = conv.weight
- conv_b = conv.bias if conv.bias is not None else torch.zeros_like(
- bn.running_mean)
-
- factor = bn.weight / torch.sqrt(bn.running_var + bn.eps)
- conv.weight = nn.Parameter(conv_w *
- factor.reshape([conv.out_channels, 1, 1, 1]))
- conv.bias = nn.Parameter((conv_b - bn.running_mean) * factor + bn.bias)
- return conv
-
-
-def fuse_module(m):
- last_conv = None
- last_conv_name = None
-
- for name, child in m.named_children():
- if isinstance(child, (nn.BatchNorm2d, nn.SyncBatchNorm)):
- if last_conv is None: # only fuse BN that is after Conv
- continue
- fused_conv = fuse_conv_bn(last_conv, child)
- m._modules[last_conv_name] = fused_conv
- # To reduce changes, set BN as Identity instead of deleting it.
- m._modules[name] = nn.Identity()
- last_conv = None
- elif isinstance(child, nn.Conv2d):
- last_conv = child
- last_conv_name = name
- else:
- fuse_module(child)
- return m
-
-
-def parse_args():
- parser = argparse.ArgumentParser(
- description='fuse Conv and BN layers in a model')
- parser.add_argument('config', help='config file path')
- parser.add_argument('checkpoint', help='checkpoint file path')
- parser.add_argument('out', help='output path of the converted model')
- args = parser.parse_args()
- return args
-
-
-def main():
- args = parse_args()
- # build the model from a config file and a checkpoint file
- model = init_detector(args.config, args.checkpoint)
- # fuse conv and bn layers of the model
- fused_model = fuse_module(model)
- save_checkpoint(fused_model, args.out)
-
-
-if __name__ == '__main__':
- main()
| ModuleNotFoundError: No module named 'tools'
I would like to test the result of training, so I ran the following:
(base) zhangshen@zhangshen-X550JX:~/mmdetection$ python tools/test.py configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth --out./result/result_100/pkl --eval bbox
but I got:
Traceback (most recent call last):
File "tools/test.py", line 9, in <module>
from tools.fuse_conv_bn import fuse_module
ModuleNotFoundError: No module named 'tools'
How can I solve this problem?
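For reference, two hedged workarounds (assumptions on my side, not an official fix from this thread):
```python
# 1) Make the repo root importable so `from tools.fuse_conv_bn import fuse_module`
#    can resolve, e.g. run from the mmdetection root, or:
import sys
sys.path.insert(0, '/path/to/mmdetection')  # hypothetical path to the repo root

# 2) A sufficiently recent mmcv ships an equivalent helper, so a script can skip
#    the `tools` import entirely (assumption: your mmcv version provides it):
from mmcv.cnn import fuse_conv_bn
# model = fuse_conv_bn(model)  # fuse Conv+BN pairs before benchmarking/testing
```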
| #2667.
@yhcao6 Either fix the tool script or update the documentation.
> @yhcao6 Either fix the tool script or update the documentation.
How can I fix the tool script, or which documentation should I update?
I have created a PR to fix the error-report template, #3192. Thanks for reporting the error.
Traceback (most recent call last):
File "tools/test.py", line 9, in <module>
from tools.fuse_conv_bn import fuse_module
ModuleNotFoundError: No module named 'tools'
| 10,626 |
|||
open-mmlab/mmdetection | open-mmlab__mmdetection-3836 | 8e29b954a0276593fae3469eaf0d205da145c3da | diff --git a/mmdet/models/dense_heads/reppoints_head.py b/mmdet/models/dense_heads/reppoints_head.py
--- a/mmdet/models/dense_heads/reppoints_head.py
+++ b/mmdet/models/dense_heads/reppoints_head.py
@@ -292,7 +292,7 @@ def forward_single(self, x):
pts_out_refine = pts_out_refine + pts_out_init.detach()
return cls_out, pts_out_init, pts_out_refine
- def get_points(self, featmap_sizes, img_metas):
+ def get_points(self, featmap_sizes, img_metas, device):
"""Get points according to feature map sizes.
Args:
@@ -310,7 +310,7 @@ def get_points(self, featmap_sizes, img_metas):
multi_level_points = []
for i in range(num_levels):
points = self.point_generators[i].grid_points(
- featmap_sizes[i], self.point_strides[i])
+ featmap_sizes[i], self.point_strides[i], device)
multi_level_points.append(points)
points_list = [[point.clone() for point in multi_level_points]
for _ in range(num_imgs)]
@@ -326,7 +326,7 @@ def get_points(self, featmap_sizes, img_metas):
valid_feat_h = min(int(np.ceil(h / point_stride)), feat_h)
valid_feat_w = min(int(np.ceil(w / point_stride)), feat_w)
flags = self.point_generators[i].valid_flags(
- (feat_h, feat_w), (valid_feat_h, valid_feat_w))
+ (feat_h, feat_w), (valid_feat_h, valid_feat_w), device)
multi_level_flags.append(flags)
valid_flag_list.append(multi_level_flags)
@@ -534,6 +534,7 @@ def loss_single(self, cls_score, pts_pred_init, pts_pred_refine, labels,
label_weights = label_weights.reshape(-1)
cls_score = cls_score.permute(0, 2, 3,
1).reshape(-1, self.cls_out_channels)
+ cls_score = cls_score.contiguous()
loss_cls = self.loss_cls(
cls_score,
labels,
@@ -572,11 +573,12 @@ def loss(self,
gt_bboxes_ignore=None):
featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
assert len(featmap_sizes) == len(self.point_generators)
+ device = cls_scores[0].device
label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1
# target for initial stage
center_list, valid_flag_list = self.get_points(featmap_sizes,
- img_metas)
+ img_metas, device)
pts_coordinate_preds_init = self.offset_to_pts(center_list,
pts_preds_init)
if self.train_cfg.init.assigner['type'] == 'PointAssigner':
@@ -604,7 +606,7 @@ def loss(self,
# target for refinement stage
center_list, valid_flag_list = self.get_points(featmap_sizes,
- img_metas)
+ img_metas, device)
pts_coordinate_preds_refine = self.offset_to_pts(
center_list, pts_preds_refine)
bbox_list = []
@@ -666,6 +668,7 @@ def get_bboxes(self,
rescale=False,
nms=True):
assert len(cls_scores) == len(pts_preds_refine)
+ device = cls_scores[0].device
bbox_preds_refine = [
self.points2bbox(pts_pred_refine)
for pts_pred_refine in pts_preds_refine
@@ -673,7 +676,7 @@ def get_bboxes(self,
num_levels = len(cls_scores)
mlvl_points = [
self.point_generators[i].grid_points(cls_scores[i].size()[-2:],
- self.point_strides[i])
+ self.point_strides[i], device)
for i in range(num_levels)
]
result_list = []
| RuntimeError: expected device cuda:1 but got device cuda:0
I have 2 Titan Xp GPUs, and when I run RepPoints detection train.py, the following occurs:
```
python tools/train.py workproject/gureppoints/reppoints_moment_r101_fpn_gn-neck+head_2x_coco.py --gpu-ids=1
2020-09-24 17:04:35,811 - mmdet - INFO - Distributed training: False
2020-09-24 17:04:36,261 - mmdet - INFO - Config:
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True),
dict(type='Resize', img_scale=(2048, 1024), keep_ratio=True),
dict(type='RandomFlip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='Pad', size_divisor=32),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(2048, 1024),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
]
data = dict(
samples_per_gpu=2,
workers_per_gpu=4,
train=dict(
type='CocoDataset',
ann_file='data/coco/annotations/instances_train2017.json',
img_prefix='data/coco/train2017/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True),
dict(type='Resize', img_scale=(2048, 1024), keep_ratio=True),
dict(type='RandomFlip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='Pad', size_divisor=32),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
]),
val=dict(
type='CocoDataset',
ann_file='data/coco/annotations/instances_val2017.json',
img_prefix='data/coco/val2017/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(2048, 1024),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
]),
test=dict(
type='CocoDataset',
ann_file='data/coco/annotations/instances_test2017.json',
img_prefix='data/coco/test2017/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(2048, 1024),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
]))
evaluation = dict(interval=1, metric='bbox')
optimizer = dict(type='SGD', lr=0.00125, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=0.001,
step=[16, 22])
total_epochs = 24
checkpoint_config = dict(interval=1)
log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
model = dict(
type='RepPointsDetector',
pretrained='torchvision://resnet101',
backbone=dict(
type='ResNet',
depth=101,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
norm_eval=True,
style='pytorch'),
neck=dict(
type='FPN',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
start_level=1,
add_extra_convs='on_input',
num_outs=5,
norm_cfg=dict(type='GN', num_groups=32, requires_grad=True)),
bbox_head=dict(
type='RepPointsHead',
num_classes=6,
in_channels=256,
feat_channels=256,
point_feat_channels=256,
stacked_convs=3,
num_points=9,
gradient_mul=0.1,
point_strides=[8, 16, 32, 64, 128],
point_base_scale=4,
loss_cls=dict(
type='FocalLoss',
use_sigmoid=True,
gamma=2.0,
alpha=0.25,
loss_weight=1.0),
loss_bbox_init=dict(type='SmoothL1Loss', beta=0.11, loss_weight=0.5),
loss_bbox_refine=dict(type='SmoothL1Loss', beta=0.11, loss_weight=1.0),
transform_method='moment',
norm_cfg=dict(type='GN', num_groups=32, requires_grad=True)))
train_cfg = dict(
init=dict(
assigner=dict(type='PointAssigner', scale=4, pos_num=1),
allowed_border=-1,
pos_weight=-1,
debug=False),
refine=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.5,
neg_iou_thr=0.4,
min_pos_iou=0,
ignore_iof_thr=-1),
allowed_border=-1,
pos_weight=-1,
debug=False))
test_cfg = dict(
nms_pre=1000,
min_bbox_size=0,
score_thr=0.05,
nms=dict(type='nms', iou_threshold=0.5),
max_per_img=100)
norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
work_dir = 'workproject/gureppoints'
gpu_ids = [1]
2020-09-24 17:04:37,362 - mmdet - INFO - load model from: torchvision://resnet101
2020-09-24 17:04:38,153 - mmdet - WARNING - The model and loaded state dict do not match exactly
unexpected key in source state_dict: fc.weight, fc.bias
loading annotations into memory...
Done (t=0.29s)
creating index...
index created!
loading annotations into memory...
Done (t=0.06s)
creating index...
index created!
2020-09-24 17:04:44,095 - mmdet - INFO - Start running, host: ys@ys, work_dir: /media/ys/4E2CD69B656E8D93/guchenliang/mmdetection/workproject/gureppoints
2020-09-24 17:04:44,096 - mmdet - INFO - workflow: [('train', 1)], max: 24 epochs
Traceback (most recent call last):
File "tools/train.py", line 178, in <module>
main()
File "tools/train.py", line 174, in main
meta=meta)
File "/media/ys/4E2CD69B656E8D93/guchenliang/mmdetection/mmdet/apis/train.py", line 143, in train_detector
runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
File "/home/ys/anaconda3/envs/tensorflow1/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 122, in run
epoch_runner(data_loaders[i], **kwargs)
File "/home/ys/anaconda3/envs/tensorflow1/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 32, in train
**kwargs)
File "/home/ys/anaconda3/envs/tensorflow1/lib/python3.7/site-packages/mmcv/parallel/data_parallel.py", line 67, in train_step
return self.module.train_step(*inputs[0], **kwargs[0])
File "/media/ys/4E2CD69B656E8D93/guchenliang/mmdetection/mmdet/models/detectors/base.py", line 234, in train_step
losses = self(**data)
File "/home/ys/anaconda3/envs/tensorflow1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/media/ys/4E2CD69B656E8D93/guchenliang/mmdetection/mmdet/core/fp16/decorators.py", line 51, in new_func
return old_func(*args, **kwargs)
File "/media/ys/4E2CD69B656E8D93/guchenliang/mmdetection/mmdet/models/detectors/base.py", line 168, in forward
return self.forward_train(img, img_metas, **kwargs)
File "/media/ys/4E2CD69B656E8D93/guchenliang/mmdetection/mmdet/models/detectors/single_stage.py", line 94, in forward_train
gt_labels, gt_bboxes_ignore)
File "/media/ys/4E2CD69B656E8D93/guchenliang/mmdetection/mmdet/models/dense_heads/base_dense_head.py", line 54, in forward_train
losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore)
File "/media/ys/4E2CD69B656E8D93/guchenliang/mmdetection/mmdet/models/dense_heads/reppoints_head.py", line 581, in loss
pts_preds_init)
File "/media/ys/4E2CD69B656E8D93/guchenliang/mmdetection/mmdet/models/dense_heads/reppoints_head.py", line 368, in offset_to_pts
pts = xy_pts_shift * self.point_strides[i_lvl] + pts_center
RuntimeError: expected device cuda:1 but got device cuda:0
```
so what's the problem?
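For context, the error itself is easy to reproduce with plain PyTorch (a tiny sketch assuming a machine with at least two GPUs; it is unrelated to the RepPoints code):
```python
import torch

a = torch.zeros(3, device='cuda:1')  # tensors the model creates on GPU 1
b = torch.zeros(3, device='cuda:0')  # a tensor accidentally built on GPU 0
# a + b  # RuntimeError: expected device cuda:1 but got device cuda:0
c = a + b.to(a.device)               # moving to a common device fixes it
```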
| Not typo.
`print(args)` shows `gpu_ids=[1]`, and the log shows `gpu_ids = [1]`.
> Not typo.
> `print(args)` shows `gpu_ids=[1]`, and the log shows `gpu_ids = [1]`.
You are right, I don't understand the usage of argparse well.
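As an aside, a tiny sketch of how an argparse option defined like the one in tools/train.py turns `--gpu-ids=1` into `[1]` (the option definition here is assumed, for illustration only):
```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--gpu-ids', type=int, nargs='+')
print(parser.parse_args(['--gpu-ids=1']))  # Namespace(gpu_ids=[1])
```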
Does passing `device = 'cuda:1'` from `reppoints_head.py` work?
`points = self.point_generators[i].grid_points(featmap_sizes[i], self.point_strides[i], device=device)`
`flags = self.point_generators[i].valid_flags((feat_h, feat_w), (valid_feat_h, valid_feat_w), device=device)`
If so, `pts = xy_pts_shift * self.point_strides[i_lvl] + pts_center.to(xy_pts_shift.device)` would be a tentative fix. | 2020-09-25T07:15:13Z | [] | [] |
Traceback (most recent call last):
File "tools/train.py", line 178, in <module>
main()
File "tools/train.py", line 174, in main
meta=meta)
File "/media/ys/4E2CD69B656E8D93/guchenliang/mmdetection/mmdet/apis/train.py", line 143, in train_detector
runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
File "/home/ys/anaconda3/envs/tensorflow1/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 122, in run
epoch_runner(data_loaders[i], **kwargs)
File "/home/ys/anaconda3/envs/tensorflow1/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 32, in train
**kwargs)
File "/home/ys/anaconda3/envs/tensorflow1/lib/python3.7/site-packages/mmcv/parallel/data_parallel.py", line 67, in train_step
return self.module.train_step(*inputs[0], **kwargs[0])
File "/media/ys/4E2CD69B656E8D93/guchenliang/mmdetection/mmdet/models/detectors/base.py", line 234, in train_step
losses = self(**data)
File "/home/ys/anaconda3/envs/tensorflow1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/media/ys/4E2CD69B656E8D93/guchenliang/mmdetection/mmdet/core/fp16/decorators.py", line 51, in new_func
return old_func(*args, **kwargs)
File "/media/ys/4E2CD69B656E8D93/guchenliang/mmdetection/mmdet/models/detectors/base.py", line 168, in forward
return self.forward_train(img, img_metas, **kwargs)
File "/media/ys/4E2CD69B656E8D93/guchenliang/mmdetection/mmdet/models/detectors/single_stage.py", line 94, in forward_train
gt_labels, gt_bboxes_ignore)
File "/media/ys/4E2CD69B656E8D93/guchenliang/mmdetection/mmdet/models/dense_heads/base_dense_head.py", line 54, in forward_train
losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore)
File "/media/ys/4E2CD69B656E8D93/guchenliang/mmdetection/mmdet/models/dense_heads/reppoints_head.py", line 581, in loss
pts_preds_init)
File "/media/ys/4E2CD69B656E8D93/guchenliang/mmdetection/mmdet/models/dense_heads/reppoints_head.py", line 368, in offset_to_pts
pts = xy_pts_shift * self.point_strides[i_lvl] + pts_center
RuntimeError: expected device cuda:1 but got device cuda:0
| 10,635 |
|||
open-mmlab/mmdetection | open-mmlab__mmdetection-4555 | 3620bb74893ef688b3243652042660e1b5866d5e | diff --git a/mmdet/datasets/xml_style.py b/mmdet/datasets/xml_style.py
--- a/mmdet/datasets/xml_style.py
+++ b/mmdet/datasets/xml_style.py
@@ -20,6 +20,8 @@ class XMLDataset(CustomDataset):
"""
def __init__(self, min_size=None, **kwargs):
+ assert self.CLASSES or kwargs.get(
+ 'classes', None), 'CLASSES in `XMLDataset` can not be None.'
super(XMLDataset, self).__init__(**kwargs)
self.cat2label = {cat: i for i, cat in enumerate(self.CLASSES)}
self.min_size = min_size
@@ -43,8 +45,6 @@ def load_annotations(self, ann_file):
tree = ET.parse(xml_path)
root = tree.getroot()
size = root.find('size')
- width = 0
- height = 0
if size is not None:
width = int(size.find('width').text)
height = int(size.find('height').text)
| TypeError: argument of type 'NoneType' is not iterable
When I use my own dataset to train Faster R-CNN, I run into this problem. The following are the environment information and logs:
sys.platform: linux
Python: 3.7.9 | packaged by conda-forge | (default, Dec 9 2020, 21:08:20) [GCC 9.3.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: GeForce GTX 1080 Ti
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 10.2, V10.2.89
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.5.0
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 10.2
 - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
 - CuDNN 7.6.5
- Magma 2.5.2
- Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK
-DUSE_XNNPACK -DUSE_INTERNAL_THREADPOOL_IMPL -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,
TorchVision: 0.6.0a0+82fd1c8
OpenCV: 4.4.0
MMCV: 1.2.4
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 10.2
MMDetection: 2.8.0+
------------------------------------------------------------
2021-01-31 22:13:23,946 - mmdet - INFO - Distributed training: False
2021-01-31 22:13:26,538 - mmdet - INFO - Config:
model = dict(
type='FasterRCNN',
pretrained='torchvision://resnet50',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
norm_eval=True,
style='pytorch'),
neck=dict(
type='FPN',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
rpn_head=dict(
type='RPNHead',
in_channels=256,
feat_channels=256,
anchor_generator=dict(
type='AnchorGenerator',
scales=[8],
ratios=[0.5, 1.0, 2.0],
strides=[4, 8, 16, 32, 64]),
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[1.0, 1.0, 1.0, 1.0]),
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
roi_head=dict(
type='StandardRoIHead',
bbox_roi_extractor=dict(
type='SingleRoIExtractor',
roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
bbox_head=dict(
type='Shared2FCBBoxHead',
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=232,
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=False,
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='L1Loss', loss_weight=1.0))),
train_cfg=dict(
rpn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=-1,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_across_levels=False,
nms_pre=2000,
nms_post=1000,
max_num=1000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False)),
test_cfg=dict(
rpn=dict(
nms_across_levels=False,
nms_pre=1000,
nms_post=1000,
max_num=1000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=dict(
score_thr=0.05,
nms=dict(type='nms', iou_threshold=0.5),
max_per_img=100)))
dataset_type = 'VOCDataset'
data_root = '/home/chengyuhong/mmdetection/data/tt100k_2021/'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True),
dict(type='Resize', img_scale=(1000, 600), keep_ratio=True),
dict(type='RandomFlip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='Pad', size_divisor=32),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(1000, 600),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
]
data = dict(
samples_per_gpu=2,
workers_per_gpu=2,
train=dict(
type='RepeatDataset',
times=3,
dataset=dict(
type='VOCDataset',
ann_file=[
'/home/chengyuhong/mmdetection/data/tt100k_2021/VOC2007/ImageSets/Main/trainval.txt'
],
img_prefix=[
'/home/chengyuhong/mmdetection/data/tt100k_2021/VOC2007/'
],
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True),
dict(type='Resize', img_scale=(1000, 600), keep_ratio=True),
dict(type='RandomFlip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='Pad', size_divisor=32),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
])),
val=dict(
type='VOCDataset',
ann_file=
'/home/chengyuhong/mmdetection/data/tt100k_2021/VOC2007/ImageSets/Main/test.txt',
img_prefix='/home/chengyuhong/mmdetection/data/tt100k_2021/VOC2007/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(1000, 600),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
]),
test=dict(
type='VOCDataset',
ann_file=
'/home/chengyuhong/mmdetection/data/tt100k_2021/VOC2007/ImageSets/Main/test.txt',
img_prefix='/home/chengyuhong/mmdetection/data/tt100k_2021/VOC2007/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(1000, 600),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(
type='Normalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
]))
evaluation = dict(interval=1, metric='mAP')
checkpoint_config = dict(interval=1)
log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])
custom_hooks = [dict(type='NumClassCheckHook')]
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
lr_config = dict(policy='step', step=[3])
total_epochs = 4
work_dir = './work_dirs/faster_rcnn_r50_fpn_1x_voc0712'
gpu_ids = range(0, 1)
2021-01-31 22:13:27,410 - mmdet - INFO - load model from: torchvision://resnet50
2021-01-31 22:13:27,935 - mmdet - WARNING - The model and loaded state dict do not match exactly
unexpected key in source state_dict: fc.weight, fc.bias
Traceback (most recent call last):
File "./tools/train.py", line 187, in <module>
main()
File "./tools/train.py", line 163, in main
datasets = [build_dataset(cfg.data.train)]
File "/home/chengyuhong/TT100K/mmdet/datasets/builder.py", line 64, in build_dataset
build_dataset(cfg['dataset'], default_args), cfg['times'])
File "/home/chengyuhong/TT100K/mmdet/datasets/builder.py", line 69, in build_dataset
dataset = _concat_dataset(cfg, default_args)
File "/home/chengyuhong/TT100K/mmdet/datasets/builder.py", line 48, in _concat_dataset
datasets.append(build_dataset(data_cfg, default_args))
File "/home/chengyuhong/TT100K/mmdet/datasets/builder.py", line 71, in build_dataset
dataset = build_from_cfg(cfg, DATASETS, default_args)
File "/home/chengyuhong/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/utils/registry.py", line 171, in build_from_cfg
return obj_cls(**args)
File "/home/chengyuhong/TT100K/mmdet/datasets/voc.py", line 32, in __init__
super(VOCDataset, self).__init__(**kwargs)
File "/home/chengyuhong/TT100K/mmdet/datasets/xml_style.py", line 23, in __init__
super(XMLDataset, self).__init__(**kwargs)
File "/home/chengyuhong/TT100K/mmdet/datasets/custom.py", line 96, in __init__
valid_inds = self._filter_imgs()
File "/home/chengyuhong/TT100K/mmdet/datasets/xml_style.py", line 75, in _filter_imgs
if name in self.CLASSES:
TypeError: argument of type 'NoneType' is not iterable
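A hedged sketch of the usual workaround (the class names below are placeholders): pass your own `classes` in the dataset config so that `self.CLASSES` is not `None` when `_filter_imgs()` runs:
```python
classes = ('class_a', 'class_b', 'class_c')  # hypothetical TT100K label names
data = dict(
    train=dict(
        type='RepeatDataset',
        times=3,
        dataset=dict(
            type='VOCDataset',
            classes=classes,
            ann_file='data/tt100k_2021/VOC2007/ImageSets/Main/trainval.txt',
            img_prefix='data/tt100k_2021/VOC2007/',
            pipeline=train_pipeline)))  # train_pipeline as defined above
```
Alternatively, define `CLASSES` on a small `VOCDataset` subclass registered for the custom data.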
| 2021-01-31T14:25:14Z | [] | [] |
Traceback (most recent call last):
File "./tools/train.py", line 187, in <module>
main()
File "./tools/train.py", line 163, in main
datasets = [build_dataset(cfg.data.train)]
File "/home/chengyuhong/TT100K/mmdet/datasets/builder.py", line 64, in build_dataset
build_dataset(cfg['dataset'], default_args), cfg['times'])
File "/home/chengyuhong/TT100K/mmdet/datasets/builder.py", line 69, in build_dataset
dataset = _concat_dataset(cfg, default_args)
File "/home/chengyuhong/TT100K/mmdet/datasets/builder.py", line 48, in _concat_dataset
datasets.append(build_dataset(data_cfg, default_args))
File "/home/chengyuhong/TT100K/mmdet/datasets/builder.py", line 71, in build_dataset
dataset = build_from_cfg(cfg, DATASETS, default_args)
File "/home/chengyuhong/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/utils/registry.py", line 171, in build_from_cfg
return obj_cls(**args)
File "/home/chengyuhong/TT100K/mmdet/datasets/voc.py", line 32, in __init__
super(VOCDataset, self).__init__(**kwargs)
File "/home/chengyuhong/TT100K/mmdet/datasets/xml_style.py", line 23, in __init__
super(XMLDataset, self).__init__(**kwargs)
File "/home/chengyuhong/TT100K/mmdet/datasets/custom.py", line 96, in __init__
valid_inds = self._filter_imgs()
File "/home/chengyuhong/TT100K/mmdet/datasets/xml_style.py", line 75, in _filter_imgs
if name in self.CLASSES:
TypeError: argument of type 'NoneType' is not iterable
| 10,650 |
||||
open-mmlab/mmdetection | open-mmlab__mmdetection-5654 | 475c6bec197e6b495d48146636301590165d8f66 | diff --git a/mmdet/models/detectors/detr.py b/mmdet/models/detectors/detr.py
--- a/mmdet/models/detectors/detr.py
+++ b/mmdet/models/detectors/detr.py
@@ -1,3 +1,5 @@
+import warnings
+
import torch
from ..builder import DETECTORS
@@ -19,6 +21,27 @@ def __init__(self,
super(DETR, self).__init__(backbone, None, bbox_head, train_cfg,
test_cfg, pretrained, init_cfg)
+ # over-write `forward_dummy` because:
+ # the forward of bbox_head requires img_metas
+ def forward_dummy(self, img):
+ """Used for computing network flops.
+
+ See `mmdetection/tools/analysis_tools/get_flops.py`
+ """
+ warnings.warn('Warning! MultiheadAttention in DETR does not '
+ 'support flops computation! Do not use the '
+ 'results in your papers!')
+
+ batch_size, _, height, width = img.shape
+ dummy_img_metas = [
+ dict(
+ batch_input_shape=(height, width),
+ img_shape=(height, width, 3)) for _ in range(batch_size)
+ ]
+ x = self.extract_feat(img)
+ outs = self.bbox_head(x, dummy_img_metas)
+ return outs
+
# over-write `onnx_export` because:
# (1) the forward of bbox_head requires img_metas
# (2) the different behavior (e.g. construction of `masks`) between
| Error get params DETR/ Deformable DETR
This happens despite my attempts to modify things, and also when just testing with the basic DETR config file.
Maybe this issue has already been raised?
mmdet==2.13.0
mmcv=1.3.3
```python
python tools/analysis_tools/get_flops.py configs/detr/detr_r50_8x2_150e_coco.py
```
```python
/home/bluav/mmdetection/mmdet/models/backbones/resnet.py:400: UserWarning: DeprecationWarning: pretrained is a deprecated, please use "init_cfg" instead
warnings.warn('DeprecationWarning: pretrained is a deprecated, '
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Warning: variables __flops__ or __params__ are already defined for the moduleReLU ptflops can affect your code!
Traceback (most recent call last):
File "tools/analysis_tools/get_flops.py", line 81, in <module>
main()
File "tools/analysis_tools/get_flops.py", line 71, in main
flops, params = get_model_complexity_info(model, input_shape)
File "/home/bluav/.conda/envs/open-mmlab/lib/python3.7/site-packages/mmcv/cnn/utils/flops_counter.py", line 104, in get_model_complexity_info
_ = flops_model(batch)
File "/home/bluav/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/bluav/mmdetection/mmdet/models/detectors/single_stage.py", line 48, in forward_dummy
outs = self.bbox_head(x)
File "/home/bluav/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'img_metas'
```
| 2021-07-20T05:56:04Z | [] | [] |
Traceback (most recent call last):
File "tools/analysis_tools/get_flops.py", line 81, in <module>
main()
File "tools/analysis_tools/get_flops.py", line 71, in main
flops, params = get_model_complexity_info(model, input_shape)
File "/home/bluav/.conda/envs/open-mmlab/lib/python3.7/site-packages/mmcv/cnn/utils/flops_counter.py", line 104, in get_model_complexity_info
_ = flops_model(batch)
File "/home/bluav/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/bluav/mmdetection/mmdet/models/detectors/single_stage.py", line 48, in forward_dummy
outs = self.bbox_head(x)
File "/home/bluav/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'img_metas'
| 10,660 |
||||
open-mmlab/mmdetection | open-mmlab__mmdetection-5884 | 6882fa0a220880e6e2e7e0536037f043b9031185 | diff --git a/mmdet/models/roi_heads/bbox_heads/bbox_head.py b/mmdet/models/roi_heads/bbox_heads/bbox_head.py
--- a/mmdet/models/roi_heads/bbox_heads/bbox_head.py
+++ b/mmdet/models/roi_heads/bbox_heads/bbox_head.py
@@ -455,9 +455,15 @@ def regress_by_class(self, rois, label, bbox_pred, img_meta):
"""Regress the bbox for the predicted class. Used in Cascade R-CNN.
Args:
- rois (Tensor): shape (n, 4) or (n, 5)
- label (Tensor): shape (n, )
- bbox_pred (Tensor): shape (n, 4*(#class)) or (n, 4)
+ rois (Tensor): Rois from `rpn_head` or last stage
+ `bbox_head`, has shape (num_proposals, 4) or
+ (num_proposals, 5).
+ label (Tensor): Only used when `self.reg_class_agnostic`
+ is False, has shape (num_proposals, ).
+ bbox_pred (Tensor): Regression prediction of
+ current stage `bbox_head`. When `self.reg_class_agnostic`
+ is False, it has shape (n, num_classes * 4), otherwise
+ it has shape (n, 4).
img_meta (dict): Image meta info.
Returns:
diff --git a/mmdet/models/roi_heads/cascade_roi_head.py b/mmdet/models/roi_heads/cascade_roi_head.py
--- a/mmdet/models/roi_heads/cascade_roi_head.py
+++ b/mmdet/models/roi_heads/cascade_roi_head.py
@@ -280,7 +280,28 @@ def forward_train(self,
return losses
def simple_test(self, x, proposal_list, img_metas, rescale=False):
- """Test without augmentation."""
+ """Test without augmentation.
+
+ Args:
+ x (tuple[Tensor]): Features from upstream network. Each
+ has shape (batch_size, c, h, w).
+ proposal_list (list(Tensor)): Proposals from rpn head.
+ Each has shape (num_proposals, 5), last dimension
+ 5 represent (x1, y1, x2, y2, score).
+ img_metas (list[dict]): Meta information of images.
+ rescale (bool): Whether to rescale the results to
+ the original image. Default: True.
+
+ Returns:
+ list[list[np.ndarray]] or list[tuple]: When no mask branch,
+ it is bbox results of each image and classes with type
+ `list[list[np.ndarray]]`. The outer list
+ corresponds to each image. The inner list
+ corresponds to each class. When the model has mask branch,
+ it contains bbox results and mask results.
+ The outer list corresponds to each image, and first element
+ of tuple is bbox results, second element is mask results.
+ """
assert self.with_bbox, 'Bbox head must be implemented.'
num_imgs = len(proposal_list)
img_shapes = tuple(meta['img_shape'] for meta in img_metas)
@@ -340,7 +361,7 @@ def simple_test(self, x, proposal_list, img_metas, rescale=False):
if rois[j].shape[0] > 0:
bbox_label = cls_score[j][:, :-1].argmax(dim=1)
refined_rois = self.bbox_head[i].regress_by_class(
- rois[j], bbox_label[j], bbox_pred[j], img_metas[j])
+ rois[j], bbox_label, bbox_pred[j], img_metas[j])
refine_rois_list.append(refined_rois)
rois = torch.cat(refine_rois_list)
diff --git a/mmdet/models/roi_heads/htc_roi_head.py b/mmdet/models/roi_heads/htc_roi_head.py
--- a/mmdet/models/roi_heads/htc_roi_head.py
+++ b/mmdet/models/roi_heads/htc_roi_head.py
@@ -326,7 +326,28 @@ def forward_train(self,
return losses
def simple_test(self, x, proposal_list, img_metas, rescale=False):
- """Test without augmentation."""
+ """Test without augmentation.
+
+ Args:
+ x (tuple[Tensor]): Features from upstream network. Each
+ has shape (batch_size, c, h, w).
+ proposal_list (list(Tensor)): Proposals from rpn head.
+ Each has shape (num_proposals, 5), last dimension
+ 5 represent (x1, y1, x2, y2, score).
+ img_metas (list[dict]): Meta information of images.
+ rescale (bool): Whether to rescale the results to
+ the original image. Default: True.
+
+ Returns:
+ list[list[np.ndarray]] or list[tuple]: When no mask branch,
+ it is bbox results of each image and classes with type
+ `list[list[np.ndarray]]`. The outer list
+ corresponds to each image. The inner list
+ corresponds to each class. When the model has mask branch,
+ it contains bbox results and mask results.
+ The outer list corresponds to each image, and first element
+ of tuple is bbox results, second element is mask results.
+ """
if self.with_semantic:
_, semantic_feat = self.semantic_head(x)
else:
@@ -381,7 +402,7 @@ def simple_test(self, x, proposal_list, img_metas, rescale=False):
if rois[j].shape[0] > 0:
bbox_label = cls_score[j][:, :-1].argmax(dim=1)
refine_rois = bbox_head.regress_by_class(
- rois[j], bbox_label[j], bbox_pred[j], img_metas[j])
+ rois[j], bbox_label, bbox_pred[j], img_metas[j])
refine_rois_list.append(refine_rois)
rois = torch.cat(refine_rois_list)
diff --git a/mmdet/models/roi_heads/scnet_roi_head.py b/mmdet/models/roi_heads/scnet_roi_head.py
--- a/mmdet/models/roi_heads/scnet_roi_head.py
+++ b/mmdet/models/roi_heads/scnet_roi_head.py
@@ -213,26 +213,19 @@ def forward_train(self,
"""
Args:
x (list[Tensor]): list of multi-level img features.
-
img_metas (list[dict]): list of image info dict where each dict
has: 'img_shape', 'scale_factor', 'flip', and may also contain
'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
For details on the values of these keys see
`mmdet/datasets/pipelines/formatting.py:Collect`.
-
proposal_list (list[Tensors]): list of region proposals.
-
gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
-
gt_labels (list[Tensor]): class indices corresponding to each box
-
gt_bboxes_ignore (None, list[Tensor]): specify which bounding
boxes can be ignored when computing the loss.
-
gt_masks (None, Tensor) : true segmentation masks for each box
used if the architecture supports a segmentation task.
-
gt_semantic_seg (None, list[Tensor]): semantic segmentation masks
used if the architecture supports semantic segmentation task.
@@ -317,7 +310,28 @@ def forward_train(self,
return losses
def simple_test(self, x, proposal_list, img_metas, rescale=False):
- """Test without augmentation."""
+ """Test without augmentation.
+
+ Args:
+ x (tuple[Tensor]): Features from upstream network. Each
+ has shape (batch_size, c, h, w).
+ proposal_list (list(Tensor)): Proposals from rpn head.
+ Each has shape (num_proposals, 5), last dimension
+ 5 represent (x1, y1, x2, y2, score).
+ img_metas (list[dict]): Meta information of images.
+ rescale (bool): Whether to rescale the results to
+ the original image. Default: True.
+
+ Returns:
+ list[list[np.ndarray]] or list[tuple]: When no mask branch,
+ it is bbox results of each image and classes with type
+ `list[list[np.ndarray]]`. The outer list
+ corresponds to each image. The inner list
+ corresponds to each class. When the model has mask branch,
+ it contains bbox results and mask results.
+ The outer list corresponds to each image, and first element
+ of tuple is bbox results, second element is mask results.
+ """
if self.with_semantic:
_, semantic_feat = self.semantic_head(x)
else:
@@ -379,7 +393,7 @@ def simple_test(self, x, proposal_list, img_metas, rescale=False):
if rois[j].shape[0] > 0:
bbox_label = cls_score[j][:, :-1].argmax(dim=1)
refine_rois = bbox_head.regress_by_class(
- rois[j], bbox_label[j], bbox_pred[j], img_metas[j])
+ rois[j], bbox_label, bbox_pred[j], img_metas[j])
refine_rois_list.append(refine_rois)
rois = torch.cat(refine_rois_list)
diff --git a/mmdet/models/roi_heads/standard_roi_head.py b/mmdet/models/roi_heads/standard_roi_head.py
--- a/mmdet/models/roi_heads/standard_roi_head.py
+++ b/mmdet/models/roi_heads/standard_roi_head.py
@@ -224,7 +224,28 @@ def simple_test(self,
img_metas,
proposals=None,
rescale=False):
- """Test without augmentation."""
+ """Test without augmentation.
+
+ Args:
+ x (tuple[Tensor]): Features from upstream network. Each
+ has shape (batch_size, c, h, w).
+ proposal_list (list(Tensor)): Proposals from rpn head.
+ Each has shape (num_proposals, 5), last dimension
+ 5 represent (x1, y1, x2, y2, score).
+ img_metas (list[dict]): Meta information of images.
+ rescale (bool): Whether to rescale the results to
+ the original image. Default: True.
+
+ Returns:
+ list[list[np.ndarray]] or list[tuple]: When no mask branch,
+ it is bbox results of each image and classes with type
+ `list[list[np.ndarray]]`. The outer list
+ corresponds to each image. The inner list
+ corresponds to each class. When the model has mask branch,
+ it contains bbox results and mask results.
+ The outer list corresponds to each image, and first element
+ of tuple is bbox results, second element is mask results.
+ """
assert self.with_bbox, 'Bbox head must be implemented.'
det_bboxes, det_labels = self.simple_test_bboxes(
| bbox_label dimension incorrect in cascade_roi_head.py simple_test
**Describe the bug**
I am trying the Cascade R-CNN network on a test dataset with a single image. After the training begins, the `regress_by_class` method called in `simple_test` of `cascade_roi_head.py` produces an error.
**Error traceback**
```
2021-08-07 12:48:22,062 - mmdet - INFO - Saving checkpoint at 1 epochs
[ ] 0/1, elapsed: 0s
Traceback (most recent call last):
File "tools/train.py", line 188, in <module>
main()
File "tools/train.py", line 177, in main
train_detector(
File "c:\users\colli\mmdetection\mmdet\apis\train.py", line 170, in train_detector
runner.run(data_loaders, cfg.workflow)
File "C:\Users\colli\anaconda3\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 127, in run
epoch_runner(data_loaders[i], **kwargs)
File "C:\Users\colli\anaconda3\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 54, in train
self.call_hook('after_train_epoch')
File "C:\Users\colli\anaconda3\lib\site-packages\mmcv\runner\base_runner.py", line 307, in call_hook
getattr(hook, fn_name)(self)
File "C:\Users\colli\anaconda3\lib\site-packages\mmcv\runner\hooks\evaluation.py", line 220, in after_train_epoch
self._do_evaluate(runner)
File "c:\users\colli\mmdetection\mmdet\core\evaluation\eval_hooks.py", line 17, in _do_evaluate
results = single_gpu_test(runner.model, self.dataloader, show=False)
File "c:\users\colli\mmdetection\mmdet\apis\test.py", line 27, in single_gpu_test
result = model(return_loss=False, rescale=True, **data)
File "C:\Users\colli\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\colli\anaconda3\lib\site-packages\mmcv\parallel\data_parallel.py", line 42, in forward
return super().forward(*inputs, **kwargs)
File "C:\Users\colli\anaconda3\lib\site-packages\torch\nn\parallel\data_parallel.py", line 166, in forward
return self.module(*inputs[0], **kwargs[0])
File "C:\Users\colli\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\colli\anaconda3\lib\site-packages\mmcv\runner\fp16_utils.py", line 98, in new_func
return old_func(*args, **kwargs)
File "c:\users\colli\mmdetection\mmdet\models\detectors\base.py", line 173, in forward
return self.forward_test(img, img_metas, **kwargs)
File "c:\users\colli\mmdetection\mmdet\models\detectors\base.py", line 146, in forward_test
return self.simple_test(imgs[0], img_metas[0], **kwargs)
File "c:\users\colli\mmdetection\mmdet\models\detectors\two_stage.py", line 181, in simple_test
return self.roi_head.simple_test(
File "c:\users\colli\mmdetection\mmdet\models\roi_heads\cascade_roi_head.py", line 344, in simple_test
refined_rois = self.bbox_head[i].regress_by_class(
File "C:\Users\colli\anaconda3\lib\site-packages\mmcv\runner\fp16_utils.py", line 186, in new_func
return old_func(*args, **kwargs)
File "c:\users\colli\mmdetection\mmdet\models\roi_heads\bbox_heads\bbox_head.py", line 471, in regress_by_class
inds = torch.stack((label, label + 1, label + 2, label + 3), 1)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
```
**Environment**
```
sys.platform: win32
Python: 3.8.8 (default, Apr 13 2021, 15:08:03) [MSC v.1916 64 bit (AMD64)]
CUDA available: True
GPU 0: NVIDIA GeForce RTX 2060 SUPER
CUDA_HOME: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2
NVCC: Not Available
GCC: n/a
PyTorch: 1.9.0
PyTorch compiling details: PyTorch built with:
- C++ Version: 199711
- MSVC 192829337
- Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.1.2 (Git Hash 98be7e8afa711dc9b66c8ff3504129cb82013cdb)
- OpenMP 2019
- CPU capability usage: AVX2
- CUDA Runtime 10.2
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
- CuDNN 7.6.5
- Magma 2.5.4
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=10.2, CUDNN_VERSION=7.6.5, CXX_COMPILER=C:/cb/pytorch_1000000000000/work/tmp_bin/sccache-cl.exe, CXX_FLAGS=/DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/cb/pytorch_1000000000000/work/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.9.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=OFF, USE_OPENMP=ON,
TorchVision: 0.10.0
OpenCV: 4.5.2
MMCV: 1.3.9
MMCV Compiler: MSVC 192729111
MMCV CUDA Compiler: 11.2
MMDetection: 2.15.0+62a1cd3
```
**Bug fix**
https://github.com/open-mmlab/mmdetection/blob/62a1cd3fd6091fc4efa83627631ab4b99a8d005c/mmdet/models/roi_heads/cascade_roi_head.py#L332-L345
https://github.com/open-mmlab/mmdetection/blob/46988b3ac9820bcd8728980f04be98272ee5ea39/mmdet/models/roi_heads/bbox_heads/bbox_head.py#L453-L465
`regress_by_class` expects a label of shape (n, ). In the code above, bbox_label is produced in line 341 by slicing cls_score, so it is already a 1-dimensional tensor; in line 343 it is indexed again, which turns it into a zero-dimensional scalar. I guess this produces the problem. After changing it to
```
refined_rois = self.bbox_head[i].regress_by_class(rois[j], bbox_label, bbox_pred[j], img_metas[j])
```
The code runs correctly.
The change in these lines is made in fd5d019ca983b032c35d15cd900b0fa6eec4f988 by @hhaAndroid, can you have a look at it? Am I understanding it correctly?
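A minimal, self-contained sketch of the shape issue described above (hypothetical tensor sizes, purely illustrative; these are not the model's real outputs):
```
import torch

# Hypothetical per-image scores: 4 proposals, 3 foreground classes.
cls_score_j = torch.rand(4, 3)
bbox_label = cls_score_j.argmax(dim=1)   # shape (4,), what regress_by_class expects
print(bbox_label.shape)                  # torch.Size([4])

scalar_label = bbox_label[0]             # the extra indexing yields a 0-d tensor
print(scalar_label.shape)                # torch.Size([])

# Inside regress_by_class, torch.stack((label, label + 1, label + 2, label + 3), 1)
# then fails for the 0-d label, because the stacked result has no dimension 1,
# which is exactly the "Dimension out of range ... but got 1" error above.
```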
| @hhaAndroid can you pls take a look at it?
Can anyone have a look at this issue? | 2021-08-14T03:18:04Z | [] | [] |
Traceback (most recent call last):
File "tools/train.py", line 188, in <module>
main()
File "tools/train.py", line 177, in main
train_detector(
File "c:\users\colli\mmdetection\mmdet\apis\train.py", line 170, in train_detector
runner.run(data_loaders, cfg.workflow)
File "C:\Users\colli\anaconda3\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 127, in run
epoch_runner(data_loaders[i], **kwargs)
File "C:\Users\colli\anaconda3\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 54, in train
self.call_hook('after_train_epoch')
File "C:\Users\colli\anaconda3\lib\site-packages\mmcv\runner\base_runner.py", line 307, in call_hook
getattr(hook, fn_name)(self)
File "C:\Users\colli\anaconda3\lib\site-packages\mmcv\runner\hooks\evaluation.py", line 220, in after_train_epoch
self._do_evaluate(runner)
File "c:\users\colli\mmdetection\mmdet\core\evaluation\eval_hooks.py", line 17, in _do_evaluate
results = single_gpu_test(runner.model, self.dataloader, show=False)
File "c:\users\colli\mmdetection\mmdet\apis\test.py", line 27, in single_gpu_test
result = model(return_loss=False, rescale=True, **data)
File "C:\Users\colli\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\colli\anaconda3\lib\site-packages\mmcv\parallel\data_parallel.py", line 42, in forward
return super().forward(*inputs, **kwargs)
File "C:\Users\colli\anaconda3\lib\site-packages\torch\nn\parallel\data_parallel.py", line 166, in forward
return self.module(*inputs[0], **kwargs[0])
File "C:\Users\colli\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\colli\anaconda3\lib\site-packages\mmcv\runner\fp16_utils.py", line 98, in new_func
return old_func(*args, **kwargs)
File "c:\users\colli\mmdetection\mmdet\models\detectors\base.py", line 173, in forward
return self.forward_test(img, img_metas, **kwargs)
File "c:\users\colli\mmdetection\mmdet\models\detectors\base.py", line 146, in forward_test
return self.simple_test(imgs[0], img_metas[0], **kwargs)
File "c:\users\colli\mmdetection\mmdet\models\detectors\two_stage.py", line 181, in simple_test
return self.roi_head.simple_test(
File "c:\users\colli\mmdetection\mmdet\models\roi_heads\cascade_roi_head.py", line 344, in simple_test
refined_rois = self.bbox_head[i].regress_by_class(
File "C:\Users\colli\anaconda3\lib\site-packages\mmcv\runner\fp16_utils.py", line 186, in new_func
return old_func(*args, **kwargs)
File "c:\users\colli\mmdetection\mmdet\models\roi_heads\bbox_heads\bbox_head.py", line 471, in regress_by_class
inds = torch.stack((label, label + 1, label + 2, label + 3), 1)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
| 10,662 |
|||
open-mmlab/mmdetection | open-mmlab__mmdetection-5930 | 0d2b39b195031b3b7ede73ebaa0a04150d1f332b | diff --git a/mmdet/datasets/pipelines/transforms.py b/mmdet/datasets/pipelines/transforms.py
--- a/mmdet/datasets/pipelines/transforms.py
+++ b/mmdet/datasets/pipelines/transforms.py
@@ -2456,7 +2456,7 @@ def __call__(self, results):
width = img.shape[1] + self.border[1] * 2
# Center
- center_matrix = np.eye(3)
+ center_matrix = np.eye(3, dtype=np.float32)
center_matrix[0, 2] = -img.shape[1] / 2 # x translation (pixels)
center_matrix[1, 2] = -img.shape[0] / 2 # y translation (pixels)
@@ -2561,21 +2561,24 @@ def __repr__(self):
@staticmethod
def _get_rotation_matrix(rotate_degrees):
radian = math.radians(rotate_degrees)
- rotation_matrix = np.array([[np.cos(radian), -np.sin(radian), 0.],
- [np.sin(radian),
- np.cos(radian), 0.], [0., 0., 1.]])
+ rotation_matrix = np.array(
+ [[np.cos(radian), -np.sin(radian), 0.],
+ [np.sin(radian), np.cos(radian), 0.], [0., 0., 1.]],
+ dtype=np.float32)
return rotation_matrix
@staticmethod
def _get_scaling_matrix(scale_ratio):
- scaling_matrix = np.array([[scale_ratio, 0., 0.],
- [0., scale_ratio, 0.], [0., 0., 1.]])
+ scaling_matrix = np.array(
+ [[scale_ratio, 0., 0.], [0., scale_ratio, 0.], [0., 0., 1.]],
+ dtype=np.float32)
return scaling_matrix
@staticmethod
def _get_share_matrix(scale_ratio):
- scaling_matrix = np.array([[scale_ratio, 0., 0.],
- [0., scale_ratio, 0.], [0., 0., 1.]])
+ scaling_matrix = np.array(
+ [[scale_ratio, 0., 0.], [0., scale_ratio, 0.], [0., 0., 1.]],
+ dtype=np.float32)
return scaling_matrix
@staticmethod
@@ -2583,10 +2586,12 @@ def _get_shear_matrix(x_shear_degrees, y_shear_degrees):
x_radian = math.radians(x_shear_degrees)
y_radian = math.radians(y_shear_degrees)
shear_matrix = np.array([[1, np.tan(x_radian), 0.],
- [np.tan(y_radian), 1, 0.], [0., 0., 1.]])
+ [np.tan(y_radian), 1, 0.], [0., 0., 1.]],
+ dtype=np.float32)
return shear_matrix
@staticmethod
def _get_translation_matrix(x, y):
- translation_matrix = np.array([[1, 0., x], [0., 1, y], [0., 0., 1.]])
+ translation_matrix = np.array([[1, 0., x], [0., 1, y], [0., 0., 1.]],
+ dtype=np.float32)
return translation_matrix
| RuntimeError: Expected object of scalar type Double but got scalar type Float for argument #2 'other' in call to _th_max when using Mosaic/MixUp in Cascade R-CNN
As mmdetection recently released 2.15.1,
I want to use Mosaic and MixUp in Cascade R-CNN.
However, when I copy the relevant config from YOLOX into Cascade R-CNN, I got the following error:
-------------------------------------------------------------------------------------
2021-08-12 06:53:24,512 - mmdet - INFO - workflow: [('train', 1)], max: 40 epochs
Traceback (most recent call last):
File "tools/train.py", line 188, in <module>
main()
File "tools/train.py", line 184, in main
meta=meta)
File "mmdetection-2.15.1/mmdet/apis/train.py", line 170, in train_detector
runner.run(data_loaders, cfg.workflow)
File "mmcv-1.3.9/mmcv/runner/epoch_based_runner.py", line 127, in run
epoch_runner(data_loaders[i], **kwargs)
File "mmcv-1.3.9/mmcv/runner/epoch_based_runner.py", line 50, in train
self.run_iter(data_batch, train_mode=True, **kwargs)
File "mmcv-1.3.9/mmcv/runner/epoch_based_runner.py", line 30, in run_iter
**kwargs)
File "mmcv-1.3.9/mmcv/parallel/data_parallel.py", line 67, in train_step
return self.module.train_step(*inputs[0], **kwargs[0])
File "mmdetection-2.15.1/mmdet/models/detectors/base.py", line 237, in train_step
losses = self(**data)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "mmcv-1.3.9/mmcv/runner/fp16_utils.py", line 98, in new_func
return old_func(*args, **kwargs)
File "mmdetection-2.15.1/mmdet/models/detectors/base.py", line 171, in forward
return self.forward_train(img, img_metas, **kwargs)
File "mmdetection-2.15.1/mmdet/models/detectors/two_stage.py", line 140, in forward_train
proposal_cfg=proposal_cfg)
File "mmdetection-2.15.1/mmdet/models/dense_heads/base_dense_head.py", line 54, in forward_train
losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore)
File "mmdetection-2.15.1/mmdet/models/dense_heads/rpn_head.py", line 74, in loss
gt_bboxes_ignore=gt_bboxes_ignore)
File "mmcv-1.3.9/mmcv/runner/fp16_utils.py", line 186, in new_func
return old_func(*args, **kwargs)
File "mmdetection-2.15.1/mmdet/models/dense_heads/anchor_head.py", line 463, in loss
label_channels=label_channels)
File "mmdetection-2.15.1/mmdet/models/dense_heads/anchor_head.py", line 345, in get_targets
unmap_outputs=unmap_outputs)
File "mmdetection-2.15.1/mmdet/core/utils/misc.py", line 29, in multi_apply
return tuple(map(list, zip(*map_results)))
File "mmdetection-2.15.1/mmdet/models/dense_heads/anchor_head.py", line 219, in _get_targets_single
None if self.sampling else gt_labels)
File "mmdetection-2.15.1/mmdet/core/bbox/assigners/max_iou_assigner.py", line 105, in assign
overlaps = self.iou_calculator(gt_bboxes, bboxes)
File "mmdetection-2.15.1/mmdet/core/bbox/iou_calculators/iou2d_calculator.py", line 65, in __call__
return bbox_overlaps(bboxes1, bboxes2, mode, is_aligned)
File "mmdetection-2.15.1/mmdet/core/bbox/iou_calculators/iou2d_calculator.py", line 233, in bbox_overlaps
bboxes2[..., None, :, :2]) # [B, rows, cols, 2]
RuntimeError: Expected object of scalar type Double but got scalar type Float for argument #2 'other' in call to _th_max
-------------------------------------------------------------------------------------
And below is my config:
-------------------------------------------------------------------------------------
model = dict(
type='CascadeRCNN',
backbone=dict(
type='ResNeXt',
depth=101,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
norm_eval=True,
style='pytorch',
init_cfg=dict(
type='Pretrained',
checkpoint=
'pretrained_model/resnext101_64x4d-ee2c6f71.pth'
),
groups=64,
base_width=4),
neck=dict(
type='FPN',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
rpn_head=dict(
type='RPNHead',
in_channels=256,
feat_channels=256,
anchor_generator=dict(
type='AnchorGenerator',
scales=[8],
ratios=[0.5, 1.0, 2.0],
strides=[4, 8, 16, 32, 64]),
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[1.0, 1.0, 1.0, 1.0]),
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(
type='SmoothL1Loss', beta=0.1111111111111111, loss_weight=1.0)),
roi_head=dict(
type='CascadeRoIHead',
num_stages=3,
stage_loss_weights=[1, 0.5, 0.25],
bbox_roi_extractor=dict(
type='SingleRoIExtractor',
roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
bbox_head=[
dict(
type='Shared2FCBBoxHead',
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=12,
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=True,
loss_cls=dict(
type='CrossEntropyLoss',
use_sigmoid=False,
loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
loss_weight=1.0)),
dict(
type='Shared2FCBBoxHead',
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=12,
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[0.05, 0.05, 0.1, 0.1]),
reg_class_agnostic=True,
loss_cls=dict(
type='CrossEntropyLoss',
use_sigmoid=False,
loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
loss_weight=1.0)),
dict(
type='Shared2FCBBoxHead',
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=12,
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[0.033, 0.033, 0.067, 0.067]),
reg_class_agnostic=True,
loss_cls=dict(
type='CrossEntropyLoss',
use_sigmoid=False,
loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))
]),
train_cfg=dict(
rpn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=0,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_pre=2000,
max_per_img=2000,
nms=dict(type='nms', iou_threshold=0.7),
min_bbox_size=0),
rcnn=[
dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False),
dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.6,
neg_iou_thr=0.6,
min_pos_iou=0.6,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False),
dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.7,
neg_iou_thr=0.7,
min_pos_iou=0.7,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False)
]),
test_cfg=dict(
rpn=dict(
nms_pre=1000,
max_per_img=1000,
nms=dict(type='nms', iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
score_thr=0.05,
nms=dict(type='nms', iou_threshold=0.5),
max_per_img=100)))
dataset_type = 'CocoDataset'
data_root = 'dataset/trainval/'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
img_scale = (832, 832)
train_pipeline = [
dict(type='Mosaic', img_scale=img_scale, pad_val=114.0),
dict(
type='RandomAffine',
scaling_ratio_range=(0.1, 2),
border=(-img_scale[0] // 2, -img_scale[1] // 2)),
dict(
type='MixUp',
img_scale=img_scale,
ratio_range=(0.8, 1.6),
pad_val=114.0),
dict(
type='PhotoMetricDistortion',
brightness_delta=32,
contrast_range=(0.5, 1.5),
saturation_range=(0.5, 1.5),
hue_delta=18),
dict(type='RandomFlip', flip_ratio=0.5),
dict(type='Resize', keep_ratio=True),
dict(type='Pad', pad_to_square=True, pad_val=114.0),
dict(type='Normalize', **img_norm_cfg),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=img_scale,
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(type='Pad', size=img_scale, pad_val=114.0),
dict(type='Normalize', **img_norm_cfg),
dict(type='ImageToTensor', keys=['img']), #try
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img'])
])
]
data = dict(
samples_per_gpu=8,
workers_per_gpu=2,
train=dict(
type='MultiImageMixDataset',
dataset=dict(
type=dataset_type,
ann_file=
'dataset/trainval/annotations/instances_train2017.json',
img_prefix='dataset/trainval/',
pipeline=[
dict(type='LoadImageFromFile', to_float32=True),
dict(type='LoadAnnotations', with_bbox=True)
],
filter_empty_gt=False,
),
pipeline=train_pipeline,
dynamic_scale=img_scale),
val=dict(
type=dataset_type,
ann_file=
'dataset/trainval/annotations/instances_val2017.json',
img_prefix='dataset/trainval/',
pipeline=test_pipeline),
test=dict(
type=dataset_type,
ann_file=
'dataset/trainval/annotations/instances_val2017.json',
img_prefix='dataset/trainval/',
)
)
evaluation = dict(interval=1, metric='bbox', save_best='bbox_mAP_50')
optimizer = dict(
type='SGD',
lr=0.01,
momentum=0.9,
weight_decay=0.0005,
nesterov=True,
paramwise_cfg=dict(norm_decay_mult=0.0, bias_decay_mult=0.0))
optimizer_config = dict(grad_clip=None)
lr_config = dict(
policy='YOLOX',
warmup='exp',
by_epoch=False,
warmup_by_epoch=True,
warmup_ratio=1,
warmup_iters=5,
num_last_epochs=15,
min_lr_ratio=0.05)
runner = dict(type='EpochBasedRunner', max_epochs=40)
checkpoint_config = dict(interval=1)
log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])
custom_hooks = [
dict(type='YOLOXModeSwitchHook', num_last_epochs=5, priority=48),
dict(
type='SyncRandomSizeHook',
ratio_range=(14, 26),
img_scale=(640, 640),
interval=1,
priority=48),
dict(type='SyncNormHook', num_last_epochs=15, interval=1, priority=48),
dict(type='ExpMomentumEMAHook', resume_from=None, priority=49)
]
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
work_dir = './work_dirs/cascade_rcnn_x101_64x4d_fpn_20e_coco-3-832-mixupmosaic'
gpu_ids = range(0, 1)
| @jamiechoi1995 In fact, we did not test the above combination configuration, and there may be an incompatibility.
>
>
> @jamiechoi1995 In fact, we did not test the above combination configuration, and there may be an incompatibility.
I found that this error is due to the RandomAffine, MixUp and Mosaic augmentations returning double-type bboxes.
I solved it by forcing the bbox type of the above augmentations to float32.
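(Purely as an illustration of that workaround, a minimal sketch of an extra pipeline step that casts the boxes back to float32; the merged fix above instead builds the affine matrices with dtype=np.float32, so treat the names and placement here as assumptions rather than the final solution:)
```
import numpy as np

# Hypothetical extra transform placed after Mosaic / MixUp / RandomAffine in the
# training pipeline (it would need to be registered with mmdet's PIPELINES
# registry to be usable from a config). It forces the boxes back to float32 so
# the assigner no longer receives float64 ("Double") tensors.
class CastBoxesToFloat32:

    def __call__(self, results):
        for key in ('gt_bboxes', 'gt_bboxes_ignore'):
            if key in results and results[key] is not None:
                results[key] = results[key].astype(np.float32)
        return results
```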
> > @jamiechoi1995 In fact, we did not test the above combination configuration, and there may be an incompatibility.
>
> I found that this error is due to the RandomAffine, MixUp and Mosaic augmentations returning double-type bboxes.
> I solved it by forcing the bbox type of the above augmentations to float32.
Indeed possible. Can you create a PR to fix it?
I find that it's because the dtype of warp_matrix in RandomAffine is float64, so after multiplying this matrix with the boxes, the box dtype becomes float64. | 2021-08-23T10:42:16Z | [] | [] |
Traceback (most recent call last):
File "tools/train.py", line 188, in <module>
main()
File "tools/train.py", line 184, in main
meta=meta)
File "mmdetection-2.15.1/mmdet/apis/train.py", line 170, in train_detector
runner.run(data_loaders, cfg.workflow)
File "mmcv-1.3.9/mmcv/runner/epoch_based_runner.py", line 127, in run
epoch_runner(data_loaders[i], **kwargs)
File "mmcv-1.3.9/mmcv/runner/epoch_based_runner.py", line 50, in train
self.run_iter(data_batch, train_mode=True, **kwargs)
File "mmcv-1.3.9/mmcv/runner/epoch_based_runner.py", line 30, in run_iter
**kwargs)
File "mmcv-1.3.9/mmcv/parallel/data_parallel.py", line 67, in train_step
return self.module.train_step(*inputs[0], **kwargs[0])
File "mmdetection-2.15.1/mmdet/models/detectors/base.py", line 237, in train_step
losses = self(**data)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "mmcv-1.3.9/mmcv/runner/fp16_utils.py", line 98, in new_func
return old_func(*args, **kwargs)
File "mmdetection-2.15.1/mmdet/models/detectors/base.py", line 171, in forward
return self.forward_train(img, img_metas, **kwargs)
File "mmdetection-2.15.1/mmdet/models/detectors/two_stage.py", line 140, in forward_train
proposal_cfg=proposal_cfg)
File "mmdetection-2.15.1/mmdet/models/dense_heads/base_dense_head.py", line 54, in forward_train
losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore)
File "mmdetection-2.15.1/mmdet/models/dense_heads/rpn_head.py", line 74, in loss
gt_bboxes_ignore=gt_bboxes_ignore)
File "mmcv-1.3.9/mmcv/runner/fp16_utils.py", line 186, in new_func
return old_func(*args, **kwargs)
File "mmdetection-2.15.1/mmdet/models/dense_heads/anchor_head.py", line 463, in loss
label_channels=label_channels)
File "mmdetection-2.15.1/mmdet/models/dense_heads/anchor_head.py", line 345, in get_targets
unmap_outputs=unmap_outputs)
File "mmdetection-2.15.1/mmdet/core/utils/misc.py", line 29, in multi_apply
return tuple(map(list, zip(*map_results)))
File "mmdetection-2.15.1/mmdet/models/dense_heads/anchor_head.py", line 219, in _get_targets_single
None if self.sampling else gt_labels)
File "mmdetection-2.15.1/mmdet/core/bbox/assigners/max_iou_assigner.py", line 105, in assign
overlaps = self.iou_calculator(gt_bboxes, bboxes)
File "mmdetection-2.15.1/mmdet/core/bbox/iou_calculators/iou2d_calculator.py", line 65, in __call__
return bbox_overlaps(bboxes1, bboxes2, mode, is_aligned)
File "mmdetection-2.15.1/mmdet/core/bbox/iou_calculators/iou2d_calculator.py", line 233, in bbox_overlaps
bboxes2[..., None, :, :2]) # [B, rows, cols, 2]
RuntimeError: Expected object of scalar type Double but got scalar type Float for argument #2 'other' in call to _th_max
| 10,663 |
|||
open-mmlab/mmdetection | open-mmlab__mmdetection-6079 | b1f97c1b6d2b3c735d8c5690952264c598c2f206 | diff --git a/mmdet/models/dense_heads/yolact_head.py b/mmdet/models/dense_heads/yolact_head.py
--- a/mmdet/models/dense_heads/yolact_head.py
+++ b/mmdet/models/dense_heads/yolact_head.py
@@ -663,6 +663,10 @@ def _init_layers(self):
protonets = protonets[:-1]
return nn.Sequential(*protonets)
+ def forward_dummy(self, x):
+ prototypes = self.protonet(x)
+ return prototypes
+
def forward(self, x, coeff_pred, bboxes, img_meta, sampling_results=None):
"""Forward feature from the upstream network to get prototypes and
linearly combine the prototypes, using masks coefficients, into
diff --git a/mmdet/models/detectors/yolact.py b/mmdet/models/detectors/yolact.py
--- a/mmdet/models/detectors/yolact.py
+++ b/mmdet/models/detectors/yolact.py
@@ -30,7 +30,10 @@ def forward_dummy(self, img):
See `mmdetection/tools/analysis_tools/get_flops.py`
"""
- raise NotImplementedError
+ feat = self.extract_feat(img)
+ bbox_outs = self.bbox_head(feat)
+ prototypes = self.mask_head.forward_dummy(feat[0])
+ return (bbox_outs, prototypes)
def forward_train(self,
img,
| Issue about tools\analysis_tools\get_flops.py
When I used tools/analysis_tools/get_flops.py to compute the FLOPs and parameters of YOLACT, I got a "NotImplementedError".
Traceback (most recent call last):
File "tools/analysis_tools/get_flops.py", line 81, in <module>
main()
File "tools/analysis_tools/get_flops.py", line 71, in main
flops, params = get_model_complexity_info(model, input_shape)
File "/home/scsc01/anaconda3/envs/mmlab/lib/python3.8/site-packages/mmcv/cnn/utils/flops_counter.py", line 104, in get_model_complexity_info
_ = flops_model(batch)
File "/home/scsc01/anaconda3/envs/mmlab/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/scsc01/lxx/mmlab/mmdet/models/detectors/yolact.py", line 32, in forward_dummy
raise NotImplementedError
NotImplementedError
Does get_flops.py not support the complexity calculation of YOLACT? Thank you~~
| In **yolact.py**, `forward_dummy` is not implemented
https://github.com/open-mmlab/mmdetection/blob/6882fa0a220880e6e2e7e0536037f043b9031185/mmdet/models/detectors/yolact.py#L27-L32
But in **get_flops.py**
https://github.com/open-mmlab/mmdetection/blob/6882fa0a220880e6e2e7e0536037f043b9031185/tools/analysis_tools/get_flops.py#L64-L69
I have checked `forward_dummy` in other models, and they are all implemented. I assume the expected behavior is that `forward_dummy` should not be declared in **yolact.py** at all, so that the error message would be easier to understand?
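(For context, a minimal sketch of how get_flops.py drives the model: it reroutes forward to forward_dummy before asking mmcv to count operations, which is why the missing implementation surfaces as NotImplementedError. get_model_complexity_info is mmcv's public API; the surrounding wiring is an assumption, not the exact script:)
```
from mmcv.cnn import get_model_complexity_info

def count_flops(model, input_shape=(3, 1280, 800)):
    # get_flops.py essentially does this: route forward() to forward_dummy()
    # and let mmcv push one dummy image through the model.
    if hasattr(model, 'forward_dummy'):
        model.forward = model.forward_dummy
    else:
        raise NotImplementedError(
            'FLOPs counting needs forward_dummy() on the detector.')
    model.eval()
    flops, params = get_model_complexity_info(model, input_shape)
    return flops, params
```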
For now it is not supported. We will support it in the future. | 2021-09-10T13:27:44Z | [] | [] |
Traceback (most recent call last):
File "tools/analysis_tools/get_flops.py", line 81, in <module>
main()
File "tools/analysis_tools/get_flops.py", line 71, in main
flops, params = get_model_complexity_info(model, input_shape)
File "/home/scsc01/anaconda3/envs/mmlab/lib/python3.8/site-packages/mmcv/cnn/utils/flops_counter.py", line 104, in get_model_complexity_info
_ = flops_model(batch)
File "/home/scsc01/anaconda3/envs/mmlab/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/scsc01/lxx/mmlab/mmdet/models/detectors/yolact.py", line 32, in forward_dummy
raise NotImplementedError
NotImplementedError
| 10,665 |
|||
open-mmlab/mmdetection | open-mmlab__mmdetection-7147 | ffff556edc5a96ce72ce5b4d27d1fbcaa0d22122 | diff --git a/tools/analysis_tools/confusion_matrix.py b/tools/analysis_tools/confusion_matrix.py
--- a/tools/analysis_tools/confusion_matrix.py
+++ b/tools/analysis_tools/confusion_matrix.py
@@ -207,7 +207,10 @@ def plot_confusion_matrix(confusion_matrix,
ax.text(
j,
i,
- '{}%'.format(int(confusion_matrix[i, j])),
+ '{}%'.format(
+ int(confusion_matrix[
+ i,
+ j]) if not np.isnan(confusion_matrix[i, j]) else -1),
ha='center',
va='center',
color='w',
| Confusion matrix error
While trying to generate the confusion matrix with this command:
```
python tools/analysis_tools/confusion_matrix.py ./work_dirs/perception-types--D06-01-2022--T09-23-45/perception-types.py results.pkl ./temp --show
```
I ran into this error:
```
Traceback (most recent call last):
File "tools/analysis_tools/confusion_matrix.py", line 261, in <module>
main()
File "tools/analysis_tools/confusion_matrix.py", line 257, in main
show=args.show)
File "tools/analysis_tools/confusion_matrix.py", line 210, in plot_confusion_matrix
'{}%'.format(int(confusion_matrix[i, j])),
ValueError: cannot convert float NaN to integer
```
Would appreciate any help or suggestions! Thanks
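(From the traceback, the failing cell is a NaN, most likely because a class with no ground-truth samples leaves a 0/0 row after percentage normalization. A minimal sketch of the kind of guard the fix above applies, with a made-up matrix:)
```
import numpy as np

# Hypothetical normalized confusion matrix: the second class never appears
# in the ground truth, so its row is NaN after normalization.
cm = np.array([[90., 10.], [np.nan, np.nan]])

for i in range(cm.shape[0]):
    for j in range(cm.shape[1]):
        label = '{}%'.format(int(cm[i, j]) if not np.isnan(cm[i, j]) else -1)
        print(i, j, label)
```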
| I ran into the same problem so I would appreciate if someone could help us with this.
Thanks for reporting the bug. We will fix it asap. | 2022-02-12T15:36:58Z | [] | [] |
Traceback (most recent call last):
File "tools/analysis_tools/confusion_matrix.py", line 261, in <module>
main()
File "tools/analysis_tools/confusion_matrix.py", line 257, in main
show=args.show)
File "tools/analysis_tools/confusion_matrix.py", line 210, in plot_confusion_matrix
'{}%'.format(int(confusion_matrix[i, j])),
ValueError: cannot convert float NaN to integer
| 10,676 |
|||
open-mmlab/mmdetection | open-mmlab__mmdetection-7157 | 98949809b7179fab9391663ee5a4ab5978425f90 | diff --git a/tools/deployment/onnx2tensorrt.py b/tools/deployment/onnx2tensorrt.py
--- a/tools/deployment/onnx2tensorrt.py
+++ b/tools/deployment/onnx2tensorrt.py
@@ -201,7 +201,7 @@ def parse_args():
parsed directly from config file and are deprecated and will be \
removed in future releases.')
if not args.input_img:
- args.input_img = osp.join(osp.dirname(__file__), '../demo/demo.jpg')
+ args.input_img = osp.join(osp.dirname(__file__), '../../demo/demo.jpg')
cfg = Config.fromfile(args.config)
| FileNotFoundError: img file does not exist: tools/deployment/../demo/demo.jpg
```
Traceback (most recent call last):
File "tools/deployment/onnx2tensorrt.py", line 255, in <module>
verbose=args.verbose)
File "tools/deployment/onnx2tensorrt.py", line 54, in onnx2tensorrt
one_img, one_meta = preprocess_example_input(input_config)
File "/workspace/mmdetection-2.18.1/mmdet/core/export/pytorch2onnx.py", line 139, in preprocess_example_input
one_img = mmcv.imread(input_path)
File "/workspace/mmcv-1.3.17-trt/mmcv/image/io.py", line 177, in imread
f'img file does not exist: {img_or_path}')
File "/workspace/mmcv-1.3.17-trt/mmcv/utils/path.py", line 23, in check_file_exist
raise FileNotFoundError(msg_tmpl.format(filename))
FileNotFoundError: img file does not exist: tools/deployment/../demo/demo.jpg
```
| 2022-02-14T12:37:02Z | [] | [] |
Traceback (most recent call last):
File "tools/deployment/onnx2tensorrt.py", line 255, in <module>
verbose=args.verbose)
File "tools/deployment/onnx2tensorrt.py", line 54, in onnx2tensorrt
one_img, one_meta = preprocess_example_input(input_config)
File "/workspace/mmdetection-2.18.1/mmdet/core/export/pytorch2onnx.py", line 139, in preprocess_example_input
one_img = mmcv.imread(input_path)
File "/workspace/mmcv-1.3.17-trt/mmcv/image/io.py", line 177, in imread
f'img file does not exist: {img_or_path}')
File "/workspace/mmcv-1.3.17-trt/mmcv/utils/path.py", line 23, in check_file_exist
raise FileNotFoundError(msg_tmpl.format(filename))
FileNotFoundError: img file does not exist: tools/deployment/../demo/demo.jpg
| 10,677 |
||||
open-mmlab/mmdetection | open-mmlab__mmdetection-7407 | c546b5044098b71d59a139036a87c5c97bcab4e2 | diff --git a/tools/analysis_tools/analyze_logs.py b/tools/analysis_tools/analyze_logs.py
old mode 100644
new mode 100755
--- a/tools/analysis_tools/analyze_logs.py
+++ b/tools/analysis_tools/analyze_logs.py
@@ -17,6 +17,10 @@ def cal_train_time(log_dicts, args):
all_times.append(log_dict[epoch]['time'])
else:
all_times.append(log_dict[epoch]['time'][1:])
+ if not all_times:
+ raise KeyError(
+ 'Please reduce the log interval in the config so that'
+ 'interval is less than iterations of one epoch.')
all_times = np.array(all_times)
epoch_ave_time = all_times.mean(-1)
slowest_epoch = epoch_ave_time.argmax()
@@ -50,12 +54,21 @@ def plot_curve(log_dicts, args):
epochs = list(log_dict.keys())
for j, metric in enumerate(metrics):
print(f'plot curve of {args.json_logs[i]}, metric is {metric}')
- if metric not in log_dict[epochs[0]]:
+ if metric not in log_dict[epochs[int(args.start_epoch) - 1]]:
+ if 'mAP' in metric:
+ raise KeyError(
+ f'{args.json_logs[i]} does not contain metric '
+ f'{metric}. Please check if "--no-validate" is '
+ 'specified when you trained the model.')
raise KeyError(
- f'{args.json_logs[i]} does not contain metric {metric}')
+ f'{args.json_logs[i]} does not contain metric {metric}. '
+ 'Please reduce the log interval in the config so that '
+ 'interval is less than iterations of one epoch.')
if 'mAP' in metric:
- xs = np.arange(1, max(epochs) + 1)
+ xs = np.arange(
+ int(args.start_epoch),
+ max(epochs) + 1, int(args.eval_interval))
ys = []
for epoch in epochs:
ys += log_dict[epoch][metric]
@@ -104,6 +117,16 @@ def add_plot_parser(subparsers):
nargs='+',
default=['bbox_mAP'],
help='the metric that you want to plot')
+ parser_plt.add_argument(
+ '--start-epoch',
+ type=str,
+ default='1',
+ help='the epoch that you want to start')
+ parser_plt.add_argument(
+ '--eval-interval',
+ type=str,
+ default='1',
+ help='the eval interval when training')
parser_plt.add_argument('--title', type=str, help='title of figure')
parser_plt.add_argument(
'--legend',
| ./tools/analysis_tools/analyze_logs.py plot_curve IndexError: list index out of range
```
(openmmlab) lbc@prust-System-3:~/mmdetection-master$ python3.8 ./tools/analysis_tools/analyze_logs.py plot_curve ./work_dirs/deformable_detr_twostage_refine_r50_16x2_50e_coco/20211119_170702.log.json --keys bbox_mAP
plot curve of ./work_dirs/deformable_detr_twostage_refine_r50_16x2_50e_coco/20211119_170702.log.json, metric is bbox_mAP
Traceback (most recent call last):
File "./tools/analysis_tools/analyze_logs.py", line 180, in <module>
main()
File "./tools/analysis_tools/analyze_logs.py", line 176, in main
eval(args.task)(log_dicts, args)
File "./tools/analysis_tools/analyze_logs.py", line 53, in plot_curve
if metric not in log_dict[epochs[0]]:
IndexError: list index out of range
```
| 2022-03-15T10:49:28Z | [] | [] |
Traceback (most recent call last):
File "./tools/analysis_tools/analyze_logs.py", line 180, in <module>
main()
File "./tools/analysis_tools/analyze_logs.py", line 176, in main
eval(args.task)(log_dicts, args)
File "./tools/analysis_tools/analyze_logs.py", line 53, in plot_curve
if metric not in log_dict[epochs[0]]:
IndexError: list index out of range
| 10,680 |
||||
open-mmlab/mmdetection | open-mmlab__mmdetection-8273 | ca11860f4f3c3ca2ce8340e2686eeaec05b29111 | diff --git a/mmdet/core/hook/wandblogger_hook.py b/mmdet/core/hook/wandblogger_hook.py
--- a/mmdet/core/hook/wandblogger_hook.py
+++ b/mmdet/core/hook/wandblogger_hook.py
@@ -135,7 +135,8 @@ def before_run(self, runner):
super(MMDetWandbHook, self).before_run(runner)
# Save and Log config.
- if runner.meta is not None:
+ if runner.meta is not None and runner.meta.get('exp_name',
+ None) is not None:
src_cfg_path = osp.join(runner.work_dir,
runner.meta.get('exp_name', None))
if osp.exists(src_cfg_path):
| WandbLogger Hook Error
WandbLogger Hook error
Code
```
log_config = dict(
interval=10,
hooks=[
dict(type='TensorboardLoggerHook'),
dict(type='TextLoggerHook'),
dict(type='MMDetWandbHook',
init_kwargs={
'project': PROJECT,
'entity': ENTITY,
'name': TAG,
'config': {
'lr': 0.0025, 'batch_size':16
},
'tags': WANDB_TAGS
},
interval=1,
log_checkpoint=False,
log_checkpoint_metadata=True,
num_eval_images=3,
bbox_score_thr=0.3
)
])
```
**Environment**
```
Python: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]
CUDA available: True
GPU 0: Tesla V100-SXM2-16GB
CUDA_HOME: /usr/local/cuda-10.1
NVCC: Cuda compilation tools, release 10.1, V10.1.24
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.10.0+cu102
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 10.2
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70
- CuDNN 7.6.5
- Magma 2.5.2
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=10.2, CUDNN_VERSION=7.6.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,
TorchVision: 0.11.1+cu102
OpenCV: 4.5.4
MMCV: 1.5.3
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 10.2
MMDetection: 2.25.0+55d536e
```
**Error traceback**
```
Traceback (most recent call last):
File "train.py", line 28, in <module>
fire.Fire(launch)
File "/home/james_sarmiento/anaconda3/envs/yl/lib/python3.7/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/james_sarmiento/anaconda3/envs/yl/lib/python3.7/site-packages/fire/core.py", line 471, in _Fire
target=component.__name__)
File "/home/james_sarmiento/anaconda3/envs/yl/lib/python3.7/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "train.py", line 25, in launch
train_detector(model, datasets, cfg, distributed=False, validate=True)
File "/home/james_sarmiento/mmdetection/mmdetection/mmdet/apis/train.py", line 244, in train_detector
runner.run(data_loaders, cfg.workflow)
File "/home/james_sarmiento/anaconda3/envs/yl/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 111, in run
self.call_hook('before_run')
File "/home/james_sarmiento/anaconda3/envs/yl/lib/python3.7/site-packages/mmcv/runner/base_runner.py", line 309, in call_hook
getattr(hook, fn_name)(self)
File "/home/james_sarmiento/anaconda3/envs/yl/lib/python3.7/site-packages/mmcv/runner/dist_utils.py", line 135, in wrapper
return func(*args, **kwargs)
File "/home/james_sarmiento/mmdetection/mmdetection/mmdet/core/hook/wandblogger_hook.py", line 140, in before_run
runner.meta.get('exp_name', None))
File "/home/james_sarmiento/anaconda3/envs/yl/lib/python3.7/posixpath.py", line 94, in join
genericpath._check_arg_types('join', a, *p)
File "/home/james_sarmiento/anaconda3/envs/yl/lib/python3.7/genericpath.py", line 153, in _check_arg_types
(funcname, s.__class__.__name__)) from None
TypeError: join() argument must be str or bytes, not 'NoneType'
```
| Hey @sarmientoj24 this seems like a quick fix.
```
# Save and Log config.
if runner.meta is not None and runner.meta.get('exp_name', None) is not None:
src_cfg_path = osp.join(runner.work_dir,
runner.meta.get('exp_name', None))
if osp.exists(src_cfg_path):
self.wandb.save(src_cfg_path, base_path=runner.work_dir)
self._update_wandb_config(runner)
else:
runner.logger.warning('No meta information found in the runner. ')
```
I will make a PR to fix this. | 2022-06-27T11:30:01Z | [] | [] |
Traceback (most recent call last):
File "train.py", line 28, in <module>
fire.Fire(launch)
File "/home/james_sarmiento/anaconda3/envs/yl/lib/python3.7/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/james_sarmiento/anaconda3/envs/yl/lib/python3.7/site-packages/fire/core.py", line 471, in _Fire
target=component.__name__)
File "/home/james_sarmiento/anaconda3/envs/yl/lib/python3.7/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "train.py", line 25, in launch
train_detector(model, datasets, cfg, distributed=False, validate=True)
File "/home/james_sarmiento/mmdetection/mmdetection/mmdet/apis/train.py", line 244, in train_detector
runner.run(data_loaders, cfg.workflow)
File "/home/james_sarmiento/anaconda3/envs/yl/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 111, in run
self.call_hook('before_run')
File "/home/james_sarmiento/anaconda3/envs/yl/lib/python3.7/site-packages/mmcv/runner/base_runner.py", line 309, in call_hook
getattr(hook, fn_name)(self)
File "/home/james_sarmiento/anaconda3/envs/yl/lib/python3.7/site-packages/mmcv/runner/dist_utils.py", line 135, in wrapper
return func(*args, **kwargs)
File "/home/james_sarmiento/mmdetection/mmdetection/mmdet/core/hook/wandblogger_hook.py", line 140, in before_run
runner.meta.get('exp_name', None))
File "/home/james_sarmiento/anaconda3/envs/yl/lib/python3.7/posixpath.py", line 94, in join
genericpath._check_arg_types('join', a, *p)
File "/home/james_sarmiento/anaconda3/envs/yl/lib/python3.7/genericpath.py", line 153, in _check_arg_types
(funcname, s.__class__.__name__)) from None
TypeError: join() argument must be str or bytes, not 'NoneType'
| 10,692 |
|||
open-mmlab/mmdetection | open-mmlab__mmdetection-854 | ae856e11ec3f281ce77c3f8fdf1cd87162598ffb | diff --git a/mmdet/models/backbones/resnet.py b/mmdet/models/backbones/resnet.py
--- a/mmdet/models/backbones/resnet.py
+++ b/mmdet/models/backbones/resnet.py
@@ -297,11 +297,11 @@ def make_res_layer(block,
layers = []
layers.append(
block(
- inplanes,
- planes,
- stride,
- dilation,
- downsample,
+ inplanes=inplanes,
+ planes=planes,
+ stride=stride,
+ dilation=dilation,
+ downsample=downsample,
style=style,
with_cp=with_cp,
conv_cfg=conv_cfg,
@@ -314,10 +314,10 @@ def make_res_layer(block,
for i in range(1, blocks):
layers.append(
block(
- inplanes,
- planes,
- 1,
- dilation,
+ inplanes=inplanes,
+ planes=planes,
+ stride=1,
+ dilation=dilation,
style=style,
with_cp=with_cp,
conv_cfg=conv_cfg,
diff --git a/mmdet/models/backbones/resnext.py b/mmdet/models/backbones/resnext.py
--- a/mmdet/models/backbones/resnext.py
+++ b/mmdet/models/backbones/resnext.py
@@ -11,12 +11,12 @@
class Bottleneck(_Bottleneck):
- def __init__(self, groups=1, base_width=4, *args, **kwargs):
+ def __init__(self, inplanes, planes, groups=1, base_width=4, **kwargs):
"""Bottleneck block for ResNeXt.
If style is "pytorch", the stride-two layer is the 3x3 conv layer,
if it is "caffe", the stride-two layer is the first 1x1 conv layer.
"""
- super(Bottleneck, self).__init__(*args, **kwargs)
+ super(Bottleneck, self).__init__(inplanes, planes, **kwargs)
if groups == 1:
width = self.planes
| GCNet with the x101 backbone does not work
I use gcb in the x101 backbone, but get this error:
Traceback (most recent call last):
File "./tools/train.py", line 95, in <module>
main()
File "./tools/train.py", line 73, in main
cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/builder.py", line 60, in build_detector
return build(cfg, DETECTORS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/builder.py", line 32, in build
return _build_module(cfg, registry, default_args)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/builder.py", line 24, in _build_module
return obj_type(**args)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/detectors/cascade_rcnn.py", line 36, in __init__
self.backbone = builder.build_backbone(backbone)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/builder.py", line 36, in build_backbone
return build(cfg, BACKBONES)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/builder.py", line 32, in build
return _build_module(cfg, registry, default_args)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/builder.py", line 24, in _build_module
return obj_type(**args)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/backbones/resnext.py", line 191, in __init__
super(ResNeXt, self).__init__(**kwargs)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/backbones/resnet.py", line 447, in __init__
gen_attention_blocks=stage_with_gen_attention[i])
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/backbones/resnet.py", line 317, in make_res_layer
(0 in gen_attention_blocks) else None))
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/backbones/resnext.py", line 19, in __init__
super(Bottleneck, self).__init__(*args, **kwargs)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/backbones/resnet.py", line 205, in __init__
self.context_block = ContextBlock(inplanes=gcb_inplanes, **gcb)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/ops/gcb/context_block.py", line 40, in __init__
nn.Conv2d(self.inplanes, self.planes, kernel_size=1),
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 327, in __init__
False, _pair(0), groups, bias, padding_mode)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 45, in __init__
self.reset_parameters()
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 48, in reset_parameters
init.kaiming_uniform_(self.weight, a=math.sqrt(5))
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/nn/init.py", line 323, in kaiming_uniform_
fan = _calculate_correct_fan(tensor, mode)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/nn/init.py", line 292, in _calculate_correct_fan
fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/nn/init.py", line 222, in _calculate_fan_in_and_fan_out
receptive_field_size = tensor[0][0].numel()
IndexError: index 0 is out of bounds for dimension 0 with size 0
| I meet the same problem. Do you know what's wrong?
You should update your backbone config, because the order of parameters was changed to comply with the specification. See pull #780
Thanks for reporting! Will fix soon.
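(For anyone hitting the same IndexError before the fix lands, an illustrative toy of the parameter-order problem mentioned above, grounded in the diff at the top of this issue; the classes below are stand-ins, not the real mmdet code:)
```
# Why positional arguments broke once ResNeXt's Bottleneck put groups/base_width
# first in its signature (mirrors the old signature shown in the diff above).
class Parent:
    def __init__(self, inplanes, planes):
        self.inplanes, self.planes = inplanes, planes

class Child(Parent):
    def __init__(self, groups=1, base_width=4, *args, **kwargs):
        super().__init__(*args, **kwargs)

bad = Child(256, 64, 1, 1)            # 256 -> groups, 64 -> base_width; Parent gets (1, 1)
good = Child(inplanes=256, planes=64) # keyword arguments reach Parent intact
print(bad.inplanes, good.inplanes)    # 1 vs 256

# With gcb enabled, the mis-bound values made ContextBlock build a Conv2d with
# zero channels, which is the IndexError in the traceback; the patch switches
# make_res_layer and the Bottleneck signature to explicit keyword arguments.
```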
@zhengye1995
I don't understand what you mean. Do you mean some of my hyperparameters are wrong? When I run r4 it is OK; I only changed the gcb ratio from 1/4 to 1/16, and the problem appears.
My backbone config is as follows:
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
style='pytorch',
gcb = dict(
ratio=1. / 16.,
),
stage_with_gcb = (False, True, True, True),
dcn=dict(
modulated=False,
groups=32,
deformable_groups=1,
fallback_on_stride=False),
stage_with_dcn=(False, True, True, True),
norm_cfg=norm_cfg,
norm_eval=False,
), | 2019-06-22T15:25:21Z | [] | [] |
Traceback (most recent call last):
File "./tools/train.py", line 95, in <module>
main()
File "./tools/train.py", line 73, in main
cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/builder.py", line 60, in build_detector
return build(cfg, DETECTORS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/builder.py", line 32, in build
return _build_module(cfg, registry, default_args)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/builder.py", line 24, in _build_module
return obj_type(**args)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/detectors/cascade_rcnn.py", line 36, in __init__
self.backbone = builder.build_backbone(backbone)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/builder.py", line 36, in build_backbone
return build(cfg, BACKBONES)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/builder.py", line 32, in build
return _build_module(cfg, registry, default_args)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/builder.py", line 24, in _build_module
return obj_type(**args)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/backbones/resnext.py", line 191, in __init__
super(ResNeXt, self).__init__(**kwargs)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/backbones/resnet.py", line 447, in __init__
gen_attention_blocks=stage_with_gen_attention[i])
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/backbones/resnet.py", line 317, in make_res_layer
(0 in gen_attention_blocks) else None))
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/backbones/resnext.py", line 19, in __init__
super(Bottleneck, self).__init__(*args, **kwargs)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/models/backbones/resnet.py", line 205, in __init__
self.context_block = ContextBlock(inplanes=gcb_inplanes, **gcb)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/ops/gcb/context_block.py", line 40, in __init__
nn.Conv2d(self.inplanes, self.planes, kernel_size=1),
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 327, in __init__
False, _pair(0), groups, bias, padding_mode)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 45, in __init__
self.reset_parameters()
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 48, in reset_parameters
init.kaiming_uniform_(self.weight, a=math.sqrt(5))
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/nn/init.py", line 323, in kaiming_uniform_
fan = _calculate_correct_fan(tensor, mode)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/nn/init.py", line 292, in _calculate_correct_fan
fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor)
File "/home/zhengye/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/nn/init.py", line 222, in _calculate_fan_in_and_fan_out
receptive_field_size = tensor[0][0].numel()
IndexError: index 0 is out of bounds for dimension 0 with size 0
| 10,694 |
|||
open-mmlab/mmdetection | open-mmlab__mmdetection-9151 | e71b499608e9c3ccd4211e7c815fa20eeedf18a2 | diff --git a/mmdet/models/detectors/rpn.py b/mmdet/models/detectors/rpn.py
--- a/mmdet/models/detectors/rpn.py
+++ b/mmdet/models/detectors/rpn.py
@@ -1,5 +1,6 @@
# Copyright (c) OpenMMLab. All rights reserved.
import warnings
+from inspect import signature
import mmcv
import torch
@@ -153,7 +154,9 @@ def show_result(self, data, result, top_k=20, **kwargs):
np.ndarray: The image with bboxes drawn on it.
"""
if kwargs is not None:
- kwargs.pop('score_thr', None)
- kwargs.pop('text_color', None)
- kwargs['colors'] = kwargs.pop('bbox_color', 'green')
+ kwargs['colors'] = 'green'
+ sig = signature(mmcv.imshow_bboxes)
+ for k in list(kwargs.keys()):
+ if k not in sig.parameters:
+ kwargs.pop(k)
mmcv.imshow_bboxes(data, result, top_k=top_k, **kwargs)
| Running image_demo.py with a cascade_rpn model shows an error
### Prerequisite
- [X] I have searched [the existing and past issues](https://github.com/open-mmlab/mmdetection/issues) but cannot get the expected help.
- [X] I have read the [FAQ documentation](https://mmdetection.readthedocs.io/en/latest/faq.html) but cannot get the expected help.
- [X] The bug has not been fixed in the [latest version](https://github.com/open-mmlab/mmdetection).
### 🐞 Describe the bug
``` python demo/image_demo.py demo/demo.jpg configs/cascade_rpn/crpn_r50_caffe_fpn_1x_coco.py checkpoints/cascade_rpn_r50_caffe_fpn_1x_coco-7aa93cef.pth```
```
UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2895.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Traceback (most recent call last):
File "demo/image_demo.py", line 68, in <module>
main(args)
File "demo/image_demo.py", line 38, in main
show_result_pyplot(
File "/home/ngi/IdeaProjects/mmlab/mmdetection/mmdet/apis/inference.py", line 241, in show_result_pyplot
model.show_result(
File "/home/ngi/IdeaProjects/mmlab/mmdetection/mmdet/models/detectors/rpn.py", line 159, in show_result
mmcv.imshow_bboxes(data, result, top_k=top_k, **kwargs)
TypeError: imshow_bboxes() got an unexpected keyword argument 'mask_color'
```
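A hedged sketch of one way around this TypeError, mirroring what the patch above does: filter the keyword arguments against the callee's signature before forwarding them (names here are illustrative).
```python
from inspect import signature

def call_with_supported_kwargs(func, *args, **kwargs):
    # Keep only the keyword arguments the target function actually declares,
    # so extras such as 'mask_color' no longer raise TypeError.
    supported = signature(func).parameters
    filtered = {k: v for k, v in kwargs.items() if k in supported}
    return func(*args, **filtered)
```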
### Environment
```
Python: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0]
CUDA available: True
GPU 0: GeForce RTX 2080 SUPER
CUDA_HOME: /usr/local/cuda-10.2
NVCC: Cuda compilation tools, release 10.2, V10.2.8
GCC: gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
PyTorch: 1.12.1+cu102
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 10.2
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70
- CuDNN 7.6.5
- Magma 2.5.2
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=10.2, CUDNN_VERSION=7.6.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.12.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
TorchVision: 0.13.1+cu102
OpenCV: 4.6.0
MMCV: 1.6.2
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 10.2
MMDetection: 2.25.2+9d3e162
```
### Additional information
_No response_
| Thanks for your report! We will fix this bug in the next version. | 2022-10-27T03:49:51Z | [] | [] |
Traceback (most recent call last):
File "demo/image_demo.py", line 68, in <module>
main(args)
File "demo/image_demo.py", line 38, in main
show_result_pyplot(
File "/home/ngi/IdeaProjects/mmlab/mmdetection/mmdet/apis/inference.py", line 241, in show_result_pyplot
model.show_result(
File "/home/ngi/IdeaProjects/mmlab/mmdetection/mmdet/models/detectors/rpn.py", line 159, in show_result
mmcv.imshow_bboxes(data, result, top_k=top_k, **kwargs)
TypeError: imshow_bboxes() got an unexpected keyword argument 'mask_color'
| 10,695 |
|||
open-mmlab/mmdetection | open-mmlab__mmdetection-9694 | ea29a1edbe973389c2d705d99fd060b07decddab | diff --git a/mmdet/utils/__init__.py b/mmdet/utils/__init__.py
--- a/mmdet/utils/__init__.py
+++ b/mmdet/utils/__init__.py
@@ -7,6 +7,7 @@
from .memory import AvoidCUDAOOM, AvoidOOM
from .misc import find_latest_checkpoint, update_data_root
from .replace_cfg_vals import replace_cfg_vals
+from .rfnext import rfnext_init_model
from .setup_env import setup_multi_processes
from .split_batch import split_batch
from .util_distribution import build_ddp, build_dp, get_device
@@ -16,5 +17,6 @@
'update_data_root', 'setup_multi_processes', 'get_caller_name',
'log_img_scale', 'compat_cfg', 'split_batch', 'build_ddp', 'build_dp',
'get_device', 'replace_cfg_vals', 'AvoidOOM', 'AvoidCUDAOOM',
- 'get_max_num_gt_division_factor', 'masked_fill', 'batch_images_to_levels'
+ 'get_max_num_gt_division_factor', 'masked_fill', 'batch_images_to_levels',
+ 'rfnext_init_model'
]
| [Bug] Training Error
### Prerequisite
- [X] I have searched [Issues](https://github.com/open-mmlab/mmdetection/issues) and [Discussions](https://github.com/open-mmlab/mmdetection/discussions) but cannot get the expected help.
- [X] I have read the [FAQ documentation](https://mmdetection.readthedocs.io/en/latest/faq.html) but cannot get the expected help.
- [X] The bug has not been fixed in the [latest version (master)](https://github.com/open-mmlab/mmdetection) or [latest version (3.x)](https://github.com/open-mmlab/mmdetection/tree/dev-3.x).
### Task
I have modified the scripts/configs, or I'm working on my own tasks/models/datasets.
### Branch
master branch https://github.com/open-mmlab/mmdetection
### Environment
```
sys.platform: linux
Python: 3.7.15 (default, Nov 24 2022, 21:12:53) [GCC 11.2.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA GeForce RTX 3090
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.6, V11.6.55
GCC: gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
PyTorch: 1.13.1+cu117
PyTorch compiling details: PyTorch built with:
- GCC 9.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.7
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
- CuDNN 8.5
- Magma 2.6.1
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.13.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
TorchVision: 0.14.1+cu117
OpenCV: 4.7.0
MMCV: 1.7.1
MMCV Compiler: GCC 9.3
MMCV CUDA Compiler: 11.7
MMDetection: 2.28.0+b955832
```
### Reproduces the problem - code sample
```python
python train.py $CONFIG_FILE
```
### Reproduces the problem - command or script
```python
python train.py $CONFIG_FILE
```
### Reproduces the problem - error message
```
/anaconda3/envs/dl/lib/python3.7/site-packages/mmcv/__init__.py:21: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
'On January 1, 2023, MMCV will release v2.0.0, in which it will remove '
Traceback (most recent call last):
File "train.py", line 20, in <module>
from mmdet.utils import (collect_env, get_device, get_root_logger,
ImportError: cannot import name 'rfnext_init_model' from 'mmdet.utils' (/mmdetection/mmdet/utils/__init__.py)
```
### Additional information
I cloned the latest mmdetection package and intended to run a couple of benchmarks. However, whether I use my own configs or the configs provided in `mmdet/configs`, I get the above-mentioned error message.
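A small sketch (my own suggestion, not from the thread) to confirm whether the installed mmdet actually re-exports the helper that tools/train.py imports; the patch above adds it to `mmdet/utils/__init__.py`.
```python
import mmdet.utils as mmdet_utils

# False on checkouts that do not yet export the helper
print(hasattr(mmdet_utils, "rfnext_init_model"))
```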
| I also encountered this problem | 2023-01-30T08:09:21Z | [] | [] |
Traceback (most recent call last):
File "train.py", line 20, in <module>
from mmdet.utils import (collect_env, get_device, get_root_logger,
ImportError: cannot import name 'rfnext_init_model' from 'mmdet.utils' (/mmdetection/mmdet/utils/__init__.py)
| 10,701 |
|||
pandas-dev/pandas | pandas-dev__pandas-10108 | eafd22d961934a7b3cc72607ef4512a18b419085 | diff --git a/doc/source/whatsnew/v0.17.0.txt b/doc/source/whatsnew/v0.17.0.txt
--- a/doc/source/whatsnew/v0.17.0.txt
+++ b/doc/source/whatsnew/v0.17.0.txt
@@ -57,3 +57,5 @@ Performance Improvements
Bug Fixes
~~~~~~~~~
+
+- Bug in ``Categorical`` repr with ``display.width`` of ``None`` in Python 3 (:issue:`10087`)
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -1310,8 +1310,7 @@ def _repr_categories_info(self):
levheader = "Categories (%d, %s): " % (len(self.categories),
self.categories.dtype)
width, height = get_terminal_size()
- max_width = (width if get_option("display.width") == 0
- else get_option("display.width"))
+ max_width = get_option("display.width") or width
if com.in_ipython_frontend():
# 0 = no breaks
max_width = 0
| BUG: categorical doesn't handle display.width of None in Python 3
Categorical Series have a special repr that looks at display.width, which can be None if following the Options and Settings docs. Unlike Python 2, in Python 3 an integer vs None comparison throws an exception.
(on current master, and has been true for several releases now)
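A small sketch of the difference (values are hypothetical): Python 2 silently orders integers against None, Python 3 raises, so the repr code needs an explicit fallback instead of later comparing against a possibly-None width.
```python
width_option = None     # what display.width can be per the options docs
terminal_width = 80     # hypothetical terminal width

# In Python 3, `10 > None` raises TypeError, so fall back before comparing.
max_width = width_option or terminal_width   # handles both None and 0
print(max_width)                             # 80
```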
``` python
Python 3.4.0 (default, Apr 11 2014, 13:05:11)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> pd.core.config.set_option('display.width', None)
>>> import numpy as np
>>> x = pd.Series(np.random.randn(100))
>>> pd.cut(x, 10)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/andrew/git/pandas-rosnfeld-py3/pandas/core/base.py", line 67, in __repr__
return str(self)
File "/home/andrew/git/pandas-rosnfeld-py3/pandas/core/base.py", line 46, in __str__
return self.__unicode__()
File "/home/andrew/git/pandas-rosnfeld-py3/pandas/core/series.py", line 897, in __unicode__
max_rows=max_rows)
File "/home/andrew/git/pandas-rosnfeld-py3/pandas/core/series.py", line 962, in to_string
name=name, max_rows=max_rows)
File "/home/andrew/git/pandas-rosnfeld-py3/pandas/core/series.py", line 992, in _get_repr
result = formatter.to_string()
File "/home/andrew/git/pandas-rosnfeld-py3/pandas/core/format.py", line 222, in to_string
footer = self._get_footer()
File "/home/andrew/git/pandas-rosnfeld-py3/pandas/core/format.py", line 196, in _get_footer
level_info = self.tr_series.values._repr_categories_info()
File "/home/andrew/git/pandas-rosnfeld-py3/pandas/core/categorical.py", line 1323, in _repr_categories_info
if max_width != 0 and cur_col_len + sep_len + len(val) > max_width:
TypeError: unorderable types: int() > NoneType()
```
| should be a simple fix. PR's are welcome.
Sure, I'll try and get to it soon.
| 2015-05-11T22:15:45Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/andrew/git/pandas-rosnfeld-py3/pandas/core/base.py", line 67, in __repr__
return str(self)
File "/home/andrew/git/pandas-rosnfeld-py3/pandas/core/base.py", line 46, in __str__
return self.__unicode__()
File "/home/andrew/git/pandas-rosnfeld-py3/pandas/core/series.py", line 897, in __unicode__
max_rows=max_rows)
File "/home/andrew/git/pandas-rosnfeld-py3/pandas/core/series.py", line 962, in to_string
name=name, max_rows=max_rows)
File "/home/andrew/git/pandas-rosnfeld-py3/pandas/core/series.py", line 992, in _get_repr
result = formatter.to_string()
File "/home/andrew/git/pandas-rosnfeld-py3/pandas/core/format.py", line 222, in to_string
footer = self._get_footer()
File "/home/andrew/git/pandas-rosnfeld-py3/pandas/core/format.py", line 196, in _get_footer
level_info = self.tr_series.values._repr_categories_info()
File "/home/andrew/git/pandas-rosnfeld-py3/pandas/core/categorical.py", line 1323, in _repr_categories_info
if max_width != 0 and cur_col_len + sep_len + len(val) > max_width:
TypeError: unorderable types: int() > NoneType()
| 10,720 |
|||
pandas-dev/pandas | pandas-dev__pandas-10473 | 83b232089b9292b11d4b9b00c0e50cc4a829f016 | diff --git a/doc/source/whatsnew/v0.17.0.txt b/doc/source/whatsnew/v0.17.0.txt
--- a/doc/source/whatsnew/v0.17.0.txt
+++ b/doc/source/whatsnew/v0.17.0.txt
@@ -27,6 +27,7 @@ New features
~~~~~~~~~~~~
- SQL io functions now accept a SQLAlchemy connectable. (:issue:`7877`)
+- Enable writing complex values to HDF stores when using table format (:issue:`10447`)
.. _whatsnew_0170.enhancements.other:
@@ -147,3 +148,4 @@ Bug Fixes
- Bug in `groupby.var` which caused variance to be inaccurate for small float values (:issue:`10448`)
- Bug in ``Series.plot(kind='hist')`` Y Label not informative (:issue:`10485`)
+
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -1773,6 +1773,8 @@ def set_kind(self):
self.kind = 'string'
elif dtype.startswith(u('float')):
self.kind = 'float'
+ elif dtype.startswith(u('complex')):
+ self.kind = 'complex'
elif dtype.startswith(u('int')) or dtype.startswith(u('uint')):
self.kind = 'integer'
elif dtype.startswith(u('date')):
@@ -1802,6 +1804,8 @@ def set_atom(self, block, block_items, existing_col, min_itemsize,
return self.set_atom_datetime64(block)
elif block.is_timedelta:
return self.set_atom_timedelta64(block)
+ elif block.is_complex:
+ return self.set_atom_complex(block)
dtype = block.dtype.name
inferred_type = lib.infer_dtype(block.values)
@@ -1936,6 +1940,12 @@ def get_atom_coltype(self, kind=None):
def get_atom_data(self, block, kind=None):
return self.get_atom_coltype(kind=kind)(shape=block.shape[0])
+ def set_atom_complex(self, block):
+ self.kind = block.dtype.name
+ itemsize = int(self.kind.split('complex')[-1]) // 8
+ self.typ = _tables().ComplexCol(itemsize=itemsize, shape=block.shape[0])
+ self.set_data(block.values.astype(self.typ.type, copy=False))
+
def set_atom_data(self, block):
self.kind = block.dtype.name
self.typ = self.get_atom_data(block)
@@ -3147,8 +3157,8 @@ def f(i, c):
def create_index(self, columns=None, optlevel=None, kind=None):
"""
Create a pytables index on the specified columns
- note: cannot index Time64Col() currently; PyTables must be >= 2.3
-
+ note: cannot index Time64Col() or ComplexCol currently;
+ PyTables must be >= 3.0
Paramaters
----------
@@ -3203,6 +3213,12 @@ def create_index(self, columns=None, optlevel=None, kind=None):
# create the index
if not v.is_indexed:
+ if v.type.startswith('complex'):
+ raise TypeError('Columns containing complex values can be stored but cannot'
+ ' be indexed when using table format. Either use fixed '
+ 'format, set index=False, or do not include the columns '
+ 'containing complex values to data_columns when '
+ 'initializing the table.')
v.create_index(**kw)
def read_axes(self, where, **kwargs):
| BUG: Cannot store complex valued Series/DataFrame/Panel/Panel4D as 'table' to hdf
Attempting to store a Panel4D of complext128 results in an error. This is true if I use table or fixed formats (different errors) [I am showing the last call from pandas as well as the actual error]
``` python
pd.Panel4D(np.tile(np.array([1+1j,2+2j]),[2,2,2,1])).to_hdf('complex.h5','complex',format='t')
Traceback (most recent call last):
...
File "/miniconda/envs/py34/lib/python3.4/site-packages/pandas/io/pytables.py", line 1930, in get_atom_data
return self.get_atom_coltype(kind=kind)(shape=block.shape[0])
...
File "/miniconda/envs/py34/lib/python3.4/site-packages/tables/atom.py", line 740, in __init__
"to avoid confusions with PyTables 1.X complex atom names, "
TypeError: to avoid confusions with PyTables 1.X complex atom names, please use ``ComplexAtom(itemsize=N)``, where N=8 for single precision complex atoms, and N=16 for double precision complex atoms
```
```
INSTALLED VERSIONS
------------------
commit: None
python: 3.4.3.final.0
python-bits: 64
OS: Linux
OS-release: 2.6.32-504.16.2.el6.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
pandas: 0.16.2
nose: 1.3.7
Cython: 0.22
numpy: 1.9.2
scipy: None
statsmodels: None
IPython: 3.1.0
sphinx: None
patsy: 0.3.0
dateutil: 2.4.2
pytz: 2015.4
bottleneck: None
tables: 3.1.1
numexpr: 2.3.1
matplotlib: None
openpyxl: 2.0.2
xlrd: 0.9.3
xlwt: 1.0.0
xlsxwriter: 0.7.3
lxml: 3.4.4
bs4: 4.3.2
html5lib: 0.999
httplib2: None
apiclient: None
sqlalchemy: None
pymysql: None
psycopg2: None
```
If you try fixed, you get
```
TypeError: cannot properly create the storer for: [_STORER_MAP] [group->/complex (Group) '',value-><class 'pandas.core.panelnd.Panel4D'>,format->fixed,append->False,kwargs->{'encoding': None}]
```
| '>3d' is not supported in fixed format at all (as it is pretty useless there - you generally need/want to append to these)
complex is not supported very well in the exporters
but of course fixes welcome
Turns out this is true for all types, not just Panel4D
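For reference, the table-format fix above derives the PyTables `ComplexCol` itemsize from the NumPy dtype name, matching the N=8/N=16 rule in the error message; a small sketch:
```python
import numpy as np

for dtype in (np.complex64, np.complex128):
    name = np.dtype(dtype).name                    # 'complex64' / 'complex128'
    itemsize = int(name.split('complex')[-1]) // 8
    print(name, '->', 'ComplexCol(itemsize=%d)' % itemsize)
# complex64 -> ComplexCol(itemsize=8)
# complex128 -> ComplexCol(itemsize=16)
```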
| 2015-06-29T19:27:56Z | [] | [] |
Traceback (most recent call last):
...
File "/miniconda/envs/py34/lib/python3.4/site-packages/pandas/io/pytables.py", line 1930, in get_atom_data
return self.get_atom_coltype(kind=kind)(shape=block.shape[0])
...
File "/miniconda/envs/py34/lib/python3.4/site-packages/tables/atom.py", line 740, in __init__
"to avoid confusions with PyTables 1.X complex atom names, "
TypeError: to avoid confusions with PyTables 1.X complex atom names, please use ``ComplexAtom(itemsize=N)``, where N=8 for single precision complex atoms, and N=16 for double precision complex atoms
| 10,768 |
|||
pandas-dev/pandas | pandas-dev__pandas-10497 | bbec57d6f881cb7d26ad65319595c8594381fe8c | diff --git a/doc/source/whatsnew/v0.17.0.txt b/doc/source/whatsnew/v0.17.0.txt
--- a/doc/source/whatsnew/v0.17.0.txt
+++ b/doc/source/whatsnew/v0.17.0.txt
@@ -128,6 +128,7 @@ Bug Fixes
- Bug in ``test_categorical`` on big-endian builds (:issue:`10425`)
+- Bug in ``Series.shift`` and ``DataFrame.shift`` not supporting categorical data (:issue:`9416`)
- Bug in ``Series.map`` using categorical ``Series`` raises ``AttributeError`` (:issue:`10324`)
- Bug in ``MultiIndex.get_level_values`` including ``Categorical`` raises ``AttributeError`` (:issue:`10460`)
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -820,6 +820,35 @@ def shape(self):
return tuple([len(self._codes)])
+ def shift(self, periods):
+ """
+ Shift Categorical by desired number of periods.
+
+ Parameters
+ ----------
+ periods : int
+ Number of periods to move, can be positive or negative
+
+ Returns
+ -------
+ shifted : Categorical
+ """
+ # since categoricals always have ndim == 1, an axis parameter
+ # doesnt make any sense here.
+ codes = self.codes
+ if codes.ndim > 1:
+ raise NotImplementedError("Categorical with ndim > 1.")
+ if np.prod(codes.shape) and (periods != 0):
+ codes = np.roll(codes, com._ensure_platform_int(periods), axis=0)
+ if periods > 0:
+ codes[:periods] = -1
+ else:
+ codes[periods:] = -1
+
+ return Categorical.from_codes(codes,
+ categories=self.categories,
+ ordered=self.ordered)
+
def __array__(self, dtype=None):
"""
The numpy array interface.
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -1709,6 +1709,10 @@ def interpolate(self, method='pad', axis=0, inplace=False,
limit=limit),
placement=self.mgr_locs)
+ def shift(self, periods, axis=0):
+ return self.make_block_same_class(values=self.values.shift(periods),
+ placement=self.mgr_locs)
+
def take_nd(self, indexer, axis=0, new_mgr_locs=None, fill_tuple=None):
"""
Take values according to indexer and return them as a block.bb
| Series.shift() doesn't work for categorical type
Not sure if this is intentional, but Series.shift() won't run with categorical dtypes:
``` python
ser = pd.Series(['a', 'b', 'c', 'd'], dtype="category")
ser.shift(1)
Traceback (most recent call last):
File "<ipython-input-15-1a7536b0af06>", line 1, in <module>
ser.shift(1)
File "/.../pandas/core/generic.py", line 3394, in shift
new_data = self._data.shift(periods=periods, axis=block_axis)
File "/.../pandas/core/internals.py", line 2533, in shift
return self.apply('shift', **kwargs)
File "/.../pandas/core/internals.py", line 2497, in apply
applied = getattr(b, f)(**kwargs)
File "/.../pandas/core/internals.py", line 893, in shift
new_values, fill_value = com._maybe_upcast(self.values)
File "/.../pandas/core/common.py", line 1218, in _maybe_upcast
new_dtype, fill_value = _maybe_promote(dtype, fill_value)
File "/.../pandas/core/common.py", line 1124, in _maybe_promote
if issubclass(np.dtype(dtype).type, compat.string_types):
TypeError: data type not understood
```
| This simply hasn't been implemented, but otherwise was not intentional. Help would be appreciated if you're interested in putting together a PR. The place to get started (I believe) would be to implement the `shift` method on `CategoricalBlock` in `pandas.core.internals`.
here's basically what you would do
```
In [2]: s = Series(list('aabbcde'),dtype='category')
In [3]: s
Out[3]:
0 a
1 a
2 b
3 b
4 c
5 d
6 e
dtype: category
Categories (5, object): [a < b < c < d < e]
In [4]: s.values
Out[4]:
[a, a, b, b, c, d, e]
Categories (5, object): [a < b < c < d < e]
In [5]: s.values.codes
Out[5]: array([0, 0, 1, 1, 2, 3, 4], dtype=int8)
In [6]: np.roll(s.values.codes,len(s)-1,axis=0)
Out[6]: array([0, 1, 1, 2, 3, 4, 0], dtype=int8)
In [7]: codes = np.roll(s.values.codes,len(s)-1,axis=0)
In [8]: codes[-1] = -1
In [11]: pd.Categorical(codes,categories=s.values.categories,fastpath=True)
Out[11]:
[a, b, b, c, d, e, NaN]
Categories (5, object): [a, b, c, d, e]
```
you would use the Block.shift method (and pass the codes to it for the actual shifting), then wrap it back to a categorical (there is a method for that too). Should be pretty straightforward.
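Putting the pieces above together, a self-contained sketch of that approach (simplified relative to the block-level implementation in the patch):
```python
import numpy as np
import pandas as pd

def shift_categorical(cat, periods):
    # Roll the integer codes and mark the vacated positions as missing (-1),
    # then rebuild a Categorical over the same categories.
    codes = np.roll(np.asarray(cat.codes), periods)
    if periods > 0:
        codes[:periods] = -1
    elif periods < 0:
        codes[periods:] = -1
    return pd.Categorical.from_codes(codes, categories=cat.categories)

print(shift_categorical(pd.Categorical(list("aabbc")), 1))
# [NaN, a, a, b, b]
# Categories (3, object): [a, b, c]   (repr details vary by version)
```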
| 2015-07-03T13:42:01Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-15-1a7536b0af06>", line 1, in <module>
ser.shift(1)
File "/.../pandas/core/generic.py", line 3394, in shift
new_data = self._data.shift(periods=periods, axis=block_axis)
File "/.../pandas/core/internals.py", line 2533, in shift
return self.apply('shift', **kwargs)
File "/.../pandas/core/internals.py", line 2497, in apply
applied = getattr(b, f)(**kwargs)
File "/.../pandas/core/internals.py", line 893, in shift
new_values, fill_value = com._maybe_upcast(self.values)
File "/.../pandas/core/common.py", line 1218, in _maybe_upcast
new_dtype, fill_value = _maybe_promote(dtype, fill_value)
File "/.../pandas/core/common.py", line 1124, in _maybe_promote
if issubclass(np.dtype(dtype).type, compat.string_types):
TypeError: data type not understood
| 10,770 |
|||
pandas-dev/pandas | pandas-dev__pandas-10853 | d27068f8b78661a64580340a5ab230d0dad17760 | testArrayNumpyLabelled fails on Python 2.7.10
```
FAIL: testArrayNumpyLabelled (pandas.io.tests.test_json.test_ujson.NumpyJSONTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/ch/repo/pandas/.tox/py27/lib/python2.7/site-packages/pandas/io/tests/test_json/test_ujson.py", line 1141, in testArrayNumpyLabelled
self.assertTrue((expectedvals == output[0]).all())
AssertionError: False is not true
-------------------- >> begin captured stdout << ---------------------
[[42 31]
[24 99]
[ 2 78]]
[[31 42]
[99 24]
[78 2]]
--------------------- >> end captured stdout << ----------------------
```
Order of dict elements seems to be non-deterministic for python2.7 versions as well.
| well this doesn't fail on travis AFAICT, nor have I ever actually seen this fail. So not sure how to repro.
Do you have a 32-bit build? This test does seem to depend on a few things, including architecture and the PYTHONHASHSEED environment variable.
@kawochen, No, I am running a 64bit interpreter. Yes, the failure is highly hardware/interpreter/seed dependent. But that is the point of this bug report.
@cel4 can you give the specific system this fails on. Yes, it is somewhat non-deterministic, though in py2 the ordering is deterministic. Kind of a silly test actually (but IIRC it came from another package :). Why don't you do a pull-request to sort them first.
```
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.10.final.0
python-bits: 64
OS: Darwin
OS-release: 14.5.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: de_DE.UTF-8
LANG: de_DE.UTF-8
pandas: 0.16.2+340.ge4368de.dirty
nose: 1.3.7
Cython: 0.23
numpy: 1.9.2
scipy: None
statsmodels: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.4.2
pytz: 2015.4
bottleneck: None
tables: None
numexpr: None
matplotlib: None
openpyxl: 1.8.6
xlrd: 0.9.4
xlwt: None
xlsxwriter: 0.7.3
lxml: 3.4.4
bs4: 4.4.0
html5lib: 0.999999
httplib2: 0.9.1
apiclient: 1.2
sqlalchemy: 1.0.8
pymysql: None
psycopg2: None
```
Whether the test fails or passes depends on the Python hash seed.
e.g. `PYTHONHASHSEED='1145934480'` deterministically fails that test, `PYTHONHASHSEED='2585294338'` deterministically passes the test here.
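A small sketch tying the two observations together: the hash seed changes dict (and therefore column) order, and sorting before comparing makes the assertion order-independent; values are taken from the captured stdout above.
```python
import numpy as np

expectedvals = np.array([[42, 31], [24, 99], [2, 78]])   # expected
output = np.array([[31, 42], [99, 24], [78, 2]])          # same rows, columns swapped

print((expectedvals == output).all())                                      # False
print((np.sort(expectedvals, axis=1) == np.sort(output, axis=1)).all())    # True
```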
| 2015-08-19T03:59:44Z | [] | [] |
Traceback (most recent call last):
File "/Users/ch/repo/pandas/.tox/py27/lib/python2.7/site-packages/pandas/io/tests/test_json/test_ujson.py", line 1141, in testArrayNumpyLabelled
self.assertTrue((expectedvals == output[0]).all())
AssertionError: False is not true
| 10,813 |
||||
pandas-dev/pandas | pandas-dev__pandas-11114 | d1feb49267da6074603c6a9dbf6314681378cd81 | diff --git a/.travis.yml b/.travis.yml
--- a/.travis.yml
+++ b/.travis.yml
@@ -43,13 +43,6 @@ matrix:
- CLIPBOARD_GUI=gtk2
- BUILD_TYPE=conda
- DOC_BUILD=true # if rst files were changed, build docs in parallel with tests
- - python: 3.3
- env:
- - JOB_NAME: "33_nslow"
- - NOSE_ARGS="not slow and not disabled"
- - FULL_DEPS=true
- - CLIPBOARD=xsel
- - BUILD_TYPE=conda
- python: 3.4
env:
- JOB_NAME: "34_nslow"
@@ -64,6 +57,13 @@ matrix:
- FULL_DEPS=true
- CLIPBOARD=xsel
- BUILD_TYPE=conda
+ - python: 3.3
+ env:
+ - JOB_NAME: "33_nslow"
+ - NOSE_ARGS="not slow and not disabled"
+ - FULL_DEPS=true
+ - CLIPBOARD=xsel
+ - BUILD_TYPE=conda
- python: 2.7
env:
- JOB_NAME: "27_slow"
@@ -104,10 +104,10 @@ matrix:
- BUILD_TYPE=pydata
- PANDAS_TESTING_MODE="deprecate"
allow_failures:
- - python: 3.5
+ - python: 3.3
env:
- - JOB_NAME: "35_nslow"
- - NOSE_ARGS="not slow and not network and not disabled"
+ - JOB_NAME: "33_nslow"
+ - NOSE_ARGS="not slow and not disabled"
- FULL_DEPS=true
- CLIPBOARD=xsel
- BUILD_TYPE=conda
diff --git a/ci/requirements-3.5.txt b/ci/requirements-3.5.txt
--- a/ci/requirements-3.5.txt
+++ b/ci/requirements-3.5.txt
@@ -10,3 +10,15 @@ cython
scipy
numexpr
pytables
+html5lib
+lxml
+
+# currently causing some warnings
+#sqlalchemy
+#pymysql
+#psycopg2
+
+# not available from conda
+#beautiful-soup
+#bottleneck
+#matplotlib
diff --git a/doc/source/install.rst b/doc/source/install.rst
--- a/doc/source/install.rst
+++ b/doc/source/install.rst
@@ -18,7 +18,7 @@ Instructions for installing from source,
Python version support
----------------------
-Officially Python 2.6, 2.7, 3.3, and 3.4.
+Officially Python 2.6, 2.7, 3.3, 3.4, and 3.5
Installing pandas
-----------------
diff --git a/doc/source/release.rst b/doc/source/release.rst
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -64,6 +64,7 @@ Highlights include:
- Support for reading SAS xport files, see :ref:`here <whatsnew_0170.enhancements.sas_xport>`
- Documentation comparing SAS to *pandas*, see :ref:`here <compare_with_sas>`
- Removal of the automatic TimeSeries broadcasting, deprecated since 0.8.0, see :ref:`here <whatsnew_0170.prior_deprecations>`
+- Compatibility with Python 3.5
See the :ref:`v0.17.0 Whatsnew <whatsnew_0170>` overview for an extensive list
of all enhancements and bugs that have been fixed in 0.17.0.
diff --git a/doc/source/whatsnew/v0.17.0.txt b/doc/source/whatsnew/v0.17.0.txt
--- a/doc/source/whatsnew/v0.17.0.txt
+++ b/doc/source/whatsnew/v0.17.0.txt
@@ -49,6 +49,7 @@ Highlights include:
- Support for reading SAS xport files, see :ref:`here <whatsnew_0170.enhancements.sas_xport>`
- Documentation comparing SAS to *pandas*, see :ref:`here <compare_with_sas>`
- Removal of the automatic TimeSeries broadcasting, deprecated since 0.8.0, see :ref:`here <whatsnew_0170.prior_deprecations>`
+- Compatibility with Python 3.5 (:issue:`11097`)
Check the :ref:`API Changes <whatsnew_0170.api>` and :ref:`deprecations <whatsnew_0170.deprecations>` before updating.
diff --git a/pandas/compat/__init__.py b/pandas/compat/__init__.py
--- a/pandas/compat/__init__.py
+++ b/pandas/compat/__init__.py
@@ -36,9 +36,9 @@
import sys
import types
-PY3 = (sys.version_info[0] >= 3)
PY2 = sys.version_info[0] == 2
-
+PY3 = (sys.version_info[0] >= 3)
+PY35 = (sys.version_info >= (3, 5))
try:
import __builtin__ as builtins
diff --git a/pandas/computation/expr.py b/pandas/computation/expr.py
--- a/pandas/computation/expr.py
+++ b/pandas/computation/expr.py
@@ -516,7 +516,54 @@ def visit_Attribute(self, node, **kwargs):
raise ValueError("Invalid Attribute context {0}".format(ctx.__name__))
- def visit_Call(self, node, side=None, **kwargs):
+ def visit_Call_35(self, node, side=None, **kwargs):
+ """ in 3.5 the starargs attribute was changed to be more flexible, #11097 """
+
+ if isinstance(node.func, ast.Attribute):
+ res = self.visit_Attribute(node.func)
+ elif not isinstance(node.func, ast.Name):
+ raise TypeError("Only named functions are supported")
+ else:
+ try:
+ res = self.visit(node.func)
+ except UndefinedVariableError:
+ # Check if this is a supported function name
+ try:
+ res = FuncNode(node.func.id)
+ except ValueError:
+ # Raise original error
+ raise
+
+ if res is None:
+ raise ValueError("Invalid function call {0}".format(node.func.id))
+ if hasattr(res, 'value'):
+ res = res.value
+
+ if isinstance(res, FuncNode):
+
+ new_args = [ self.visit(arg) for arg in node.args ]
+
+ if node.keywords:
+ raise TypeError("Function \"{0}\" does not support keyword "
+ "arguments".format(res.name))
+
+ return res(*new_args, **kwargs)
+
+ else:
+
+ new_args = [ self.visit(arg).value for arg in node.args ]
+
+ for key in node.keywords:
+ if not isinstance(key, ast.keyword):
+ raise ValueError("keyword error in function call "
+ "'{0}'".format(node.func.id))
+
+ if key.arg:
+ kwargs.append(ast.keyword(keyword.arg, self.visit(keyword.value)))
+
+ return self.const_type(res(*new_args, **kwargs), self.env)
+
+ def visit_Call_legacy(self, node, side=None, **kwargs):
# this can happen with: datetime.datetime
if isinstance(node.func, ast.Attribute):
@@ -607,6 +654,13 @@ def visitor(x, y):
operands = node.values
return reduce(visitor, operands)
+# ast.Call signature changed on 3.5,
+# conditionally change which methods is named
+# visit_Call depending on Python version, #11097
+if compat.PY35:
+ BaseExprVisitor.visit_Call = BaseExprVisitor.visit_Call_35
+else:
+ BaseExprVisitor.visit_Call = BaseExprVisitor.visit_Call_legacy
_python_not_supported = frozenset(['Dict', 'BoolOp', 'In', 'NotIn'])
_numexpr_supported_calls = frozenset(_reductions + _mathops)
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -181,6 +181,7 @@ def build_extensions(self):
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
+ 'Programming Language :: Python :: 3.5',
'Programming Language :: Cython',
'Topic :: Scientific/Engineering',
]
| TST/COMPAT: python 3.5 compat
A few changes are needed: https://travis-ci.org/jreback/pandas/jobs/80331174 (since merged to master)
so the Call node has changed
need to do something like this: https://bitbucket.org/pytest-dev/pytest/pull-requests/296/astcall-signature-changed-on-35/diff
If this is fixed I think everything will pass. I guess this is an API change in python.
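A minimal sketch of the AST change (my own illustration): before 3.5, `ast.Call` carried `starargs`/`kwargs` attributes; from 3.5 on, starred arguments appear as `ast.Starred` nodes inside `args`, so a visitor has to branch on the Python version, much like the linked pytest fix does.
```python
import ast, sys

call = ast.parse("f(x, *rest)", mode="eval").body

if sys.version_info >= (3, 5):
    has_star = any(isinstance(a, ast.Starred) for a in call.args)
else:  # pre-3.5: the attribute this traceback trips over
    has_star = call.starargs is not None
print(has_star)   # True
```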
```
======================================================================
ERROR: test_df_use_case (pandas.computation.tests.test_eval.TestMathNumExprPandas)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/travis/build/jreback/pandas/pandas/computation/tests/test_eval.py", line 1488, in test_df_use_case
parser=self.parser)
File "/home/travis/build/jreback/pandas/pandas/core/frame.py", line 2102, in eval
return _eval(expr, **kwargs)
File "/home/travis/build/jreback/pandas/pandas/computation/eval.py", line 230, in eval
truediv=truediv)
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 656, in __init__
self.terms = self.parse()
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 673, in parse
return self._visitor.visit(self.expr)
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 314, in visit
return visitor(node, **kwargs)
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 320, in visit_Module
return self.visit(expr, **kwargs)
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 314, in visit
return visitor(node, **kwargs)
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 498, in visit_Assign
return self.visit(node.value, **kwargs)
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 314, in visit
return visitor(node, **kwargs)
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 543, in visit_Call
args = [self.visit(targ) for targ in node.args]
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 543, in <listcomp>
args = [self.visit(targ) for targ in node.args]
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 314, in visit
return visitor(node, **kwargs)
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 545, in visit_Call
if node.starargs is not None:
AttributeError: 'Call' object has no attribute 'starargs'
```
| 2015-09-15T16:32:37Z | [] | [] |
Traceback (most recent call last):
File "/home/travis/build/jreback/pandas/pandas/computation/tests/test_eval.py", line 1488, in test_df_use_case
parser=self.parser)
File "/home/travis/build/jreback/pandas/pandas/core/frame.py", line 2102, in eval
return _eval(expr, **kwargs)
File "/home/travis/build/jreback/pandas/pandas/computation/eval.py", line 230, in eval
truediv=truediv)
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 656, in __init__
self.terms = self.parse()
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 673, in parse
return self._visitor.visit(self.expr)
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 314, in visit
return visitor(node, **kwargs)
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 320, in visit_Module
return self.visit(expr, **kwargs)
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 314, in visit
return visitor(node, **kwargs)
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 498, in visit_Assign
return self.visit(node.value, **kwargs)
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 314, in visit
return visitor(node, **kwargs)
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 543, in visit_Call
args = [self.visit(targ) for targ in node.args]
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 543, in <listcomp>
args = [self.visit(targ) for targ in node.args]
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 314, in visit
return visitor(node, **kwargs)
File "/home/travis/build/jreback/pandas/pandas/computation/expr.py", line 545, in visit_Call
if node.starargs is not None:
AttributeError: 'Call' object has no attribute 'starargs'
| 10,840 |
||||
pandas-dev/pandas | pandas-dev__pandas-11366 | b09b84e8e0baf89e78b618cdda30af11087d2e4a | diff --git a/doc/source/whatsnew/v0.17.1.txt b/doc/source/whatsnew/v0.17.1.txt
--- a/doc/source/whatsnew/v0.17.1.txt
+++ b/doc/source/whatsnew/v0.17.1.txt
@@ -94,7 +94,7 @@ Bug Fixes
-
+- Bug in ``pd.eval`` where unary ops in a list error (:issue:`11235`)
- Bug in ``squeeze()`` with zero length arrays (:issue:`11230`, :issue:`8999`)
diff --git a/pandas/computation/expr.py b/pandas/computation/expr.py
--- a/pandas/computation/expr.py
+++ b/pandas/computation/expr.py
@@ -427,7 +427,7 @@ def visit_Str(self, node, **kwargs):
return self.term_type(name, self.env)
def visit_List(self, node, **kwargs):
- name = self.env.add_tmp([self.visit(e).value for e in node.elts])
+ name = self.env.add_tmp([self.visit(e)(self.env) for e in node.elts])
return self.term_type(name, self.env)
visit_Tuple = visit_List
@@ -655,7 +655,7 @@ def visitor(x, y):
return reduce(visitor, operands)
# ast.Call signature changed on 3.5,
-# conditionally change which methods is named
+# conditionally change which methods is named
# visit_Call depending on Python version, #11097
if compat.PY35:
BaseExprVisitor.visit_Call = BaseExprVisitor.visit_Call_35
| Dataframe.eval(): Negative number in list passed to 'in'-expression causes crash on python 3.4.0
The following crashes on python 3.4.0. It works fine on Python 2.7.5.
```
>>> import pandas
>>> from io import StringIO
>>> data = "foo,bar\n11,12"
>>> df = pandas.read_csv(StringIO(data))
>>> df.eval('foo in [11, 32]')
0 True
dtype: bool
>>> df.eval('foo in [11, -32]')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/core/frame.py", line 1987, in eval
return _eval(expr, **kwargs)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/eval.py", line 230, in eval
truediv=truediv)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 635, in __init__
self.terms = self.parse()
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 652, in parse
return self._visitor.visit(self.expr)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 314, in visit
return visitor(node, **kwargs)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 320, in visit_Module
return self.visit(expr, **kwargs)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 314, in visit
return visitor(node, **kwargs)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 323, in visit_Expr
return self.visit(node.value, **kwargs)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 314, in visit
return visitor(node, **kwargs)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 560, in visit_Compare
return self.visit(binop)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 314, in visit
return visitor(node, **kwargs)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 404, in visit_BinOp
op, op_class, left, right = self._possibly_transform_eq_ne(node)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 357, in _possibly_transform_eq_ne
right = self.visit(node.right, side='right')
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 314, in visit
return visitor(node, **kwargs)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 430, in visit_List
name = self.env.add_tmp([self.visit(e).value for e in node.elts])
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 430, in <listcomp>
name = self.env.add_tmp([self.visit(e).value for e in node.elts])
AttributeError: 'UnaryOp' object has no attribute 'value'
>>>
```
| pls show a reproducible example, e.g. show how the actual frame is created so it can be copy-pasted
pls show `pd.show_versions()`
Sorry, my bad. The lines actually defining the data fell away during copy-paste. I've updated the example above.
Here's show_versions()
```
>>> import pandas
>>> pandas.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.4.0.final.0
python-bits: 64
OS: Linux
OS-release: 3.13.0-48-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
pandas: 0.16.2
nose: None
Cython: None
numpy: 1.9.3
scipy: None
statsmodels: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.4.2
pytz: 2015.6
bottleneck: None
tables: None
numexpr: 2.4.4
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: None
pymysql: None
psycopg2: None
```
yeah, I guess this is parsed differently in 3.4 than in 2.7.
pull-requests to fix are welcome
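Whatever the 2.7/3.4 difference turns out to be, the AttributeError itself comes from the negative literal parsing as a unary minus node rather than a plain constant, so code expecting `.value` on every list element breaks; a small sketch:
```python
import ast

elts = ast.parse("[11, -32]", mode="eval").body.elts
print([type(e).__name__ for e in elts])
# ['Constant', 'UnaryOp']   (the first is 'Num' on older Pythons)
```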
| 2015-10-19T05:40:18Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/core/frame.py", line 1987, in eval
return _eval(expr, **kwargs)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/eval.py", line 230, in eval
truediv=truediv)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 635, in __init__
self.terms = self.parse()
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 652, in parse
return self._visitor.visit(self.expr)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 314, in visit
return visitor(node, **kwargs)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 320, in visit_Module
return self.visit(expr, **kwargs)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 314, in visit
return visitor(node, **kwargs)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 323, in visit_Expr
return self.visit(node.value, **kwargs)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 314, in visit
return visitor(node, **kwargs)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 560, in visit_Compare
return self.visit(binop)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 314, in visit
return visitor(node, **kwargs)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 404, in visit_BinOp
op, op_class, left, right = self._possibly_transform_eq_ne(node)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 357, in _possibly_transform_eq_ne
right = self.visit(node.right, side='right')
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 314, in visit
return visitor(node, **kwargs)
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 430, in visit_List
name = self.env.add_tmp([self.visit(e).value for e in node.elts])
File "/home/tobias/Envs/qcache-py3/lib/python3.4/site-packages/pandas/computation/expr.py", line 430, in <listcomp>
name = self.env.add_tmp([self.visit(e).value for e in node.elts])
AttributeError: 'UnaryOp' object has no attribute 'value'
| 10,873 |
|||
pandas-dev/pandas | pandas-dev__pandas-11400 | 88e8d6e7dfcea3435d7695a0c312690a57c05663 | diff --git a/doc/source/whatsnew/v0.17.1.txt b/doc/source/whatsnew/v0.17.1.txt
--- a/doc/source/whatsnew/v0.17.1.txt
+++ b/doc/source/whatsnew/v0.17.1.txt
@@ -70,7 +70,7 @@ Bug Fixes
- Bug in ``HDFStore.append`` with strings whose encoded length exceded the max unencoded length (:issue:`11234`)
- Bug in merging ``datetime64[ns, tz]`` dtypes (:issue:`11405`)
- Bug in ``HDFStore.select`` when comparing with a numpy scalar in a where clause (:issue:`11283`)
-
+- Bug in using ``DataFrame.ix`` with a multi-index indexer(:issue:`11372`)
- Bug in tz-conversions with an ambiguous time and ``.dt`` accessors (:issue:`11295`)
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -443,11 +443,14 @@ def can_do_equal_len():
# we have an equal len Frame
if isinstance(value, ABCDataFrame) and value.ndim > 1:
sub_indexer = list(indexer)
+ multiindex_indexer = isinstance(labels, MultiIndex)
for item in labels:
if item in value:
sub_indexer[info_axis] = item
- v = self._align_series(tuple(sub_indexer), value[item])
+ v = self._align_series(
+ tuple(sub_indexer), value[item], multiindex_indexer
+ )
else:
v = np.nan
@@ -516,8 +519,28 @@ def can_do_equal_len():
self.obj._data = self.obj._data.setitem(indexer=indexer, value=value)
self.obj._maybe_update_cacher(clear=True)
- def _align_series(self, indexer, ser):
- # indexer to assign Series can be tuple, slice, scalar
+ def _align_series(self, indexer, ser, multiindex_indexer=False):
+ """
+ Parameters
+ ----------
+ indexer : tuple, slice, scalar
+ The indexer used to get the locations that will be set to
+ `ser`
+
+ ser : pd.Series
+ The values to assign to the locations specified by `indexer`
+
+ multiindex_indexer : boolean, optional
+ Defaults to False. Should be set to True if `indexer` was from
+ a `pd.MultiIndex`, to avoid unnecessary broadcasting.
+
+
+ Returns:
+ --------
+ `np.array` of `ser` broadcast to the appropriate shape for assignment
+ to the locations selected by `indexer`
+
+ """
if isinstance(indexer, (slice, np.ndarray, list, Index)):
indexer = tuple([indexer])
@@ -555,7 +578,7 @@ def _align_series(self, indexer, ser):
ser = ser.reindex(obj.axes[0][indexer[0]], copy=True)._values
# single indexer
- if len(indexer) > 1:
+ if len(indexer) > 1 and not multiindex_indexer:
l = len(indexer[1])
ser = np.tile(ser, l).reshape(l, -1).T
| DataFrame.ix[idx, :] = value sets wrong values when idx is a MultiIndex and DataFrame.columns is also a MultiIndex
This code is broken in `0.17.0` but not in `0.15.2`:
``` python
import pandas as pd
import numpy as np
np.random.seed(1)
from itertools import product
from pandas.util.testing import assert_frame_equal
pd.show_versions()
idx = pd.MultiIndex.from_tuples(
list(
product(['A', 'B', 'C'],
pd.date_range('2015-01-01', '2015-04-01', freq='MS'))
)
)
sub = pd.MultiIndex.from_tuples(
[('A', pd.Timestamp('2015-01-01')), ('A', pd.Timestamp('2015-02-01'))]
)
# if cols = ['foo', 'bar', 'baz', 'quux'], there is no error.
cols = pd.MultiIndex.from_tuples(
list(
product(['foo', 'bar'],
pd.date_range('2015-01-01', '2015-02-01', freq='MS'))
)
)
test = pd.DataFrame(np.random.random((12, 4)), index=idx, columns=cols)
vals = pd.DataFrame(np.random.random((2, 4)), index=sub, columns=cols)
test.ix[sub, :] = vals
print test.ix[sub, :]
print vals
assert_frame_equal(test.ix[sub, :], vals)
```
### 0.17.0
``` python
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.10.final.0
python-bits: 64
OS: Windows
OS-release: 7
machine: AMD64
processor: Intel64 Family 6 Model 26 Stepping 5, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
pandas: 0.17.0
nose: 1.3.7
pip: 7.1.0
setuptools: 18.0.1
Cython: 0.22
numpy: 1.10.1
scipy: 0.16.0
statsmodels: 0.6.1
IPython: 3.2.1
sphinx: 1.3.1
patsy: 0.4.0
dateutil: 2.4.1
pytz: 2015.4
blosc: None
bottleneck: 1.0.0
tables: 3.2.2
numexpr: 2.4.4
matplotlib: 1.4.3
openpyxl: None
xlrd: 0.9.4
xlwt: None
xlsxwriter: 0.7.3
lxml: None
bs4: 4.3.2
html5lib: 0.999
httplib2: None
apiclient: None
sqlalchemy: 1.0.7
pymysql: None
psycopg2: None
foo bar
2015-01-01 2015-02-01 2015-01-01 2015-02-01
A 2015-01-01 0.287775 0.130029 0.019367 0.678836
2015-02-01 0.287775 0.130029 0.019367 0.678836
foo bar
2015-01-01 2015-02-01 2015-01-01 2015-02-01
A 2015-01-01 0.287775 0.130029 0.019367 0.678836
2015-02-01 0.211628 0.265547 0.491573 0.053363
Traceback (most recent call last):
File "c:\dev\code\sandbox\multiindex.py", line 41, in <module>
assert_frame_equal(test.ix[sub, :], vals)
File "c:\python\envs\pd017\lib\site-packages\pandas\util\testing.py", line 1028, in assert_frame_equal
obj='DataFrame.iloc[:, {0}]'.format(i))
File "c:\python\envs\pd017\lib\site-packages\pandas\util\testing.py", line 925, in assert_series_equal
check_less_precise, obj='{0}'.format(obj))
File "pandas\src\testing.pyx", line 58, in pandas._testing.assert_almost_equal (pandas\src\testing.c:3809)
File "pandas\src\testing.pyx", line 147, in pandas._testing.assert_almost_equal (pandas\src\testing.c:2685)
File "c:\python\envs\pd017\lib\site-packages\pandas\util\testing.py", line 798, in raise_assert_detail
raise AssertionError(msg)
AssertionError: DataFrame.iloc[:, 0] are different
DataFrame.iloc[:, 0] values are different (50.0 %)
[left]: [0.287775338586, 0.287775338586]
[right]: [0.287775338586, 0.211628116]
```
### 0.15.2
``` python
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.10.final.0
python-bits: 64
OS: Windows
OS-release: 7
machine: AMD64
processor: Intel64 Family 6 Model 26 Stepping 5, GenuineIntel
byteorder: little
LC_ALL: None
LANG: en_GB
pandas: 0.15.2
nose: 1.3.7
Cython: 0.22
numpy: 1.9.2
scipy: 0.15.1
statsmodels: None
IPython: 3.2.1
sphinx: 1.3.1
patsy: 0.3.0
dateutil: 2.4.1
pytz: 2015.4
bottleneck: 1.0.0
tables: 3.2.0
numexpr: 2.4.3
matplotlib: 1.4.3
openpyxl: 1.8.5
xlrd: 0.9.4
xlwt: 0.7.5
xlsxwriter: 0.7.3
lxml: 3.4.4
bs4: 4.3.2
html5lib: 0.999
httplib2: None
apiclient: None
rpy2: None
sqlalchemy: 1.0.7
pymysql: None
psycopg2: None
foo bar
2015-01-01 2015-02-01 2015-01-01 2015-02-01
A 2015-01-01 0.287775 0.130029 0.019367 0.678836
2015-02-01 0.211628 0.265547 0.491573 0.053363
foo bar
2015-01-01 2015-02-01 2015-01-01 2015-02-01
A 2015-01-01 0.287775 0.130029 0.019367 0.678836
2015-02-01 0.211628 0.265547 0.491573 0.053363
```
| Indexing with a specific set of columns also gives the error:
Code sample:
``` python
import pandas as pd
import numpy as np
np.random.seed(1)
from itertools import product
from pandas.util.testing import assert_frame_equal
pd.show_versions()
idx = pd.MultiIndex.from_tuples(
list(
product(['A', 'B', 'C'],
pd.date_range('2015-01-01', '2015-04-01', freq='MS'))
)
)
cols = pd.MultiIndex.from_tuples(
list(
product(['foo', 'bar'],
pd.date_range('2016-01-01', '2016-02-01', freq='MS'))
)
)
# if cols = ['foo', 'bar', 'baz', 'quux'], there is no error.
test = pd.DataFrame(np.random.random((12, 4)), index=idx, columns=cols)
subidx = pd.MultiIndex.from_tuples(
[('A', pd.Timestamp('2015-01-01')), ('A', pd.Timestamp('2015-02-01'))]
)
subcols = pd.MultiIndex.from_tuples(
[('foo', pd.Timestamp('2016-01-01')), ('foo', pd.Timestamp('2016-02-01'))]
)
vals = pd.DataFrame(np.random.random((2, 2)), index=subidx, columns=subcols)
test.ix[subidx, subcols] = vals
print test.ix[subidx, subcols]
print vals
assert_frame_equal(test.ix[subidx, subcols], vals)
```
### 0.17.0
``` python
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.10.final.0
python-bits: 64
OS: Windows
OS-release: 7
machine: AMD64
processor: Intel64 Family 6 Model 26 Stepping 5, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
pandas: 0.17.0
nose: 1.3.7
pip: 7.1.0
setuptools: 18.0.1
Cython: 0.22
numpy: 1.10.1
scipy: 0.16.0
statsmodels: 0.6.1
IPython: 3.2.1
sphinx: 1.3.1
patsy: 0.4.0
dateutil: 2.4.1
pytz: 2015.4
blosc: None
bottleneck: 1.0.0
tables: 3.2.2
numexpr: 2.4.4
matplotlib: 1.4.3
openpyxl: None
xlrd: 0.9.4
xlwt: None
xlsxwriter: 0.7.3
lxml: None
bs4: 4.3.2
html5lib: 0.999
httplib2: None
apiclient: None
sqlalchemy: 1.0.7
pymysql: None
psycopg2: None
foo
2016-01-01 2016-02-01
A 2015-01-01 0.287775 0.130029
2015-02-01 0.287775 0.130029
foo
2016-01-01 2016-02-01
A 2015-01-01 0.287775 0.130029
2015-02-01 0.019367 0.678836
Traceback (most recent call last):
File "c:\dev\code\sandbox\multiindex.py", line 48, in <module>
assert_frame_equal(test.ix[subidx, subcols], vals)
File "c:\python\envs\pd017\lib\site-packages\pandas\util\testing.py", line 1028, in assert_frame_equal
obj='DataFrame.iloc[:, {0}]'.format(i))
File "c:\python\envs\pd017\lib\site-packages\pandas\util\testing.py", line 925, in assert_series_equal
check_less_precise, obj='{0}'.format(obj))
File "pandas\src\testing.pyx", line 58, in pandas._testing.assert_almost_equal (pandas\src\testing.c:3809)
File "pandas\src\testing.pyx", line 147, in pandas._testing.assert_almost_equal (pandas\src\testing.c:2685)
File "c:\python\envs\pd017\lib\site-packages\pandas\util\testing.py", line 798, in raise_assert_detail
raise AssertionError(msg)
AssertionError: DataFrame.iloc[:, 0] are different
DataFrame.iloc[:, 0] values are different (50.0 %)
[left]: [0.287775338586, 0.287775338586]
[right]: [0.287775338586, 0.0193669578703]
```
(Deleted- misread something, my previous suggestion was not really a fix)
hmm, surprised that broke. there is not much testing on that sub-section actually
The issue is here: https://github.com/pydata/pandas/blob/master/pandas/core/indexing.py#L450
`self._align_series` is called on a sub-section of the frame, but the aligner looks at the object, sees that it is a frame, and so gives back the wrong result.
So we could probably pass in an additional parameter which would determine this.
Since I've already got two test cases, I'd be happy to have a go if I can be pointed in the right direction. I'll start by looking at the history of `indexing.py` and following any referenced issues / PRs
the pointer above is to the relevant issues.
the way to do this is to set up the test cases and the expected results (in test_indexing); they should fail before a fix, then you can step through to see where to put a fix and go from there
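A sketch of such a test, essentially the second reproduction above wrapped up for test_indexing (the function name is illustrative only; the seed just makes the random data deterministic):
``` python
import numpy as np
import pandas as pd
from itertools import product
from pandas.util.testing import assert_frame_equal


def test_multiindex_setitem_roundtrip():
    # regression-test sketch for the .ix alignment bug described above
    np.random.seed(1)
    idx = pd.MultiIndex.from_tuples(
        list(product(['A', 'B', 'C'],
                     pd.date_range('2015-01-01', '2015-04-01', freq='MS'))))
    cols = pd.MultiIndex.from_tuples(
        list(product(['foo', 'bar'],
                     pd.date_range('2016-01-01', '2016-02-01', freq='MS'))))
    test = pd.DataFrame(np.random.random((12, 4)), index=idx, columns=cols)

    subidx = pd.MultiIndex.from_tuples(
        [('A', pd.Timestamp('2015-01-01')), ('A', pd.Timestamp('2015-02-01'))])
    subcols = pd.MultiIndex.from_tuples(
        [('foo', pd.Timestamp('2016-01-01')), ('foo', pd.Timestamp('2016-02-01'))])
    vals = pd.DataFrame(np.random.random((2, 2)), index=subidx, columns=subcols)

    test.ix[subidx, subcols] = vals
    # whatever was assigned should be read back unchanged
    assert_frame_equal(test.ix[subidx, subcols], vals)
```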
| 2015-10-21T08:40:16Z | [] | [] |
Traceback (most recent call last):
File "c:\dev\code\sandbox\multiindex.py", line 41, in <module>
assert_frame_equal(test.ix[sub, :], vals)
File "c:\python\envs\pd017\lib\site-packages\pandas\util\testing.py", line 1028, in assert_frame_equal
obj='DataFrame.iloc[:, {0}]'.format(i))
File "c:\python\envs\pd017\lib\site-packages\pandas\util\testing.py", line 925, in assert_series_equal
check_less_precise, obj='{0}'.format(obj))
File "pandas\src\testing.pyx", line 58, in pandas._testing.assert_almost_equal (pandas\src\testing.c:3809)
File "pandas\src\testing.pyx", line 147, in pandas._testing.assert_almost_equal (pandas\src\testing.c:2685)
File "c:\python\envs\pd017\lib\site-packages\pandas\util\testing.py", line 798, in raise_assert_detail
raise AssertionError(msg)
AssertionError: DataFrame.iloc[:, 0] are different
| 10,878 |
|||
pandas-dev/pandas | pandas-dev__pandas-11427 | faa6cc744ba6086ddcef66c462823a169e1a733c | diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -954,6 +954,52 @@ These can be used as arguments to ``date_range``, ``bdate_range``, constructors
for ``DatetimeIndex``, as well as various other timeseries-related functions
in pandas.
+Anchored Offset Semantics
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+For those offsets that are anchored to the start or end of specific
+frequency (``MonthEnd``, ``MonthBegin``, ``WeekEnd``, etc) the following
+rules apply to rolling forward and backwards.
+
+When ``n`` is not 0, if the given date is not on an anchor point, it snapped to the next(previous)
+anchor point, and moved ``|n|-1`` additional steps forwards or backwards.
+
+.. ipython:: python
+
+ pd.Timestamp('2014-01-02') + MonthBegin(n=1)
+ pd.Timestamp('2014-01-02') + MonthEnd(n=1)
+
+ pd.Timestamp('2014-01-02') - MonthBegin(n=1)
+ pd.Timestamp('2014-01-02') - MonthEnd(n=1)
+
+ pd.Timestamp('2014-01-02') + MonthBegin(n=4)
+ pd.Timestamp('2014-01-02') - MonthBegin(n=4)
+
+If the given date *is* on an anchor point, it is moved ``|n|`` points forwards
+or backwards.
+
+.. ipython:: python
+
+ pd.Timestamp('2014-01-01') + MonthBegin(n=1)
+ pd.Timestamp('2014-01-31') + MonthEnd(n=1)
+
+ pd.Timestamp('2014-01-01') - MonthBegin(n=1)
+ pd.Timestamp('2014-01-31') - MonthEnd(n=1)
+
+ pd.Timestamp('2014-01-01') + MonthBegin(n=4)
+ pd.Timestamp('2014-01-31') - MonthBegin(n=4)
+
+For the case when ``n=0``, the date is not moved if on an anchor point, otherwise
+it is rolled forward to the next anchor point.
+
+.. ipython:: python
+
+ pd.Timestamp('2014-01-02') + MonthBegin(n=0)
+ pd.Timestamp('2014-01-02') + MonthEnd(n=0)
+
+ pd.Timestamp('2014-01-01') + MonthBegin(n=0)
+ pd.Timestamp('2014-01-31') + MonthEnd(n=0)
+
.. _timeseries.legacyaliases:
Legacy Aliases
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt
--- a/doc/source/whatsnew/v0.18.0.txt
+++ b/doc/source/whatsnew/v0.18.0.txt
@@ -190,7 +190,7 @@ Bug Fixes
-
+ - Bug in vectorized ``DateOffset`` when ``n`` parameter is ``0`` (:issue:`11370`)
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -444,7 +444,7 @@ def _beg_apply_index(self, i, freq):
from pandas.tseries.frequencies import get_freq_code
base, mult = get_freq_code(freq)
base_period = i.to_period(base)
- if self.n < 0:
+ if self.n <= 0:
# when subtracting, dates on start roll to prior
roll = np.where(base_period.to_timestamp() == i - off,
self.n, self.n + 1)
@@ -464,7 +464,7 @@ def _end_apply_index(self, i, freq):
base, mult = get_freq_code(freq)
base_period = i.to_period(base)
if self.n > 0:
- # when adding, dtates on end roll to next
+ # when adding, dates on end roll to next
roll = np.where(base_period.to_timestamp(how='end') == i - off,
self.n, self.n - 1)
else:
@@ -1081,8 +1081,7 @@ def apply(self, other):
@apply_index_wraps
def apply_index(self, i):
- months = self.n - 1 if self.n >= 0 else self.n
- shifted = tslib.shift_months(i.asi8, months, 'end')
+ shifted = tslib.shift_months(i.asi8, self.n, 'end')
return i._shallow_copy(shifted)
def onOffset(self, dt):
@@ -1108,8 +1107,7 @@ def apply(self, other):
@apply_index_wraps
def apply_index(self, i):
- months = self.n + 1 if self.n < 0 else self.n
- shifted = tslib.shift_months(i.asi8, months, 'start')
+ shifted = tslib.shift_months(i.asi8, self.n, 'start')
return i._shallow_copy(shifted)
def onOffset(self, dt):
@@ -1777,6 +1775,7 @@ def apply(self, other):
@apply_index_wraps
def apply_index(self, i):
freq_month = 12 if self.startingMonth == 1 else self.startingMonth - 1
+ # freq_month = self.startingMonth
freqstr = 'Q-%s' % (_int_to_month[freq_month],)
return self._beg_apply_index(i, freqstr)
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -4458,7 +4458,8 @@ def shift_months(int64_t[:] dtindex, int months, object day=None):
Py_ssize_t i
pandas_datetimestruct dts
int count = len(dtindex)
- int days_in_current_month
+ int months_to_roll
+ bint roll_check
int64_t[:] out = np.empty(count, dtype='int64')
if day is None:
@@ -4472,36 +4473,44 @@ def shift_months(int64_t[:] dtindex, int months, object day=None):
dts.day = min(dts.day, days_in_month(dts))
out[i] = pandas_datetimestruct_to_datetime(PANDAS_FR_ns, &dts)
elif day == 'start':
+ roll_check = False
+ if months <= 0:
+ months += 1
+ roll_check = True
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT: out[i] = NPY_NAT; continue
pandas_datetime_to_datetimestruct(dtindex[i], PANDAS_FR_ns, &dts)
- dts.year = _year_add_months(dts, months)
- dts.month = _month_add_months(dts, months)
+ months_to_roll = months
+
+ # offset semantics - if on the anchor point and going backwards
+ # shift to next
+ if roll_check and dts.day == 1:
+ months_to_roll -= 1
+
+ dts.year = _year_add_months(dts, months_to_roll)
+ dts.month = _month_add_months(dts, months_to_roll)
+ dts.day = 1
- # offset semantics - when subtracting if at the start anchor
- # point, shift back by one more month
- if months <= 0 and dts.day == 1:
- dts.year = _year_add_months(dts, -1)
- dts.month = _month_add_months(dts, -1)
- else:
- dts.day = 1
out[i] = pandas_datetimestruct_to_datetime(PANDAS_FR_ns, &dts)
elif day == 'end':
+ roll_check = False
+ if months > 0:
+ months -= 1
+ roll_check = True
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT: out[i] = NPY_NAT; continue
pandas_datetime_to_datetimestruct(dtindex[i], PANDAS_FR_ns, &dts)
- days_in_current_month = days_in_month(dts)
-
- dts.year = _year_add_months(dts, months)
- dts.month = _month_add_months(dts, months)
+ months_to_roll = months
# similar semantics - when adding shift forward by one
# month if already at an end of month
- if months >= 0 and dts.day == days_in_current_month:
- dts.year = _year_add_months(dts, 1)
- dts.month = _month_add_months(dts, 1)
+ if roll_check and dts.day == days_in_month(dts):
+ months_to_roll += 1
+
+ dts.year = _year_add_months(dts, months_to_roll)
+ dts.month = _month_add_months(dts, months_to_roll)
dts.day = days_in_month(dts)
out[i] = pandas_datetimestruct_to_datetime(PANDAS_FR_ns, &dts)
| Vectorised addition of MonthOffset(n=0) returns different values to item-by-item addition
This code returns different values in `0.17.0` and `0.15.2`
``` python
import pandas as pd
from pandas.util.testing import assert_index_equal
pd.show_versions()
offsets = [
pd.offsets.Day, pd.offsets.MonthBegin,
pd.offsets.QuarterBegin, pd.offsets.YearBegin,
]
dates = pd.date_range('2011-01-01', '2011-01-05', freq='D')
for offset in offsets:
# adding each item individually or vectorised should give same answer
expected_vec = dates + offset(n=0)
expected = pd.DatetimeIndex([d + offset(n=0) for d in dates])
msg = "offset: {}, vectorised: {}, individual: {}".format(
offset, expected_vec, expected
)
try:
if pd.__version__ == '0.17.0':
assert_index_equal(expected_vec, expected, check_names=False)
else:
assert_index_equal(expected_vec, expected)
except AssertionError as er:
raise Exception(msg + str(er))
```
### 0.17.0
``` python
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.10.final.0
python-bits: 64
OS: Windows
OS-release: 7
machine: AMD64
processor: Intel64 Family 6 Model 26 Stepping 5, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
pandas: 0.17.0
nose: 1.3.7
pip: 7.1.0
setuptools: 18.0.1
Cython: 0.22
numpy: 1.10.1
scipy: 0.16.0
statsmodels: 0.6.1
IPython: 3.2.1
sphinx: 1.3.1
patsy: 0.4.0
dateutil: 2.4.1
pytz: 2015.4
blosc: None
bottleneck: 1.0.0
tables: 3.2.2
numexpr: 2.4.4
matplotlib: 1.4.3
openpyxl: None
xlrd: 0.9.4
xlwt: None
xlsxwriter: 0.7.3
lxml: None
bs4: 4.3.2
html5lib: 0.999
httplib2: None
apiclient: None
sqlalchemy: 1.0.7
pymysql: None
psycopg2: None
Traceback (most recent call last):
File "c:\dev\code\sandbox\pandas_17_vs_15_dateoffsets.py", line 24, in <module>
raise Exception(msg + str(er))
Exception: offset: <class 'pandas.tseries.offsets.MonthBegin'>, vectorised: DatetimeIndex(['2010-12-01', '2011-01-01', '2011-01-01', '2011-01-01',
'2011-01-01'],
dtype='datetime64[ns]', freq=None), individual: DatetimeIndex(['2011-01-01', '2011-02-01', '2011-02-01', '2011-02-01',
'2011-02-01'],
dtype='datetime64[ns]', freq=None)Index are different
Index values are different (100.0 %)
[left]: DatetimeIndex(['2010-12-01', '2011-01-01', '2011-01-01', '2011-01-01',
'2011-01-01'],
dtype='datetime64[ns]', freq=None)
[right]: DatetimeIndex(['2011-01-01', '2011-02-01', '2011-02-01', '2011-02-01',
'2011-02-01'],
dtype='datetime64[ns]', freq=None)
```
### 0.15.2
``` python
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.10.final.0
python-bits: 64
OS: Windows
OS-release: 7
machine: AMD64
processor: Intel64 Family 6 Model 26 Stepping 5, GenuineIntel
byteorder: little
LC_ALL: None
LANG: en_GB
pandas: 0.15.2
nose: 1.3.7
Cython: 0.22
numpy: 1.9.2
scipy: 0.15.1
statsmodels: None
IPython: 3.2.1
sphinx: 1.3.1
patsy: 0.3.0
dateutil: 2.4.1
pytz: 2015.4
bottleneck: 1.0.0
tables: 3.2.0
numexpr: 2.4.3
matplotlib: 1.4.3
openpyxl: 1.8.5
xlrd: 0.9.4
xlwt: 0.7.5
xlsxwriter: 0.7.3
lxml: 3.4.4
bs4: 4.3.2
html5lib: 0.999
httplib2: None
apiclient: None
rpy2: None
sqlalchemy: 1.0.7
pymysql: None
psycopg2: None
```
| This is from #10744, I didn't have the n=0 semantics right (and apparently didn't test!). It'll be a couple days, but I'll submit a fix.
Many thanks for the quick response!
`MonthEnd` also not working:
Test script
``` python
import pandas as pd
from pandas.util.testing import assert_index_equal
pd.show_versions()
offsets = [
pd.offsets.MonthEnd,
pd.offsets.QuarterEnd, pd.offsets.YearEnd,
]
dates = pd.date_range('2011-01-01', '2011-01-05', freq='D')
for offset in offsets:
# adding each item individually or vectorised should give same answer
expected_vec = dates + offset(n=0)
expected = pd.DatetimeIndex([d + offset(n=0) for d in dates])
msg = "offset: {}, vectorised: {}, individual: {}".format(
offset, expected_vec, expected
)
try:
if pd.__version__ == '0.17.0':
assert_index_equal(expected_vec, expected, check_names=False)
else:
assert_index_equal(expected_vec, expected)
except AssertionError as er:
raise Exception(msg + str(er))
```
### 0.17.0
``` python
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.10.final.0
python-bits: 64
OS: Windows
OS-release: 7
machine: AMD64
processor: Intel64 Family 6 Model 26 Stepping 5, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
pandas: 0.17.0
nose: 1.3.7
pip: 7.1.0
setuptools: 18.0.1
Cython: 0.22
numpy: 1.10.1
scipy: 0.16.0
statsmodels: 0.6.1
IPython: 3.2.1
sphinx: 1.3.1
patsy: 0.4.0
dateutil: 2.4.1
pytz: 2015.4
blosc: None
bottleneck: 1.0.0
tables: 3.2.2
numexpr: 2.4.4
matplotlib: 1.4.3
openpyxl: None
xlrd: 0.9.4
xlwt: None
xlsxwriter: 0.7.3
lxml: None
bs4: 4.3.2
html5lib: 0.999
httplib2: None
apiclient: None
sqlalchemy: 1.0.7
pymysql: None
psycopg2: None
Traceback (most recent call last):
File "c:\dev\code\sandbox\pandas_17_vs_15_dateoffsets.py", line 34, in <module>
raise Exception(msg + str(er))
Exception: offset: <class 'pandas.tseries.offsets.MonthEnd'>, vectorised: DatetimeIndex(['2010-12-31', '2010-12-31', '2010-12-31', '2010-12-31',
'2010-12-31'],
dtype='datetime64[ns]', freq=None), individual: DatetimeIndex(['2011-01-31', '2011-01-31', '2011-01-31', '2011-01-31',
'2011-01-31'],
dtype='datetime64[ns]', freq=None)Index are different
Index values are different (100.0 %)
[left]: DatetimeIndex(['2010-12-31', '2010-12-31', '2010-12-31', '2010-12-31',
'2010-12-31'],
dtype='datetime64[ns]', freq=None)
[right]: DatetimeIndex(['2011-01-31', '2011-01-31', '2011-01-31', '2011-01-31',
'2011-01-31'],
dtype='datetime64[ns]', freq=None)
```
### 0.15.2
``` python
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.10.final.0
python-bits: 64
OS: Windows
OS-release: 7
machine: AMD64
processor: Intel64 Family 6 Model 26 Stepping 5, GenuineIntel
byteorder: little
LC_ALL: None
LANG: en_GB
pandas: 0.15.2
nose: 1.3.7
Cython: 0.22
numpy: 1.9.2
scipy: 0.15.1
statsmodels: None
IPython: 3.2.1
sphinx: 1.3.1
patsy: 0.3.0
dateutil: 2.4.1
pytz: 2015.4
bottleneck: 1.0.0
tables: 3.2.0
numexpr: 2.4.3
matplotlib: 1.4.3
openpyxl: 1.8.5
xlrd: 0.9.4
xlwt: 0.7.5
xlsxwriter: 0.7.3
lxml: 3.4.4
bs4: 4.3.2
html5lib: 0.999
httplib2: None
apiclient: None
rpy2: None
sqlalchemy: 1.0.7
pymysql: None
psycopg2: None
```
Probably also wrong for `YearEnd` and `QuarterEnd` too as the counting logic is shared IIRC.
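The same vectorised-vs-scalar check from the scripts above can be pointed at the quarterly and yearly offsets as well (a small sketch that simply reuses the reporter's pattern):
``` python
import pandas as pd

dates = pd.date_range('2011-01-01', '2011-01-05', freq='D')
for offset in [pd.offsets.QuarterBegin, pd.offsets.QuarterEnd,
               pd.offsets.YearBegin, pd.offsets.YearEnd]:
    vectorised = dates + offset(n=0)
    individual = pd.DatetimeIndex([d + offset(n=0) for d in dates])
    print("%s: match=%s" % (offset.__name__, (vectorised == individual).all()))
```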
| 2015-10-24T21:09:45Z | [] | [] |
Traceback (most recent call last):
File "c:\dev\code\sandbox\pandas_17_vs_15_dateoffsets.py", line 24, in <module>
raise Exception(msg + str(er))
Exception: offset: <class 'pandas.tseries.offsets.MonthBegin'>, vectorised: DatetimeIndex(['2010-12-01', '2011-01-01', '2011-01-01', '2011-01-01',
| 10,881 |
|||
pandas-dev/pandas | pandas-dev__pandas-11653 | 2d038327b7ea805a2d9c3db9ca3dd2b459e694bb | diff --git a/doc/source/whatsnew/v0.17.1.txt b/doc/source/whatsnew/v0.17.1.txt
--- a/doc/source/whatsnew/v0.17.1.txt
+++ b/doc/source/whatsnew/v0.17.1.txt
@@ -169,7 +169,7 @@ Bug Fixes
-
+- Bug in indexing with a ``range``, (:issue:`11652`)
- Bug in ``to_sql`` using unicode column names giving UnicodeEncodeError with (:issue:`11431`).
diff --git a/pandas/core/index.py b/pandas/core/index.py
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -1755,7 +1755,8 @@ def get_loc(self, key, method=None, tolerance=None):
if tolerance is not None:
raise ValueError('tolerance argument only valid if using pad, '
'backfill or nearest lookups')
- return self._engine.get_loc(_values_from_object(key))
+ key = _values_from_object(key)
+ return self._engine.get_loc(key)
indexer = self.get_indexer([key], method=method,
tolerance=tolerance)
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -104,6 +104,8 @@ def _get_setitem_indexer(self, key):
if isinstance(key, tuple) and not self.ndim < len(key):
return self._convert_tuple(key, is_setter=True)
+ if isinstance(key, range):
+ return self._convert_range(key, is_setter=True)
try:
return self._convert_to_indexer(key, is_setter=True)
@@ -156,6 +158,10 @@ def _convert_tuple(self, key, is_setter=False):
keyidx.append(idx)
return tuple(keyidx)
+ def _convert_range(self, key, is_setter=False):
+ """ convert a range argument """
+ return list(key)
+
def _convert_scalar_indexer(self, key, axis):
# if we are accessing via lowered dim, use the last dim
ax = self.obj._get_axis(min(axis, self.ndim - 1))
| BUG: ValueError when indexing using range only when length >= 1,000,000
From SO (http://stackoverflow.com/questions/33814223/strange-error-in-pandas-indexing-with-range-when-length-1-000-000)
Pandas raises a ValueError when assigning multiple values to a Series (or DataFrame) using range(x) where x > 1. This error is raised only when its length is one million or larger.
``` python
import pandas as pd
for x in [5, 999999, 1000000]:
s = pd.Series(index=range(x))
print('series length =', len(s))
# assigning value with range(1), always works
s.loc[range(1)] = 42
# reading values with range(x>1), always works
_ = s.loc[range(2)]
# assigning values with range(x>1), fails only when len >= 1 million
s.loc[range(2)] = 42
```
Output:
``` python
series length = 5
series length = 999999
series length = 1000000
Traceback (most recent call last):
File "<stdin>", line 9, in <module>
File "/home/nekobon/.env_exp/lib/python3.4/site-packages/pandas/core/indexing.py", line 114, in __setitem__
indexer = self._get_setitem_indexer(key)
File "/home/nekobon/.env_exp/lib/python3.4/site-packages/pandas/core/indexing.py", line 109, in _get_setitem_indexer
return self._convert_to_indexer(key, is_setter=True)
File "/home/nekobon/.env_exp/lib/python3.4/site-packages/pandas/core/indexing.py", line 1042, in _convert_to_indexer
return labels.get_loc(obj)
File "/home/nekobon/.env_exp/lib/python3.4/site-packages/pandas/core/index.py", line 1692, in get_loc
return self._engine.get_loc(_values_from_object(key))
File "pandas/index.pyx", line 137, in pandas.index.IndexEngine.get_loc (pandas/index.c:3979)
File "pandas/index.pyx", line 145, in pandas.index.IndexEngine.get_loc (pandas/index.c:3680)
File "pandas/index.pyx", line 464, in pandas.index._bin_search (pandas/index.c:9124)
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
Tested on pandas 0.17.0 and python 3.4.
| odd, only happens on py3.
`range(2)` is a list in py2.x. `s.loc[list(range(2))] = 42` works fine with py3, too.
It seems to be failing only with a `range` object.
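Until this is fixed, materialising the range first sidesteps the problem, as noted above (a sketch of the workaround):
``` python
import pandas as pd

s = pd.Series(index=range(1000000))
s.loc[list(range(2))] = 42   # an explicit list works on both py2 and py3
```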
| 2015-11-19T22:47:48Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 9, in <module>
File "/home/nekobon/.env_exp/lib/python3.4/site-packages/pandas/core/indexing.py", line 114, in __setitem__
indexer = self._get_setitem_indexer(key)
File "/home/nekobon/.env_exp/lib/python3.4/site-packages/pandas/core/indexing.py", line 109, in _get_setitem_indexer
return self._convert_to_indexer(key, is_setter=True)
File "/home/nekobon/.env_exp/lib/python3.4/site-packages/pandas/core/indexing.py", line 1042, in _convert_to_indexer
return labels.get_loc(obj)
File "/home/nekobon/.env_exp/lib/python3.4/site-packages/pandas/core/index.py", line 1692, in get_loc
return self._engine.get_loc(_values_from_object(key))
File "pandas/index.pyx", line 137, in pandas.index.IndexEngine.get_loc (pandas/index.c:3979)
File "pandas/index.pyx", line 145, in pandas.index.IndexEngine.get_loc (pandas/index.c:3680)
File "pandas/index.pyx", line 464, in pandas.index._bin_search (pandas/index.c:9124)
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
| 10,898 |
|||
pandas-dev/pandas | pandas-dev__pandas-11714 | 547750aa5ba5b4a1b5d0cde05cc21e588b30cc27 | diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt
--- a/doc/source/whatsnew/v0.18.0.txt
+++ b/doc/source/whatsnew/v0.18.0.txt
@@ -32,6 +32,7 @@ Other enhancements
^^^^^^^^^^^^^^^^^^
- Handle truncated floats in SAS xport files (:issue:`11713`)
+- ``read_excel`` now supports s3 urls of the format ``s3://bucketname/filename`` (:issue:`11447`)
.. _whatsnew_0180.enhancements.rounding:
diff --git a/pandas/io/excel.py b/pandas/io/excel.py
--- a/pandas/io/excel.py
+++ b/pandas/io/excel.py
@@ -11,7 +11,7 @@
from pandas.core.frame import DataFrame
from pandas.io.parsers import TextParser
-from pandas.io.common import _is_url, _urlopen, _validate_header_arg
+from pandas.io.common import _is_url, _urlopen, _validate_header_arg, get_filepath_or_buffer, _is_s3_url
from pandas.tseries.period import Period
from pandas import json
from pandas.compat import (map, zip, reduce, range, lrange, u, add_metaclass,
@@ -199,7 +199,10 @@ def __init__(self, io, **kwds):
raise ValueError("Unknown engine: %s" % engine)
if isinstance(io, compat.string_types):
- if _is_url(io):
+ if _is_s3_url(io):
+ buffer, _, _ = get_filepath_or_buffer(io)
+ self.book = xlrd.open_workbook(file_contents=buffer.read())
+ elif _is_url(io):
data = _urlopen(io).read()
self.book = xlrd.open_workbook(file_contents=data)
else:
| ENH read_excel error when accessing AWS S3 URL
Summary: read_excel is unable to read a file using the same S3 URL syntax as read_csv. read_excel should support accessing S3 data in the same manner as read_csv
read_excel fails with the following error:
``` python
>>> import pandas as pd
>>> df = pd.read_excel("s3://my-bucket/my_file.xlsx")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib64/python2.6/site-packages/pandas/io/excel.py", line 163, in read_excel
io = ExcelFile(io, engine=engine)
File "/usr/local/lib64/python2.6/site-packages/pandas/io/excel.py", line 206, in __init__
self.book = xlrd.open_workbook(io)
File "/usr/local/lib/python2.6/site-packages/xlrd/__init__.py", line 394, in open_workbook
f = open(filename, "rb")
IOError: [Errno 2] No such file or directory: 's3://my-bucket/my_file.xlsx'
>>>
```
read_csv on the other hand is able to successfully read a csv file in the same S3 bucket using the same URL syntax:
``` python
>>> import pandas as pd
>>> df = pd.read_csv("s3://my-bucket/my_file.csv")
>>> len(df.index)
1187
>>>
```
For the record, read_csv can also see the xlsx file but returns parse errors when attempting to tokenize the data.
``` python
>>> import pandas as pd
>>> df = pd.read_csv("s3://my-bucket/my_file.xlsx")
Exception pandas.parser.CParserError: CParserError('Error tokenizing data. C error: Expected 9 fields in line 210, saw 10\n',) in 'pandas.parser.TextReader._tokenize_rows' ignored
>>>
```
read_excel successfully reads and parses a local copy of the xlsx file
``` python
>>> import pandas as pd
>>> df = pd.read_excel("my_file.xlsx")
>>> len(df.index)
221
>>>
```
Pandas version string and dependencies:
``` python
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 2.6.9.final.0
python-bits: 64
OS: Linux
OS-release: 3.14.48-33.39.amzn1.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
pandas: 0.17.0
nose: 1.3.4
pip: 6.1.1
setuptools: 12.2
Cython: None
numpy: 1.10.1
scipy: 0.16.0
statsmodels: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.4.2
pytz: 2015.7
blosc: None
bottleneck: None
tables: None
numexpr: None
matplotlib: None
openpyxl: None
xlrd: 0.9.4
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: None
pymysql: None
psycopg2: None
>>>
```
| This is almost a trivial enhancement, just add `_is_s3_url` [here](https://github.com/pydata/pandas/blob/master/pandas/io/excel.py#L202)
post a file link as an example and i'll put it on our test s3 bucket.
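Conceptually the patch above mirrors what ``read_csv`` already does for S3: fetch the object into a file-like buffer and hand the raw bytes to ``xlrd`` (a sketch of that code path; the bucket/key are the placeholders from the report):
``` python
from pandas.io.common import get_filepath_or_buffer
import xlrd

# get_filepath_or_buffer resolves the s3:// url to a file-like object
buffer, _, _ = get_filepath_or_buffer("s3://my-bucket/my_file.xlsx")
book = xlrd.open_workbook(file_contents=buffer.read())
```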
| 2015-11-27T20:09:28Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib64/python2.6/site-packages/pandas/io/excel.py", line 163, in read_excel
io = ExcelFile(io, engine=engine)
File "/usr/local/lib64/python2.6/site-packages/pandas/io/excel.py", line 206, in __init__
self.book = xlrd.open_workbook(io)
File "/usr/local/lib/python2.6/site-packages/xlrd/__init__.py", line 394, in open_workbook
f = open(filename, "rb")
IOError: [Errno 2] No such file or directory: 's3://my-bucket/my_file.xlsx'
| 10,901 |
|||
pandas-dev/pandas | pandas-dev__pandas-12043 | 1ae6384a0c04be8a1faddaa85751a9cac2f5a42a | diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -1804,7 +1804,6 @@ def indexer_between_time(self, start_time, end_time, include_start=True,
"%I%M%S%p")
include_start : boolean, default True
include_end : boolean, default True
- tz : string or pytz.timezone or dateutil.tz.tzfile, default None
Returns
-------
| Error in doc of DatetimeIndex.indexer_between_time
Hello everyone,
This is not a major issue: I was trying to use the `tz` parameter as indicated on the [documentation](http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.DatetimeIndex.indexer_between_time.html?highlight=indexer_between_time#pandas.DatetimeIndex.indexer_between_time) of `DatetimeIndex.indexer_between_time` and it turns out that `tz` is not implemented.
Here is an example:
``` python
import pandas as pd
pd.DatetimeIndex(['2016-01-01 00:00:00', '2016-01-01 01:00:00', '2016-01-01 02:00:00']).indexer_between_time('01:00', '02:00', tz='Europe/Paris')
```
```
Traceback (most recent call last):
File "~/anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 3066, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-98-258497b42264>", line 1, in <module>
pd.DatetimeIndex(['2016-01-01 00:00:00', '2016-01-01 01:00:00', '2016-01-01 02:00:00']).indexer_between_time('01:00', '02:00', tz='Europe/Paris')
TypeError: indexer_between_time() got an unexpected keyword argument 'tz'
```
| That doc should be fixed (and you see the signature no longer has a tz argument). Having a `tz` makes no sense in this context.
@joseRLC want to do a pull-request for that?
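If timezone-aware filtering is what is actually wanted, the index itself can carry the timezone and ``indexer_between_time`` then operates on the local wall-clock times, without any ``tz`` argument (a sketch, not verified against every pandas version):
``` python
import pandas as pd

idx = pd.DatetimeIndex(['2016-01-01 00:00:00', '2016-01-01 01:00:00',
                        '2016-01-01 02:00:00'], tz='UTC').tz_convert('Europe/Paris')
# positions whose *local* (Paris) time falls between 01:00 and 02:00
idx.indexer_between_time('01:00', '02:00')
```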
Can I try this out? This seems like a good issue for a first PR.
@RahulHP Certainly, go for it! If you have any questions about the workflow (see http://pandas.pydata.org/pandas-docs/stable/contributing.html), just ask (you can also use the gitter channel for that: https://gitter.im/pydata/pandas)
| 2016-01-15T08:44:51Z | [] | [] |
Traceback (most recent call last):
File "~/anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 3066, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-98-258497b42264>", line 1, in <module>
pd.DatetimeIndex(['2016-01-01 00:00:00', '2016-01-01 01:00:00', '2016-01-01 02:00:00']).indexer_between_time('01:00', '02:00', tz='Europe/Paris')
TypeError: indexer_between_time() got an unexpected keyword argument 'tz'
| 10,929 |
|||
pandas-dev/pandas | pandas-dev__pandas-12058 | 1945eed731a9d8fdb9a21837b326c42f8771def7 | diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt
--- a/doc/source/whatsnew/v0.18.0.txt
+++ b/doc/source/whatsnew/v0.18.0.txt
@@ -445,7 +445,7 @@ Bug Fixes
- Accept unicode in ``Timedelta`` constructor (:issue:`11995`)
- Bug in value label reading for ``StataReader`` when reading incrementally (:issue:`12014`)
- Bug in vectorized ``DateOffset`` when ``n`` parameter is ``0`` (:issue:`11370`)
-
+- Compat for numpy 1.11 w.r.t. ``NaT`` comparison changes (:issue:`12049`)
diff --git a/pandas/core/common.py b/pandas/core/common.py
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -379,12 +379,13 @@ def array_equivalent(left, right, strict_nan=False):
"""
left, right = np.asarray(left), np.asarray(right)
+
+ # shape compat
if left.shape != right.shape:
return False
# Object arrays can contain None, NaN and NaT.
- if (issubclass(left.dtype.type, np.object_) or
- issubclass(right.dtype.type, np.object_)):
+ if is_object_dtype(left) or is_object_dtype(right):
if not strict_nan:
# pd.isnull considers NaN and None to be equivalent.
@@ -405,13 +406,21 @@ def array_equivalent(left, right, strict_nan=False):
return True
# NaNs can occur in float and complex arrays.
- if issubclass(left.dtype.type, (np.floating, np.complexfloating)):
+ if is_float_dtype(left) or is_complex_dtype(left):
return ((left == right) | (np.isnan(left) & np.isnan(right))).all()
# numpy will will not allow this type of datetimelike vs integer comparison
elif is_datetimelike_v_numeric(left, right):
return False
+ # M8/m8
+ elif needs_i8_conversion(left) and needs_i8_conversion(right):
+ if not is_dtype_equal(left.dtype, right.dtype):
+ return False
+
+ left = left.view('i8')
+ right = right.view('i8')
+
# NaNs cannot occur otherwise.
return np.array_equal(left, right)
| BLD: numpy master changes breaking
(24 hrs ago) good build: https://travis-ci.org/pydata/pandas/jobs/102356098: 1.11.0.dev0+51d2ecd
(1 hr ago) breaking lots of things: https://travis-ci.org/pydata/pandas/jobs/102596904: 1.11.0.dev0+aa6335c
@shoyer IIRC a couple of your PR's were merged in the last day.
here's an example:
```
======================================================================
FAIL: test_coercion_with_setitem_and_series (pandas.tests.test_indexing.TestSeriesNoneCoercion)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/travis/build/pydata/pandas/pandas/tests/test_indexing.py", line 5247, in test_coercion_with_setitem_and_series
expected_series.values, strict_nan=True)
File "/home/travis/build/pydata/pandas/pandas/util/testing.py", line 866, in assert_numpy_array_equal
raise_assert_detail(obj, msg, left, right)
File "/home/travis/build/pydata/pandas/pandas/util/testing.py", line 825, in raise_assert_detail
raise AssertionError(msg)
AssertionError: numpy array are different
numpy array values are different (33.33333 %)
[left]: [NaT, 2000-01-02T00:00:00.000000000+0000, 2000-01-03T00:00:00.000000000+0000]
[right]: [NaT, 2000-01-02T00:00:00.000000000+0000, 2000-01-03T00:00:00.000000000+0000]
```
| my guess is that we now need to compare `M8[ns]` by `.view('i8')` then comparing as the `NaT` will then compare equal.
Yep, if this is datetime64 related it's my fault :). I thought we were already using .view('i8') before making comparisons but I guess I was wrong. If so (especially if this breaks user-facing stuff) we may need to hold off on these numpy fixes for a little longer (deprecation cycle?). Sigh...
we can adapt. do you want me to open an issue on numpy?
```
In [1]: np.__version__
Out[1]: '1.10.2'
In [5]: arr = np.array([np.nan])
In [6]: np.array_equal(arr,arr)
Out[6]: False
In [7]: arr = np.array([np.datetime64('NaT')])
In [8]: np.array_equal(arr,arr)
Out[8]: True
```
ok so your change makes sense
```
In [1]: np.__version__
Out[1]: '1.11.0.dev0+aa6335c'
In [2]: In [5]: arr = np.array([np.nan])
In [7]: arr = np.array([np.nan])
In [8]: np.array_equal(arr,arr)
Out[8]: False
In [9]: arr = np.array([np.datetime64('NaT')])
In [10]: np.array_equal(arr,arr)
Out[10]: False
```
Actually, looks like we might just be able to drop our `assert_numpy_array_equivalent` entirely in favor of numpy's `assert_array_equal`, which has supported NaN equality since at least August 2011 (numpy 1.7, I think):
https://github.com/numpy/numpy/commit/67ece6bdd2b35d011893e78154dbff6ab51c7d35
Unfortunately, this bug exists in our `array_equivalent` utility function, which we use for the `equals` method. This means that with this change datetime64 equality checks involving NaT will be broken. As much as I would love to just roll out the NumPy fix, doing it like this will assuredly result in unhappy users and unnecessary aggravation when sanity checks and test suites fail.
To fix this, I propose:
- We roll back the NaT comparison fix in NumPy for now, issuing a deprecation warning instead for a numpy release or two.
- We add the fix to `array_equivalent` in pandas, doing the appropriate cast to int64 to avoid needing to catch the deprecation warning (which can have performance consequences).
yep, prob need a special case for `M8/m8` (alternatively we could have a conditional check on the numpy version).
Ok, lmk if you need anything on the numpy side.
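A small illustration of the ``M8``/``m8`` special case being discussed (essentially what the ``array_equivalent`` change above does): view both sides as ``int64`` so that ``NaT`` compares equal on old and new numpy alike.
``` python
import numpy as np

left = np.array(['NaT', '2000-01-02'], dtype='M8[ns]')
right = np.array(['NaT', '2000-01-02'], dtype='M8[ns]')

# NaT is stored as the minimum int64, so the i8 views compare equal
np.array_equal(left.view('i8'), right.view('i8'))   # True regardless of numpy version
```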
| 2016-01-15T20:59:07Z | [] | [] |
Traceback (most recent call last):
File "/home/travis/build/pydata/pandas/pandas/tests/test_indexing.py", line 5247, in test_coercion_with_setitem_and_series
expected_series.values, strict_nan=True)
File "/home/travis/build/pydata/pandas/pandas/util/testing.py", line 866, in assert_numpy_array_equal
raise_assert_detail(obj, msg, left, right)
File "/home/travis/build/pydata/pandas/pandas/util/testing.py", line 825, in raise_assert_detail
raise AssertionError(msg)
AssertionError: numpy array are different
| 10,930 |
|||
pandas-dev/pandas | pandas-dev__pandas-13188 | 3944a369265f27268d1b3867a161e97f9c63cd62 | diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -3353,7 +3353,8 @@ def _subplots(naxes=None, sharex=False, sharey=False, squeeze=True,
if sharex or sharey:
warnings.warn("When passing multiple axes, sharex and sharey "
"are ignored. These settings must be specified "
- "when creating axes", UserWarning)
+ "when creating axes", UserWarning,
+ stacklevel=4)
if len(ax) == naxes:
fig = ax[0].get_figure()
return fig, ax
@@ -3370,7 +3371,8 @@ def _subplots(naxes=None, sharex=False, sharey=False, squeeze=True,
return fig, _flatten(ax)
else:
warnings.warn("To output multiple subplots, the figure containing "
- "the passed axes is being cleared", UserWarning)
+ "the passed axes is being cleared", UserWarning,
+ stacklevel=4)
fig.clear()
nrows, ncols = _get_layout(naxes, layout=layout, layout_type=layout_type)
| FAIL: test_scatter_matrix_axis (pandas.tests.test_graphics_others.TestDataFramePlots) in 0.18.1 with py27
When run alone, `test_scatter_matrix_axis` passes. When run together with the rest of `TestDataFramePlots`, it fails.
I cannot test with py34 because matplotlib is not available on FreeBSD under py3k.
#### Code Sample, a copy-pastable example if possible
```
% nosetests-2.7 pandas.tests.test_graphics_others:TestDataFramePlots.test_scatter_matrix_axis
.
----------------------------------------------------------------------
Ran 1 test in 8.150s
OK
% nosetests-2.7 pandas.tests.test_graphics_others:TestDataFramePlots
..S/usr/local/lib/python2.7/site-packages/pandas/tools/plotting.py:3369: UserWarning: To output multiple subplots, the figure containing the passed axes is being cleared
"the passed axes is being cleared", UserWarning)
/usr/local/lib/python2.7/site-packages/pandas/tools/plotting.py:3369: UserWarning: To output multiple subplots, the figure containing the passed axes is being cleared
"the passed axes is being cleared", UserWarning)
/usr/local/lib/python2.7/site-packages/pandas/tools/plotting.py:3369: UserWarning: To output multiple subplots, the figure containing the passed axes is being cleared
"the passed axes is being cleared", UserWarning)
/usr/local/lib/python2.7/site-packages/pandas/tools/plotting.py:3369: UserWarning: To output multiple subplots, the figure containing the passed axes is being cleared
"the passed axes is being cleared", UserWarning)
/usr/local/lib/python2.7/site-packages/pandas/tools/plotting.py:3369: UserWarning: To output multiple subplots, the figure containing the passed axes is being cleared
"the passed axes is being cleared", UserWarning)
/usr/local/lib/python2.7/site-packages/matplotlib/artist.py:221: MatplotlibDeprecationWarning: This has been deprecated in mpl 1.5, please use the
axes property. A removal date has not been set.
warnings.warn(_get_axes_msg, mplDeprecation, stacklevel=1)
/usr/local/lib/python2.7/site-packages/pandas/tools/plotting.py:3369: UserWarning: To output multiple subplots, the figure containing the passed axes is being cleared
"the passed axes is being cleared", UserWarning)
......F.
======================================================================
FAIL: test_scatter_matrix_axis (pandas.tests.test_graphics_others.TestDataFramePlots)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/pandas/tests/test_graphics_others.py", line 431, in test_scatter_matrix_axis
frame=df, range_padding=.1)
File "/usr/local/lib/python2.7/contextlib.py", line 24, in __exit__
self.gen.next()
File "/usr/local/lib/python2.7/site-packages/pandas/util/testing.py", line 2318, in assert_produces_warning
% expected_warning.__name__)
AssertionError: Did not see expected warning of class 'UserWarning'.
----------------------------------------------------------------------
Ran 11 tests in 74.896s
FAILED (SKIP=1, failures=1)
```
#### Expected Output
I expect all tests to pass.
#### output of `pd.show_versions()`
```
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.11.final.0
python-bits: 64
OS: FreeBSD
OS-release: 10.2-STABLE
machine: amd64
processor: amd64
byteorder: little
LC_ALL: None
LANG: None
pandas: 0.18.1
nose: 1.3.7
pip: 8.0.2
setuptools: 20.0
Cython: None
numpy: 1.11.0
scipy: 0.16.1
statsmodels: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.5.0
pytz: 2016.4
blosc: None
bottleneck: 1.0.0
tables: 3.2.2
numexpr: 2.5.2
matplotlib: 1.5.0
openpyxl: 2.3.5
xlrd: 0.9.3
xlwt: 0.7.5
xlsxwriter: 0.8.5
lxml: 3.5.0
bs4: 4.4.1
html5lib: 0.9999999
httplib2: None
apiclient: None
sqlalchemy: 0.7.10
pymysql: None
psycopg2: None
jinja2: 2.8
boto: 2.39.0
pandas_datareader: None
```
| Thanks. Haven't been able to reproduce yet, but I'm going to clean up that testing module anyway to catch all those userwarnings and fix the matplotlib deprecation warning. Hopefully I'll figure out what's going wrong.
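For context on the ``stacklevel=4`` added in the patch above: ``stacklevel`` controls which frame a warning is attributed to, so the reported file/line points at the user's call rather than at pandas internals (a generic illustration, not pandas-specific):
``` python
import warnings

def _plotting_internal():
    # stacklevel=2 attributes the warning to the caller of this function
    warnings.warn("sharex and sharey are ignored", UserWarning, stacklevel=2)

def user_code():
    _plotting_internal()

user_code()   # the UserWarning is reported at the call site inside user_code()
```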
| 2016-05-15T17:11:08Z | [] | [] |
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/pandas/tests/test_graphics_others.py", line 431, in test_scatter_matrix_axis
frame=df, range_padding=.1)
File "/usr/local/lib/python2.7/contextlib.py", line 24, in __exit__
self.gen.next()
File "/usr/local/lib/python2.7/site-packages/pandas/util/testing.py", line 2318, in assert_produces_warning
% expected_warning.__name__)
AssertionError: Did not see expected warning of class 'UserWarning'.
| 10,937 |
|||
pandas-dev/pandas | pandas-dev__pandas-13641 | 20de2661c8eff66e465248cbe28062eae0e0e3bb | Test failure with matplotlib 1.5.2rc2 on Debian
When running the test suite after the package was built, we get the following failure:
```
ERROR: test_plot (pandas.tests.test_graphics.TestDataFramePlots)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/$BUILD/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tests/test_graphics.py", line 1322, in test_plot
df.plot.line(blarg=True)
File "/$BUILD/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tools/plotting.py", line 3758, in line
return self(kind='line', x=x, y=y, **kwds)
[...]
File "/$BUILD/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tools/plotting.py", line 1340, in _plot
return ax.plot(*args, **kwds)
File "/usr/lib/python2.7/dist-packages/matplotlib/__init__.py", line 1821, in inner
return func(ax, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_axes.py", line 1432, in plot
for line in self._get_lines(*args, **kwargs):
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_base.py", line 381, in _grab_next_args
for seg in self._plot_args(remaining, kwargs):
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_base.py", line 369, in _plot_args
seg = func(x[:, j % ncx], y[:, j % ncy], kw, kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_base.py", line 276, in _makeline
seg = mlines.Line2D(x, y, **kw)
File "/usr/lib/python2.7/dist-packages/matplotlib/lines.py", line 380, in __init__
self.update(kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/artist.py", line 859, in update
raise AttributeError('Unknown property %s' % k)
AttributeError: Unknown property blarg
```
Full log [here](https://tests.reproducible-builds.org/debian/rbuild/testing/amd64/pandas_0.18.0+git114-g6c692ae-1.rbuild.log) (from the [Debian bug report](https://bugs.debian.org/827938)).
`pandas.show_versions()` gives:
```
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.12.final.0
python-bits: 64
OS: Linux
OS-release: 4.6.0-1-amd64
machine: x86_64
processor:
byteorder: little
LC_ALL: C
LANG: C
pandas: 0.18.0+git114-g6c692ae
nose: 1.3.7
pip: None
setuptools: 20.10.1
Cython: None
numpy: 1.11.1rc1
scipy: 0.17.1
statsmodels: 0.6.1
xarray: None
IPython: None
sphinx: 1.4.4
patsy: 0.4.1
dateutil: 2.4.2
pytz: 2015.7
blosc: None
bottleneck: None
tables: 3.2.2
numexpr: 2.6.0
matplotlib: 1.5.2rc2
openpyxl: 2.3.0
xlrd: 1.0.0
xlwt: 0.7.5
xlsxwriter: None
lxml: 3.6.0
bs4: 4.4.1
html5lib: 0.999
httplib2: None
apiclient: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.8
boto: None
pandas_datareader: None
```
| Thanks for the report. Looks to be caused by mpl which raises `AttributeError` rather than `TypeError`. Should change the test on pandas side.
- https://github.com/pydata/pandas/blob/master/pandas/tests/test_graphics.py#L1317
Appreciated to submit a PR.
[line 1333](https://github.com/pydata/pandas/blob/master/pandas/tests/test_graphics.py#L1333)
Shall this be compatible with matplotlib < 1.5.2rc? Then this is not that simple, since tm.assertRaises() only accepts a single exception, and one would then need to create another mpl_ge_1_5_2 comparison specifically for this test.
Is this really worth it? What is the rationale of the test?
cc @tacaswell Just to be sure, the change from TypeError to AttributeError, was this on purpose? (it's raised in `set_lineprops`)
I do not think that was intentional and I am not sure off the top of my head why this changed. The `Artist.update` method has raised `AttributeError` since 2004.
So, I broke this: https://github.com/matplotlib/matplotlib/pull/6175
It looks like we used to have `Artist.set` which raised `TypeError` and `Artist.update` which raised `AttributeError` (because history). These two code-paths got merged in https://github.com/matplotlib/matplotlib/pull/5599 (also my fault) and we missed the API change in the the exceptions.
Unfortunately I _just_ tagged 1.5.2 last weekend, but have not posted it to pypi or publicized it yet :disappointed:. I am inclined to just document this as an API change, but if you want to lobby for a 1.5.3 fixing this, this is the time to do it!
I don't think it is that important a change, so leaving (and documenting) it is fine for me (for pandas it is only a test that is broken, and I think you will have a better idea of how big this change is for matplotlib users). In any case, I think the AttributeError is more logical.
@sinhrks @jorisvandenbossche For me the question here is still, what that test is for?
can you add a `_mpl_ge_1_5_2()` function like below, then:
```
if _mpl_ge_1_5_2():
    with tm.assertRaises(AttributeError):
        df.plot.line(blarg=True)
else:
    with tm.assertRaises(TypeError):
        df.plot.line(blarg=True)
```
- https://github.com/pydata/pandas/blob/master/pandas/tools/plotting.py#L129
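A sketch of such a helper, following the pattern of the existing ``_mpl_ge_*`` checks linked above (the exact form used in ``plotting.py`` may differ):
``` python
from distutils.version import LooseVersion


def _mpl_ge_1_5_2():
    try:
        import matplotlib
        return LooseVersion(matplotlib.__version__) >= LooseVersion('1.5.2')
    except ImportError:
        return False
```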
> For me the question here is still, what that test is for?
Well, it tests that if you pass a wrong keyword, an error is raised rather than being swallowed somewhere in the implementation. So it's not a hugely important test, but still useful (we actually have too many functions in pandas that silently swallow invalid arguments)
| 2016-07-13T15:01:57Z | [] | [] |
Traceback (most recent call last):
File "/$BUILD/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tests/test_graphics.py", line 1322, in test_plot
df.plot.line(blarg=True)
File "/$BUILD/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tools/plotting.py", line 3758, in line
return self(kind='line', x=x, y=y, **kwds)
[...]
File "/$BUILD/debian/tmp/usr/lib/python2.7/dist-packages/pandas/tools/plotting.py", line 1340, in _plot
return ax.plot(*args, **kwds)
File "/usr/lib/python2.7/dist-packages/matplotlib/__init__.py", line 1821, in inner
return func(ax, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_axes.py", line 1432, in plot
for line in self._get_lines(*args, **kwargs):
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_base.py", line 381, in _grab_next_args
for seg in self._plot_args(remaining, kwargs):
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_base.py", line 369, in _plot_args
seg = func(x[:, j % ncx], y[:, j % ncy], kw, kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_base.py", line 276, in _makeline
seg = mlines.Line2D(x, y, **kw)
File "/usr/lib/python2.7/dist-packages/matplotlib/lines.py", line 380, in __init__
self.update(kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/artist.py", line 859, in update
raise AttributeError('Unknown property %s' % k)
AttributeError: Unknown property blarg
| 10,954 |
||||
pandas-dev/pandas | pandas-dev__pandas-14126 | 47a8e713430707afcfe76e7ca995902628d4bccf | diff --git a/pandas/util/print_versions.py b/pandas/util/print_versions.py
--- a/pandas/util/print_versions.py
+++ b/pandas/util/print_versions.py
@@ -101,7 +101,10 @@ def show_versions(as_json=False):
deps_blob = list()
for (modname, ver_f) in deps:
try:
- mod = importlib.import_module(modname)
+ if modname in sys.modules:
+ mod = sys.modules[modname]
+ else:
+ mod = importlib.import_module(modname)
ver = ver_f(mod)
deps_blob.append((modname, ver))
except:
| DataFrame.__repr__ raises TypeError after pd.show_versions() was run
Maybe one of the imports in `show_versions` has unwanted side effects?
``` python
>>> import pandas as pd
>>> pd.DataFrame({'spam': range(10)})
spam
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.12.final.0
python-bits: 64
OS: Windows
OS-release: 7
machine: AMD64
processor: Intel64 Family 6 Model 42 Stepping 7, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
pandas: 0.18.1
nose: 1.3.7
pip: 8.1.2
setuptools: 24.0.3
Cython: 0.24.1
numpy: 1.11.1
scipy: 0.18.0rc2
statsmodels: None
xarray: None
IPython: 5.0.0
sphinx: 1.4.5
patsy: 0.4.1
dateutil: 2.5.3
pytz: 2016.6.1
blosc: None
bottleneck: 1.1.0
tables: None
numexpr: 2.6.0
matplotlib: 1.5.1
openpyxl: 2.3.5
xlrd: 1.0.0
xlwt: None
xlsxwriter: None
lxml: 3.6.0
bs4: None
html5lib: 0.999999999
httplib2: 0.9.2
apiclient: None
sqlalchemy: 1.0.14
pymysql: None
psycopg2: 2.6.2 (dt dec pq3 ext lo64)
jinja2: 2.8
boto: None
pandas_datareader: None
>>> pd.DataFrame({'spam': range(10)})
Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
pd.DataFrame({'spam': range(10)})
File "C:\Program Files\Python27\lib\site-packages\pandas\core\base.py", line 67, in __repr__
return str(self)
File "C:\Program Files\Python27\lib\site-packages\pandas\core\base.py", line 47, in __str__
return self.__bytes__()
File "C:\Program Files\Python27\lib\site-packages\pandas\core\base.py", line 59, in __bytes__
return self.__unicode__().encode(encoding, 'replace')
File "C:\Program Files\Python27\lib\site-packages\pandas\core\frame.py", line 535, in __unicode__
line_width=width, show_dimensions=show_dimensions)
File "C:\Program Files\Python27\lib\site-packages\pandas\core\frame.py", line 1488, in to_string
formatter.to_string()
File "C:\Program Files\Python27\lib\site-packages\pandas\formats\format.py", line 549, in to_string
strcols = self._to_str_columns()
File "C:\Program Files\Python27\lib\site-packages\pandas\formats\format.py", line 467, in _to_str_columns
str_index = self._get_formatted_index(frame)
File "C:\Program Files\Python27\lib\site-packages\pandas\formats\format.py", line 746, in _get_formatted_index
fmt_index = [index.format(name=show_index_names, formatter=fmt)]
File "C:\Program Files\Python27\lib\site-packages\pandas\indexes\base.py", line 1462, in format
return self._format_with_header(header, **kwargs)
File "C:\Program Files\Python27\lib\site-packages\pandas\indexes\base.py", line 1486, in _format_with_header
result = _trim_front(format_array(values, None, justify='left'))
File "C:\Program Files\Python27\lib\site-packages\pandas\formats\format.py", line 2007, in format_array
return fmt_obj.get_result()
File "C:\Program Files\Python27\lib\site-packages\pandas\formats\format.py", line 2027, in get_result
return _make_fixed_width(fmt_values, self.justify)
File "C:\Program Files\Python27\lib\site-packages\pandas\formats\format.py", line 2394, in _make_fixed_width
max_len = np.max([adj.len(x) for x in strings])
File "C:\Program Files\Python27\lib\site-packages\numpy\core\fromnumeric.py", line 2293, in amax
out=out, **kwargs)
File "C:\Program Files\Python27\lib\site-packages\numpy\core\_methods.py", line 26, in _amax
return umr_maximum(a, axis, None, out, keepdims)
TypeError: an integer is required
```
| I cannot reproduce this using Windows and python 2.7.
Could you try to debug this? To see where the error is coming from (seems there is something wrong with the `np.max([adj.len(x) for x in strings])`).
Or eg create an isolated environment with only required dependencies to see of the problem occurs there as well (using conda or virtualenv).
Tracked it to [this import](https://github.com/pydata/pandas/commit/b4e2d34edcbc404f6c90f76b67bcc5fe26f0945f#diff-24212510f4a09e0461c2b6754d34626dL103) of `numpy`, which according to [the docs](https://docs.python.org/2/library/imp.html#imp.load_module) does a `reload()`.
Indeed `numpy` (at least on my machines) seems to dislike being reloaded:
``` python
>>> import numpy as np
>>> np.max([42])
42
>>> reload(np)
<module 'numpy' from 'C:\Program Files\Python27\lib\site-packages\numpy\__init__.pyc'>
>>> np.max([42])
Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
np.max([42])
File "C:\Program Files\Python27\lib\site-packages\numpy\core\fromnumeric.py", line 2293, in amax
out=out, **kwargs)
File "C:\Program Files\Python27\lib\site-packages\numpy\core\_methods.py", line 26, in _amax
return umr_maximum(a, axis, None, out, keepdims)
TypeError: an integer is required
```
The line was changed in b4e2d34edcbc404f6c90f76b67bcc5fe26f0945f, so I guess that should fix this for the next version (though I am still curious if others have this `numpy` issue with `reload`).
I can't reproduce it using NumPy 1.10.4 and 1.11.1. Can you report it to NumPy?
Seems to be a problem with the [binaries here](http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy) (does not occur with the PyPI wheels): I'll report to the packager, closing.
The numpy issue: numpy/numpy#7844
@xflr6 Thanks for tracking it down!
I think there is a real pandas bug here. The bug is that `show_versions` calls [`importlib.import_module`](https://github.com/pydata/pandas/blob/b4e2d34edcbc404f6c90f76b67bcc5fe26f0945f/pandas/util/print_versions.py#L102), and apparently -- this is not documented anywhere, and may vary between py2 and py3 -- `import_module` may reload modules. `show_versions` should not be reloading all these modules. I'd suggest replacing that line with something like
``` python
if modname in sys.modules:
mod = sys.modules[modname]
else:
mod = importlib.import_module(modname)
```
@njsmith but seems numpy is not robust to being reloaded.
@jreback I suspect a lot of modules are not robust against reloading. For instance, if you define a class in `foo.py`, use it in `bar.py`, instantiate `a = foo.MyClass()` in another module, then use `isinstance(a, foo.MyClass)`, that statement will fail if `foo` is reloaded. I suspect what is wanted in many cases is a simple `import`, but I haven't checked. Numpy also uses `load_module` in a few places that should probably be audited.
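A self-contained illustration of that pitfall, using a pure-Python stdlib module in place of the hypothetical ``foo``/``bar`` (on py2, the ``reload()`` builtin plays the role of ``importlib.reload``):
``` python
import importlib
import fractions

f = fractions.Fraction(1, 3)
importlib.reload(fractions)          # re-executes fractions.py, rebinding Fraction
isinstance(f, fractions.Fraction)    # False: f is an instance of the *old* class object
```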
@njsmith The reload property is documented in the `imp` module documentation. It always happens for existing modules. Note that nonexisting modules get created...
ok this should be easy to fix then
AFAICT, `load_module` is useful when you need to use a module that is not installed and not located in the current directory. For instance, during the numpy install process. If numpy is installed you should be able to simply import it.
Maybe `__import__`?
Or `importlib.import_module`
Pandas actually uses `importlib.import_module`, which isn't documented to reload, but I guess it must eventually call `load_module` because otherwise we wouldn't have this problem. (I haven't tried tracing the details, and `importlib` has completely different implementations on different versions of python, so that's something to watch out for if anyone wants to figure out exactly what's happening).
| 2016-08-31T10:35:26Z | [] | [] |
Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
pd.DataFrame({'spam': range(10)})
File "C:\Program Files\Python27\lib\site-packages\pandas\core\base.py", line 67, in __repr__
return str(self)
File "C:\Program Files\Python27\lib\site-packages\pandas\core\base.py", line 47, in __str__
return self.__bytes__()
File "C:\Program Files\Python27\lib\site-packages\pandas\core\base.py", line 59, in __bytes__
return self.__unicode__().encode(encoding, 'replace')
File "C:\Program Files\Python27\lib\site-packages\pandas\core\frame.py", line 535, in __unicode__
line_width=width, show_dimensions=show_dimensions)
File "C:\Program Files\Python27\lib\site-packages\pandas\core\frame.py", line 1488, in to_string
formatter.to_string()
File "C:\Program Files\Python27\lib\site-packages\pandas\formats\format.py", line 549, in to_string
strcols = self._to_str_columns()
File "C:\Program Files\Python27\lib\site-packages\pandas\formats\format.py", line 467, in _to_str_columns
str_index = self._get_formatted_index(frame)
File "C:\Program Files\Python27\lib\site-packages\pandas\formats\format.py", line 746, in _get_formatted_index
fmt_index = [index.format(name=show_index_names, formatter=fmt)]
File "C:\Program Files\Python27\lib\site-packages\pandas\indexes\base.py", line 1462, in format
return self._format_with_header(header, **kwargs)
File "C:\Program Files\Python27\lib\site-packages\pandas\indexes\base.py", line 1486, in _format_with_header
result = _trim_front(format_array(values, None, justify='left'))
File "C:\Program Files\Python27\lib\site-packages\pandas\formats\format.py", line 2007, in format_array
return fmt_obj.get_result()
File "C:\Program Files\Python27\lib\site-packages\pandas\formats\format.py", line 2027, in get_result
return _make_fixed_width(fmt_values, self.justify)
File "C:\Program Files\Python27\lib\site-packages\pandas\formats\format.py", line 2394, in _make_fixed_width
max_len = np.max([adj.len(x) for x in strings])
File "C:\Program Files\Python27\lib\site-packages\numpy\core\fromnumeric.py", line 2293, in amax
out=out, **kwargs)
File "C:\Program Files\Python27\lib\site-packages\numpy\core\_methods.py", line 26, in _amax
return umr_maximum(a, axis, None, out, keepdims)
TypeError: an integer is required
| 11,001 |
|||
pandas-dev/pandas | pandas-dev__pandas-14208 | 5e2f9da6e8e713bd89cfe8760e63583ea7d29879 | diff --git a/doc/source/basics.rst b/doc/source/basics.rst
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -188,6 +188,32 @@ And similarly for ``axis="items"`` and ``axis="minor"``.
match the broadcasting behavior of Panel. Though it would require a
transition period so users can change their code...
+Series and Index also support the :func:`divmod` builtin. This function takes
+the floor division and modulo operation at the same time returning a two-tuple
+of the same type as the left hand side. For example:
+
+.. ipython:: python
+
+ s = pd.Series(np.arange(10))
+ s
+ div, rem = divmod(s, 3)
+ div
+ rem
+
+ idx = pd.Index(np.arange(10))
+ idx
+ div, rem = divmod(idx, 3)
+ div
+ rem
+
+We can also do elementwise :func:`divmod`:
+
+.. ipython:: python
+
+ div, rem = divmod(s, [2, 2, 3, 3, 4, 4, 5, 5, 6, 6])
+ div
+ rem
+
Missing data / operations with fill values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt
--- a/doc/source/whatsnew/v0.19.0.txt
+++ b/doc/source/whatsnew/v0.19.0.txt
@@ -1328,6 +1328,9 @@ Other API Changes
- ``pd.read_csv()`` in the C engine will now issue a ``ParserWarning`` or raise a ``ValueError`` when ``sep`` encoded is more than one character long (:issue:`14065`)
- ``DataFrame.values`` will now return ``float64`` with a ``DataFrame`` of mixed ``int64`` and ``uint64`` dtypes, conforming to ``np.find_common_type`` (:issue:`10364`, :issue:`13917`)
- ``pd.read_stata()`` can now handle some format 111 files, which are produced by SAS when generating Stata dta files (:issue:`11526`)
+- ``Series`` and ``Index`` now support ``divmod`` which will return a tuple of
+ series or indices. This behaves like a standard binary operator with regards
+ to broadcasting rules (:issue:`14208`).
.. _whatsnew_0190.deprecations:
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -39,7 +39,8 @@
def _create_methods(arith_method, comp_method, bool_method,
- use_numexpr, special=False, default_axis='columns'):
+ use_numexpr, special=False, default_axis='columns',
+ have_divmod=False):
# creates actual methods based upon arithmetic, comp and bool method
# constructors.
@@ -127,6 +128,15 @@ def names(x):
names('ror_'), op('|')),
rxor=bool_method(lambda x, y: operator.xor(y, x),
names('rxor'), op('^'))))
+ if have_divmod:
+ # divmod doesn't have an op that is supported by numexpr
+ new_methods['divmod'] = arith_method(
+ divmod,
+ names('divmod'),
+ None,
+ default_axis=default_axis,
+ construct_result=_construct_divmod_result,
+ )
new_methods = dict((names(k), v) for k, v in new_methods.items())
return new_methods
@@ -156,7 +166,7 @@ def add_methods(cls, new_methods, force, select, exclude):
def add_special_arithmetic_methods(cls, arith_method=None,
comp_method=None, bool_method=None,
use_numexpr=True, force=False, select=None,
- exclude=None):
+ exclude=None, have_divmod=False):
"""
Adds the full suite of special arithmetic methods (``__add__``,
``__sub__``, etc.) to the class.
@@ -177,6 +187,9 @@ def add_special_arithmetic_methods(cls, arith_method=None,
if passed, only sets functions with names in select
exclude : iterable of strings (optional)
if passed, will not set functions with names in exclude
+ have_divmod : bool, (optional)
+ should a divmod method be added? this method is special because it
+ returns a tuple of cls instead of a single element of type cls
"""
# in frame, special methods have default_axis = None, comp methods use
@@ -184,7 +197,7 @@ def add_special_arithmetic_methods(cls, arith_method=None,
new_methods = _create_methods(arith_method, comp_method,
bool_method, use_numexpr, default_axis=None,
- special=True)
+ special=True, have_divmod=have_divmod)
# inplace operators (I feel like these should get passed an `inplace=True`
# or just be removed
@@ -618,8 +631,22 @@ def _align_method_SERIES(left, right, align_asobject=False):
return left, right
+def _construct_result(left, result, index, name, dtype):
+ return left._constructor(result, index=index, name=name, dtype=dtype)
+
+
+def _construct_divmod_result(left, result, index, name, dtype):
+ """divmod returns a tuple of like indexed series instead of a single series.
+ """
+ constructor = left._constructor
+ return (
+ constructor(result[0], index=index, name=name, dtype=dtype),
+ constructor(result[1], index=index, name=name, dtype=dtype),
+ )
+
+
def _arith_method_SERIES(op, name, str_rep, fill_zeros=None, default_axis=None,
- **eval_kwargs):
+ construct_result=_construct_result, **eval_kwargs):
"""
Wrapper function for Series arithmetic operations, to avoid
code duplication.
@@ -692,8 +719,14 @@ def wrapper(left, right, name=name, na_op=na_op):
lvalues = lvalues.values
result = wrap_results(safe_na_op(lvalues, rvalues))
- return left._constructor(result, index=left.index,
- name=name, dtype=dtype)
+ return construct_result(
+ left,
+ result,
+ index=left.index,
+ name=name,
+ dtype=dtype,
+ )
+
return wrapper
@@ -933,6 +966,10 @@ def wrapper(self, other):
'desc': 'Integer division',
'reversed': False,
'reverse': 'rfloordiv'},
+ 'divmod': {'op': 'divmod',
+ 'desc': 'Integer division and modulo',
+ 'reversed': False,
+ 'reverse': None},
'eq': {'op': '==',
'desc': 'Equal to',
@@ -1033,7 +1070,8 @@ def flex_wrapper(self, other, level=None, fill_value=None, axis=0):
series_special_funcs = dict(arith_method=_arith_method_SERIES,
comp_method=_comp_method_SERIES,
- bool_method=_bool_method_SERIES)
+ bool_method=_bool_method_SERIES,
+ have_divmod=True)
_arith_doc_FRAME = """
Binary operator %s with support to substitute a fill_value for missing data in
diff --git a/pandas/indexes/base.py b/pandas/indexes/base.py
--- a/pandas/indexes/base.py
+++ b/pandas/indexes/base.py
@@ -3426,7 +3426,7 @@ def _validate_for_numeric_binop(self, other, op, opstr):
def _add_numeric_methods_binary(cls):
""" add in numeric methods """
- def _make_evaluate_binop(op, opstr, reversed=False):
+ def _make_evaluate_binop(op, opstr, reversed=False, constructor=Index):
def _evaluate_numeric_binop(self, other):
from pandas.tseries.offsets import DateOffset
@@ -3448,7 +3448,7 @@ def _evaluate_numeric_binop(self, other):
attrs = self._maybe_update_attributes(attrs)
with np.errstate(all='ignore'):
result = op(values, other)
- return Index(result, **attrs)
+ return constructor(result, **attrs)
return _evaluate_numeric_binop
@@ -3478,6 +3478,15 @@ def _evaluate_numeric_binop(self, other):
cls.__rdiv__ = _make_evaluate_binop(
operator.div, '__div__', reversed=True)
+ cls.__divmod__ = _make_evaluate_binop(
+ divmod,
+ '__divmod__',
+ constructor=lambda result, **attrs: (
+ Index(result[0], **attrs),
+ Index(result[1], **attrs),
+ ),
+ )
+
@classmethod
def _add_numeric_methods_unary(cls):
""" add in numeric unary methods """
| Regression: divmod(my_series, some_integer) no longer works since version 0.13.
In Pandas version 0.12, you could apply the Python built-in `divmod` function to a `Series` and an integer:
```
Enthought Canopy Python 2.7.6 | 64-bit | (default, Jun 4 2014, 16:42:26)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> pd.__version__
'0.12.0'
>>> divmod(pd.Series(range(4)), 2)
(0 0
1 0
2 1
3 1
dtype: int64, 0 0
1 1
2 0
3 1
dtype: int64)
>>>
```
With version >= 0.13, it appears that this usage is no longer supported:
```
Enthought Canopy Python 2.7.6 | 64-bit | (default, Jun 4 2014, 16:42:26)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> pd.__version__
'0.14.1'
>>> divmod(pd.Series(range(4)), 2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for divmod(): 'Series' and 'int'
```
Was this change intentional?
Some context: I was using this to read a climate datafile that had a 4-digit column holding combined month and day values. The original code looked something like: `month, day = divmod(df['MODA'], 100)`, but broke after upgrading to version 0.14.
| `__divmod__` was not included when Series was refactored in 0.13.
You can simply do:
`s // 2, s % 2` if you want to get the same results
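As a quick illustration of that workaround (a sketch with a toy Series, mirroring the example above):
```python
import pandas as pd

s = pd.Series(range(4))

# Floor division and modulo computed separately give the same two pieces
# that divmod(s, 2) used to return as a tuple.
div, rem = s // 2, s % 2
print(div.tolist())   # [0, 0, 1, 1]
print(rem.tolist())   # [0, 1, 0, 1]
```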
If you would like to add the method, a PR would be accepted.
I don't think it was included because there were no tests for this (and to be honest I've never seen it used; it's more of a 'functional' way of working with objects)
| 2016-09-12T18:58:01Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for divmod(): 'Series' and 'int'
| 11,005 |
|||
pandas-dev/pandas | pandas-dev__pandas-14225 | e8357a15cd61ff698cbd3d57904133c586a8ed8b | diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt
--- a/doc/source/whatsnew/v0.19.0.txt
+++ b/doc/source/whatsnew/v0.19.0.txt
@@ -1572,3 +1572,4 @@ Bug Fixes
- Bug in ``eval()`` where the ``resolvers`` argument would not accept a list (:issue:`14095`)
- Bugs in ``stack``, ``get_dummies``, ``make_axis_dummies`` which don't preserve categorical dtypes in (multi)indexes (:issue:`13854`)
- ``PeridIndex`` can now accept ``list`` and ``array`` which contains ``pd.NaT`` (:issue:`13430`)
+- Bug in ``df.groupby`` where ``.median()`` returns arbitrary values if grouped dataframe contains empty bins (:issue:`13629`)
diff --git a/pandas/algos.pyx b/pandas/algos.pyx
--- a/pandas/algos.pyx
+++ b/pandas/algos.pyx
@@ -992,7 +992,7 @@ def is_lexsorted(list list_of_arrays):
def groupby_indices(dict ids, ndarray[int64_t] labels,
ndarray[int64_t] counts):
"""
- turn group_labels output into a combined indexer maping the labels to
+ turn group_labels output into a combined indexer mapping the labels to
indexers
Parameters
@@ -1313,6 +1313,9 @@ cdef inline float64_t _median_linear(float64_t* a, int n):
cdef float64_t result
cdef float64_t* tmp
+ if n == 0:
+ return NaN
+
# count NAs
for i in range(n):
if a[i] != a[i]:
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -4424,12 +4424,13 @@ def _reorder_by_uniques(uniques, labels):
def _groupby_indices(values):
if is_categorical_dtype(values):
-
# we have a categorical, so we can do quite a bit
# bit better than factorizing again
reverse = dict(enumerate(values.categories))
codes = values.codes.astype('int64')
- _, counts = _hash.value_count_int64(codes, False)
+
+ mask = 0 <= codes
+ counts = np.bincount(codes[mask], minlength=values.categories.size)
else:
reverse, codes, counts = _algos.group_labels(
_values_from_object(_ensure_object(values)))
| BUG: Binned groupby median function calculates median on empty bins and outputs random numbers
#### Code Sample, a copy-pastable example if possible
```
import pandas as pd
d = pd.DataFrame([1,2,5,6,9,3,6,5,9,7,11,36,4,7,8,25,8,24])
b = [0,5,10,15,20,25,30,35,40,45,50,55]
g = d.groupby(pd.cut(d[0],b))
print g.mean()
print g.median()
print g.get_group('(0, 5]').median()
print g.get_group('(40, 45]').median()
```
#### Expected Output
```
0
0
(0, 5] 3.333333
(5, 10] 7.500000
(10, 15] 11.000000
(15, 20] NaN
(20, 25] 24.500000
(25, 30] NaN
(30, 35] NaN
(35, 40] 36.000000
(40, 45] NaN
(45, 50] NaN
(50, 55] NaN
0
0
(0, 5] 3.5
(5, 10] 7.5
(10, 15] 11.0
(15, 20] 18.0
(20, 25] 24.5
(25, 30] 30.5
(30, 35] 30.5
(35, 40] 36.0
(40, 45] 18.0
(45, 50] 18.0
(50, 55] 18.0
0 3.5
dtype: float64
Traceback (most recent call last):
File "<ipython-input-9-0663486889da>", line 1, in <module>
runfile('C:/PythonDir/test04.py', wdir='C:/PythonDir')
File "C:\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 714, in runfile
execfile(filename, namespace)
File "C:\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 74, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "C:/PythonDir/test04.py", line 20, in <module>
print g.get_group('(40, 45]').median()
File "C:\Anaconda2\lib\site-packages\pandas\core\groupby.py", line 587, in get_group
raise KeyError(name)
KeyError: '(40, 45]'
```
This example shows how the groupby object's median function outputs an arbitrary number for an empty bin, instead of NaN as the mean function does. Directly trying to call that bin with its key leads to an error since it doesn't exist, yet the full median output suggests it does exist and that the value might even be meaningful (like in the (15, 20] bin or the (30, 35] bin). The wrong numbers that are returned can change randomly; another possible output using the same code might look like this:
```
(0, 5] 3.500000e+00
(5, 10] 7.500000e+00
(10, 15] 1.100000e+01
(15, 20] 1.800000e+01
(20, 25] 2.450000e+01
(25, 30] 3.050000e+01
(30, 35] 3.050000e+01
(35, 40] 3.600000e+01
(40, 45] 4.927210e+165
(45, 50] 4.927210e+165
(50, 55] 4.927210e+165
```
#### output of `pd.show_versions()`
`pandas: 0.18.1`
| @Khris777 Thanks for reporting!
As a workaround for now, you can do:
```
In [11]: g.agg(lambda x: x.median())
Out[11]:
0
0
(0, 5] 3.5
(5, 10] 7.5
(10, 15] 11.0
(15, 20] NaN
(20, 25] 24.5
(25, 30] NaN
(30, 35] NaN
(35, 40] 36.0
(40, 45] NaN
(45, 50] NaN
(50, 55] NaN
```
First-time contributor here; thought I'd take a look into this one. Do you think there's a more logical response than raising a KeyError for `g.get_group('(40, 45]')`?
get_group with no additional arguments is supposed to return a subset of the original dataframe with values that fall within the specified interval. If there are no values in the interval (40,45] in the original dataframe, there's no way to slice that up into a sensible response. Empty dataframe?
ATM, interval types are actual string reprs (and not a distinct dtype), so yes, `g.get_group('(40, 45)')` should be a `KeyError`, just like any other indexing operation.
| 2016-09-15T01:57:41Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-9-0663486889da>", line 1, in <module>
runfile('C:/PythonDir/test04.py', wdir='C:/PythonDir')
File "C:\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 714, in runfile
execfile(filename, namespace)
File "C:\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 74, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "C:/PythonDir/test04.py", line 20, in <module>
print g.get_group('(40, 45]').median()
File "C:\Anaconda2\lib\site-packages\pandas\core\groupby.py", line 587, in get_group
raise KeyError(name)
KeyError: '(40, 45]'
| 11,007 |
|||
pandas-dev/pandas | pandas-dev__pandas-14329 | 6dcc23862b6b60ce2a67436b4a278fbe4c05490f | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1635,7 +1635,8 @@ def to_latex(self, buf=None, columns=None, col_space=None, header=True,
When set to False prevents from escaping latex special
characters in column names.
encoding : str, default None
- Default encoding is ascii in Python 2 and utf-8 in Python 3
+ A string representing the encoding to use in the output file,
+ defaults to 'ascii' on Python 2 and 'utf-8' on Python 3.
decimal : string, default '.'
Character recognized as decimal separator, e.g. ',' in Europe
diff --git a/pandas/formats/format.py b/pandas/formats/format.py
--- a/pandas/formats/format.py
+++ b/pandas/formats/format.py
@@ -654,6 +654,9 @@ def to_latex(self, column_format=None, longtable=False, encoding=None):
latex_renderer = LatexFormatter(self, column_format=column_format,
longtable=longtable)
+ if encoding is None:
+ encoding = 'ascii' if compat.PY2 else 'utf-8'
+
if hasattr(self.buf, 'write'):
latex_renderer.write_result(self.buf)
elif isinstance(self.buf, compat.string_types):
| TST: 3.5 c-locale
https://travis-ci.org/pydata/pandas/jobs/161159736
xref #14114, #12337
```
======================================================================
ERROR: test_to_latex_filename (pandas.tests.formats.test_format.TestDataFrameFormatting)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/travis/build/pydata/pandas/pandas/tests/formats/test_format.py", line 2825, in test_to_latex_filename
df.to_latex(path)
File "/home/travis/build/pydata/pandas/pandas/core/frame.py", line 1661, in to_latex
encoding=encoding)
File "/home/travis/build/pydata/pandas/pandas/formats/format.py", line 662, in to_latex
latex_renderer.write_result(f)
File "/home/travis/build/pydata/pandas/pandas/formats/format.py", line 906, in write_result
buf.write(' & '.join(crow))
UnicodeEncodeError: 'ascii' codec can't encode character '\xdf' in position 7: ordinal not in range(128)
----------------------------------------------------------------------
Ran 10273 tests in 375.734s
```
cc @nbonnotte
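As a possible stop-gap on the C-locale builds, passing an explicit encoding to `to_latex` should avoid falling back to the ascii codec; this is only a sketch and has not been verified on the Travis image itself:
```python
import pandas as pd

df = pd.DataFrame({u'Gl\xfcck': [1, 2]})

# An explicit encoding sidesteps the locale-dependent default.
df.to_latex('out.tex', encoding='utf-8')
```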
| cc @nbonnotte
| 2016-10-01T17:34:21Z | [] | [] |
Traceback (most recent call last):
File "/home/travis/build/pydata/pandas/pandas/tests/formats/test_format.py", line 2825, in test_to_latex_filename
df.to_latex(path)
File "/home/travis/build/pydata/pandas/pandas/core/frame.py", line 1661, in to_latex
encoding=encoding)
File "/home/travis/build/pydata/pandas/pandas/formats/format.py", line 662, in to_latex
latex_renderer.write_result(f)
File "/home/travis/build/pydata/pandas/pandas/formats/format.py", line 906, in write_result
buf.write(' & '.join(crow))
UnicodeEncodeError: 'ascii' codec can't encode character '\xdf' in position 7: ordinal not in range(128)
| 11,016 |
|||
pandas-dev/pandas | pandas-dev__pandas-14853 | b6de920d8c3c2becc46b4fe233e9f388947554f2 | BUG: Passing ambiguous ndarray[datetime64[ns]] to DatetimeIndex constructor can cause ValueError with wrong offset
if you give infer_freq 5 consecutive weekdays, it'll come back with 'D' as its inferred frequency. But if your actual frequency is `BDay`, then, when DatetimeIndex checks that the frequency matches, 'B' != 'D'. (note that verify_integrity=False skips this). This leads to a more general issue about `infer_freq` with ambiguous cases. I think it makes the most sense to move these sorts of checks to a method on offset that takes a frequency and an Index or ndarray, and determines whether it is compatible.
This matters because you can hit some edge cases when you pass freq and also datetime64[ns] to the DatetimeIndex constructor and more generally because comparing freqstr is probably not the best way to go about checking whether a frequency matches.
Default implementation could be:
``` python
def is_compatible(freqstr, arr=None):
return freqstr == self.freqstr
```
and then bday could do something like (and this is totally pseudocode)
``` python
def is_compatible(freqstr, arr=None):
if freqstr == self.freqstr: return True
if arr is not None and len(arr) <= 5:
if freqstr == 'D': # or other compatibles that ensure it's consecutive
return all(is_weekday(date) for date in arr)
```
This gets more complicated with multiplied offsets, but I think it's worth considering.
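Fleshing that pseudocode out slightly, a hedged sketch of what such a method could look like; `is_compatible` and `BDaySketch` are hypothetical names for illustration, not an existing pandas API:
```python
def is_weekday(ts):
    # Monday=0 .. Friday=4, works for datetime.datetime and pd.Timestamp alike
    return ts.weekday() < 5

class BDaySketch(object):
    freqstr = 'B'

    def is_compatible(self, freqstr, arr=None):
        if freqstr == self.freqstr:          # exact match is always fine
            return True
        # a short run of weekdays is indistinguishable from daily data
        if arr is not None and len(arr) <= 5 and freqstr == 'D':
            return all(is_weekday(ts) for ts in arr)
        return False
```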
``` python
from datetime import datetime
import pandas as pd
dates = [datetime(2013, 10, 7), datetime(2013, 10, 8), datetime(2013, 10, 9)]
ind = pd.DatetimeIndex(dates, freq=pd.tseries.frequencies.BDay())
ind2 = pd.DatetimeIndex(ind.values, freq=pd.tseries.frequencies.BDay(),
verify_integrity=False)
ind3 = pd.DatetimeIndex(ind.values, freq=pd.tseries.frequencies.BDay())
```
produces this Traceback:
```
Traceback (most recent call last):
File "test2.py", line 8, in <module>
ind3 = pd.DatetimeIndex(ind.values, freq=pd.tseries.frequencies.BDay())
File "../pandas/tseries/index.py", line 280, in __new__
raise ValueError('Dates do not conform to passed '
ValueError: Dates do not conform to passed frequency
```
cc @cancan101 - this is what we need to deal with in adding your offsets. I believe that every other offset can be returned from infer_freq, so these offsets would be different and therefore could _never_ pass integrity checks. So either we'd need to change infer_freq and/or define some kind of is_compatible method that intelligently covers all the ways in which the frequency could be something different than its freqstr.
| Pretty simple test case:
``` python
In [88]: infer_freq([datetime(2013, 10, 7), datetime(2013, 10, 8), datetime(2013, 10, 9)])
Out[88]: 'D'
```
which I agree is incorrect since a frequency of 'BD' cannot be ruled out.
@wesm ?
no that's not incorrect - it's reasonable and valid as the freq for the sequence, right? It might be ambiguous, but all that matters is inferring some frequency. So either it should return a list of possibilities, or we pass the buck to offsets to handle the ambiguity.
Okay. Agreed, "incorrect" is not the right word. That being said, what exactly is the spec for infer_freq? There is not quite a total ordering of frequencies by specificity, but in general, should infer_freq return the most specific or the more general frequency?
@cancan101 it obviously is the most general; it is used quite extensively internally to lazily evaluate the frequency when it's not already assigned
Perhaps `infer_freq` should have an option not to guess when there is any potential for ambiguity. This might be the default behavior, then. I don't think anyone will shed tears if a length-3 (or 5) array case like you describe gets inferred as no frequency with the change.
@wesm or at the very least could skip frequency inference when small.
| 2016-12-10T21:23:26Z | [] | [] |
Traceback (most recent call last):
File "test2.py", line 8, in <module>
ind3 = pd.DatetimeIndex(ind.values, freq=pd.tseries.frequencies.BDay())
File "../pandas/tseries/index.py", line 280, in __new__
raise ValueError('Dates do not conform to passed '
ValueError: Dates do not conform to passed frequency
| 11,066 |
||||
pandas-dev/pandas | pandas-dev__pandas-14884 | 3ba2cff9c55cd16b172f9feb09da551990753f3b | Assigning datetime array to column fails with OutOfBoundsDatetime when having NaT and other unit as [ns]
Assigning an array with datetime64[ns] values including a NaT just works:
```
In [85]: a = np.array([1, 'nat'], dtype='datetime64[ns]')
In [86]: pd.Series(a)
Out[86]:
0 1970-01-01 00:00:00.000000001
1 NaT
dtype: datetime64[ns]
In [88]: df = pd.Series(a).to_frame()
In [89]: df['new'] = a
```
But for an array with another date unit, converting it to a Series still works, while assigning it directly to a column no longer does, resulting in an OutOfBoundsDatetime error:
```
In [90]: a = np.array([1, 'nat'], dtype='datetime64[s]')
In [91]: pd.Series(a)
Out[91]:
0 1970-01-01 00:00:01
1 NaT
dtype: datetime64[ns]
In [92]: df['new'] = a
Traceback (most recent call last):
...
File "tslib.pyx", line 1720, in pandas.tslib.cast_to_nanoseconds (pandas\tslib.c:27435)
File "tslib.pyx", line 1023, in pandas.tslib._check_dts_bounds (pandas\tslib.c:18102)
OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 292277026596-12-03 08:29:52
```
If you first convert it to a series, it does work. Also if the `NaT` is not present:
```
In [93]: df['new'] = pd.Series(a)
In [94]: a = np.array([1, 2], dtype='datetime64[s]')
In [95]: df['new'] = a
```
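A hedged workaround sketch until this is fixed: cast the array to nanosecond resolution (or go through a Series, as above) before assigning. Whether the plain `astype` preserves `NaT` may depend on the numpy version, so treat this as a sketch rather than a guaranteed recipe:
```python
import numpy as np
import pandas as pd

a = np.array([1, 'nat'], dtype='datetime64[s]')
df = pd.Series(np.array([1, 'nat'], dtype='datetime64[ns]')).to_frame()

# Casting to [ns] first avoids the conversion path that overflows on NaT.
df['new'] = a.astype('datetime64[ns]')
print(df['new'])
```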
| yep, this conversion is amazingly complex. marking as a bug
| 2016-12-15T06:06:56Z | [] | [] |
Traceback (most recent call last):
...
File "tslib.pyx", line 1720, in pandas.tslib.cast_to_nanoseconds (pandas\tslib.c:27435)
File "tslib.pyx", line 1023, in pandas.tslib._check_dts_bounds (pandas\tslib.c:18102)
OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 292277026596-12-03 08:29:52
| 11,069 |
||||
pandas-dev/pandas | pandas-dev__pandas-14886 | 5f889a2106f6584583458e01dbd0f3b9b696fab2 | diff --git a/doc/source/whatsnew/v0.19.2.txt b/doc/source/whatsnew/v0.19.2.txt
--- a/doc/source/whatsnew/v0.19.2.txt
+++ b/doc/source/whatsnew/v0.19.2.txt
@@ -78,7 +78,7 @@ Bug Fixes
- Bug in clipboard functions on linux with python2 with unicode and separators (:issue:`13747`)
- Bug in clipboard functions on Windows 10 and python 3 (:issue:`14362`, :issue:`12807`)
- Bug in ``.to_clipboard()`` and Excel compat (:issue:`12529`)
-
+- Bug in ``DataFrame.combine_first()`` for integer columns (:issue:`14687`).
- Bug in ``pd.read_csv()`` in which the ``dtype`` parameter was not being respected for empty data (:issue:`14712`)
- Bug in ``pd.read_csv()`` in which the ``nrows`` parameter was not being respected for large input when using the C engine for parsing (:issue:`7626`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3665,10 +3665,8 @@ def combine(self, other, func, fill_value=None, overwrite=True):
otherSeries[other_mask] = fill_value
# if we have different dtypes, possibily promote
- if notnull(series).all():
- new_dtype = this_dtype
- otherSeries = otherSeries.astype(new_dtype)
- else:
+ new_dtype = this_dtype
+ if not is_dtype_equal(this_dtype, other_dtype):
new_dtype = _find_common_type([this_dtype, other_dtype])
if not is_dtype_equal(this_dtype, new_dtype):
series = series.astype(new_dtype)
| combine_first throws ValueError: Cannot convert NA to integer
I do not understand why there is a need to convert NA to integer if the result does not have NAs. Perhaps the combine_first algo needs to do it under the hood?
#### A small, complete example of the issue
```python
from pandas import DataFrame
DataFrame({'a': [0, 1, 3, 5]}).combine_first(DataFrame({'a': [1, 4]}))
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/IPython/core/interactiveshell.py", line 3066, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-16-12b973b1b150>", line 1, in <module>
pd.DataFrame({'a': [0, 1, 3, 5]}).combine_first(pd.DataFrame({'a': [1, 4]}))
File "/usr/local/lib/python3.4/dist-packages/pandas/core/frame.py", line 3787, in combine_first
return self.combine(other, combiner, overwrite=False)
File "/usr/local/lib/python3.4/dist-packages/pandas/core/frame.py", line 3714, in combine
otherSeries = otherSeries.astype(new_dtype)
File "/usr/local/lib/python3.4/dist-packages/pandas/core/generic.py", line 3054, in astype
raise_on_error=raise_on_error, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/pandas/core/internals.py", line 3168, in astype
return self.apply('astype', dtype=dtype, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/pandas/core/internals.py", line 3035, in apply
applied = getattr(b, f)(**kwargs)
File "/usr/local/lib/python3.4/dist-packages/pandas/core/internals.py", line 462, in astype
values=values, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/pandas/core/internals.py", line 505, in _astype
values = _astype_nansafe(values.ravel(), dtype, copy=True)
File "/usr/local/lib/python3.4/dist-packages/pandas/types/cast.py", line 531, in _astype_nansafe
raise ValueError('Cannot convert NA to integer')
ValueError: Cannot convert NA to integer
```
#### Expected Output
```
a
0 0
1 1
2 3
3 5
```
It does work when at least one item is a float:
```python
DataFrame({'a': [0.0, 1, 3, 5]}).combine_first(DataFrame({'a': [1, 4]}))
a
0 0.0
1 1.0
2 3.0
3 5.0
```
I am aware that integer series cannot have NAs, but there is no need to introduce NAs here. I do like that the series is not silently upcast to float, though.
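Until this is fixed, one hedged workaround is to combine on floats explicitly and cast back to integers only if no NAs were actually introduced (a sketch, not a recommendation of what the final behaviour should be):
```python
from pandas import DataFrame

df1 = DataFrame({'a': [0, 1, 3, 5]})
df2 = DataFrame({'a': [1, 4]})

# Do the combine on floats, then restore the integer dtype if nothing is missing.
out = df1.astype('float64').combine_first(df2.astype('float64'))
if out.notnull().all().all():
    out = out.astype('int64')
print(out)
```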
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.4.3.final.0
python-bits: 64
OS: Linux
OS-release: 3.19.0-66-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.19.0
nose: None
pip: 1.5.4
setuptools: 3.3
Cython: 0.24.1
numpy: 1.11.2
scipy: 0.17.1
statsmodels: 0.6.1
xarray: None
IPython: 4.0.0
sphinx: None
patsy: 0.4.1
dateutil: 2.5.3
pytz: 2016.7
blosc: None
bottleneck: None
tables: 3.2.2
numexpr: 2.4.6
matplotlib: 1.5.1
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: 0.999
httplib2: 0.9.2
apiclient: None
sqlalchemy: None
pymysql: None
psycopg2: 2.6.1 (dt dec pq3 ext lo64)
jinja2: 2.8
boto: None
pandas_datareader: None
</details>
| This seems to be a regression from 0.18, as this worked before:
```
In [1]: DataFrame({'a': [0, 1, 3, 5]}).combine_first(DataFrame({'a': [1, 4]}))
Out[1]:
a
0 0
1 1
2 3
3 5
In [2]: pd.__version__
Out[2]: u'0.18.1'
```
@Dmitrii-I Thanks for the report! Always welcome to look into what could have caused this change.
| 2016-12-15T10:53:24Z | [] | [] |
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/IPython/core/interactiveshell.py", line 3066, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-16-12b973b1b150>", line 1, in <module>
pd.DataFrame({'a': [0, 1, 3, 5]}).combine_first(pd.DataFrame({'a': [1, 4]}))
File "/usr/local/lib/python3.4/dist-packages/pandas/core/frame.py", line 3787, in combine_first
return self.combine(other, combiner, overwrite=False)
File "/usr/local/lib/python3.4/dist-packages/pandas/core/frame.py", line 3714, in combine
otherSeries = otherSeries.astype(new_dtype)
File "/usr/local/lib/python3.4/dist-packages/pandas/core/generic.py", line 3054, in astype
raise_on_error=raise_on_error, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/pandas/core/internals.py", line 3168, in astype
return self.apply('astype', dtype=dtype, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/pandas/core/internals.py", line 3035, in apply
applied = getattr(b, f)(**kwargs)
File "/usr/local/lib/python3.4/dist-packages/pandas/core/internals.py", line 462, in astype
values=values, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/pandas/core/internals.py", line 505, in _astype
values = _astype_nansafe(values.ravel(), dtype, copy=True)
File "/usr/local/lib/python3.4/dist-packages/pandas/types/cast.py", line 531, in _astype_nansafe
raise ValueError('Cannot convert NA to integer')
ValueError: Cannot convert NA to integer
| 11,070 |
|||
pandas-dev/pandas | pandas-dev__pandas-14907 | e503d40ace473556990e5453ed5b4c9aa96e24ff | groupby/transform with NaNs in grouped column
What's the expected behavior when grouping on a column containing `NaN` and then applying `transform`? For a `Series`, the current result is to throw an exception:
```
>>> df = pd.DataFrame({
... 'a' : range(10),
... 'b' : [1, 1, 2, 3, np.nan, 4, 4, 5, 5, 5]})
>>>
>>> df.groupby(df.b)['a'].transform(max)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/core/groupby.py", line 2422, in transform
return self._transform_fast(cyfunc)
File "pandas/core/groupby.py", line 2463, in _transform_fast
return self._set_result_index_ordered(Series(values))
File "pandas/core/groupby.py", line 498, in _set_result_index_ordered
result.index = self.obj.index
File "pandas/core/generic.py", line 1997, in __setattr__
return object.__setattr__(self, name, value)
File "pandas/src/properties.pyx", line 65, in pandas.lib.AxisProperty.__set__ (pandas/lib.c:41301)
obj._set_axis(self.axis, value)
File "pandas/core/series.py", line 273, in _set_axis
self._data.set_axis(axis, labels)
File "pandas/core/internals.py", line 2219, in set_axis
'new values have %d elements' % (old_len, new_len))
ValueError: Length mismatch: Expected axis has 9 elements, new values have 10 elements
```
For a `DataFrame`, the missing value gets filled in with what looks like an uninitialized value from `np.empty_like`:
```
>>> df.groupby(df.b).transform(max)
a
0 1
1 1
2 2
3 3
4 -1
5 6
6 6
7 9
8 9
9 9
```
It seems like either it should fill in the missing values with `NaN` (which might require a change of dtype), or just drop those rows from the result (which requires the shape to change). Either solution has the potential to surprise.
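In the meantime, a hedged workaround is to give the NaN keys a placeholder so every row keeps a group, then mask those rows back to NaN afterwards; the `-1` placeholder below is an assumption and must not collide with a real key:
```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'a': range(10),
    'b': [1, 1, 2, 3, np.nan, 4, 4, 5, 5, 5]})

key = df['b'].fillna(-1)                    # placeholder group for the NaN rows
out = df.groupby(key)['a'].transform(max)
out[df['b'].isnull()] = np.nan              # restore NaN for the unlabeled rows
print(out)
```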
| http://pandas.pydata.org/pandas-docs/stable/groupby.html#na-group-handling
This _should_ work, so this is a bug as the NA group is not defined. Resultant value should be `NaN`.
xref #5456
xref #6992
xref #443
| 2016-12-18T07:12:07Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/core/groupby.py", line 2422, in transform
return self._transform_fast(cyfunc)
File "pandas/core/groupby.py", line 2463, in _transform_fast
return self._set_result_index_ordered(Series(values))
File "pandas/core/groupby.py", line 498, in _set_result_index_ordered
result.index = self.obj.index
File "pandas/core/generic.py", line 1997, in __setattr__
return object.__setattr__(self, name, value)
File "pandas/src/properties.pyx", line 65, in pandas.lib.AxisProperty.__set__ (pandas/lib.c:41301)
obj._set_axis(self.axis, value)
File "pandas/core/series.py", line 273, in _set_axis
self._data.set_axis(axis, labels)
File "pandas/core/internals.py", line 2219, in set_axis
'new values have %d elements' % (old_len, new_len))
ValueError: Length mismatch: Expected axis has 9 elements, new values have 10 elements
| 11,072 |
||||
pandas-dev/pandas | pandas-dev__pandas-14952 | 74de478392e09cf938d244f5990da4e001afc84c | Groupby.groups doesn't work by a groups convert from DateTimeIndex
```
i = pd.DatetimeIndex(pd.date_range('2015/01/01', periods=5), name='date')
d = pd.DataFrame({'A':[5,6,7,8,9], 'B':[1,2,3,4,5]}, index=i)
print i
print d
dg = d.groupby(level='date')
print dg.get_group('2015-01-01')
print dg.groups
```
**Output:**
```
DatetimeIndex(['2015-01-01', '2015-01-02', '2015-01-03', '2015-01-04',
'2015-01-05'],
dtype='datetime64[ns]', name=u'date', freq='D', tz=None)
A B
date
2015-01-01 5 1
2015-01-02 6 2
2015-01-03 7 3
2015-01-04 8 4
2015-01-05 9 5
A B
date
2015-01-01 5 1
Traceback (most recent call last):
File "<ipython-input-2-f2b4d1146750>", line 12, in <module>
print dg.groups
File "/home/george/anaconda/lib/python2.7/site-packages/pandas/core/groupby.py", line 405, in groups
return self.grouper.groups
File "pandas/src/properties.pyx", line 34, in pandas.lib.cache_readonly.__get__ (pandas/lib.c:41917)
File "/home/george/anaconda/lib/python2.7/site-packages/pandas/core/groupby.py", line 1352, in groups
return self.groupings[0].groups
File "pandas/src/properties.pyx", line 34, in pandas.lib.cache_readonly.__get__ (pandas/lib.c:41917)
File "/home/george/anaconda/lib/python2.7/site-packages/pandas/core/groupby.py", line 2041, in groups
return self.index.groupby(self.grouper)
File "/home/george/anaconda/lib/python2.7/site-packages/pandas/tseries/base.py", line 60, in groupby
return _algos.groupby_object(objs, f)
TypeError: Argument 'labels' has incorrect type (expected numpy.ndarray, got DatetimeIndex)
```
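One possible workaround that sidesteps the failing `.groups` path is to iterate the groupby object and build the mapping by hand, e.g.:
```python
import pandas as pd

i = pd.DatetimeIndex(pd.date_range('2015/01/01', periods=5), name='date')
d = pd.DataFrame({'A': [5, 6, 7, 8, 9], 'B': [1, 2, 3, 4, 5]}, index=i)
dg = d.groupby(level='date')

# Iterating the groupby never touches Index.groupby, so it does not raise.
groups = {key: frame.index for key, frame in dg}
print(groups)
```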
| hmm, does seem a bit buggy
@Temmplar reason you closed this?
@jreback sorry I guess, I'm not familiar with the system and I thought if you answered I have to close it.
nope
it would be closed by a pull request that fixes it; PRs are welcome!
I tried this in `tseries/base.py`:
```
- return _algos.groupby_object(objs, f)
+ return _algos.groupby_object(objs, np.asarray(f))
```
And it sort of works, but the results are ugly, especially if a timezone is involved. An index like this:
```
DatetimeIndex(['2016-06-28 05:30:00-05:00', '2016-06-28 05:31:00-05:00'], dtype='datetime64[ns, America/Chicago]')
```
Produces naive UTC results:
```
{numpy.datetime64('2016-06-28T10:30:00.000000000'): [Timestamp('2016-06-28 05:30:00-0500', ...
```
@jreback Do you have any idea how to fix this simply and properly?
I am getting this error too.
| 2016-12-22T06:45:33Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-2-f2b4d1146750>", line 12, in <module>
print dg.groups
File "/home/george/anaconda/lib/python2.7/site-packages/pandas/core/groupby.py", line 405, in groups
return self.grouper.groups
File "pandas/src/properties.pyx", line 34, in pandas.lib.cache_readonly.__get__ (pandas/lib.c:41917)
File "/home/george/anaconda/lib/python2.7/site-packages/pandas/core/groupby.py", line 1352, in groups
return self.groupings[0].groups
File "pandas/src/properties.pyx", line 34, in pandas.lib.cache_readonly.__get__ (pandas/lib.c:41917)
File "/home/george/anaconda/lib/python2.7/site-packages/pandas/core/groupby.py", line 2041, in groups
return self.index.groupby(self.grouper)
File "/home/george/anaconda/lib/python2.7/site-packages/pandas/tseries/base.py", line 60, in groupby
return _algos.groupby_object(objs, f)
TypeError: Argument 'labels' has incorrect type (expected numpy.ndarray, got DatetimeIndex)
| 11,079 |
||||
pandas-dev/pandas | pandas-dev__pandas-15569 | 5f0b69aee3622eed9392cef163e4b31ba742498e | diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -635,7 +635,7 @@ Performance Improvements
- Increased performance of ``pd.factorize()`` by releasing the GIL with ``object`` dtype when inferred as strings (:issue:`14859`)
- Improved performance of timeseries plotting with an irregular DatetimeIndex
(or with ``compat_x=True``) (:issue:`15073`).
-- Improved performance of ``groupby().cummin()`` and ``groupby().cummax()`` (:issue:`15048`, :issue:`15109`)
+- Improved performance of ``groupby().cummin()`` and ``groupby().cummax()`` (:issue:`15048`, :issue:`15109`, :issue:`15561`)
- Improved performance and reduced memory when indexing with a ``MultiIndex`` (:issue:`15245`)
- When reading buffer object in ``read_sas()`` method without specified format, filepath string is inferred rather than buffer object. (:issue:`14947`)
- Improved performance of `rank()` for categorical data (:issue:`15498`)
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -1442,7 +1442,7 @@ def cummin(self, axis=0, **kwargs):
if axis != 0:
return self.apply(lambda x: np.minimum.accumulate(x, axis))
- return self._cython_transform('cummin', **kwargs)
+ return self._cython_transform('cummin', numeric_only=False)
@Substitution(name='groupby')
@Appender(_doc_template)
@@ -1451,7 +1451,7 @@ def cummax(self, axis=0, **kwargs):
if axis != 0:
return self.apply(lambda x: np.maximum.accumulate(x, axis))
- return self._cython_transform('cummax', **kwargs)
+ return self._cython_transform('cummax', numeric_only=False)
@Substitution(name='groupby')
@Appender(_doc_template)
| BUG: cython version of groupby.cummax throws error on datetimes
#### Code Sample, a copy-pastable example if possible
```python
>>> import pandas as pd
>>> x = pd.DataFrame(dict(a=[1], b=pd.to_datetime(['2001'])))
>>> x.groupby('a').b.cummax()
Traceback (most recent call last):
File "<ipython-input-9-316257648d5f>", line 1, in <module>
x.groupby('a').b.cummax()
File "~/anaconda3/lib/python3.5/site-packages/pandas/core/groupby.py", line 1454, in cummax
return self._cython_transform('cummax', **kwargs)
File "~/anaconda3/lib/python3.5/site-packages/pandas/core/groupby.py", line 806, in _cython_transform
raise DataError('No numeric types to aggregate')
DataError: No numeric types to aggregate
```
#### Problem description
The current github version of pandas has cython implementations of `groupby.cummin` and `groupby.cummax`, which throw an error if called on datetime columns. (See #15048, 0fe491d.)
#### Expected Output
```python
0 2001-01-01
Name: b, dtype: datetime64[ns]
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.5.2.final.0
python-bits: 64
OS: Linux
OS-release: 4.9.8-100.fc24.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: C
LANG: C
LOCALE: None.None
pandas: 0.19.0+531.g04e1168
pytest: 3.0.5
pip: 9.0.1
setuptools: 27.2.0
Cython: 0.25.2
numpy: 1.11.3
scipy: 0.18.1
xarray: 0.9.1
IPython: 4.2.0
sphinx: 1.5.1
patsy: 0.4.1
dateutil: 2.6.0
pytz: 2016.10
blosc: None
bottleneck: 1.2.0
tables: 3.3.0
numexpr: 2.6.2
feather: None
matplotlib: 2.0.0
openpyxl: 2.4.1
xlrd: 1.0.0
xlwt: 1.2.0
xlsxwriter: 0.9.6
lxml: 3.7.2
bs4: 4.5.3
html5lib: 0.999
sqlalchemy: 1.1.5
pymysql: None
psycopg2: None
jinja2: 2.9.4
s3fs: None
pandas_gbq: None
pandas_datareader: None
</details>
| I think this may be as easy as adding a `numeric_only=False` to this function call. (of course with tests)
https://github.com/pandas-dev/pandas/blob/04e116851337cd852b4255f8221d9be44829e0e1/pandas/core/groupby.py#L1454
This is correct and not a bug. datetimes are not numeric types. This has to be specifically enabled.
this was covered by #15054
```
In [29]: x = pd.DataFrame(dict(a=[1], b=pd.to_datetime(['2001'])))
...: >>> x.groupby('a').b.cummax(numeric_only=False)
...:
...:
Out[29]:
0 2001-01-01
Name: b, dtype: datetime64[ns]
```
@jreback - maybe the API is set, but isn't this inconsistent?
```python
In [23]: df = pd.DataFrame(dict(a=[1], b=pd.to_datetime(['2001'])))
In [24]: df['b'].max()
Out[24]: Timestamp('2001-01-01 00:00:00')
In [25]: df['b'].cummax()
Out[25]:
0 2001-01-01
Name: b, dtype: datetime64[ns]
In [26]: df.groupby('a')['b'].max()
Out[26]:
a
1 2001-01-01
Name: b, dtype: datetime64[ns]
In [27]: df.groupby('a')['b'].cummax()
Out[27]:
DataError: No numeric types to aggregate
```
@chris-b1 hmm, ahh we are defaulting ``numeric_only=False`` for ``.max``
yes that should be done for ``cummax``/``cummin``. | 2017-03-04T20:20:15Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-9-316257648d5f>", line 1, in <module>
x.groupby('a').b.cummax()
File "~/anaconda3/lib/python3.5/site-packages/pandas/core/groupby.py", line 1454, in cummax
return self._cython_transform('cummax', **kwargs)
File "~/anaconda3/lib/python3.5/site-packages/pandas/core/groupby.py", line 806, in _cython_transform
raise DataError('No numeric types to aggregate')
DataError: No numeric types to aggregate
| 11,116 |
|||
pandas-dev/pandas | pandas-dev__pandas-16090 | d313e4dd7605a658869f5d026d6705afb169ab40 | diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -1600,7 +1600,7 @@ Indexing
- Bug in the HTML display with with a ``MultiIndex`` and truncation (:issue:`14882`)
- Bug in the display of ``.info()`` where a qualifier (+) would always be displayed with a ``MultiIndex`` that contains only non-strings (:issue:`15245`)
- Bug in ``pd.concat()`` where the names of ``MultiIndex`` of resulting ``DataFrame`` are not handled correctly when ``None`` is presented in the names of ``MultiIndex`` of input ``DataFrame`` (:issue:`15787`)
-- Bug in ``DataFrame.sort_index()`` and ``Series.sort_index()`` where ``na_position`` doesn't work with a ``MultiIndex`` (:issue:`14784`)
+- Bug in ``DataFrame.sort_index()`` and ``Series.sort_index()`` where ``na_position`` doesn't work with a ``MultiIndex`` (:issue:`14784`, :issue:`16604`)
I/O
^^^
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1645,10 +1645,11 @@ def _get_labels_for_sorting(self):
"""
from pandas.core.categorical import Categorical
- return [Categorical.from_codes(label,
- np.arange(np.array(label).max() + 1,
- dtype=label.dtype),
- ordered=True)
+ def cats(label):
+ return np.arange(np.array(label).max() + 1 if len(label) else 0,
+ dtype=label.dtype)
+
+ return [Categorical.from_codes(label, cats(label), ordered=True)
for label in self.labels]
def sortlevel(self, level=0, ascending=True, sort_remaining=True):
| BUG: groupby().nth() throws error on multiple groups, empty result
#### Code Sample, a copy-pastable example if possible
```python
>>> import pandas as pd
>>> df = pd.DataFrame(index=[0], columns=['a', 'b', 'c'])
>>> df.groupby('a').nth(10)
Empty DataFrame
Columns: [b, c]
Index: []
>>> df.groupby(['a', 'b']).nth(10)
Traceback (most recent call last):
File "<ipython-input-3-ae8299c3984e>", line 1, in <module>
df.groupby(['a', 'b']).nth(10)
File "~/anaconda3/lib/python3.5/site-packages/pandas/core/groupby.py", line 1390, in nth
return out.sort_index() if self.sort else out
File "~/anaconda3/lib/python3.5/site-packages/pandas/core/frame.py", line 3344, in sort_index
indexer = lexsort_indexer(labels._get_labels_for_sorting(),
File "~/anaconda3/lib/python3.5/site-packages/pandas/core/indexes/multi.py", line 1652, in _get_labels_for_sorting
for label in self.labels]
File "~/anaconda3/lib/python3.5/site-packages/pandas/core/indexes/multi.py", line 1652, in <listcomp>
for label in self.labels]
File "~/anaconda3/lib/python3.5/site-packages/numpy/core/_methods.py", line 26, in _amax
return umr_maximum(a, axis, None, out, keepdims)
ValueError: zero-size array to reduction operation maximum which has no identity
```
#### Problem description
In the current Github version of Pandas, when calling `groupby().nth()` with multiple grouping columns, an error is raised if the result is empty. This is a regression from version 0.19.2.
#### Expected Output
```python
Empty DataFrame
Columns: [b, c]
Index: []
Empty DataFrame
Columns: [c]
Index: []
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.5.2.final.0
python-bits: 64
OS: Linux
OS-release: 4.9.8-100.fc24.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: C
LANG: C
LOCALE: None.None
pandas: 0.19.0+829.gb17e286
pytest: 3.0.5
pip: 9.0.1
setuptools: 27.2.0
Cython: 0.25.2
numpy: 1.11.3
scipy: 0.18.1
xarray: 0.9.1
IPython: 4.2.0
sphinx: 1.5.1
patsy: 0.4.1
dateutil: 2.6.0
pytz: 2016.10
blosc: None
bottleneck: 1.2.0
tables: 3.3.0
numexpr: 2.6.2
feather: None
matplotlib: 2.0.0
openpyxl: 2.4.1
xlrd: 1.0.0
xlwt: 1.2.0
xlsxwriter: 0.9.6
lxml: 3.7.2
bs4: 4.5.3
html5lib: 0.999
sqlalchemy: 1.1.5
pymysql: None
psycopg2: None
jinja2: 2.9.4
s3fs: None
pandas_gbq: None
pandas_datareader: None
</details>
| Think this just requires:
```diff
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 92baf9d..34b62c5 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1645,11 +1645,9 @@ class MultiIndex(Index):
"""
from pandas.core.categorical import Categorical
- return [Categorical.from_codes(label,
- np.arange(np.array(label).max() + 1,
- dtype=label.dtype),
- ordered=True)
- for label in self.labels]
+ return [Categorical.from_codes(label, np.arange(
+ np.array(label).max() + 1 if len(label) else 0,
+ dtype=label.dtype), ordered=True) for label in self.labels]
def sortlevel(self, level=0, ascending=True, sort_remaining=True):
"""
```
can u put up a PR with that fix (and test)? | 2017-04-21T23:11:51Z | [] | [] |
Traceback (most recent call last):
File "<ipython-input-3-ae8299c3984e>", line 1, in <module>
df.groupby(['a', 'b']).nth(10)
File "~/anaconda3/lib/python3.5/site-packages/pandas/core/groupby.py", line 1390, in nth
return out.sort_index() if self.sort else out
File "~/anaconda3/lib/python3.5/site-packages/pandas/core/frame.py", line 3344, in sort_index
indexer = lexsort_indexer(labels._get_labels_for_sorting(),
File "~/anaconda3/lib/python3.5/site-packages/pandas/core/indexes/multi.py", line 1652, in _get_labels_for_sorting
for label in self.labels]
File "~/anaconda3/lib/python3.5/site-packages/pandas/core/indexes/multi.py", line 1652, in <listcomp>
for label in self.labels]
File "~/anaconda3/lib/python3.5/site-packages/numpy/core/_methods.py", line 26, in _amax
return umr_maximum(a, axis, None, out, keepdims)
ValueError: zero-size array to reduction operation maximum which has no identity
| 11,152 |
|||
pandas-dev/pandas | pandas-dev__pandas-16294 | 4bed864a24901d9c2baab5e17c57c956a188602f | diff --git a/doc/source/whatsnew/v0.20.2.txt b/doc/source/whatsnew/v0.20.2.txt
--- a/doc/source/whatsnew/v0.20.2.txt
+++ b/doc/source/whatsnew/v0.20.2.txt
@@ -48,7 +48,7 @@ Indexing
I/O
^^^
-
+- Bug that would force importing of the clipboard routines unecessarily, potentially causing an import error on startup (:issue:`16288`)
Plotting
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1382,8 +1382,8 @@ def to_clipboard(self, excel=None, sep=None, **kwargs):
- Windows: none
- OS X: none
"""
- from pandas.io.clipboard import clipboard
- clipboard.to_clipboard(self, excel=excel, sep=sep, **kwargs)
+ from pandas.io import clipboards
+ clipboards.to_clipboard(self, excel=excel, sep=sep, **kwargs)
def to_xarray(self):
"""
diff --git a/pandas/io/api.py b/pandas/io/api.py
--- a/pandas/io/api.py
+++ b/pandas/io/api.py
@@ -5,7 +5,7 @@
# flake8: noqa
from pandas.io.parsers import read_csv, read_table, read_fwf
-from pandas.io.clipboard.clipboard import read_clipboard
+from pandas.io.clipboards import read_clipboard
from pandas.io.excel import ExcelFile, ExcelWriter, read_excel
from pandas.io.pytables import HDFStore, get_store, read_hdf
from pandas.io.json import read_json
diff --git a/pandas/io/clipboard/clipboard.py b/pandas/io/clipboards.py
similarity index 100%
rename from pandas/io/clipboard/clipboard.py
rename to pandas/io/clipboards.py
| ImportError with pandas 0.20.0 and 0.20.1
#### Code Sample, a copy-pastable example if possible
With versions 0.20.0 and 0.20.1, I get the following error:
```python
import pandas
```
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/smiel/.venvs/foo/local/lib/python2.7/site-packages/pandas/__init__.py", line 58, in <module>
from pandas.io.api import *
File "/home/smiel/.venvs/foo/local/lib/python2.7/site-packages/pandas/io/api.py", line 8, in <module>
from pandas.io.clipboard.clipboard import read_clipboard
File "/home/smiel/.venvs/foo/local/lib/python2.7/site-packages/pandas/io/clipboard/__init__.py", line 103, in <module>
copy, paste = determine_clipboard()
File "/home/smiel/.venvs/foo/local/lib/python2.7/site-packages/pandas/io/clipboard/__init__.py", line 76, in determine_clipboard
return init_qt_clipboard()
File "/home/smiel/.venvs/foo/local/lib/python2.7/site-packages/pandas/io/clipboard/clipboards.py", line 49, in init_qt_clipboard
from PyQt4.QtGui import QApplication
ImportError: No module named sip
```
This does not occur with 0.19.2
Here are the other packages installed in my virtualenv
```
$ pip freeze
appdirs==1.4.3
numpy==1.12.1
packaging==16.8
pandas==0.20.1
pyparsing==2.2.0
python-dateutil==2.6.0
pytz==2017.2
PyYAML==3.12
six==1.10.0
```
#### Problem description
It would be nice if pandas was pip installable. As it stands, the new versions are not (for me).
#### Expected Output
No error when importing pandas.
#### Output of ``pd.show_versions()``
I can't get that info without being able to import pandas.
| @jorisvandenbossche did you see something like this earlier?
Not exactly this error, but possibly related. For me the error with clipboard was something with ``from PyQt4 import QtCore``.
For some reason, I had an empty PyQt4 package in site-packages next to PyQt5 (so `import PyQt4` did work, which is used to check which clipboard backend to use, but then later on actual imports raise an error), and for some reason this started to give problems. Matplotlib also had this problem, so I am not sure it was pandas-related.
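For anyone hitting the same thing, a purely diagnostic sketch (not a pandas API) to see whether a stray or empty PyQt4 package is being picked up before the real clipboard backend:
```python
try:
    import PyQt4
    print('PyQt4 found at: %s' % getattr(PyQt4, '__file__', '<namespace package>'))
    from PyQt4.QtGui import QApplication   # the import that actually fails here
    print('PyQt4.QtGui imports fine')
except ImportError as exc:
    print('PyQt4 backend unusable: %s' % exc)
```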
@FragLegs Is this in a clean (newly made) env? Or if not, can you see if you can reproduce it then as well? | 2017-05-09T01:07:17Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/smiel/.venvs/foo/local/lib/python2.7/site-packages/pandas/__init__.py", line 58, in <module>
from pandas.io.api import *
File "/home/smiel/.venvs/foo/local/lib/python2.7/site-packages/pandas/io/api.py", line 8, in <module>
from pandas.io.clipboard.clipboard import read_clipboard
File "/home/smiel/.venvs/foo/local/lib/python2.7/site-packages/pandas/io/clipboard/__init__.py", line 103, in <module>
copy, paste = determine_clipboard()
File "/home/smiel/.venvs/foo/local/lib/python2.7/site-packages/pandas/io/clipboard/__init__.py", line 76, in determine_clipboard
return init_qt_clipboard()
File "/home/smiel/.venvs/foo/local/lib/python2.7/site-packages/pandas/io/clipboard/clipboards.py", line 49, in init_qt_clipboard
from PyQt4.QtGui import QApplication
ImportError: No module named sip
| 11,183 |
|||
pandas-dev/pandas | pandas-dev__pandas-16434 | 49ec31bbaeca81a6f58fc1be26fe80f3ac188cdd | diff --git a/doc/source/whatsnew/v0.20.2.txt b/doc/source/whatsnew/v0.20.2.txt
--- a/doc/source/whatsnew/v0.20.2.txt
+++ b/doc/source/whatsnew/v0.20.2.txt
@@ -80,7 +80,7 @@ Reshaping
^^^^^^^^^
- Bug in ``DataFrame.stack`` with unsorted levels in MultiIndex columns (:issue:`16323`)
-
+- Bug in ``Series.isin(..)`` with a list of tuples (:issue:`16394`)
Numeric
^^^^^^^
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -388,7 +388,7 @@ def isin(comps, values):
"[{0}]".format(type(values).__name__))
if not isinstance(values, (ABCIndex, ABCSeries, np.ndarray)):
- values = np.array(list(values), dtype='object')
+ values = lib.list_to_object_array(list(values))
comps, dtype, _ = _ensure_data(comps)
values, _, _ = _ensure_data(values, dtype=dtype)
| BUG: ValueError with Series.isin and tuples
#### Code Sample, a copy-pastable example if possible
```
import pandas as pd
df = pd.DataFrame({'A': [1, 2, 3], 'B': ['a', 'b', 'f']})
df['C'] = list(zip(df['A'], df['B']))
df['C'].isin([(1, 'a')])
```
#### Problem description
Returns ValueError:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/anaconda/envs/pandas_dev/lib/python3.6/site-packages/pandas/core/series.py", line 2555, in isin
result = algorithms.isin(_values_from_object(self), values)
File "/anaconda/envs/pandas_dev/lib/python3.6/site-packages/pandas/core/algorithms.py", line 421, in isin
return f(comps, values)
File "/anaconda/envs/pandas_dev/lib/python3.6/site-packages/pandas/core/algorithms.py", line 399, in <lambda>
f = lambda x, y: htable.ismember_object(x, values)
File "pandas/_libs/hashtable_func_helper.pxi", line 428, in pandas._libs.hashtable.ismember_object (pandas/_libs/hashtable.c:29677)
ValueError: Buffer has wrong number of dimensions (expected 1, got 2)
#### Expected Output
In pandas 0.19.2 returns:
0 True
1 False
2 False
Name: C, dtype: bool
#### Output of ``pd.show_versions()``
<details>
# Paste the output here pd.show_versions() here
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.1.final.0
python-bits: 64
OS: Darwin
OS-release: 16.5.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.20.0rc2
pytest: None
pip: 9.0.1
setuptools: 27.2.0
Cython: None
numpy: 1.12.1
scipy: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.0
pytz: 2017.2
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
pandas_gbq: None
pandas_datareader: None
</details>
| this code was refactored to be more general, so this was a missing case. easy fix I think. ``np.array`` converts nested tuples to lists, which is not nice, so do this.
if you'd like to submit a PR with this as an added tests (and make sure nothing else breaks), would be great.
```
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index a745ec6..77d79c9 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -388,7 +388,7 @@ def isin(comps, values):
"[{0}]".format(type(values).__name__))
if not isinstance(values, (ABCIndex, ABCSeries, np.ndarray)):
- values = np.array(list(values), dtype='object')
+ values = lib.list_to_object_array(list(values))
comps, dtype, _ = _ensure_data(comps)
values, _, _ = _ensure_data(values, dtype=dtype)
```
I'm taking a crack at this. Is the solution to just add lib.list_to_object_array back in along with a test for the tuple case, or should we check if comps contains tuples and use lib.list_to_object_array only if it does? | 2017-05-22T20:03:21Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/anaconda/envs/pandas_dev/lib/python3.6/site-packages/pandas/core/series.py", line 2555, in isin
result = algorithms.isin(_values_from_object(self), values)
File "/anaconda/envs/pandas_dev/lib/python3.6/site-packages/pandas/core/algorithms.py", line 421, in isin
return f(comps, values)
File "/anaconda/envs/pandas_dev/lib/python3.6/site-packages/pandas/core/algorithms.py", line 399, in <lambda>
f = lambda x, y: htable.ismember_object(x, values)
File "pandas/_libs/hashtable_func_helper.pxi", line 428, in pandas._libs.hashtable.ismember_object (pandas/_libs/hashtable.c:29677)
ValueError: Buffer has wrong number of dimensions (expected 1, got 2)
| 11,206 |
|||
pandas-dev/pandas | pandas-dev__pandas-16486 | 92d07992e826808cd56f0bd8fec083b510ca402d | diff --git a/doc/source/whatsnew/v0.20.2.txt b/doc/source/whatsnew/v0.20.2.txt
--- a/doc/source/whatsnew/v0.20.2.txt
+++ b/doc/source/whatsnew/v0.20.2.txt
@@ -40,6 +40,7 @@ Bug Fixes
- Silenced a warning on some Windows environments about "tput: terminal attributes: No such device or address" when
detecting the terminal size. This fix only applies to python 3 (:issue:`16496`)
- Bug in using ``pathlib.Path`` or ``py.path.local`` objects with io functions (:issue:`16291`)
+- Bug in ``Index.symmetric_difference()`` on two equal MultiIndex's, results in a TypeError (:issue `13490`)
- Bug in ``DataFrame.update()`` with ``overwrite=False`` and ``NaN values`` (:issue:`15593`)
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -414,6 +414,12 @@ def view(self, cls=None):
return result
def _shallow_copy_with_infer(self, values=None, **kwargs):
+ # On equal MultiIndexes the difference is empty.
+ # Therefore, an empty MultiIndex is returned GH13490
+ if len(values) == 0:
+ return MultiIndex(levels=[[] for _ in range(self.nlevels)],
+ labels=[[] for _ in range(self.nlevels)],
+ **kwargs)
return self._shallow_copy(values, **kwargs)
@Appender(_index_shared_docs['_shallow_copy'])
| Symmetric difference on equal MultiIndexes raises TypeError
Calling `symmetric_difference` on two equal multiindices results in a TypeError rather than an empty MultiIndex. This is surprising since calling `difference` on the same multiindices results in the expected empty MultiIndex.
#### Code Sample, a copy-pastable example if possible
```
a = pandas.MultiIndex.from_product([['a', 'b'], [0, 1]])
b = pandas.MultiIndex.from_product([['a', 'b'], [0, 1]])
print(a.symmetric_difference(b))
```
Which gives the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Miniconda3\envs\balancedbeta\lib\site-packages\pandas\indexes\base.py", line 1674, in __xor__
return self.symmetric_difference(other)
File "C:\Miniconda3\envs\balancedbeta\lib\site-packages\pandas\indexes\base.py", line 1911, in symmetric_difference
return self._shallow_copy_with_infer(the_diff, **attribs)
File "C:\Miniconda3\envs\balancedbeta\lib\site-packages\pandas\indexes\multi.py", line 387, in _shallow_copy_with_infer
return self._shallow_copy(values, **kwargs)
File "C:\Miniconda3\envs\balancedbeta\lib\site-packages\pandas\indexes\multi.py", line 396, in _shallow_copy
return MultiIndex.from_tuples(values, **kwargs)
File "C:\Miniconda3\envs\balancedbeta\lib\site-packages\pandas\indexes\multi.py", line 883, in from_tuples
raise TypeError('Cannot infer number of levels from empty list')
TypeError: Cannot infer number of levels from empty list
```
#### Expected Output
```
MultiIndex(levels=[[], []], labels = [[], []])
```
#### output of `pd.show_versions()`
```
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.11.final.0
python-bits: 64
OS: Windows
OS-release: 7
machine: AMD64
processor: Intel64 Family 6 Model 61 Stepping 4, GenuineIntel
byteorder: little
LC_ALL: None
LANG: DA
pandas: 0.18.1
nose: None
pip: 8.1.1
setuptools: 20.7.0
Cython: 0.23
numpy: 1.10.4
scipy: 0.17.0
statsmodels: 0.6.1
xarray: None
IPython: 4.2.0
sphinx: None
patsy: 0.4.1
dateutil: 2.5.2
pytz: 2016.3
blosc: None
bottleneck: None
tables: 3.2.2
numexpr: 2.5.2
matplotlib: 1.5.1
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: 1.0.12
pymysql: None
psycopg2: None
jinja2: 2.8
boto: None
pandas_datareader: None
```
| yeah I agree, on empties this should behave the same as `.difference` / `.union`
cc @TomAugspurger
Agreed.
| 2017-05-24T20:40:41Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Miniconda3\envs\balancedbeta\lib\site-packages\pandas\indexes\base.py", line 1674, in __xor__
return self.symmetric_difference(other)
File "C:\Miniconda3\envs\balancedbeta\lib\site-packages\pandas\indexes\base.py", line 1911, in symmetric_difference
return self._shallow_copy_with_infer(the_diff, **attribs)
File "C:\Miniconda3\envs\balancedbeta\lib\site-packages\pandas\indexes\multi.py", line 387, in _shallow_copy_with_infer
return self._shallow_copy(values, **kwargs)
File "C:\Miniconda3\envs\balancedbeta\lib\site-packages\pandas\indexes\multi.py", line 396, in _shallow_copy
return MultiIndex.from_tuples(values, **kwargs)
File "C:\Miniconda3\envs\balancedbeta\lib\site-packages\pandas\indexes\multi.py", line 883, in from_tuples
raise TypeError('Cannot infer number of levels from empty list')
TypeError: Cannot infer number of levels from empty list
| 11,222 |
|||
pandas-dev/pandas | pandas-dev__pandas-16526 | ef487d9e474e8052c0f7c6260de5802a950defad | Various py3k test failures in tests.io.test_html with US-ASCII preferred encoding
#### Code Sample, a copy-pastable example if possible
```python
>>> import locale
>>> locale.getpreferredencoding()
'US-ASCII'
>>> open('/usr/local/lib/python3.4/site-packages/pandas/tests/io/data/spam.html').read()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.4/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 21552: ordinal not in range(128)
>>>
```
#### Problem description
Three tests, `test_string_io`, `test_string`, and `test_file_like`, all open `spam.html` without specifying the encoding, and then attempt to read it. This causes the tests to terminate prematurely with an error.
#### Expected Output
All three tests should pass since the code under test is not responsible for determining the file encoding.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.4.6.final.0
python-bits: 64
OS: FreeBSD
OS-release: 10.3-STABLE
machine: amd64
processor: amd64
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
pandas: 0.20.1
pytest: 3.1.0
pip: None
setuptools: 32.1.0
Cython: None
numpy: 1.11.2
scipy: 0.19.0
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.0
pytz: 2016.10
blosc: None
bottleneck: 1.0.0
tables: 3.4.2
numexpr: 2.6.2
feather: None
matplotlib: None
openpyxl: 2.4.7
xlrd: 1.0.0
xlwt: None
xlsxwriter: 0.9.6
lxml: 3.6.0
bs4: 4.5.1
html5lib: 0.9999999
sqlalchemy: 1.1.10
pymysql: 0.7.11.None
psycopg2: 2.7.1 (dt dec pq3 ext lo64)
jinja2: 2.9.5
s3fs: None
pandas_gbq: None
pandas_datareader: None
</details>
| 2017-05-28T18:59:57Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.4/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 21552: ordinal not in range(128)
| 11,231 |
|||||
pandas-dev/pandas | pandas-dev__pandas-16543 | 03d44f3dd0ffd55d7538b67466cf4d3899ceac27 | diff --git a/doc/source/whatsnew/v0.20.2.txt b/doc/source/whatsnew/v0.20.2.txt
--- a/doc/source/whatsnew/v0.20.2.txt
+++ b/doc/source/whatsnew/v0.20.2.txt
@@ -44,8 +44,7 @@ Bug Fixes
- Bug in ``DataFrame.update()`` with ``overwrite=False`` and ``NaN values`` (:issue:`15593`)
- Passing an invalid engine to :func:`read_csv` now raises an informative
``ValueError`` rather than ``UnboundLocalError``. (:issue:`16511`)
-
-
+- Bug in :func:`unique` on an array of tuples (:issue:`16519`)
- Fixed a compatibility issue with IPython 6.0's tab completion showing deprecation warnings on Categoricals (:issue:`16409`)
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -163,7 +163,7 @@ def _ensure_arraylike(values):
ABCIndexClass, ABCSeries)):
inferred = lib.infer_dtype(values)
if inferred in ['mixed', 'string', 'unicode']:
- values = np.asarray(values, dtype=object)
+ values = lib.list_to_object_array(values)
else:
values = np.asarray(values)
return values
@@ -328,6 +328,11 @@ def unique(values):
[b, a, c]
Categories (3, object): [a < b < c]
+ An array of tuples
+
+ >>> pd.unique([('a', 'b'), ('b', 'a'), ('a', 'c'), ('b', 'a')])
+ array([('a', 'b'), ('b', 'a'), ('a', 'c')], dtype=object)
+
See Also
--------
pandas.Index.unique
| Regression from 0.19.2 to 0.20.1 in pandas.unique() when applied to list of tuples
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
input = [(0, 0), (0, 1), (1, 0), (1, 1), (0, 0), (0, 1), (1, 0), (1, 1)]
print pd.unique(input)
```
#### Problem description
The code exits unexpectedly
```
Traceback (most recent call last):
File "pandas_bug.py", line 6, in <module>
pd.unique(input)
File "/Users/johannes/.virtualenvs/pandas/lib/python2.7/site-packages/pandas/core/algorithms.py", line 351, in unique
uniques = table.unique(values)
File "pandas/_libs/hashtable_class_helper.pxi", line 1271, in pandas._libs.hashtable.PyObjectHashTable.unique (pandas/_libs/hashtable.c:21384)
ValueError: Buffer has wrong number of dimensions (expected 1, got 2)
```
#### Expected Output
The code works on pandas version 0.19.2 and produces the expected output
```
[(0, 0) (0, 1) (1, 0) (1, 1)]
```
Moreover this problem is not limited to MacOSX, but was also encounter on Ubuntu CI server.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.10.final.0
python-bits: 64
OS: Darwin
OS-release: 15.6.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: None.None
pandas: 0.20.1
pytest: None
pip: 9.0.1
setuptools: 35.0.2
Cython: None
numpy: 1.12.1
scipy: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.0
pytz: 2017.2
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
pandas_gbq: None
pandas_datareader: None
None
</details>
| this is related to #16394 and needs the same fix, along with some tests; ensuring that nothing else breaks.
```
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 77d79c9..9cfaf04 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -163,7 +163,7 @@ def _ensure_arraylike(values):
ABCIndexClass, ABCSeries)):
inferred = lib.infer_dtype(values)
if inferred in ['mixed', 'string', 'unicode']:
- values = np.asarray(values, dtype=object)
+ values = lib.list_to_object_array(values)
else:
values = np.asarray(values)
return values
``` | 2017-05-30T22:05:37Z | [] | [] |
Traceback (most recent call last):
File "pandas_bug.py", line 6, in <module>
pd.unique(input)
File "/Users/johannes/.virtualenvs/pandas/lib/python2.7/site-packages/pandas/core/algorithms.py", line 351, in unique
uniques = table.unique(values)
File "pandas/_libs/hashtable_class_helper.pxi", line 1271, in pandas._libs.hashtable.PyObjectHashTable.unique (pandas/_libs/hashtable.c:21384)
ValueError: Buffer has wrong number of dimensions (expected 1, got 2)
| 11,236 |
|||
pandas-dev/pandas | pandas-dev__pandas-16701 | b7e7fd3f17d4d2a2f87b9d169cf87143f04e5d33 | diff --git a/doc/source/whatsnew/v0.20.3.txt b/doc/source/whatsnew/v0.20.3.txt
--- a/doc/source/whatsnew/v0.20.3.txt
+++ b/doc/source/whatsnew/v0.20.3.txt
@@ -59,7 +59,7 @@ I/O
Plotting
^^^^^^^^
-
+- Fix regression in series plotting that prevented RGB and RGBA tuples from being used as color arguments (:issue:`16233`)
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -187,6 +187,11 @@ def _validate_color_args(self):
# support series.plot(color='green')
self.kwds['color'] = [self.kwds['color']]
+ if ('color' in self.kwds and isinstance(self.kwds['color'], tuple) and
+ self.nseries == 1 and len(self.kwds['color']) in (3, 4)):
+ # support RGB and RGBA tuples in series plot
+ self.kwds['color'] = [self.kwds['color']]
+
if ('color' in self.kwds or 'colors' in self.kwds) and \
self.colormap is not None:
warnings.warn("'color' and 'colormap' cannot be used "
| DataFrame plot method no long takes RGB tuple as color arg
#### Problem description
The `plot` method on `DataFrame` objects takes a `color` argument that in versions prior to 0.20.2 took an RGB tuple as an accepted value. The 0.20.2 release throws an exception when specifying an RGB tuple for the `color` arg.
#### Code Sample
```python
# import matplotlib.pyplot as plt
df = pandas.DataFrame([[1, 2], [3, 4]], columns=['a', 'b'])
df.plot(x='b', y='a', color=(1, 0, 0))
plt.show()
```
#### Expected Output
The expected output is a red line plot. The following exception is thrown:
```python
Exception in Tkinter callback
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 1536, in __call__
return self.func(*args)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/backends/backend_tkagg.py", line 280, in resize
self.show()
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/backends/backend_tkagg.py", line 351, in draw
FigureCanvasAgg.draw(self)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/backends/backend_agg.py", line 464, in draw
self.figure.draw(self.renderer)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/artist.py", line 63, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/figure.py", line 1144, in draw
renderer, self, dsu, self.suppressComposite)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/image.py", line 139, in _draw_list_compositing_images
a.draw(renderer)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/artist.py", line 63, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/axes/_base.py", line 2426, in draw
mimage._draw_list_compositing_images(renderer, self, dsu)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/image.py", line 139, in _draw_list_compositing_images
a.draw(renderer)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/artist.py", line 63, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/lines.py", line 803, in draw
ln_color_rgba = self._get_rgba_ln_color()
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/lines.py", line 1344, in _get_rgba_ln_color
return mcolors.to_rgba(self._color, self._alpha)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/colors.py", line 143, in to_rgba
rgba = _to_rgba_no_colorcycle(c, alpha)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/colors.py", line 194, in _to_rgba_no_colorcycle
raise ValueError("Invalid RGBA argument: {!r}".format(orig_c))
ValueError: Invalid RGBA argument: 1
```
This plot is correctly displayed with an identical environment that has pandas 0.19.2 installed so is API breaking.
Apologies if this is fixed already in master.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.10.final.0
python-bits: 64
OS: Darwin
OS-release: 15.2.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.20.2
pytest: None
pip: 9.0.1
setuptools: 36.0.1
Cython: None
numpy: 1.13.0
scipy: None
xarray: None
IPython: 5.4.1
sphinx: None
patsy: None
dateutil: 2.6.0
pytz: 2017.2
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 2.0.2
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
pandas_gbq: None
pandas_datareader: None
</details>
| Follow up:
Enclosing the tuple in a list works:
```
df.plot(x='b', y='a', color=[(1, 0, 0)])
```
This looks related to #16233 which fixes #3486 but breaks what I perceive to be a fairly standard way of assigning line color (e.g. it is the first method listed for specifying color on https://matplotlib.org/users/colors.html).
@kjford Thanks for the report! That is indeed a regression, and should be fixed.
Want to do a PR to fix?
| 2017-06-15T06:33:34Z | [] | [] |
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 1536, in __call__
return self.func(*args)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/backends/backend_tkagg.py", line 280, in resize
self.show()
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/backends/backend_tkagg.py", line 351, in draw
FigureCanvasAgg.draw(self)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/backends/backend_agg.py", line 464, in draw
self.figure.draw(self.renderer)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/artist.py", line 63, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/figure.py", line 1144, in draw
renderer, self, dsu, self.suppressComposite)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/image.py", line 139, in _draw_list_compositing_images
a.draw(renderer)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/artist.py", line 63, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/axes/_base.py", line 2426, in draw
mimage._draw_list_compositing_images(renderer, self, dsu)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/image.py", line 139, in _draw_list_compositing_images
a.draw(renderer)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/artist.py", line 63, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/lines.py", line 803, in draw
ln_color_rgba = self._get_rgba_ln_color()
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/lines.py", line 1344, in _get_rgba_ln_color
return mcolors.to_rgba(self._color, self._alpha)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/colors.py", line 143, in to_rgba
rgba = _to_rgba_no_colorcycle(c, alpha)
File "/venvs/pandas0.20.2/lib/python2.7/site-packages/matplotlib/colors.py", line 194, in _to_rgba_no_colorcycle
raise ValueError("Invalid RGBA argument: {!r}".format(orig_c))
ValueError: Invalid RGBA argument: 1
| 11,258 |
|||
pandas-dev/pandas | pandas-dev__pandas-16744 | 8a98f5ed541c87a9bf101c9331bd6cfa8f007cc9 | Different behaviour on two different environments. TypeError: data type "datetime" not understood
#### I have an aggregation logic
```python
In [19]: datewise_prices
Out[19]:
[{'arrivalDate': 1490227200000,
'maxPrice': 3300,
'minPrice': 3300,
'modalPrice': 3300},
{'arrivalDate': 1490400000000,
'maxPrice': 3300,
'minPrice': 3300,
'modalPrice': 3300},
{'arrivalDate': 1490832000000,
'maxPrice': 3500,
'minPrice': 3500,
'modalPrice': 3500},
{'arrivalDate': 1490918400000,
'maxPrice': 3300,
'minPrice': 3300,
'modalPrice': 3300},
{'arrivalDate': 1491091200000,
'maxPrice': 3300,
'minPrice': 3300,
'modalPrice': 3300}]
In [20]: weekly_dataframe = pandas.DataFrame(datewise_prices)
...: weekly_dataframe.drop('minPrice', axis=1, inplace=True)
...: weekly_dataframe.drop('maxPrice', axis=1, inplace=True)
...: weekly_dataframe['arrivalDate'] = pandas.to_datetime(weekly_dataframe['arrivalDate'], unit='ms')
...: weekly_dataframe = weekly_dataframe.resample('W', on='arrivalDate')['modalPrice'].mean().dropna().reset_index()
 ...: weekly_dataframe['label'] = (weekly_dataframe['arrivalDate'] - pandas.offsets.DateOffset(days=6)).dt.strftime('%d %b') + ' to ' + weekly_dataframe['arrivalDate'].dt.strftime('%d %b')
```
#### It results in an error on my Staging environment but works fine on my local.
The staging environment gives an error in `(weekly_dataframe['arrivalDate'] - pandas.offsets.DateOffset(days=6)).dt.strftime('%d %b')`
```
Traceback (most recent call last):
File "/root/myapp/myapp/handlers/aggregation_handler.py", line 290, in get_daily_weekly_and_monthly_aggregates
weekly_dataframe['label'] = (weekly_dataframe['arrivalDate'] - pandas.offsets.DateOffset(days=6)).dt.strftime('%d %b') + ' to ' + weekly_dataframe['arrivalDate'].dt.strftime('%d %b')
File "/usr/local/lib/python2.7/dist-packages/pandas/core/ops.py", line 696, in wrapper
converted = _Op.get_op(left, right, name, na_op)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/ops.py", line 330, in get_op
return _TimeOp(left, right, name, na_op)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/ops.py", line 343, in __init__
lvalues = self._convert_to_array(left, name=name)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/ops.py", line 454, in _convert_to_array
if (inferred_type in ('datetime64', 'datetime', 'date', 'time') or
TypeError: data type "datetime" not understood
```
but it works fine on my local.
#### Output of ``pd.show_versions()`` on local
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.6.final.0
python-bits: 64
OS: Linux
OS-release: 4.4.0-78-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_IN
LOCALE: None.None
pandas: 0.19.2
nose: None
pip: 9.0.1
setuptools: 36.0.1
Cython: None
numpy: 1.12.1
scipy: None
statsmodels: None
xarray: None
IPython: 5.4.1
sphinx: None
patsy: None
dateutil: 2.5.3
pytz: 2016.7
blosc: None
bottleneck: None
tables: None
numexpr: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: 0.9.6
lxml: None
bs4: None
html5lib: 0.999
httplib2: 0.8
apiclient: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.8
boto: 2.40.0
pandas_datareader: None
</details>
#### Output of ``pd.show_versions()`` on Staging
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.6.final.0
python-bits: 64
OS: Linux
OS-release: 3.13.0-57-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: None.None
pandas: 0.19.2
nose: None
pip: 1.5.4
setuptools: 3.3
Cython: None
numpy: 1.13.0
scipy: None
statsmodels: None
xarray: None
IPython: 4.0.0
sphinx: None
patsy: None
dateutil: 2.5.3
pytz: 2017.2
blosc: None
bottleneck: None
tables: None
numexpr: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: 0.9.6
lxml: None
bs4: None
html5lib: 0.999
httplib2: None
apiclient: None
sqlalchemy: 1.0.8
pymysql: None
psycopg2: None
jinja2: 2.9.6
boto: 2.40.0
pandas_datareader: None
</details>
Please help me out.
| Can you make a copy-pastable example, as requested?
```
import pandas as pd
datewise_prices = [
{'arrivalDate': 1490227200000,
'maxPrice': 3300,
'minPrice': 3300,
'modalPrice': 3300},
{'arrivalDate': 1490400000000,
'maxPrice': 3300,
'minPrice': 3300,
'modalPrice': 3300},
{'arrivalDate': 1490832000000,
'maxPrice': 3500,
'minPrice': 3500,
'modalPrice': 3500},
{'arrivalDate': 1490918400000,
'maxPrice': 3300,
'minPrice': 3300,
'modalPrice': 3300},
{'arrivalDate': 1491091200000,
'maxPrice': 3300,
'minPrice': 3300,
'modalPrice': 3300}
]
weekly_dataframe = pd.DataFrame(datewise_prices)
weekly_dataframe.drop('minPrice', axis=1, inplace=True)
weekly_dataframe.drop('maxPrice', axis=1, inplace=True)
weekly_dataframe['arrivalDate'] = pd.to_datetime(weekly_dataframe['arrivalDate'], unit='ms')
weekly_dataframe = weekly_dataframe.resample('W', on='arrivalDate')['modalPrice'].mean().dropna().reset_index()
weekly_dataframe['label'] = (weekly_dataframe['arrivalDate'] - pd.offsets.DateOffset(days=6)).dt.strftime('%d %b') + ' to ' + weekly_dataframe['arrivalDate'].dt.strftime('%d %b')
```
Transferred from @hussaintamboli 's input. It will help if you provide more information, for example, the full trace when it error.
@BranYang and @TomAugspurger, I have added the stacktrace.
Just figured it out myself.
It's because of numpy==1.13.0. For numpy==1.12.1, it's working fine.
this looks fine. what exactly is the issue?
On staging environment with the dependencies mentioned in `Output of pd.show_versions() on Staging`, I get `TypeError: data type "datetime" not understood`.
But on local environment with dependencies mentioned in `Output of pd.show_versions() on local`, I get desired output.
your example does not repro. I would guess that you have strings (and not datetimes) in one of your fields
Reproduction steps.
This works
```
$ pip freeze | grep numpy
numpy==1.12.1
import pandas as pd
datewise_prices = [
{'arrivalDate': 1490227200000,
'maxPrice': 3300,
'minPrice': 3300,
'modalPrice': 3300},
{'arrivalDate': 1490400000000,
'maxPrice': 3300,
'minPrice': 3300,
'modalPrice': 3300},
{'arrivalDate': 1490832000000,
'maxPrice': 3500,
'minPrice': 3500,
'modalPrice': 3500},
{'arrivalDate': 1490918400000,
'maxPrice': 3300,
'minPrice': 3300,
'modalPrice': 3300},
{'arrivalDate': 1491091200000,
'maxPrice': 3300,
'minPrice': 3300,
'modalPrice': 3300}
]
weekly_dataframe = pd.DataFrame(datewise_prices)
weekly_dataframe.drop('minPrice', axis=1, inplace=True)
weekly_dataframe.drop('maxPrice', axis=1, inplace=True)
weekly_dataframe['arrivalDate'] = pd.to_datetime(weekly_dataframe['arrivalDate'], unit='ms')
weekly_dataframe = weekly_dataframe.resample('W', on='arrivalDate')['modalPrice'].mean().dropna().reset_index()
weekly_dataframe['label'] = (weekly_dataframe['arrivalDate'] - pd.offsets.DateOffset(days=6)).dt.strftime('%d %b') + ' to ' + weekly_dataframe['arrivalDate'].dt.strftime('%d %b')
weekly_dataframe.drop('arrivalDate', axis=1, inplace=True)
print weekly_dataframe.to_dict(orient='records')
[{'label': u'20 Mar to 26 Mar', 'modalPrice': 3300.0},
{'label': u'27 Mar to 02 Apr', 'modalPrice': 3366.6666666666665}]
```
On the staging environment where it gives error
```
pip install numpy==1.13.0
weekly_dataframe = pd.DataFrame(datewise_prices)
weekly_dataframe.drop('minPrice', axis=1, inplace=True)
weekly_dataframe.drop('maxPrice', axis=1, inplace=True)
weekly_dataframe['arrivalDate'] = pd.to_datetime(weekly_dataframe['arrivalDate'], unit='ms')
weekly_dataframe = weekly_dataframe.resample('W', on='arrivalDate')['modalPrice'].mean().dropna().reset_index()
weekly_dataframe['label'] = (weekly_dataframe['arrivalDate'] - pd.offsets.DateOffset(days=6)).dt.strftime('%d %b') + ' to ' + weekly_dataframe['arrivalDate'].dt.strftime('%d %b')
TypeError: data type "datetime" not understood
```
this works on master for both numpies so not sure what your issue is
I can reproduce it on 0.19.2 / 1.13.0. Small reproducible example:
```
pd.Series(pd.date_range("2012-01-01", periods=3)) - pd.offsets.DateOffset(days=6)
```
and the error actually comes from
```
np.dtype('M8[ns]') in ('datetime', 'datetime64')
```
But on master / 0.20.2, this does not raise anymore, so I suppose we have already fixed this (this line of code is also removed).
Nonetheless, always welcome to do a PR to add a test for this specific case (the small reproducible example I showed above), to make sure it keeps on working.
yeah we had some refactorings of this in 0.20.0. ok test PR it is.
@hussaintamboli Would you be interested to add such a test?
Sure. I'll do it. | 2017-06-21T11:24:41Z | [] | [] |
Traceback (most recent call last):
File "/root/myapp/myapp/handlers/aggregation_handler.py", line 290, in get_daily_weekly_and_monthly_aggregates
weekly_dataframe['label'] = (weekly_dataframe['arrivalDate'] - pandas.offsets.DateOffset(days=6)).dt.strftime('%d %b') + ' to ' + weekly_dataframe['arrivalDate'].dt.strftime('%d %b')
File "/usr/local/lib/python2.7/dist-packages/pandas/core/ops.py", line 696, in wrapper
converted = _Op.get_op(left, right, name, na_op)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/ops.py", line 330, in get_op
return _TimeOp(left, right, name, na_op)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/ops.py", line 343, in __init__
lvalues = self._convert_to_array(left, name=name)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/ops.py", line 454, in _convert_to_array
if (inferred_type in ('datetime64', 'datetime', 'date', 'time') or
TypeError: data type "datetime" not understood
| 11,263 |
||||
pandas-dev/pandas | pandas-dev__pandas-16926 | a587d568d213c62307a72d98d6913239f55844e8 | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -162,6 +162,7 @@ I/O
- Bug in :func:`read_csv` in which non integer values for the header argument generated an unhelpful / unrelated error message (:issue:`16338`)
+- Bug in :func:`read_stata` where value labels could not be read when using an iterator (:issue:`16923`)
Plotting
^^^^^^^^
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -997,6 +997,7 @@ def __init__(self, path_or_buf, convert_dates=True,
self.path_or_buf = BytesIO(contents)
self._read_header()
+ self._setup_dtype()
def __enter__(self):
""" enter context manager """
@@ -1299,6 +1300,23 @@ def _read_old_header(self, first_char):
# necessary data to continue parsing
self.data_location = self.path_or_buf.tell()
+ def _setup_dtype(self):
+ """Map between numpy and state dtypes"""
+ if self._dtype is not None:
+ return self._dtype
+
+ dtype = [] # Convert struct data types to numpy data type
+ for i, typ in enumerate(self.typlist):
+ if typ in self.NUMPY_TYPE_MAP:
+ dtype.append(('s' + str(i), self.byteorder +
+ self.NUMPY_TYPE_MAP[typ]))
+ else:
+ dtype.append(('s' + str(i), 'S' + str(typ)))
+ dtype = np.dtype(dtype)
+ self._dtype = dtype
+
+ return self._dtype
+
def _calcsize(self, fmt):
return (type(fmt) is int and fmt or
struct.calcsize(self.byteorder + fmt))
@@ -1472,22 +1490,10 @@ def read(self, nrows=None, convert_dates=None,
if nrows is None:
nrows = self.nobs
- if (self.format_version >= 117) and (self._dtype is None):
+ if (self.format_version >= 117) and (not self._value_labels_read):
self._can_read_value_labels = True
self._read_strls()
- # Setup the dtype.
- if self._dtype is None:
- dtype = [] # Convert struct data types to numpy data type
- for i, typ in enumerate(self.typlist):
- if typ in self.NUMPY_TYPE_MAP:
- dtype.append(('s' + str(i), self.byteorder +
- self.NUMPY_TYPE_MAP[typ]))
- else:
- dtype.append(('s' + str(i), 'S' + str(typ)))
- dtype = np.dtype(dtype)
- self._dtype = dtype
-
# Read data
dtype = self._dtype
max_read_len = (self.nobs - self._lines_read) * dtype.itemsize
@@ -1958,7 +1964,6 @@ def _prepare_categoricals(self, data):
return data
get_base_missing_value = StataMissingValue.get_base_missing_value
- index = data.index
data_formatted = []
for col, col_is_cat in zip(data, is_cat):
if col_is_cat:
@@ -1981,8 +1986,7 @@ def _prepare_categoricals(self, data):
# Replace missing values with Stata missing value for type
values[values == -1] = get_base_missing_value(dtype)
- data_formatted.append((col, values, index))
-
+ data_formatted.append((col, values))
else:
data_formatted.append((col, data[col]))
return DataFrame.from_items(data_formatted)
| Unable to read Stata value_labels from .dta-file created by pandas (to_stata())
#### Code Sample, a copy-pastable example if possible
```python
import pandas as pd
d = {'A':['B','E','C','A','E']}
df = pd.DataFrame(data=d)
df['A'] = df['A'].astype('category') # Setting as categorical, similar to value_label in Stata
df.to_stata('test.dta') # Writing dataframe to Stata-file
dfs_fromstata = pd.read_stata('test.dta', iterator=True) # Creating StataReader-object
print(dfs_fromstata.value_labels()) # Printing value_labels
```
```python
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
print(dfs_fromstata.value_labels()) # Printing value_labels
File "C:\Anaconda3\lib\site-packages\pandas\io\stata.py", line 1725, in value_labels
self._read_value_labels()
File "C:\Anaconda3\lib\site-packages\pandas\io\stata.py", line 1329, in _read_value_labels
offset = self.nobs * self._dtype.itemsize
AttributeError: 'NoneType' object has no attribute 'itemsize'
```
#### Problem description
It seems as if read_stata() is not able to read value_labels properly from a .dta-file created by to_stata(). If the file is created by Stata itself, the value_labels are read correctly. Also, if the .dta-file created by to_stata() is opened and saved by Stata, the value_labels are read correctly.
#### Expected Output
{'A': {0: 'A', 1: 'B', 2: 'C', 3: 'E'}}
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.0.final.0
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 78 Stepping 3, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
pandas: 0.20.2
pytest: 3.0.5
pip: 9.0.1
setuptools: 27.2.0
Cython: 0.25.2
numpy: 1.12.1
scipy: 0.19.1
xarray: None
IPython: 5.1.0
sphinx: 1.5.1
patsy: 0.4.1
dateutil: 2.6.0
pytz: 2016.10
blosc: None
bottleneck: 1.2.1
tables: 3.2.2
numexpr: 2.6.2
feather: None
matplotlib: 2.0.2
openpyxl: 2.4.1
xlrd: 1.0.0
xlwt: 1.2.0
xlsxwriter: 0.9.6
lxml: 3.7.2
bs4: 4.5.3
html5lib: None
sqlalchemy: 1.1.5
pymysql: None
psycopg2: None
jinja2: 2.9.4
s3fs: None
pandas_gbq: None
pandas_datareader: None
</details>
| cc @bashtage
I don't think this is supported.
It works fine. You need to read once in order to initialize some values. This appears to be an omission in the iterator implementation. Should probably raise with a usable error in the scenario.
```
import pandas as pd
d = {'A':['B','E','C','A','E']}
df = pd.DataFrame(data=d)
df['A'] = df['A'].astype('category') # Setting as categorical, similar to value_label in Stata
df.to_stata('test.dta') # Writing dataframe to Stata-file
dfs_fromstata = pd.read_stata('test.dta', iterator=True) # Creating StataReader-object
dfs_fromstata.read()
print(dfs_fromstata.value_labels())
```
pull-requests welcome!
Should pull this block
https://github.com/pandas-dev/pandas/blob/master/pandas/io/stata.py#L1480
out to a stand alone function and then call it once the header has been read. | 2017-07-14T17:22:36Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
print(dfs_fromstata.value_labels()) # Printing value_labels
File "C:\Anaconda3\lib\site-packages\pandas\io\stata.py", line 1725, in value_labels
self._read_value_labels()
File "C:\Anaconda3\lib\site-packages\pandas\io\stata.py", line 1329, in _read_value_labels
offset = self.nobs * self._dtype.itemsize
AttributeError: 'NoneType' object has no attribute 'itemsize'
| 11,285 |
|||
pandas-dev/pandas | pandas-dev__pandas-16930 | 4efe6560e07f28de6a1834fa90e31cef31b0fb18 | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -164,6 +164,8 @@ I/O
- Bug in :func:`read_stata` where value labels could not be read when using an iterator (:issue:`16923`)
+- Bug in :func:`read_html` where import check fails when run in multiple threads (:issue:`16928`)
+
Plotting
^^^^^^^^
- Bug in plotting methods using ``secondary_y`` and ``fontsize`` not setting secondary axis font size (:issue:`12565`)
diff --git a/pandas/io/html.py b/pandas/io/html.py
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -37,8 +37,6 @@ def _importers():
if _IMPORTS:
return
- _IMPORTS = True
-
global _HAS_BS4, _HAS_LXML, _HAS_HTML5LIB
try:
@@ -59,6 +57,8 @@ def _importers():
except ImportError:
pass
+ _IMPORTS = True
+
#############
# READ HTML #
| read_html() Thread Safety
#### Code Sample
```python
#!/usr/bin/python3
import pandas
import threading
def fetch_file():
url = "https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html"
pandas.read_html(url)
thread1 = threading.Thread(target = fetch_file)
thread2 = threading.Thread(target = fetch_file)
thread1.start()
thread2.start()
```
### Output
```
Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "./pandas_bug.py", line 7, in fetch_file
pandas.read_html(url)
File "/usr/lib/python3.6/site-packages/pandas/io/html.py", line 904, in read_html
keep_default_na=keep_default_na)
File "/usr/lib/python3.6/site-packages/pandas/io/html.py", line 731, in _parse
parser = _parser_dispatch(flav)
File "/usr/lib/python3.6/site-packages/pandas/io/html.py", line 691, in _parser_dispatch
raise ImportError("lxml not found, please install it")
ImportError: lxml not found, please install it
```
#### Problem description
read_html() doesn't appear to be multi-threading safe. This specific issue seems to be caused by setting `_IMPORTS` in html.py to True too early resulting in the second thread entering `_parser_dispatch` and throwing an exception while the first thread hasn't finished the check.
I have written a potential fix and will open a PR shortly.
#### Expected Output
No exception should be thrown since lxml is installed and the program works fine without multi-threading.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------------
commit: None
python: 3.6.1.final.0
python-bits: 64
OS: Linux
OS-release: 4.11.3-1-ARCH
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
pandas: 0.20.1
pytest: None
pip: 9.0.1
setuptools: 36.0.1
Cython: None
numpy: 1.12.1
scipy: 0.19.0
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.0
pytz: 2017.2
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 2.0.2
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: 4.6.0
html5lib: 0.999999999
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
pandas_gbq: None
pandas_datareader: None
</details>
| 2017-07-14T19:25:11Z | [] | [] |
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "./pandas_bug.py", line 7, in fetch_file
pandas.read_html(url)
File "/usr/lib/python3.6/site-packages/pandas/io/html.py", line 904, in read_html
keep_default_na=keep_default_na)
File "/usr/lib/python3.6/site-packages/pandas/io/html.py", line 731, in _parse
parser = _parser_dispatch(flav)
File "/usr/lib/python3.6/site-packages/pandas/io/html.py", line 691, in _parser_dispatch
raise ImportError("lxml not found, please install it")
ImportError: lxml not found, please install it
| 11,286 |
||||
pandas-dev/pandas | pandas-dev__pandas-17169 | 9b07ef4a5b656a1532512c270533053ee338e30d | diff --git a/pandas/core/internals.py b/pandas/core/internals.py
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -323,10 +323,6 @@ def reindex_axis(self, indexer, method=None, axis=1, fill_value=None,
fill_value=fill_value, mask_info=mask_info)
return self.make_block(new_values, fastpath=True)
- def get(self, item):
- loc = self.items.get_loc(item)
- return self.values[loc]
-
def iget(self, i):
return self.values[i]
@@ -1658,13 +1654,6 @@ def set(self, locs, values, check=False):
assert locs.tolist() == [0]
self.values = values
- def get(self, item):
- if self.ndim == 1:
- loc = self.items.get_loc(item)
- return self.values[loc]
- else:
- return self.values
-
def putmask(self, mask, new, align=True, inplace=False, axis=0,
transpose=False, mgr=None):
"""
@@ -4722,8 +4711,6 @@ def _concat_indexes(indexes):
def _block2d_to_blocknd(values, placement, shape, labels, ref_items):
""" pivot to the labels shape """
- from pandas.core.internals import make_block
-
panel_shape = (len(placement),) + shape
# TODO: lexsort depth needs to be 2!!
| AttributeError Block.items
https://github.com/pandas-dev/pandas/blob/master/pandas/core/internals.py#L326
`core.internals.Block` references `self.items`. AFAICT `items` is an attribute of `BlockManager`, does not exist in `Block`.
```
ser = pd.Series(range(5))
mgr = ser._data
block = mgr.blocks[0]
>>> block.get(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/pandas/core/internals.py", line 329, in get
loc = self.items.get_loc(item)
AttributeError: 'IntBlock' object has no attribute 'items'
```
| I think this is dead code. We *always* use positional indexing on a block (e.g. ``.iget``). not sure why the linter doesn't find this. you can do a PR to remove.
Code coverage also indicates this is never used (at least in our tests) | 2017-08-04T07:13:48Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/pandas/core/internals.py", line 329, in get
loc = self.items.get_loc(item)
AttributeError: 'IntBlock' object has no attribute 'items'
| 11,323 |
|||
pandas-dev/pandas | pandas-dev__pandas-17194 | 3c833db29b6f5977c78d1ade791a09a5b29cedb8 | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -278,6 +278,7 @@ Indexing
- Fixes bug where indexing with ``np.inf`` caused an ``OverflowError`` to be raised (:issue:`16957`)
- Bug in reindexing on an empty ``CategoricalIndex`` (:issue:`16770`)
- Fixes ``DataFrame.loc`` for setting with alignment and tz-aware ``DatetimeIndex`` (:issue:`16889`)
+- Avoids ``IndexError`` when passing an Index or Series to ``.iloc`` with older numpy (:issue:`17193`)
I/O
^^^
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -861,6 +861,9 @@ def _is_empty_indexer(indexer):
# set
else:
+ if _np_version_under1p9:
+ # Work around GH 6168 to support old numpy
+ indexer = getattr(indexer, 'values', indexer)
values[indexer] = value
# coerce and try to infer the dtypes of the result
| passing Index or Series to iloc fails with old numpy
#### Code Sample, a copy-pastable example if possible
```python
>>> s = pd.Series([1,2])
>>> s.iloc[pd.Series([0])] = 2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/core/indexing.py", line 198, in __setitem__
self._setitem_with_indexer(indexer, value)
File "pandas/core/indexing.py", line 619, in _setitem_with_indexer
value=value)
File "pandas/core/internals.py", line 3313, in setitem
return self.apply('setitem', **kwargs)
File "pandas/core/internals.py", line 3201, in apply
applied = getattr(b, f)(**kwargs)
File "pandas/core/internals.py", line 864, in setitem
values[indexer] = value
IndexError: unsupported iterator index
>>> np.version.version
'1.7.0'
```
#### Problem description
This is a consequence of #6168 , which is fixed in more recent numpy versions (but 1.7.0 is still a supported version).
Not 100% sure this is worth fixing (i.e. how interested we are in keeping compatibility with numpy 1.8.0), but the fix is trivial, so I'll just push a PR and let you judge.
#### Expected Output
Just no error.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.3.final.0
python-bits: 64
OS: Linux
OS-release: 4.9.0-3-amd64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: it_IT.UTF-8
LOCALE: None.None
pandas: 0+unknown
pytest: 3.2.0
pip: None
setuptools: 36.2.7
Cython: None
numpy: 1.7.0
scipy: None
pyarrow: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 1.5
pytz: 2012c
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
>>>
</details>
| 2017-08-07T22:07:19Z | [] | [] |
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pandas/core/indexing.py", line 198, in __setitem__
self._setitem_with_indexer(indexer, value)
File "pandas/core/indexing.py", line 619, in _setitem_with_indexer
value=value)
File "pandas/core/internals.py", line 3313, in setitem
return self.apply('setitem', **kwargs)
File "pandas/core/internals.py", line 3201, in apply
applied = getattr(b, f)(**kwargs)
File "pandas/core/internals.py", line 864, in setitem
values[indexer] = value
IndexError: unsupported iterator index
| 11,325 |
||||
pandas-dev/pandas | pandas-dev__pandas-17201 | 674fb96b33c07c680844f674fcdf0767b6e3c2f9 | diff --git a/doc/source/whatsnew/v0.21.1.txt b/doc/source/whatsnew/v0.21.1.txt
--- a/doc/source/whatsnew/v0.21.1.txt
+++ b/doc/source/whatsnew/v0.21.1.txt
@@ -88,7 +88,7 @@ I/O
- :func:`read_parquet` now allows to specify kwargs which are passed to the respective engine (:issue:`18216`)
- Bug in parsing integer datetime-like columns with specified format in ``read_sql`` (:issue:`17855`).
- Bug in :meth:`DataFrame.to_msgpack` when serializing data of the numpy.bool_ datatype (:issue:`18390`)
-
+- Bug in :func:`read_json` not decoding when reading line deliminted JSON from S3 (:issue:`17200`)
Plotting
^^^^^^^^
diff --git a/pandas/io/json/json.py b/pandas/io/json/json.py
--- a/pandas/io/json/json.py
+++ b/pandas/io/json/json.py
@@ -5,7 +5,7 @@
import pandas._libs.json as json
from pandas._libs.tslib import iNaT
-from pandas.compat import StringIO, long, u
+from pandas.compat import StringIO, long, u, to_str
from pandas import compat, isna
from pandas import Series, DataFrame, to_datetime, MultiIndex
from pandas.io.common import (get_filepath_or_buffer, _get_handle,
@@ -458,8 +458,10 @@ def read(self):
if self.lines and self.chunksize:
obj = concat(self)
elif self.lines:
+
+ data = to_str(self.data)
obj = self._get_object_parser(
- self._combine_lines(self.data.split('\n'))
+ self._combine_lines(data.split('\n'))
)
else:
obj = self._get_object_parser(self.data)
@@ -612,7 +614,7 @@ def _try_convert_data(self, name, data, use_dtypes=True,
try:
dtype = np.dtype(dtype)
return data.astype(dtype), True
- except:
+ except (TypeError, ValueError):
return data, False
if convert_dates:
@@ -628,7 +630,7 @@ def _try_convert_data(self, name, data, use_dtypes=True,
try:
data = data.astype('float64')
result = True
- except:
+ except (TypeError, ValueError):
pass
if data.dtype.kind == 'f':
@@ -639,7 +641,7 @@ def _try_convert_data(self, name, data, use_dtypes=True,
try:
data = data.astype('float64')
result = True
- except:
+ except (TypeError, ValueError):
pass
# do't coerce 0-len data
@@ -651,7 +653,7 @@ def _try_convert_data(self, name, data, use_dtypes=True,
if (new_data == data).all():
data = new_data
result = True
- except:
+ except (TypeError, ValueError):
pass
# coerce ints to 64
@@ -661,7 +663,7 @@ def _try_convert_data(self, name, data, use_dtypes=True,
try:
data = data.astype('int64')
result = True
- except:
+ except (TypeError, ValueError):
pass
return data, result
@@ -680,7 +682,7 @@ def _try_convert_to_date(self, data):
if new_data.dtype == 'object':
try:
new_data = data.astype('int64')
- except:
+ except (TypeError, ValueError):
pass
# ignore numbers that are out of range
@@ -697,7 +699,7 @@ def _try_convert_to_date(self, data):
unit=date_unit)
except ValueError:
continue
- except:
+ except Exception:
break
return new_data, True
return data, False
| read_json(lines=True) broken for s3 urls in Python 3 (v0.20.3)
#### Code Sample, a copy-pastable example if possible
Using Python
```python
import pandas as pd
inputdf = pd.read_json(path_or_buf="s3://path/to/python-lines/file.json", lines=True)
```
The file is similar to:
```
{"url": "blah", "other": "blah"}
{"url": "blah", "other": "blah"}
{"url": "blah", "other": "blah"}
```
#### Problem description
When attempting to read a python lines file into a DataFrame using the s3 protocol, the above code will error with:
```
2017-08-08 11:06:14,225 - image_rank_csv - ERROR - initial_value must be str or None, not bytes
Traceback (most recent call last):
File "image_rank_csv.py", line 62, in run
inputdf = pd.read_json(path_or_buf="s3://path/to/python-lines/file.json", lines=True)
File "...env/lib/python3.6/site-packages/pandas/io/json/json.py", line 347, in read_json
lines = list(StringIO(json.strip()))
TypeError: initial_value must be str or None, not bytes
```
This works fine if the file is local, e.g.:
```python
import pandas as pd
inputdf = pd.read_json(path_or_buf="/local/path/to/python-lines/file.json", lines=True)
```
#### Expected Output
Expect to successfully read the file and error above not to occur.
My current thinking is that when we get the file handle: https://github.com/pandas-dev/pandas/blob/v0.20.3/pandas/io/json/json.py#L333 , you delegate to `s3fs`, which documents that [it only operates in Binary mode](http://s3fs.readthedocs.io/en/latest/#limitations). Therefore when you `read()`: https://github.com/pandas-dev/pandas/blob/v0.20.3/pandas/io/json/json.py#L335 , the result is bytes, and passing it to `StringIO` will fail here: https://github.com/pandas-dev/pandas/blob/v0.20.3/pandas/io/json/json.py#L347 . Maybe it needs a different handler for `BytesIO`?
#### Output of ``pd.show_versions()``
<details>
[paste the output of ``pd.show_versions()`` here below this line]
```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.1.final.0
python-bits: 64
OS: Darwin
OS-release: 16.6.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.20.3
pytest: None
pip: 9.0.1
setuptools: 36.2.7
Cython: None
numpy: 1.12.0
scipy: 0.19.1
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.0
pytz: 2017.2
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: 2.6.2 (dt dec pq3 ext lo64)
jinja2: None
s3fs: 0.1.2
pandas_gbq: None
pandas_datareader: None
```
</details>
| Well we do have a `BytesIO` class in `pandas.compat`. If we can condition on the data returned to us, that would be reasonable I think.
@gfyoung I'm not intimately familiar with the codebase, but I have a (possibly naive) fix that just attempts to decode the json based on whatever `encoding` is. It seems to fix. Would you like to review if do a PR?
> I'm not intimately familiar with the codebase
Don't worry, that's a pretty tall order :smile:
Absolutely! Submit a PR, and we'll certainly review it.
FYI, here's how we handle it on the CSV side: https://github.com/pandas-dev/pandas/blob/3c833db29b6f5977c78d1ade791a09a5b29cedb8/pandas/io/common.py#L401
that `f` would be an instance of `S3File`, which is one of the classes in `need_text_wrapping`. By the `TextIOWrapper` can go around buffer of bytes and it'll do the encoding. You might be able to reuse parts of that for `read_json`, or just do something similar. | 2017-08-08T18:39:45Z | [] | [] |
Traceback (most recent call last):
File "image_rank_csv.py", line 62, in run
inputdf = pd.read_json(path_or_buf="s3://path/to/python-lines/file.json", lines=True)
File "...env/lib/python3.6/site-packages/pandas/io/json/json.py", line 347, in read_json
lines = list(StringIO(json.strip()))
TypeError: initial_value must be str or None, not bytes
| 11,326 |