Q:
How do I (successfully) decode an encoded password from the command line with OpenSSL?
Using PyCrypto (although I've tried this in ObjC with OpenSSL bindings as well) :
from Crypto.Cipher import DES
import base64
obj=DES.new('abcdefgh', DES.MODE_ECB)
plain="Guido van Rossum is a space alien.XXXXXX"
ciph=obj.encrypt(plain)
enc=base64.b64encode(ciph)
#print ciph
print enc
outputs a base64 encoded value of :
ESzjTnGMRFnfVOJwQfqtyXOI8yzAatioyufiSdE1dx02McNkZ2IvBg==
If you were in the interpreter, ciph will give you
'\x11,\xe3Nq\x8cDY\xdfT\xe2pA\xfa\xad\xc9s\x88\xf3,\xc0j\xd8\xa8\xca\xe7\xe2I\xd15w\x1d61\xc3dgb/\x06'
Easy enough. I should be able to pipe this output to OpenSSL and decode it :
I test to make sure that the b64 decode works -
python enctest.py | openssl enc -base64 -d
+ python enctest.py
+ openssl enc -base64 -d
,?Nq?DY?T?pA???s??,?jب???I?5w61?dgb/
Not pretty, but you can see that it got decoded fine, "dgb" and "Nq" are still there.
But go for the full thing :
python enctest.py | openssl enc -base64 -d | openssl enc -nosalt -des-ecb -d -pass pass:abcdefgh
+ python enctest.py
+ openssl enc -nosalt -des-ecb -d -pass pass:abcdefgh
+ openssl enc -base64 -d
bad decrypt
15621:error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt:evp_enc.c:461:
j?7????vc]???LE?m³??q?
What am I doing wrong? I've tried using -k abcdefgh -iv 0000000000000000 or typing in the password interactively - same problem.
A:
echo ESzjTnGMRFnfVOJwQfqtyXOI8yzAatioyufiSdE1dx02McNkZ2IvBg== | openssl enc -nopad -a -des-ecb -K 6162636465666768 -iv 0 -p -d
6162636465666768 is the ASCII "abcdefgh" written out in hexadecimal.
But note that DES in ECB mode is probably not a good way to encode passwords and also is not the "DES crypt" you may have heard of being used on Unix systems.
(For passwords, it is usually better to use a hard-to-reverse algorithm, checking the password by regenerating the result instead of decrypting the stored password. Even if you do need to be able to decrypt these encrypted passwords, single-DES and especially ECB are poor choices as far as confidentiality is concerned.)
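As a small aside (my addition, not part of the original answer), the hex form of the key used above can be produced in Python:
key = 'abcdefgh'
print ''.join('%02x' % ord(c) for c in key) # -> 6162636465666768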
Q:
How do I timestamp simultaneous function calls in Python?
I have a read function in a module.
If I perform that function simultaneously I need to timestamp it.
How do I do this?
A:
I'll offer a slightly different approach:
import time
def timestampit(func):
    def decorate(*args, **kwargs):
        decorate.timestamp = time.time()
        return func(*args, **kwargs)
    return decorate

@timestampit
def hello():
    print 'hello'

hello()
print hello.timestamp

time.sleep(1)

hello()
print hello.timestamp
The differences from Swaroop's example are:
I'm using time.time() and not datetime.now() for a timestamp, because it's more suitable for performance testing
I'm attaching the timestamp as an attribute of the decorated function. This way you may invoke and keep it whenever you want.
A:
#!/usr/bin/env python
import datetime
def timestampit(func):
    def decorate(*args, **kwargs):
        print datetime.datetime.now()
        return func(*args, **kwargs)
    return decorate

@timestampit
def hello():
    print 'hello'

hello()
# Output:
# $ python test.py
# 2009-01-09 11:50:48.704584
# hello
A:
If you're interested, there's a wealth of information about decorators on the python wiki.
A:
Some example code by Tarek Ziadé (a more polished version, which uses the timeit module, can be found in his recent book Expert Python Programming).
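As an aside (a minimal sketch of my own, not the book's code), the stdlib timeit module mentioned above can be used like this, assuming the hello() from the earlier answers is defined in __main__:
import timeit
# time 1000 calls of hello() and print the total in seconds
print timeit.timeit('hello()', setup='from __main__ import hello', number=1000)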
Q:
Does anyone know of a Python equivalent of FMPP?
Does anyone know of a Python equivalent for FMPP the text file preprocessor?
Follow up: I am reading the docs and looking at the examples for the suggestions given. Just to expand. My usage of FMPP is to read in a data file (csv) and use multiple templates depending on that data to create multi page reports in html all linked to a main index.
A:
Let me add Mako: a fine, fast tool (and it even uses ${var} syntax).
Note: Mako, Jinja and Cheetah are textual languages (they process and generate text). I'd order them Mako > Jinja > Cheetah (in terms of features and readability), but people's preferences vary.
Kid and its successor Genshi are HTML/XML-aware attribute languages (<div py:if="variable"> ... </div> etc.). That's a completely different methodology - and tools suitable for HTML or XML only.
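For illustration, a minimal Mako sketch (my addition, assuming the mako package is installed):
from mako.template import Template
# Mako's ${var} substitution syntax, as noted above
tmpl = Template("Hello, ${name}!")
print tmpl.render(name="world")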
A:
Python has lots of templating engines. It depends on your exact needs.
Jinja2 is a good one, for example. Kid is another.
A:
You could give Cheetah a try. I've used it before with some success.
A:
I'm not sure exactly what FMPP does, but from a quick glance it seems like a template language.
Jinja2 is an excellent template system for python.
sample:
<ul>
  {% for item in list %}
  <li> {{ item.title }} </li>
  {% endfor %}
</ul>

{% if user.is_admin() %}
  <a href="./edit">Edit this page</a>
{% endif %}
Q:
How do I search for unpublished Plone content in an IPython debug shell?
I like to use IPython's zope profile to inspect my Plone instance, but a few annoying permissions differences come up compared to inserting a breakpoint and hitting it with the admin user.
For example, I would like to iterate over the content objects in an unpublished testing folder. This query will return no results in the shell, but works from a breakpoint.
$ bin/instance shell
$ ipython --profile=zope
from Products.CMFPlone.utils import getToolByName
catalog = getToolByName(context, 'portal_catalog')
catalog({'path':'Plone/testing'})
Can I authenticate as admin or otherwise rejigger the permissions to fully manipulate my site from ipython?
A:
Here's the (very dirty) code I use to manage my Plone app from the debug shell. It may require some updates depending on your versions of Zope and Plone.
from sys import stdin, stdout, exit
import base64
from thread import get_ident
from ZPublisher.HTTPRequest import HTTPRequest
from ZPublisher.HTTPResponse import HTTPResponse
from ZPublisher.BaseRequest import RequestContainer
from ZPublisher import Publish
from AccessControl import ClassSecurityInfo, getSecurityManager
from AccessControl.SecurityManagement import newSecurityManager
from AccessControl.User import UnrestrictedUser
def loginAsUnrestrictedUser():
    """Example of use:
    old_user = loginAsUnrestrictedUser()
    # Manager stuff
    loginAsUser(old_user)
    """
    current_user = getSecurityManager().getUser()
    newSecurityManager(None, UnrestrictedUser('manager', '', ['Manager'], []))
    return current_user

def loginAsUser(user):
    newSecurityManager(None, user)

def makerequest(app, stdout=stdout, query_string=None, user_pass=None):
    """Make a request suitable for CMF sites & Plone
    - user_pass = "user:pass"
    """
    # copy from Testing.makerequest
    resp = HTTPResponse(stdout=stdout)
    env = {}
    env['SERVER_NAME'] = 'lxtools.makerequest.fr'
    env['SERVER_PORT'] = '80'
    env['REQUEST_METHOD'] = 'GET'
    env['REMOTE_HOST'] = 'a.distant.host'
    env['REMOTE_ADDR'] = '77.77.77.77'
    env['HTTP_HOST'] = '127.0.0.1'
    env['HTTP_USER_AGENT'] = 'LxToolsUserAgent/1.0'
    env['HTTP_ACCEPT'] = 'image/gif, image/x-xbitmap, image/jpeg, */* '
    if user_pass:
        env['HTTP_AUTHORIZATION'] = "Basic %s" % base64.encodestring(user_pass)
    if query_string:
        p_q = query_string.split('?')
        if len(p_q) == 1:
            env['PATH_INFO'] = p_q[0]
        elif len(p_q) == 2:
            (env['PATH_INFO'], env['QUERY_STRING']) = p_q
        else:
            raise TypeError, ''
    req = HTTPRequest(stdin, env, resp)
    req['URL1'] = req['URL']  # fix for CMFQuickInstaller
    #
    # copy/hacked from Localizer __init__ patches
    # first put the needed values in the request
    req['HTTP_ACCEPT_CHARSET'] = 'latin-9'
    #req.other['AcceptCharset'] = AcceptCharset(req['HTTP_ACCEPT_CHARSET'])
    #
    req['HTTP_ACCEPT_LANGUAGE'] = 'fr'
    #accept_language = AcceptLanguage(req['HTTP_ACCEPT_LANGUAGE'])
    #req.other['AcceptLanguage'] = accept_language
    # XXX For backwards compatibility
    #req.other['USER_PREF_LANGUAGES'] = accept_language
    #req.other['AcceptLanguage'] = accept_language
    #
    # Plone stuff
    #req['plone_skin'] = 'Plone Default'
    #
    # then store the request in Publish._requests
    # with the thread id
    id = get_ident()
    if hasattr(Publish, '_requests'):
        # we do not have _requests inside ZopeTestCase
        Publish._requests[id] = req
    # add a brainless session container
    req['SESSION'] = {}
    #
    # ok, let's wrap
    return app.__of__(RequestContainer(REQUEST=req))

def debug_init(app):
    loginAsUnrestrictedUser()
    app = makerequest(app)
    return app
This lives in a wshelpers Zope product. Once the debug shell is launched, it's just a matter of:
>> from Products.wshelpers import wsdebug
>> app = wsdebug.debug_init(app)
>> # now you're logged in as admin
A:
Just use catalog.search({'path':'Plone/testing'}). It performs the same query as catalog() but does not filter the results based on the current user's permissions.
IPython's zope profile does provide a method utils.su('username') to change the current user, but it does not recognize the admin user (defined in /acl_users instead of /Plone/acl_users) and after calling it subsequent calls to catalog() fail with AttributeError: 'module' object has no attribute 'checkPermission'.
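A condensed sketch of that suggestion (my addition; the path is the one from the question):
from Products.CMFPlone.utils import getToolByName
catalog = getToolByName(context, 'portal_catalog')
# .search() skips the permission filtering that catalog() applies
for brain in catalog.search({'path': 'Plone/testing'}):
    print brain.getPath()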
Q:
Python 3.0 and language evolution
Python 3.0 breaks backwards compatibility with previous versions and splits the language into two paths (at least temporarily). Do you know of any other language that went through such a major design phase while in maturity?
Also, do you believe that this is how programming languages should evolve or is the price to pay simply too high?
A:
The only language I can think of to attempt such a mid-stream change would be Perl. Of course, Python is beating Perl to that particular finish line by releasing first. It should be noted, however, that Perl's changes are much more extensive than Python's and likely will be harder to detangle.
(There's a price for Perl's "There's More Than One Way To Do It" philosophy.)
There are examples like the changes from version to version of .NET-based languages (ironic, considering the whole point of .NET was supposed to be API stability and cross-platform compatibility). However, I would hardly call those languages "mature"; it's always been more of a design-on-the-go, build-the-plane-as-we-fly approach to things.
Or, as I tend to think of it, most languages come from either "organic growth" or "engineered construction." Perl is the perfect example of organic growth; it started as a fancy text processing tool ala awk/sed and grew into a full language.
Python, on the other hand, is much more engineered. Spend a bit of time wandering around the extensive whitepapers on their website to see the extensive debate that goes into every even minor change to the language's syntax and implementation.
The idea of making these sorts of far-reaching changes is somewhat new to programming languages because programming languages themselves have changed in nature. It used to be that programming methodologies changed only when a new processor came out that had a new instruction set. The early languages tended to either be so low-level and married to assembly language (e.g. C) or so utterly dynamic in nature (Forth, Lisp) that such a mid-stream change wouldn't even come up as a consideration.
As to whether or not the changes are good ones, I'm not sure. I tend to have faith in the people guiding Python's development, however; the changes in the language thus far have been largely for the better.
I think in the days to come the Global Interpreter Lock will prove more central than syntax changes. Though the new multiprocessor library might alleviate most of that.
A:
The price of insisting on near-absolute backwards compatibility is just too high. Spend two minutes programming in C++ if you want to see why.
A:
The python team has worked very hard to make the lack of backward compatibility as painless as possible, to the point where the 2.6 release of python was created with a mind towards a painless upgrade process. Once you have upgraded to 2.6 there are scripts that you can run that will move you to 3.0 without issue.
A:
It's worth mentioning that backward compatibility incurs costs of its own. In some cases it's almost impossible to evolve a language in the ideal way if 100% backward compatibility is required. Java's implementation of generics (which erases type information at compile-time in order to be backwardly-compatible) is a good example of how implementing features with 100% backward compatibility can result in a sub-optimal language feature.
So loosely speaking, it can come down to a choice between a poorly implemented new feature that's backwardly compatible, or a nicely implemented new feature that's not. In many cases, the latter is a better choice, particularly if there are tools that can automatically translate incompatible code.
A:
I think there are many examples of backward compatibility breakages. Many of the languages that did this were either small or died out along the way.
Many examples of this involved renaming the language.
Algol 60 and Algol 68 were so different that the meetings on Algol 68 broke up into factions. The Algol 68 faction, the Pascal faction and the PL/I faction.
Wirth's Pascal morphed into Modula-3. It was very similar to pascal -- very similar syntax and semantics -- but several new features. Was that really a Pascal-2 with no backward compatibility?
The Lisp to Scheme thing involved a rename.
If you track down a scan of the old B programming language manual, you'll see that the evolution to C looks kind of incremental -- not radical -- but it did break compatibility.
Fortran existed in many forms. I don't know for sure, but I think that Digital's Fortran 90 for VAX/VMS wasn't completely compatible with ancient Fortran IV programs.
RPG went through major upheavals -- I think that there are really two incompatible languages called RPG.
Bottom Line I think that thinking and learning are inevitable. You have three responses to learning the limitations of a language.
1. Invent a new language that's utterly incompatible.
2. Incremental change until you are forced to invent a new language.
3. Break compatibility in a controlled, thoughtful way.
I think that #1 and #2 are both coward's ways out. Chucking the old is easier than attempting to preserve it. Preserving every nuanced feature (no matter how bad) is a lot of work, some of it of little or no value.
Commercial enterprises opt for cowardly approaches in the name of "new marketing" or "preserving our existing customers". That's why commercial software ventures aren't hot-beds of innovation.
I think that only open-source projects can embrace innovation in the way that the Python community is tackling this change.
A:
C# and the .NET framework broke compatibility between versions 1.0 and 1.1 as well as between 1.1 and 2.0. Running applications in different versions required having multiple versions of the .NET runtime installed.
At least they did include an upgrade wizard to upgrade source from one version to the next (it worked for most of our code).
A:
Wouldn't VB6 to VB.net be the biggest example of this? Or do you all consider them two separate languages?
A:
In the Lisp world it has happened a few times. Of course, the language is so dynamic that usually evolution is simply deprecating part of the standard library and making another part standard.
Also, Lua 4 to 5 was pretty significant; but the language core is so minimal that even wide-reaching changes are documented in a couple of pages.
A:
Perl 6 is also going through this type of split right now. Perl 5 programs won't run directly on Perl 6, but there will be a translator to translate the code into a form that may work (I don't think it can handle 100% of the cases).
Perl 6 even has its own article on Wikipedia.
A:
First, here is a video talk about the changes Python will go through.
Second, changes are no good.
Third, I for one welcome evolution and believe it is necessary.
A:
gcc regularly changes how it handles C++ almost every minor release. Of course, this is more a consequence of gcc tightening how they follow the rules, and less of C++ itself changing.
A:
The new version of the Ruby programming language will also break compatibility.
And think of the libraries one might use: gtk, Qt, and so on (they also have incompatible versions).
I think incompatibility is necessary sometimes (but not too often) to support progress.
Q:
Is it possible to implement Python code-completion in TextMate?
PySmell seems like a good starting point.
I think it should be possible: PySmell's idehelper.py does the majority of the complex stuff. It should just be a case of giving it the current line, offering up the completions (the bit I am not sure about) and then replacing the line with the selected one.
>>> import idehelper
>>> # The path is where my PYSMELLTAGS file is located:
>>> PYSMELLDICT = idehelper.findPYSMELLDICT("/Users/dbr/Desktop/pysmell/")
>>> options = idehelper.detectCompletionType("", "", 1, 2, "", PYSMELLDICT)
>>> completions = idehelper.findCompletions("proc", PYSMELLDICT, options)
>>> print completions
[{'dup': '1', 'menu': 'pysmell.pysmell', 'kind': 'f', 'word': 'process', 'abbr': 'process(argList, excluded, output, verbose=False)'}]
It'll never be perfect, but it would be extremely useful (even if just for completing the stdlib modules, which should never change, so you won't have to constantly regenerate the PYSMELLTAGS file whenever you add a function).
Progressing! I have the utter-basics of completion in place - barely works, but it's close..
I ran python pysmells.py /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/*.py -O /Library/Python/2.5/site-packages/pysmell/PYSMELLTAGS
Place the following in a TextMate bundle script, set "input: entire document", "output: insert as text", "activation: key equivalent: alt+esc", "scope selector: source.python"
#!/usr/bin/env python
import os
import sys
from pysmell import idehelper
CUR_WORD = os.environ.get("TM_CURRENT_WORD")
cur_file = os.environ.get("TM_FILEPATH")
orig_source = sys.stdin.read()
line_no = int(os.environ.get("TM_LINE_NUMBER"))
cur_col = int(os.environ.get("TM_LINE_INDEX"))
# PYSMELLS is currently in site-packages/pysmell/
PYSMELLDICT = idehelper.findPYSMELLDICT("/Library/Python/2.5/site-packages/pysmell/blah")
options = idehelper.detectCompletionType(cur_file, orig_source, line_no, cur_col, "", PYSMELLDICT)
completions = idehelper.findCompletions(CUR_WORD, PYSMELLDICT, options)
if len(completions) > 0:
    new_word = completions[0]['word']
    new_word = new_word.replace(CUR_WORD, "", 1) # remove what user has already typed
    print new_word
Then I made a new python document, typed "import urll" and hit alt+escape, and it completed it to "import urllib"!
As I said, it's entirely a work-in-progress, so don't use it yet..
Last update:
orestis has integrated this into the PySmell project's code! Any further fiddling will happen on github
A:
EDIT: I've actually taken your code above and integrated it into a command. It will properly show a completion list for you to choose from.
You can grab it here: http://github.com/orestis/pysmell/tree/master (hit download and do python setup.py install). It's rough but it works. - please report any errors on http://code.google.com/p/pysmell/
--
Hi, I'm the developer of PySmell. I also use a Mac, so if you can send me an email (contact info is in the source code) with your progress so far, I can try to integrate it :)
Oh BTW it's called PySmell - no trailing 's' :)
A:
This isn't exactly what you're looking for but it might be able to get you started:
Using TextMate with Django
They appear to be somewhat Django specific but some snippets may assist with your needs. You also may be able to build on top of that with PySmells.
A:
It's not perfect, but you can give it a try: http://mtod.org/tempy
A:
In TextMate PHP has a simple auto-completion in form of hardcoded set of function names. Sounds as ugly as PHP, but in practice it's good enough to be useful.
Q:
Ordered lists in django
I have a very simple problem. I need to create a model that represents an element of an ordered list. This model can be implemented like this:
class Item(models.Model):
    data = models.TextField()
    order = models.IntegerField()
or like this:
class Item(models.Model):
    data = models.TextField()
    next = models.ForeignKey('self')
Which way is preferred? What drawbacks does each solution have?
A:
Essentially, the second solution you propose is a linked list. Linked lists implemented at the database level are usually not a good idea. To retrieve a list of n elements, you will need n database accesses (or complicated queries). Performance-wise, retrieving a list in O(n) queries is awfully inefficient.
In regular code, linked list are used to get better insert performance compared to arrays (no need to move all elements around). In your database, updating all elements is not that complicated in only 2 queries :
UPDATE item SET "order" = "order" + 1 WHERE "order" >= 3
INSERT INTO item ("order", ...) VALUES (3, ...)
I remember seeing a reusable app that implemented all that and a nice admin interface, but I can't find it right now ...
To summarize, definitely use solution #1 and stay away from solution #2 unless you have a very, very good reason not to!
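As an illustrative aside (my addition, not from the original answer), the same shift-and-insert can be expressed with Django's ORM using F expressions, reusing the Item model from the question:
from django.db.models import F
# make room at position 3, then insert the new element there
Item.objects.filter(order__gte=3).update(order=F('order') + 1)
Item.objects.create(data='new element', order=3)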
A:
That depends on what you want to do.
The first one seems better to make a single query in the database and get all data in the correct order
The second one seems better to insert an element between two existing elements (because in the first one you'd have to change a lot of items if the numbers are sequential)
I'd use the first one, because it seems to fit a database table better, which is how Django stores model data under the hood.
A:
There is another solution.
class Item(models.Model):
    data = models.TextField()

You can just pickle or marshal a Python list into the data field and then load it up. This one is good for updating and reading, but not for searching, e.g. fetching all lists that contain a specific item.
Q:
Measure load time for python cgi script?
I use python cgi for our intranet application.
When I measure time, the script takes 4s to finish. But after that, it still takes another 11s to show the screen in the browser.
The screen is built with tables (size: 10 KB, 91 KB uncompressed) and has a large css file (5 KB, 58 KB uncompressed).
I used YSlow and did as much optimization as suggested. Gzipping etc.
Firebug Net says: 11s for the file.
How do I measure what these last 11 seconds are needed for?
Is it just the size of the HTML, or the table structure?
Any more ideas for tweaking?
A:
I think I'd grab a copy of Ethereal and watch the TCP connection between the browser and the script, if I were concerned about whether the server is not getting its job done in an acceptable amount of time. If you see the TCP socket close before that 11s gap, you know that your issue is entirely on the browser side. If the TCP close comes well into the 11s gap, then you're going to have to do some debugging on the http server side.
I think that Ethereal has changed its name to Wireshark. Whatever it is calling itself recently, it's a must-have tool for this sort of work. I was using it just the other day to find out why I couldn't connect to my virtualized http server.
A:
With that much HTML to render I would also consider the speed of the computer. You can test this by saving the HTML file and opening it from your local hard drive :)
Q:
Can I use urllib to submit a SOAP request?
I have a SOAP request that is known to work using a tool like, say, SoapUI, but I am trying to get it to work using urllib.
This is what I have tried so far and it did not work:
import urllib
f = "".join(open("ws_request_that_works_in_soapui", "r").readlines())
urllib.urlopen('http://url.com/to/Router?wsdl', f)
I haven't been able to find the spec on how the document should be posted to the SOAP Server.
urllib is not a necessary requirement.
A:
Well, I answered my own question
import httplib
f = "".join(open('ws_request', 'r'))
webservice = httplib.HTTP('localhost', 8083)
webservice.putrequest("POST", "Router?wsdl")
webservice.putheader("User-Agent", "Python post")
webservice.putheader("Content-length", "%d" % len(f))
webservice.putheader("SOAPAction", "\"\"")
webservice.endheaders()
webservice.send(f)
A:
Short answer: yes you can.
Long answer:
Take a look at this example it doesn't use urllib but will give you the idea of how to prepare SOAP request.
As far as urllib, I suggest using urllib2, and yes you can submit a SOAP request using it, follow the same steps to prepare the request as in previous example.
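A hedged sketch of that urllib2 approach (my addition), reusing the endpoint and request file from the answer above; the header values are assumptions:
import urllib2

body = open('ws_request').read()
req = urllib2.Request('http://localhost:8083/Router?wsdl', data=body)
req.add_header('Content-Type', 'text/xml; charset=utf-8')
req.add_header('SOAPAction', '""')
print urllib2.urlopen(req).read()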
Q:
Best way to poll a web service (eg, for a twitter app)
I need to poll a web service, in this case twitter's API, and I'm wondering what the conventional wisdom is on this topic. I'm not sure whether this is important, but I've always found feedback useful in the past.
A couple scenarios I've come up with:
The querying process starts every X seconds, eg a cron job runs a python script
A process continually loops and queries at each iteration, eg ... well, here is where I enter unfamiliar territory. Do I just run a python script that doesn't end?
Thanks for your advice.
ps - regarding the particulars of twitter: I know that it sends emails for following and direct messages, but sometimes one might want the flexibility of parsing @replies. In those cases, I believe polling is as good as it gets.
pps - twitter limits bots to 100 requests per 60 minutes. I don't know if this also limits web scraping or rss feed reading. Anyone know how easy or hard it is to be whitelisted?
Thanks again.
A:
"Do I just run a python script that doesn't end?"
How is this unfamiliar territory?
import time
polling_interval = 36.0 # (100 requests in 3600 seconds)
running = True
while running:
    start = time.time()  # wall-clock time; time.clock() would measure CPU time on some platforms
    poll_twitter()
    anything_else_that_seems_important()
    work_duration = time.time() - start
    time.sleep(max(0.0, polling_interval - work_duration))  # guard against a negative sleep
It's just a loop.
A:
You should have a page that is like a Ping or Heartbeat page. Then you have another process that "tickles" or hits that page; usually you can do this in the Control Panel of your web host, or use a cron job if you have local access. This script can then keep statistics in a database or some data store of how often it has polled, and you poll the service only as often as you really need to, of course limiting it to whatever the provider's limit is. You definitely don't want to (and certainly don't want to rely on) a python script that "doesn't end." :)
Q:
Draw rounded corners on photo with PIL
My site is full of rounded corners on every box and picture, except for the thumbnails of user uploaded photos.
How can I use the Python Imaging Library to 'draw' white or transparent rounded corners onto each thumbnail?
A:
From Fredrik Lundh:
create a mask image with round corners (either with your favourite image
editor or using ImageDraw/aggdraw or some such).
in your program, load the mask image, and cut out the four corners using
"crop".
then, for each image, create a thumbnail as usual, and use the corner
masks on the corners of the thumbnail.
if you want transparent corners, create an "L" image with the same
size as the thumbnail, use "paste" to add the corner masks in that
image, and then use "putalpha" to attach the alpha layer to the
thumbnail.
if you want solid corners, use "paste" on the thumbnail instead, using
a solid color as the source.
http://mail.python.org/pipermail/python-list/2008-January/472508.html
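A minimal sketch of those steps (my addition; the file names and the pre-made corner mask are assumptions, not from the original post):
from PIL import Image

thumb = Image.open('thumbnail.jpg')
# start from a fully opaque alpha layer the size of the thumbnail
alpha = Image.new('L', thumb.size, 255)
# corner.png: a pre-made 'L' mask, black where the corner should be cut away
corner = Image.open('corner.png').convert('L')
w, h = corner.size
tw, th = thumb.size
# paste the mask (rotated) into each corner of the alpha layer
alpha.paste(corner, (0, 0))
alpha.paste(corner.rotate(90), (0, th - h))
alpha.paste(corner.rotate(180), (tw - w, th - h))
alpha.paste(corner.rotate(270), (tw - w, 0))
thumb.putalpha(alpha) # transparent rounded corners
thumb.save('thumbnail-rounded.png') # PNG keeps the transparency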
A:
Might it not be a better idea (assuming HTML is the output) to use HTML and CSS to put some rounded borders on those pictures? That way, if you want to change the look of your site, you don't have to do any image reprocessing, and you don't have to do any image processing in the first place.
Q:
Scripting LMMS from Python
Recently I asked about scripting FruityLoops or Reason from Python, which didn't turn up much.
Today I found LMMS, a free-software FruityLoops clone. So, similarly. Has anyone tried scripting this from Python (or similar)? Is there an API or wrapper for accessing its resources from outside?
If not, what would be the right approach to try writing one?
A:
It seems you can write plugins for LMMS using C++. By embedding Python in the C++ plugin you can effectively script the program in Python.
A:
Look at http://www.csounds.com/ for an approach to scripting music synth programs in Python.
A:
You can connect pretty much everything in LMMS to a MIDI input. Try that?
| Scripting LMMS from Python | Recently I asked about scripting FruityLoops or Reason from Python, which didn't turn up much.
Today I found LMMS, a free-software FruityLoops clone. So, similarly. Has anyone tried scripting this from Python (or similar)? Is there an API or wrapper for accessing its resources from outside?
If not, what would be the right approach to try writing one?
| [
"It seems you can write plugins for LMMS using C++. By embedding Python in the C++ plugin you can effectively script the program in Python. \n",
"Look at http://www.csounds.com/ for an approach to scripting music synth programs in Python.\n",
"You can connect pretty much everything in LMMS to a MIDI input. Try that?\n"
] | [
5,
0,
0
] | [] | [] | [
"audio_player",
"python"
] | stackoverflow_0000427037_audio_player_python.txt |
Q:
Convert CVS/SVN to a Programming Snippets Site
I use cvs to maintain all my python snippets, notes, c, c++ code. As the hosting provider provides a public web server also, I was thinking that I should convert the cvs automatically to a programming snippets website.
cvsweb is not what I mean.
doxygen is for a complete project and for browsing self-referencing code online. I think doxygen is more like a web-based ctags.
I tried rest2web, but it requires that I write /restweb headers and keep the files as .txt files, and that will interfere with the programming language syntax.
An approach I have thought is:
1) run source-highlight and create .html pages for all the scripts.
2) now write a script to index those script .htmls and create webpage.
3) Create the website of those pages.
before proceeding, I thought I shall discuss here, if the members have any suggestion.
What do you do when you want to maintain your snippets and notes in cvs and also auto-generate a good website from them? I like rest2web for converting notes to html.
A:
Run Trac on the server linked to the (svn) repository. The Trac wiki can conveniently refer to files and changesets. You get TODO tickets, too.
A:
enscript or pygmentize (part of pygments) can be used to convert code to HTML. You can use a custom header or footer to link to the actual code for download.
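For example, with pygments the conversion takes only a few lines (a sketch; the file names are made up):
from pygments import highlight
from pygments.lexers import get_lexer_for_filename
from pygments.formatters import HtmlFormatter

def to_html(path):
    code = open(path).read()
    lexer = get_lexer_for_filename(path)                # pick a lexer from the file extension
    formatter = HtmlFormatter(full=True, linenos=True)  # standalone page with line numbers
    return highlight(code, lexer, formatter)

open('snippet.html', 'w').write(to_html('snippet.py'))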
A:
I finally settled for rest2web. I had to do the following.
Use a separate python script to recursively copy the files in the CVS to a separate directory.
Added extra files index.txt and template.txt to all the directories which I wanted to be in the webpage.
The best thing about rest2web is that it supports python scripting within the template.txt, so I just ran a loop of the contents and indexed them in the page.
There is still a lot more to do to automate the entire process, e.g. inline viewing of programs and colorization, which I think can be done with some more trials.
I have the completed website here; it is called uthcode.
| Convert CVS/SVN to a Programming Snippets Site | I use cvs to maintain all my python snippets, notes, c, c++ code. As the hosting provider provides a public web server also, I was thinking that I should convert the cvs automatically to a programming snippets website.
cvsweb is not what I mean.
doxygen is for a complete project and for browsing self-referencing code online. I think doxygen is more like a web-based ctags.
I tried rest2web, but it requires that I write /restweb headers and keep the files as .txt files, and that will interfere with the programming language syntax.
An approach I have thought is:
1) run source-highlight and create .html pages for all the scripts.
2) now write a script to index those script .htmls and create webpage.
3) Create the website of those pages.
before proceeding, I thought I shall discuss here, if the members have any suggestion.
What do you do when you want to maintain your snippets and notes in cvs and also auto-generate a good website from them? I like rest2web for converting notes to html.
| [
"Run Trac on the server linked to the (svn) repository. The Trac wiki can conveniently refer to files and changesets. You get TODO tickets, too.\n",
"enscript or pygmentize (part of pygments) can be used to convert code to HTML. You can use a custom header or footer to link to the actual code for download.\n",
"I finally settled for rest2web. I had to do the following. \n\nUse a separate python script to recursively copy the files in the CVS to a separate directory.\nAdded extra files index.txt and template.txt to all the directories which I wanted to be in the webpage.\nThe best thing about rest2web is that it supports python scripting within the template.txt, so I just ran a loop of the contents and indexed them in the page.\nThere is still lot more to go to automate the entire process. For eg. Inline viewing of programs and colorization, which I think can be done with some more trials.\n\nI have the completed website here, It is called uthcode.\n"
] | [
3,
1,
0
] | [] | [] | [
"cvs",
"python",
"rest",
"svn",
"web_applications"
] | stackoverflow_0000408621_cvs_python_rest_svn_web_applications.txt |
Q:
Regular expression: replace the suffix of a string ending in '.js' but not 'min.js'
Assume infile is a variable holding the name of an input file, and similarly outfile for output file. If infile ends in .js, I'd like to replace with .min.js and that's easy enough (I think).
outfile = re.sub(r'\b.js$', '.min.js', infile)
But my question is if infile ends in .min.js, then I do not want the substitution to take place. (Otherwise, I'll end up with .min.min.js) How can I accomplish this by using regular expression?
PS: This is not homework. If you're curious what this is for: this is for a small python script to do mass compress of JavaScript files in a directory.
A:
You want to do a negative lookbehind assertion. For instance,
outfile = re.sub(r"(?<!\.min)\.js$", ".min.js", infile)
You can find more about this here: http://docs.python.org/library/re.html#regular-expression-syntax
A:
For tasks this simple, there's no need for regexps. String methods can be more readable, eg.:
if filename.endswith('.js') and not filename.endswith('.min.js'):
filename= filename[:-3]+'.min.js'
| Regular expression: replace the suffix of a string ending in '.js' but not 'min.js' | Assume infile is a variable holding the name of an input file, and similarly outfile for output file. If infile ends in .js, I'd like to replace with .min.js and that's easy enough (I think).
outfile = re.sub(r'\b.js$', '.min.js', infile)
But my question is if infile ends in .min.js, then I do not want the substitution to take place. (Otherwise, I'll end up with .min.min.js) How can I accomplish this by using regular expression?
PS: This is not homework. If you're curious what this is for: this is for a small python script to do mass compress of JavaScript files in a directory.
| [
"You want to do a negative lookbehind assertion. For instance,\noutfile = re.sub(r\"(?<!\\.min)\\.js$\", \".min.js\", infile)\n\nYou can find more about this here: http://docs.python.org/library/re.html#regular-expression-syntax\n",
"For tasks this simple, there's no need for regexps. String methods can be more readable, eg.:\nif filename.endswith('.js') and not filename.endswith('.min.js'):\n filename= filename[:-3]+'.min.js'\n\n"
] | [
9,
3
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0000430047_python_regex.txt |
Q:
How to process a YAML stream in Python
I have a command-line app that continuously outputs YAML data in the form:
- col0: datum0
col1: datum1
col2: datum2
- col0: datum0
col1: datum1
col2: datum2
...
It does this for all of eternity. I would like to write a Python script that continuously reads each of these records.
The PyYAML library seems best at taking fully loaded strings and interpreting those as a complete YAML document. Is there a way to put PyYAML into a "streaming" mode?
Or is my only option to chunk the data myself and feed it bit by bit into PyYAML?
A:
Here is what I've ended up using since there does not seem to be a built-in method for accomplishing what I want. This function should be generic enough that it can read in a stream of YAML and return top-level objects as they are encountered.
def streamInYAML(stream):
    # Buffer lines until the next top-level record starts, then parse
    # and yield the buffered chunk as one YAML document.
    y = stream.readline()
    while True:
        l = stream.readline()
        if not l:
            # End of stream: parse whatever is still buffered.
            if y.strip():
                yield yaml.load(y)
            return
        if l.startswith(' '):
            y = y + l  # an indented line continues the current record
        else:
            yield yaml.load(y)
            y = l
Can anyone do better?
A:
All of the references to stream in the documentation seem to be referring to a stream of documents... I've never tried to use it in the way you describe, but it seems like chunking the data into such a stream of documents is a reasonable approach.
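For reference, if you can change the producer to emit one YAML document per record (separated by --- lines), PyYAML's load_all consumes such a stream of documents lazily (a sketch; process() is a placeholder for your own handler):
import yaml

for record in yaml.load_all(open('records.yaml')):
    process(record)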
| How to process a YAML stream in Python | I have a command-line app that continuously outputs YAML data in the form:
- col0: datum0
col1: datum1
col2: datum2
- col0: datum0
col1: datum1
col2: datum2
...
It does this for all of eternity. I would like to write a Python script that continuously reads each of these records.
The PyYAML library seems best at taking fully loaded strings and interpreting those as a complete YAML document. Is there a way to put PyYAML into a "streaming" mode?
Or is my only option to chunk the data myself and feed it bit by bit into PyYAML?
| [
"Here is what I've ended up using since there does not seem to be a built-in method for accomplishing what I want. This function should be generic enough that it can read in a stream of YAML and return top-level objects as they are encountered.\ndef streamInYAML(stream):\n y = stream.readline()\n cont = 1\n while cont:\n l = stream.readline()\n if len(l) == 0:\n cont = 0\n else:\n if l.startswith(' '):\n y = y + l\n else:\n yield yaml.load(y)\n y = l\n\nCan anyone do better?\n",
"All of the references to stream in the the documentation seem to be referring to a stream of documents... I've never tried to use it in the way you describe, but it seems like chunking the data into such a stream of documents is a reasonable approach.\n"
] | [
4,
2
] | [] | [] | [
"command_line",
"python",
"streaming",
"yaml"
] | stackoverflow_0000429162_command_line_python_streaming_yaml.txt |
Q:
Regex for links in html text
I hope this question is not a RTFM one.
I am trying to write a Python script that extracts links from a standard HTML webpage (the <link href... tags).
I have searched the web for matching regexen and found many different patterns. Is there any agreed, standard regex to match links?
Adam
UPDATE:
I am actually looking for two different answers:
What's the library solution for parsing HTML links. Beautiful Soup seems to be a good solution (thanks, Igal Serban and cletus!)
Can a link be defined using a regex?
A:
Regexes with HTML get messy. Just use a DOM parser like Beautiful Soup.
A:
As others have suggested, if real-time-like performance isn't necessary, BeautifulSoup is a good solution:
import urllib2
from BeautifulSoup import BeautifulSoup
html = urllib2.urlopen("http://www.google.com").read()
soup = BeautifulSoup(html)
all_links = soup.findAll("a")
As for the second question, yes, HTML links ought to be well-defined, but the HTML you actually encounter is very unlikely to be standard. The beauty of BeautifulSoup is that it uses browser-like heuristics to try to parse the non-standard, malformed HTML that you are likely to actually come across.
If you are certain to be working on standard XHTML, you can use (much) faster XML parsers like expat.
Regex, for the reasons above (the parser must maintain state, and regex can't do that), will never be a general solution.
A:
No, there isn't.
You can consider using Beautiful Soup. You can call it the standard for parsing html files.
A:
Shouldn't a link be a well-defined regex?
No, [X]HTML is not in the general case parseable with regex. Consider examples like:
<link title='hello">world' href="x">link</link>
<!-- <link href="x">not a link</link> -->
<![CDATA[ ><link href="x">not a link</link> ]]>
<script>document.write('<link href="x">not a link</link>')</script>
and that's just a few random valid examples; if you have to cope with real-world tag-soup HTML there are a million malformed possibilities.
If you know and can rely on the exact output format of the target page you can get away with regex. Otherwise it is completely the wrong choice for scraping web pages.
A:
Shouldn't a link be a well-defined regex? This is a rather theoretical question.
I second PEZ's answer:
I don't think HTML lends itself to "well defined" regular expressions since it's not a regular language.
As far as I know, any HTML tag may contain any number of nested tags. For example:
<a href="http://stackoverflow.com">stackoverflow</a>
<a href="http://stackoverflow.com"><i>stackoverflow</i></a>
<a href="http://stackoverflow.com"><b><i>stackoverflow</i></b></a>
...
Thus, in principle, to match a tag properly you must be able at least to match strings of the form:
BE
BBEE
BBBEEE
...
BBBBBBBBBBEEEEEEEEEE
...
where B means the beginning of a tag and E means the end. That is, you must be able to match strings formed by any number of B's followed by the same number of E's. To do that, your matcher must be able to "count", and regular expressions (i.e. finite state automata) simply cannot do that (in order to count, an automaton needs at least a stack). Referring to PEZ's answer, HTML is a context-free grammar, not a regular language.
A:
It depends a bit on how the HTML is produced. If it's somewhat controlled you can get away with:
re.findall(r'''<link\s+.*?href=['"](.*?)['"].*?(?:</link|/)>''', html, re.I)
A:
Answering your two subquestions there.
I've sometimes subclassed SGMLParser (included in the core Python distribution) and must say it's straightforward (see the sketch below).
I don't think HTML lends itself to "well defined" regular expressions since it's not a regular language.
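For instance, a minimal SGMLParser subclass that collects link targets might look like this (a sketch; a do_link method would handle <link> tags the same way start_a handles <a> tags):
from sgmllib import SGMLParser

class LinkCollector(SGMLParser):
    def reset(self):
        SGMLParser.reset(self)
        self.hrefs = []
    def start_a(self, attrs):
        # Called for every <a ...> tag; attrs is a list of (name, value) pairs.
        self.hrefs.extend(value for name, value in attrs if name == 'href')

parser = LinkCollector()
parser.feed(open('page.html').read())
parser.close()
print parser.hrefs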
A:
In response to question #2 (shouldn't a link be a well defined regular expression) the answer is ... no.
An HTML link structure is recursive, much like parens and braces in programming languages. There must be an equal number of start and end constructs and the "link" expression can be nested within itself.
To properly match a "link" expression, a regex would be required to count the start and end tags. Regular expressions are a class of finite automata, and by definition a finite automaton cannot "count" constructs within a pattern. A grammar is required to describe a recursive data structure such as this. The inability of a regex to "count" is why you see programming languages described with grammars as opposed to regular expressions.
So it is not possible to create a regex that will positively match 100% of all "link" expressions. There are certainly regexes that will match a good deal of "link" expressions with a high degree of accuracy, but they won't ever be perfect.
I wrote a blog article about this problem recently. Regular Expression Limitations
| Regex for links in html text | I hope this question is not a RTFM one.
I am trying to write a Python script that extracts links from a standard HTML webpage (the <link href... tags).
I have searched the web for matching regexen and found many different patterns. Is there any agreed, standard regex to match links?
Adam
UPDATE:
I am actually looking for two different answers:
What's the library solution for parsing HTML links. Beautiful Soup seems to be a good solution (thanks, Igal Serban and cletus!)
Can a link be defined using a regex?
| [
"Regexes with HTML get messy. Just use a DOM parser like Beautiful Soup.\n",
"As others have suggested, if real-time-like performance isn't necessary, BeautifulSoup is a good solution:\nimport urllib2\nfrom BeautifulSoup import BeautifulSoup\n\nhtml = urllib2.urlopen(\"http://www.google.com\").read()\nsoup = BeautifulSoup(html)\nall_links = soup.findAll(\"a\")\n\nAs for the second question, yes, HTML links ought to be well-defined, but the HTML you actually encounter is very unlikely to be standard. The beauty of BeautifulSoup is that it uses browser-like heuristics to try to parse the non-standard, malformed HTML that you are likely to actually come across. \nIf you are certain to be working on standard XHTML, you can use (much) faster XML parsers like expat.\nRegex, for the reasons above (the parser must maintain state, and regex can't do that) will never be a general solution.\n",
"No there isn't.\nYou can consider using Beautiful Soup. You can call it the standard for parsing html files.\n",
"\nShoudln't a link be a well-defined regex?\n\nNo, [X]HTML is not in the general case parseable with regex. Consider examples like:\n<link title='hello\">world' href=\"x\">link</link>\n<!-- <link href=\"x\">not a link</link> -->\n<![CDATA[ ><link href=\"x\">not a link</link> ]]>\n<script>document.write('<link href=\"x\">not a link</link>')</script>\n\nand that's just a few random valid examples; if you have to cope with real-world tag-soup HTML there are a million malformed possibilities.\nIf you know and can rely on the exact output format of the target page you can get away with regex. Otherwise it is completely the wrong choice for scraping web pages.\n",
"\nShoudln't a link be a well-defined regex? This is a rather theoretical question,\n\nI second PEZ's answer:\n\nI don't think HTML lends itself to \"well defined\" regular expressions since it's not a regular language.\n\nAs far as I know, any HTML tag may contain any number of nested tags. For example:\n<a href=\"http://stackoverflow.com\">stackoverflow</a>\n<a href=\"http://stackoverflow.com\"><i>stackoverflow</i></a>\n<a href=\"http://stackoverflow.com\"><b><i>stackoverflow</i></b></a>\n...\n\nThus, in principle, to match a tag properly you must be able at least to match strings of the form:\nBE\nBBEE\nBBBEEE\n...\nBBBBBBBBBBEEEEEEEEEE\n...\n\nwhere B means the beginning of a tag and E means the end. That is, you must be able to match strings formed by any number of B's followed by the same number of E's. To do that, your matcher must be able to \"count\", and regular expressions (i.e. finite state automata) simply cannot do that (in order to count, an automaton needs at least a stack). Referring to PEZ's answer, HTML is a context-free grammar, not a regular language.\n",
"It depends a bit on how the HTML is produced. If it's somewhat controlled you can get away with:\nre.findall(r'''<link\\s+.*?href=['\"](.*?)['\"].*?(?:</link|/)>''', html, re.I)\n\n",
"Answering your two subquestions there.\n\nI've sometimes subclassed SGMLParser (included in the core Python distribution) and must say it's straight forward.\nI don't think HTML lends itself to \"well defined\" regular expressions since it's not a regular language.\n\n",
"In response to question #2 (shouldn't a link be a well defined regular expression) the answer is ... no. \nAn HTML link structure is a recursive much like parens and braces in programming languages. There must be an equal number of start and end constructs and the \"link\" expression can be nested within itself. \nTo properly match a \"link\" expression a regex would be required to count the start and end tags. Regular expressions are a class of Finite Automata. By definition a Finite Automata cannot \"count\" constructs within a pattern. A grammar is required to describe a recursive data structure such as this. The inability for a regex to \"count\" is why you see programming languages described with Grammars as opposed to regular expressions.\nSo it is not possible to create a regex that will positively match 100% of all \"link\" expressions. There are certainly regex's that will match a good deal of \"link\"'s with a high degree of accuracy but they won't ever be perfect. \nI wrote a blog article about this problem recently. Regular Expression Limitations\n"
] | [
17,
8,
5,
4,
3,
1,
1,
0
] | [] | [] | [
"href",
"html",
"hyperlink",
"python",
"regex"
] | stackoverflow_0000430966_href_html_hyperlink_python_regex.txt |
Q:
Why do managed attributes just work for class attributes and not for instance attributes in python?
To illustrate the question check the following code:
class MyDescriptor(object):
def __get__(self, obj, type=None):
print "get", self, obj, type
return self._v
def __set__(self, obj, value):
self._v = value
print "set", self, obj, value
return None
class SomeClass1(object):
m = MyDescriptor()
class SomeClass2(object):
def __init__(self):
self.m = MyDescriptor()
x1 = SomeClass1()
x2 = SomeClass2()
x1.m = 1000
# -> set <__main__.MyDescriptor object at 0xb787c7ec> <__main__.SomeClass1 object at 0xb787cc8c> 1000
x2.m = 1000 # I guess that this overwrites the function. But why?
# ->
print x1.m
# -> get <__main__.MyDescriptor object at 0xb787c7ec> <__main__.SomeClass1 object at 0xb787cc8c> <class '__main__.SomeClass1'> 1000
print x2.m
# -> 1000
Why doesn't x2.m = 1000 call the __set__ function? It seems that this overwrites the descriptor. But why?
Where is _v in x1? It is not in x1._v
A:
To answer your second question, where is _v?
Your version of the descriptor keeps _v in the descriptor itself. Each instance of the descriptor (the class-level instance in SomeClass1, and all of the object-level instances in objects of class SomeClass2) will have distinct values of _v.
Look at this version. This version updates the object associated with the descriptor. This means the object (SomeClass1 or x2) will contain the attribute _v.
class MyDescriptor(object):
def __get__(self, obj, type=None):
print "get", self, obj, type
return obj._v
def __set__(self, obj, value):
obj._v = value
print "set", self, obj, value
A:
You should read this and this.
It overwrites the descriptor because the descriptor protocol is only invoked for attributes found on the class: x1.m goes through MyDescriptor.__get__/__set__ because m is a class attribute of SomeClass1, while in SomeClass2 the descriptor lives in the instance's __dict__, so x2.m = 1000 simply rebinds the attribute and throws the descriptor away. For details read the links above.
A:
I found _v of x1: It is in SomeClass1.__dict__['m']._v
For the version suggested by S.Lott within the other answer: _v is in x1._v
| Why do managed attributes just work for class attributes and not for instance attributes in python? | To illustrate the question check the following code:
class MyDescriptor(object):
def __get__(self, obj, type=None):
print "get", self, obj, type
return self._v
def __set__(self, obj, value):
self._v = value
print "set", self, obj, value
return None
class SomeClass1(object):
m = MyDescriptor()
class SomeClass2(object):
def __init__(self):
self.m = MyDescriptor()
x1 = SomeClass1()
x2 = SomeClass2()
x1.m = 1000
# -> set <__main__.MyDescriptor object at 0xb787c7ec> <__main__.SomeClass1 object at 0xb787cc8c> 1000
x2.m = 1000 # I guess that this overwrites the function. But why?
# ->
print x1.m
# -> get <__main__.MyDescriptor object at 0xb787c7ec> <__main__.SomeClass1 object at 0xb787cc8c> <class '__main__.SomeClass1'> 1000
print x2.m
# -> 1000
Why doesn't x2.m = 1000 call the __set__ function? It seems that this overwrites the descriptor. But why?
Where is _v in x1? It is not in x1._v
| [
"To answer your second question, where is _v?\nYour version of the descriptor keeps _v in the descriptor itself. Each instance of the descriptor (the class-level instance SomeClass1, and all of the object-level instances in objects of class SomeClass2 will have distinct values of _v.\nLook at this version. This version updates the object associated with the descriptor. This means the object (SomeClass1 or x2) will contain the attribute _v. \nclass MyDescriptor(object):\n def __get__(self, obj, type=None):\n print \"get\", self, obj, type\n return obj._v\n def __set__(self, obj, value):\n obj._v = value\n print \"set\", self, obj, value\n\n",
"You should read this and this.\nIt overwrites the function because you didn't overload the __set__ and __get__ functions of SomeClass but of MyDescriptor class. Maybe you wanted for SomeClass to inherit MyDescriptor? SomeClass1 prints the \"get\" and \"set\" output because it's a static method AFAIK. For details read the upper links.\n",
"I found _v of x1: It is in SomeClass1.__dict__['m']._v\nFor the version suggested by S.Lott within the other answer: _v is in x1._v\n"
] | [
3,
3,
0
] | [] | [] | [
"attributes",
"descriptor",
"python"
] | stackoverflow_0000428264_attributes_descriptor_python.txt |
Q:
Unpack to unknown number of variables?
How could I unpack a tuple of unknown length to, say, a list?
I have a number of columns of data and they get split up into a tuple by some function. I want to unpack this tuple to variables but I do not know how many columns I will have. Is there any way to dynamically unpack it to as many variables as I need?
A:
You can use the asterisk to unpack a variable length, for instance:
foo, bar, *other = funct()
This should put the first item into foo, the second into bar, and all the rest into other.
Update: I forgot to mention that this is Python 3.0 compatible only.
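On Python 2, which lacks starred assignment, slicing gives the same effect (a sketch):
result = funct()
foo, bar = result[0], result[1]
other = list(result[2:])  # everything past the first two items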
A:
Unpack the tuple to a list?
l = list(t)
A:
Do you mean you want to create variables on the fly? How will your program know how to reference them, if they're dynamically created?
Tuples have lengths, just like lists. It's perfectly permissible to do something like:
total_columns = len(the_tuple)
You can also convert a tuple to a list, though there's no benefit to doing so unless you want to start modifying the results. (Tuples can't be modified; lists can.) But, anyway, converting a tuple to a list is trivial:
my_list = list(the_tuple)
There are ways to create variables on the fly (e.g., with eval), but again, how would you know how to refer to them?
I think you should clarify exactly what you're trying to do here.
| Unpack to unknown number of variables? | How could I unpack a tuple of unknown length to, say, a list?
I have a number of columns of data and they get split up into a tuple by some function. I want to unpack this tuple to variables but I do not know how many columns I will have. Is there any way to dynamically unpack it to as many variables as I need?
| [
"You can use the asterisk to unpack a variable length, for instance:\nfoo, bar, *other = funct()\n\nThis should put the first item into foo, the second into bar, and all the rest into other.\nUpdate: I forgot to mention that this is Python 3.0 compatible only.\n",
"Unpack the tuple to a list?\nl = list(t)\n\n",
"Do you mean you want to create variables on the fly? How will your program know how to reference them, if they're dynamically created?\nTuples have lengths, just like lists. It's perfectly permissable to do something like:\ntotal_columns = len(the_tuple)\n\nYou can also convert a tuple to a list, though there's no benefit to doing so unless you want to start modifying the results. (Tuples can't be modified; lists can.) But, anyway, converting a tuple to a list is trivial:\nmy_list = list(the_tuple)\n\nThere are ways to create variables on the fly (e.g., with eval), but again, how would you know how to refer to them?\nI think you should clarify exactly what you're trying to do here.\n"
] | [
35,
10,
4
] | [] | [] | [
"casting",
"iterable_unpacking",
"python"
] | stackoverflow_0000431944_casting_iterable_unpacking_python.txt |
Q:
Restarting a Python Interpreter Quietly
I have a python interpreter embedded inside an application. The application takes a long time to start up and I have no ability to restart the interpreter without restarting the whole application. What I would like to do is to essentially save the state of the interpreter and return to that state easily.
I started by storing the names of all modules in sys.modules that the python interpreter started with and then deleting all new modules from sys.modules when requested. This appears to make the interpreter prepared to re-import the same modules even though it has already imported them before. However, this doesn't seem to work in all situations, such as using singleton classes and static methods, etc.
I'd rather not embed another interpreter inside this interpreter if it can be avoided, as the ease of being able to use the applications API will be lost (as well as including a slight speed hit I imagine).
So, does anyone know of a way I could store the interpreter's state and then return to this so that it can cope with all situations?
Thanks,
Dan
A:
Try this code from ActiveState recipes: http://code.activestate.com/recipes/572213/
It extends pickle so it supports pickling anything defined in the shell console. Theoretically you should just be able to pickle the main module, according to their documentation:
import savestate, pickle, __main__
pickle.dump(__main__, open('savestate.pickle', 'wb'), 2)
A:
I'd suggest tackling the root cause problem.
"The application takes a long time to
start up and I have no ability to
restart the interpreter without
restarting the whole application"
I doubt this is actually 100% true. If the overall application is the result of an act of Congress, okay, it can't be changed. But if the overall application was written by real people, then finding and moving the code to restart the Python interpreter should be possible. It's cheaper, simpler and more reliable than anything else you might do to hack around the problem.
A:
storing the names of all modules in sys.modules that the python interpreter started with and then deleting all new modules from sys.modules when requested. This appears to make the interpreter prepared to re-import the same modules even though it has already imported them before.
The module-reload-forcing approach can be made to work in some circumstances but it's a bit hairy. In summary:
You need to make sure that all modules that have dependencies on each other are all reloaded at once. So any module 'x' that does 'import y' or 'from y import ...' must be deleted from sys.modules at the same time as module 'y'.
This process will need protecting with a lock if your app or any other active module is using threads.
Any module that leaves hooks pointing to itself in other modules cannot usefully be reloaded as references to the old module will remain in unreloaded/unreloadable code. This includes stuff like exception hooks, signals, warnings filters, encodings, monkey-patches and so on. If you start blithely reloading modules containing other people's code you might be surprised how often they do stuff like that, potentially resulting in subtle and curious errors.
So to get it to work you need to have well-defined boundaries between interdependent modules - "was it imported at initial start-up time" probably isn't quite good enough - and to make sure they're nicely encapsulated without unexpected dependencies like monkey-patching.
This can be based on folder, so for example anything in /home/me/myapp/lib could be reloaded as a unit, whilst leaving other modules alone - especially the contents of the stdlib in eg. /usr/lib/python2.x/ which is in general not reliable to reload. I've got code for this in an as-yet-unreleased webapp reloading wrapper, if you need.
Finally:
You need to know a little bit about the internals of sys.modules, specifically that it leaves a bunch of 'None' values to signify failed relative imports. If you don't delete them at the same time as you delete your other module values, the subsequent attempt to import a module can (sometimes) end up importing 'None', leading to confusing errors.
This is a nasty implementation detail which might change and break your app in some future Python version, but that is the price for playing with sys.modules in unsupported ways.
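Here is a minimal sketch of that snapshot/delete idea (hypothetical: it assumes all reloadable modules share one package prefix, and it ignores the locking and hook caveats above):
import sys

BASELINE = set(sys.modules)  # record at start-up, before the app's modules load

def reset_modules(prefix='myapp'):
    for name in list(sys.modules):
        added_later = name not in BASELINE
        if added_later and (name.startswith(prefix) or sys.modules[name] is None):
            # Drop the module, including the None placeholders left
            # by failed relative imports.
            del sys.modules[name]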
A:
One very hacky and bug-prone approach might be a C module that simply copies the memory to a file so it can be loaded back the next time. But since I can't imagine that this would always work properly, would pickling be an alternative?
If you are able to make all of your modules pickleable then you should be able to pickle everything in globals() so it can be reloaded again.
A:
If you know in advance the modules, classes, functions, variables, etc... in use, you could pickle them to disk and reload. I'm not sure off the top of my head the best way to tackle the issue if your environment contains many unknowns. Though, it may suffice to pickle globals and locals.
| Restarting a Python Interpreter Quietly | I have a python interpreter embedded inside an application. The application takes a long time to start up and I have no ability to restart the interpreter without restarting the whole application. What I would like to do is to essentially save the state of the interpreter and return to that state easily.
I started by storing the names of all modules in sys.modules that the python interpreter started with and then deleting all new modules from sys.modules when requested. This appears to make the interpreter prepared to re-import the same modules even though it has already imported them before. However, this doesn't seem to work in all situations, such as using singleton classes and static methods, etc.
I'd rather not embed another interpreter inside this interpreter if it can be avoided, as the ease of being able to use the applications API will be lost (as well as including a slight speed hit I imagine).
So, does anyone know of a way I could store the interpreter's state and then return to this so that it can cope with all situations?
Thanks,
Dan
| [
"Try this code from ActiveState recipes: http://code.activestate.com/recipes/572213/\nIt extends pickle so it supports pickling anything defined in the shell console. Theoretically you should just be able to pickle the main module, according to their documentation:\nimport savestate, pickle, __main__\npickle.dump(__main__, open('savestate.pickle', 'wb'), 2)\n\n",
"I'd suggest tackling the root cause problem. \n\n\"The application takes a long time to\n start up and I have no ability to\n restart the interpreter without\n restarting the whole application\"\n\nI doubt this is actually 100% true. If the overall application is the result of an act of Congress, okay, it can't be changed. But if the overall application was written by real people, then finding and moving the code to restart the Python interpreter should be possible. It's cheaper, simpler and more reliable than anything else you might do to hack around the problem.\n",
"\nstoring the names of all modules in sys.modules that the python interpreter started with and then deleting all new modules from sys.modules when requested. This appears to make the interpreter prepared to re-import the same modules even though it has already imported them before.\n\nThe module-reload-forcing approach can be made to work in some circumstances but it's a bit hairy. In summary:\n\nYou need to make sure that all modules that have dependencies on each other are all reloaded at once. So any module 'x' that does 'import y' or 'from y import ...' must be deleted from sys.modules at the same time as module 'y'.\nThis process will need protecting with a lock if your app or any other active module is using threads.\nAny module that leaves hooks pointing to itself in other modules cannot usefully be reloaded as references to the old module will remain in unreloaded/unreloadable code. This includes stuff like exception hooks, signals, warnings filters, encodings, monkey-patches and so on. If you start blithely reloading modules containing other people's code you might be surprised how often they do stuff like that, potentially resulting in subtle and curious errors.\n\nSo to get it to work you need to have well-defined boundaries between interdependent modules - \"was it imported at initial start-up time\" probably isn't quite good enough - and to make sure they're nicely encapsulated without unexpected dependencies like monkey-patching.\nThis can be based on folder, so for example anything in /home/me/myapp/lib could be reloaded as a unit, whilst leaving other modules alone - especially the contents of the stdlib in eg. /usr/lib/python2.x/ which is in general not reliable to reload. I've got code for this in an as-yet-unreleased webapp reloading wrapper, if you need.\nFinally:\n\nYou need to know a little bit about the internals of sys.modules, specifically that it leaves a bunch of 'None' values to signify failed relative imports. If you don't delete them at the same time as you delete your other module values, the subsequent attempt to import a module can (sometimes) end up importing 'None', leading to confusing errors.\n\nThis is a nasty implementation detail which might change and break your app in some future Python version, but that is the price for playing with sys.modules in unsupported ways.\n",
"One very hacky and bug prone approach might be a c module that simply copies the memory to a file so it can be loaded back the next time. But since I can't imagine that this would always work properly, would pickling be an alternative?\nIf you are able to make all of your modules pickleable than you should be able to pickle everything in globals() so it can be reloaded again.\n",
"If you know in advance the modules, classes, functions, variables, etc... in use, you could pickle them to disk and reload. I'm not sure off the top of my head the best way to tackle the issue if your environment contains many unknowns. Though, it may suffice to pickle globals and locals.\n"
] | [
5,
1,
1,
0,
0
] | [] | [] | [
"interpreter",
"python"
] | stackoverflow_0000431432_interpreter_python.txt |
Q:
when to delete user's session
I'm writing a webapp that will only be used by authenticated users. Some temporary databases and log files will be created during each user session. I'd like to erase all these temp files when the session is finished.
Obviously, a logout or window close event would be sufficient to close the session, but in some cases the user may keep the browser open long after he's finished.
Another approach would be to time user sessions or delete the temp files during routine maintenance.
How do you go about it?
A:
User sessions should have a timeout value and should be closed when the timeout expires or the user logs out. Logout is an obvious time to do this, and the timeout needs to be there in case the user navigates away from your application without logging out.
A:
A cron job to clean up any expired session data in the database is a good thing. Depending on how long your sessions last, and how big your database is, you might want to cleanup more often than once per day. But one cleanup pass per day is usually fine.
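For instance, the cleanup pass can be as simple as this (a sketch; the database, table, and column names are made up):
import sqlite3, time

def purge_expired_sessions(db_path, max_idle_seconds=3600):
    conn = sqlite3.connect(db_path)
    cutoff = time.time() - max_idle_seconds
    # Remove sessions whose last_access timestamp is older than the cutoff.
    conn.execute("DELETE FROM sessions WHERE last_access < ?", (cutoff,))
    conn.commit()
    conn.close()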
A:
Delete User's Session during:
1) Logout
2) Automatic timeout (the length of the timeout can be set through the web.config)
3) As part of any other routine maintenance methods you already have running by deleting any session information which hasn't been accessed for some defined period of time (likely shorter than your automatic timeout length because if it was the same length it should already be taken care of)
| when to delete user's session | I'm writing a webapp that will only be used by authenticated users. Some temporary databases and log files will be created during each user session. I'd like to erase all these temp files when the session is finished.
Obviously, a logout or window close event would be sufficient to close the session, but in some cases the user may keep the browser open long after he's finished.
Another approach would be to time user sessions or delete the temp files during routine maintenance.
How do you go about it?
| [
"User sessions should have a timeout value and should be closed when the timeout expires or the user logs out. Log out is an obvious time to do this and the time out needs to be there in case the user navigates away from your application without logging out.\n",
"A cron job to clean up any expired session data in the database is a good thing. Depending on how long your sessions last, and how big your database is, you might want to cleanup more often than once per day. But one cleanup pass per day is usually fine.\n",
"Delete User's Session during:\n1) Logout\n2) Automatic timeout (the length of the timeout can be set through the web.config)\n3) As part of any other routine maintenance methods you already have running by deleting any session information which hasn't been accessed for some defined period of time (likely shorter than your automatic timeout length because if it was the same length it should already be taken care of)\n"
] | [
1,
1,
0
] | [] | [] | [
"python",
"session"
] | stackoverflow_0000432115_python_session.txt |
Q:
How to assign a new class attribute via __dict__?
I want to assign a class attribute via a string object - but how?
Example:
class test(object):
pass
a = test()
test.value = 5
a.value
# -> 5
test.__dict__['value']
# -> 5
# BUT:
attr_name = 'next_value'
test.__dict__[attr_name] = 10
# -> 'dictproxy' object does not support item assignment
A:
There is a builtin function for this:
setattr(test, attr_name, 10)
Reference: http://docs.python.org/library/functions.html#setattr
Example:
>>> class a(object): pass
>>> a.__dict__['wut'] = 4
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'dictproxy' object does not support item assignment
>>> setattr(a, 'wut', 7)
>>> a.wut
7
| How to assign a new class attribute via __dict__? | I want to assign a class attribute via a string object - but how?
Example:
class test(object):
pass
a = test()
test.value = 5
a.value
# -> 5
test.__dict__['value']
# -> 5
# BUT:
attr_name = 'next_value'
test.__dict__[attr_name] = 10
# -> 'dictproxy' object does not support item assignment
| [
"There is a builtin function for this:\nsetattr(test, attr_name, 10)\n\nReference: http://docs.python.org/library/functions.html#setattr\nExample:\n>>> class a(object): pass\n>>> a.__dict__['wut'] = 4\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: 'dictproxy' object does not support item assignment\n>>> setattr(a, 'wut', 7)\n>>> a.wut\n7\n\n"
] | [
79
] | [] | [] | [
"attributes",
"class",
"oop",
"python"
] | stackoverflow_0000432786_attributes_class_oop_python.txt |
Q:
Python Library to Generate VCF Files?
Know of any good libraries for this? I did some searches and didn't come across anything. Someone somewhere must have done this before, I hate to reinvent the wheel.
A:
I would look at:
http://vobject.skyhouseconsulting.com/usage.html (look under "Usage examples")
Very easy parsing and generation of both vCal and vCard.
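A short sketch of the generation side, following the usage page linked above (the contact details are made up):
import vobject

card = vobject.vCard()
card.add('n')
card.n.value = vobject.vcard.Name(family='Gump', given='Forrest')
card.add('fn')
card.fn.value = 'Forrest Gump'
card.add('email')
card.email.value = 'forrest@example.com'
print card.serialize()  # emits a BEGIN:VCARD ... END:VCARD block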
A:
PyCoCuMa appears to have a VCF parser built into it, and it'll generate VCard output. You might have some luck with it. I played around with it a bit; it parsed some VCF files I have lying around without any problems. You'll most likely have to poke through the source to figure out how to use it, though.
See:
http://www.srcco.de/v/pycocuma
http://pycocuma.sourcearchive.com/documentation/0.4.5-6-5/vcard_8py-source.html
| Python Library to Generate VCF Files? | Know of any good libraries for this? I did some searches and didn't come across anything. Someone somewhere must have done this before, I hate to reinvent the wheel.
| [
"I would look at:\nhttp://vobject.skyhouseconsulting.com/usage.html (look under \"Usage examples\")\nVery easy parsing and generation of both vCal and vCard.\n",
"PyCoCuMa appears to have a VCF parser built into it, and it'll generate VCard output. You might have some luck with it. I played around with it a bit; it parsed some VCF files I have lying around without any problems. You'll most likely have to poke through the source to figure out how to use it, though.\nSee:\nhttp://www.srcco.de/v/pycocuma\nhttp://pycocuma.sourcearchive.com/documentation/0.4.5-6-5/vcard_8py-source.html\n"
] | [
8,
2
] | [] | [] | [
"python",
"vcf_vcard"
] | stackoverflow_0000433331_python_vcf_vcard.txt |
Q:
Filtering a complete date in django?
There are several filter methods for dates (year, month, day). If I want to match a full date, say 2008/10/18, is there a better way than this:
Entry.objects.filter(pub_date__year=2008).filter(pub_date__month=10).filter(pub_date__day=18)
A:
How about using a datetime object. For example:
from datetime import datetime
Entry.objects.filter(pub_date=datetime(2008, 10, 18))
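Note that if pub_date is a DateTimeField rather than a DateField, an exact datetime match only catches entries stamped at exactly midnight; bounding the whole day works in that case (a sketch, reusing the import above):
Entry.objects.filter(pub_date__gte=datetime(2008, 10, 18),
                     pub_date__lt=datetime(2008, 10, 19))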
| Filtering a complete date in django? | There are several filter methods for dates (year,month,day). If I want to match a full date, say 2008/10/18, is there a better way than this:
Entry.objects.filter(pub_date__year=2008).filter(pub_date__month=10).filter(pub_date__day=18)
| [
"How about using a datetime object. For example:\nfrom datetime import datetime\nEntry.objects.filter(pub_date=datetime(2008, 10, 18))\n\n"
] | [
10
] | [] | [] | [
"date",
"django",
"django_queryset",
"filter",
"python"
] | stackoverflow_0000433507_date_django_django_queryset_filter_python.txt |
Q:
Shorthand adding/appending in Python
I like that in PHP I can do the following
$myInteger++;
$myString .= 'more text';
With Python I must do the following
myInteger = myInteger + 1
myString = myString + "more text"
Is there a better way to add or append to a variable in Python?
A:
Python doesn't have the increment (++) and decrement (--) operators, but it does have the += operator (and -=, etc.) so you can do this:
myInteger += 1
myString += "more text"
A:
You could do it in the same way you are doing it in PHP:
var += 1
But my advice is to write it out clearly:
var = var + 1
| Shorthand adding/appending in Python | I like that in PHP I can do the following
$myInteger++;
$myString .= 'more text';
With Python I must do the following
myInteger = myInteger + 1
myString = myString + "more text"
Is there a better way to add or append to a variable in Python?
| [
"Python doesn't have the increment (++) and decrement (--) operators, but it does have the += operator (and -=, etc.) so you can do this:\nmyInteger += 1\nmyString += \"more text\"\n\n",
"You could do it in the same way you are doing it in PHP:\nvar += 1\n\nBut my advice is to write it down clear:\nvar = var + 1\n\n"
] | [
35,
3
] | [] | [] | [
"python"
] | stackoverflow_0000433795_python.txt |
Q:
Is late binding consistent with the philosophy of "readability counts"?
I am sorry all - I am not here to blame Python. This is just a reflection on whether what I believe is right. Being a Python devotee for two years, I have been writing only small apps and singing Python's praises wherever I go. I recently had the chance to read Django's code, and have started wondering if Python really follows its "readability counts" philosophy. For example,
class A:
a = 10
b = "Madhu"
def somemethod(self, arg1):
self.c = 20.22
d = "some local variable"
# do something
....
...
def somemethod2(self, arg2):
self.c = "Changed the variable"
# do something 2
...
It's difficult to track the flow of code in situations where the instance variables are created upon use (i.e. self.c in the above snippet). It's not possible to see which instance variables are defined when reading a substantial amount of code written in this manner. It becomes very frustrating even when reading a class with just 6-8 methods and not more than 100-150 lines of code.
I am interested in knowing if my reading of this code is skewed by C++/Java style, since most other languages follow the same approach as them. Is there a Pythonic way of reading this code more fluently? What made Python developers adopt this strategy keeping "readability counts" in mind?
A:
The code fragment you present is fairly atypical (which might also be because you probably made it up):
you wouldn't normally have an instance variable (self.c) that is a floating point number at some point, and a string at a different point. It should be either a number or a string all the time.
you normally don't bring instance variables into existence in an arbitrary method. Instead, you typically have a constructor (__init__) that initializes all variables.
you typically don't have instance variables named a, b, c. Instead, they have descriptive names.
With these fixed, your example would be much more readable.
A:
A sufficiently talented miscreant can write unreadable code in any language. Python attempts to impose some rules on structure and naming to nudge coders in the right direction, but there's no way to force such a thing.
For what it's worth, I try to limit the scope of local variables to the area where they're used in every language that I use - for me, not having to maintain a huge mental dictionary makes re-familiarizing myself with a bit of code much, much easier.
A:
I agree that what you have seen can be confusing and ought to be accompanied by documentation. But confusing things can happen in any language.
In your own code, you should apply whatever conventions make things easiest for you to maintain the code. With respect to this particular issue, there are a number of possible things that can help.
Using something like Epydoc, you can specify all the instance variables a class will have. Be scrupulous about documenting your code, and be equally scrupulous about ensuring that your code and your documentation remain in sync.
Adopt coding conventions that encourage the kind of code you find easiest to maintain. There's nothing better than setting a good example.
Keep your classes and functions small and well-defined. If they get too big, break them up. It's easier to figure out what's going on that way.
If you really want to insist that instance variables be declared before being referenced, there are some metaclass tricks you can use; e.g., you can create a common base class that, using metaclass logic, enforces the convention that only variables declared when the subclass is defined can later be set (see the sketch below).
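As a simpler, built-in alternative to writing such a metaclass, __slots__ already enforces a fixed set of instance variables (a minimal sketch):
class Point(object):
    __slots__ = ('x', 'y')  # only these instance attributes may ever be set

p = Point()
p.x = 1
p.z = 3  # raises AttributeError: 'Point' object has no attribute 'z'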
A:
This problem is easily solved by specifying coding standards such as declaring all instance variables in the __init__ method of your object. This isn't really a problem with Python as much as with the programmer.
A:
If what the code is doing becomes mysterious for some reason, there should either be comments, or the function names should make it obvious.
This is just my opinion though.
A:
I personally think not having to declare variables is one of the dangerous things in Python, especially when doing classes. It is all too easy to accidentally create a variable by simple mistyping and then boggle at the code at length, unable to find the mistake.
A:
Adding a property just before you need it will prevent you from using it before it's got a value. Personally, I always find classes hard to follow just from reading source - I read the documentation and find out what it's supposed to do, and then it usually makes sense when I read the source again.
A:
The fact that such stuff is allowed is only useful in rare cases, such as prototyping. While JavaScript tends to allow almost anything (and maybe such an example could be considered normal there, I don't really know), in Python this is mostly a negative byproduct of the omission of type declarations, which otherwise helps speed up development. If you at some point change your mind on the type of a variable, fixing type declarations can take more time than the fixes to the actual code: for example when renaming a type, or when switching to a different type that has some similar methods but no superclass/subclass relationship.
| Is late binding consistent with the philosophy of "readability counts"? | I am sorry all - I am not here to blame Python. This is just a reflection on whether what I believe is right. Being a Python devotee for two years, I have been writing only small apps and singing Python's praises wherever I go. I recently had the chance to read Django's code, and have started wondering if Python really follows its "readability counts" philosophy. For example,
class A:
a = 10
b = "Madhu"
def somemethod(self, arg1):
self.c = 20.22
d = "some local variable"
# do something
....
...
def somemethod2 (self, arg2):
self.c = "Changed the variable"
# do something 2
...
It's difficult to track the flow of code in situations where the instance variables are created upon use (i.e. self.c in the above snippet). It's not possible to see which instance variables are defined when reading a substantial amount of code written in this manner. It becomes very frustrating even when reading a class with just 6-8 methods and not more than 100-150 lines of code.
I am interested in knowing if my reading of this code is skewed by C++/Java style, since most other languages follow the same approach as them. Is there a Pythonic way of reading this code more fluently? What made Python developers adopt this strategy keeping "readability counts" in mind?
| [
"The code fragment you present is fairly atypical (which might also because you probably made it up):\n\nyou wouldn't normally have an instance variable (self.c) that is a floating point number at some point, and a string at a different point. It should be either a number or a string all the time.\nyou normally don't bring instance variables into life in an arbitrary method. Instead, you typically have a constructor (__init__) that initializes all variables.\nyou typically don't have instance variables named a, b, c. Instead, they have some speaking names.\n\nWith these fixed, your example would be much more readable.\n",
"A sufficiently talented miscreant can write unreadable code in any language. Python attempts to impose some rules on structure and naming to nudge coders in the right direction, but there's no way to force such a thing.\nFor what it's worth, I try to limit the scope of local variables to the area where they're used in every language that i use - for me, not having to maintain a huge mental dictionary makes re-familiarizing myself with a bit of code much, much easier.\n",
"I agree that what you have seen can be confusing and ought to be accompanied by documentation. But confusing things can happen in any language.\nIn your own code, you should apply whatever conventions make things easiest for you to maintain the code. With respect to this particular issue, there are a number of possible things that can help.\n\nUsing something like Epydoc, you can specify all the instance variables a class will have. Be scrupulous about documenting your code, and be equally scrupulous about ensuring that your code and your documentation remain in sync.\nAdopt coding conventions that encourage the kind of code you find easiest to maintain. There's nothing better than setting a good example.\nKeep your classes and functions small and well-defined. If they get too big, break them up. It's easier to figure out what's going on that way.\nIf you really want to insist that instance variables be declared before referenced, there are some metaclass tricks you can use. e.g., You can create a common base class that, using metaclass logic, enforces the convention that only variables that are declared when the subclass is declared can later be set.\n\n",
"This problem is easily solved by specifying coding standards such as declaring all instance variables in the init method of your object. This isn't really a problem with python as much as the programmer.\n",
"If what the code is doing becomes mysterious for some reason .. there should either be comments or the function names should make it obvious.\nThis is just my opinion though.\n",
"I personally think not having to declare variables is one of the dangerous things in Python, especially when doing classes. It is all too easy to accidentally create a variable by simple mistyping and then boggle at the code at length, unable to find the mistake.\n",
"Adding a property just before you need it will prevent you from using it before it's got a value. Personally, I always find classes hard to follow just from reading source - I read the documentation and find out what it's supposed to do, and then it usually makes sense when I read the source again.\n",
"The fact that such stuff is allowed is only useful in rare times for prototyping; while Javascript tends to allow anything and maybe such an example could be considered normal (I don't really know), in Python this is mostly a negative byproduct of omission of type declaration, which can help speeding up development - if you at some point change your mind on the type of a variable, fixing type declarations can take more time than the fixes to actual code, in some cases, including the renaming of a type, but also cases where you use a different type with some similar methods and no superclass/subclass relationship.\n"
] | [
14,
5,
3,
3,
2,
2,
2,
0
] | [] | [] | [
"python",
"readability"
] | stackoverflow_0000433662_python_readability.txt |
Q:
extracting stream from pdf in python
How can I extract the part of this stream (the one named BLABLABLA) from the PDF file which contains it?
<</Contents 583 0 R/CropBox[0 0 595.22 842]/MediaBox[0 0 595.22 842]/Parent 29 0 /Resources<</ColorSpace<</CS0 563 0 R>>/ExtGState<</GS0 568 0 R>>/Font<</TT0 559 0 R/TT1 560 0 R/TT2 561 0 R/TT3 562 0 R>>/ProcSet[/PDF/Text/ImageC]/Properties<</MC0<</BLABLABLA 584 0 R>>/MC1<</SubKey 582 0 R>>>>/XObject<</Im0 578 0 R>>>>/Rotate 0/StructParents 0/Type/Page>>
Or, in other words, how can I extract a subkey from a PDF stream?
I would like to use a Python library (like pyPdf or ReportLab), but a C/C++ lib would work for me too.
Can anyone help me?
A:
IIUC, a stream in a PDF is just a sequence of binary data. I think you are wanting to extract part of an object. Are you wanting a standard object, like an image or text? It would be a lot easier to give you example code if there was a real example.
This might help get you started:
import pyPdf
pdf = pyPdf.PdfFileReader(open("pdffile.pdf"))
list(pdf.pages) # Process all the objects.
print pdf.resolvedObjects
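For reference, a hedged follow-up sketch that walks to the named property using the key names from your posted page dictionary (/MC0 and /BLABLABLA); in pyPdf, indirect references resolve via getObject() and stream objects expose getData():
page = pdf.getPage(0)
props = page["/Resources"]["/Properties"]
obj = props["/MC0"]["/BLABLABLA"].getObject()
print obj.getData()  # raw decoded bytes, assuming object 584 is a stream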
| extracting stream from pdf in python | How can I extract the part of this stream (the one named BLABLABLA) from the PDF file which contains it?
<</Contents 583 0 R/CropBox[0 0 595.22 842]/MediaBox[0 0 595.22 842]/Parent 29 0 /Resources<</ColorSpace<</CS0 563 0 R>>/ExtGState<</GS0 568 0 R>>/Font<</TT0 559 0 R/TT1 560 0 R/TT2 561 0 R/TT3 562 0 R>>/ProcSet[/PDF/Text/ImageC]/Properties<</MC0<</BLABLABLA 584 0 R>>/MC1<</SubKey 582 0 R>>>>/XObject<</Im0 578 0 R>>>>/Rotate 0/StructParents 0/Type/Page>>
Or, in other words, how can I extract a subkey from a PDF stream?
I would like to use a Python library (like pyPdf or ReportLab), but a C/C++ lib would work for me too.
Can anyone help me?
| [
"IIUC, a stream in a PDF is just a sequence of binary data. I think you are wanting to extract part of an object. Are you wanting a standard object, like an image or text? It would be a lot easier to give you example code if there was a real example.\nThis might help get you started:\nimport pyPdf\npdf = pyPdf.PdfFileReader(open(\"pdffile.pdf\"))\nlist(pdf.pages) # Process all the objects.\nprint pdf.resolvedObjects\n\n"
] | [
1
] | [] | [] | [
"pdf",
"pypdf",
"python",
"reportlab",
"stream"
] | stackoverflow_0000429437_pdf_pypdf_python_reportlab_stream.txt |
Q:
TKinter windows do not appear when using multiprocessing on Linux
I want to spawn another process to display an error message asynchronously while the rest of the application continues.
I'm using the multiprocessing module in Python 2.6 to create the process and I'm trying to display the window with TKinter.
This code worked okay on Windows, but running it on Linux the TKinter window does not appear if I call 'showerror("MyApp Error", "Something bad happened.")'. It does appear if I run it in the same process by calling showerrorprocess directly. Given this, it seems TKinter is working properly. I can print to the console and do other things from processes spawned by multiprocessing, so it seems to be working too.
They just don't seem to work together. Do I need to do something special to allow spawned subprocesses to create windows?
from multiprocessing import Process
from Tkinter import Tk, Text, END, BOTH, DISABLED
import sys
import traceback
def showerrorprocess(title,text):
"""Pop up a window with the given title and text. The
text will be selectable (so you can copy it to the
clipboard) but not editable. Returns when the
window is closed."""
root = Tk()
root.title(title)
text_box = Text(root,width=80,height=15)
text_box.pack(fill=BOTH)
text_box.insert(END,text)
text_box.config(state=DISABLED)
def quit():
root.destroy()
root.quit()
root.protocol("WM_DELETE_WINDOW", quit)
root.mainloop()
def showerror(title,text):
"""Pop up a window with the given title and text. The
text will be selectable (so you can copy it to the
clipboard) but not editable. Runs asynchronously in
a new child process."""
process = Process(target=showerrorprocess,args=(title,text))
process.start()
Edit
The issue seems to be that TKinter was imported by the parent process, and "inherited" into the child process, but somehow its state is inextricably linked to the parent process and it cannot work in the child. So long as you make sure not to import TKinter before you spawn the child process, it will work because then it is the child process that is importing it for the first time.
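A minimal sketch of that fix, keeping the rest of the code above unchanged: move the Tkinter import out of module scope and into the function that runs in the child.
def showerrorprocess(title, text):
    # Imported here, not at module level: the child process then
    # initializes Tkinter itself instead of inheriting parent state.
    from Tkinter import Tk, Text, END, BOTH, DISABLED
    root = Tk()
    # ... rest unchanged ...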
A:
This discussion could be helpful.
Here's some sample problems I found:
While the multiprocessing module follows threading closely, it's definitely not an exact match. One example: since parameters to a
process must be pickleable, I had to go through a lot of code
changes to avoid passing Tkinter objects since these aren't
pickleable. This doesn't occur with the threading module.
process.terminate() doesn't really work after the first attempt. The second or third attempt simply hangs the interpreter, probably
because data structures are corrupted (mentioned in the API, but this
is little consolation).
A:
Maybe calling the shell command xhost + before calling your program from that same shell will work?
I am guessing your problem lies with the X-server.
| TKinter windows do not appear when using multiprocessing on Linux | I want to spawn another process to display an error message asynchronously while the rest of the application continues.
I'm using the multiprocessing module in Python 2.6 to create the process and I'm trying to display the window with TKinter.
This code worked okay on Windows, but running it on Linux the TKinter window does not appear if I call 'showerror("MyApp Error", "Something bad happened.")'. It does appear if I run it in the same process by calling showerrorprocess directly. Given this, it seems TKinter is working properly. I can print to the console and do other things from processes spawned by multiprocessing, so it seems to be working too.
They just don't seem to work together. Do I need to do something special to allow spawned subprocesses to create windows?
from multiprocessing import Process
from Tkinter import Tk, Text, END, BOTH, DISABLED
import sys
import traceback
def showerrorprocess(title,text):
"""Pop up a window with the given title and text. The
text will be selectable (so you can copy it to the
clipboard) but not editable. Returns when the
window is closed."""
root = Tk()
root.title(title)
text_box = Text(root,width=80,height=15)
text_box.pack(fill=BOTH)
text_box.insert(END,text)
text_box.config(state=DISABLED)
def quit():
root.destroy()
root.quit()
root.protocol("WM_DELETE_WINDOW", quit)
root.mainloop()
def showerror(title,text):
"""Pop up a window with the given title and text. The
text will be selectable (so you can copy it to the
clipboard) but not editable. Runs asynchronously in
a new child process."""
process = Process(target=showerrorprocess,args=(title,text))
process.start()
Edit
The issue seems to be that TKinter was imported by the parent process, and "inherited" into the child process, but somehow its state is inextricably linked to the parent process and it cannot work in the child. So long as you make sure not to import TKinter before you spawn the child process, it will work because then it is the child process that is importing it for the first time.
| [
"This discussion could be helpful.\n\nHere's some sample problems I found: \n\nWhile the multiprocessing module follows threading closely, it's definitely not an exact match. One example: since parameters to a\n process must be pickleable, I had to go through a lot of code\n changes to avoid passing Tkinter objects since these aren't\n pickleable. This doesn't occur with the threading module. \nprocess.terminate() doesn't really work after the first attempt. The second or third attempt simply hangs the interpreter, probably\n because data structures are corrupted (mentioned in the API, but this\n is little consolation).\n\n\n",
"Maybe calling the shell command xhost + before calling your program from that same shell will work?\nI am guessing your problem lies with the X-server. \n"
] | [
4,
0
] | [] | [] | [
"linux",
"multiprocessing",
"python",
"tkinter"
] | stackoverflow_0000410469_linux_multiprocessing_python_tkinter.txt |
Q:
Analyse python list with algorithm for counting occurrences over date ranges
The following shows the structure of some data I have (format: a list of lists)
data =
[
[1,2008-12-01],
[1,2008-12-01],
[2,2008-12-01]
... (the lists continue)
]
The dates range from 2008-12-01 to 2008-12-25.
The first field identifies a user by id, the second field (a date field) shows when this user visited a page on my site.
I need to analyse this data so that i get the following results
25 users visited on 1 day
100 users visited on 2 days
300 users visited on 4 days
... up to 25 days
I am using python and don't know where to start !
EDIT
I'm sorry, it seems I wasn't clear enough about what I needed, as a few people have given answers that are not what I'm looking for.
I need to find out how many users visited on all the days e.g.
10 users visited on 25 days (or every day)
Then I'd like to list the same for each frequency of days from 1 - 25. So as per my original example above
25 users visited for only one day (out of the 25)
100 users visited on 2 days (out of the 25)
etc
I DON'T need to know how many visited on each day
thanks
A:
Your result is a dictionary, right?
{ userNumber: setOfDays }
How about this to get started.
from collections import defaultdict
visits = defaultdict(set)
for user, date in someList:
visits[user].add(date)
This gives you a dictionary with a set of dates on which they visited.
counts = defaultdict(int)
for user in visits:
    v = len(visits[user])
    counts[v] += 1
This gives you a dictionary of # visits, # of users with that many visits.
Is that the kind of thing you're looking for?
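To print the report in the shape the question asks for, you could then walk the counts dict (a small sketch building on the code above):
for ndays in sorted(counts):
    print "%d users visited on %d days" % (counts[ndays], ndays)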
A:
Rewriting S.Lott's answer in SQL as an exercise, just to check that I got the requirements right...
SELECT * FROM someList;
userid | date
--------+------------
1 | 2008-12-01
1 | 2008-12-02
1 | 2008-12-03
1 | 2008-12-04
1 | 2008-12-05
2 | 2008-12-03
2 | 2008-12-04
2 | 2008-12-05
3 | 2008-12-04
4 | 2008-12-04
5 | 2008-12-05
5 | 2008-12-05
SELECT countdates, COUNT(userid) AS nusers
FROM ( SELECT userid, COUNT (DISTINCT date) AS countdates
FROM someList
GROUP BY userid ) AS visits
GROUP BY countdates
HAVING countdates <= 25
ORDER BY countdates;
countdates | nusers
------------+--------
1 | 3
3 | 1
5 | 1
A:
This is probably not the most pythonic or efficient or smartest or whatever way of doing this. But maybe you can confirm if I've understood the requirements correctly:
>>> log=[[1, '2008-12-01'], [1, '2008-12-01'],[2, '2008-12-01'],[2, '2008-12-03'], [1, '2008-12-04'], [3, '2008-12-04'], [4, '2008-12-04']]
>>> all_dates = sorted(set([d for d in [x[1] for x in log]]))
>>> for i in range(0, len(all_dates)):
... log_slice = [d for d in log if d[1] <= all_dates[i]]
... num_users = len(set([u for u in [x[0] for x in log_slice]]))
... print "%d users visited in %d days" % (num_users, i + 1)
...
2 users visited in 1 days
2 users visited in 2 days
4 users visited in 3 days
>>>
A:
First, I should mention that you NEED to store the date as a string. As written, 2008-12-01 is evaluated as integer arithmetic (2008 minus 12 minus 1). So, if you format data like this, it will work better:
data =
[
[1,"2008-12-01"],
[1,"2008-12-01"],
[2,"2008-12-01"]
]
Next, we can do something like this to get the number for each day:
result = {}
for (id, date) in data:
if date not in result:
result[date] = 1
else:
result[date] += 1
Now you can get the number of users for a specific date by doing something like this:
print result[some_date]
A:
It is unclear what exactly your requirements are. Here's my take:
#!/usr/bin/env python
from collections import defaultdict
data = [
[1,'2008-12-01'],
[3,'2008-12-25'],
[1,'2008-12-01'],
[2,'2008-12-01'],
]
d = defaultdict(set)
for id, day in data:
d[day].add(id)
for day in sorted(d):
print('%d user(s) visited on %s' % (len(d[day]), day))
It prints:
2 user(s) visited on 2008-12-01
1 user(s) visited on 2008-12-25
A:
How about this: this gives you set of days as well as count:
In [39]: from itertools import groupby ##itertools is a part of the standard library.
In [40]: l=[[1, '2008-12-01'],
....: [1, '2008-12-01'],
....: [2, '2008-12-01'],
....: [1, '2008-12-01'],
....: [3, '3008-12-04']]
In [41]: l.sort()
In [42]: l
Out[42]:
[[1, '2008-12-01'],
[1, '2008-12-01'],
[1, '2008-12-01'],
[2, '2008-12-01'],
[3, '3008-12-04']]
In [43]: for key, group in groupby(l, lambda x: x[0]):
....: group=list(group)
....: print key,' :: ', len(group), ' :: ', group
....:
....:
1 :: 3 :: [[1, '2008-12-01'], [1, '2008-12-01'], [1, '2008-12-01']]
2 :: 1 :: [[2, '2008-12-01']]
3 :: 1 :: [[3, '3008-12-04']]
user::number of visits :: visit dates
Here user 1 visits on 2008-12-01 three times; if you are looking to count only distinct dates, then:
for key, group in groupby(l, lambda x: x[0]):
group=list(group)
print key,' :: ', len(set([(lambda y: y[1])(each) for each in group])), ' :: ', group
....:
....:
1 :: 1 :: [[1, '2008-12-01'], [1, '2008-12-01'], [1, '2008-12-01']]
2 :: 1 :: [[2, '2008-12-01']]
3 :: 1 :: [[3, '3008-12-04']]
] | Analyse python list with algorithm for counting occurrences over date ranges | The following shows the structure of some data I have (format: a list of lists)
data =
[
[1,2008-12-01],
[1,2008-12-01],
[2,2008-12-01]
... (the lists continue)
]
The dates range from 2008-12-01 to 2008-12-25.
The first field identifies a user by id, the second field (a date field) shows when this user visited a page on my site.
I need to analyse this data so that i get the following results
25 users visited on 1 day
100 users visited on 2 days
300 users visited on 4 days
... up to 25 days
I am using python and don't know where to start !
EDIT
I'm sorry, it seems I wasn't clear enough about what I needed, as a few people have given answers that are not what I'm looking for.
I need to find out how many users visited on all the days e.g.
10 users visited on 25 days (or every day)
Then I'd like to list the same for each frequency of days from 1 - 25. So as per my original example above
25 users visited for only one day (out of the 25)
100 users visited on 2 days (out of the 25)
etc
I DON'T need to know how many visited on each day
thanks
| [
"Your result is a dictionary, right?\n{ userNumber: setOfDays }\n\nHow about this to get started.\nfrom collections import defaultdict\nvisits = defaultdict(set)\nfor user, date in someList:\n visits[user].add(date)\n\nThis gives you a dictionary with a set of dates on which they visited. \ncounts = defaultdict(int)\nfor user in visits:\n v= len(visits[user])\n count[v] += 1\n\nThis gives you a dictionary of # visits, # of users with that many visits.\nIs that the kind of thing you're looking for?\n",
"Rewriting S.Lott's answer in SQL as an exercise, just to check that I got the requirements right...\nSELECT * FROM someList;\n\n userid | date \n--------+------------\n 1 | 2008-12-01\n 1 | 2008-12-02\n 1 | 2008-12-03\n 1 | 2008-12-04\n 1 | 2008-12-05\n 2 | 2008-12-03\n 2 | 2008-12-04\n 2 | 2008-12-05\n 3 | 2008-12-04\n 4 | 2008-12-04\n 5 | 2008-12-05\n 5 | 2008-12-05\n\nSELECT countdates, COUNT(userid) AS nusers\nFROM ( SELECT userid, COUNT (DISTINCT date) AS countdates\n FROM someList\n GROUP BY userid ) AS visits\nGROUP BY countdates\nHAVING countdates <= 25\nORDER BY countdates;\n\n countdates | nusers \n------------+--------\n 1 | 3\n 3 | 1\n 5 | 1\n\n",
"This is probably not the most pythonic or efficient or smartest or whatever way of doing this. But maybe you can confirm if I've understood the requirements correctly:\n>>> log=[[1, '2008-12-01'], [1, '2008-12-01'],[2, '2008-12-01'],[2, '2008-12-03'], [1, '2008-12-04'], [3, '2008-12-04'], [4, '2008-12-04']]\n>>> all_dates = sorted(set([d for d in [x[1] for x in log]]))\n>>> for i in range(0, len(all_dates)):\n... log_slice = [d for d in log if d[1] <= all_dates[i]]\n... num_users = len(set([u for u in [x[0] for x in log_slice]]))\n... print \"%d users visited in %d days\" % (num_users, i + 1)\n... \n2 users visited in 1 days\n2 users visited in 2 days\n4 users visited in 3 days\n>>> \n\n",
"First, I should mention that you NEED to store the date as a string. Currently, it would do arithmetic on your current entry. So, if you format data like this, it will work better:\ndata = \n[ \n [1,\"2008-12-01\"],\n [1,\"2008-12-01\"],\n [2,\"2008-12-01\"]\n]\n\nNext, we can do something like this to get the number for each day:\nresult = {}\nfor (id, date) in data:\n if date not in result:\n result[date] = 1\n else:\n result[date] += 1\n\nNow you can get the number of users for a specific date by doing something like this:\nprint result[some_date]\n\n",
"It is unclear what exactly your requirement are. Here's my take:\n#!/usr/bin/env python\nfrom collections import defaultdict\n\ndata = [ \n [1,'2008-12-01'],\n [3,'2008-12-25'],\n [1,'2008-12-01'],\n [2,'2008-12-01'],\n]\n\nd = defaultdict(set)\nfor id, day in data:\n d[day].add(id)\n\nfor day in sorted(d):\n print('%d user(s) visited on %s' % (len(d[day]), day))\n\nIt prints:\n2 user(s) visited on 2008-12-01\n1 user(s) visited on 2008-12-25\n\n",
"How about this: this gives you set of days as well as count:\nIn [39]: from itertools import groupby ##itertools is a part of the standard library.\n\nIn [40]: l=[[1, '2008-12-01'],\n ....: [1, '2008-12-01'],\n ....: [2, '2008-12-01'],\n ....: [1, '2008-12-01'],\n ....: [3, '3008-12-04']]\n\nIn [41]: l.sort()\n\nIn [42]: l\nOut[42]: \n[[1, '2008-12-01'],\n [1, '2008-12-01'],\n [1, '2008-12-01'],\n [2, '2008-12-01'],\n [3, '3008-12-04']]\n\nIn [43]: for key, group in groupby(l, lambda x: x[0]):\n ....: group=list(group)\n ....: print key,' :: ', len(group), ' :: ', group\n ....: \n ....: \n1 :: 3 :: [[1, '2008-12-01'], [1, '2008-12-01'], [1, '2008-12-01']]\n2 :: 1 :: [[2, '2008-12-01']]\n3 :: 1 :: [[3, '3008-12-04']]\n\nuser::number of visits :: visit dates\nHere the user -1 visits on 2008-12-01 3 times, if you are looking to count only distinct dates then\nfor key, group in groupby(l, lambda x: x[0]):\n group=list(group)\n print key,' :: ', len(set([(lambda y: y[1])(each) for each in group])), ' :: ', group\n ....: \n ....: \n1 :: 1 :: [[1, '2008-12-01'], [1, '2008-12-01'], [1, '2008-12-01']]\n2 :: 1 :: [[2, '2008-12-01']]\n3 :: 1 :: [[3, '3008-12-04']]\n\n"
] | [
4,
1,
1,
0,
0,
0
] | [] | [] | [
"algorithm",
"python"
] | stackoverflow_0000433669_algorithm_python.txt |
Q:
What is the easiest way to export data from a live Google App Engine application?
I'm especially interested in solutions with source code available (independence from Django is a plus, but I'm willing to hack my way through)
A:
You can, of course, write your own handler. Other than that, your options currently are limited to:
gae-rest, which provides a RESTful interface to the datastore.
approcket, a tool for replicating between MySQL and App Engine.
The amusingly named GAEBAR - Google App Engine Backup and Restore.
A:
Update: New version of Google AppEngine supports data import to and export from the online application natively. In their terms this is called upload_data and download_data respectively (names of subcommands of appcfg.py).
Please refer to the Google documentation on how to export and import data from/to GAE. This is probably the better way to do it today.
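For illustration, a hedged sketch of the download side (the app id, kind, and paths here are hypothetical, and flags have changed between SDK versions, so verify against the documentation):
appcfg.py download_data --url=http://myapp.appspot.com/remote_api --filename=data.csv --kind=MyModel myapp/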
My old answer is below:
I use to_xml() method of the Model class to export the datastore.
class XmlExport(webapp.RequestHandler):
def get(self):
objects=MyModel.all().fetch(1000)
xml='<?xml version="1.0" encoding="UTF-8"?>\n<site>\n'
for o in objects:
xml = xml + o.to_xml()
xml = xml + '</site>'
self.response.headers['Content-Type']='text/xml; charset=utf-8'
self.response.out.write(xml)
| What is the easiest way to export data from a live Google App Engine application? | I'm especially interested in solutions with source code available (Django independency is a plus, but I'm willing to hack my way through)
| [
"You can, of course, write your own handler. Other than that, your options currently are limited to:\n\ngae-rest, which provides a RESTful interface to the datastore.\napprocket, a tool for replicating between MySQL and App Engine.\nThe amusingly named GAEBAR - Google App Engine Backup and Restore.\n\n",
"Update: New version of Google AppEngine supports data import to and export from the online application natively. In their terms this is called upload_data and download_data respectively (names of subcommands of appcfg.py).\nPlease refer to Google documentation how to export and import data from/to GAE. This is probably the better way to do it today.\nMy old answer is below:\n\nI use to_xml() method of the Model class to export the datastore.\nclass XmlExport(webapp.RequestHandler):\n def get(self):\n objects=MyModel.all().fetch(1000)\n xml='<?xml version=\"1.0\" encoding=\"UTF-8\"?>\\n<site>\\n'\n for o in objects:\n xml = xml + o.to_xml()\n xml = xml + '</site>'\n self.response.headers['Content-Type']='text/xml; charset=utf-8'\n self.response.out.write(xml)\n\n"
] | [
6,
3
] | [] | [] | [
"frameworks",
"google_app_engine",
"python"
] | stackoverflow_0000426820_frameworks_google_app_engine_python.txt |
Q:
How do I install a Python extension module using distutils?
I'm working on a Python package named "lehmer" that includes a bunch of extension modules written in C. Currently, I have a single extension module, "rng". I am using Python's Distutils to build and install the module. I can compile and install the module, but when I try to import the module using import lehmer.rng or from lehmer import rng, the Python interpreter throws an ImportError exception. I can import "lehmer" fine.
Here are the contents of my setup.py file:
from distutils.core import setup, Extension
exts = [Extension("rng", ["lehmer/rng.c"])]
setup(name="lehmer",
version="0.1",
description="A Lehmer random number generator",
author="Steve Park, Dave Geyer, and Michael Dippery",
maintainer="Michael Dippery",
maintainer_email="mpd@cs.wm.edu",
packages=["lehmer"],
ext_package="lehmer",
ext_modules=exts)
When I list the contents of Python's site-packages directory, I see the following:
th107c-4 lehmer $ ls /scratch/usr/lib64/python2.5/site-packages/lehmer
__init__.py __init__.pyc rng.so*
My PYTHONPATH environment variable is set correctly, so that's not the problem (and as noted before, I can import lehmer just fine, so I know that PYTHONPATH is not the issue). Python uses the following search paths (as reported by sys.path):
['', '/scratch/usr/lib64/python2.5/site-packages', '/usr/lib/python25.zip', '/usr/lib64/python2.5', '/usr/lib64/python2.5/plat-linux2', '/usr/lib64/python2.5/lib-tk', '/usr/lib64/python2.5/lib-dynload', '/usr/lib64/python2.5/site-packages', '/usr/lib64/python2.5/site-packages/Numeric', '/usr/lib64/python2.5/site-packages/PIL', '/usr/lib64/python2.5/site-packages/SaX', '/usr/lib64/python2.5/site-packages/gtk-2.0', '/usr/lib64/python2.5/site-packages/wx-2.8-gtk2-unicode', '/usr/local/lib64/python2.5/site-packages']
Update
It works when used on an OpenSUSE 10 box, but the C extensions still fail to load when tested on Mac OS X. Here are the results from the Python interpreter:
>>> sys.path
['', '/usr/local/lib/python2.5/site-packages', '/opt/local/lib/python25.zip', '/opt/local/lib/python2.5', '/opt/local/lib/python2.5/plat-darwin', '/opt/local/lib/python2.5/plat-mac', '/opt/local/lib/python2.5/plat-mac/lib-scriptpackages', '/opt/local/lib/python2.5/lib-tk', '/opt/local/lib/python2.5/lib-dynload', '/opt/local/lib/python2.5/site-packages']
>>> from lehmer import rng
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name rng
>>> import lehmer.rngs
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named rngs
>>> import lehmer.rng
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named rng
>>> from lehmer import rngs
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name rngs
A:
For the record (and because I am tired of seeing this marked as unanswered), here were the problems:
Since the current directory is automatically added to the Python packages path, the interpreter was first looking in the current directory for packages; since some C modules were not compiled in the current directory, the interpreter couldn't find them. Solution: Don't launch the interpreter from the same directory in which your working copy of the code is stored.
Distutils did not install the module with the correct permissions on OS X. Solution: Fix the permissions.
| How do I install a Python extension module using distutils? | I'm working on a Python package named "lehmer" that includes a bunch of extension modules written in C. Currently, I have a single extension module, "rng". I am using Python's Distutils to build and install the module. I can compile and install the module, but when I try to import the module using import lehmer.rng or from lehmer import rng, the Python interpreter throws an ImportError exception. I can import "lehmer" fine.
Here are the contents of my setup.py file:
from distutils.core import setup, Extension
exts = [Extension("rng", ["lehmer/rng.c"])]
setup(name="lehmer",
version="0.1",
description="A Lehmer random number generator",
author="Steve Park, Dave Geyer, and Michael Dippery",
maintainer="Michael Dippery",
maintainer_email="mpd@cs.wm.edu",
packages=["lehmer"],
ext_package="lehmer",
ext_modules=exts)
When I list the contents of Python's site-packages directory, I see the following:
th107c-4 lehmer $ ls /scratch/usr/lib64/python2.5/site-packages/lehmer
__init__.py __init__.pyc rng.so*
My PYTHONPATH environment variable is set correctly, so that's not the problem (and as noted before, I can import lehmer just fine, so I know that PYTHONPATH is not the issue). Python uses the following search paths (as reported by sys.path):
['', '/scratch/usr/lib64/python2.5/site-packages', '/usr/lib/python25.zip', '/usr/lib64/python2.5', '/usr/lib64/python2.5/plat-linux2', '/usr/lib64/python2.5/lib-tk', '/usr/lib64/python2.5/lib-dynload', '/usr/lib64/python2.5/site-packages', '/usr/lib64/python2.5/site-packages/Numeric', '/usr/lib64/python2.5/site-packages/PIL', '/usr/lib64/python2.5/site-packages/SaX', '/usr/lib64/python2.5/site-packages/gtk-2.0', '/usr/lib64/python2.5/site-packages/wx-2.8-gtk2-unicode', '/usr/local/lib64/python2.5/site-packages']
Update
It works when used on an OpenSUSE 10 box, but the C extensions still fail to load when tested on Mac OS X. Here are the results from the Python interpreter:
>>> sys.path
['', '/usr/local/lib/python2.5/site-packages', '/opt/local/lib/python25.zip', '/opt/local/lib/python2.5', '/opt/local/lib/python2.5/plat-darwin', '/opt/local/lib/python2.5/plat-mac', '/opt/local/lib/python2.5/plat-mac/lib-scriptpackages', '/opt/local/lib/python2.5/lib-tk', '/opt/local/lib/python2.5/lib-dynload', '/opt/local/lib/python2.5/site-packages']
>>> from lehmer import rng
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name rng
>>> import lehmer.rngs
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named rngs
>>> import lehmer.rng
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named rng
>>> from lehmer import rngs
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name rngs
| [
"For the record (and because I am tired of seeing this marked as unanswered), here were the problems:\n\nSince the current directory is automatically added to the Python packages path, the interpreter was first looking in the current directory for packages; since some C modules were not compiled in the current directory, the interpreter couldn't find them. Solution: Don't launch the interpreter from the same directory in which your working copy of the code is stored.\nDistutils did not install the module with the correct permissions on OS X. Solution: Fix the permissions.\n\n"
] | [
4
] | [] | [] | [
"distutils",
"module",
"python"
] | stackoverflow_0000302867_distutils_module_python.txt |
Q:
Ensuring contact form email isn't lost (python)
I have a website with a contact form. User submits name, email and message and the site emails me the details.
Very occasionally my server has a problem with its email system and so the user gets an error and those contact details are lost. (Don't say: get a better server; any server can have email go down now and then, and we do get a lot of submissions.)
I would like to implement a system that could store the user's details if the mail sending function returns with an error code. Then on every further submission, check for any stored submissions and try to send them off to me.
But how to store the data?
I'm using python so I thought about using shelve (single file semi-database). Or maybe someone could suggest a better data format? (I do think a full database solution would be overkill.)
The problem I see with a single file approach is race-conditions: two or more failed emails at the same time would cause two edits to the data file resulting in data corruption.
So what to do? Multi-file solution, file locking or something else?
A:
When we implement email sending functionality in our environment we do it in a decoupled way. So for example a user would submit their data, which would get stored in a database. We then have a separate service that runs, queries the database and sends out email. That way, if there are ever any email server issues, the service will just try again later, and data and user confidence are never lost.
A:
try sqlite. It has default python bindings in the standard library and should work for a useful level of load (or so I am told)
A:
You could, as suggested, use sqlite for this. The main question is: how many is "a lot of submissions"? If it is below a few per second this would work. Otherwise the database file will be locked all the time and you have another problem.
But you could also keep it plain, stupid and simple: write files to disk.
For every submission a file is written in a temporary directory (if the mail server is down). Then you add a few lines to the mail server startup script that reads the directory and sends out the mails. No databases, no locking problems, and if you use a directory for which a quota is set (or a ramdisk with fixed size) you shouldn't run into any problems.
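A minimal sketch of that approach, assuming a spool directory exists (the path is hypothetical; pickle is just one convenient on-disk format):
import os, time, pickle

SPOOL_DIR = "/var/spool/contact-form"

def spool_submission(details):
    # One file per failed submission sidesteps the race condition:
    # two concurrent failures simply produce two separate files.
    fname = "%f-%d.pickle" % (time.time(), os.getpid())
    f = open(os.path.join(SPOOL_DIR, fname), "wb")
    try:
        pickle.dump(details, f)
    finally:
        f.close()

def retry_spool(send_mail):
    # Call this on every later submission: retry whatever is spooled.
    for fname in os.listdir(SPOOL_DIR):
        path = os.path.join(SPOOL_DIR, fname)
        f = open(path, "rb")
        try:
            details = pickle.load(f)
        finally:
            f.close()
        send_mail(details)  # raises on failure; the file stays for next time
        os.remove(path)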
| Ensuring contact form email isn't lost (python) | I have a website with a contact form. User submits name, email and message and the site emails me the details.
Very occasionally my server has a problem with its email system and so the user gets an error and those contact details are lost. (Don't say: get a better server; any server can have email go down now and then, and we do get a lot of submissions.)
I would like to implement a system that could store the user's details if the mail sending function returns with an error code. Then on every further submission, check for any stored submissions and try to send them off to me.
But how to store the data?
I'm using python so I thought about using shelve (single file semi-database). Or maybe someone could suggest a better data format? (I do think a full database solution would be overkill.)
The problem I see with a single file approach is race-conditions: two or more failed emails at the same time would cause two edits to the data file resulting in data corruption.
So what to do? Multi-file solution, file locking or something else?
| [
"When we implement email sending functionality in our environment we do it in a decoupled way. So for example a user would submit their data which would get stored in a database. We then have a separate service that runs, queries the database and sends out email. That way if there are ever any email server issues, the service will just try again later, data and user confidence is never lost.\n",
"try sqlite. It has default python bindings in the standard library and should work for a useful level of load (or so I am told)\n",
"You could, as suggested, use sqlite for this. The main question is: How mans is \"a lot of submissions\"? I it is below of a few per second this would work. Otherwise the database file will be locked all the time and you have another problem. \nBut you could it also keep it plain, stupid and simple: write files to disk.\nFor every sumission a file is written in a temporary directory (if the mail server is down). Then you add a few lines to the mailserver startup script that reads the directory and sends out mails. No databases, no locking problems and if you use a directory for which a quota is set (or a ramdisk with fixed size) you shouldn't run into any problems.\n"
] | [
8,
4,
2
] | [] | [] | [
"data_formats",
"email",
"python",
"race_condition"
] | stackoverflow_0000436003_data_formats_email_python_race_condition.txt |
Q:
Python-Regex, what's going on here?
I've got a book on python recently and it's got a chapter on Regex, there's a section of code which I can't really understand. Can someone explain exactly what's going on here (this section is on Regex groups)?
>>> my_regex = r'(?P<zip>Zip:\s*\d\d\d\d\d)\s*(State:\s*\w\w)'
>>> addrs = "Zip: 10010 State: NY"
>>> y = re.search(my_regex, addrs)
>>> y.groupdict('zip')
{'zip': 'Zip: 10010'}
>>> y.group(2)
'State: NY'
A:
regex definition:
(?P<zip>...)
Creates a named group "zip"
Zip:\s*
Match "Zip:" and zero or more whitespace characters
\d
Match a digit
\w
Match a word character [A-Za-z0-9_]
y.groupdict('zip')
The groupdict method returns a dictionary with named groups as keys and their matches as values. In this case, the match for the "zip" group gets returned
y.group(2)
Return the match for the second group, which is an unnamed group "(...)"
Hope that helps.
A:
The search method will return an object containing the results of your regex pattern.
groupdict returns a dictionary of named groups, where the keys are the names defined by (?P<name>...). Here name is the name of the group.
group returns the match for the given group number. "State: NY" is your third group here: group(0) is the entire match, group(1) is "Zip: 10010", and group(2) is "State: NY".
This was a relatively simple question by the way. I simply looked up the method documentation on google and found this page. Google is your friend.
A:
# my_regex = r' <= this means that the string is a raw string, normally you'd need to use double backslashes
# ( ... ) this groups something
# ?P<zip> names the group: the ? here starts the (?P<name>...) extension syntax, not an optional quantifier
# * this means "as many as you can find" (zero or more)
# \s is whitespace
# \d is a digit, also works with [0-9]
# \w is an alphanumeric character
my_regex = r'(?P<zip>Zip:\s*\d\d\d\d\d)\s*(State:\s*\w\w)'
addrs = "Zip: 10010 State: NY"
# Runs the grep on the string
y = re.search(my_regex, addrs)
A:
The (?P<identifier>match) syntax is Python's way of implementing named capturing groups. That way, you can access what was matched by match using a name instead of just a sequential number.
Since the first set of parentheses is named zip, you can access its match using the match's groupdict method to get an {identifier: match} pair. Or you could use y.group('zip') if you're only interested in the match (which usually makes sense since you already know the identifier). You could also access the same match using its sequential number (1). The next match is unnamed, so the only way to access it is its number.
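For instance, with the match object from the question (interpreter sketch):
>>> y.group('zip')
'Zip: 10010'
>>> y.group(1) == y.group('zip')
True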
A:
Adding to previous answers: In my opinion you'd better choose one type of groups (named or unnamed) and stick with it. Normally I use named groups. For example:
>>> my_regex = r'(?P<zip>Zip:\s*\d\d\d\d\d)\s*(?P<state>State:\s*\w\w)'
>>> addrs = "Zip: 10010 State: NY"
>>> y = re.search(my_regex, addrs)
>>> print y.groupdict()
{'state': 'State: NY', 'zip': 'Zip: 10010'}
A:
strfriend is your friend:
http://strfriend.com/vis?re=(Zip%3A\s*\d\d\d\d\d)\s*(State%3A\s*\w\w)
EDIT: Why the heck is it making the entire line a link in the actual comment, but not the preview?
| Python-Regex, what's going on here? | I've got a book on python recently and it's got a chapter on Regex, there's a section of code which I can't really understand. Can someone explain exactly what's going on here (this section is on Regex groups)?
>>> my_regex = r'(?P<zip>Zip:\s*\d\d\d\d\d)\s*(State:\s*\w\w)'
>>> addrs = "Zip: 10010 State: NY"
>>> y = re.search(my_regex, addrs)
>>> y.groupdict('zip')
{'zip': 'Zip: 10010'}
>>> y.group(2)
'State: NY'
| [
"regex definition:\n(?P<zip>...)\n\nCreates a named group \"zip\"\nZip:\\s*\n\nMatch \"Zip:\" and zero or more whitespace characters\n\\d\n\nMatch a digit\n\\w\n\nMatch a word character [A-Za-z0-9_]\ny.groupdict('zip')\n\nThe groupdict method returns a dictionary with named groups as keys and their matches as values. In this case, the match for the \"zip\" group gets returned\ny.group(2)\n\nReturn the match for the second group, which is a unnamed group \"(...)\"\nHope that helps.\n",
"The search method will return an object containing the results of your regex pattern. \ngroupdict returns a dictionnary of groups where the keys are the name of the groups defined by (?P...). Here name is a name for the group.\ngroup returns a list of groups that are matched. \"State: NY\" is your third group. The first is the entire string and the second is \"Zip: 10010\".\nThis was a relatively simple question by the way. I simply looked up the method documentation on google and found this page. Google is your friend.\n",
"# my_regex = r' <= this means that the string is a raw string, normally you'd need to use double backslashes\n# ( ... ) this groups something\n# ? this means that the previous bit was optional, why it's just after a group bracket I know not\n# * this means \"as many of as you can find\"\n# \\s is whitespace\n# \\d is a digit, also works with [0-9]\n# \\w is an alphanumeric character\nmy_regex = r'(?P<zip>Zip:\\s*\\d\\d\\d\\d\\d)\\s*(State:\\s*\\w\\w)'\naddrs = \"Zip: 10010 State: NY\"\n\n# Runs the grep on the string\ny = re.search(my_regex, addrs)\n\n",
"The (?P<identifier>match) syntax is Python's way of implementing named capturing groups. That way, you can access what was matched by match using a name instead of just a sequential number.\nSince the first set of parentheses is named zip, you can access its match using the match's groupdict method to get an {identifier: match} pair. Or you could use y.group('zip') if you're only interested in the match (which usually makes sense since you already know the identifier). You could also access the same match using its sequential number (1). The next match is unnamed, so the only way to access it is its number.\n",
"Adding to previous answers: In my opinion you'd better choose one type of groups (named or unnamed) and stick with it. Normally I use named groups. For example: \n>>> my_regex = r'(?P<zip>Zip:\\s*\\d\\d\\d\\d\\d)\\s*(?P<state>State:\\s*\\w\\w)'\n>>> addrs = \"Zip: 10010 State: NY\"\n>>> y = re.search(my_regex, addrs)\n>>> print y.groupdict()\n{'state': 'State: NY', 'zip': 'Zip: 10010'}\n\n",
"strfriend is your friend:\nhttp://strfriend.com/vis?re=(Zip%3A\\s*\\d\\d\\d\\d\\d)\\s*(State%3A\\s*\\w\\w)\nEDIT: Why the heck is it making the entire line a link in the actual comment, but not the preview?\n"
] | [
8,
2,
1,
0,
0,
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0000433388_python_regex.txt |
Q:
What should "value_from_datadict" method of a custom form widget return?
I'm trying to build my own custom Django form widgets (putting them in widgets.py of my project directory). What should the "value_from_datadict()" method return? Does it return a string or the actual expected value of the field?
I'm building my own version of a split date/time widget using JQuery objects, what should each part of the widget return? Should the date widget return a datetime and the time widget return a datetime? What glue code merges the two values together?
A:
For value_from_datadict() you want to return the value you expect or None. The source in django/forms/widgets.py provides some examples.
But you should be able to build a DatePicker widget by just providing a render method:
DATE_FORMAT = '%m/%d/%y'
class DatePickerWidget(widgets.Widget):
def render(self, name, value, attrs=None):
if value is None:
vstr = ''
elif hasattr(value, 'strftime'):
vstr = datetime_safe.new_datetime(value).strftime(DATE_FORMAT)
else:
vstr = value
id = "id_%s" % name
args = [
"<input type=\"text\" value=\"%s\" name=\"%s\" id=\"%s\" />" % \
(vstr, name, id),
"<script type=\"text/javascript\">$(\"#%s\").datepicker({dateFormat:'mm/dd/y'});</script>" % id
]
return mark_safe("\n".join(args))
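To answer the glue-code part of the question directly, here is a hedged sketch of a split widget's value_from_datadict (the "_0"/"_1" name suffixes mirror the convention Django's MultiWidget uses, and render() would have to emit two inputs named accordingly):
class SplitDateTimeWidget(widgets.Widget):
    def value_from_datadict(self, data, files, name):
        # Merge the two posted parts back into one string; the form
        # field then parses that into a datetime. Return None if absent.
        d = data.get(name + '_0')
        t = data.get(name + '_1')
        if d and t:
            return '%s %s' % (d, t)
        return None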
A:
The Django source says
Given a
dictionary of data and this widget's
name, returns the value of this
widget. Returns None if it's not
provided.
Reading the code, I see that Django's separate Date and Time widgets are both subclasses of Input, subclasses of Widget, which appears to work with simple Strings.
| What should "value_from_datadict" method of a custom form widget return? | I'm trying to build my own custom django form widgets (putting them in widgets.py of my project directory). What should the value "value_from_datadict()" return? Is it returning a string or the actual expected value of the field?
I'm building my own version of a split date/time widget using JQuery objects, what should each part of the widget return? Should the date widget return a datetime and the time widget return a datetime? What glue code merges the two values together?
| [
"For value_from_datadict() you want to return the value you expect or None. The source in django/forms/widgets.py provides some examples.\nBut you should be able to build a DatePicker widget by just providing a render method:\nDATE_FORMAT = '%m/%d/%y'\n\nclass DatePickerWidget(widgets.Widget):\n def render(self, name, value, attrs=None):\n if value is None:\n vstr = ''\n elif hasattr(value, 'strftime'):\n vstr = datetime_safe.new_datetime(value).strftime(DATE_FORMAT)\n else:\n vstr = value\n id = \"id_%s\" % name\n args = [\n \"<input type=\\\"text\\\" value=\\\"%s\\\" name=\\\"%s\\\" id=\\\"%s\\\" />\" % \\\n (vstr, name, id),\n \"<script type=\\\"text/javascript\\\">$(\\\"#%s\\\").datepicker({dateFormat:'mm/dd/y'});</script>\" % id\n ]\n return mark_safe(\"\\n\".join(args))\n\n",
"The Django source says \n\nGiven a \n dictionary of data and this widget's\n name, returns the value of this\n widget. Returns None if it's not\n provided.\n\nReading the code, I see that Django's separate Date and Time widgets are both subclasses of Input, subclasses of Widget, which appears to work with simple Strings.\n"
] | [
5,
0
] | [] | [] | [
"django",
"django_forms",
"python"
] | stackoverflow_0000436944_django_django_forms_python.txt |
Q:
Is there a way to automatically generate a list of columns that need indexing?
The beauty of ORM lulled me into a soporific sleep. I've got an existing Django app with a lack of database indexes. Is there a way to automatically generate a list of columns that need indexing?
I was thinking maybe some middleware that logs which columns are involved in WHERE clauses? but is there anything built into MySQL that might help?
A:
Yes, there is.
If you take a look at the slow query log, there's an option --log-queries-not-using-indexes
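For illustration, hedged my.cnf settings that turn this on (option names have varied across MySQL versions, so verify against your server's manual):
[mysqld]
log-slow-queries = /var/log/mysql/slow.log
long_query_time = 2
log-queries-not-using-indexes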
A:
No.
Adding indexes willy-nilly to all "slow" queries will also slow down inserts, updates and deletes.
Indexes are a balancing act between fast queries and fast changes. There is no general or "right" answer. There's certainly nothing that can automate this.
You have to measure the improvement across your whole application as you add and change indexes.
| Is there a way to automatically generate a list of columns that need indexing? | The beauty of ORM lulled me into a soporific sleep. I've got an existing Django app with a lack of database indexes. Is there a way to automatically generate a list of columns that need indexing?
I was thinking maybe some middleware that logs which columns are involved in WHERE clauses? but is there anything built into MySQL that might help?
| [
"Yes, there is.\nIf you take a look at the slow query log, there's an option --log-queries-not-using-indexes\n",
"No.\nAdding indexes willy-nilly to all \"slow\" queries will also slow down inserts, updates and deletes.\nIndexes are a balancing act between fast queries and fast changes. There is no general or \"right\" answer. There's certainly nothing that can automate this.\nYou have to measure the improvement across your whole application as you add and change indexes.\n"
] | [
4,
4
] | [] | [] | [
"database",
"django",
"django_models",
"mysql",
"python"
] | stackoverflow_0000438559_database_django_django_models_mysql_python.txt |
Q:
Python data structure: SQL, XML, or .py file
What is the best way to store large amounts of data in Python, given one (or two) dictionaries of 500,000+ items used for undirected graph searching?
I've been considering a few options such as storing the data as XML:
<key name="a">
<value data="1" />
<value data="2" />
</key>
<key name="b">
...
or in a python file for direct access:
db = {"a": [1, 2], "b": ...}
or in a SQL database? I'm thinking this would be the best solution, but would I have to rely more on SQL to do the computation than python itself?
A:
The Python source technique absolutely rules.
XML is slow to parse, and relatively hard to read by people. That's why companies like Altova are in business -- XML isn't pleasant to edit.
Python source db = {"a": [1, 2], "b": ...} is
Fast to parse.
Easy to read by people.
If you have programs that read and write giant dictionaries, use pprint for the writing so that you get a nicely formatted output. Something easier to read.
If you're worried about portability, consider YAML (or JSON) for serializing the object. They parse quickly also, and are much, much easier to read than XML.
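A minimal sketch of that pprint write-out (the dictionary contents here are stand-ins):
import pprint
db = {"a": [1, 2], "b": [3, 4]}
out = open("db.py", "w")
out.write("db = %s\n" % pprint.pformat(db))
out.close()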
A:
I would consider using one of the many graph libraries available for python (e.g. python-graph)
A:
You need to specify your problem a bit better. I'll make a few assumptions:
1) your data is static and you just want to search it,
2) you have enough memory to store it.
If application startup speed is not critical, the data format is up to you, just as long as you can get it into Python memory. Use simple data types (dicts, lists, strings) to store data, not an XML graph, if you want to access it quickly. You might consider writing a lightweight class of your own to express nodes and store links to other nodes in a dict or array.
If application startup time is critical, consider loading your data in a Python program and pickling it out to a file; you can then load the pickled data structure (which should be really fast) in the production application.
If, on the other hand, your data is too big to fit in memory, or you want to be able to modify it persistently, you could use SQL for storage (either an external server or an SQLite database) or ZODB (a Python object database).
A:
If you store your data in an XML file it will be easier to modify (e.g. using Notepad), but you must take into account that reading and parsing that amount of data from an XML file is a heavy task.
Using a SQL database (maybe Postgres) will be more performant; a DBMS is more optimized than direct filesystem reading/parsing.
If you store all your data in a Python structure in a separate file, you then have the advantage of bytecode compilation (.pyc), which doesn't boost computation but allows for a faster load (which is what you want).
I would choose the last one.
A:
XML is really oriented to tree structures and is very verbose. You can look at RDF for ways to describe a graph in XML but it still has other disadvantages, e.g. the time to read, parse, and instantiate 500k+ objects and the amount of file space used.
SQL is really oriented to describing rows in tables. You can store graphs of course, but you'll see a performance penalty here too.
I would try python pickling first to see if it meets your needs. It will probably be the most compact and the fastest to read in and instantiate all the objects.
Really the only reason to use other formats is if you need something they offer, e.g. transactions in SQL or cross-language processing of XML.
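A minimal pickling sketch along those lines, where graph stands in for the 500,000-item dict from the question (cPickle is the C-accelerated pickler in Python 2):
import cPickle as pickle
# Dump once, offline:
pickle.dump(graph, open("graph.pickle", "wb"), pickle.HIGHEST_PROTOCOL)
# Load at application startup:
graph = pickle.load(open("graph.pickle", "rb"))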
A:
The python file approach will surely be the fastest if you have a way to maintain the file.
| Python data structure: SQL, XML, or .py file | What is the best way to store large amounts of data in python, given one (or two) 500,000 item+ dictionary used for undirected graph searching?
I've been considering a few options such as storing the data as XML:
<key name="a">
<value data="1" />
<value data="2" />
</key>
<key name="b">
...
or in a python file for direct access:
db = {"a": [1, 2], "b": ...}
or in a SQL database? I'm thinking this would be the best solution, but would I have to rely more on SQL to do the computation than python itself?
| [
"The Python source technique absolutely rules.\nXML is slow to parse, and relatively hard to read by people. That's why companies like Altova are in business -- XML isn't pleasant to edit.\nPython source db = {\"a\": [1, 2], \"b\": ...} is \n\nFast to parse.\nEasy to read by people. \n\nIf you have programs that read and write giant dictionaries, use pprint for the writing so that you get a nicely formatted output. Something easier to read.\nIf you're worried about portability, consider YAML (or JSON) for serializing the object. They parse quickly also, and are much, much easier to read than XML.\n",
"I would consider using one of the many graph libraries available for python (e.g. python-graph)\n",
"You need to specify your problem a bit better. I'll make a few assumptions:\n1) your data is static and you just want to search it,\n2) you have enough memory to store it.\nIf application startup speed is not critical, the data format is up to you, just as long as you can get it into Python memory. Use simple data types (dicts, lists, strings) to store data, not an XML graph, if you want to access it quickly. You might consider writing a lightweight class of your own to express nodes and store links to other nodes in a dict or array.\nIf application startup time is critical, consider loading your data in a Python program and pickling it out to a file; you can then load the pickled data structure (which should be really fast) in the production application.\nIf, on the other hand, your data is too big to fit in memory, or you want to be able to modify it persistently, you could use SQL for storage (either an external server or an SQLite database) or ZODB (a Python object database). \n",
"If you store your data on an XML file it will be easier to modify (i.e. using notepad ...) but you must take in account that reading and parsing all that amount of data from a XML file is a heavy duty.\nUsing a SQL database (maybe PostGres) will make the choiche some more performant, DMBS are more optimized than direct filesystem reading/parsing.\nIf you store all your data in some Python structure on a separate file, you can than have the advantage of bytecode compilation (.pyc) that doesn't give a boost in computational therms but allows for a faster load (wich is what you want).\nI would choose the last one.\n",
"XML is really oriented to tree structures and is very verbose. You can look at RDF for ways to describe a graph in XML but it still has other disadvantages, e.g. the time to read, parse, and instantiate 500k+ objects and the amount of file space used.\nSQL is really oriented to describing rows in tables. You can store graphs of course, but you'll see a performance penalty here too.\nI would try python pickling first to see if it meets your needs. It will probably be the most compact and the fastest to read in and instantiate all the objects.\nReally the only reason to use other formats is if you need something they offer, e.g. transactions in SQL or cross-language processing of XML.\n",
"The python file approach will surely be the fastest if you have a way to maintain the file.\n"
] | [
6,
2,
1,
0,
0,
0
] | [] | [] | [
"data_structures",
"graph",
"python",
"sql",
"xml"
] | stackoverflow_0000438185_data_structures_graph_python_sql_xml.txt |
Q:
Help in FileNotFoundException -Python
This is my code:
try:
import clr, sys
from xml.dom.minidom import parse
import datetime
sys.path.append("C:\\teest")
clr.AddReference("TCdll")
from ClassLibrary1 import Class1
cl = Class1()
except ( ImportError ) :
print "Module may not be existing "
My TCdll is in C:\test. I just gave it as C:\teest to provoke the error.
Exception is this:
Traceback (most recent call last):
File "C:/Python25/13thjan/PSE.py", line 8, in <module>
clr.AddReference("TCdll")
FileNotFoundException: Unable to find assembly 'TCdll'.
at Python.Runtime.CLRModule.AddReference(String name)
How do I handle this exception?
Help needed immediately
A:
You need to find out how clr.AddReference maps to a file name.
EDIT:
I think you're asking how to catch the exception from the AddReference call?
Replace:
clr.AddReference("TCdll")
with:
from System.IO import FileNotFoundException  # assumed: Python.NET exposes CLR exception types for import

try:
    clr.AddReference("TCdll")
except FileNotFoundException,e:
    print "Failed to find reference",e
    sys.exit(1)
| Help in FileNotFoundException -Python | This is my code:
try:
import clr, sys
from xml.dom.minidom import parse
import datetime
sys.path.append("C:\\teest")
clr.AddReference("TCdll")
from ClassLibrary1 import Class1
cl = Class1()
except ( ImportError ) :
print "Module may not be existing "
My TCdll is in C:\test. I just gave it as C:\teest to provoke the error.
Exception is this:
Traceback (most recent call last):
File "C:/Python25/13thjan/PSE.py", line 8, in <module>
clr.AddReference("TCdll")
FileNotFoundException: Unable to find assembly 'TCdll'.
at Python.Runtime.CLRModule.AddReference(String name)
How do I handle this exception?
Help needed immediately
| [
"You need to find out how clr.AddReference maps to a file name.\nEDIT:\nI think you're asking how to catch the exception from the AddReference call?\nReplace:\nclr.AddReference(\"TCdll\")\n\nwith:\ntry:\n clr.AddReference(\"TCdll\")\nexcept FileNotFoundException,e:\n print \"Failed to find reference\",e\n sys.exit(1)\n\n"
] | [
4
] | [] | [] | [
"filenotfoundexception",
"handler",
"python"
] | stackoverflow_0000438733_filenotfoundexception_handler_python.txt |
Q:
How do you create a list like PHP's in Python?
This is an incredibly simple question (I'm new to Python).
I basically want a data structure like a PHP array -- i.e., I want to initialise it and then just add values into it.
As far as I can tell, this is not possible with Python, so I've got the maximum value I might want to use as an index, but I can't figure out how to create an empty list of a specified length.
Also, is a list the right data structure to use to model what feels like it should just be an array? I tried to use an array, but it seemed unhappy with storing strings.
Edit: Sorry, I didn't explain very clearly what I was looking for. When I add items into the list, I do not want to put them in in sequence, but rather I want to insert them into specified slots in the list.
I.e., I want to be able to do this:
list = []
for row in rows:
c = list_of_categories.index(row["id"])
print c
list[c] = row["name"]
A:
Depending on how you are going to use the list, it may be that you actually want a dictionary. This will work:
d = {}
for row in rows:
c = list_of_categories.index(row["id"])
print c
d[c] = row["name"]
... or more compactly:
d = dict((list_of_categories.index(row['id']), row['name']) for row in rows)
print d
PHP arrays are much more like Python dicts than they are like Python lists. For example, they can have strings for keys.
And confusingly, Python has an array module, which is described as "efficient arrays of numeric values", which is definitely not what you want.
A:
If the number of items you want is known in advance, and you want to access them using integer, 0-based, consecutive indices, you might try this:
n = 3
array = n * [None]
print array
array[2] = 11
array[1] = 47
array[0] = 42
print array
This prints:
[None, None, None]
[42, 47, 11]
A:
Use the list constructor, and append your items, like this:
l = list ()
l.append ("foo")
l.append (3)
print (l)
gives me ['foo', 3], which should be what you want. See the documentation on list and the sequence type documentation.
EDIT Updated
For inserting, use insert, like this:
l = list ()
l.append ("foo")
l.append (3)
l.insert (1, "new")
print (l)
which prints ['foo', 'new', 3]
A:
http://diveintopython3.ep.io/native-datatypes.html#lists
You don't need to create empty lists with a specified length. You just add to them and query their current length if needed.
What you can't do without preparing to catch an exception is use a non-existent index - which is probably what you are used to in PHP.
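For instance, here is a minimal sketch of the difference (the values are made up):
lst = []
lst.append("a")        # lists grow as you append
try:
    lst[5] = "b"       # there is no slot 5 yet, so this raises
except IndexError:
    print "no index 5 yet"

d = {}
d[5] = "b"             # a dict happily accepts any key, PHP-style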
A:
You can use this syntax to create a list with n elements:
lst = [0] * n
But be careful! The list will contain n copies of this object. If this object is mutable and you change one element, then all copies will be changed! In this case you should use:
lst = [some_object() for i in xrange(n)]
Then you can access these elements:
for i in xrange(n):
lst[i] += 1
A Python list is comparable to a vector in other languages. It is a resizable array, not a linked list.
A:
Sounds like what you need might be a dictionary rather than an array if you want to insert into specified indices.
dict = {'a': 1, 'b': 2, 'c': 3}
dict['a']
1
A:
I agree with ned that you probably need a dictionary for what you're trying to do. But if what you want is a list built from those category indices, you can do this:
lst = [list_of_categories.index(row["id"]) for row in rows]
A:
use a dictionary, because what you're really asking for is a structure you can access by arbitrary keys
list = {}
for row in rows:
c = list_of_categories.index(row["id"])
print c
list[c] = row["name"]
Then you can iterate through the known contents with:
for x in list.values():
print x
Or check if something exists in the "list":
if 3 in list:
print "it's there"
A:
I'm not sure if I understood what you mean or want to do, but it seems that you want a list which
is dictionary-like, where the index is the key. Even though I think the usage of a dictionary would be a better
choice, here's my answer: got a problem - make an object:
import UserList

class MyList(UserList.UserList):

    NO_ITEM = 'noitem'

    def insertAt(self, item, index):
        length = len(self)
        if index < length:
            self[index] = item
        elif index == length:
            self.append(item)
        else:
            for i in range(0, index - length):
                self.append(self.NO_ITEM)
            self.append(item)
There may be some errors in the python syntax (I didn't check), but in principle it should work.
Of course the else case would also work for the elif, but I thought it might be a little harder
to read that way.
| How do you create a list like PHP's in Python? | This is an incredibly simple question (I'm new to Python).
I basically want a data structure like a PHP array -- i.e., I want to initialise it and then just add values into it.
As far as I can tell, this is not possible with Python, so I've got the maximum value I might want to use as an index, but I can't figure out how to create an empty list of a specified length.
Also, is a list the right data structure to use to model what feels like it should just be an array? I tried to use an array, but it seemed unhappy with storing strings.
Edit: Sorry, I didn't explain very clearly what I was looking for. When I add items into the list, I do not want to put them in in sequence, but rather I want to insert them into specified slots in the list.
I.e., I want to be able to do this:
list = []
for row in rows:
c = list_of_categories.index(row["id"])
print c
list[c] = row["name"]
| [
"Depending on how you are going to use the list, it may be that you actually want a dictionary. This will work:\nd = {}\n\nfor row in rows:\n c = list_of_categories.index(row[\"id\"])\n print c\n d[c] = row[\"name\"]\n\n... or more compactly:\nd = dict((list_of_categories.index(row['id']), row['name']) for row in rows)\nprint d\n\nPHP arrays are much more like Python dicts than they are like Python lists. For example, they can have strings for keys.\nAnd confusingly, Python has an array module, which is described as \"efficient arrays of numeric values\", which is definitely not what you want.\n",
"If the number of items you want is known in advance, and you want to access them using integer, 0-based, consecutive indices, you might try this:\nn = 3\narray = n * [None]\nprint array\narray[2] = 11\narray[1] = 47\narray[0] = 42\nprint array\n\nThis prints:\n[None, None, None]\n[42, 47, 11]\n\n",
"Use the list constructor, and append your items, like this:\nl = list ()\nl.append (\"foo\")\nl.append (3)\nprint (l)\n\ngives me ['foo', 3], which should be what you want. See the documentation on list and the sequence type documentation.\nEDIT Updated\nFor inserting, use insert, like this:\nl = list ()\nl.append (\"foo\")\nl.append (3)\nl.insert (1, \"new\")\nprint (l)\n\nwhich prints ['foo', 'new', 3]\n",
"http://diveintopython3.ep.io/native-datatypes.html#lists\nYou don't need to create empty lists with a specified length. You just add to them and query about their current length if needed. \nWhat you can't do without preparing to catch an exception is to use a non existent index. Which is probably what you are used to in PHP.\n",
"You can use this syntax to create a list with n elements:\nlst = [0] * n\n\nBut be careful! The list will contain n copies of this object. If this object is mutable and you change one element, then all copies will be changed! In this case you should use:\nlst = [some_object() for i in xrange(n)]\n\nThen you can access these elements:\nfor i in xrange(n):\n lst[i] += 1\n\nA Python list is comparable to a vector in other languages. It is a resizable array, not a linked list.\n",
"Sounds like what you need might be a dictionary rather than an array if you want to insert into specified indices.\ndict = {'a': 1, 'b': 2, 'c': 3}\ndict['a']\n\n\n\n1\n\n\n",
"I agree with ned that you probably need a dictionary for what you're trying to do. But here's a way to get a list of those lists of categories you can do this:\nlst = [list_of_categories.index(row[\"id\"]) for row in rows]\n\n",
"use a dictionary, because what you're really asking for is a structure you can access by arbitrary keys\nlist = {}\n\nfor row in rows:\n c = list_of_categories.index(row[\"id\"])\n print c\n list[c] = row[\"name\"]\n\nThen you can iterate through the known contents with:\nfor x in list.values():\n print x\n\nOr check if something exists in the \"list\":\nif 3 in list: \n print \"it's there\"\n\n",
"I'm not sure if I understood what you mean or want to do, but it seems that you want a list which\nis dictonary-like where the index is the key. Even if I think, the usage of a dictonary would be a better\nchoice, here's my answer: Got a problem - make an object:\nclass MyList(UserList.UserList):\n\nNO_ITEM = 'noitem'\n\ndef insertAt(self, item, index):\n\n length = len(self)\n if index < length:\n self[index] = item\n elif index == length:\n self.append(item)\n else:\n for i in range(0, index-length):\n self.append(self.NO_ITEM)\n self.append(item)\n\nMaybe some errors in the python syntax (didn't check), but in principle it should work.\n Of course the else case works also for the elif, but I thought, it might be a little harder\n to read this way. \n"
] | [
10,
4,
2,
2,
1,
1,
1,
1,
1
] | [] | [] | [
"list",
"python"
] | stackoverflow_0000438813_list_python.txt |
Q:
Debugging pylons in Eclipse under Ubuntu
I am trying to get pylons to debug in Eclipse under Ubuntu.
Specifically, I am not sure what to use for the 'Main Module' on the Run configurations dialog.
(this is a similar question on stackoverflow, but I think it applies to windows as I can't find paster-script.py on my system)
Can anyone help?
A:
I've managed to fix this now.
In Window>Preferences>Pydev>Interpreter-Python remove the python interpreter and reload it (select New) after installing pylons.
In the Terminal cd into the projects directory. Then type sudo python setup.py develop
Not sure what this does, but it does the trick (if anyone wants to fill me in, please do)
In Run>Open Debug Dialog enter the location of paster in Main Module. For me this is /usr/bin/paster . Then in the Arguments tab in Program arguments enter serve /locationOfYourProject/development.ini
All set to go.
It took a lot of searching for me to find out that it does not work if the arguments include --reload
A:
I got it running basically almost the same way - although you do not have to do the setup.py develop step - it works fine without that.
What it does is set a global link to your project directory for a python package named after your project name.
A:
I do need this step "sudo python setup.py develop" to get it running; otherwise it throws some exceptions.
btw, the setup.py is the one in your created project.
A:
Haven't tried on Eclipse, but I bet the solution I have been using to debug Pylons apps in WingIDE will work here too.
Write the following two-liner (name it run_me.py or similarly) and save it in your project directory:
from paste.script.serve import ServeCommand
ServeCommand("serve").run(["development.ini"])
Set this file as main debug target (aka main module)
Enjoy.
| Debugging pylons in Eclipse under Ubuntu | I am trying to get pylons to debug in Eclipse under Ubuntu.
Specifically, I am not sure what to use for the 'Main Module' on the Run configurations dialog.
(this is a similar question on stackoverflow, but I think it applies to windows as I can't find paster-script.py on my system)
Can anyone help?
| [
"I've managed to fix this now.\nIn Window>Preferences>Pydev>Interpreter-Python remove the python interpreter and reload it (select New) after installing pylons.\nIn the Terminal cd into the projects directory. Then type sudo python setup.py develop \nNot sure what this does, but it does the trick (if any one wants to fill me in, please do)\nIn Run>Open Debug Dialog enter the location of paster in Main Module. For me this is /usr/bin/paster . Then in the Arguments tab in Program arguments enter serve /locationOfYourProject/development.ini\nAll set to go.\nIt took a lot of search for me to find out that it does not work if the arguments includes --reload\n",
"I got it running basically almost the same way - although you do not have to do the setup.py develop step - it works fine without that. \nWhat it does is that is sets global link to your project directory for a python package named after your project name.\n",
"I do need this step \"sudo python setup.py develop\" to get it running.. otherwise it throw out some exceptions.\nbtw, the setup.py is the one in your created project.\n",
"Haven't tried on Eclipse, but I bet the solution I have been using to debug Pylons apps in WingIDE will work here too.\n\nWrite the following two-liner (name it run_me.py or similarly) and save it in your project directory:\nfrom paste.script.serve import ServeCommand\nServeCommand(\"serve\").run([\"development.ini\"])\nSet this file as main debug target (aka main module)\nEnjoy.\n\n"
] | [
4,
1,
1,
0
] | [] | [] | [
"debugging",
"eclipse",
"pylons",
"python",
"ubuntu"
] | stackoverflow_0000312599_debugging_eclipse_pylons_python_ubuntu.txt |
Q:
Python: Problem with overloaded constructors
WARNING: I have been learning Python for all of 10 minutes so apologies for any stupid questions!
I have written the following code, however I get the following exception:
Message File
Name Line Position Traceback Node
31
exceptions.TypeError: this constructor takes no arguments
class Computer:
name = "Computer1"
ip = "0.0.0.0"
screenSize = 17
def Computer(compName, compIp, compScreenSize):
name = compName
ip = compIp
screenSize = compScreenSize
printStats()
return
def Computer():
printStats()
return
def printStats():
print "Computer Statistics: --------------------------------"
print "Name:" + name
print "IP:" + ip
print "ScreenSize:" , screenSize // cannot concatenate 'str' and 'tuple' objects
print "-----------------------------------------------------"
return
comp1 = Computer()
comp2 = Computer("The best computer in the world", "27.1.0.128",22)
Any thoughts?
A:
I'm going to assume you're coming from a Java-ish background, so there are a few key differences to point out.
class Computer(object):
"""Docstrings are used kind of like Javadoc to document classes and
members. They are the first thing inside a class or method.
You probably want to extend object, to make it a "new-style" class.
There are reasons for this that are a bit complex to explain."""
# everything down here is a static variable, unlike in Java or C# where
# declarations here are for what members a class has. All instance
# variables in Python are dynamic, unless you specifically tell Python
# otherwise.
defaultName = "belinda"
defaultRes = (1024, 768)
defaultIP = "192.168.5.307"
def __init__(self, name=defaultName, resolution=defaultRes, ip=defaultIP):
"""Constructors in Python are called __init__. Methods with names
like __something__ often have special significance to the Python
interpreter.
The first argument to any class method is a reference to the current
object, called "self" by convention.
You can use default function arguments instead of function
overloading."""
self.name = name
self.resolution = resolution
self.ip = ip
# and so on
def printStats(self):
"""You could instead use a __str__(self, ...) function to return this
string. Then you could simply do "print(str(computer))" if you wanted
to."""
print "Computer Statistics: --------------------------------"
print "Name:" + self.name
print "IP:" + self.ip
print "ScreenSize:" , self.resolution //cannot concatenate 'str' and 'tuple' objects
print "-----------------------------------------------------"
A:
Constructors in Python are called __init__. You must also use "self" as the first argument for all methods in your class, and use that to set instance variables in the class.
class Computer:
def __init__(self, compName = "Computer1", compIp = "0.0.0.0", compScreenSize = 22):
self.name = compName
self.ip = compIp
self.screenSize = compScreenSize
self.printStats()
def printStats(self):
print "Computer Statistics: --------------------------------"
print "Name:", self.name
print "IP:", self.ip
print "ScreenSize:", self.screenSize
print "-----------------------------------------------------"
comp1 = Computer()
comp2 = Computer("The best computer in the world", "27.1.0.128",22)
A:
dude get yourself a python book. Dive into Python is pretty good.
A:
For a start, look here.
A:
There are a number of things to point out:
All instance methods in Python have an explicit self argument.
Constructors are called __init__.
You cannot overload methods. You can achieve a similar effect by using default method arguments.
C++:
class comp {
std::string m_name;
foo(std::string name);
};
foo::foo(std::string name) : m_name(name) {}
Python:
class comp:
def __init__(self, name=None):
if name: self.name = name
else: self.name = 'defaultName'
A:
That isn't valid python.
The constructor for a Python class is def __init__(self, ...): and you cannot overload it.
What you can do is use defaults for the arguments, eg.
class Computer:
def __init__(self, compName="Computer1", compIp="0.0.0.0", compScreenSize=17):
self.name = compName
self.ip = compIp
self.screenSize = compScreenSize
self.printStats()
return
def printStats(self):
print "Computer Statistics: --------------------------------"
print "Name : %s" % self.name
print "IP : %s" % self.ip
print "ScreenSize: %s" % self.screenSize
print "-----------------------------------------------------"
return
comp1 = Computer()
comp2 = Computer("The best computer in the world", "27.1.0.128",22)
A:
Ah, these are common gotchas for new python developers.
First, the constructor should be called:
__init__()
Your second issue is forgetting to include the self parameter to your class methods.
Furthermore, when you define the second constructor, you're replacing the definition of the Computer() method. Python is extremely dynamic and will cheerfully let you redefine class methods.
The more pythonic way is probably to use default values for the parameters if you don't want to make them required.
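To see the redefinition happen, here is a tiny sketch (the class and values are made up):
class C:
    def f(self):
        return "first"
    def f(self):            # quietly replaces the first definition
        return "second"

print C().f()               # prints: second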
A:
Python does not support function overloading.
| Python: Problem with overloaded constructors | WARNING: I have been learning Python for all of 10 minutes so apologies for any stupid questions!
I have written the following code, however I get the following exception:
Message File
Name Line Position Traceback Node
31
exceptions.TypeError: this constructor takes no arguments
class Computer:
name = "Computer1"
ip = "0.0.0.0"
screenSize = 17
def Computer(compName, compIp, compScreenSize):
name = compName
ip = compIp
screenSize = compScreenSize
printStats()
return
def Computer():
printStats()
return
def printStats():
print "Computer Statistics: --------------------------------"
print "Name:" + name
print "IP:" + ip
print "ScreenSize:" , screenSize // cannot concatenate 'str' and 'tuple' objects
print "-----------------------------------------------------"
return
comp1 = Computer()
comp2 = Computer("The best computer in the world", "27.1.0.128",22)
Any thoughts?
| [
"I'm going to assume you're coming from a Java-ish background, so there are a few key differences to point out.\nclass Computer(object):\n \"\"\"Docstrings are used kind of like Javadoc to document classes and\n members. They are the first thing inside a class or method.\n\n You probably want to extend object, to make it a \"new-style\" class.\n There are reasons for this that are a bit complex to explain.\"\"\"\n\n # everything down here is a static variable, unlike in Java or C# where\n # declarations here are for what members a class has. All instance\n # variables in Python are dynamic, unless you specifically tell Python\n # otherwise.\n defaultName = \"belinda\"\n defaultRes = (1024, 768)\n defaultIP = \"192.168.5.307\"\n\n def __init__(self, name=defaultName, resolution=defaultRes, ip=defaultIP):\n \"\"\"Constructors in Python are called __init__. Methods with names\n like __something__ often have special significance to the Python\n interpreter.\n\n The first argument to any class method is a reference to the current\n object, called \"self\" by convention.\n\n You can use default function arguments instead of function\n overloading.\"\"\"\n self.name = name\n self.resolution = resolution\n self.ip = ip\n # and so on\n\n def printStats(self):\n \"\"\"You could instead use a __str__(self, ...) function to return this\n string. Then you could simply do \"print(str(computer))\" if you wanted\n to.\"\"\"\n print \"Computer Statistics: --------------------------------\"\n print \"Name:\" + self.name\n print \"IP:\" + self.ip\n print \"ScreenSize:\" , self.resolution //cannot concatenate 'str' and 'tuple' objects\n print \"-----------------------------------------------------\"\n\n",
"Constructors in Python are called __init__. You must also use \"self\" as the first argument for all methods in your class, and use that to set instance variables in the class.\nclass Computer:\n\n def __init__(self, compName = \"Computer1\", compIp = \"0.0.0.0\", compScreenSize = 22):\n self.name = compName\n self.ip = compIp\n self.screenSize = compScreenSize\n\n self.printStats()\n\n def printStats(self):\n print \"Computer Statistics: --------------------------------\"\n print \"Name:\", self.name\n print \"IP:\", self.ip\n print \"ScreenSize:\", self.screenSize\n print \"-----------------------------------------------------\"\n\n\ncomp1 = Computer()\ncomp2 = Computer(\"The best computer in the world\", \"27.1.0.128\",22)\n\n",
"dude get yourself a python book. Dive into Python is pretty good. \n",
"For a start, look here.\n",
"There are a number of things to point out:\n\nAll instance methods in Python have an explicit self argument.\nConstructors are called __init__.\nYou cannot overload methods. You can achieve a similar effect by using default method arguments.\n\nC++:\nclass comp {\n std::string m_name;\n foo(std::string name);\n};\n\nfoo::foo(std::string name) : m_name(name) {}\n\nPython:\nclass comp:\n def __init__(self, name=None):\n if name: self.name = name\n else: self.name = 'defaultName'\n\n",
"That isn't valid python.\nThe constructor for a Python class is def __init__(self, ...): and you cannot overload it.\nWhat you can do is use defaults for the arguments, eg.\nclass Computer:\n def __init__(self, compName=\"Computer1\", compIp=\"0.0.0.0\", compScreenSize=17):\n self.name = compName\n self.ip = compIp\n self.screenSize = compScreenSize\n\n self.printStats()\n\n return\n\n def printStats(self):\n print \"Computer Statistics: --------------------------------\"\n print \"Name : %s\" % self.name\n print \"IP : %s\" % self.ip\n print \"ScreenSize: %s\" % self.screenSize\n print \"-----------------------------------------------------\"\n return\n\ncomp1 = Computer()\ncomp2 = Computer(\"The best computer in the world\", \"27.1.0.128\",22)\n\n",
"Ah, these are common gotchas for new python developers.\nFirst, the constructor should be called: \n__init__()\n\nYour second issue is forgetting to include the self parameter to your class methods. \nFurthermore, when you define the second constructor, you're replacing the definition of the Computer() method. Python is extremely dynamic and will cheerfully let you redefine class methods.\nThe more pythonic way is probably to use default values for the parameters if you don't want to make them required.\n",
"Python does not support function overloading.\n"
] | [
36,
5,
4,
2,
2,
1,
1,
1
] | [] | [] | [
"constructor_overloading",
"exception",
"python"
] | stackoverflow_0000312695_constructor_overloading_exception_python.txt |
Q:
unicode() vs. str.decode() for a utf8 encoded byte string (python 2.x)
Is there any reason to prefer unicode(somestring, 'utf8') as opposed to somestring.decode('utf8')?
My only thought is that .decode() is a bound method so python may be able to resolve it more efficiently, but correct me if I'm wrong.
A:
It's easy to benchmark it:
>>> from timeit import Timer
>>> ts = Timer("s.decode('utf-8')", "s = 'ééé'")
>>> ts.timeit()
8.9185450077056885
>>> tu = Timer("unicode(s, 'utf-8')", "s = 'ééé'")
>>> tu.timeit()
2.7656929492950439
>>>
Obviously, unicode() is faster.
FWIW, I don't know where you get the impression that methods would be faster - it's quite the contrary.
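If you want to check how much of that gap is mere attribute resolution, one experiment is to bind the method once in the setup and time only the call (same Timer pattern as above):
>>> tb = Timer("decode('utf-8')", "s = 'ééé'; decode = s.decode")
>>> tb.timeit()

Whatever gap remains after pre-binding is attributable to the call itself (codec lookup and dispatch) rather than to resolving the bound method.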
A:
I'd prefer 'something'.decode(...) since the unicode type is no longer there in Python 3.0, while text = b'binarydata'.decode(encoding) is still valid.
| unicode() vs. str.decode() for a utf8 encoded byte string (python 2.x) | Is there any reason to prefer unicode(somestring, 'utf8') as opposed to somestring.decode('utf8')?
My only thought is that .decode() is a bound method so python may be able to resolve it more efficiently, but correct me if I'm wrong.
| [
"It's easy to benchmark it:\n>>> from timeit import Timer\n>>> ts = Timer(\"s.decode('utf-8')\", \"s = 'ééé'\")\n>>> ts.timeit()\n8.9185450077056885\n>>> tu = Timer(\"unicode(s, 'utf-8')\", \"s = 'ééé'\") \n>>> tu.timeit()\n2.7656929492950439\n>>> \n\nObviously, unicode() is faster.\nFWIW, I don't know where you get the impression that methods would be faster - it's quite the contrary.\n",
"I'd prefer 'something'.decode(...) since the unicode type is no longer there in Python 3.0, while text = b'binarydata'.decode(encoding) is still valid. \n"
] | [
23,
23
] | [] | [] | [
"python",
"unicode",
"utf_8"
] | stackoverflow_0000440320_python_unicode_utf_8.txt |
Q:
Extracting Embedded Images From Outlook Email
I am using Microsoft's CDO (Collaboration Data Objects) to programmatically read mail from an Outlook mailbox and save embedded image attachments. I'm trying to do this from Python using the Win32 extensions, but samples in any language that uses CDO would be helpful.
So far, I am here...
The following Python code will read the last email in my mailbox, print the names of the attachments, and print the message body:
from win32com.client import Dispatch
session = Dispatch('MAPI.session')
session.Logon('','',0,1,0,0,'exchange.foo.com\nbar');
inbox = session.Inbox
message = inbox.Messages.Item(inbox.Messages.Count)
for attachment in message.Attachments:
print attachment
print message.Text
session.Logoff()
However, the attachment names are things like: "zesjvqeqcb_chart_0". Inside the email source, I see image source links like this:
<IMG src="cid:zesjvqeqcb_chart_0">
So, is it possible to use this CID URL (or anything else) to extract the actual image and save it locally?
A:
Differences in OS/Outlook/CDO versions might be the source of confusion, so here are the steps to get it working on WinXP/Outlook 2007/CDO 1.21:
install CDO 1.21
install pywin32 (which provides win32com.client)
go to the C:\Python25\Lib\site-packages\win32com\client\ directory and run the following:
python makepy.py
from the list select "Microsoft CDO 1.21 Library (1.21)", click ok
C:\Python25\Lib\site-packages\win32com\client>python makepy.py
Generating to C:\Python25\lib\site-packages\win32com\gen_py\3FA7DEA7-6438-101B-ACC1-00AA00423326x0x1x33.py
Building definitions from type library...
Generating...
Importing module
Examining the file 3FA7DEA7-6438-101B-ACC1-00AA00423326x0x1x33.py that's just been generated will give you an idea of what classes, methods, properties and constants are available.
Now that we are done with the boring steps, here is the fun part:
import win32com.client
from win32com.client import Dispatch
session = Dispatch('MAPI.session')
session.Logon ('Outlook') # this is profile name
inbox = session.Inbox
messages = session.Inbox.Messages
message = inbox.Messages.GetFirst()
if(message):
attachments = message.Attachments
for i in range(attachments.Count):
attachment = attachments.Item(i + 1) # yep, indexes are 1 based
filename = "c:\\tmpfile" + str(i)
attachment.WriteToFile(FileName=filename)
session.Logoff()
Same general approach will also work if you have older version of CDO (CDO for win2k)
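If you also want to tie each saved attachment back to the cid: reference in the message body, a rough sketch along these lines should work. It assumes, as the question's output suggests, that the attachment's Name property holds the same token used in the cid: URL; the regex, the .png extension and the output path are placeholder guesses. It reuses the message object from the snippet above:
import re

cids = set(re.findall(r'src="cid:([^"]+)"', message.Text))
attachments = message.Attachments
for i in range(attachments.Count):
    attachment = attachments.Item(i + 1)
    name = str(attachment.Name)          # assumed to match the cid token
    if name in cids:
        # the real image type isn't known here; sniff the file header if it matters
        attachment.WriteToFile(FileName="c:\\images\\" + name + ".png")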
| Extracting Embedded Images From Outlook Email | I am using Microsoft's CDO (Collaboration Data Objects) to programmatically read mail from an Outlook mailbox and save embedded image attachments. I'm trying to do this from Python using the Win32 extensions, but samples in any language that uses CDO would be helpful.
So far, I am here...
The following Python code will read the last email in my mailbox, print the names of the attachments, and print the message body:
from win32com.client import Dispatch
session = Dispatch('MAPI.session')
session.Logon('','',0,1,0,0,'exchange.foo.com\nbar');
inbox = session.Inbox
message = inbox.Messages.Item(inbox.Messages.Count)
for attachment in message.Attachments:
print attachment
print message.Text
session.Logoff()
However, the attachment names are things like: "zesjvqeqcb_chart_0". Inside the email source, I see image source links like this:
<IMG src="cid:zesjvqeqcb_chart_0">
So, is it possible to use this CID URL (or anything else) to extract the actual image and save it locally?
| [
"Difference in versions of OS/Outlook/CDO is what might be the source of confusion, so here are the steps to get it working on WinXP/Outlook 2007/CDO 1.21:\n\ninstall CDO 1.21\ninstall win32com.client\ngoto C:\\Python25\\Lib\\site-packages\\win32com\\client\\ directory run the following:\n\npython makepy.py\n\nfrom the list select \"Microsoft CDO 1.21 Library (1.21)\", click ok\n\nC:\\Python25\\Lib\\site-packages\\win32com\\client>python makepy.py\nGenerating to C:\\Python25\\lib\\site-packages\\win32com\\gen_py\\3FA7DEA7-6438-101B-ACC1-00AA00423326x0x1x33.py\nBuilding definitions from type library...\nGenerating...\nImporting module\n\nExamining file 3FA7DEA7-6438-101B-ACC1-00AA00423326x0x1x33.py that's just been generated, will give you an idea of what classes, methods, properties and constants are available.\n\nNow that we are done with the boring steps, here is the fun part:\nimport win32com.client\nfrom win32com.client import Dispatch\n\nsession = Dispatch('MAPI.session')\nsession.Logon ('Outlook') # this is profile name\ninbox = session.Inbox\nmessages = session.Inbox.Messages \nmessage = inbox.Messages.GetFirst()\n\nif(message):\n attachments = message.Attachments\n for i in range(attachments.Count):\n attachment = attachments.Item(i + 1) # yep, indexes are 1 based\n\n filename = \"c:\\\\tmpfile\" + str(i)\n attachment.WriteToFile(FileName=filename)\nsession.Logoff()\n\nSame general approach will also work if you have older version of CDO (CDO for win2k)\n"
] | [
5
] | [] | [] | [
"cdo.message",
"email",
"outlook",
"python"
] | stackoverflow_0000440356_cdo.message_email_outlook_python.txt |
Q:
python, sorting a list by a key that's a substring of each element
Part of a programme builds this list,
[u'1 x Affinity for war', u'1 x Intellect', u'2 x Charisma', u'2 x Perception', u'3 x Population growth', u'4 x Affinity for the land', u'5 x Morale']
I'm currently trying to sort it alphabetically by the name of the evolution rather than by the number. Is there any way I can do this without just changing the order the two things appear in the list (as in 'Intellect x 1')?
A:
You have to get the "key" from the string.
def myKeyFunc( aString ):
stuff, x, label = aString.partition(' x ')
return label
aList.sort( key= myKeyFunc )
A:
How about:
lst.sort(key=lambda s: s.split(' x ')[1])
A:
Not knowing if your items are standardized at 1 digit, 1 space, 1 'x', 1 space, multiple words, I wrote this up:
mylist = [u'1 x Affinity for war', u'1 x Intellect', u'2 x Charisma', u'2 x Perception', u'3 x Population growth', u'4 x Affinity for the land', u'5 x Morale']
def sort(a, b):
return cmp(" ".join(a.split()[2:]), " ".join(b.split()[2:]))
mylist.sort(sort)
You can edit the parsing inside the sort method but you probably get the idea.
Cheers,
Patrick
A:
To do so, you need to implement a custom compare:
def myCompare(x, y):
x_name = " ".join(x.split()[2:])
y_name = " ".join(y.split()[2:])
return cmp(x_name, y_name)
Then you use that compare definition as the input to your sort function:
myList.sort(myCompare)
| python, sorting a list by a key that's a substring of each element | Part of a programme builds this list,
[u'1 x Affinity for war', u'1 x Intellect', u'2 x Charisma', u'2 x Perception', u'3 x Population growth', u'4 x Affinity for the land', u'5 x Morale']
I'm currently trying to sort it alphabetically by the name of the evolution rather than by the number. Is there any way I can do this without just changing the order the two things appear in the list (as in 'Intellect x 1')?
| [
"You have to get the \"key\" from the string.\ndef myKeyFunc( aString ):\n stuff, x, label = aString.partition(' x ')\n return label\n\naList.sort( key= myKeyFunc )\n\n",
"How about:\nlst.sort(key=lamdba s: s.split(' x ')[1])\n\n",
"Not knowing if your items are standardized at 1 digit, 1 space, 1 'x', 1 space, multiple words I wrote this up:\nmylist = [u'1 x Affinity for war', u'1 x Intellect', u'2 x Charisma', u'2 x Perception', u'3 x Population growth', u'4 x Affinity for the land', u'5 x Morale']\ndef sort(a, b):\n return cmp(\" \".join(a.split()[2:]), \" \".join(b.split()[2:]))\n\nmylist.sort(sort)\n\nYou can edit the parsing inside the sort method but you probably get the idea.\nCheers,\nPatrick\n",
"To do so, you need to implement a custom compare:\ndef myCompare(x, y):\n x_name = \" \".join(x.split()[2:])\n y_name = \" \".join(y.split()[2:])\n return cmp(x_name, y_name)\n\nThen you use that compare definition as the input to your sort function:\nmyList.sort(myCompare)\n\n"
] | [
23,
11,
2,
1
] | [
"As you are trying to sort what is essentially custom data, I'd go with a custom sort.\nMerge sort\nBubble sort\nQuicksort\n"
] | [
-10
] | [
"list",
"python",
"sorting"
] | stackoverflow_0000440541_list_python_sorting.txt |
Q:
Python classes from a for loop
I've got a piece of code which contains a for loop to draw things from an XML file;
for evoNode in node.getElementsByTagName('evolution'):
evoName = getText(evoNode.getElementsByTagName( "type")[0].childNodes)
evoId = getText(evoNode.getElementsByTagName( "typeid")[0].childNodes)
evoLevel = getText(evoNode.getElementsByTagName( "level")[0].childNodes)
evoCost = getText(evoNode.getElementsByTagName("costperlevel")[0].childNodes)
evolutions.append("%s x %s" % (evoLevel, evoName))
Currently it outputs into a list called evolutions as it says in the last line of that code; for this and several other for loops with very similar functionality I need it to output into a class instead.
class evolutions:
    def __init__(self, evoName, evoId, evoLevel, evoCost):
self.evoName = evoName
self.evoId = evoId
self.evoLevel = evoLevel
self.evoCost = evoCost
How can I create a series of instances of this class, each of which holds one result from that for loop? Or what is a more practical solution? This one doesn't really need the class but one of the others really does.
A:
A list comprehension might be a little cleaner. I'd also move the parsing logic to the constructor to clean up the implementation:
class Evolution:
def __init__(self, node):
self.node = node
        self.type = self.property("type")               # call the helper below via self,
        self.typeid = self.property("typeid")           # not the builtin property()
        self.level = self.property("level")
        self.costperlevel = self.property("costperlevel")
def property(self, prop):
return getText(self.node.getElementsByTagName(prop)[0].childNodes)
evolutionList = [Evolution(evoNode) for evoNode in node.getElementsByTagName('evolution')]
Alternatively, you could use map:
evolutionList = map(Evolution, node.getElementsByTagName('evolution'))
A:
for evoNode in node.getElementsByTagName('evolution'):
evoName = getText(evoNode.getElementsByTagName("type")[0].childNodes)
evoId = getText(evoNode.getElementsByTagName("typeid")[0].childNodes)
evoLevel = getText(evoNode.getElementsByTagName("level")[0].childNodes)
evoCost = getText(evoNode.getElementsByTagName("costperlevel")[0].childNodes)
temporaryEvo = Evolutions(evoName, evoId, evoLevel, evoCost)
evolutionList.append(temporaryEvo)
# Or you can go with the 1 liner
evolutionList.append(Evolutions(evoName, evoId, evoLevel, evoCost))
I renamed your list because it shared the same name as your class and was confusing.
| Python classes from a for loop | I've got a piece of code which contains a for loop to draw things from an XML file;
for evoNode in node.getElementsByTagName('evolution'):
evoName = getText(evoNode.getElementsByTagName( "type")[0].childNodes)
evoId = getText(evoNode.getElementsByTagName( "typeid")[0].childNodes)
evoLevel = getText(evoNode.getElementsByTagName( "level")[0].childNodes)
evoCost = getText(evoNode.getElementsByTagName("costperlevel")[0].childNodes)
evolutions.append("%s x %s" % (evoLevel, evoName))
Currently it outputs into a list called evolutions as it says in the last line of that code; for this and several other for loops with very similar functionality I need it to output into a class instead.
class evolutions:
    def __init__(self, evoName, evoId, evoLevel, evoCost):
self.evoName = evoName
self.evoId = evoId
self.evoLevel = evoLevel
self.evoCost = evoCost
How can I create a series of instances of this class, each of which holds one result from that for loop? Or what is a more practical solution? This one doesn't really need the class but one of the others really does.
| [
"A list comprehension might be a little cleaner. I'd also move the parsing logic to the constructor to clean up the implemenation:\nclass Evolution:\n def __init__(self, node):\n self.node = node\n self.type = property(\"type\")\n self.typeid = property(\"typeid\")\n self.level = property(\"level\")\n self.costperlevel = property(\"costperlevel\")\n def property(self, prop):\n return getText(self.node.getElementsByTagName(prop)[0].childNodes)\n\nevolutionList = [Evolution(evoNode) for evoNode in node.getElementsByTagName('evolution')]\n\nAlternatively, you could use map:\nevolutionList = map(Evolution, node.getElementsByTagName('evolution'))\n\n",
"for evoNode in node.getElementsByTagName('evolution'):\n evoName = getText(evoNode.getElementsByTagName(\"type\")[0].childNodes)\n evoId = getText(evoNode.getElementsByTagName(\"typeid\")[0].childNodes)\n evoLevel = getText(evoNode.getElementsByTagName(\"level\")[0].childNodes)\n evoCost = getText(evoNode.getElementsByTagName(\"costperlevel\")[0].childNodes)\n\n temporaryEvo = Evolutions(evoName, evoId, evoLevel, evoCost)\n evolutionList.append(temporaryEvo)\n\n # Or you can go with the 1 liner\n evolutionList.append(Evolutions(evoName, evoId, evoLevel, evoCost))\n\nI renamed your list because it shared the same name as your class and was confusing.\n"
] | [
4,
3
] | [] | [] | [
"class",
"for_loop",
"python"
] | stackoverflow_0000440676_class_for_loop_python.txt |
Q:
Exception handling of a function in Python
Suppose I have a function definition:
def test():
print 'hi'
I get a TypeError whenever I give it an argument.
Now, I want to put the def statement in try. How do I do this?
A:
try:
test()
except TypeError:
print "error"
A:
In [1]: def test():
...: print 'hi'
...:
In [2]: try:
...: test(1)
...: except:
...: print 'exception'
...:
exception
Here is the relevant section in the tutorial
By the way. to fix this error, you should not wrap the function call in a try-except. Instead call it with the right number of arguments!
A:
You said
Now, I want to put the def statement
in try. How to do this.
The def statement is correct, it is not raising any exceptions. So putting it in a try won't do anything.
What raises the exception is the actual call to the function. So that should be put in the try instead:
try:
test()
except TypeError:
print "error"
A:
If you want to throw the error at call-time, which it sounds like you might want, you could try this approach:
def test(*args):
if args:
raise
print 'hi'
This will shift the error from the calling location to the function. It accepts any number of parameters via the *args list. Not that I know why you'd want to do that.
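A bare raise with no active exception is a bit obscure; a slightly more explicit sketch of the same idea (the error message is made up) would be:
def test(*args):
    if args:
        raise TypeError("test() takes no arguments (%d given)" % len(args))
    print 'hi'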
A:
A better way to handle a variable number of arguments in Python is as follows:
def foo(*args, **kwargs):
# args will hold the positional arguments
print args
# kwargs will hold the named arguments
print kwargs
# Now, all of these work
foo(1)
foo(1,2)
foo(1,2,third=3)
A:
This is valid:
try:
def test():
print 'hi'
except:
print 'error'
test()
| Exception handling of a function in Python | Suppose I have a function definition:
def test():
print 'hi'
I get a TypeError whenever I give it an argument.
Now, I want to put the def statement in try. How do I do this?
| [
"try: \n test()\nexcept TypeError:\n print \"error\"\n\n",
"In [1]: def test():\n ...: print 'hi'\n ...:\n\nIn [2]: try:\n ...: test(1)\n ...: except:\n ...: print 'exception'\n ...:\nexception\n\nHere is the relevant section in the tutorial\nBy the way. to fix this error, you should not wrap the function call in a try-except. Instead call it with the right number of arguments!\n",
"You said\n\nNow, I want to put the def statement\n in try. How to do this.\n\nThe def statement is correct, it is not raising any exceptions. So putting it in a try won't do anything.\nWhat raises the exception is the actual call to the function. So that should be put in the try instead:\ntry: \n test()\nexcept TypeError:\n print \"error\"\n\n",
"If you want to throw the error at call-time, which it sounds like you might want, you could try this aproach:\ndef test(*args):\n if args:\n raise\n print 'hi'\n\nThis will shift the error from the calling location to the function. It accepts any number of parameters via the *args list. Not that I know why you'd want to do that.\n",
"A better way to handle a variable number of arguments in Python is as follows:\ndef foo(*args, **kwargs):\n # args will hold the positional arguments\n print args\n\n # kwargs will hold the named arguments\n print kwargs\n\n\n# Now, all of these work\nfoo(1)\nfoo(1,2)\nfoo(1,2,third=3)\n\n",
"This is valid:\ntry:\n def test():\n print 'hi'\nexcept:\n print 'error'\n\n\ntest()\n\n"
] | [
5,
1,
1,
1,
1,
0
] | [] | [] | [
"exception_handling",
"python"
] | stackoverflow_0000438401_exception_handling_python.txt |
Q:
Safe escape function for terminal output
I'm looking for the equivalent of a urlencode for terminal output -- I need to make sure that garbage characters I (may) print from an external source don't end up doing funky things to my terminal, so a prepackaged function to escape special character sequences would be ideal.
I'm working in Python, but anything I can readily translate works too. TIA!
A:
Unfortunately "terminal output" is a very poorly defined criterion for filtering (see question 418176). I would suggest simply whitelisting the characters that you want to allow (which would be most of string.printable), and replacing all others with whatever escaped format you like (\FF, %FF, etc), or even simply stripping them out.
A:
$ ./command | cat -v
$ cat --help | grep nonprinting
-v, --show-nonprinting use ^ and M- notation, except for LFD and TAB
Here's the same in py3k based on android/cat.c:
#!/usr/bin/env python3
"""Emulate `cat -v` behaviour.
use ^ and M- notation, except for LFD and TAB
NOTE: python exits on ^Z in stdin on Windows
NOTE: newlines handling skewed towards interactive terminal.
Particularly, applying the conversion twice might *not* be a no-op
"""
import fileinput, sys
def escape(bytes):
for b in bytes:
assert 0 <= b < 0x100
if b in (0x09, 0x0a): # '\t\n'
yield b
continue
if b > 0x7f: # not ascii
yield 0x4d # 'M'
yield 0x2d # '-'
b &= 0x7f
if b < 0x20: # control char
yield 0x5e # '^'
b |= 0x40
elif b == 0x7f:
yield 0x5e # '^'
yield 0x3f # '?'
continue
yield b
if __name__ == '__main__':
write_bytes = sys.stdout.buffer.write
for bytes in fileinput.input(mode="rb"):
write_bytes(escape(bytes))
Example:
$ perl -e"print map chr,0..0xff" > bytes.bin
$ cat -v bytes.bin > cat-v.out
$ python30 cat-v.py bytes.bin > python.out
$ diff -s cat-v.out python.out
It prints:
Files cat-v.out and python.out are identical
A:
If logging or printing debugging output, I usually use repr() to get a harmless printable version of an object, including strings. This may or may not be what you wanted; the cat --show-nonprinting method others have used in other answers is better for lots of multi-line output.
x = get_weird_data()
print repr(x)
| Safe escape function for terminal output | I'm looking for the equivalent of a urlencode for terminal output -- I need to make sure that garbage characters I (may) print from an external source don't end up doing funky things to my terminal, so a prepackaged function to escape special character sequences would be ideal.
I'm working in Python, but anything I can readily translate works too. TIA!
| [
"Unfortunately \"terminal output\" is a very poorly defined criterion for filtering (see question 418176). I would suggest simply whitelisting the characters that you want to allow (which would be most of string.printable), and replacing all others with whatever escaped format you like (\\FF, %FF, etc), or even simply stripping them out.\n",
"\n$ ./command | cat -v\n\n$ cat --help | grep nonprinting\n-v, --show-nonprinting use ^ and M- notation, except for LFD and TAB\n\nHere's the same in py3k based on android/cat.c:\n#!/usr/bin/env python3\n\"\"\"Emulate `cat -v` behaviour.\n\nuse ^ and M- notation, except for LFD and TAB\n\nNOTE: python exits on ^Z in stdin on Windows\nNOTE: newlines handling skewed towards interactive terminal. \n Particularly, applying the conversion twice might *not* be a no-op\n\"\"\"\nimport fileinput, sys\n\ndef escape(bytes):\n for b in bytes:\n assert 0 <= b < 0x100\n\n if b in (0x09, 0x0a): # '\\t\\n' \n yield b\n continue\n\n if b > 0x7f: # not ascii\n yield 0x4d # 'M'\n yield 0x2d # '-'\n b &= 0x7f\n\n if b < 0x20: # control char\n yield 0x5e # '^'\n b |= 0x40\n elif b == 0x7f:\n yield 0x5e # '^'\n yield 0x3f # '?'\n continue\n\n yield b\n\nif __name__ == '__main__':\n write_bytes = sys.stdout.buffer.write \n for bytes in fileinput.input(mode=\"rb\"):\n write_bytes(escape(bytes))\n\nExample:\n\n$ perl -e\"print map chr,0..0xff\" > bytes.bin \n$ cat -v bytes.bin > cat-v.out \n$ python30 cat-v.py bytes.bin > python.out\n$ diff -s cat-v.out python.out \n\nIt prints:\n\nFiles cat-v.out and python.out are identical\n\n",
"If logging or printing debugging output, I usually use repr() to get a harmless printable version of an object, including strings. This may or may not be what you wanted; the cat --show-nonprinting method others have used in other answers is better for lots of multi-line output.\nx = get_weird_data()\nprint repr(x)\n\n"
] | [
3,
2,
1
] | [
"You could pipe it through strings\n./command | strings\n\nThis will strip out the non string characters\n"
] | [
-1
] | [
"escaping",
"python",
"terminal"
] | stackoverflow_0000437476_escaping_python_terminal.txt |
Q:
How to add additional information to a many-to-many relation?
I'm writing a program to manage orders and then print them.
An order is an object containing the ordering person, the date and the products this person orders. I'd like to store the amount of each product the person ordered, e.g. 3 eggs, 2 breads.
Is there a simpler way doing this with storm (the ORM I'm using) than splitting the order into smaller pieces so that every order contains only 1 product?
A:
What's wrong with adding extra columns to the intersection table of the many-to-many relationship?
CREATE TABLE orders (
person_id INT NOT NULL,
product_id INT NOT NULL,
quantity INT NOT NULL DEFAULT 1,
PRIMARY KEY (person_id, product_id),
FOREIGN KEY (person_id) REFERENCES persons(person_id),
FOREIGN KEY (product_id) REFERENCES products(product_id)
);
If you use an ORM that can't access additional columns in this table while doing many-to-many queries, you should still be able to access it simply as a dependent table of either products or persons.
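For what it's worth, Storm (the ORM the question mentions) can map such a table directly; the sketch below is a rough guess at how that might look, where the Person and Product classes and the column names are assumptions, not part of the question:
from storm.locals import Int, Reference, Storm

class OrderItem(Storm):
    __storm_table__ = "orders"
    __storm_primary__ = "person_id", "product_id"   # composite primary key
    person_id = Int()
    product_id = Int()
    quantity = Int(default=1)
    person = Reference(person_id, "Person.id")      # hypothetical Person class
    product = Reference(product_id, "Product.id")   # hypothetical Product class

Each row then carries its own quantity, so an order for 3 eggs stays one row instead of three.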
| How to add additional information to a many-to-many relation? | I'm writing a program to manage orders and then print them.
An order is an object containing the ordering person, the date and the products this person orders. I'd like to store the amount of each product the person ordered, e.g. 3 eggs, 2 breads.
Is there a simpler way doing this with storm (the ORM I'm using) than splitting the order into smaller pieces so that every order contains only 1 product?
| [
"What's wrong with adding extra columns to the intersection table of the many-to-many relationship? \nCREATE TABLE orders (\n person_id INT NOT NULL,\n product_id INT NOT NULL,\n quantity INT NOT NULL DEFAULT 1,\n PRIMARY KEY (person_id, product_id),\n FOREIGN KEY (person_id) REFERENCES persons(person_id),\n FOREIGN KEY (product_id) REFERENCES products(product_id)\n);\n\nIf you use an ORM that can't access additional columns in this table while doing many-to-many queries, you should still be able to access it simply as a dependent table of either products or persons.\n"
] | [
3
] | [] | [] | [
"database",
"orm",
"python",
"sqlite",
"storm_orm"
] | stackoverflow_0000441114_database_orm_python_sqlite_storm_orm.txt |
Q:
PIL vs RMagick/ruby-gd
For my next project I plan to create images with text and graphics. I'm comfortable with ruby, but interested in learning python. I figured this may be a good time because PIL looks like a great library to use. However, I don't know how it compares to what ruby has to offer (e.g. RMagick and ruby-gd). From what I can gather PIL has better documentation (does ruby-gd even have a homepage?) and more features. Just wanted to hear a few opinions to help me decide.
Thanks.
Vince
A:
PIL is a good library, use it. ImageMagick (what RMagick wraps) is a very heavy library that should be avoided if possible. It's good for doing local processing of images, say, a batch photo editor, but far too processor-inefficient for common image manipulation tasks for web.
EDIT: In response to the question, PIL supports drawing vector shapes. It can draw polygons, curves, lines, fills and text. I've used it in a project to produce rounded alpha corners to PNG images on the fly over the web. It essentially has most of the drawing features of GDI+ (in Windows) or GTK (in Gnome on Linux).
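For a feel of the drawing API, here is a minimal PIL sketch (the coordinates and colours are arbitrary; older PIL installs import Image directly, newer ones use from PIL import Image):
import Image, ImageDraw

img = Image.new("RGBA", (200, 100), (255, 255, 255, 0))   # transparent canvas
draw = ImageDraw.Draw(img)
draw.polygon([(10, 80), (60, 10), (110, 80)], fill=(200, 30, 30, 255))
draw.line((0, 90, 200, 90), fill=(0, 0, 0, 255))
draw.text((120, 40), "hello", fill=(0, 0, 0, 255))        # uses PIL's default font
img.save("out.png")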
A:
PIL has been around for a long time and is very stable, so it's probably a good candidate for your first Python project. The PIL documentation includes a helpful tutorial, which should get you up to speed quickly.
A:
ImageMagick is a huge library and will do everything under the sun, but many report memory issues with the RMagick variant and I have personally found it to be overkill for my needs.
As you say ruby-gd is a little thin on the ground when it comes to English documentation.... but GD is a doddle to install on most platforms and there is a little wrapper with some helpful examples called gruby that's worth a look. (If you're after alpha transparency make sure you install the latest GD lib)
For overall community and blog help, PIL's the way.
| PIL vs RMagick/ruby-gd | For my next project I plan to create images with text and graphics. I'm comfortable with ruby, but interested in learning python. I figured this may be a good time because PIL looks like a great library to use. However, I don't know how it compares to what ruby has to offer (e.g. RMagick and ruby-gd). From what I can gather PIL had better documentation (does ruby-gd even have a homepage?) and more features. Just wanted to hear a few opinions to help me decide.
Thanks.
Vince
| [
"PIL is a good library, use it. ImageMagic (what RMagick wraps) is a very heavy library that should be avoided if possible. Its good for doing local processing of images, say, a batch photo editor, but way too processor inefficient for common image manipulation tasks for web.\nEDIT: In response to the question, PIL supports drawing vector shapes. It can draw polygons, curves, lines, fills and text. I've used it in a project to produce rounded alpha corners to PNG images on the fly over the web. It essentially has most of the drawing features of GDI+ (in Windows) or GTK (in Gnome on Linux).\n",
"PIL has been around for a long time and is very stable, so it's probably a good candidate for your first Python project. The PIL documentation includes a helpful tutorial, which should get you up to speed quickly.\n",
"ImageMagic is a huge library and will do everything under the sun, but many report memory issues with the RMagick variant and I have personally found it to be an overkill for my needs.\nAs you say ruby-gd is a little thin on the ground when it comes to English documentation.... but GD is a doddle to install on post platforms and there is a little wrapper with some helpful examples called gruby thats worth a look. (If you're after alpha transparency make sure you install the latest GD lib)\nFor overall community blogy help, PIL's the way.\n"
] | [
7,
4,
3
] | [] | [] | [
"python",
"python_imaging_library",
"rmagick",
"ruby"
] | stackoverflow_0000439641_python_python_imaging_library_rmagick_ruby.txt |
Q:
Why am I seeing 'connection reset by peer' error?
I am testing cogen on a Mac OS X 10.5 box using python 2.6.1. I have a simple echo server and client-pumper that creates 10,000 client connections as a test. 1000, 5000, etc. all work splendidly. However at around 10,000 connections, the server starts dropping random clients - the clients see 'connection reset by peer'.
Is there some basic-networking background knowledge I'm missing here?
Note that my system is configured to handle open files (launchctl limit, sysctl (maxfiles, etc.), and ulimit -n are all valid; been there, done that). Also, I've verified that cogen is picking to use kqueue under the covers.
If I add a slight delay to the client-connect() calls everything works great. Thus, my question is, why would a server under stress drop other clients when there's a high frequency of connections in a short period of time? Anyone else ever run into this?
For completeness' sake, here's my code.
Here is the server:
# echoserver.py
from cogen.core import sockets, schedulers, proactors
from cogen.core.coroutines import coroutine
import sys, socket
port = 1200
@coroutine
def server():
srv = sockets.Socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
addr = ('0.0.0.0', port)
srv.bind(addr)
srv.listen(64)
print "Listening on", addr
while 1:
conn, addr = yield srv.accept()
m.add(handler, args=(conn, addr))
client_count = 0
@coroutine
def handler(sock, addr):
global client_count
client_count += 1
print "SERVER: [connect] clients=%d" % client_count
fh = sock.makefile()
yield fh.write("WELCOME TO (modified) ECHO SERVER !\r\n")
yield fh.flush()
try:
while 1:
line = yield fh.readline(1024)
#print `line`
if line.strip() == 'exit':
yield fh.write("GOOD BYE")
yield fh.close()
raise sockets.ConnectionClosed('goodbye')
yield fh.write(line)
yield fh.flush()
except sockets.ConnectionClosed:
pass
fh.close()
sock.close()
client_count -= 1
print "SERVER: [disconnect] clients=%d" % client_count
m = schedulers.Scheduler()
m.add(server)
m.run()
And here is the client:
# echoc.py
import sys, os, traceback, socket, time
from cogen.common import *
from cogen.core import sockets
port, conn_count = 1200, 10000
clients = 0
@coroutine
def client(num):
sock = sockets.Socket()
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
reader = None
try:
try:
# remove this sleep and we start to see
# 'connection reset by peer' errors
time.sleep(0.001)
yield sock.connect(("127.0.0.1", port))
except Exception:
print 'Error in client # ', num
traceback.print_exc()
return
global clients
clients += 1
print "CLIENT #=%d [connect] clients=%d" % (num,clients)
reader = sock.makefile('r')
while 1:
line = yield reader.readline(1024)
except sockets.ConnectionClosed:
pass
except:
print "CLIENT #=%d got some other error" % num
finally:
if reader: reader.close()
sock.close()
clients -= 1
print "CLIENT #=%d [disconnect] clients=%d" % (num,clients)
m = Scheduler()
for i in range(0, conn_count):
m.add(client, args=(i,))
m.run()
Thanks for any information!
A:
Python's socket I/O sometimes suffers from connection reset by peer. It has to do with the Global Interpreter Lock and how threads are scheduled. I blogged some references on the subject.
The time.sleep(0.0001) appears to be the recommended solution because it adjusts thread scheduling and allows the socket I/O to finish.
| Why am I seeing 'connection reset by peer' error? | I am testing cogen on a Mac OS X 10.5 box using python 2.6.1. I have a simple echo server and client-pumper that creates 10,000 client connections as a test. 1000, 5000, etc. all work splendidly. However at around 10,000 connections, the server starts dropping random clients - the clients see 'connection reset by peer'.
Is there some basic-networking background knowledge I'm missing here?
Note that my system is configured to handle open files (launchctl limit, sysctl (maxfiles, etc.), and ulimit -n are all valid; been there, done that). Also, I've verified that cogen is picking to use kqueue under the covers.
If I add a slight delay to the client-connect() calls everything works great. Thus, my question is, why would a server under stress drop other clients when there's a high frequency of connections in a short period of time? Anyone else ever run into this?
For completeness' sake, here's my code.
Here is the server:
# echoserver.py
from cogen.core import sockets, schedulers, proactors
from cogen.core.coroutines import coroutine
import sys, socket
port = 1200
@coroutine
def server():
srv = sockets.Socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
addr = ('0.0.0.0', port)
srv.bind(addr)
srv.listen(64)
print "Listening on", addr
while 1:
conn, addr = yield srv.accept()
m.add(handler, args=(conn, addr))
client_count = 0
@coroutine
def handler(sock, addr):
global client_count
client_count += 1
print "SERVER: [connect] clients=%d" % client_count
fh = sock.makefile()
yield fh.write("WELCOME TO (modified) ECHO SERVER !\r\n")
yield fh.flush()
try:
while 1:
line = yield fh.readline(1024)
#print `line`
if line.strip() == 'exit':
yield fh.write("GOOD BYE")
yield fh.close()
raise sockets.ConnectionClosed('goodbye')
yield fh.write(line)
yield fh.flush()
except sockets.ConnectionClosed:
pass
fh.close()
sock.close()
client_count -= 1
print "SERVER: [disconnect] clients=%d" % client_count
m = schedulers.Scheduler()
m.add(server)
m.run()
And here is the client:
# echoc.py
import sys, os, traceback, socket, time
from cogen.common import *
from cogen.core import sockets
port, conn_count = 1200, 10000
clients = 0
@coroutine
def client(num):
sock = sockets.Socket()
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
reader = None
try:
try:
# remove this sleep and we start to see
# 'connection reset by peer' errors
time.sleep(0.001)
yield sock.connect(("127.0.0.1", port))
except Exception:
print 'Error in client # ', num
traceback.print_exc()
return
global clients
clients += 1
print "CLIENT #=%d [connect] clients=%d" % (num,clients)
reader = sock.makefile('r')
while 1:
line = yield reader.readline(1024)
except sockets.ConnectionClosed:
pass
except:
print "CLIENT #=%d got some other error" % num
finally:
if reader: reader.close()
sock.close()
clients -= 1
print "CLIENT #=%d [disconnect] clients=%d" % (num,clients)
m = Scheduler()
for i in range(0, conn_count):
m.add(client, args=(i,))
m.run()
Thanks for any information!
| [
"Python's socket I/O sometimes suffers from connection reset by peer. It has to do with the Global Interpreter Lock and how threads are scheduled. I blogged some references on the subject.\nThe time.sleep(0.0001) appears to be the recommended solution because it adjusts thread scheduling and allows the socket I/O to finish.\n"
] | [
8
] | [] | [] | [
"network_programming",
"python",
"sockets",
"system"
] | stackoverflow_0000441374_network_programming_python_sockets_system.txt |
Q:
Efficient layout for a distributed python server?
If I wanted to have Python distributed across multiple processors on multiple computers, what would my best approach be? If I have 3 eight-core servers, that would mean I would have to run 24 python processes. I would be using the multiprocessing library, and to share objects it looks like the best idea would be to use a manager. I want all the nodes to work together as one big process, so one manager would be ideal, yet that would give my server a single point of failure. Is there a better solution? Would replicating a manager's object store be a good idea?
Also, if the manager is going to be doing all of the database querying, would it make sense to have it on the same machine as the database?
A:
I think more information would be helpful, on what sort of thing you are serving, what sort of database you'd use, what sort of latency/throughput requirements you have, etc. Lots of stuff depends on your requirements: eg. if your system is a typical server which has a lot of reads and not so many writes, and you don't have a problem with reading slightly stale data, you could perform local reads against a cache on each process and only push the writes to the database, broadcasting the results back to the caches.
For a start, I think it depends on what the manager has to do. After all, worrying about single points of failure may be pointless if your system is so trivial that failure is not going to occur short of catastrophic hardware failure. But if you just have one, having it on the same machine as the database makes sense. You reduce latency, and your system can't survive if one goes down without the other anyway.
A:
You have two main challenges in distributing the processes:
Co-ordinating the work being split up, distributed and re-collected (mapped and reduced, you might say)
Sharing the right live data between co-dependent processes
The answer to #1 will very much depend on what sort of processing you're doing. If it's easily horizontally partitionable (i.e. you can split the bigger task into several independent smaller tasks), a load balancer like HAProxy might be a convenient way to spread the load.
If the task isn't trivially horizontally partitionable, I'd first look to see if existing tools, like Hadoop, would work for me. Distributed task management is a difficult task to get right, and the wheel's already been invented.
As for #2, sharing state between the processes, your life will be much easier if you share an absolute minimum, and then only share it explicitly and in a well-defined way. I would personally use SQLAlchemy backed by your RDBMS of choice for even the smallest of tasks. The query interface is powerful and pain-free enough for small and large projects alike.
A:
Seems the gist of your question is how to share objects and state. More information, particularly size, frequency, rate of change, and source of data would be very helpful.
For cross-machine shared memory you probably want to look at memcached. You can store your data and access it quickly and easily from any of the worker processes.
If your scenario is more of a simple job distribution model you might want to look at a queuing server - put your jobs and their associated data onto a queue and have the workers pick up jobs from the queue. Beanstalkd is probably a good choice for the queue, and here's a getting started tutorial.
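As a rough illustration of the memcached approach, here is a minimal sketch using the python-memcached client; the key names and the do_work function are hypothetical:
import memcache

# every worker process talks to the same memcached instance(s)
mc = memcache.Client(['127.0.0.1:11211'])

def process_job(job_id):
    shared = mc.get('config')             # hypothetical shared-state key
    result = do_work(job_id, shared)      # hypothetical work function
    mc.set('result:%s' % job_id, result)  # publish where other workers can read it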
| Efficient layout for a distributed python server? | If I wanted to have Python distributed across multiple processors on multiple computers, what would my best approach be? If I have 3 eight-core servers, that would mean I would have to run 24 python processes. I would be using the multiprocessing library, and to share objects it looks like the best idea would be to use a manager. I want all the nodes to work together as one big process, so one manager would be ideal, yet that would give my server a single point of failure. Is there a better solution? Would replicating a manager's object store be a good idea?
Also, if the manager is going to be doing all of the database querying, would it make sense to have it on the same machine as the database?
| [
"I think more information would be helpful, on what sort of thing you are serving, what sort of database you'd use, what sort of latency/throughput requirements you have, etc. Lots of stuff depends on your requirements: eg. if your system is a typical server which has a lot of reads and not so many writes, and you don't have a problem with reading slightly stale data, you could perform local reads against a cache on each process and only push the writes to the database, broadcasting the results back to the caches.\nFor a start, I think it depends on what the manager has to do. After all, worrying about single points of failure may be pointless if your system is so trivial that failure is not going to occur short of catastrophic hardware failure. But if you just have one, having it on the same machine as the database makes sense. You reduce latency, and your system can't survive if one goes down without the other anyway.\n",
"You have two main challenges in distributing the processes:\n\nCo-ordinating the work being split up, distributed and re-collected (mapped and reduced, you might say)\nSharing the right live data between co-dependent processes\n\nThe answer to #1 will very much depend on what sort of processing you're doing. If it's easily horizontally partitionable (i.e. you can split the bigger task into several independent smaller tasks), a load balancer like HAProxy might be a convenient way to spread the load.\nIf the task isn't trivially horizontally partitionable, I'd first look to see if existing tools, like Hadoop, would work for me. Distributed task management is a difficult task to get right, and the wheel's already been invented.\nAs for #2, sharing state between the processes, your life will be much easier if you share an absolute minimum, and then only share it explicitly and in a well-defined way. I would personally use SQLAlchemy backed by your RDBMS of choice for even the smallest of tasks. The query interface is powerful and pain-free enough for small and large projects alike.\n",
"Seems the gist of your question is how to share objects and state. More information, particularly size, frequency, rate of change, and source of data would be very helpful.\nFor cross machine shared memory you probably want to look at memcached. You can store your data and access it quickly and easy from any of the worker processes.\nIf your scenario is more of a simple job distribution model you might want to look at a queuing server - put your jobs and their associated data onto a queue and have the workers pick up jobs from the queue. Beanstalkd is probably a good choice for the queue, and here's a getting started tutorial.\n"
] | [
3,
3,
0
] | [] | [] | [
"multiprocessing",
"python"
] | stackoverflow_0000441061_multiprocessing_python.txt |
Q:
Generic catch for python
I have some extremely weird behavior that seems to result in silent exceptions. How can I write a general try catch where I can debug all exceptions? Something along the lines of:
try:
# something that fails
except e:
print e
A bit more about the problem at hand in detail:
I have a Django app that on my computer (Ubuntu Linux 8.10) works fine both through runserver and mod-python. On the deploy server (Ubuntu Linux 8.10) it works fine through runserver, but fails via apache in mod-python.
I have reduced the cause down to a part of the app that uses Berkeley DB (bsddb.db), and secondary keys. The callback method for secondary keys uses pickle to format the keys. It fails when I call pickle on a single value. However, it only fails when I use cPickle, and using pickle on the same values outside the callback function also works.
I just want to know why it fails with cPickle.
A:
Exceptions are already printed by default before program termination. If you want to send the error somewhere else (not print it) you can do this:
try:
something()
except Exception as e:
send_somewhere(traceback.format_exception(*sys.exc_info()))
raise # reraises the exception
note that this format using the as keyword is for Python 2.6 and later. The old way was:
except Exception, e:
A:
The traceback module is quite useful for formatting tracebacks. You can then write it to a logfile.
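A minimal sketch of that approach, where something() stands in for the failing code and the log path is arbitrary:
import traceback

try:
    something()                        # the code that fails silently
except Exception:
    log = open('/tmp/myapp.log', 'a')  # arbitrary log path
    traceback.print_exc(file=log)
    log.close()
    raise                              # re-raise so callers still see it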
A:
Does this work? :
except BaseException, e:
| Generic catch for python | I have some extremely weird behavior that seems to result in silent exceptions. How can I write a general try catch where I can debug all exceptions? Something along the lines of:
try:
# something that fails
except e:
print e
A bit more about the problem at hand in detail:
I have a Django app that on my computer (Ubuntu Linux 8.10) works fine both through runserver and mod-python. On the deploy server (Ubuntu Linux 8.10) it works fine through runserver, but fails via apache in mod-python.
I have reduced the cause down to a part of the app that uses Berkeley DB (bsddb.db), and secondary keys. The callback method for secondary keys uses pickle to format the keys. It fails when I call pickle on a single value. However, it only fails when I use cPickle, and using pickle on the same values outside the callback function also works.
I just want to know why it fails with cPickle.
| [
"Exceptions are already printed by default before program termination. If you want to send the error somewhere else (not print it) you can do this:\ntry:\n something()\nexcept Exception as e:\n send_somewhere(traceback.format_exception(*sys.exc_info()))\n raise # reraises the exception\n\nnote that this format using the as keyword is for python > 2.6. The old way was:\nexcept Exception, e:\n\n",
"The traceback module is quite useful for formatting tracebacks. You can then write it to a logfile.\n",
"Does this work? :\nexcept BaseException, e:\n\n"
] | [
196,
5,
2
] | [] | [] | [
"exception",
"python"
] | stackoverflow_0000442343_exception_python.txt |
Q:
How do I split email address/password string in two in Python?
Let's say we have this string: [18] email@email.com:pwd:
email@email.com is the email and pwd is the password.
Also, let's say we have this variable with a value
f = "[18] email@email.com:pwd:"
I would like to know if there is a way to make two other variables named var1 and var2, where the var1 variable will take the exact email info from variable f and var2 the exact password info, also from f.
The result after running the app should be like:
var1 = "email@email.com"
and
var2 = "pwd"
A:
>>> var1, var2, _ = "[18] email@email.com:pwd:"[5:].split(":")
>>> var1, var2
('email@email.com', 'pwd')
Or if the "[18]" is not a fixed prefix:
>>> var1, var2, _ = "[18] email@email.com:pwd:".split("] ")[1].split(":")
>>> var1, var2
('email@email.com', 'pwd')
A:
import re
var1, var2 = re.findall(r'\s(.*?):(.*):', f)[0]
If findall()[0] feels like two steps forward and one back:
var1, var2 = re.search(r'\s(.*?):(.*):', f).groups()
A:
var1, var2 = re.split(r'[ :]', f)[1:3]
A:
To split on the first colon ":", you can do:
# keep all after last space
f1= f.rpartition(" ")[2]
var1, _, var2= f1.partition(":")
| How do I split email address/password string in two in Python? | Let's say we have this string: [18] email@email.com:pwd:
email@email.com is the email and pwd is the password.
Also, let's say we have this variable with a value
f = "[18] email@email.com:pwd:"
I would like to know if there is a way to make two other variables named var1 and var2, where the var1 variable will take the exact email info from variable f and var2 the exact password info, also from f.
The result after running the app should be like:
var1 = "email@email.com"
and
var2 = "pwd"
| [
">>> var1, var2, _ = \"[18] email@email.com:pwd:\"[5:].split(\":\")\n>>> var1, var2\n('email@email.com', 'pwd')\n\nOr if the \"[18]\" is not a fixed prefix:\n>>> var1, var2, _ = \"[18] email@email.com:pwd:\".split(\"] \")[1].split(\":\")\n>>> var1, var2\n('email@email.com', 'pwd')\n\n",
"import re\nvar1, var2 = re.findall(r'\\s(.*?):(.*):', f)[0]\n\nIf findall()[0] feels like two steps forward and one back:\nvar1, var2 = re.search(r'\\s(.*?):(.*):', f).groups()\n\n",
"var1, var2 = re.split(r'[ :]', f)[1:3]\n\n",
"To split on the first colon \":\", you can do:\n# keep all after last space\nf1= f.rpartition(\" \")[2]\nvar1, _, var2= f1.partition(\":\")\n\n"
] | [
9,
7,
5,
1
] | [] | [] | [
"parsing",
"python",
"split"
] | stackoverflow_0000436394_parsing_python_split.txt |
Q:
How do I create an HTTP server in Python using the first available port?
I want to avoid hardcoding the port number as in the following:
httpd = make_server('', 8000, simple_app)
The reason I'm creating the server this way is that I want to use it as a 'kernel' for an Adobe AIR app so it will communicate using PyAMF. Since I'm running this on the client side it is very possible that any port I define is already in use. If there is a better way to do this and I am asking the wrong question please let me know.
A:
The problem is that you need a known port for the application to use. But if you give a port number of 0, I believe the OS will provide you with the first available unused port.
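A sketch of that idea with wsgiref's make_server, reusing the question's simple_app; after binding, the OS-assigned port can be read back from the server object:
from wsgiref.simple_server import make_server

httpd = make_server('', 0, simple_app)  # port 0: let the OS pick a free port
host, port = httpd.server_address
print "Serving on port %d" % port       # hand this port to the AIR app
httpd.serve_forever()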
A:
The problem is that you need a known port for the application to use. But if you give a port number of 0, I believe the OS will provide you with the first available unused port.
You are correct, sir. Here's how that works:
>>> import socket
>>> s = socket.socket()
>>> s.bind(("", 0))
>>> s.getsockname()
('0.0.0.0', 54485)
I now have a socket bound to port 54485.
A:
Is make_server a function that you've written? More specifically, do you handle the code that creates the sockets? If you do, there should be a way where you don't specify a port number (or you specify 0 as a port number) and the OS will pick an available one for you.
Besides that, you could just pick a random port number, like 54315... it's unlikely someone will be using that one.
| How do I create an HTTP server in Python using the first available port? | I want to avoid hardcoding the port number as in the following:
httpd = make_server('', 8000, simple_app)
The reason I'm creating the server this way is that I want to use it as a 'kernel' for an Adobe AIR app so it will communicate using PyAMF. Since I'm running this on the client side it is very possible that any port I define is already in use. If there is a better way to do this and I am asking the wrong question please let me know.
| [
"The problem is that you need a known port for the application to use. But if you give a port number of 0, I believe the OS will provide you with the first available unused port.\n",
"\nThe problem is that you need a known port for the application to use. But if you give a port number of 0, I believe the OS will provide you with the first available unused port.\n\nYou are correct, sir. Here's how that works:\n>>> import socket\n>>> s = socket.socket()\n>>> s.bind((\"\", 0))\n>>> s.getsockname()\n('0.0.0.0', 54485)\n\nI now have a socket bound to port 54485.\n",
"Is make_server a function that you've written? More specifically, do you handle the code that creates the sockets? If you do, there should be a way where you don't specify a port number (or you specify 0 as a port number) and the OS will pick an available one for you.\nBesides that, you could just pick a random port number, like 54315... it's unlikely someone will be using that one.\n"
] | [
7,
7,
2
] | [
"Firewalls allow you to permit or deny traffic on a port-by-port basis. For this reason alone, an application without a well-defined port should expect to run into all kinds of problems in a client installation. \nI say pick a random port, and make it very easy for the user to change the port if need be. \nHere's a good starting place for well-known ports.\n"
] | [
-2
] | [
"httpserver",
"python"
] | stackoverflow_0000442062_httpserver_python.txt |
Q:
How to properly interact with a process using subprocess module
I'm having problems redirecting stdio of another program using the subprocess module. Just reading from stdout results in hanging, and Popen.communicate() works but it closes pipes after reading/writing. What's the easiest way to implement this?
I was playing around with this on windows:
import subprocess
proc = subprocess.Popen('python -c "while True: print \'Hi %s!\' % raw_input()"',
shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
while True:
proc.stdin.write('world\n')
proc_read = proc.stdout.readline()
if proc_read:
print proc_read
A:
Doesn't fit 100% to your example but helps to understand the underlying issue: Process P starts child C. Child C writes something to its stdout. stdout of C is a pipe which has a 4096 character buffer and the output is shorter than that. Now, C waits for some input. For C, everything is fine.
P waits for the output which will never come because the OS sees no reason to flush the output buffer of C (with so little data in it). Since P never gets the output of C, it will never write anything to C, so C hangs waiting for the input from P.
Fix: Use flush after every write to a pipe forcing the OS to send the data now.
In your case, adding proc.stdin.flush() in the main while loop and a sys.stdout.flush() in the child loop after the print should fix your problem.
You should also consider moving the code which reads from the other process into a thread. The idea here is that you can never know when the data will arrive and using a thread helps you to understand these issues while you write the code which processes the results.
At this place, I wanted to show you the new Python 2.6 documentation but it doesn't explain the flush issue, either :( Oh well ...
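Applied to the example above, the fix might look like this (a sketch; python -u makes the child's stdout unbuffered, standing in for the sys.stdout.flush() described above):
import subprocess

proc = subprocess.Popen('python -u -c "while True: print \'Hi %s!\' % raw_input()"',
                        shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT)

while True:
    proc.stdin.write('world\n')
    proc.stdin.flush()                 # push the line to the child now
    print proc.stdout.readline().rstrip()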
| How to properly interact with a process using subprocess module | I'm having problems redirecting stdio of another program using the subprocess module. Just reading from stdout results in hanging, and Popen.communicate() works but it closes pipes after reading/writing. What's the easiest way to implement this?
I was playing around with this on windows:
import subprocess
proc = subprocess.Popen('python -c "while True: print \'Hi %s!\' % raw_input()"',
shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
while True:
proc.stdin.write('world\n')
proc_read = proc.stdout.readline()
if proc_read:
print proc_read
| [
"Doesn't fit 100% to your example but helps to understand the underlying issue: Process P starts child C. Child C writes something to its stdout. stdout of C is a pipe which has a 4096 character buffer and the output is shorter than that. Now, C waits for some input. For C, everything is fine.\nP waits for the output which will never come because the OS sees no reason to flush the output buffer of C (with so little data in it). Since P never gets the output of C, it will never write anything to C, so C hangs waiting for the input from P.\nFix: Use flush after every write to a pipe forcing the OS to send the data now. \nIn your case, adding proc.stdin.flush() in the main while loop and a sys.stdout.flush() in the child loop after the print should fix your problem.\nYou should also consider moving the code which reads from the other process into a thread. The idea here is that you can never know when the data will arrive and using a thread helps you to understand these issues while you write the code which processes the results.\nAt this place, I wanted to show you the new Python 2.6 documentation but it doesn't explain the flush issue, either :( Oh well ...\n"
] | [
21
] | [] | [] | [
"python",
"subprocess"
] | stackoverflow_0000443057_python_subprocess.txt |
Q:
cascading forms in Django/else using any Pythonic framework
Can anyone point to an example written in Python (django preferred) with ajax for cascading forms? Cascading forms are basically forms whose field values change when another field's value changes. Example: choose a Country, and then the States will change...
A:
This is (mostly) front-end stuff.
As you may have noticed Django attempts to leave all the AJAX stuff up to you, so I don't think you'll find anything built in to do this.
However, using JS (which is what you'll have to do in order to do this without submitting a billion forms manually), you could easily have a django-base view your JS could communicate with:
def get_states(request, country):
# work out which states are available
    #import simplejson as sj
return sj....
Then bind your AJAX request to the onchange event of the select (I can't remember if that's right for select boxes) and populate the next field based on the return of the JSON query.
10 minute job with jquery and simplejson.
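A rough sketch of such a view; the STATES lookup table is hypothetical and the URL pattern is up to you:
from django.http import HttpResponse
import simplejson as sj

STATES = {'US': ['Alabama', 'Alaska'], 'IN': ['Kerala', 'Goa']}  # hypothetical data

def get_states(request, country):
    choices = STATES.get(country, [])
    return HttpResponse(sj.dumps(choices), mimetype='application/json')

Your onchange handler then fetches /get_states/<country>/ (or whatever URL you map it to) and rebuilds the states select box from the returned JSON list.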
A:
I would also suggest considering getting a mapping of all the data at once instead of requesting subfield values one by one. Unless the subfield choices change frequently (do states/cities change?) or are huge in number (>1000), this should offer the best performance and it is less complex.
You don't even need to create a separate view, just include a chunk of JavaScript (a JSON mapping more precisely) with your response containing the form.
| cascading forms in Django/else using any Pythonic framework | Can anyone point to an example written in Python (django preferred) with ajax for cascading forms? Cascading forms are basically forms whose field values change when another field's value changes. Example: choose a Country, and then the States will change...
| [
"This is (mostly) front-end stuff.\nAs you may have noticed Django attempts to leave all the AJAX stuff up to you, so I don't think you'll find anything built in to do this.\nHowever, using JS (which is what you'll have to do in order to do this without submitting a billion forms manually), you could easily have a django-base view your JS could communicate with:\ndef get_states(request, country):\n # work out which states are available\n #import simplesjon as sj\n return sj.... \n\nThen bind your AJAX request to the onchange event of the select (I can't remember if that's right for select boxes) and populate the next field based on the return of the JSON query.\n10 minute job with jquery and simplejson.\n",
"I would also suggest considering getting a mapping of all data once instead of requesting subfield values one by one. Unless the subfield choices change frequently (states/cities change?) or huge in numbers (>1000) this should offer best performance and it is less complex.\nYou don't even need to create a seperate view, just include a chunk of JavaScript (a JSON mapping more precisely) with your response containing the form.\n"
] | [
3,
1
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000442596_django_python.txt |
Q:
Jython, Query multiple columns dynamically
I am working with an Oracle database and Jython.
I can get data pulled from the database no problem.
results = statement.executeQuery("select %s from %s where column_id = '%s'" % (column, table, id))
This works fine if I want to pull one column of data.
Say I wanted to loop through a list like this:
columns = ['column1', 'column2', 'column3', 'column4', 'column5']
So the query ended up looking like this:
results = statement.executeQuery("select %s, %s, %s, %s, %s from %s where column_id = '%s'" % (column1, column2, column3, column4, column5, table, id))
How could I do this?
The reason I want to achieve this is because I may want to pull 6 or 7 columns, and I would like to store different queries in an external file.
I hope you understand what I mean. If not I will try to re word it as best as I can.
Cheers
Arthur
A:
You could simply substitute all the columns into your query as a single string, like this:
columns = ['column1', 'column2', 'column3', 'column4', 'column5']
results = statement.executeQuery("select %s from %s where column_id = '%s'" % (",".join(columns), table, id))
By the way, this isn't protecting against SQL injection, so I'm assuming the columns, table, and id inputs are program-generated or sanitized.
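If the id comes from user input, a JDBC PreparedStatement at least parameterizes the value part (column and table names would still need whitelisting); a sketch, assuming conn is the underlying java.sql.Connection:
columns = ['column1', 'column2', 'column3']
sql = "select %s from %s where column_id = ?" % (",".join(columns), table)
stmt = conn.prepareStatement(sql)   # conn: assumed java.sql.Connection
stmt.setString(1, id)               # the value is safely parameterized
results = stmt.executeQuery()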
| Jython, Query multiple columns dynamically | I am working with an Oracle database and Jython.
I can get data pulled from the database no problem.
results = statement.executeQuery("select %s from %s where column_id = '%s'" % (column, table, id))
This works fine if I want to pull one column of data.
Say I wanted to loop through a list like this:
columns = ['column1', 'column2', 'column3', 'column4', 'column5']
So the query ended up looking like this:
results = statement.executeQuery("select %s, %s, %s, %s, %s from %s where column_id = '%s'" % (column1, column2, column3, column4, column5, table, id))
How could I do this?
The reason I want to achieve this is because I may want to pull 6 or 7 columns, and I would like to store different queries in an external file.
I hope you understand what I mean. If not I will try to re word it as best as I can.
Cheers
Arthur
| [
"You could simply substitute all the columns into your query as a single string, like this:\ncolumns = ['column1', 'column2', 'column3', 'column4', 'column5']\nresults = statement.executeQuery(\"select %s from %s where column_id = '%s'\" % (\",\".join(columns), table, id))\n\nBy the way, this isn't protecting against SQL injection, so I'm assuming the columns, table, and id inputs are program-generated or sanitized.\n"
] | [
3
] | [] | [] | [
"jython",
"oracle10g",
"python"
] | stackoverflow_0000443224_jython_oracle10g_python.txt |
Q:
python, index errors
I've got some code which draws data from an xml file, but it seems to have randomly started throwing:
Traceback (most recent call last):
File "C:\Users\mike\Documents\python\arl xml\turn 24 reader", line 52, in <module>
unitCount = getText(evoNode.getElementsByTagName("count")[0].childNodes)
IndexError: list index out of range
It was fine for the first couple of times I ran it. I don't know if I changed it by accident, but now it's throwing the error.
This is an example of the section of the xml it's trying to use;
<unit>
<count>1200</count>
<type>Zweihander Doppelsoldners</type>
<typeid>102</typeid>
</unit>
and here's the code that it complains about:
for unitNode in node.getElementsByTagName('unit'):
unitName = getText(evoNode.getElementsByTagName("type")[0].childNodes)
unitId = getText(evoNode.getElementsByTagName("typeid")[0].childNodes)
unitCount = getText(evoNode.getElementsByTagName("count")[0].childNodes)
unitList.append("%s x %s" % (unitName, unitCount))
While I accept that it complains about the count line first because count is the highest of the three in the units section of the xml file, I'm still not sure why it's complaining, given that it successfully runs a very similar set of code from which this was cloned and edited.
Anyone know what I can do or can suggest ways to refine the question?
A:
A simple idea: check if evoNode.getElementsByTagName("count") returns a non-empty list:
counts = evoNode.getElementsByTagName("count")
if counts:
unitCount = getText(counts[0].childNodes)
Of course, the check should be applied to all the lists retrieved by your code.
One other thing: you iterate using unitNode, but inside the loop you access evoNode, which is probably the same for every iteration.
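Putting both points together, the loop might look like this sketch:
for unitNode in node.getElementsByTagName('unit'):
    names = unitNode.getElementsByTagName("type")
    counts = unitNode.getElementsByTagName("count")
    if not names or not counts:
        continue  # skip units missing either tag
    unitName = getText(names[0].childNodes)
    unitCount = getText(counts[0].childNodes)
    unitList.append("%s x %s" % (unitName, unitCount))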
A:
As gimel said, you should check whether getElementsByTagName("count") returns a non-empty list, but back to your problem:
If it was working before, then my guess is that the problem is with the source where you get the XML.
| python, index errors | I've got some code which draws data from an xml file, but it seems to have randomly started throwing:
Traceback (most recent call last):
File "C:\Users\mike\Documents\python\arl xml\turn 24 reader", line 52, in <module>
unitCount = getText(evoNode.getElementsByTagName("count")[0].childNodes)
IndexError: list index out of range
It was fine for the first couple of times I ran it. I don't know if I changed it by accident, but now it's throwing the error.
This is an example of the section of the xml it's trying to use;
<unit>
<count>1200</count>
<type>Zweihander Doppelsoldners</type>
<typeid>102</typeid>
</unit>
and here's the code that it complains about:
for unitNode in node.getElementsByTagName('unit'):
unitName = getText(evoNode.getElementsByTagName("type")[0].childNodes)
unitId = getText(evoNode.getElementsByTagName("typeid")[0].childNodes)
unitCount = getText(evoNode.getElementsByTagName("count")[0].childNodes)
unitList.append("%s x %s" % (unitName, unitCount))
While I accept that it complains about the count line first because count is the highest of the three in the units section of the xml file, I'm still not sure why it's complaining, given that it successfully runs a very similar set of code from which this was cloned and edited.
Anyone know what I can do or can suggest ways to refine the question?
| [
"A simple idea: check if evoNode.getElementsByTagName(\"count\") returns a non-empty list:\ncounts = evoNode.getElementsByTagName(\"count\")\nif counts:\n unitCount = getText(counts[0].childNodes)\n\nOf course, the check should be applied to all the lists retrieved by your code.\nOne Other thing, you iterate using unitNode, but inside the loop, you access evoNode, which is probably the same for every iteration.\n",
"As gimel said you should check getElementsByTagName(\"count\") if it returns non empty list, but back to your problem:\nIf you said that it was working before then my guess that the problem is with the source where you get the XML.\n"
] | [
2,
1
] | [] | [] | [
"for_loop",
"python",
"xml"
] | stackoverflow_0000443813_for_loop_python_xml.txt |
Q:
How do I find out the path of the currently executing script?
Duplicate of: In Python, how do I get the path and name of the file that is currently executing?
I would like to find out the path to the currently executing script.
I have tried os.getcwd() but that only returns the directory I ran the script from not the actual directory the script is stored.
A:
In Python, __file__ identifies the current Python file. Thus:
print "I'm inside Python file %s" % __file__
will print the current Python file. Note that this works in imported Python modules, as well as scripts.
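Since the question asks for the directory the script is stored in, combine __file__ with os.path:
import os

script_path = os.path.abspath(__file__)    # full path to this file
script_dir = os.path.dirname(script_path)  # the directory the script lives in
print "Script is stored in %s" % script_dir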
A:
How about using sys.path[0]
You can do something like
print os.path.join(sys.path[0], sys.argv[0])
https://docs.python.org/library/sys.html
| How do I find out the path of the currently executing script? | Duplicate of: In Python, how do I get the path and name of the file that is currently executing?
I would like to find out the path to the currently executing script.
I have tried os.getcwd() but that only returns the directory I ran the script from not the actual directory the script is stored.
| [
"In Python, __file__ identifies the current Python file. Thus:\nprint \"I'm inside Python file %s\" % __file__\n\nwill print the current Python file. Note that this works in imported Python modules, as well as scripts.\n",
"How about using sys.path[0] \nYou can do something like\n'print os.path.join(sys.path[0], sys.argv[0])'\nhttps://docs.python.org/library/sys.html\n"
] | [
8,
1
] | [] | [] | [
"python"
] | stackoverflow_0000444376_python.txt |
Q:
How do I use a 2-d boolean array to select from a 1-d array on a per-row basis in numpy?
Let me illustrate this question with an example:
import numpy
matrix = numpy.identity(5, dtype=bool) #Using identity as a convenient way to create an array with the invariant that there will only be one True value per row, the solution should apply to any array with this invariant
base = numpy.arange(5,30,5) #This could be any 1-d array, provided its length is the same as the length of axis=1 of matrix from above
result = numpy.array([ base[line] for line in matrix ])
result now holds the desired result, but I'm sure there is a numpy-specific method for doing this that avoids the explicit iteration. What is it?
A:
If I understand your question correctly you can simply use matrix multiplication:
result = numpy.dot(matrix, base)
If the result must have the same shape as in your example just add a reshape:
result = numpy.dot(matrix, base).reshape((5,1))
If the matrix is not symmetric, be careful about the argument order in dot.
A:
Here is another ugly way of doing it:
n.apply_along_axis(base.__getitem__, 0, matrix).reshape((5,1))
A:
My try:
numpy.sum(matrix * base, axis=1)
| How do I use a 2-d boolean array to select from a 1-d array on a per-row basis in numpy? | Let me illustrate this question with an example:
import numpy
matrix = numpy.identity(5, dtype=bool) #Using identity as a convenient way to create an array with the invariant that there will only be one True value per row, the solution should apply to any array with this invariant
base = numpy.arange(5,30,5) #This could be any 1-d array, provided its length is the same as the length of axis=1 of matrix from above
result = numpy.array([ base[line] for line in matrix ])
result now holds the desired result, but I'm sure there is a numpy-specific method for doing this that avoids the explicit iteration. What is it?
| [
"If I understand your question correctly you can simply use matrix multiplication:\nresult = numpy.dot(matrix, base)\n\nIf the result must have the same shape as in your example just add a reshape:\nresult = numpy.dot(matrix, base).reshape((5,1))\n\nIf the matrix is not symmetric be careful about the order in dot.\n",
"Here is another ugly way of doing it:\nn.apply_along_axis(base.__getitem__, 0, matrix).reshape((5,1))\n\n",
"My try:\nnumpy.sum(matrix * base, axis=1)\n\n"
] | [
1,
0,
0
] | [] | [] | [
"numpy",
"python"
] | stackoverflow_0000442218_numpy_python.txt |
Q:
pyPdf for IndirectObject extraction
Following this example, I can list all the elements in a pdf file
import pyPdf
pdf = pyPdf.PdfFileReader(open("pdffile.pdf"))
list(pdf.pages) # Process all the objects.
print pdf.resolvedObjects
now, I need to extract a non-standard object from the pdf file.
My object is the one named MYOBJECT and it is a string.
The piece printed by the python script that concerns me is:
{'/MYOBJECT': IndirectObject(584, 0)}
The pdf file is this:
558 0 obj
<</Contents 583 0 R/CropBox[0 0 595.22 842]/MediaBox[0 0 595.22 842]/Parent 29 0 R/Resources
<</ColorSpace <</CS0 563 0 R>>
/ExtGState <</GS0 568 0 R>>
/Font<</TT0 559 0 R/TT1 560 0 R/TT2 561 0 R/TT3 562 0 R>>
/ProcSet[/PDF/Text/ImageC]
/Properties<</MC0<</MYOBJECT 584 0 R>>/MC1<</SubKey 582 0 R>> >>
/XObject<</Im0 578 0 R>>>>
/Rotate 0/StructParents 0/Type/Page>>
endobj
...
...
...
584 0 obj
<</Length 8>>stream
1_22_4_1 --->>>> this is the string I need to extract from the object
endstream
endobj
How can I follow the 584 value in order to refer to my string (under pyPdf of course)??
A:
each element in pdf.pages is a dictionary, so assuming it's on page 1, pdf.pages[0]['/MYOBJECT'] should be the element you want.
You can try to print that individually or poke at it with help and dir in a python prompt for more about how to get the string you want
Edit:
After receiving a copy of the pdf, I found the object at pdf.resolvedObjects[0][558]['/Resources']['/Properties']['/MC0']['/MYOBJECT'] and the value can be retrieved via getData().
the following function gives a more generic way to solve this by recursively looking for the key in question
import types
import pyPdf
pdf = pyPdf.PdfFileReader(open('file.pdf'))
pages = list(pdf.pages)
def findInDict(needle,haystack):
for key in haystack.keys():
try:
value = haystack[key]
except:
continue
if key == needle:
return value
if type(value) == types.DictType or isinstance(value,pyPdf.generic.DictionaryObject):
x = findInDict(needle,value)
if x is not None:
return x
answer = findInDict('/MYOBJECT',pdf.resolvedObjects).getData()
A:
An IndirectObject refers to an actual object (it's like a link or alias so that the total size of the PDF can be reduced when the same content appears in multiple places). The getObject method will give you the actual object.
If the object is a text object, then just doing a str() or unicode() on the object should get you the data inside of it.
Alternatively, pyPdf stores the objects in the resolvedObjects attribute. For example, a PDF that contains this object:
13 0 obj
<< /Type /Catalog /Pages 3 0 R >>
endobj
Can be read with this:
>>> import pyPdf
>>> pdf = pyPdf.PdfFileReader(open("pdffile.pdf"))
>>> pages = list(pdf.pages)
>>> pdf.resolvedObjects
{0: {2: {'/Parent': IndirectObject(3, 0), '/Contents': IndirectObject(4, 0), '/Type': '/Page', '/Resources': IndirectObject(6, 0), '/MediaBox': [0, 0, 595.2756, 841.8898]}, 3: {'/Kids': [IndirectObject(2, 0)], '/Count': 1, '/Type': '/Pages', '/MediaBox': [0, 0, 595.2756, 841.8898]}, 4: {'/Filter': '/FlateDecode'}, 5: 147, 6: {'/ColorSpace': {'/Cs1': IndirectObject(7, 0)}, '/ExtGState': {'/Gs2': IndirectObject(9, 0), '/Gs1': IndirectObject(10, 0)}, '/ProcSet': ['/PDF', '/Text'], '/Font': {'/F1.0': IndirectObject(8, 0)}}, 13: {'/Type': '/Catalog', '/Pages': IndirectObject(3, 0)}}}
>>> pdf.resolvedObjects[0][13]
{'/Type': '/Catalog', '/Pages': IndirectObject(3, 0)}
A:
Jehiah's method is good if looking everywhere for the object. My guess (looking at the PDF) is that it is always in the same place (the first page, in the 'MC0' property), and so a much simpler method of finding the string would be:
import pyPdf
pdf = pyPdf.PdfFileReader(open("file.pdf"))
pdf.getPage(0)['/Resources']['/Properties']['/MC0']['/MYOBJECT'].getData()
| pyPdf for IndirectObject extraction | Following this example, I can list all the elements in a pdf file
import pyPdf
pdf = pyPdf.PdfFileReader(open("pdffile.pdf"))
list(pdf.pages) # Process all the objects.
print pdf.resolvedObjects
now, I need to extract a non-standard object from the pdf file.
My object is the one named MYOBJECT and it is a string.
The piece printed by the python script that concerns me is:
{'/MYOBJECT': IndirectObject(584, 0)}
The pdf file is this:
558 0 obj
<</Contents 583 0 R/CropBox[0 0 595.22 842]/MediaBox[0 0 595.22 842]/Parent 29 0 R/Resources
<</ColorSpace <</CS0 563 0 R>>
/ExtGState <</GS0 568 0 R>>
/Font<</TT0 559 0 R/TT1 560 0 R/TT2 561 0 R/TT3 562 0 R>>
/ProcSet[/PDF/Text/ImageC]
/Properties<</MC0<</MYOBJECT 584 0 R>>/MC1<</SubKey 582 0 R>> >>
/XObject<</Im0 578 0 R>>>>
/Rotate 0/StructParents 0/Type/Page>>
endobj
...
...
...
584 0 obj
<</Length 8>>stream
1_22_4_1 --->>>> this is the string I need to extract from the object
endstream
endobj
How can I follow the 584 value in order to refer to my string (under pyPdf of course)??
| [
"each element in pdf.pages is a dictionary, so assuming it's on page 1, pdf.pages[0]['/MYOBJECT'] should be the element you want. \nYou can try to print that individually or poke at it with help and dir in a python prompt for more about how to get the string you want\nEdit:\nafter receiving a copy of the pdf, i found the object at pdf.resolvedObjects[0][558]['/Resources']['/Properties']['/MC0']['/MYOBJECT'] and the value can be retrieved via getData()\nthe following function gives a more generic way to solve this by recursively looking for the key in question\nimport types\nimport pyPdf\npdf = pyPdf.PdfFileReader(open('file.pdf'))\npages = list(pdf.pages)\n\ndef findInDict(needle,haystack):\n for key in haystack.keys():\n try:\n value = haystack[key]\n except:\n continue\n if key == needle:\n return value\n if type(value) == types.DictType or isinstance(value,pyPdf.generic.DictionaryObject): \n x = findInDict(needle,value)\n if x is not None:\n return x\n\nanswer = findInDict('/MYOBJECT',pdf.resolvedObjects).getData()\n\n",
"An IndirectObject refers to an actual object (it's like a link or alias so that the total size of the PDF can be reduced when the same content appears in multiple places). The getObject method will give you the actual object.\nIf the object is a text object, then just doing a str() or unicode() on the object should get you the data inside of it.\nAlternatively, pyPdf stores the objects in the resolvedObjects attribute. For example, a PDF that contains this object:\n13 0 obj\n<< /Type /Catalog /Pages 3 0 R >>\nendobj\n\nCan be read with this:\n>>> import pyPdf\n>>> pdf = pyPdf.PdfFileReader(open(\"pdffile.pdf\"))\n>>> pages = list(pdf.pages)\n>>> pdf.resolvedObjects\n{0: {2: {'/Parent': IndirectObject(3, 0), '/Contents': IndirectObject(4, 0), '/Type': '/Page', '/Resources': IndirectObject(6, 0), '/MediaBox': [0, 0, 595.2756, 841.8898]}, 3: {'/Kids': [IndirectObject(2, 0)], '/Count': 1, '/Type': '/Pages', '/MediaBox': [0, 0, 595.2756, 841.8898]}, 4: {'/Filter': '/FlateDecode'}, 5: 147, 6: {'/ColorSpace': {'/Cs1': IndirectObject(7, 0)}, '/ExtGState': {'/Gs2': IndirectObject(9, 0), '/Gs1': IndirectObject(10, 0)}, '/ProcSet': ['/PDF', '/Text'], '/Font': {'/F1.0': IndirectObject(8, 0)}}, 13: {'/Type': '/Catalog', '/Pages': IndirectObject(3, 0)}}}\n>>> pdf.resolvedObjects[0][13]\n{'/Type': '/Catalog', '/Pages': IndirectObject(3, 0)}\n\n",
"Jehiah's method is good if looking everywhere for the object. My guess (looking at the PDF) is that it is always in the same place (the first page, in the 'MC0' property), and so a much simpler method of finding the string would be:\nimport pyPdf\npdf = pyPdf.PdfFileReader(open(\"file.pdf\"))\npdf.getPage(0)['/Resources']['/Properties']['/MC0']['/MYOBJECT'].getData()\n\n"
] | [
10,
6,
2
] | [] | [] | [
"pdf",
"pypdf",
"python",
"stream"
] | stackoverflow_0000436474_pdf_pypdf_python_stream.txt |
Q:
Good Python networking libraries for building a TCP server?
I was just wondering what network libraries there are out there for Python for building a TCP/IP server. I know that Twisted might jump to mind but the documentation seems scarce, sloppy, and scattered to me.
Also, would using Twisted even have a benefit over rolling my own server with select.select()?
A:
I must agree that the documentation is a bit terse but the tutorial gets you up and running quickly.
http://twistedmatrix.com/projects/core/documentation/howto/tutorial/index.html
The event-based programming paradigm of Twisted and its deferreds might be a bit weird at the start (it was for me) but it is worth the learning curve.
You'll get up and running doing much more complex stuff more quickly than if you were to write your own framework, and it also means one less thing to bug hunt, as Twisted is very much production proven.
I don't really know of another framework that can offer as much as Twisted can, so my vote would definitely go for Twisted even if the docs aren't for the faint of heart.
I agree with Greg that SocketServer is a nice middle ground but depending on the target audience of your application and the design of it you might have some nice stuff to look forward to in Twisted (the PerspectiveBroker which is very useful comes to mind - http://twistedmatrix.com/projects/core/documentation/howto/pb-intro.html)
A:
The standard library includes SocketServer and related modules which might be sufficient for your needs. This is a good middle ground between a complex framework like Twisted, and rolling your own select() loop.
A:
If you're reluctant to use Twisted, you might want to check out SocketServer.ThreadingTCPServer. It's easy enough to use, and it's good enough for many purposes.
For the majority of situations, Twisted is probably going to be faster and more reliable, so I'd stomach the documentation if you can :)
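For reference, a minimal threaded echo server with the standard library might look like this (Python 2 module names; the port is arbitrary):
import SocketServer

class EchoHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:       # one thread services each connection
            self.wfile.write(line)

server = SocketServer.ThreadingTCPServer(('', 1234), EchoHandler)
server.serve_forever()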
A:
Just adding an answer to re-iterate other posters - it'll be worth it to use Twisted. There's no reason to write yet another TCP server that'll end up working not as well as one using twisted would. The only reason would be if writing your own is much faster, developer-wise, but if you just bite the bullet and learn twisted now, your future projects will benefit greatly. And, as others have said, you'll be able to do much more complex stuff if you use twisted from the start.
A:
I've tried 3 approaches:
Write my own select() loop framework (pretty much dead, I don't necessarily recommend it.)
Using SocketServer
Twisted
I used SocketServer for an internal web service with fairly low traffic. It is also used for a fairly high traffic internal logging service. Both perform perfectly well and seem pretty reliable for production use. For anything that needs to be performant I think the Twisted stuff is much better, but it's a lot more work to get your head around the architecture.
| Good Python networking libraries for building a TCP server? | I was just wondering what network libraries there are out there for Python for building a TCP/IP server. I know that Twisted might jump to mind but the documentation seems scarce, sloppy, and scattered to me.
Also, would using Twisted even have a benefit over rolling my own server with select.select()?
| [
"I must agree that the documentation is a bit terse but the tutorial gets you up and running quickly.\nhttp://twistedmatrix.com/projects/core/documentation/howto/tutorial/index.html\nThe event-based programming paradigm of Twisted and it's defereds might be a bit weird at the start (was for me) but it is worth the learning curve.\nYou'll get up and running doing much more complex stuff more quickly than if you were to write your own framework and it would also mean one less thing to bug hunt as Twisted is very much production proven.\nI don't really know of another framework that can offer as much as Twisted can, so my vote would definitely go for Twisted even if the docs aren't for the faint of heart.\nI agree with Greg that SocketServer is a nice middle ground but depending on the target audience of your application and the design of it you might have some nice stuff to look forward to in Twisted (the PerspectiveBroker which is very useful comes to mind - http://twistedmatrix.com/projects/core/documentation/howto/pb-intro.html)\n",
"The standard library includes SocketServer and related modules which might be sufficient for your needs. This is a good middle ground between a complex framework like Twisted, and rolling your own select() loop.\n",
"If you're reluctant to use Twisted, you might want to check out SocketServer.ThreadingTCPServer. It's easy enough to use, and it's good enough for many purposes.\nFor the majority of situations, Twisted is probably going to be faster and more reliable, so I'd stomach the documentation if you can :)\n",
"Just adding an answer to re-iterate other posters - it'll be worth it to use Twisted. There's no reason to write yet another TCP server that'll end up working not as well as one using twisted would. The only reason would be if writing your own is much faster, developer-wise, but if you just bite the bullet and learn twisted now, your future projects will benefit greatly. And, as others have said, you'll be able to do much more complex stuff if you use twisted from the start.\n",
"I've tried 3 approaches:\n\nWrite my own select() loop framework (pretty much dead, I don't necessarily recommend it.)\nUsing SocketServer\nTwisted\n\nI used the SocketServer for a internal web service with fairly low traffic. Is used for a fairly high traffic internal logging service. Both perform perfectly well and seem pretty reliable for production use. For anything that needs to be performant I think the Twisted stuff is much better, but it's a lot more work to get your head around the architecture. \n"
] | [
10,
6,
1,
1,
1
] | [] | [] | [
"networking",
"python",
"twisted"
] | stackoverflow_0000441849_networking_python_twisted.txt |
Q:
How do I reverse Unicode decomposition using Python?
Using Python 2.5, I have some text stored in a unicode object:
Dinis e Isabel, uma difı´cil relac¸a˜o
conjugal e polı´tica
This appears to be decomposed Unicode. Is there a generic way in Python to reverse the decomposition, so I end up with:
Dinis e Isabel, uma difícil relação
conjugal e política
A:
I think you are looking for this:
>>> import unicodedata
>>> print unicodedata.normalize("NFC",u"c\u0327")
ç
A:
Unfortunately it seems I actually have (for example) \u00B8 (cedilla) instead of \u0327 (combining cedilla) in my text.
Eurgh, nasty! You can still do it automatically, though the process wouldn't be entirely lossless as it involves a compatibility decomposition (NFKD).
Normalise U+00B8 to NFKD and you'll get a space followed by the U+0327. You could then scan through the string looking for any case of space-followed-by-combining-character, and remove the space. Finally recompose to NFC to put the combining characters onto the previous character instead.
s= unicodedata.normalize('NFKD', s)
s= ''.join(c for i, c in enumerate(s) if c!=' ' or unicodedata.combining(s[i+1])==0)
s= unicodedata.normalize('NFC', s)
A:
I can't really give you a definitive answer to your question because I never tried that. But there is a unicodedata module in the standard library. It has two functions decomposition() and normalize() that might help you here.
Edit: Make sure that it really is decomposed unicode. Sometimes there are weird ways to write characters that can't be directly expressed in an encoding. Like "a which is meant to be mentally parsed by a human or some specialized program as ä.
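One quick way to check what you actually have is to print the character names:
import unicodedata

s = u'relac\u0327a\u0303o'   # decomposed sample: combining cedilla and tilde
for c in s:
    print repr(c), unicodedata.name(c, '<unnamed>')

If the output shows COMBINING CEDILLA / COMBINING TILDE, plain NFC recomposition will work; if you see something like CEDILLA (U+00B8) instead, you are in the compatibility case described in the other answer.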
| How do I reverse Unicode decomposition using Python? | Using Python 2.5, I have some text stored in a unicode object:
Dinis e Isabel, uma difı´cil relac¸a˜o
conjugal e polı´tica
This appears to be decomposed Unicode. Is there a generic way in Python to reverse the decomposition, so I end up with:
Dinis e Isabel, uma difícil relação
conjugal e política
| [
"I think you are looking for this:\n>>> import unicodedata \n>>> print unicodedata.normalize(\"NFC\",u\"c\\u0327\")\nç\n\n",
"\nUnfortunately it seems I actually have (for example) \\u00B8 (cedilla) instead of \\u0327 (combining cedilla) in my text.\n\nEurgh, nasty! You can still do it automatically, though the process wouldn't be entirely lossless as it involves a compatibility decomposition (NFKD).\nNormalise U+00B8 to NFKD and you'll get a space followed by the U+0327. You could then scan through the string looking for any case of space-followed-by-combining-character, and remove the space. Finally recompose to NFC to put the combining characters onto the previous character instead.\ns= unicodedata.normalize('NFKD', s)\ns= ''.join(c for i, c in enumerate(s) if c!=' ' or unicodedata.combining(s[i+1])==0)\ns= unicodedata.normalize('NFC', s)\n\n",
"I can't really give you a definitive answer to your question because I never tried that. But there is a unicodedata module in the standard library. It has two functions decomposition() and normalize() that might help you here.\nEdit: Make sure that it really is decomposed unicode. Sometimes there are weird ways to write characters that can't be directly expressed in an encoding. Like \"a which is meant to be mentally parsed by a human or some specialized program as ä.\n"
] | [
7,
5,
1
] | [] | [] | [
"python",
"unicode"
] | stackoverflow_0000446222_python_unicode.txt |
Q:
What's the best way to specify a proxy with username and password for an **https** connection in python?
I read somewhere that currently urllib2 doesn't support authenticated https connections. My proxy uses basic authentication only, but how do I open an https-based webpage through it?
Please help me.
Thanks.
A:
"urllib2 doesn't support authenticated https connection" False.
# Build Handler to support HTTP Basic Authentication...
basic_handler = urllib2.HTTPBasicAuthHandler()
basic_handler.add_password(realm, self.urlBase, username, password)
# Get cookies, also, to handle login
self.cookies= cookielib.CookieJar()
cookie_handler= urllib2.HTTPCookieProcessor( self.cookies )
# Assemble the final opener
opener = urllib2.build_opener(basic_handler,cookie_handler)
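For proxy authentication specifically (as opposed to server authentication), urllib2 also has ProxyHandler and ProxyBasicAuthHandler. A sketch with placeholder proxy details; note that CONNECT tunnelling for https through a proxy depends on your Python version:
import urllib2

passmgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
passmgr.add_password(None, 'proxyhost:3128', 'username', 'password')
proxy_handler = urllib2.ProxyHandler({'http': 'http://proxyhost:3128',
                                      'https': 'http://proxyhost:3128'})
proxy_auth = urllib2.ProxyBasicAuthHandler(passmgr)
opener = urllib2.build_opener(proxy_handler, proxy_auth)
print opener.open('https://example.com/').read()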
A:
You can use httplib2, which solves some of the limitations of urllib2 including this one. There is an example here of how to do Basic Authentication on a https connection.
| What's the best way to specify a proxy with username and password for an **https** connection in python? | I read somewhere that currently urllib2 doesn't support authenticated https connections. My proxy uses basic authentication only, but how do I open an https-based webpage through it?
Please help me.
Thanks.
| [
"\"urllib2 doesn't support authenticated https connection\" False.\n # Build Handler to support HTTP Basic Authentication...\n basic_handler = urllib2.HTTPBasicAuthHandler()\n basic_handler.add_password(realm, self.urlBase, username, password)\n # Get cookies, also, to handle login\n self.cookies= cookielib.CookieJar()\n cookie_handler= urllib2.HTTPCookieProcessor( self.cookies )\n # Assemble the final opener\n opener = urllib2.build_opener(basic_handler,cookie_handler)\n\n",
"You can use httplib2, which solves some of the limitations of urllib2 including this one. There is an example here of how to do Basic Authentication on a https connection.\n"
] | [
2,
0
] | [] | [] | [
"authentication",
"https",
"proxy",
"python"
] | stackoverflow_0000446869_authentication_https_proxy_python.txt |
Q:
retrieving XMLHttpRequest parameters in python
Client-side code submits an object (in the POST request body) or query string (if using GET method) via ajax request to a python cgi script. Please note that the object/query string parameters are not coming from a
<form> or <isindex>.
How can I retrieve these parameters from within the server-side python script using standard library modules (e.g., cgi)?
Thanks very much
EDIT:
@codeape: Thanks, but wouldn't that work only for submitted forms? In my case, no form is being submitted, just an asynchronous request. Using your script, len(f.keys()) returns 0 if no form is submitted! I can probably recast the request as a form submission, but is there a better way?
A:
You use the cgi.FieldStorage class. Example CGI script:
#! /usr/bin/python
import cgi
from os import environ
import cgitb
cgitb.enable()
print "Content-type: text/plain"
print
print "REQUEST_METHOD:", environ["REQUEST_METHOD"]
print "Values:"
f = cgi.FieldStorage()
for k in f.keys():
print "%s: %s" % (k, f.getfirst(k))
A:
codeape already answered on this. Just for the record, please understand that how the HTTP request is emitted is totally orthogonal - what the server gets is an HTTP request, period.
A:
@codeape, @bruno desthuilliers:
Indeed, cgi.FieldStorage() retrieves the parameters. The output I was getting earlier was apparently due to my passing the parameters as a string (JSON.stringify() object) in the body of the request -- rather than as key-value pairs.
Thanks
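For the record, when the body is a raw JSON string rather than form-encoded pairs, the CGI script can read it straight from stdin; a sketch (simplejson here, the stdlib json module in 2.6+):
import os, sys
import simplejson

length = int(os.environ.get('CONTENT_LENGTH', 0))
body = sys.stdin.read(length)     # the raw POST body
data = simplejson.loads(body)     # the JSON.stringify()'d object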
| retrieving XMLHttpRequest parameters in python | Client-side code submits an object (in the POST request body) or query string (if using GET method) via ajax request to a python cgi script. Please note that the object/query string parameters are not coming from a
<form> or <isindex>.
How can I retrieve these parameters from within the server-side python script using standard library modules (e.g., cgi)?
Thanks very much
EDIT:
@codeape: Thanks, but wouldn't that work only for submitted forms? In my case, no form is being submitted, just an asynchronous request. Using your script, len(f.keys()) returns 0 if no form is submitted! I can probably recast the request as a form submission, but is there a better way?
| [
"You use the cgi.FieldStorage class. Example CGI script:\n#! /usr/bin/python\n\nimport cgi\nfrom os import environ\nimport cgitb\ncgitb.enable()\n\nprint \"Content-type: text/plain\"\nprint\nprint \"REQUEST_METHOD:\", environ[\"REQUEST_METHOD\"]\nprint \"Values:\"\nf = cgi.FieldStorage()\nfor k in f.keys():\n print \"%s: %s\" % (k, f.getfirst(k))\n\n",
"codeape already answered on this. Just for the record, please understand that how the HTTP request is emitted is totally orthogonal - what the server get is an HTTP request, period.\n",
"@codeape , @bruno desthuilliers: \n\nIndeed, cgi.FieldStorage() retrieves the parameters. The output I was getting earlier was apparently due to my passing the parameters as a string (JSON.stringify() object) in the body of the request -- rather than as key-value pairs.\nThanks\n"
] | [
5,
0,
0
] | [] | [] | [
"ajax",
"cgi",
"python"
] | stackoverflow_0000445942_ajax_cgi_python.txt |
Q:
urlsafe_b64encode always ends in '=' ?:
I think this must be a stupid question, but why do the results of urlsafe_b64encode() always end with a '=' for me?
'=' isn't url safe?
from random import getrandbits
from base64 import urlsafe_b64encode
from hashlib import sha256
from time import sleep
def genKey():
keyLenBits = 64
a = str(getrandbits(keyLenBits))
b = urlsafe_b64encode(sha256(a).digest())
print b
while 1:
genKey()
sleep(1)
output :
DxFOVxWvvzGdOSh2ARkK-2XPXNavnpiCkD6RuKLffvA=
xvA99ZLBrLvtf9-k0-YUFcLsiKl8Q8KmkD7ahIqPZ5Y=
jYbNK7j62KCBA5gnoiSpM2AGOPxmyQTIJIl_wWdOwoY=
CPIKkXPfIX4bd8lQtUj1dYG3ZOBxmZTMkVpmR7Uvu4s=
HlTs0tBW805gaxfMrq3OPOa6Crg7MsLSLnqe-eX0JEA=
FKRu0ePZEppHsvACWYssL1b2uZhjy9UU5LI8sWIqHe8=
aY_kVaT8kjB4RRfp3S6xG2vJaL0vAwQPifsBcN1LYvo=
6Us3XsewqnEcovMb5EEPtf4Fp4ucWfjPVso-UkRuaRc=
_vAI943yOWs3t2F6suUGy47LJjQsgi_XLiMKhYZnm9M=
CcUSXVqPNT_eb8VXasFXhvNosPOWQQWjGlipQp_68aY=
A:
Base64 uses '=' for padding. Your string bit length isn't divisible by 24, so it's padded with '='. By the way, '=' should be URL safe as it's often used for parameters in URLs.
See this discussion, too.
A:
The '=' is for padding. If you want to pass the output as the value of a URL parameter, you'll want to escape it first, so that the padding doesn't get lost when later reading in the value.
import urllib
param_value = urllib.quote_plus(b64_data)
Python is just following RFC3548 by allowing the '=' for padding, even though it seems like a more suitable character should replace it.
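An alternative sketch is to drop the padding for the URL and restore it before decoding, since the pad length is recoverable from the string length (these helpers are my own, not part of the base64 module):
from base64 import urlsafe_b64encode, urlsafe_b64decode

def encode_nopad(data):
    # safe to strip: padding carries no information beyond length
    return urlsafe_b64encode(data).rstrip('=')

def decode_nopad(s):
    # re-pad to a multiple of 4 characters before decoding
    return urlsafe_b64decode(s + '=' * (-len(s) % 4))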
A:
I would expect that a URI parser would ignore a "=" in the value part of a parameter.
The URI parameters are: "&", [name], "=", [value], and then the next pair, so an equals sign in the value part is harmless. An unescaped ampersand has more potential to break the parser.
| urlsafe_b64encode always ends in '=' ?: | I think this must be a stupid question, but why do the results of urlsafe_b64encode() always end with a '=' for me?
'=' isn't url safe?
from random import getrandbits
from base64 import urlsafe_b64encode
from hashlib import sha256
from time import sleep
def genKey():
keyLenBits = 64
a = str(getrandbits(keyLenBits))
b = urlsafe_b64encode(sha256(a).digest())
print b
while 1:
genKey()
sleep(1)
output :
DxFOVxWvvzGdOSh2ARkK-2XPXNavnpiCkD6RuKLffvA=
xvA99ZLBrLvtf9-k0-YUFcLsiKl8Q8KmkD7ahIqPZ5Y=
jYbNK7j62KCBA5gnoiSpM2AGOPxmyQTIJIl_wWdOwoY=
CPIKkXPfIX4bd8lQtUj1dYG3ZOBxmZTMkVpmR7Uvu4s=
HlTs0tBW805gaxfMrq3OPOa6Crg7MsLSLnqe-eX0JEA=
FKRu0ePZEppHsvACWYssL1b2uZhjy9UU5LI8sWIqHe8=
aY_kVaT8kjB4RRfp3S6xG2vJaL0vAwQPifsBcN1LYvo=
6Us3XsewqnEcovMb5EEPtf4Fp4ucWfjPVso-UkRuaRc=
_vAI943yOWs3t2F6suUGy47LJjQsgi_XLiMKhYZnm9M=
CcUSXVqPNT_eb8VXasFXhvNosPOWQQWjGlipQp_68aY=
| [
"Base64 uses '=' for padding. Your string bit length isn't divisible by 24, so it's padded with '='. By the way, '=' should be URL safe as it's often used for parameters in URLs.\nSee this discussion, too.\n",
"The '=' is for padding. If you want to pass the output as the value of a URL parameter, you'll want to escape it first, so that the padding doesn't get lost when later reading in the value.\nimport urllib\nparam_value = urllib.quote_plus(b64_data)\n\nPython is just following RFC3548 by allowing the '=' for padding, even though it seems like a more suitable character should replace it.\n",
"I would expect that an URI parser would ignore a \"=\" in the value part of a parameter.\nThe URI parameters are: \"&\" , [name], \"=\", [value], next, so an equals sign in the value part is harmless. An unescaped ampersand has more potential to break the parser.\n"
] | [
9,
3,
1
] | [] | [] | [
"base64",
"hash",
"python"
] | stackoverflow_0000447875_base64_hash_python.txt |
Q:
Multithreaded Resource Access - Where Do I Put My Locks?
I have threaded code where each thread needs to write to the same file. To prevent concurrency issues, I am using a Lock object.
My question is whether I am using the Lock correctly. If I set the lock from within each thread, is that lock global or specific to that particular thread?
Basically, should I create a Lock first and pass its reference to each thread, or is it ok to set it from within the thread like I do here:
import time
from threading import Thread, Lock
def main():
for i in range(20):
agent = Agent(i)
agent.start()
class Agent(Thread):
def __init__(self, thread_num):
Thread.__init__(self)
self.thread_num = thread_num
def run(self):
while True:
print 'hello from thread %s' % self.thread_num
self.write_result()
def write_result(self):
lock = Lock()
lock.acquire()
try:
f = open('foo.txt', 'a')
f.write('hello from thread %s\n' % self.thread_num)
f.flush()
f.close()
finally:
lock.release()
if __name__ == '__main__':
main()
A:
For your use case one approach could be to write a file subclass that locks:
class LockedWrite(file):
""" Wrapper class to a file object that locks writes """
def __init__(self, *args, **kwds):
super(LockedWrite, self).__init__(*args, **kwds)
self._lock = Lock()
def write(self, *args, **kwds):
self._lock.acquire()
try:
super(LockedWrite, self).write(*args, **kwds)
finally:
self._lock.release()
To use in your code just replace following functions:
def main():
f = LockedWrite('foo.txt', 'a')
for i in range(20):
agent = Agent(i, f)
agent.start()
class Agent(Thread):
def __init__(self, thread_num, fileobj):
Thread.__init__(self)
self.thread_num = thread_num
self._file = fileobj
# ...
def write_result(self):
self._file.write('hello from thread %s\n' % self.thread_num)
This approach puts file locking in the file itself which seems cleaner IMHO
A:
Create the lock outside the method.
class Agent(Thread):
mylock = Lock()
def write_result(self):
self.mylock.acquire()
try:
...
finally:
self.mylock.release()
or if using python >= 2.5:
class Agent(Thread):
mylock = Lock()
def write_result(self):
with self.mylock:
...
To use that with python 2.5 you must import the statement from the future:
from __future__ import with_statement
A:
Each call to Lock() returns a new lock object. So every thread (actually, every call to write_result) will have a different lock object, and there will be no locking.
A:
The lock that's used needs to be common to all threads, or at least ensure that two locks can't lock the same resource at the same time.
A:
The lock instance should be associated with the file instance.
In other words, you should create both the lock and file at the same time and pass both to each thread.
A:
You can simplify things a bit (at the cost of slightly more overhead) by designating a single thread (probably created exclusively for this purpose) as the sole thread that writes to the file, and have all other threads delegate to the file-writer by placing the string that they want to add to the file into a queue.Queue object.
Queues have all of the locking built-in, so any thread can safely call Queue.put() at any time. The file-writer would be the only thread calling Queue.get(), and can presumably spend much of its time blocking on that call (with a reasonable timeout to allow the thread to cleanly respond to a shutdown request). All of the synchronization issues will be handled by the Queue, and you'll be spared having to worry about whether you've forgotten some lock acquire/release somewhere... :)
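A minimal sketch of that single-writer pattern (the queue name and the None shutdown sentinel are illustrative, not from this answer):
import Queue
from threading import Thread

log_queue = Queue.Queue()

def file_writer(path):
    f = open(path, 'a')
    while True:
        line = log_queue.get()  # blocks until another thread put()s a line
        if line is None:        # None acts as a shutdown sentinel
            break
        f.write(line)
        f.flush()
    f.close()

writer = Thread(target=file_writer, args=('foo.txt',))
writer.start()
# any thread can now call log_queue.put('hello from thread 3\n') safely;
# log_queue.put(None) shuts the writer down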
A:
I'm pretty sure that the lock needs to be the same object for each thread. Try this:
import time
from threading import Thread, Lock
def main():
lock = Lock()
for i in range(20):
agent = Agent(i, lock)
agent.start()
class Agent(Thread):
def __init__(self, thread_num, lock):
Thread.__init__(self)
self.thread_num = thread_num
self.lock = lock
def run(self):
while True:
print 'hello from thread %s' % self.thread_num
self.write_result()
def write_result(self):
self.lock.acquire()
try:
f = open('foo.txt', 'a')
f.write('hello from thread %s\n' % self.thread_num)
f.flush()
f.close()
finally:
self.lock.release()
if __name__ == '__main__':
main()
| Multithreaded Resource Access - Where Do I Put My Locks? | I have threaded code where each thread needs to write to the same file. To prevent concurrency issues, I am using a Lock object.
My question is whether I am using the Lock correctly. If I set the lock from within each thread, is that lock global or specific to that particular thread?
Basically, should I create a Lock first and pass its reference to each thread, or is it ok to set it from within the thread like I do here:
import time
from threading import Thread, Lock
def main():
for i in range(20):
agent = Agent(i)
agent.start()
class Agent(Thread):
def __init__(self, thread_num):
Thread.__init__(self)
self.thread_num = thread_num
def run(self):
while True:
print 'hello from thread %s' % self.thread_num
self.write_result()
def write_result(self):
lock = Lock()
lock.acquire()
try:
f = open('foo.txt', 'a')
f.write('hello from thread %s\n' % self.thread_num)
f.flush()
f.close()
finally:
lock.release()
if __name__ == '__main__':
main()
| [
"For your use case one approach could be to write a file subclass that locks:\nclass LockedWrite(file):\n \"\"\" Wrapper class to a file object that locks writes \"\"\"\n def __init__(self, *args, **kwds):\n super(LockedWrite, self).__init__(*args, **kwds)\n self._lock = Lock()\n\n def write(self, *args, **kwds):\n self._lock.acquire()\n try:\n super(LockedWrite, self).write(*args, **kwds)\n finally:\n self._lock.release()\n\nTo use in your code just replace following functions:\ndef main():\n f = LockedWrite('foo.txt', 'a')\n\n for i in range(20):\n agent = Agent(i, f)\n agent.start()\n\nclass Agent(Thread):\n def __init__(self, thread_num, fileobj):\n Thread.__init__(self)\n self.thread_num = thread_num\n self._file = fileobj \n\n# ...\n\n def write_result(self):\n self._file.write('hello from thread %s\\n' % self.thread_num)\n\nThis approach puts file locking in the file itself which seems cleaner IMHO\n",
"Create the lock outside the method.\nclass Agent(Thread):\n mylock = Lock()\n def write_result(self):\n self.mylock.acquire()\n try:\n ...\n finally:\n self.mylock.release()\n\nor if using python >= 2.5:\nclass Agent(Thread):\n mylock = Lock()\n def write_result(self):\n with self.mylock:\n ...\n\nTo use that with python 2.5 you must import the statement from the future:\nfrom __future__ import with_statement\n\n",
"The lock() method returns a lock object for every call. So every thread ( actually every call to write_result ) will have a different lock object. And there will be no locking. \n",
"The lock that's used needs to be common to all threads, or at least ensure that two locks can't lock the same resource at the same time.\n",
"The lock instance should be associated with the file instance.\nIn other words, you should create both the lock and file at the same time and pass both to each thread.\n",
"You can simplify things a bit (at the cost of slightly more overhead) by designating a single thread (probably created exclusively for this purpose) as the sole thread that writes to the file, and have all other threads delegate to the file-writer by placing the string that they want to add to the file into a queue.Queue object. \nQueues have all of the locking built-in, so any thread can safely call Queue.put() at any time. The file-writer would be the only thread calling Queue.get(), and can presumably spend much of its time blocking on that call (with a reasonable timeout to allow the thread to cleanly respond to a shutdown request). All of the synchronization issues will be handled by the Queue, and you'll be spared having to worry about whether you've forgotten some lock acquire/release somewhere... :)\n",
"I'm pretty sure that the lock needs to be the same object for each thread. Try this:\nimport time\nfrom threading import Thread, Lock\n\ndef main():\n lock = Lock()\n for i in range(20):\n agent = Agent(i, lock)\n agent.start()\n\nclass Agent(Thread, Lock):\n def __init__(self, thread_num, lock):\n Thread.__init__(self)\n self.thread_num = thread_num\n self.lock = lock\n\n def run(self):\n while True:\n print 'hello from thread %s' % self.thread_num\n self.write_result() \n\n def write_result(self):\n self.lock.acquire()\n try:\n f = open('foo.txt', 'a')\n f.write('hello from thread %s\\n' % self.thread_num)\n f.flush()\n f.close()\n finally:\n lock.release()\n\nif __name__ == '__main__':\n main()\n\n"
] | [
6,
3,
1,
1,
1,
1,
0
] | [] | [] | [
"locking",
"multithreading",
"python"
] | stackoverflow_0000448034_locking_multithreading_python.txt |
Q:
Python - downloading a file over HTTP with progress bar and basic authentication
I'm using urllib.urlretrieve to download a file, and implementing a download progress bar using the reporthook parameter. Since urlretrieve doesn't directly support authentication, I came up with
import urllib
def urlretrieve_with_basic_auth(url, filename=None, reporthook=None, data=None,
username="", password=""):
class OpenerWithAuth(urllib.FancyURLopener):
def prompt_user_passwd(self, host, realm):
return username, password
return OpenerWithAuth().retrieve(url, filename, reporthook, data)
This works -- but it seems like there might be a more direct way to do this (maybe with urllib2 or httplib2 or...) --any ideas?
A:
urlgrabber has built-in support for progress bars, authentication, and more.
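For reference, one possible urllib2 route is sketched below (not a drop-in urlretrieve replacement; the reporthook call just mimics urllib's signature):
import urllib2

def urlretrieve_basic_auth(url, filename, reporthook, username, password):
    # register the credentials for this URL, whatever realm the server uses
    mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, url, username, password)
    opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(mgr))
    resp = opener.open(url)
    total = int(resp.info().getheader('Content-Length') or -1)
    blocksize, blocknum = 8192, 0
    out = open(filename, 'wb')
    while True:
        chunk = resp.read(blocksize)
        if not chunk:
            break
        out.write(chunk)
        blocknum += 1
        if reporthook:
            reporthook(blocknum, blocksize, total)
    out.close()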
| Python - downloading a file over HTTP with progress bar and basic authentication | I'm using urllib.urlretrieve to download a file, and implementing a download progress bar using the reporthook parameter. Since urlretrieve doesn't directly support authentication, I came up with
import urllib
def urlretrieve_with_basic_auth(url, filename=None, reporthook=None, data=None,
username="", password=""):
class OpenerWithAuth(urllib.FancyURLopener):
def prompt_user_passwd(self, host, realm):
return username, password
return OpenerWithAuth().retrieve(url, filename, reporthook, data)
This works -- but it seems like there might be a more direct way to do this (maybe with urllib2 or httplib2 or...) --any ideas?
| [
"urlgrabber has built-in support for progress bars, authentication, and more.\n"
] | [
7
] | [] | [] | [
"download",
"http",
"python"
] | stackoverflow_0000448207_download_http_python.txt |
Q:
Getting out of a function in Python
I want to get out of a function when an exception occurs, or under some similar condition.
I want to use a method other than 'return'.
A:
If you catch an exception and then want to rethrow it, this pattern is pretty simple:
try:
do_something_dangerous()
except:
do_something_to_apologize()
raise
Of course if you want to raise the exception in the first place, that's easy, too:
def do_something_dangerous(self):
raise Exception("Boo!")
If that's not what you wanted, please provide more information!
A:
Can't think of another way to "get out" of a function other than a) return, b) throw an exception, or c) terminate execution of the program.
A:
The exception itself will terminate the function:
def f():
a = 1 / 0 # will raise an exception
return a
try:
f()
except:
print 'no longer in f()'
A:
Assuming you want to "stop" execution inside of that method. There's a few things you can do.
Don't catch the exception. This will return control to the method that called it in the first place. You can then do whatever you want with it.
sys.exit(0) This one actually exits the entire program.
return I know you said you don't want return, but hear me out. Return is useful, because based on the value you return, it would be a good way to let your users know what went wrong.
A:
As others have pointed out, an exception will get you out of the method. You shouldn't be ashamed or embarrassed by exceptions; an exception indicates an error, but that's not necessarily the same as a bug.
For example, say I'm writing a factorial function. Factorial isn't defined for negative numbers, so I might do this:
def factorial(n):
if n < 0:
raise ValueError
if n == 0:
return 1
return n*factorial(n-1)
I would then look for the exception:
n = raw_input('Enter a number.')
try:
print factorial(n)
except ValueError:
print 'You entered a negative number.'
I can make the exception more informative than a ValueError by defining my own:
class NegativeInputError(Exception):
pass
# in the function:
if n < 0:
raise NegativeInputError
HTH!
| Getting out of a function in Python | I want to get out of a function when an exception occurs, or under some similar condition.
I want to use a method other than 'return'.
| [
"If you catch an exception and then want to rethrow it, this pattern is pretty simple:\ntry:\n do_something_dangerous()\nexcept:\n do_something_to_apologize()\n raise\n\nOf course if you want to raise the exception in the first place, that's easy, too:\ndef do_something_dangerous(self):\n raise Exception(\"Boo!\")\n\nIf that's not what you wanted, please provide more information!\n",
"Can't think of another way to \"get out\" of a function other than a) return, b) throw an exception, or c) terminate execution of the program.\n",
"The exception itself will terminate the function:\ndef f():\n a = 1 / 0 # will raise an exception\n return a\n\ntry:\n f()\nexcept:\n print 'no longer in f()'\n\n",
"Assuming you want to \"stop\" execution inside of that method. There's a few things you can do.\n\nDon't catch the exception. This will return control to the method that called it in the first place. You can then do whatever you want with it. \nsys.exit(0) This one actually exits the entire program.\nreturn I know you said you don't want return, but hear me out. Return is useful, because based on the value you return, it would be a good way to let your users know what went wrong. \n\n",
"As others have pointed out, an exception will get you out of the method. You shouldn't be ashamed or embarassed by exceptions; an exception indicates an error, but that's not necessarily the same as a bug.\nFor example, say I'm writing a factorial function. Factorial isn't defined for negative numbers, so I might do this:\ndef factorial(n):\n if n < 0:\n raise ValueError\n if n == 0:\n return 1\n return n*factorial(n-1)\n\nI would then look for the exception:\nn = raw_input('Enter a number.')\ntry:\n print factorial(n)\nexcept ValueError:\n print 'You entered a negative number.'\n\nI can make the exception more informative than a ValueError by defining my own:\nclass NegativeInputError(Exception):\n pass\n\n# in the function:\nif n < 0:\n raise NegativeInputError\n\nHTH!\n"
] | [
18,
4,
4,
3,
3
] | [] | [] | [
"exception",
"function",
"python"
] | stackoverflow_0000446782_exception_function_python.txt |
Q:
decrypting pdf protected by aes-256bit using the right password
Is there any way to decrypt a pdf protected by an aes-256 bit key?
I have the correct password and I need a command-line tool (or library - perhaps in python :P ) for decrypting the file and then doing some operations on it.
The best thing would be if the file could be saved decrypted; then I could process it and remove it afterwards...
Does anyone know something about it?
A:
import pyPdf
pdf = pyPdf.PdfFileReader(open("file.pdf"))
pdf.decrypt("password")
You can then do whatever you want with the contents. This will work with either the user or owner passwords.
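As a follow-on sketch, saving a decrypted copy with pyPdf's PdfFileWriter could look like this (assuming decrypt() accepted the password and pyPdf supports the file's particular encryption scheme):
import pyPdf

reader = pyPdf.PdfFileReader(open("file.pdf", "rb"))
reader.decrypt("password")

# copy every page into a new, unencrypted document
writer = pyPdf.PdfFileWriter()
for i in range(reader.getNumPages()):
    writer.addPage(reader.getPage(i))

out = open("decrypted.pdf", "wb")
writer.write(out)
out.close()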
| decrypting pdf protected by aes-256bit using the right password | Is there any way to decrypt a pdf protected by an aes-256 bit key?
I have the correct password and I need a command-line tool (or library - perhaps in python :P ) for decrypting the file and then doing some operations on it.
The best thing would be if the file could be saved decrypted; then I could process it and remove it afterwards...
Does anyone know something about it?
| [
"import pyPdf \npdf = pyPdf.PdfFileReader(open(\"file.pdf\"))\npdf.decrypt(\"password\")\n\nYou can then do whatever you want with the contents. This will work with either the user or owner passwords.\n"
] | [
4
] | [] | [] | [
"aes",
"command_line",
"pdf",
"python"
] | stackoverflow_0000447600_aes_command_line_pdf_python.txt |
Q:
Setting up Python on Windows/ Apache?
I want to get a simple Python "hello world" web page script to run on Windows Vista/ Apache but hit different walls. I'm using WAMP. I've installed mod_python and the module shows, but I'm not quite sure what I'm supposed to do in e.g. http.conf (things like AddHandler mod_python .py either bring me to a file not found, or a forbidden, or module not found errors when accessing http://localhost/myfolder/index.py). I can get mod_python.publisher to work but do I "want" this/ need this?
Can anyone help?
Thanks!
A:
Stay away from mod_python. One common misleading idea is that mod_python is like mod_php, but for python. That is not true. WSGI is the standard for running python web applications, defined by PEP 333. So use mod_wsgi instead.
Or alternatively, use some web framework that has a server. CherryPy's is particularly good. You will be able to run your application both standalone and through mod_wsgi.
An example of Hello World application using cherrypy:
import cherrypy
class HelloWorld(object):
def index(self):
return "Hello World!"
index.exposed = True
application = HelloWorld()
if __name__ == '__main__':
cherrypy.engine.start()
cherrypy.engine.block()
Very easy huh? Running this application directly on python will start a webserver. Configuring mod_wsgi to it will make it run inside apache.
A:
You do not NEED mod_python to run Python code on the web, you could use simple CGI programming to run your python code, with the instructions in the following link: http://www.imladris.com/Scripts/PythonForWindows.html
That should give you some of the configuration options you need to enable Python with CGI, and a google search should give you reams of other info on how to program in it and such.
Mod_python is useful if you want a slightly more "friendly" interface, or more control over the request itself. You can use it to create request filters and other things for the Apache server, and with the publisher handler you get a simpler way of handling webpage requests via python.
The publisher handler works by mapping URLs to Python objects/functions. This means you can define a function named 'foo' in your python file, and any request to http://localhost/foo would call that function automatically. More info here: http://www.modpython.org/live/current/doc-html/hand-pub-alg-trav.html
As for the Apache config to make things work, something like this should serve you well
<Directory /var/www/html/python/>
SetHandler mod_python
PythonHandler mod_python.publisher
PythonDebug On
</Directory>
If you have /var/www/html/ set up as your web server's root and have a file called index.py in the python/ directory in there, then any request to http://localhost/python/foo should call the foo() function in index.py, or fail with a 404 if it doesn't exist.
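For illustration, the index.py in that example could be as small as this sketch (with the publisher handler, a first argument named req receives the mod_python request object):
# index.py - 'foo' maps to a request for /python/foo
def foo(req):
    return "hello from foo"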
A:
AddHandler mod_python .py
Have you set 'PythonHandler'?
These days, consider using WSGI instead of native mod-python interfaces for more wide-ranging deployment options. Either through mod-python's WSGI support, or, maybe better, mod-wsgi. (CGI via eg. wsgiref will also work fine and is easy to get set up on a development environment where you don't care about its rubbish performance.)
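As a minimal WSGI sketch using the stdlib wsgiref server mentioned above (fine for development; the same application callable can later be served by mod_wsgi under Apache):
from wsgiref.simple_server import make_server

def application(environ, start_response):
    start_response('200 OK', [('Content-type', 'text/plain')])
    return ['Hello World!']

if __name__ == '__main__':
    # serve on http://localhost:8000/ until interrupted
    make_server('localhost', 8000, application).serve_forever()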
| Setting up Python on Windows/ Apache? | I want to get a simple Python "hello world" web page script to run on Windows Vista/ Apache but hit different walls. I'm using WAMP. I've installed mod_python and the module shows, but I'm not quite sure what I'm supposed to do in e.g. http.conf (things like AddHandler mod_python .py either bring me to a file not found, or a forbidden, or module not found errors when accessing http://localhost/myfolder/index.py). I can get mod_python.publisher to work but do I "want" this/ need this?
Can anyone help?
Thanks!
| [
"Stay away from mod_python. One common misleading idea is that mod_python is like mod_php, but for python. That is not true. Wsgi is the standard to run python web applications, defined by PEP 333. So use mod_wsgi instead.\nOr alternatively, use some web framework that has a server. Cherrypy's one is particulary good. You will be able to run your application both standalone and through mod_wsgi.\nAn example of Hello World application using cherrypy:\nimport cherrypy\n\nclass HelloWorld(object):\n def index(self):\n return \"Hello World!\"\n index.exposed = True\n\napplication = HelloWorld()\nif __name__ == '__main__':\n cherrypy.engine.start()\n cherrypy.engine.block()\n\nVery easy huh? Running this application directly on python will start a webserver. Configuring mod_wsgi to it will make it run inside apache.\n",
"You do not NEED mod_python to run Python code on the web, you could use simple CGI programming to run your python code, with the instructions in the following link: http://www.imladris.com/Scripts/PythonForWindows.html\nThat should give you some of the configuration options you need to enable Python with CGI, and a google search should give you reams of other info on how to program in it and such.\nMod_python is useful if you want a slightly more \"friendly\" interface, or more control over the request itself. You can use it to create request filters and other things for the Apache server, and with the publisher handler you get a simpler way of handling webpage requests via python.\nThe publisher handler works by mapping URLs to Python objects/functions. This means you can define a function named 'foo' in your python file, and any request to http://localhost/foo would call that function automatically. More info here: http://www.modpython.org/live/current/doc-html/hand-pub-alg-trav.html\nAs for the Apache config to make things work, something like this should serve you well\n<Directory /var/www/html/python/>\n SetHandler mod_python\n PythonHandler mod_python.publisher\n PythonDebug On\n</Directory>\n\nIf you have /var/www/html/ set up as your web server's root and have a file called index.py in the python/ directory in there, then any request to http://localhost/python/foo should call the foo() function in index.py, or fail with a 404 if it doesn't exist.\n",
"\nAddHandler mod_python .py\n\nHave you set 'PythonHandler'?\nThese days, consider using WSGI instead of native mod-python interfaces for more wide-ranging deployment options. Either through mod-python's WSGI support, or, maybe better, mod-wsgi. (CGI via eg. wsgiref will also work fine and is easy to get set up on a development environment where you don't care about its rubbish performance.)\n"
] | [
25,
4,
0
] | [] | [] | [
"mod_python",
"python",
"wamp"
] | stackoverflow_0000449055_mod_python_python_wamp.txt |
Q:
Is there a tool for converting VB to a scripting language, e.g. Python or Ruby?
I've discovered VB2Py, but it's been silent for almost 5 years. Are there any other tools out there which could be used to convert VB6 projects to Python, Ruby, Tcl, whatever?
A:
I doubt there would be a good solution for that since VB6 relies too much on the windows API and VBRun libraries though you could translate code that does something else besides GUI operations
Is there something special you need to do with that code? You could compile your VB6 functionality and expose it as a COM object, then connect it to python with IronPython or IronRuby, which are Python and Ruby implementations on .Net, thus allowing you to access .Net functionality - although I'm not quite sure if COM-exposed objects are easily pluggable into those interpreters.
Maybe if you explain a bit more what you want to do you would get a wiser response.
A:
Compile the VB code either into a normal DLL or a COM DLL. All Pythons on Windows, including the vanilla ActivePython distribution (IronPython isn't required) can connect to
both types of DLLs.
I think this is your best option. As Gustavo said, finding something that will compile arbitrary VB6 code into Python sounds like an unachievable dream.
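A hypothetical sketch of driving such a COM DLL from Python with pywin32 (the ProgID "MyVB6Lib.MyClass" and the method name are made up for illustration):
import win32com.client

# Dispatch looks the class up by its registered ProgID
obj = win32com.client.Dispatch("MyVB6Lib.MyClass")
print obj.DoSomething("input")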
A:
Generally, Python is much, much more expressive than VB. Things which took many lines of code in VB can be represented more simply in Python.
If the VB is truly epic in scale, a manual rewrite may be hard. But maintaining VB6 may be just as hard.
If the VB is intimately tied to Windows GUI presentation, any rewrite may be hard. Some VB programs can have a cryptic organization where critical features are buried in the VB code attached to GUI controls.
If the VB is not very big and doesn't use mystery features of the GUI controls, it will probably be much simpler to rewrite the program and do a good job of refactoring the legacy code into something leaner and cleaner.
| Is there a tool for converting VB to a scripting language, e.g. Python or Ruby? | I've discovered VB2Py, but it's been silent for almost 5 years. Are there any other tools out there which could be used to convert VB6 projects to Python, Ruby, Tcl, whatever?
| [
"I doubt there would be a good solution for that since VB6 relies too much on the windows API and VBRun libraries though you could translate code that does something else besides GUI operations\nIs there something special you need to do with that code? You could compile your VB6 functionality and expose it as a COM object and connect it to python with IronPython or IronRuby which are Python and Ruby implementations in .Net thus, allowing you to access .Net functionality although im not quite sure if COM exposed objects are easily pluggable to those interpreters.\nMaybe if you explain a bit more what you want to do you would get a wiser response.\n",
"Compile the VB code either into a normal DLL or a COM DLL. All Pythons on Windows, including the vanilla ActivePython distribution (IronPython isn't required) can connect to \nboth types of DLLs.\nI think this is your best option. As Gustavo said, finding something that will compile arbitrary VB6 code into Python sounds like an unachievable dream.\n",
"Generally, Python is much, much more expressive than VB. Things which took many lines of code in VB can be represented more simply in Python.\nIf the VB is truly epic in scale, a manual rewrite may be hard. But maintaining VB6 may be just as hard. \nIf the VB is intimately tied to Windows GUI presentation, any rewrite may be hard. Some VB programs can have a cryptic organization where critical features are buried in the VB code attached to GUI controls.\nIf the VB is not very big and doesn't use mystery features of the GUI controls, it will probably be much simpler to rewrite the program and do a good job of refactoring the legacy code into something leaner and cleaner.\n"
] | [
3,
3,
1
] | [] | [] | [
"python",
"vb6"
] | stackoverflow_0000449734_python_vb6.txt |
Q:
ImportError when using Google App Engine
When I add the following line to Google's helloworld example:
from reportlab.pdfgen import canvas
I get the following error:
<type 'exceptions.ImportError'>: No module named reportlab.pdfgen
I can get at the reportlab.pdfgen library from the python console. Why can't I get at it from google's dev_appserver?
A:
Copying the module locally worked.
From
Python\Lib\site-packages\reportlab
to
helloworld\reportlab
A:
I believe the google app engine does not include all the standard python modules. I know that anything that works with sockets is disabled, such as urllib.urlopen().
| ImportError when using Google App Engine | When I add the following line to Google's helloworld example:
from reportlab.pdfgen import canvas
I get the following error:
<type 'exceptions.ImportError'>: No module named reportlab.pdfgen
I can get at the reportlab.pdfgen library from the python console. Why can't I get at it from google's dev_appserver?
| [
"Copying the module locally worked.\nFrom \nPython\\Lib\\site-packages\\reportlab\n\nto \nhelloworld\\reportlab \n\n",
"I believe the google app engine does not include all the standard python modules. I know that anything that works with sockes is disabled, such as urllib.urlopen().\n"
] | [
3,
0
] | [] | [] | [
"google_app_engine",
"import",
"python"
] | stackoverflow_0000450883_google_app_engine_import_python.txt |
Q:
How to check if a file can be created inside given directory on MS XP/Vista?
I have code that creates file(s) in a user-specified directory. The user can point to a directory in which he can't create files, but he can rename it.
I have created a directory for test purposes; let's call it C:\foo.
I have the following permissions on C:\foo:
Traversing directory/Execute file
Removing subfolders and files
Removing
Read permissions
Change permissions
Take ownership
I don't have any of the following permissions to C:\foo:
Full Control
File creation
Folder creation
I have tried the following approaches so far:
os.access(r'C:\foo', os.W_OK) == True
st = os.stat(r'C:\foo')
mode = st[stat.ST_MODE]
mode & stat.S_IWRITE == True
I believe that this is caused by the fact that I can rename the folder, so the folder itself is changeable for me, but its contents are not.
Does anyone know how I can write code that will check, for a given directory, whether the current user has permission to create a file in that directory?
In brief - I want to check if the current user has File creation and Folder creation permissions for a given folder name.
EDIT: The need for such code arisen from the Test case no 3 from 'Certified for Windows Vista' program, which states:
The application must not allow the Least-Privileged user to save any files to Windows System directory in order to pass this test case.
Should this be understood as 'Application may try to save file in Windows System directory, but shouldn't crash on failure?' or rather 'Application has to perform security checks before trying to save file?'
Should I stop bothering just because Windows Vista itself won't allow the Least-Privileged user to save any files in %WINDIR%?
A:
I wouldn't waste time and LOCs on checking for permissions. The ultimate test of file creation in Windows is the creation itself. Other factors may come into play (such as existing files (or worse, folders) with the same name, disk space, or background processes). These conditions can even change between the time you make the initial check and the time you actually try to create your file.
So, if I had a scenario like that, I would just design my method to not lose any data in case of failure, to go ahead and try to create my file, and offer the user an option to change the selected directory and try again if creation fails.
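A minimal EAFP sketch of that approach (the retry/UI handling is only hinted at; the helper is my own):
def try_create(path):
    try:
        f = open(path, 'wb')
    except (IOError, OSError), e:
        # caller can report the reason and prompt for a different directory
        return False, str(e)
    f.close()
    return True, None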
A:
I recently wrote an app to pass a set of tests to obtain the ISV status from Microsoft, and I also had that condition.
The way I understood it was that if the user is Least-Privileged then he won't have permission to write in the system folders. So I approached the problem the way Ishmaeel described: I try to create the file and catch the exception, then inform the user that he doesn't have permission to write files to that directory.
In my understanding a Least-Privileged user will not have the necessary permissions to write to those folders; if he has, then he is not a Least-Privileged user.
Should I stop bothering just because Windows Vista itself won't allow the Least-Privileged user to save any files in %WINDIR%?
In my opinion? Yes.
A:
I agree with the other answers that the way to do this is to try to create the file and catch the exception.
However, on Vista beware of UAC! See for example "Why does my application allow me to save files to the Windows and System32 folders in Vista?": To support old applications, Vista will "pretend" to create the file while in reality it creates it in the so-called Virtual Store under the current user's profile.
To avoid this you have to specifically tell Vista that you don't want administrative privileges, by including the appropriate commands in the .exe's manifest, see the question linked above.
A:
import os
import tempfile
def can_create_file(folder_path):
try:
tempfile.TemporaryFile(dir=folder_path)
return True
except OSError:
return False
def can_create_folder(folder_path):
try:
name = tempfile.mkdtemp(dir=folder_path)
os.rmdir(name)
return True
except OSError:
return False
| How to check if a file can be created inside given directory on MS XP/Vista? | I have code that creates file(s) in a user-specified directory. The user can point to a directory in which he can't create files, but he can rename it.
I have created a directory for test purposes; let's call it C:\foo.
I have the following permissions on C:\foo:
Traversing directory/Execute file
Removing subfolders and files
Removing
Read permissions
Change permissions
Take ownership
I don't have any of the following permissions to C:\foo:
Full Control
File creation
Folder creation
I have tried the following approaches so far:
os.access(r'C:\foo', os.W_OK) == True
st = os.stat(r'C:\foo')
mode = st[stat.ST_MODE]
mode & stat.S_IWRITE == True
I believe that this is caused by the fact that I can rename the folder, so the folder itself is changeable for me, but its contents are not.
Does anyone know how I can write code that will check, for a given directory, whether the current user has permission to create a file in that directory?
In brief - I want to check if the current user has File creation and Folder creation permissions for a given folder name.
EDIT: The need for such code arisen from the Test case no 3 from 'Certified for Windows Vista' program, which states:
The application must not allow the Least-Privileged user to save any files to Windows System directory in order to pass this test case.
Should this be understood as 'Application may try to save file in Windows System directory, but shouldn't crash on failure?' or rather 'Application has to perform security checks before trying to save file?'
Should I stop bothering just because Windows Vista itself won't allow the Least-Privileged user to save any files in %WINDIR%?
| [
"I wouldn't waste time and LOCs on checking for permissions. Ultimate test of file creation in Windows is the creation itself. Other factors may come into play (such as existing files (or worse, folders) with the same name, disk space, background processes. These conditions can even change between the time you make the initial check and the time you actually try to create your file.\nSo, if I had a scenario like that, I would just design my method to not lose any data in case of failure, to go ahead and try to create my file, and offer the user an option to change the selected directory and try again if creation fails.\n",
"I recently wrote a App to pass a set of test to obtain the ISV status from Microsoft and I also add that condition.\nThe way I understood it was that if the user is Least Priveledge then he won't have permission to write in the system folders. So I approached the problem the the way Ishmaeel described. I try to create the file and catch the exception then inform the user that he doesn't have permission to write files to that directory.\nIn my understanding an Least-Priviledged user will not have the necessary permissions to write to those folders, if he has then he is not a Least-Priveledge user.\n\nShould I stop bothering just because Windows Vista itself won't allow the Least-Privileged user to save any files in %WINDIR%?\n\nIn my opinion? Yes.\n",
"I agree with the other answers that the way to do this is to try to create the file and catch the exception. \nHowever, on Vista beware of UAC! See for example \"Why does my application allow me to save files to the Windows and System32 folders in Vista?\": To support old applications, Vista will \"pretend\" to create the file while in reality it creates it in the so-called Virtual Store under the current user's profile. \nTo avoid this you have to specifically tell Vista that you don't want administrative privileges, by including the appropriate commands in the .exe's manifest, see the question linked above.\n",
"import os\nimport tempfile\n\ndef can_create_file(folder_path):\n try:\n tempfile.TemporaryFile(dir=folder_path)\n return True\n except OSError:\n return False\n\ndef can_create_folder(folder_path):\n try:\n name = tempfile.mkdtemp(dir=folder_path)\n os.rmdir(name)\n return True\n except OSError:\n return False\n\n"
] | [
4,
4,
3,
3
] | [] | [] | [
"permissions",
"python",
"winapi",
"windows",
"windows_vista"
] | stackoverflow_0000450210_permissions_python_winapi_windows_windows_vista.txt |
Q:
Executing command line programs from within python
I'm building a web application that is going to manipulate (pad, mix, merge etc) sound files and I've found that sox does exactly what I want. Sox is a linux command line program and I'm feeling a little uncomfortable with having the python web app start new sox processes on my server on a per request basis.
Example:
import os
os.system('sox input.wav -b 24 output.aiff rate -v -L -b 90 48k')
This whole setup seems a little unstable to me.
So my question is, what's the best practice for running command line programs from within a python (or any scripting language) web app?
Message queues would be one thing to implement in order to get around the whole request-response cycle. But are there other ways to make these things more elegant?
A:
The subprocess module is the preferred way of running other programs from Python -- much more flexible and nicer to use than os.system.
import subprocess
#subprocess.check_output(['ls', '-l']) # All that is technically needed...
print(subprocess.check_output(['ls', '-l']))
A:
This whole setup seems a little unstable to me.
Talk to the ffmpegx folks about having a GUI front-end over a command-line backend. It doesn't seem to bother them.
Indeed, I submit that a GUI (or web) front-end over a command-line backend is actually more stable, since you have a very, very clean interface between GUI and command. The command can evolve at a different pace from the web, as long as the command-line options are compatible, you have no possibility of breakage.
A:
If you're concerned about server performance then look at capping the number of running sox processes. If the cap has been hit you can always cache the request and inform the user when it's finished in whichever way suits your application.
Alternatively, have n worker scripts on other machines that pull requests from the db and call sox, and then push the resulting output file to where it needs to be.
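A sketch of the capping idea with a semaphore (MAX_SOX and the argument handling here are illustrative, not from the answer above):
import subprocess
from threading import BoundedSemaphore

MAX_SOX = 4
sox_slots = BoundedSemaphore(MAX_SOX)

def run_sox(args):
    sox_slots.acquire()  # blocks while MAX_SOX sox jobs are already running
    try:
        return subprocess.call(['sox'] + args)
    finally:
        sox_slots.release()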
A:
I am not familiar with sox, but instead of making repeated calls to the program as a command line, is it possible to set it up as a service and connect to it for requests? You can take a look at the connection interface such as sqlite for inspiration.
| Executing command line programs from within python | I'm building a web application that is going to manipulate (pad, mix, merge etc) sound files and I've found that sox does exactly what I want. Sox is a linux command line program and I'm feeling a little uncomfortable with having the python web app start new sox processes on my server on a per request basis.
Example:
import os
os.system('sox input.wav -b 24 output.aiff rate -v -L -b 90 48k')
This whole setup seems a little unstable to me.
So my question is, what's the best practice for running command line programs from within a python (or any scripting language) web app?
Message queues would be one thing to implement in order to get around the whole request-response cycle. But are there other ways to make these things more elegant?
| [
"The subprocess module is the preferred way of running other programs from Python -- much more flexible and nicer to use than os.system. \nimport subprocess\n#subprocess.check_output(['ls', '-l']) # All that is technically needed...\nprint(subprocess.check_output(['ls', '-l']))\n\n",
"\nThis whole setup seems a little unstable to me.\n\nTalk to the ffmpegx folks about having a GUI front-end over a command-line backend. It doesn't seem to bother them.\nIndeed, I submit that a GUI (or web) front-end over a command-line backend is actually more stable, since you have a very, very clean interface between GUI and command. The command can evolve at a different pace from the web, as long as the command-line options are compatible, you have no possibility of breakage.\n",
"If you're concerned about server performance then look at capping the number of running sox processes. If the cap has been hit you can always cache the request and inform the user when it's finished in whichever way suits your application.\nAlternatively, have the n worker scripts on other machines that pull requests from the db and call sox, and then push the resulting output file to where it needs to be.\n",
"I am not familiar with sox, but instead of making repeated calls to the program as a command line, is it possible to set it up as a service and connect to it for requests? You can take a look at the connection interface such as sqlite for inspiration. \n"
] | [
319,
28,
3,
2
] | [] | [] | [
"command_line",
"python"
] | stackoverflow_0000450285_command_line_python.txt |
Q:
Cookie Problem in Python
I'm working on a simple HTML scraper for Hulu in python 2.6 and am having problems with logging on to my account. Here's my code so far:
import urllib
import urllib2
from cookielib import CookieJar
#make a cookie and redirect handlers
cookies = CookieJar()
cookie_handler= urllib2.HTTPCookieProcessor(cookies)
redirect_handler= urllib2.HTTPRedirectHandler()
opener = urllib2.build_opener(redirect_handler,cookie_handler)#make opener w/ handlers
#build the url
login_info = {'username':USER,'password':PASS}#USER and PASS are defined
data = urllib.urlencode(login_info)
req = urllib2.Request("http://www.hulu.com/account/authenticate",data)#make the request
test = opener.open(req) #open the page
print test.read() #print html results
The code compiles and runs, but all that prints is:
Login.onError("Please \074a href=\"/support/login_faq#cant_login\"\076enable cookies\074/a\076 and try again.");
I assume there is some error in how I'm handling cookies, but just can't seem to spot it. I've heard Mechanize is a very useful module for this type of program, but as this seems to be the only speed bump left, I was hoping to find my bug.
A:
What you're seeing is an ajax return. It is probably using javascript to set the cookie, and that is screwing up your attempts to authenticate.
A:
The error message you are getting back could be misleading. For example, the server might be looking at the user-agent and seeing that, say, it's not one of the supported browsers, or looking at HTTP_REFERER and expecting it to come from the hulu domain. My point is there are too many variables coming in with the request to keep guessing them one by one.
I recommend using an http analyzer tool, e.g. Charles or the one in Firebug, to figure out what (header fields, cookies, parameters) the client sends to the server when you do the hulu login via a browser. This will give you the exact request that you need to construct in your python code.
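As a hedged illustration of that idea, you could replay browser-like headers on the request from the question (data and opener come from the question's snippet; the header values here are placeholders - use whatever the analyzer actually shows):
req = urllib2.Request("http://www.hulu.com/account/authenticate", data,
                      {'User-Agent': 'Mozilla/5.0',
                       'Referer': 'http://www.hulu.com/'})
test = opener.open(req)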
| Cookie Problem in Python | I'm working on a simple HTML scraper for Hulu in python 2.6 and am having problems with logging on to my account. Here's my code so far:
import urllib
import urllib2
from cookielib import CookieJar
#make a cookie and redirect handlers
cookies = CookieJar()
cookie_handler= urllib2.HTTPCookieProcessor(cookies)
redirect_handler= urllib2.HTTPRedirectHandler()
opener = urllib2.build_opener(redirect_handler,cookie_handler)#make opener w/ handlers
#build the url
login_info = {'username':USER,'password':PASS}#USER and PASS are defined
data = urllib.urlencode(login_info)
req = urllib2.Request("http://www.hulu.com/account/authenticate",data)#make the request
test = opener.open(req) #open the page
print test.read() #print html results
The code compiles and runs, but all that prints is:
Login.onError("Please \074a href=\"/support/login_faq#cant_login\"\076enable cookies\074/a\076 and try again.");
I assume there is some error in how I'm handling cookies, but just can't seem to spot it. I've heard Mechanize is a very useful module for this type of program, but as this seems to be the only speed bump left, I was hoping to find my bug.
| [
"What you're seeing is a ajax return. It is probably using javascript to set the cookie, and screwing up your attempts to authenticate.\n",
"The error message you are getting back could be misleading. For example the server might be looking at user-agent and seeing that say it's not one of the supported browsers, or looking at HTTP_REFERER expecting it to be coming from hulu domain. My point is there are two many variables coming in the request to keep guessing them one by one\nI recommend using an http analyzer tool, e.g. Charles or the one in Firebug to figure out what (header fields, cookies, parameters) the client sends to server when you doing hulu login via a browser. This will give you the exact request that you need to construct in your python code.\n"
] | [
4,
2
] | [] | [] | [
"cookies",
"python",
"urllib2"
] | stackoverflow_0000450787_cookies_python_urllib2.txt |
Q:
What does this Python message mean?
ho-fe3fdd00-12:~ Sam$ easy_install BeautifulSoup
Traceback (most recent call last):
File "/usr/bin/easy_install", line 8, in <module>
load_entry_point('setuptools==0.6c7', 'console_scripts', 'easy_install')()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/setuptools/command/easy_install.py", line 1670, in main
with_ei_usage(lambda:
File "/System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/setuptools/command/easy_install.py", line 1659, in with_ei_usage
return f()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/setuptools/command/easy_install.py", line 1674, in <lambda>
distclass=DistributionWithoutHelpCommands, **kw
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/core.py", line 125, in setup
dist.parse_config_files()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/dist.py", line 373, in parse_config_files
parser.read(filename)
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ConfigParser.py", line 267, in read
self._read(fp, filename)
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ConfigParser.py", line 462, in _read
raise MissingSectionHeaderError(fpname, lineno, line)
ConfigParser.MissingSectionHeaderError: File contains no section headers.
file: /Users/Sam/.pydistutils.cfg, line: 1
'install_lib = ~/Library/Python/$py_version_short/site-packages\n'
I am trying to install beautifulsoup.
The first two lines in ~/.pydistutils.cfg:
install_lib = ~/Library/Python/$py_version_short/site-packages
install_scripts = ~/bin
A:
BeautifulSoup is a pure Python module which you can install by grabbing the BeautifulSoup.py file (eg. from inside the standard .tar.gz distribution) and putting it somewhere on your PythonPath - eg. inside /Users/Sam/Library/Python/2.5/site-packages, if the paths mentioned in the error message are accurate.
No need for fussy and error-prone installers which just overcomplicate the issue.
A:
The configuration file .pydistutils.cfg has a syntax error.
A:
Try to add the line at the top of ~/.pydistutils.cfg:
[easy_install]
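With that section header in place, the start of the file (using the two lines quoted in the question) would be:
[easy_install]
install_lib = ~/Library/Python/$py_version_short/site-packages
install_scripts = ~/bin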
| What does this Python message mean? | ho-fe3fdd00-12:~ Sam$ easy_install BeautifulSoup
Traceback (most recent call last):
File "/usr/bin/easy_install", line 8, in <module>
load_entry_point('setuptools==0.6c7', 'console_scripts', 'easy_install')()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/setuptools/command/easy_install.py", line 1670, in main
with_ei_usage(lambda:
File "/System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/setuptools/command/easy_install.py", line 1659, in with_ei_usage
return f()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/setuptools/command/easy_install.py", line 1674, in <lambda>
distclass=DistributionWithoutHelpCommands, **kw
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/core.py", line 125, in setup
dist.parse_config_files()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/dist.py", line 373, in parse_config_files
parser.read(filename)
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ConfigParser.py", line 267, in read
self._read(fp, filename)
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ConfigParser.py", line 462, in _read
raise MissingSectionHeaderError(fpname, lineno, line)
ConfigParser.MissingSectionHeaderError: File contains no section headers.
file: /Users/Sam/.pydistutils.cfg, line: 1
'install_lib = ~/Library/Python/$py_version_short/site-packages\n'
I am trying to install beautifulsoup.
The first two lines in ~/.pydistutils.cfg:
install_lib = ~/Library/Python/$py_version_short/site-packages
install_scripts = ~/bin
| [
"BeautifulSoup is a pure Python module which you can install by grabbing the BeautifulSoup.py file (eg. from inside the standard .tar.gz distribution) and putting it somewhere on your PythonPath - eg. inside /Users/Sam/Library/Python/2.5/site-packages, if the paths mentioned in the error message are accurate.\nNo need for fussy and error-prone installers which just overcomplicate the issue.\n",
"The configuration file .pydstutils.cfg has a syntax error.\n",
"Try to add the line at the top of ~/.pydistutils.cfg:\n[easy_install]\n\n"
] | [
3,
2,
2
] | [] | [] | [
"beautifulsoup",
"easy_install",
"installation",
"macos",
"python"
] | stackoverflow_0000452532_beautifulsoup_easy_install_installation_macos_python.txt |
Q:
What is the best way to handle a bad link given to BeautifulSoup?
I'm working on something that pulls in urls from delicious and then uses those urls to discover associated feeds.
However, some of the bookmarks in delicious are not html links and cause BS to barf. Basically, I want to throw away a link if BS fetches it and it does not look like html.
Right now, this is what I'm getting.
trillian:Documents jauderho$ ./d2o.py "green data center"
processing http://www.greenm3.com/
processing http://www.eweek.com/c/a/Green-IT/How-to-Create-an-EnergyEfficient-Green-Data-Center/?kc=rss
Traceback (most recent call last):
File "./d2o.py", line 53, in <module>
get_feed_links(d_links)
File "./d2o.py", line 43, in get_feed_links
soup = BeautifulSoup(html)
File "/Library/Python/2.5/site-packages/BeautifulSoup.py", line 1499, in __init__
BeautifulStoneSoup.__init__(self, *args, **kwargs)
File "/Library/Python/2.5/site-packages/BeautifulSoup.py", line 1230, in __init__
self._feed(isHTML=isHTML)
File "/Library/Python/2.5/site-packages/BeautifulSoup.py", line 1263, in _feed
self.builder.feed(markup)
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/HTMLParser.py", line 108, in feed
self.goahead(0)
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/HTMLParser.py", line 150, in goahead
k = self.parse_endtag(i)
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/HTMLParser.py", line 314, in parse_endtag
self.error("bad end tag: %r" % (rawdata[i:j],))
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/HTMLParser.py", line 115, in error
raise HTMLParseError(message, self.getpos())
HTMLParser.HTMLParseError: bad end tag: u'</b />', at line 739, column 1
Update:
Jehiah's answer does the trick. For reference, here's some code to get the content type:
def check_for_html(link):
out = urllib.urlopen(link)
return out.info().getheader('Content-Type')
A:
I simply wrap my BeautifulSoup processing and look for the HTMLParser.HTMLParseError exception
import HTMLParser,BeautifulSoup
try:
soup = BeautifulSoup.BeautifulSoup(raw_html)
for a in soup.findAll('a'):
href = a['href']
....
except HTMLParser.HTMLParseError:
print "failed to parse",url
but further than that, you can check the content type of the responses when you crawl a page and make sure that it's something like text/html or application/xml+xhtml or something like that before you even try to parse it. That should head off most errors.
| What is the best way to handle a bad link given to BeautifulSoup? | I'm working on something that pulls in urls from delicious and then uses those urls to discover associated feeds.
However, some of the bookmarks in delicious are not html links and cause BS to barf. Basically, I want to throw away a link if BS fetches it and it does not look like html.
Right now, this is what I'm getting.
trillian:Documents jauderho$ ./d2o.py "green data center"
processing http://www.greenm3.com/
processing http://www.eweek.com/c/a/Green-IT/How-to-Create-an-EnergyEfficient-Green-Data-Center/?kc=rss
Traceback (most recent call last):
File "./d2o.py", line 53, in <module>
get_feed_links(d_links)
File "./d2o.py", line 43, in get_feed_links
soup = BeautifulSoup(html)
File "/Library/Python/2.5/site-packages/BeautifulSoup.py", line 1499, in __init__
BeautifulStoneSoup.__init__(self, *args, **kwargs)
File "/Library/Python/2.5/site-packages/BeautifulSoup.py", line 1230, in __init__
self._feed(isHTML=isHTML)
File "/Library/Python/2.5/site-packages/BeautifulSoup.py", line 1263, in _feed
self.builder.feed(markup)
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/HTMLParser.py", line 108, in feed
self.goahead(0)
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/HTMLParser.py", line 150, in goahead
k = self.parse_endtag(i)
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/HTMLParser.py", line 314, in parse_endtag
self.error("bad end tag: %r" % (rawdata[i:j],))
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/HTMLParser.py", line 115, in error
raise HTMLParseError(message, self.getpos())
HTMLParser.HTMLParseError: bad end tag: u'</b />', at line 739, column 1
Update:
Jehiah's answer does the trick. For reference, here's some code to get the content type:
def check_for_html(link):
out = urllib.urlopen(link)
return out.info().getheader('Content-Type')
| [
"I simply wrap my BeautifulSoup processing and look for the HTMLParser.HTMLParseError exception\nimport HTMLParser,BeautifulSoup\ntry:\n soup = BeautifulSoup.BeautifulSoup(raw_html)\n for a in soup.findAll('a'):\n href = a.['href']\n ....\nexcept HTMLParser.HTMLParseError:\n print \"failed to parse\",url\n\nbut further than that, you can check the content type of the responses when you crawl a page and make sure that it's something like text/html or application/xml+xhtml or something like that before you even try to parse it. That should head off most errors.\n"
] | [
3
] | [] | [] | [
"beautifulsoup",
"parsing",
"python"
] | stackoverflow_0000452884_beautifulsoup_parsing_python.txt |
Q:
PyQt and PyCairo
I know it's possible to place a PyCairo surface inside a Gtk Drawing Area. But I think Qt is a lot better to work with, so I've been wondering if there's any way to place a PyCairo surface inside some Qt component?
A:
Qt's own OpenGL based surfaces (using QPainter) are known to be much faster than Cairo. Might you explain why you want specifically Cairo in Qt?
For the basics of using QPainter see this excerpt from the book "C++ GUI Programming with Qt4", and while it's C++ code, the PyQt implementation will be parallel.
As for joining Cairo with Qt... This article in ArsTechnica sheds some light - it seems nothing that could help you exists currently (in other words, nobody has tried such a marriage).
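To give a feel for the QPainter side, here is a minimal self-contained sketch (plain PyQt4, nothing Cairo-specific):
import sys
from PyQt4 import QtGui

class Canvas(QtGui.QWidget):
    def paintEvent(self, event):
        # QPainter draws straight onto the widget surface
        painter = QtGui.QPainter(self)
        painter.setRenderHint(QtGui.QPainter.Antialiasing)
        painter.setBrush(QtGui.QBrush(QtGui.QColor(200, 0, 0)))
        painter.drawEllipse(10, 10, 100, 100)

if __name__ == '__main__':
    app = QtGui.QApplication(sys.argv)
    canvas = Canvas()
    canvas.show()
    sys.exit(app.exec_())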
A:
For plotting, you should also consider matplotlib, which provides a higher-level API and integrates well with PyQt.
| PyQt and PyCairo | I know it's possible to place a PyCairo surface inside a Gtk Drawing Area. But I think Qt is a lot better to work with, so I've been wondering if there's any way to place a PyCairo surface inside some Qt component?
| [
"Qt's own OpenGL based surfaces (using QPainter) are known to be much faster than Cairo. Might you explain why you want specifically Cairo in Qt?\nFor the basics of using QPainter see this excerpt from the book \"C++ GUI Programming with Qt4\", and while it's C++ code, the PyQt implementation will be parallel.\nAs for joining Cairo with Qt... This article in ArsTechnica sheds some light - it seems nothing that could help you exists currently (iow., nobody tried such marriage).\n",
"For plotting with you should also consider matplotlib, which provides a higher level API and integrates well with PyQT.\n"
] | [
5,
0
] | [] | [] | [
"gtk",
"pyqt",
"python",
"qt"
] | stackoverflow_0000082180_gtk_pyqt_python_qt.txt |
Q:
What is the practical difference between xml, json, rss and atom when interfacing with Twitter?
I'm new to web services and as an introduction I'm playing around with the Twitter API using the Twisted framework in python. I've read up on the different formats they offer, but it's still not clear to me which one I should use in my fairly simple project.
Specifically the practical difference between using JSON or XML is something I'd like guidance on. All I'm doing is requesting the public timeline and caching it locally.
Thanks.
A:
For me it boils down to convenience. Using XML, I have to parse the response in to a DOM (or more usually an ElementTree). Using JSON, one call to simplejson.loads(json_string) and I have a native Python data structure (lists, dictionaries, strings etc) which I can start iterating over and processing. Anything that means writing a few less lines of code is usually a good idea in my opinion.
I often use JSON to move data structures between PHP, Python and JavaScript - again, because it saves me having to figure out an XML serialization and then parse it at the other end.
And like jinzo said, JSON ends up being slightly fewer bytes on the wire.
You might find my blog entry on JSON from a couple of years ago useful: http://simonwillison.net/2006/Dec/20/json/
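As a rough illustration of that convenience (the endpoint URL and field names here are assumptions based on Twitter's documented JSON format, so check them against the current docs):
import urllib2
import simplejson

url = 'http://twitter.com/statuses/public_timeline.json'  # assumed endpoint
timeline = simplejson.loads(urllib2.urlopen(url).read())
for status in timeline:
    print status['user']['screen_name'], '-', status['text']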
A:
RSS and Atom are XML formats.
JSON is a string which can be evaluated as Javascript code.
A:
I would say the amount of data being sent over the wire is one factor. XML data stream will be bigger than JSON for the same data. But you can use whichever you know better or have more experience with.
I would recommend JSON, as it's more "pythonic" than XML.
| What is the practical difference between xml, json, rss and atom when interfacing with Twitter? | I'm new to web services and as an introduction I'm playing around with the Twitter API using the Twisted framework in python. I've read up on the different formats they offer, but it's still not clear to me which one I should use in my fairly simple project.
Specifically the practical difference between using JSON or XML is something I'd like guidance on. All I'm doing is requesting the public timeline and caching it locally.
Thanks.
| [
"For me it boils down to convenience. Using XML, I have to parse the response in to a DOM (or more usually an ElementTree). Using JSON, one call to simplejson.loads(json_string) and I have a native Python data structure (lists, dictionaries, strings etc) which I can start iterating over and processing. Anything that means writing a few less lines of code is usually a good idea in my opinion.\nI often use JSON to move data structures between PHP, Python and JavaScript - again, because it saves me having to figure out an XML serialization and then parse it at the other end.\nAnd like jinzo said, JSON ends up being slightly fewer bytes on the wire.\nYou might find my blog entry on JSON from a couple of years ago useful: http://simonwillison.net/2006/Dec/20/json/\n",
"RSS and Atom are XML formats.\nJSON is a string which can be evaluated as Javascript code.\n",
"I would say the amount of data being sent over the wire is one factor. XML data stream will be bigger than JSON for the same data. But you can use whatever you know more/have more experience. \nI would recommend JSON, as it's more \"pythonic\" than XML.\n"
] | [
8,
4,
1
] | [] | [] | [
"json",
"python",
"twisted",
"twitter",
"xml"
] | stackoverflow_0000453158_json_python_twisted_twitter_xml.txt |
Q:
Is there a Python news site that's the near equivalent of RubyFlow?
I really like the format and type of links from RubyFlow for Ruby related topics. Is there an equivalent for Python that's active? There is a PythonFlow, but I think it's pretty much dead.
I don't really like http://planet.python.org/ because there's lots of non-Python stuff on there and there's very little summarization of posts.
A:
http://www.reddit.com/r/Python is my favorite source for Python news.
A:
Possibly http://www.planetpython.org/ or http://planet.python.org/.
A:
http://planetpython.org/ (the unofficial planet) is generally better than http://planet.python.org/ (the official one) - I think the maintainers of the unofficial one are a bit more active in trimming feeds and maybe more careful about subscribing to Python category feeds if available (they certainly subscribe to the python tag on my blog rather than the whole feed).
| Is there a Python news site that's the near equivalent of RubyFlow? | I really like the format and type of links from RubyFlow for Ruby related topics. Is there an equivalent for Python that's active? There is a PythonFlow, but I think it's pretty much dead.
I don't really like http://planet.python.org/ because there's lots of non-Python stuff on there and there's very little summarization of posts.
| [
"http://www.reddit.com/r/Python is my favorite source for Python news.\n",
"Possibly http://www.planetpython.org/ or http://planet.python.org/.\n",
"http://planetpython.org/ (the unofficial planet) is generally better than http://planet.python.org/ (the official one) - I think the maintainers of the unofficial one are a bit more active in trimming feeds and maybe more careful about subscribing to Python category feeds if available (they certainly subscribe to the python tag on my blog rather than the whole feed).\n"
] | [
6,
2,
2
] | [] | [] | [
"python"
] | stackoverflow_0000453673_python.txt |
Q:
PYTHONSTARTUP doesn't seem to work
I'm trying to use the PYTHONSTARTUP environment variable. I set it to be "c:\python25\pythonstartup.py" in My Computer --> Advanced etc., and it doesn't seem to work.
Opening IDLE doesn't run the script, although it recognized the variable:
>>> import os
>>> os.environ['PYTHONSTARTUP']
'c:\\python25\\pythonstartup.py'
>>>
I'm using XP and Python 2.5.2. I do not wish to upgrade to 3.0 yet.
Thanks
A:
The documentation says that PYTHONSTARTUP is only run for interactive sessions. I'm not sure how IDLE runs the Python interpreter, but it could be interfering.
Instead, try running python directly from a command prompt, rather than from clicking on an icon.
A:
To add to Greg Hewgill's correct answer: If IDLE doesn't have a startup file of its own, you can put a file called sitecustomize.py in your path which will be executed for both command prompt and scripts / IDLE sessions.
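For illustration, a minimal sketch of what such a startup file might contain; it is just an ordinary Python script executed before the first interactive prompt:
# contents of the file PYTHONSTARTUP points at, e.g. pythonstartup.py
import os
print 'startup file loaded from', os.environ.get('PYTHONSTARTUP')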
| PYTHONSTARTUP doesn't seem to work | I'm trying to use the PYTHONSTARTUP environment variable. I set it to be "c:\python25\pythonstartup.py" in My Computer --> Advanced etc., and it doesn't seem to work.
Opening IDLE doesn't run the script, although it recognized the variable:
>>> import os
>>> os.environ['PYTHONSTARTUP']
'c:\\python25\\pythonstartup.py'
>>>
I'm using XP and Python 2.5.2. I do not wish to upgrade to 3.0 yet.
Thanks
| [
"The documentation says that PYTHONSTARTUP is only run for interactive sessions. I'm not sure how IDLE runs the Python interpreter, but it could be interfering.\nInstead, try running python directly from a command prompt, rather than from clicking on an icon.\n",
"To add to Greg Hewgill's correct answer: If IDLE doesn't have a startup file of its own, you can put a file called sitecustomize.py in your path which will be executed for both command prompt and scripts / IDLE sessions.\n"
] | [
6,
2
] | [] | [] | [
"python",
"startup"
] | stackoverflow_0000453808_python_startup.txt |
Q:
How to create a controller method in Turbogears that can be called from within the controller, or rendered with a template
If you have a controller method like so:
@expose("json")
def artists(self, action="view",artist_id=None):
artists=session.query(model.Artist).all()
return dict(artists=artists)
How can you call that method from within your controller class, and get the python dict back - rather than the json-encoded string of the dict (which requires you to decode it from json back into a python dict). Is it really necessary to write one function to get the data out of your model, and another to pack that data for use by the templates (KID, JSON)? Why is it that when you call this method from in the same class, e.g.:
artists = self.artists()
You get a json string, when that's only appropriate if the method is called as part of an HTML request.
What have I missed?
A:
I normally approach this by having a 'worker' method, which queries the database, transforms results, etc., and a separate exposing method, with all the required decorators. E.g.:
# The _artists method can be used from any other method
def _artists(self, action, artist_id):
artists = session.query(model.Artist).all()
return dict(artists=artists)
@expose("json")
#@identity.require(identity.non_anonymous())
# error handlers, etc.
def artists(self, action="view", artist_id=None):
return self._artists(action=action, artist_id=artist_id)
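Any other method on the controller can then reuse the plain dict; for example (the template path is hypothetical):
@expose("myproject.templates.artist_list")  # hypothetical template
def artist_page(self):
    data = self._artists(action="view", artist_id=None)
    # data is still a plain Python dict here, not a JSON string
    return data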
| How to create a controller method in Turbogears that can be called from within the controller, or rendered with a template | If you have a controller method like so:
@expose("json")
def artists(self, action="view",artist_id=None):
artists=session.query(model.Artist).all()
return dict(artists=artists)
How can you call that method from within your controller class, and get the python dict back - rather than the json-encoded string of the dict (which requires you to decode it from json back into a python dict). Is it really necessary to write one function to get the data out of your model, and another to pack that data for use by the templates (KID, JSON)? Why is it that when you call this method from in the same class, e.g.:
artists = self.artists()
You get a json string, when that's only appropriate if the method is called as part of an HTML request.
What have I missed?
| [
"I normally approach this by having a 'worker' method, which queries the database, transforms results, etc., and a separate exposing method, with all the required decorators. E.g.:\n# The _artists method can be used from any other method\ndef _artists(self, action, artist_id):\n artists = session.query(model.Artist).all()\n return dict(artists=artists)\n\n@expose(\"json\")\n#@identity.require(identity.non_anonymous())\n# error handlers, etc.\ndef artists(self, action=\"view\", artist_id=None):\n return self._artists(action=action, artist_id=artist_id)\n\n"
] | [
1
] | [] | [] | [
"json",
"python",
"turbogears"
] | stackoverflow_0000454223_json_python_turbogears.txt |
Q:
Your most unpythonic code snippet
I am an experienced developer new to python, and still catching myself writing correct but unpythonic code. I thought it might be enlightening and entertaining to see small examples of python code people have written that in some way clashes with the generally preferred way of doing things.
I'm interested in code you actually wrote rather than invented examples. Here is one of mine: In code that was expecting a sequence that could be empty or None I had
if data is not None and len(data) > 0:
I later reduced that to
if data:
The simpler version allows additional true values like True or 10, but that's ok because the caller made a mistake and will get an exception from the statements within the if.
A:
I find manual type checking the most "unpythonic" (although it's bad in general too). There are two usual cases where this is abused. The first is when the logic of the function differs based on the type of the argument. For instance:
def doStuff (myVar):
if isinstance (myVar, str):
# do stuff
pass
elif isinstance (myVar, int):
# do other stuff
pass
The second is when the programmer tries to make a strongly typed function like you would expect to find in a statically typed language.
def doStuff (myVar):
if not isinstance (myVar, int):
raise TypeError ('myVar must be of type int')
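The usual pythonic alternative is to skip the checks and rely on duck typing, catching a failure only where a sensible fallback exists; a toy sketch:
def doStuff (myVar):
    # "easier to ask forgiveness than permission": try the int-like
    # behaviour first and fall back only if it does not apply
    try:
        return myVar + 5
    except TypeError:
        return myVar.upper()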
A:
Some unpythonic habits that one could inherit from C-like languages:
1) The unnecessary semicolon:
print "Hello world";
2) Unnecessary indexes:
# Bad
for i in range(len(myList)):
print myList[i]
# Good
for item in myList:
print item
A:
My bad habit was:
index = 0
for item in someCollection:
doSomething(index, item)
index += 1
Instead of:
for index, item in enumerate(someCollection):
doSomething(index, item)
Which is much cleaner :-)
Also - beware of simple
if someList:
This can hurt if someList becomes a positive integer, or a non-empty string, or whatever. The error can become hard to find, while
if len(someList):
would catch such an error immediately.
A:
The most unpythonic code I write is to mistakenly reinvent something which already exists in the library or has been overhauled (such as the DSU idiom) - noticing this, ripping it all out, and replacing it with (what is usually) a one-liner feels good.
A:
I have been doing lots of python for years, and I prefer the more verbose first example to the more "pythonic" example.
Maybe it's not the party line, but it is much clearer in intent, and will make much more sense to other developers who may or may not know all the little secrets of python.
ymmv
A:
import random
make_token = (lambda chars, length:"".join([chars[random.randint(0,len(chars)-1)] for c in range(0, length)]))
random_token = make_token("abcdefghijklmnopqrstuvwxyz0123456789", 20)
This made a random token of length 20 made up of the character set put in. I have since learned the error of my ways. ;)
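For comparison, a more idiomatic version of the same idea might look like this (random.choice does the index bookkeeping for you):
import random
import string

def make_token(chars=string.ascii_lowercase + string.digits, length=20):
    return "".join(random.choice(chars) for _ in xrange(length))

random_token = make_token()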
| Your most unpythonic code snippet | I am an experienced developer new to python, and still catching myself writing correct but unpythonic code. I thought it might be enlightening and entertaining to see small examples of python code people have written that in some way clashes with the generally preferred way of doing things.
I'm interested in code you actually wrote rather than invented examples. Here is one of mine: In code that was expecting a sequence that could be empty or None I had
if data is not None and len(data) > 0:
I later reduced that to
if data:
The simpler version allows additional true values like True or 10, but that's ok because the caller made a mistake and will get an exception from the statements within the if.
| [
"I find manual type checking the most \"unpythonic\" (although bad in general too). There's two usual cases this is abused. The first is when the logic of the function differs based on the type of the argument. For instance:\ndef doStuff (myVar):\n if isinstance (myVar, str):\n # do stuff\n pass\n elif isinstance (myVar, int):\n # do other stuff\n pass\n\nThe second is when the programmer tries to make a strongly typed function like you would expect to find in a statically typed language.\ndef doStuff (myVar):\n if not isinstance (myVar, int):\n raise TypeError ('myVar must be of type int')\n\n",
"Some unpythonic habits that one could inherit from C-like languages:\n1) The unnecessary semicolon:\nprintf \"Hello world\";\n\n2) Unnecessary indexes:\n# Bad\nfor i in range(len(myList)):\n print myList[i]\n\n# Good\nfor item in myList:\n print item\n\n",
"My bad habit was:\nindex = 0\nfor item in someCollection:\n doSomething(index, item)\n index += 1\n\nInstead of: \nfor index, item in enumerate(someCollection): \n doSomething(index, item)\n\nWhich is much cleaner :-)\nAlso - beware of simple \nif someList:\n\nThis can hurt if someList will become positive integer, or non-empty string or whatever. The error can become hard to find, while \nif len(someList): \n\nWould catch such error immediately.\n",
"The most unpythonic code I write is to mistakingly reinvent something which already exists in the library or has been overhauled (such as the DSU idiom) - noticing this, ripping it all out, and replacing it with (what is usually) a one-liner feels good.\n",
"I have been doing lots of python for years, and I prefer the more verbose first example to the more \"pythonic\" example.\nMaybe it's not the party line, but it is much clearer in intent, and will make much more sense to other developers who may or may not know all the little secrets of python.\nymmv\n",
"import random\nmake_token = (lambda chars, length:\"\".join([chars[random.randint(0,len(chars)-1)] for c in range(0, length)]))\nrandom_token = make_token(\"abcdefghijklmnopqrstuvwxyz0123456789\", 20)\n\nThis made a random token of length 20 made up of the character set put in. I have since learned the error of my ways. ;)\n"
] | [
7,
5,
4,
2,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0000453967_python.txt |
Q:
Find an Image within an Image
I am looking for the best way to detect an image within another image. I have a small image and would like to find the location that it appears within a larger image - which will actually be screen captures. Conceptually, it is like a 'Where's Waldo?' sort of search in the larger image.
Are there any efficient/quick ways to accomplish this? Speed is more important than memory.
Edit:
The 'inner' image may not always have the same scale but will have the same rotation.
It is not safe to assume that the image will be perfectly contained within the other, pixel for pixel.
A:
Wikipedia has an article on Template Matching, with sample code.
(While that page doesn't handle changed scales, it has links to other styles of matching, for example Scale invariant feature transform)
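For the simple case the question's edit rules out - same scale, exact pixel values - a naive brute-force sketch with PIL shows the sliding-window idea behind template matching; it is slow, and real template matching scores similarity instead of requiring equality:
from PIL import Image

def find_subimage(big_path, small_path):
    big = Image.open(big_path).convert('RGB')
    small = Image.open(small_path).convert('RGB')
    bw, bh = big.size
    sw, sh = small.size
    first = small.getpixel((0, 0))
    small_data = list(small.getdata())
    for y in range(bh - sh + 1):
        for x in range(bw - sw + 1):
            if big.getpixel((x, y)) != first:
                continue  # cheap reject on the first pixel
            if list(big.crop((x, y, x + sw, y + sh)).getdata()) == small_data:
                return x, y  # top-left corner of the match
    return None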
A:
If rotation also had to be catered for, the Generalised Hough Transform can be used.
A:
You can treat this as a substring problem, where characters in the alphabet are pixels and your string is the image. You would need also to use a special character in a similar vein to a linebreak, to denote the image boundary.
The algorithm you want is on wikipedia: http://en.wikipedia.org/wiki/Knuth%E2%80%93Morris%E2%80%93Pratt_algorithm
Update: If you cannot assume that the image is perfectly contained within the other, pixel for pixel, then this approach will not work.
There are other, more complicated algorithms based on the same dynamic programming concept as the above, but I won't go into them unless it's necessary.
| Find an Image within an Image | I am looking for the best way to detect an image within another image. I have a small image and would like to find the location that it appears within a larger image - which will actually be screen captures. Conceptually, it is like a 'Where's Waldo?' sort of search in the larger image.
Are there any efficient/quick ways to accomplish this? Speed is more important than memory.
Edit:
The 'inner' image may not always have the same scale but will have the same rotation.
It is not safe to assume that the image will be perfectly contained within the other, pixel for pixel.
| [
"Wikipedia has an article on Template Matching, with sample code.\n(While that page doesn't handle changed scales, it has links to other styles of matching, for example Scale invariant feature transform)\n",
"If rotation also had to be catered for, the Generalised Hough Transform can be used.\n",
"You can treat this as a substring problem, where characters in the alphabet are pixels and your string is the image. You would need also to use a special character in a similar vein to a linebreak, to denote the image boundary.\nThe algorithm you want is on wikipedia: http://en.wikipedia.org/wiki/Knuth%E2%80%93Morris%E2%80%93Pratt_algorithm\nUpdate: If you cannot assume that the image is perfectly contained within the other, pixel for pixel, then this approach will not work. \nThere are other, more complicated algorithms based on the same dynamic programming concept as the above, but I won't go into them unless it's necessary.\n"
] | [
6,
1,
0
] | [] | [] | [
"image",
"java",
"python"
] | stackoverflow_0000454498_image_java_python.txt |
Q:
How can I make a list in Python like (0,6,12, .. 144)?
I am not sure whether I should use a for-loop. Perhaps, like
for i in range(145):
by 6: //mistake here?
print i
A:
for i in range(0,150,6):
print i
if you are stepping by a constant
A:
I would prefer:
for i in xrange(25): # from 0 to 24
print 6*i
You can easily build a list containing the same numbers with a similar construct named list comprehension:
numbers = [6*i for i in xrange(25)]
print numbers
If you already have a list of (unknown) numbers, say someNumbers, but you want to print only those which are multiples of 6:
for i in someNumbers:
if i%6 == 0:
print i
| How can I make a list in Python like (0,6,12, .. 144)? | I am not sure whether I should use a for-loop. Perhaps, like
for i in range(145):
by 6: //mistake here?
print i
| [
"for i in range(0,150,6):\n print i\n\nif you are stepping by a constant\n",
"I would prefer:\nfor i in xrange(25): # from 0 to 24\n print 6*i\n\nYou can easily build a list containing the same numbers with a similar construct named list comprehension:\nnumbers = [6*i for i in xrange(25)]\nprint numbers\n\nIf you already have a list of (unknown) numbers, say someNumbers, but you want to print only those which are multiples of 6:\nfor i in someNumbers:\n if i%6 == 0:\n print i\n\n"
] | [
20,
6
] | [
"reqlist = [i for i in range(0,150,6)]\n",
"i = 1\n\nwhile i * 6 < 144:\n i = i + 1\n print i * 6\n\nThere are plenty of ways to do this\n"
] | [
-2,
-5
] | [
"list",
"python"
] | stackoverflow_0000454566_list_python.txt |
Q:
What's the most efficient way to share large amounts of data between Python and C++
I'm writing a system that allows python scripts to be executed within a C++ application.
The python scripts are used to modify values within arrays of data (typically 2048x2048x4 arrays of floats)
I'm currently using numpy arrays created using the array API and registered with Python.
In the python scripts I'm accessing the arrays like
for x in range(0, Width):
    for y in range(0, Height):
        Array[x][y][0] = blah
        Array[x][y][1] = blah
        Array[x][y][2] = blah
        Array[x][y][3] = blah
This seems to be pretty slow. Is there a quicker way to set this up?
A:
You might want to have a look at Boost.Python. It focuses on making C++ code available in Python, but it also provides exec and eval functions that should allow you to efficiently interact with python code.
A:
I thought I would suggest numpy, but you're already using it. I'm afraid that leaves domain-specific changes to do less work. As John Montgomery mentions, you'll need to figure out what is taking the time in the Python code, and determine if you can avoid some of it.
Are there patterns to the work being done in Python? Perhaps you can provide domain-specific helper functions (written in C) that can be called from the Python code.
A:
You should probably see if you can set several values in the array in one step rather than in four (or more) steps.
I believe the ellipsis syntax may be useful here:
How do you use the ellipsis slicing syntax in Python?
Or else something like:
Array[x,y,:]=blah
or
Array[x,y,:]=blah1,blah2,blah3,blah4
if you have different values for each element.
That's assuming that the bottle-neck in your code is due to the number of assignments you are doing. It's probably worth doing some profiling to see exactly what's being slow. Try the same code, but without actually storing the results to see if it's much faster. If not then it's probably not the assignment that's slow...
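Along the same lines, if the bottleneck is the loop itself, pushing the whole assignment into numpy removes the per-pixel Python overhead entirely; a sketch with the array shape taken from the question:
import numpy

Width, Height = 2048, 2048
Array = numpy.zeros((Width, Height, 4), dtype=numpy.float32)

# one vectorised assignment per channel instead of
# Width * Height individual Python-level assignments
Array[:, :, 0] = 1.0
Array[:, :, 1] = 0.5
Array[:, :, 2] = 0.25
Array[:, :, 3] = 0.0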
| What's the most efficient way to share large amounts of data between Python and C++ | I'm writing a system that allows python scripts to be executed within a C++ application.
The python scripts are used to modify values within arrays of data (typically 2048x2048x4 arrays of floats)
I'm currently using numpy arrays created using the array API and registered with Python.
In the python scripts I'm accessing the arrays like
for x in range(0, Width):
    for y in range(0, Height):
        Array[x][y][0] = blah
        Array[x][y][1] = blah
        Array[x][y][2] = blah
        Array[x][y][3] = blah
This seems to be pretty slow. Is there a quicker way to set this up?
| [
"You might want to have a look at Boost.Python. It focuses on making C++ code available in Python, but it also provides exec and eval functions that should allow you to efficiently interact with python code.\n",
"I thought I would suggest numpy, but you're already using it. I'm afraid that leaves domain-specific changes to do less work. As John Montgomery mentions, you'll need to figure out what is taking the time in the Python code, and determine if you can avoid some of it.\nAre there patterns to the work being done in Python? Perhaps you can provide domain-specific helper functions (written in C) that can be called from the Python code. \n",
"You should probably see if you can set several values in the array in one step rather than in four (or more) steps.\nI believe the ellipsis syntax may be useful here:\nHow do you use the ellipsis slicing syntax in Python?\nOr else something like:\nArray[x,y,:]=blah\nor\nArray[x,y,:]=blah1,blah2,blah3,blah4\nif you have different values for each element.\nThat's assuming that the bottle-neck in your code is due to the number of assignments you are doing. It's probably worth doing some profiling to see exactly what's being slow. Try the same code, but without actually storing the results to see if it's much faster. If not then it's probably not the assignment that's slow...\n"
] | [
1,
1,
0
] | [] | [] | [
"c++",
"optimization",
"python"
] | stackoverflow_0000454931_c++_optimization_python.txt |
Q:
Efficient Image Thumbnail Control for Python?
What is the best choice for a Python GUI application to display a large number of thumbnails, e.g. 10000 or more? For performance reasons such a thumbnail control must support virtual items, i.e. request from the application only those thumbnails which are currently visible to the user.
A:
In wxPython you can use wxGrid for this as it supports virtual mode and custom cell renderers.
This is the minimal interface you have to implement for a wxGrid "data provider":
import wx.grid

class GridData(wx.grid.PyGridTableBase):
def GetColLabelValue(self, col):
pass
def GetNumberRows(self):
pass
def GetNumberCols(self):
pass
def IsEmptyCell(self, row, col):
pass
def GetValue(self, row, col):
pass
This is the minimal interface you have to implement for a wxGrid cell renderer:
class CellRenderer(wx.grid.PyGridCellRenderer):
def Draw(self, grid, attr, dc, rect, row, col, isSelected):
pass
You can find a working example that utilizes these classes in wxPython docs and demos, it's called Grid_MegaExample.
A:
If you had to resort to writing your own, I've had good results using the Python Imaging Library to create thumbnails in the past.
http://www.pythonware.com/products/pil/
A:
Just for completeness: there is a thumbnailCtrl written in/for wxPython, which might be a good starting point.
| Efficient Image Thumbnail Control for Python? | What is the best choice for a Python GUI application to display a large number of thumbnails, e.g. 10000 or more? For performance reasons such a thumbnail control must support virtual items, i.e. request from the application only those thumbnails which are currently visible to the user.
| [
"In wxPython you can use wxGrid for this as it supports virtual mode and custom cell renderers.\nThis is the minimal interface you have to implement for a wxGrid \"data provider\":\nclass GridData(wx.grid.PyGridTableBase):\n def GetColLabelValue(self, col):\n pass\n\n def GetNumberRows(self):\n pass\n\n def GetNumberCols(self):\n pass\n\n def IsEmptyCell(self, row, col):\n pass\n\n def GetValue(self, row, col):\n pass\n\nThis is the minimal interface you have to implement for a wxGrid cell renderer:\nclass CellRenderer(wx.grid.PyGridCellRenderer):\n def Draw(self, grid, attr, dc, rect, row, col, isSelected):\n pass\n\nYou can find a working example that utilizes these classes in wxPython docs and demos, it's called Grid_MegaExample.\n",
"If you had to resort to writing your own, I've had good results using the Python Imaging Library to create thumbnails in the past.\nhttp://www.pythonware.com/products/pil/\n",
"Just for completeness: there is a thumbnailCtrl written in/for wxPython, which might be a good starting point.\n"
] | [
2,
1,
1
] | [] | [] | [
"image",
"pyqt",
"python",
"thumbnails",
"wxpython"
] | stackoverflow_0000215052_image_pyqt_python_thumbnails_wxpython.txt |
Q:
Python 3 development and distribution challenges
Suppose I've developed a general-purpose end user utility written in Python. Previously, I had just one version available which was suitable for Python later than version 2.3 or so. It was sufficient to say, "download Python if you need to, then run this script". There was just one version of the script in source control (I'm using Git) to keep track of.
With Python 3, this is no longer necessarily true. For the foreseeable future, I will need to simultaneously develop two different versions, one suitable for Python 2.x and one suitable for Python 3.x. From a development perspective, I can think of a few options:
Maintain two different scripts in the same branch, making improvements to both simultaneously.
Maintain two separate branches, and merge common changes back and forth as development proceeds.
Maintain just one version of the script, plus check in a patch file that converts the script from one version to the other. When enough changes have been made that the patch no longer applies cleanly, resolve the conflicts and create a new patch.
I am currently leaning toward option 3, as the first two would involve a lot of error-prone tedium. But option 3 seems messy and my source control system is supposed to be managing patches for me.
For distribution packaging, there are more options to choose from:
Offer two different download packages, one suitable for Python 2 and one suitable for Python 3 (the user will have to know to download the correct one for whatever version of Python they have).
Offer one download package, with two different scripts inside (and then the user has to know to run the correct one).
One download package with two version-specific scripts, and a small stub loader that can run in both Python versions, that runs the correct script for the Python version installed.
Again I am currently leaning toward option 3 here, although I haven't tried to develop such a stub loader yet.
Any other ideas?
A:
Edit: my original answer was based on the state of 2009, with Python 2.6 and 3.0 as the current versions. Now, with Python 2.7 and 3.3, there are other options. In particular, it is now quite feasible to use a single code base for Python 2 and Python 3.
See Porting Python 2 Code to Python 3
Original answer:
The official recommendation says:
For porting existing Python 2.5 or 2.6
source code to Python 3.0, the best
strategy is the following:
(Prerequisite:) Start with excellent test coverage.
Port to Python 2.6. This should be no more work than the average port
from Python 2.x to Python 2.(x+1).
Make sure all your tests pass.
(Still using 2.6:) Turn on the -3 command line switch. This enables
warnings about features that will be
removed (or change) in 3.0. Run your
test suite again, and fix code that
you get warnings about until there are
no warnings left, and all your tests
still pass.
Run the 2to3 source-to-source translator over your source code tree.
(See 2to3 - Automated Python 2 to 3
code translation for more on this
tool.) Run the result of the
translation under Python 3.0. Manually
fix up any remaining issues, fixing
problems until all tests pass again.
It is not recommended to try to write
source code that runs unchanged under
both Python 2.6 and 3.0; you’d have to
use a very contorted coding style,
e.g. avoiding print statements,
metaclasses, and much more. If you are
maintaining a library that needs to
support both Python 2.6 and Python
3.0, the best approach is to modify step 3 above by editing the 2.6
version of the source code and running
the 2to3 translator again, rather than
editing the 3.0 version of the source
code.
Ideally, you would end up with a single version, that is 2.6 compatible and can be translated to 3.0 using 2to3. In practice, you might not be able to achieve this goal completely. So you might need some manual modifications to get it to work under 3.0.
I would maintain these modifications in a branch, like your option 2. However, rather than maintaining the final 3.0-compatible version in this branch, I would consider to apply the manual modifications before the 2to3 translations, and put this modified 2.6 code into your branch. The advantage of this method would be that the difference between this branch and the 2.6 trunk would be rather small, and would only consist of manual changes, not the changes made by 2to3. This way, the separate branches should be easier to maintain and merge, and you should be able to benefit from future improvements in 2to3.
Alternatively, take a bit of a "wait and see" approach. Proceed with your porting only so far as you can go with a single 2.6 version plus 2to3 translation, and postpone the remaining manual modification until you really need a 3.0 version. Maybe by this time, you don't need any manual tweaks anymore...
A:
For development, option 3 is too cumbersome. Maintaining two branches is the easiest way, although how you do that will vary between VCSes. Many DVCSes will be happier with separate repos (with a common ancestry to help merging), and a centralized VCS will probably be easier to work with using two branches. Option 1 is possible, but you may miss something to merge, and it is a bit more error-prone IMO.
For distribution, I'd use option 3 as well if possible. All 3 options are valid anyway and I have seen variations on these models from time to time.
A:
I don't think I'd take this path at all. It's painful whichever way you look at it. Really, unless there's strong commercial interest in keeping both versions simultaneously, this is more headache than gain.
I think it makes more sense to just keep developing for 2.x for now, at least for a few months, up to a year. At some point it will simply be time to declare a final, stable version for 2.x and develop the next ones for 3.x+
For example, I won't switch to 3.x until some of the major frameworks go that way: PyQt, matplotlib, numpy, and some others. And I don't really mind if at some point they stop 2.x support and just start developing for 3.x, because I'll know that in a short time I'll be able to switch to 3.x too.
A:
I would start by migrating to 2.6, which is very close to python 3.0. You might even want to wait for 2.7, which will be even closer to python 3.0.
And then, once you have migrated to 2.6 (or 2.7), I suggest you simply keep just one version of the script, with things like "if PY3K:... else:..." in the rare places where it will be mandatory. Of course it's not the kind of code we developers like to write, but then you don't have to worry about managing multiple scripts or branches or patches or distributions, which will be a nightmare.
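A minimal sketch of that version-switch pattern (the PY3K flag name is just a convention):
import sys
PY3K = sys.version_info[0] >= 3

if PY3K:
    text_type = str
else:
    text_type = unicode  # only evaluated under 2.x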
Whatever you choose, make sure you have thorough tests with 100% code coverage.
Good luck!
A:
Whichever option for development is chosen, most potential issues could be alleviated with thorough unit testing to ensure that the two versions produce matching output. That said, option 2 seems most natural to me: applying changes from one source tree to another source tree is a task (most) version control systems were designed for--why not take advantage of the tools they provide to ease this.
For distribution, it is difficult to say without 'knowing your audience'. Power Python users would probably appreciate not having to download two copies of your software, yet for a more general user base it should probably 'just work'.
| Python 3 development and distribution challenges | Suppose I've developed a general-purpose end user utility written in Python. Previously, I had just one version available which was suitable for Python later than version 2.3 or so. It was sufficient to say, "download Python if you need to, then run this script". There was just one version of the script in source control (I'm using Git) to keep track of.
With Python 3, this is no longer necessarily true. For the foreseeable future, I will need to simultaneously develop two different versions, one suitable for Python 2.x and one suitable for Python 3.x. From a development perspective, I can think of a few options:
Maintain two different scripts in the same branch, making improvements to both simultaneously.
Maintain two separate branches, and merge common changes back and forth as development proceeds.
Maintain just one version of the script, plus check in a patch file that converts the script from one version to the other. When enough changes have been made that the patch no longer applies cleanly, resolve the conflicts and create a new patch.
I am currently leaning toward option 3, as the first two would involve a lot of error-prone tedium. But option 3 seems messy and my source control system is supposed to be managing patches for me.
For distribution packaging, there are more options to choose from:
Offer two different download packages, one suitable for Python 2 and one suitable for Python 3 (the user will have to know to download the correct one for whatever version of Python they have).
Offer one download package, with two different scripts inside (and then the user has to know to run the correct one).
One download package with two version-specific scripts, and a small stub loader that can run in both Python versions, that runs the correct script for the Python version installed.
Again I am currently leaning toward option 3 here, although I haven't tried to develop such a stub loader yet.
Any other ideas?
| [
"Edit: my original answer was based on the state of 2009, with Python 2.6 and 3.0 as the current versions. Now, with Python 2.7 and 3.3, there are other options. In particular, it is now quite feasible to use a single code base for Python 2 and Python 3.\nSee Porting Python 2 Code to Python 3\nOriginal answer:\nThe official recommendation says:\n\nFor porting existing Python 2.5 or 2.6\n source code to Python 3.0, the best\n strategy is the following:\n\n(Prerequisite:) Start with excellent test coverage.\nPort to Python 2.6. This should be no more work than the average port\n from Python 2.x to Python 2.(x+1).\n Make sure all your tests pass.\n(Still using 2.6:) Turn on the -3 command line switch. This enables\n warnings about features that will be\n removed (or change) in 3.0. Run your\n test suite again, and fix code that\n you get warnings about until there are\n no warnings left, and all your tests\n still pass.\nRun the 2to3 source-to-source translator over your source code tree.\n (See 2to3 - Automated Python 2 to 3\n code translation for more on this\n tool.) Run the result of the\n translation under Python 3.0. Manually\n fix up any remaining issues, fixing\n problems until all tests pass again.\n\nIt is not recommended to try to write\n source code that runs unchanged under\n both Python 2.6 and 3.0; you’d have to\n use a very contorted coding style,\n e.g. avoiding print statements,\n metaclasses, and much more. If you are\n maintaining a library that needs to\n support both Python 2.6 and Python\n 3.0, the best approach is to modify step 3 above by editing the 2.6\n version of the source code and running\n the 2to3 translator again, rather than\n editing the 3.0 version of the source\n code.\n\nIdeally, you would end up with a single version, that is 2.6 compatible and can be translated to 3.0 using 2to3. In practice, you might not be able to achieve this goal completely. So you might need some manual modifications to get it to work under 3.0.\nI would maintain these modifications in a branch, like your option 2. However, rather than maintaining the final 3.0-compatible version in this branch, I would consider to apply the manual modifications before the 2to3 translations, and put this modified 2.6 code into your branch. The advantage of this method would be that the difference between this branch and the 2.6 trunk would be rather small, and would only consist of manual changes, not the changes made by 2to3. This way, the separate branches should be easier to maintain and merge, and you should be able to benefit from future improvements in 2to3.\nAlternatively, take a bit of a \"wait and see\" approach. Proceed with your porting only so far as you can go with a single 2.6 version plus 2to3 translation, and postpone the remaining manual modification until you really need a 3.0 version. Maybe by this time, you don't need any manual tweaks anymore...\n",
"For developement, option 3 is too cumbersome. Maintaining two branches is the easiest way although the way to do that will vary between VCSes. Many DVCS will be happier with separate repos (with a common ancestry to help merging) and centralized VCS will probably easier to work with with two branches. Option 1 is possible but you may miss something to merge and a bit more error-prone IMO.\nFor distribution, I'd use option 3 as well if possible. All 3 options are valid anyway and I have seen variations on these models from times to times.\n",
"I don't think I'd take this path at all. It's painful whichever way you look at it. Really, unless there's strong commercial interest in keeping both versions simultaneously, this is more headache than gain. \nI think it makes more sense to just keep developing for 2.x for now, at least for a few months, up to a year. At some point in time it will be just time to declare on a final, stable version for 2.x and develop the next ones for 3.x+ \nFor example, I won't switch to 3.x until some of the major frameworks go that way: PyQt, matplotlib, numpy, and some others. And I don't really mind if at some point they stop 2.x support and just start developing for 3.x, because I'll know that in a short time I'll be able to switch to 3.x too.\n",
"I would start by migrating to 2.6, which is very close to python 3.0. You might even want to wait for 2.7, which will be even closer to python 3.0.\nAnd then, once you have migrated to 2.6 (or 2.7), I suggest you simply keep just one version of the script, with things like \"if PY3K:... else:...\" in the rare places where it will be mandatory. Of course it's not the kind of code we developers like to write, but then you don't have to worry about managing multiple scripts or branches or patches or distributions, which will be a nightmare.\nWhatever you choose, make sure you have thorough tests with 100% code coverage.\nGood luck!\n",
"Whichever option for development is chosen, most potential issues could be alleviated with thorough unit testing to ensure that the two versions produce matching output. That said, option 2 seems most natural to me: applying changes from one source tree to another source tree is a task (most) version control systems were designed for--why not take advantages of the tools they provide to ease this.\nFor development, it is difficult to say without 'knowing your audience'. Power Python users would probably appreciate not having to download two copies of your software yet for a more general user-base it should probably 'just work'.\n"
] | [
9,
2,
2,
1,
0
] | [] | [] | [
"python",
"python_3.x",
"version_control"
] | stackoverflow_0000455717_python_python_3.x_version_control.txt |
Q:
How do you query the set of Users in Google App Domain within your Google App Engine project?
If you have a Google App Engine project you can authenticate based on either a) anyone with a google account or b) a particular google app domain. Since you can connect these two entities I would assume there is some way to query the list of users that can be authenticated. The use case is outputting a roster of all members in an organization to a web page running on Google App Engine. Any thoughts?
A:
Querying all users that could possibly authenticate in the case of 'a' (all gmail users) would be millions and millions users, so I'm sure you don't expect to do that.
I'm sure you actually mean query the ones who have logged into your application previously, in which case you just create a table to store their user information, and populate that whenever an authenticated user is on your site.
You can read more in the Google App Engine Docs under Using User Values With the Datastore
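A minimal sketch of that approach (the model and property names are illustrative, not part of the App Engine API):
from google.appengine.api import users
from google.appengine.ext import db

class Member(db.Model):
    user = db.UserProperty(required=True)

def record_current_user():
    # call this from any authenticated request handler
    u = users.get_current_user()
    if u and not Member.all().filter('user =', u).get():
        Member(user=u).put()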
A:
There's nothing built in to App Engine to do this. If you have Apps Premium edition, however, you can use the reporting API.
A:
You would have to use the Premium (or Education) Google apps version, and you can use the api to list all users in the apps domain:
GET https://apps-apis.google.com/a/feeds/domain/user/2.0
see docs here:
http://code.google.com/apis/apps/gdata_provisioning_api_v2.0_reference.html
A:
Yeah, there's no way to get information about people who haven't logged into your application.
| How do you query the set of Users in Google App Domain within your Google App Engine project? | If you have a Google App Engine project you can authenticate based on either a) anyone with a google account or b) a particular google app domain. Since you can connect these two entities I would assume there is some way to query the list of users that can be authenticated. The use case is outputting a roster of all members in an organization to a web page running on Google App Engine. Any thoughts?
| [
"Querying all users that could possibly authenticate in the case of 'a' (all gmail users) would be millions and millions users, so I'm sure you don't expect to do that. \nI'm sure you actually mean query the ones who have logged into your application previously, in which case you just create a table to store their user information, and populate that whenever an authenticated user is on your site.\nYou can read more in the Google App Engine Docs under Using User Values With the Datastore\n",
"There's nothing built in to App Engine to do this. If you have Apps Premium edition, however, you can use the reporting API.\n",
"You would have to use the Premium (or Education) Google apps version, and you can use the api to list all users in the apps domain: \nGET https://apps-apis.google.com/a/feeds/domain/user/2.0\nsee docs here:\nhttp://code.google.com/apis/apps/gdata_provisioning_api_v2.0_reference.html\n",
"Yeah, there's no way to get information about people who haven't logged into your application.\n"
] | [
3,
1,
1,
0
] | [] | [] | [
"google_app_engine",
"google_apps",
"gql",
"gqlquery",
"python"
] | stackoverflow_0000419197_google_app_engine_google_apps_gql_gqlquery_python.txt |
Q:
Does Django development provide a truly flexible 3 layer architecture?
A few weeks ago I asked the question "Is a PHP, Python, PostgreSQL design suitable for a non-web business application?" Is a PHP, Python, PostgreSQL design suitable for a business application?
A lot of the answers recommended skipping the PHP piece and using Django to build the application. As I've explored Django, I've started to question one specific aspect of my goals and how Django comes into play for a non-web business application.
Based on my understanding, Django would manage both the view and controller pieces and PostgreSQL or MySQL would handle the data. But my goal was to clearly separate the layers so that the database, domain logic, and presentation could each be changed without significantly affecting the others. It seems like I'm only separating the M from the VC layers with the Django solution.
So, is it counterproductive for me to build the domain layer in Python with an SQL Alchemy/Elixir ORM tool, PostgreSQL for the database layer, and then still use Django or PHP for the presentation layer? Is this possible or pure insanity?
Basically, I'd be looking at an architecture of Django/PHP > Python/SQLAlchemy > PostgreSQL/MySQL.
Edit: Before the fanboys get mad at me for asking a question about Django, just realize: It's a question, not an accusation. If I knew the answer or had my own opinion, I wouldn't have asked!
A:
You seem to be saying that choosing Django would prevent you from using a more heterogeneous solution later. This isn't the case. Django provides a number of interesting connections between the layers, and using Django for all the layers lets you take advantage of those connections. For example, using the Django ORM means that you get the great Django admin app almost for free.
You can choose to use a different ORM within Django, you just won't get the admin app (or generic views, for example) along with it. So a different ORM takes you a step backward from full Django top-to-bottom, but it isn't a step backward from other heterogeneous solutions, because those solutions didn't give you intra-layer goodness like the admin app in the first place.
Django shouldn't be criticized for not providing a flexible architecture: it's as flexible as any other solution, you just forgo some of the Django benefits if you choose to swap out a layer.
If you choose to start with Django, you can use the Django ORM now, and then later, if you need to switch, you can change over to SQLalchemy. That will be no more difficult than starting with SQLalchemy now and later moving to some other ORM solution.
You haven't said why you anticipate needing to swap out layers. It will be a painful process no matter what, because there is necessarily much code that relies on the behavior of whichever toolset and library you're currently using.
A:
Django will happily let you use whatever libraries you want for whatever you want to use them for -- you want a different ORM, use it, you want a different template engine, use it, and so on -- but is designed to provide a common default stack used by many interoperable applications. In other words, if you swap out an ORM or a template system, you'll lose compatibility with a lot of applications, but the ability to take advantage of a large base of applications typically outweighs this.
In broader terms, however, I'd advise you to spend a bit more time reading up on architectural patterns for web applications, since you seem to have some major conceptual confusion going on. One might just as easily say that, for example, Rails doesn't have a "view" layer since you could use different file systems as the storage location for the view code (in other words: being able to change where and how the data is stored by your model layer doesn't mean you don't have a model layer).
(and it goes without saying that it's also important to know why "strict" or "pure" MVC is an absolutely horrid fit for web applications; MVC in its pure form is useful for applications with many independent ways to initiate interaction, like a word processor with lots of toolbars and input panes, but its benefits quickly start to disappear when you move to the web and have only one way -- an HTTP request -- to interact with the application. This is why there are no "true" MVC web frameworks; they all borrow certain ideas about separation of concerns, but none of them implement the pattern strictly)
A:
You seem to be confusing "separate layers" with "separate languages/technologies." There is no reason you can't separate your concerns appropriately within a single programming language, or within an appropriately modular framework (such as Django). Needlessly multiplying programming languages / frameworks is just needlessly multiplying complexity, which is likely to slow down your initial efforts so much that your project will never reach the point where it needs a technology switch.
A:
You've effectively got a 3 layer architecture whether you use Django's ORM or SQLAlchemy, though you forgo some of Django's benefits if you choose the latter.
A:
Based on my understanding, Django would manage both the view and controller pieces and PostgreSQL or MySQL would handle the data.
Not really, Django has its own ORM, so it does separate data from view/controller.
here's an entry from the official FAQ about MVC:
Where does the “controller” fit in, then? In Django’s case, it’s probably the framework itself: the machinery that sends a request to the appropriate view, according to the Django URL configuration.
If you’re hungry for acronyms, you might say that Django is a “MTV” framework – that is, “model”, “template”, and “view.” That breakdown makes much more sense.
At the end of the day, of course, it comes down to getting stuff done. And, regardless of how things are named, Django gets stuff done in a way that’s most logical to us.
| Does Django development provide a truly flexible 3 layer architecture? | A few weeks ago I asked the question "Is a PHP, Python, PostgreSQL design suitable for a non-web business application?" Is a PHP, Python, PostgreSQL design suitable for a business application?
A lot of the answers recommended skipping the PHP piece and using Django to build the application. As I've explored Django, I've started to question one specific aspect of my goals and how Django comes into play for a non-web business application.
Based on my understanding, Django would manage both the view and controller pieces and PostgreSQL or MySQL would handle the data. But my goal was to clearly separate the layers so that the database, domain logic, and presentation could each be changed without significantly affecting the others. It seems like I'm only separating the M from the VC layers with the Django solution.
So, is it counterproductive for me to build the domain layer in Python with an SQL Alchemy/Elixir ORM tool, PostgreSQL for the database layer, and then still use Django or PHP for the presentation layer? Is this possible or pure insanity?
Basically, I'd be looking at an architecture of Django/PHP > Python/SQLAlchemy > PostgreSQL/MySQL.
Edit: Before the fanboys get mad at me for asking a question about Django, just realize: It's a question, not an accusation. If I knew the answer or had my own opinion, I wouldn't have asked!
| [
"You seem to be saying that choosing Django would prevent you from using a more heterogenous solution later. This isn't the case. Django provides a number of interesting connections between the layers, and using Django for all the layers lets you take advantage of those connections. For example, using the Django ORM means that you get the great Django admin app almost for free.\nYou can choose to use a different ORM within Django, you just won't get the admin app (or generic views, for example) along with it. So a different ORM takes you a step backward from full Django top-to-bottom, but it isn't a step backward from other heterogenous solutions, because those solutions didn't give you intra-layer goodness the admin app in the first place.\nDjango shouldn't be criticized for not providing a flexible architecture: it's as flexible as any other solution, you just forgo some of the Django benefits if you choose to swap out a layer.\nIf you choose to start with Django, you can use the Django ORM now, and then later, if you need to switch, you can change over to SQLalchemy. That will be no more difficult than starting with SQLalchemy now and later moving to some other ORM solution.\nYou haven't said why you anticipate needing to swap out layers. It will be a painful process no matter what, because there is necessarily much code that relies on the behavior of whichever toolset and library you're currently using.\n",
"Django will happily let you use whatever libraries you want for whatever you want to use them for -- you want a different ORM, use it, you want a different template engine, use it, and so on -- but is designed to provide a common default stack used by many interoperable applications. In other words, if you swap out an ORM or a template system, you'll lose compatibility with a lot of applications, but the ability to take advantage of a large base of applications typically outweighs this.\nIn broader terms, however, I'd advise you to spend a bit more time reading up on architectural patterns for web applications, since you seem to have some major conceptual confusion going on. One might just as easily say that, for example, Rails doesn't have a \"view\" layer since you could use different file systems as the storage location for the view code (in other words: being able to change where and how the data is stored by your model layer doesn't mean you don't have a model layer).\n(and it goes without saying that it's also important to know why \"strict\" or \"pure\" MVC is an absolutely horrid fit for web applications; MVC in its pure form is useful for applications with many independent ways to initiate interaction, like a word processor with lots of toolbars and input panes, but its benefits quickly start to disappear when you move to the web and have only one way -- an HTTP request -- to interact with the application. This is why there are no \"true\" MVC web frameworks; they all borrow certain ideas about separation of concerns, but none of them implement the pattern strictly)\n",
"You seem to be confusing \"separate layers\" with \"separate languages/technologies.\" There is no reason you can't separate your concerns appropriately within a single programming language, or within an appropriately modular framework (such as Django). Needlessly multiplying programming languages / frameworks is just needlessly multiplying complexity, which is likely to slow down your initial efforts so much that your project will never reach the point where it needs a technology switch.\n",
"You've effectively got a 3 layer architecture whether you use Django's ORM or SQLAlchemy, though your forgo some of the Django's benefits if you choose the latter.\n",
"\nBased on my understanding, Django would manage both the view and controller pieces and PostgreSQL or MySQL would handle the data.\n\nNot really, Django has its own ORM, so it does separate data from view/controller.\nhere's an entry from the official FAQ about MVC:\n\nWhere does the “controller” fit in, then? In Django’s case, it’s probably the framework itself: the machinery that sends a request to the appropriate view, according to the Django URL configuration.\nIf you’re hungry for acronyms, you might say that Django is a “MTV” framework – that is, “model”, “template”, and “view.” That breakdown makes much more sense.\nAt the end of the day, of course, it comes down to getting stuff done. And, regardless of how things are named, Django gets stuff done in a way that’s most logical to us.\n\n"
] | [
9,
7,
4,
3,
3
] | [
"There's change and there's change. Django utterly seperates domain model, business rules and presentation. You can change (almost) anything with near zero breakage. And by change I am talking about meaningful end-user focused business change.\nThe technology mix-n-match in this (and the previous) question isn't really very helpful. \nWho -- specifically -- benefits from replacing Django templates with PHP. It's not more productive. It's more complex for no benefit. Who benefits from changing ORM's. The users won't and can't tell. \n"
] | [
-1
] | [
"django",
"model_view_controller",
"orm",
"python"
] | stackoverflow_0000454443_django_model_view_controller_orm_python.txt |
Q:
Is there any advantage in using a Python class?
I have a Python class full of static methods. What are the advantages and disadvantages of packaging these in a class rather than raw functions?
A:
There are none. This is what modules are for: grouping related functions. Using a class full of static methods makes me cringe from Javaitis. The only time I would use a static function is if the function is an integral part of the class. (In fact, I'd probably want to use a class method anyway.)
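For reference, a minimal sketch of the class method variant I mean; unlike a static method, it receives the class itself, so it keeps working sensibly under subclassing:

class Shape(object):
    sides = 4

    @classmethod
    def describe(cls):
        # cls is whichever class the call went through
        return "%s has %d sides" % (cls.__name__, cls.sides)

class Triangle(Shape):
    sides = 3

print Shape.describe()     # Shape has 4 sides
print Triangle.describe()  # Triangle has 3 sides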
A:
No. It would be better to make them functions and if they are related, place them into their own module. For instance, if you have a class like this:
class Something(object):

    @staticmethod
    def foo(x):
        return x + 5

    @staticmethod
    def bar(x, y):
        return y + 5 * x
Then it would be better to have a module like,
# something.py

def foo(x):
    return x + 5

def bar(x, y):
    return y + 5 * x
That way, you use them in the following way:
import something
print something.foo(10)
print something.bar(12, 14)
Don't be afraid of namespaces. ;-)
A:
If your functions are dependent on each other or global state, consider also the third approach:
class Something(object):
    def foo(self, x):
        return x + 5

    def bar(self, x, y):
        return y + 5 * self.foo(x)

something = Something()
Using this solution you can test a function in isolation, because you can override behavior of another function or inject dependencies using constructor.
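For example, a test can stub out foo without touching bar (a small sketch, reusing the class above):

def test_bar():
    something = Something()
    something.foo = lambda x: 0   # the instance attribute shadows the method
    assert something.bar(1, 5) == 5   # only bar's own logic is exercised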
A:
Classes are only useful when you have a set of functionality that interacts with a set of data (instance properties) that needs to be persisted between function calls and referenced in a discrete fashion.
If your class contains nothing other than static methods, then your class is just syntactic cruft, and straight functions are much clearer and all that you need.
A:
Not only are there no advantages, but it makes things slower than using a module full of methods. There's much less need for static methods in python than there is for them in java or c#, they are used in very special cases.
A:
I agree with Benjamin. Rather than having a bunch of static methods, you should probably have a bunch of functions. And if you want to organize them, you should think about using modules rather than classes. However, if you want to refactor your code to be OO, that's another matter.
A:
Depends on the nature of the functions. If they're not strongly related (minimal number of calls between them) and they don't have any state then yes I'd say dump them into a module. However, you could be shooting yourself in the foot if you ever need to modify the behavior, as you're throwing inheritance out the window. So my answer is maybe; be sure you look at your particular scenario rather than always assuming a module is the best way to collect a set of methods.
| Is there any advantage in using a Python class? | I have a Python class full of static methods. What are the advantages and disadvantages of packaging these in a class rather than raw functions?
| [
"There are none. This is what modules are for: grouping related functions. Using a class full of static methods makes me cringe from Javaitis. The only time I would use a static function is if the function is an integral part of the class. (In fact, I'd probably want to use a class method anyway.)\n",
"No. It would be better to make them functions and if they are related, place them into their own module. For instance, if you have a class like this:\nclass Something(object):\n\n @staticmethod\n def foo(x):\n return x + 5\n\n @staticmethod\n def bar(x, y):\n return y + 5 * x\n\nThen it would be better to have a module like,\n# something.py\n\ndef foo(x):\n return x + 5\n\ndef bar(x, y):\n return y + 5 * x\n\nThat way, you use them in the following way:\nimport something\nprint something.foo(10)\nprint something.bar(12, 14)\n\nDon't be afraid of namespaces. ;-)\n",
"If your functions are dependent on each other or global state, consider also the third approach:\nclass Something(object):\n def foo(self, x):\n return x + 5\n\n def bar(self, x, y):\n return y + 5 * self.foo(x)\n\nsomething = Something()\n\nUsing this solution you can test a function in isolation, because you can override behavior of another function or inject dependencies using constructor. \n",
"Classes are only useful when you have a set of functionality than interacts with a set of data (instance properties) that needs to be persisted between function calls and referenced in a discrete fashion.\nIf your class contains nothing other than static methods, then your class is just syntactic cruft, and straight functions are much clearer and all that you need.\n",
"Not only are there no advantages, but it makes things slower than using a module full of methods. There's much less need for static methods in python than there is for them in java or c#, they are used in very special cases.\n",
"I agree with Benjamin. Rather than having a bunch of static methods, you should probably have a bunch of functions. And if you want to organize them, you should think about using modules rather than classes. However, if you want to refactor your code to be OO, that's another matter.\n",
"Depends on the nature of the functions. If they're not strongly unrelated (minimal amount of calls between them) and they don't have any state then yes I'd say dump them into a module. However, you could be shooting yourself in the foot if you ever need to modify the behavior as you're throwing inheritance out the window. So my answer is maybe, and be sure you look at your particular scenario rather then always assuming a module is the best way to collect a set of methods.\n"
] | [
35,
17,
5,
2,
1,
0,
0
] | [] | [] | [
"class",
"python",
"static_methods"
] | stackoverflow_0000456001_class_python_static_methods.txt |
Q:
How to use cvxopt with DSDP?
I'm trying to use DSDP (semidefinite programming package) with cvxopt. I have both of them installed (matlab version for DSDP). I have Python 2.5.2.
When trying to use
dsp(..., solver='dsdp')
I get an error mentioning solvers.dsdp is not installed.
How do I make them work together?
A:
I would suggest asking this question on the cvxopt group.
| How to use cvxopt with DSDP? | I'm trying to use DSDP (semidefinite programming package) with cvxopt. I have both of them installed (matlab version for DSDP). I have Python 2.5.2.
When trying to use
dsp(..., solver='dsdp')
I get an error mentioning solvers.dsdp is not installed.
How do I make them work together?
| [
"I would suggest asking this question on the cvxopt group.\n"
] | [
0
] | [] | [] | [
"convex_optimization",
"python"
] | stackoverflow_0000456224_convex_optimization_python.txt |
Q:
Throttling with urllib2
Is it possible to easily cap the kbps when using urllib2?
If it is, any code examples or resources you could direct me to would be greatly appreciated.
A:
There is the urlretrieve(url, filename=None, reporthook=None, data=None) function in the urllib module.
If you implement the reporthook-function/object as either a token bucket, or a leaky bucket, you have your global rate-limit.
EDIT: Upon closer examination I see that it isn't as easy to do a global rate-limit with reporthook as I thought. reporthook is only given the downloaded amount and the total size, which on their own aren't enough information to use with the token-bucket. One way to get around it is to store the last downloaded amount in each rate-limiter, but use a global token-bucket.
EDIT 2: Combined both codes into one example.
"""Rate limiters with shared token bucket."""
import os
import sys
import threading
import time
import urllib
import urlparse
class TokenBucket(object):
"""An implementation of the token bucket algorithm.
source: http://code.activestate.com/recipes/511490/
>>> bucket = TokenBucket(80, 0.5)
>>> print bucket.consume(10)
True
>>> print bucket.consume(90)
False
"""
def __init__(self, tokens, fill_rate):
"""tokens is the total tokens in the bucket. fill_rate is the
rate in tokens/second that the bucket will be refilled."""
self.capacity = float(tokens)
self._tokens = float(tokens)
self.fill_rate = float(fill_rate)
self.timestamp = time.time()
self.lock = threading.RLock()
def consume(self, tokens):
"""Consume tokens from the bucket. Returns 0 if there were
sufficient tokens, otherwise the expected time until enough
tokens become available."""
self.lock.acquire()
tokens = max(tokens,self.tokens)
expected_time = (tokens - self.tokens) / self.fill_rate
if expected_time <= 0:
self._tokens -= tokens
self.lock.release()
return max(0,expected_time)
@property
def tokens(self):
self.lock.acquire()
if self._tokens < self.capacity:
now = time.time()
delta = self.fill_rate * (now - self.timestamp)
self._tokens = min(self.capacity, self._tokens + delta)
self.timestamp = now
value = self._tokens
self.lock.release()
return value
class RateLimit(object):
"""Rate limit a url fetch.
source: http://mail.python.org/pipermail/python-list/2008-January/472859.html
(but mostly rewritten)
"""
def __init__(self, bucket, filename):
self.bucket = bucket
self.last_update = 0
self.last_downloaded_kb = 0
self.filename = filename
self.avg_rate = None
def __call__(self, block_count, block_size, total_size):
total_kb = total_size / 1024.
downloaded_kb = (block_count * block_size) / 1024.
just_downloaded = downloaded_kb - self.last_downloaded_kb
self.last_downloaded_kb = downloaded_kb
predicted_size = block_size/1024.
wait_time = self.bucket.consume(predicted_size)
while wait_time > 0:
time.sleep(wait_time)
wait_time = self.bucket.consume(predicted_size)
now = time.time()
delta = now - self.last_update
if self.last_update != 0:
if delta > 0:
rate = just_downloaded / delta
if self.avg_rate is not None:
rate = 0.9 * self.avg_rate + 0.1 * rate
self.avg_rate = rate
else:
rate = self.avg_rate or 0.
print "%20s: %4.1f%%, %5.1f KiB/s, %.1f/%.1f KiB" % (
self.filename, 100. * downloaded_kb / total_kb,
rate, downloaded_kb, total_kb,
)
self.last_update = now
def main():
"""Fetch the contents of urls"""
if len(sys.argv) < 4:
print 'Syntax: %s rate url1 url2 ...' % sys.argv[0]
raise SystemExit(1)
rate_limit = float(sys.argv[1])
urls = sys.argv[2:]
bucket = TokenBucket(10*rate_limit, rate_limit)
print "rate limit = %.1f" % (rate_limit,)
threads = []
for url in urls:
path = urlparse.urlparse(url,'http')[2]
filename = os.path.basename(path)
print 'Downloading "%s" to "%s"...' % (url,filename)
rate_limiter = RateLimit(bucket, filename)
t = threading.Thread(
target=urllib.urlretrieve,
args=(url, filename, rate_limiter))
t.start()
threads.append(t)
for t in threads:
t.join()
print 'All downloads finished'
if __name__ == "__main__":
main()
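Example invocation, assuming you saved the script as throttle.py: two downloads share one 50 KiB/s budget, because both RateLimit instances consume from the same TokenBucket:

python throttle.py 50 http://example.com/a.zip http://example.com/b.zip

Each reporthook sleeps until the shared bucket has enough tokens for the next block, so the combined transfer rate stays near the limit.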
| Throttling with urllib2 | Is it possible to easily cap the kbps when using urllib2?
If it is, any code examples or resources you could direct me to would be greatly appreciated.
| [
"There is the urlretrieve(url, filename=None, reporthook=None, data=None) function in the urllib module.\nIf you implement the reporthook-function/object as either a token bucket, or a leaky bucket, you have your global rate-limit.\nEDIT: Upon closer examination I see that it isn't as easy to do global rate-limit with reporthook as I thought. reporthook is only given the downloaded amount and the total size, which on their own isn't enough to information to use with the token-bucket. One way to get around it is by storing the last downloaded amount in each rate-limiter, but use a global token-bucket.\n\nEDIT 2: Combined both codes into one example.\n\"\"\"Rate limiters with shared token bucket.\"\"\"\n\nimport os\nimport sys\nimport threading\nimport time\nimport urllib\nimport urlparse\n\nclass TokenBucket(object):\n \"\"\"An implementation of the token bucket algorithm.\n source: http://code.activestate.com/recipes/511490/\n\n >>> bucket = TokenBucket(80, 0.5)\n >>> print bucket.consume(10)\n True\n >>> print bucket.consume(90)\n False\n \"\"\"\n def __init__(self, tokens, fill_rate):\n \"\"\"tokens is the total tokens in the bucket. fill_rate is the\n rate in tokens/second that the bucket will be refilled.\"\"\"\n self.capacity = float(tokens)\n self._tokens = float(tokens)\n self.fill_rate = float(fill_rate)\n self.timestamp = time.time()\n self.lock = threading.RLock()\n\n def consume(self, tokens):\n \"\"\"Consume tokens from the bucket. Returns 0 if there were\n sufficient tokens, otherwise the expected time until enough\n tokens become available.\"\"\"\n self.lock.acquire()\n tokens = max(tokens,self.tokens)\n expected_time = (tokens - self.tokens) / self.fill_rate\n if expected_time <= 0:\n self._tokens -= tokens\n self.lock.release()\n return max(0,expected_time)\n\n @property\n def tokens(self):\n self.lock.acquire()\n if self._tokens < self.capacity:\n now = time.time()\n delta = self.fill_rate * (now - self.timestamp)\n self._tokens = min(self.capacity, self._tokens + delta)\n self.timestamp = now\n value = self._tokens\n self.lock.release()\n return value\n\nclass RateLimit(object):\n \"\"\"Rate limit a url fetch.\n source: http://mail.python.org/pipermail/python-list/2008-January/472859.html\n (but mostly rewritten)\n \"\"\"\n def __init__(self, bucket, filename):\n self.bucket = bucket\n self.last_update = 0\n self.last_downloaded_kb = 0\n\n self.filename = filename\n self.avg_rate = None\n\n def __call__(self, block_count, block_size, total_size):\n total_kb = total_size / 1024.\n\n downloaded_kb = (block_count * block_size) / 1024.\n just_downloaded = downloaded_kb - self.last_downloaded_kb\n self.last_downloaded_kb = downloaded_kb\n\n predicted_size = block_size/1024.\n\n wait_time = self.bucket.consume(predicted_size)\n while wait_time > 0:\n time.sleep(wait_time)\n wait_time = self.bucket.consume(predicted_size)\n\n now = time.time()\n delta = now - self.last_update\n if self.last_update != 0:\n if delta > 0:\n rate = just_downloaded / delta\n if self.avg_rate is not None:\n rate = 0.9 * self.avg_rate + 0.1 * rate\n self.avg_rate = rate\n else:\n rate = self.avg_rate or 0.\n print \"%20s: %4.1f%%, %5.1f KiB/s, %.1f/%.1f KiB\" % (\n self.filename, 100. * downloaded_kb / total_kb,\n rate, downloaded_kb, total_kb,\n )\n self.last_update = now\n\n\ndef main():\n \"\"\"Fetch the contents of urls\"\"\"\n if len(sys.argv) < 4:\n print 'Syntax: %s rate url1 url2 ...' 
% sys.argv[0]\n raise SystemExit(1)\n rate_limit = float(sys.argv[1])\n urls = sys.argv[2:]\n bucket = TokenBucket(10*rate_limit, rate_limit)\n\n print \"rate limit = %.1f\" % (rate_limit,)\n\n threads = []\n for url in urls:\n path = urlparse.urlparse(url,'http')[2]\n filename = os.path.basename(path)\n print 'Downloading \"%s\" to \"%s\"...' % (url,filename)\n rate_limiter = RateLimit(bucket, filename)\n t = threading.Thread(\n target=urllib.urlretrieve,\n args=(url, filename, rate_limiter))\n t.start()\n threads.append(t)\n\n for t in threads:\n t.join()\n\n print 'All downloads finished'\n\nif __name__ == \"__main__\":\n main()\n\n"
] | [
19
] | [] | [] | [
"bandwidth_throttling",
"python",
"urllib2"
] | stackoverflow_0000456649_bandwidth_throttling_python_urllib2.txt |
Q:
Evaluate my Python server structure
I'm building a game server in Python and I just wanted to get some input on the architecture of the server that I was thinking up.
So, as we all know, Python cannot scale across cores with a single process. Therefore, on a server with 4 cores, I would need to spawn 4 processes.
Here are the steps taken when a client wishes to connect to the server cluster:
The IP the client initially communicates with is the Gateway node. The gateway keeps track of how many clients are on each machine, and forwards the connection request to the machine with the lowest client count.
On each machine, there is one Manager process and X Server processes, where X is the number of cores on the processor (since Python cannot scale across cores, we need to spawn 4 processes to use 100% of a quad core processor)
The manager's job is to keep track of how many clients are on each process, as well as to restart the processes if any of them crash. When a connection request is sent from the gateway to a manager, the manager looks at its server processes on that machine (3 in the diagram) and forwards the request to whatever process has the least amount of clients.
The Server process is what actually does the communicating with the client.
Here is what a 3 machine cluster would look like. For the sake of the diagram, assume each node has 3 cores.
(server cluster diagram: http://img152.imageshack.us/img152/5412/serverlx2.jpg)
This also got me thinking - could I implement hot swapping this way? Since each process is controlled by the manager, when I want to swap in a new version of the server process I just let the manager know that it should not send any more connections to it, and then I will register the new version process with the old one. The old version is kept alive as long as clients are connected to it, then terminates when there are no more.
Phew. Let me know what you guys think.
A:
Sounds like you'll want to look at PyProcessing, now included in Python 2.6 and beyond as multiprocessing. It takes care of a lot of the machinery of dealing with multiple processes.
An alternative architectural model is to setup a work queue using something like beanstalkd and have each of the "servers" pull jobs from the queue. That way you can add servers as you wish, swap them out, etc, without having to worry about registering them with the manager (this is assuming the work you're spreading over the servers can be quantified as "jobs").
Finally, it may be worthwhile to build the whole thing on HTTP and take advantage of existing well known and highly scalable load distribution mechanisms, such as nginx. If you can make the communication HTTP based then you'll be able to use lots of off-the-shelf tools to handle most of what you describe.
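For illustration, a bare-bones worker pool along those lines with multiprocessing; handle_job stands in for whatever your packet processing is:

from multiprocessing import Process, Queue

def worker(inbox, outbox):
    # pull jobs until the shutdown sentinel (None) arrives
    for job in iter(inbox.get, None):
        outbox.put(handle_job(job))

inbox, outbox = Queue(), Queue()
workers = [Process(target=worker, args=(inbox, outbox)) for _ in range(4)]
for w in workers:
    w.start()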
| Evaluate my Python server structure | I'm building a game server in Python and I just wanted to get some input on the architecture of the server that I was thinking up.
So, as we all know, Python cannot scale across cores with a single process. Therefore, on a server with 4 cores, I would need to spawn 4 processes.
Here are the steps taken when a client wishes to connect to the server cluster:
The IP the client initially communicates with is the Gateway node. The gateway keeps track of how many clients are on each machine, and forwards the connection request to the machine with the lowest client count.
On each machine, there is one Manager process and X Server processes, where X is the number of cores on the processor (since Python cannot scale across cores, we need to spawn 4 processes to use 100% of a quad core processor)
The manager's job is to keep track of how many clients are on each process, as well as to restart the processes if any of them crash. When a connection request is sent from the gateway to a manager, the manager looks at its server processes on that machine (3 in the diagram) and forwards the request to whatever process has the least amount of clients.
The Server process is what actually does the communicating with the client.
Here is what a 3 machine cluster would look like. For the sake of the diagram, assume each node has 3 cores.
(server cluster diagram: http://img152.imageshack.us/img152/5412/serverlx2.jpg)
This also got me thinking - could I implement hot swapping this way? Since each process is controlled by the manager, when I want to swap in a new version of the server process I just let the manager know that it should not send any more connections to it, and then I will register the new version process with the old one. The old version is kept alive as long as clients are connected to it, then terminates when there are no more.
Phew. Let me know what you guys think.
| [
"Sounds like you'll want to look at PyProcessing, now included in Python 2.6 and beyond as multiprocessing. It takes care of a lot of the machinery of dealing with multiple processes.\nAn alternative architectural model is to setup a work queue using something like beanstalkd and have each of the \"servers\" pull jobs from the queue. That way you can add servers as you wish, swap them out, etc, without having to worry about registering them with the manager (this is assuming the work you're spreading over the servers can be quantified as \"jobs\").\nFinally, it may be worthwhile to build the whole thing on HTTP and take advantage of existing well known and highly scalable load distribution mechanisms, such as nginx. If you can make the communication HTTP based then you'll be able to use lots of off-the-shelf tools to handle most of what you describe.\n"
] | [
5
] | [] | [] | [
"python"
] | stackoverflow_0000456753_python.txt |
Q:
python web-services: returning a fault from the server using ZSI
I'm interested in writing a python client for a web-service, and for testing purposes it would be very interesting also to have a simple stub server. I'm using python 2.3, and ZSI 2.0.
My problem is that I do not manage to return an exception from the server.
If I raise an exception of the type used for the soap fault in the wsdl, I get the TypeError 'exceptions must be classes, instances, or strings (deprecated), not EmptyStringException_Def'. I thought this meant that the fault object was not a subclass of Exception, but modifying the generated code in this way did not help - and of course, not having to modify the generated code would be much better :)
If I return the fault object as part of the response, it is just ignored.
I couldn't find any documentation about faults handling in ZSI. Any hints?
Here's a sample code for a server of a very simple service with just one method, spellBackwards, which should return a soap fault if the input string is empty:
#!/usr/bin/env python
from ZSI.ServiceContainer import AsServer
from SpellBackwardsService_services_server import *
from SpellBackwardsService_services_types import *
class SpellBackwardsServiceImpl(SpellBackwardsService):
    def soap_spellBackwards(self, ps):
        response = SpellBackwardsService.soap_spellBackwards(self, ps)
        input = self.request._in
        if len(input) != 0:
            response._out = input[::-1]
        else:
            e = ns0.EmptyStringException_Def("fault")
            e._reason = "Empty input string"
            # The following just produces an empty return message:
            # response._fault = e
            # The following causes TypeError
            # raise e
        return response

AsServer(port=8666, services=[SpellBackwardsServiceImpl(),])
A:
I've found the answer in this ZSI Cookbook, by Chris Hoobs, linked at the bottom of the ZSI home page:
5.4 Exceptions
A thorny question is how to generate the faults at the server. With the ZSI v2.0 code as
it is provided, this is not possible.
I assume this to be correct since the paper is linked from the project home page.
This paper also suggests a workaround, which consists in patching the Fault.py file in the ZSI distribution.
I tested the workaround and it works as promised; patching the library is an acceptable solution for me since I need to generate a server for test purposes only (i.e. I'll not need to distribute the patched library).
A:
apologies for not being able to answer the question.
I battled with ZSI for a while.
I'm now using SUDS : https://fedorahosted.org/suds/wiki , and everything has become much simpler.
| python web-services: returning a fault from the server using ZSI | I'm interested in writing a python client for a web-service, and for testing purposes it would be very interesting also to have a simple stub server. I'm using python 2.3, and ZSI 2.0.
My problem is that I do not manage to return an exception from the server.
If I raise an exception of the type used for the soap fault in the wsdl, I get the TypeError 'exceptions must be classes, instances, or strings (deprecated), not EmptyStringException_Def'. I thought this meant that the fault object was not a subclass of Exception, but modifying the generated code in this way did not help - and of course, not having to modify the generated code would be much better :)
If I return the fault object as part of the response, it is just ignored.
I couldn't find any documentation about faults handling in ZSI. Any hints?
Here's a sample code for a server of a very simple service with just one method, spellBackwards, which should return a soap fault if the input string is empty:
#!/usr/bin/env python
from ZSI.ServiceContainer import AsServer
from SpellBackwardsService_services_server import *
from SpellBackwardsService_services_types import *
class SpellBackwardsServiceImpl(SpellBackwardsService):
    def soap_spellBackwards(self, ps):
        response = SpellBackwardsService.soap_spellBackwards(self, ps)
        input = self.request._in
        if len(input) != 0:
            response._out = input[::-1]
        else:
            e = ns0.EmptyStringException_Def("fault")
            e._reason = "Empty input string"
            # The following just produces an empty return message:
            # response._fault = e
            # The following causes TypeError
            # raise e
        return response

AsServer(port=8666, services=[SpellBackwardsServiceImpl(),])
| [
"I've found the answer in this ZSI Cookbook, by Chris Hoobs, linked at the bottom of the ZSI home page: \n\n5.4 Exceptions\n A thorny question is how to generate the faults at the server. With the ZSI v2.0 code as\n it is provided, this is not possible.\n\nI assume this to be correct since the paper is linked from the project home page.\nThis paper also suggests a workaround, which consists in patching the Fault.py file in the ZSI distribution.\nI tested the workaround and it works as promised; patching the library is as acceptable solution for me since I need to generate a server for test purposes only (i.e. I'll not need to distribute the patched library).\n",
"apologies for not being able to answer the question.\nI battled with ZSI for a while.\nI'm now using SUDS : https://fedorahosted.org/suds/wiki , and everything has become much simpler.\n"
] | [
1,
0
] | [] | [] | [
"fault",
"python",
"web_services",
"zsi"
] | stackoverflow_0000446912_fault_python_web_services_zsi.txt |
Q:
How to handle the error that occurs on giving wrong number of parameters in a function call in Python?
When I give the wrong number of parameters in a function call, I get errors.
How do I handle it?
I gave
def fun_name(...):
    try:
        ...
    except TypeError:
        print 'Wrong no of arg'
It is not working.
Help please.
A:
The caller triggers this exception, not the receiver.
If you want the receiving function to explicitly check argument count you'll need to use varargs:
def fun_name(*args):
    if len(args) != 2:
        raise TypeError('Two arguments required')
A:
You need to handle it where you call the function.
try:
    fun_name(...)
except TypeError:
    print "error!"
A:
If you call a function with the wrong number of parameters then there are two possibilities:
Either you design your function to handle an arbitrary number of arguments. Then you should know what to do with the extra arguments. The answer of Alec Thomas shows you how to handle this case.
Or your design is fundamentally flawed and you are in deep trouble. Catching the error doesn't help in this case.
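If you control the call site but not the function, you can also check the arity up front instead of catching TypeError. A sketch (it deliberately ignores defaults and keyword arguments):

import inspect

def call_checked(func, *args):
    argnames, varargs, varkw, defaults = inspect.getargspec(func)
    if varargs is None and len(args) != len(argnames):
        raise TypeError('%s() expects %d arguments (%d given)'
                        % (func.__name__, len(argnames), len(args)))
    return func(*args)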
| How to handle the error that occurs on giving wrong number of parameters in a function call in Python? | When I give the wrong number of parameters in a function call, I get errors.
How do I handle it?
I gave
def fun_name(...):
    try:
        ...
    except TypeError:
        print 'Wrong no of arg'
It is not working.
Help please.
| [
"The caller triggers this exception, not the receiver.\nIf you want the receiving function to explicitly check argument count you'll need to use varargs:\ndef fun_name(*args):\n if len(args) != 2:\n raise TypeError('Two arguments required')\n\n",
"You need to handle it where you call the function.\ntry:\n fun_name(...)\nexcept TypeError:\n print \"error!\"\n\n",
"If you call a function with the wrong number of parameters then there are two possibilities:\n\nEither you design your function to handle an arbitrary number of arguments. Then you should know what to do with the extra arguments. The answer of Alec Thomas shows you how to handle this case.\nOr your design is fundamentally flawed and you are in deep trouble. Catching the error doesn't help in this case.\n\n"
] | [
4,
4,
0
] | [
"If you remove the try...catch parts it should show you what kind of exception it is throwing.\n"
] | [
-2
] | [
"function_call",
"python"
] | stackoverflow_0000456673_function_call_python.txt |
Q:
What is the correct procedure to store a utf-16 encoded rss stream into sqlite3 using python
I have a Python WSGI script that attempts to extract the RSS items that are posted to it and store the RSS in a sqlite3 db. I am using flup as the WSGIServer.
To obtain the posted content:
postData = environ["wsgi.input"].read(int(environ["CONTENT_LENGTH"]))
To attempt to store in the db:
from pysqlite2 import dbapi2 as sqlite
ldb = sqlite.connect("/var/vhost/mysite.com/db/rssharvested.db")
lcursor = ldb.cursor()
lcursor.execute("INSERT into rss(data) VALUES(?)", (postData,))
This results in only the first few characters of the rss being stored in the record:
ÿþ<
I believe the initial chars are the BOM of the rss.
I have tried every permutation I could think of including first encoding rss as utf-8 and then attempting to store but the results were the same. I could not decode because some characters could not be represented as unicode.
Running python 2.5.2
sqlite 3.5.7
Thanks in advance for any insight into this problem.
Here is a sample of the initial data contained in postData as modified by the repr function, written to a file and viewed with less:
'\xef\xbb\xbf
Thanks for the all the replies! Very helpful.
The sample I submitted didn't make it through the stackoverflow html filters will try again, converting less and greater than to entities (preview indicates this works).
\xef\xbb\xbf<?xml version="1.0" encoding="utf-16"?><rss xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><channel><item d3p1:size="0" xsi:type="tFileItem" xmlns:d3p1="http://htinc.com/opensearch-ex/1.0/">
A:
Regarding the insertion encoding - in any decent database API, you should insert unicode strings and unicode strings only.
For the reading and parsing bit, I'd recommend Mark Pilgrim's Feed Parser. It properly handles BOM, and the license allows commercial use. This may be a bit too heavy handed if you are not doing any actual parsing on the RSS data.
A:
Are you sure your incoming data are encoded as UTF-16 (otherwise known as UCS-2)?
UTF-16 encoded unicode strings typically include lots of NUL characters (surely for all characters existing in ASCII too), so UTF-16 data hardly can be stored in environment variables (env vars in POSIX are NUL terminated).
Please provide samples of the postData variable contents. Output them using repr().
Until then, the solid advice is: in all DB interactions, your strings on the Python side should be unicode strings; the DB interface should take care of all translations/encodings/decodings necessary.
A:
Before the SQL insertion you should convert the string to a unicode string. If a UnicodeError exception is raised, decode the bytes explicitly with string.decode("utf-8").
Or, you can auto-detect the encoding and decode it with the detected scheme. Auto detect encoding
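Putting the advice above together with the repr sample from the question: the body actually starts with '\xef\xbb\xbf', which is the UTF-8 BOM, despite the XML declaration claiming utf-16. Assuming the payload really is UTF-8, the 'utf_8_sig' codec strips the BOM while decoding, and you then hand sqlite a unicode string:

text = postData.decode('utf_8_sig')   # decodes UTF-8 and drops a leading BOM
lcursor.execute("INSERT into rss(data) VALUES(?)", (text,))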
| What is the correct procedure to store a utf-16 encoded rss stream into sqlite3 using python | I have a Python WSGI script that attempts to extract the RSS items that are posted to it and store the RSS in a sqlite3 db. I am using flup as the WSGIServer.
To obtain the posted content:
postData = environ["wsgi.input"].read(int(environ["CONTENT_LENGTH"]))
To attempt to store in the db:
from pysqlite2 import dbapi2 as sqlite
ldb = sqlite.connect("/var/vhost/mysite.com/db/rssharvested.db")
lcursor = ldb.cursor()
lcursor.execute("INSERT into rss(data) VALUES(?)", (postData,))
This results in only the first few characters of the rss being stored in the record:
ÿþ<
I believe the initial chars are the BOM of the rss.
I have tried every permutation I could think of including first encoding rss as utf-8 and then attempting to store but the results were the same. I could not decode because some characters could not be represented as unicode.
Running python 2.5.2
sqlite 3.5.7
Thanks in advance for any insight into this problem.
Here is a sample of the initial data contained in postData as modified by the repr function, written to a file and viewed with less:
'\xef\xbb\xbf
Thanks for the all the replies! Very helpful.
The sample I submitted didn't make it through the stackoverflow html filters will try again, converting less and greater than to entities (preview indicates this works).
\xef\xbb\xbf<?xml version="1.0" encoding="utf-16"?><rss xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><channel><item d3p1:size="0" xsi:type="tFileItem" xmlns:d3p1="http://htinc.com/opensearch-ex/1.0/">
| [
"Regarding the insertion encoding - in any decent database API, you should insert unicode strings and unicode strings only.\nFor the reading and parsing bit, I'd recommend Mark Pilgrim's Feed Parser. It properly handles BOM, and the license allows commercial use. This may be a bit too heavy handed if you are not doing any actual parsing on the RSS data.\n",
"Are you sure your incoming data are encoded as UTF-16 (otherwise known as UCS-2)?\nUTF-16 encoded unicode strings typically include lots of NUL characters (surely for all characters existing in ASCII too), so UTF-16 data hardly can be stored in environment variables (env vars in POSIX are NUL terminated).\nPlease provide samples of the postData variable contents. Output them using repr().\nUntil then, the solid advice is: in all DB interactions, your strings on the Python side should be unicode strings; the DB interface should take care of all translations/encodings/decodings necessary.\n",
"Before the SQL insertion you should to convert the string to unicode compatible strings. If you raise an UnicodeError exception, then encode the string.encode(\"utf-8\").\nOr , you can autodetect encoding and encode it , on his encode schema. Auto detect encoding\n"
] | [
1,
1,
0
] | [] | [] | [
"python",
"sqlite",
"wsgi"
] | stackoverflow_0000457641_python_sqlite_wsgi.txt |
Q:
wxPython is not throwing exceptions when it should, instead giving raw error messages
I'm coding the menu for an application I'm writing in python, using wxPython libraries for the user interface, and I'm attempting to add icons to some of the menu items. Because I'm trying to be conscientious about it, I'm trying to limit the damage done if one of the image files referenced doesn't exist, and the most simple method (in my mind) is to use exceptions.
The trouble is, when I link to a file that doesn't exist, an exception isn't thrown. Instead I get a horrific message box stating:
Can't load image from <path>: Image does not exist.
This message is exactly the type of thing I'm trying to stop, but even with the broadest exception catching statement nothing works.
This is a cut down version, taking what seems to be relevant, from what I've written:
NewProject = wx.MenuItem(File, -1, "&New Project\tCtrl+N", "Create a new project")
try:
    # raises an error message but not an exception
    NewProject.SetBitmap(wx.Image(<path>, wx.BITMAP_TYPE_PNG).ConvertToBitmap())
except Exception:
    pass
So these are my questions: What am I doing wrong? Am I approaching this in the wrong direction, putting too much emphasis on exceptions, when there are other ways around it (though non that seem in my head to be as simple)? Is this a bug in the wxPython library, since I'm pretty sure it should be throwing an exception even if it's not the best way around it?
p.s. The best a Google search could do was recommend I converted all the images to python code using the img2py module that comes with wxPython, but I would prefer to keep the images in image format for what I'm doing.
A:
That is a known issue. Robin Dunn has answered it a couple of times: just create a null log target, e.g.:
dummy_log=wx.LogNull()
when the variable dummy_log runs out of scope, normal logging is enabled again.
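Applied to the code in the question, a minimal sketch might look like this (IsOk lets you fall back gracefully when the file is missing):

def load_bitmap(path):
    no_log = wx.LogNull()   # suppresses the wx error dialog while alive
    image = wx.Image(path, wx.BITMAP_TYPE_PNG)
    del no_log              # normal logging resumes here
    if image.IsOk():
        return image.ConvertToBitmap()
    return None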
| wxPython is not throwing exceptions when it should, instead giving raw error messages | I'm coding the menu for an application I'm writing in python, using wxPython libraries for the user interface, and I'm attempting to add icons to some of the menu items. Because I'm trying to be conscientious about it, I'm trying to limit the damage done if one of the image files referenced doesn't exist, and the most simple method (in my mind) is to use exceptions.
The trouble is, when I link to a file that doesn't exist, an exception isn't thrown. Instead I get a horrific message box stating:
Can't load image from <path>: Image does not exist.
This message is exactly the type of thing I'm trying to stop, but even with the broadest exception catching statement nothing works.
This is a cut down version, taking what seems to be relevant, from what I've written:
NewProject = wx.MenuItem(File, -1, "&New Project\tCtrl+N", "Create a new project")
try:
    # raises an error message but not an exception
    NewProject.SetBitmap(wx.Image(<path>, wx.BITMAP_TYPE_PNG).ConvertToBitmap())
except Exception:
    pass
So these are my questions: What am I doing wrong? Am I approaching this in the wrong direction, putting too much emphasis on exceptions, when there are other ways around it (though non that seem in my head to be as simple)? Is this a bug in the wxPython library, since I'm pretty sure it should be throwing an exception even if it's not the best way around it?
p.s. The best a Google search could do was recommend I converted all the images to python code using the img2py module that comes with wxPython, but I would prefer to keep the images in image format for what I'm doing.
| [
"That is a known issue. Robin Dunn answered it a couple of times: just create your Logging method eg.:\ndummy_log=wx.LogNull()\n\nwhen the variable dummy_log runs out of scope, normal logging is enabled again.\n"
] | [
3
] | [] | [] | [
"exception",
"python",
"wxpython"
] | stackoverflow_0000458943_exception_python_wxpython.txt |
Q:
Filtering by relation count in SQLAlchemy
I'm using the SQLAlchemy Python ORM in a Pylons project. I have a class "Project" which has a one to many relationship with another class "Entry". I want to do a query in SQLAlchemy that gives me all of the projects which have one or more entries associated with them. At the moment I'm doing:
[project for project in Session.query(Project) if len(project.entries)>0]
which I know isn't ideal, but I can't figure out how to do a filter that does what I require (e.g. Session.query(Project).filter(Project.entries.exists())).
Any ideas?
A:
Session.query(Project).filter(Project.entries.any()) should work.
Edited to credit James Brady's comment; be sure to give him some love.
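As a side note, any() also accepts a criterion if you only want projects with entries matching some condition (the column here is hypothetical):

Session.query(Project).filter(Project.entries.any(Entry.approved == True))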
| Filtering by relation count in SQLAlchemy | I'm using the SQLAlchemy Python ORM in a Pylons project. I have a class "Project" which has a one to many relationship with another class "Entry". I want to do a query in SQLAlchemy that gives me all of the projects which have one or more entries associated with them. At the moment I'm doing:
[project for project in Session.query(Project) if len(project.entries)>0]
which I know isn't ideal, but I can't figure out how to do a filter that does what I require (e.g. Session.query(Project).filter(Project.entries.exists())).
Any ideas?
| [
"Session.query(Project).filter(Project.entries.any()) should work.\nEdited credit of James Brady's comment, be sure to give him some love.\n"
] | [
24
] | [] | [] | [
"database",
"pylons",
"python",
"sql",
"sqlalchemy"
] | stackoverflow_0000459125_database_pylons_python_sql_sqlalchemy.txt |
Q:
Which new mp3 player to run old Python scripts
Which mp3/media player could I buy that will allow me to run an existing set of Python scripts?
The existing scripts control xmms on linux: providing "next tracks" given data on ratings/last played/genre/how long since acquired/.... so that it all runs on a server upstairs somewhere, and I do not need to choose anything.
I'd like to use these scripts in the car, and I hope there is some media player that will let me change its playlist/current song from Python. Can you recommend such a machine ?
(I'd rather avoid the various types of iPod - they are a bit too "end-user focussed" for me)
A:
The only possibility I'm aware of is to use Rockbox, and then port the Python interpreter to it, or just port the functionality to some set of C programs, whichever suits you best. It might even come with the functionality you need already, so you'd just need to tweak some configuration files only.
Rockbox is an open source firmware for
mp3 players, written from scratch. It
runs on a wide range of players:
Apple: 1st through 5.5th generation iPod, iPod Mini and 1st
generation iPod Nano (not the Shuffle, 2nd/3rd/4th gen Nano, Classic or Touch)
Archos: Jukebox 5000, 6000, Studio, Recorder, FM Recorder,
Recorder V2 and Ondio
Cowon: iAudio X5, X5V, X5L, M5, M5L, M3 and M3L
iriver: H100, H300 and H10 series
Olympus: M:Robe 100
SanDisk: Sansa c200, e200 and e200R series (not the v2 models)
Toshiba: Gigabeat X and F series (not the S series)
A:
Another possibility is to use an old iPod and iPod Linux and the Python Port for iPod Linux.
You'll use Podzilla's Media Player Daemon as the player and you'll have to figure out how to have your python scripts send control messages to it instead of xmms.
| Which new mp3 player to run old Python scripts | Which mp3/media player could I buy that will allow me to run an existing set of Python scripts?
The existing scripts control xmms on linux: providing "next tracks" given data on ratings/last played/genre/how long since acquired/.... so that it all runs on a server upstairs somewhere, and I do not need to choose anything.
I'd like to use these scripts in the car, and I hope there is some media player that will let me change its playlist/current song from Python. Can you recommend such a machine ?
(I'd rather avoid the various types of iPod - they are a bit too "end-user focussed" for me)
| [
"The only possibility I'm aware of is to use Rockbox, and then port the Python interpreter to it, or just port the functionality to some set of C programs, whichever suits you best. It might even come with the functionality you need already, so you'd just need to tweak some configuration files only.\n\nRockbox is an open source firmware for\n mp3 players, written from scratch. It\n runs on a wide range of players:\n\nApple: 1st through 5.5th generation iPod, iPod Mini and 1st\n generation iPod Nano (not the Shuffle, 2nd/3rd/4th gen Nano, Classic or Touch)\nArchos: Jukebox 5000, 6000, Studio, Recorder, FM Recorder,\n Recorder V2 and Ondio\nCowon: iAudio X5, X5V, X5L, M5, M5L, M3 and M3L\niriver: H100, H300 and H10 series\nOlympus: M:Robe 100\nSanDisk: Sansa c200, e200 and e200R series (not the v2 models)\nToshiba: Gigabeat X and F series (not the S series)\n\n\n",
"Another possibility is to use an old iPod and iPod Linux and the Python Port for iPod Linux.\nYou'll use Podzilla's Media Player Daemon as the player and you'll have to figure out how to have your python scripts send control messages to it instead of xmms. \n"
] | [
3,
1
] | [] | [] | [
"embedded",
"mp3",
"python"
] | stackoverflow_0000441864_embedded_mp3_python.txt |
Q:
Python/Twisted - Sending to a specific socket object?
I have a "manager" process on a node, and several worker processes. The manager is the actual server who holds all of the connections to the clients. The manager accepts all incoming packets and puts them into a queue, and then the worker processes pull the packets out of the queue, process them, and generate a result. They send the result back to the manager (by putting them into another queue which is read by the manager), but here is where I get stuck: how do I send the result to a specific socket? When dealing with the processing of the packets on a single process, it's easy, because when you receive a packet you can reply to it by just grabbing the "transport" object in-context. But how would I do this with the method I'm using?
A:
It sounds like you might need to keep a reference to the transport (or protocol) along with the bytes that just came in on that protocol in your 'event' object. That way responses that came in on a connection go out on the same connection.
If things don't need to be processed serially perhaps you should think about setting up functors that can handle the data in parallel to remove the need for queueing. Just keep in mind that you will need to protect critical sections of your code.
Edit:
Judging from your other question about evaluating your server design it would seem that processing in parallel may not be possible for your situation, so my first suggestion stands.
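A rough sketch of that idea for the threaded case (transports are not thread-safe, so workers hand results back to the reactor thread, which owns the connections; handle is your hypothetical processing step):

import Queue

from twisted.internet import reactor, protocol

work_queue = Queue.Queue()

class ManagerProtocol(protocol.Protocol):
    def dataReceived(self, data):
        # queue the payload together with a reference back to this connection
        work_queue.put((self, data))

def worker():
    while True:
        proto, payload = work_queue.get()
        result = handle(payload)
        # marshal the write back onto the reactor thread
        reactor.callFromThread(proto.transport.write, result)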
| Python/Twisted - Sending to a specific socket object? | I have a "manager" process on a node, and several worker processes. The manager is the actual server who holds all of the connections to the clients. The manager accepts all incoming packets and puts them into a queue, and then the worker processes pull the packets out of the queue, process them, and generate a result. They send the result back to the manager (by putting them into another queue which is read by the manager), but here is where I get stuck: how do I send the result to a specific socket? When dealing with the processing of the packets on a single process, it's easy, because when you receive a packet you can reply to it by just grabbing the "transport" object in-context. But how would I do this with the method I'm using?
| [
"It sounds like you might need to keep a reference to the transport (or protocol) along with the bytes the just came in on that protocol in your 'event' object. That way responses that came in on a connection go out on the same connection. \nIf things don't need to be processed serially perhaps you should think about setting up functors that can handle the data in parallel to remove the need for queueing. Just keep in mind that you will need to protect critical sections of your code.\nEdit:\nJudging from your other question about evaluating your server design it would seem that processing in parallel may not be possible for your situation, so my first suggestion stands.\n"
] | [
3
] | [] | [] | [
"multiprocess",
"python",
"sockets",
"twisted"
] | stackoverflow_0000460068_multiprocess_python_sockets_twisted.txt |