Q:
What's a good embedded browser for a pygtk application?
I'm planning on using an embedded browser in my pygtk application and I'm debating between gtkmozembed and pywebkitgtk. Is there any compelling difference between the two? Are there any third options that I don't know about?
It should be noted that I won't be using this to access content on the web. I'm mainly using it for UI purposes.
My priorities are:
It needs to be stable.
It needs to be cross-platform.
It should be easy to use.
It should be actively maintained.
It should be extensible.
It should be fast.
A:
gtkmozembed is not available on Windows, although you can use the gecko embedding interface directly. This would require you to write some C++ code.
As far as I know, the gtk webkit port is not available on Windows yet, and still appears to be undergoing a lot of change.
For an example of a cross-platform gecko embedding solution, check out Miro.
Miro is python, and they've written just a couple of C++ classes to embed gecko on Windows, while using gtkmozembed on linux.
A:
If you judge by the project web pages, then definitely pywebkitgtk.
The pygtkmoz page says:
"Note: this project is no longer maintained. Please use gnome-python-extras (http://www.pygtk.org) instead. I apologize for any trouble this might cause, but this is better in the long run. Python bindings for GtkEmbedMozilla."
pywebkitgtk, on the other hand, looks like an actively changing project.
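For reference, here is a minimal sketch of what embedding pywebkitgtk in a pygtk window looks like (illustrative only; it assumes the webkit Python bindings are installed and exposes WebKitGTK's load_html_string):

import gtk
import webkit  # pywebkitgtk

window = gtk.Window()
window.set_default_size(640, 480)
window.connect('destroy', gtk.main_quit)

view = webkit.WebView()
# Load local UI content rather than a remote page
view.load_html_string('<h1>Hello from WebKit</h1>', 'file:///')

window.add(view)
window.show_all()
gtk.main()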
Q:
What's different between Python and Javascript regular expressions?
Are Python and JavaScript regular expression syntaxes identical?
If not, then:
What are the important differences between them
Is there a python library that "implements" JavaScript regexps?
A:
There is a comparison table here:
Regex Flavor Comparison
A:
Part 1
They are different; one difference is that Python supports Unicode and JavaScript doesn't.
Part 2
Read Mastering Regular Expressions. It gives information on how to identify the back-end engines (DFA vs NFA vs Hybrid) that a regex flavour uses. It gives tons of information on the different regex flavours out there.
There is way too much information to convey on a single SO answer, so you're better off having a solid piece of reference material on the subject.
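To make one difference concrete, here is a small example (hedged: it reflects Python 2's re module versus the JavaScript engines of this era) of two Python features, lookbehind and named groups, that classic JavaScript regexes lack:

import re

# (?<=...) lookbehind and (?P<name>...) named groups work in Python's
# re module, but not in classic JavaScript regular expressions.
m = re.search(r'(?<=\$)(?P<amount>\d+)', 'price: $42')
print m.group('amount')  # prints: 42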
A:
http://www.regular-expressions.info/javascript.html vs http://www.regular-expressions.info/python.html
Q:
Privilege Escalation in Web Environment for File Access
I have a situation where I would like to elevate the permissions I have in a web environment so that I can access a serial device.
The specific case is where I have a web interface for configuring a modem that comes up on /dev/ttyUSB[0-9].
Zero or more modems will be plugged in by an end user. I am writing some software that is capable of discerning which is a USB Wireless Modem by reading /sys/devices and talking to the modem using some AT commands.
I would like to be able to open the device and do something like:
ser = serial.Serial(tty, baudrate=115200, timeout=10)
ser.write('AT+CGSN\r\n')
imei = ser.readline()
The problem is that pyserial does this: self.fd = os.open(self.portstr, os.O_RDWR|os.O_NOCTTY|os.O_NONBLOCK) to open the serial port, where portstr is /dev/ttyUSB0, but it does it as the nobody user, which is unprivileged.
Serial ports on this system are owned by root:uucp and are set as 0660 (i.e. rw-rw----).
What is the best way for a user such as nobody who should have as few permissions as possible to open a file in dev?
Ideas I will consider:
Doing things in a subprocess using sudo.
Changing permissions of the files in /dev/ (instructions on how to do this properly using udev are appreciated!)
Using another API or piece of software I have not considered.
A:
In this case, I would write a daemon run by a system user that accepted socket connections and relayed them to the appropriate device, then use sockets within the web application to talk to the daemon. This also helps keep the web app from blocking when opening a device that isn't quite ready to deal with users, and lets you handle locking a little more sanely than you could with CGI, etc.
However, if you wish to give the application the ability to talk to the devices directly, give them the same permissions that null has.
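A rough sketch of that daemon idea (assuming pyserial; the socket path and one-line protocol are made up, and a real implementation would need error handling and locking):

import os, socket
import serial  # pyserial

SOCK_PATH = '/var/run/modemd.sock'  # hypothetical path

if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCK_PATH)
os.chmod(SOCK_PATH, 0o660)  # let the web user's group connect
server.listen(1)

while True:
    conn, _ = server.accept()
    tty = conn.recv(64).strip()  # client sends e.g. '/dev/ttyUSB0'
    ser = serial.Serial(tty, baudrate=115200, timeout=10)
    ser.write('AT+CGSN\r\n')
    conn.sendall(ser.readline())  # relay the IMEI back to the web app
    ser.close()
    conn.close()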
A:
There's another way of doing this: Unix/Linux allows sending file descriptors via Unix sockets. There's even a Perl module for that: PassAccessRights.pm. Didn't find similar module for Python but it could be easily implemented.
A:
"What is the best way for a user such as nobody who should have as few permissions as possible to open a file in dev?"
Actually, you're better off using mod_wsgi in daemon mode for your web application. The mod_wsgi user can be any username (and group) you provide.
You can run as a user with appropriately defined privileges.
See http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIDaemonProcess
A:
Configure a udev rule to chgrp the new device to nobody, if it is acceptable that every access via the web interface be permitted the same access to the device. Here's what I put in my eee-bpw package in file /etc/udev/rules.d/99-bpw.rules.
# Sierra Wireless AirCard 880 U
BUS=="usb", KERNEL=="ttyUSB2*", ACTION=="add", \
PRODUCT=="1199/6855/0", DEVNAME=="/dev/tts/USB2", \
OWNER="root", GROUP="dialout", \
SYMLINK+="bpw", RUN="/usr/sbin/bpw"
Substitute nobody for dialout. This particular rule assumes the device name to be /dev/ttyUSB2, but you can extend the rule considerably, see the udev documentation.
A:
The sudo idea could be possible. IIRC, you can set specific commands to be sudo-able, but without requiring a password.
The other option is to put nobody in a group that has access to the device you want, or to start Apache as the group that does have access.
If you're using fastcgi (or equiv), I think you can have it run scripts as the owning user (some shared hosts do this).
To change permissions of files in /dev, just chmod them.
Q:
How do you add an event to the Trac event timeline
I am writing a plug-in for Trac. I would like to add an event to the timeline each time the plug-in receives some data from a Git post-receive hook.
Looking at the timeline API, it seems you can only add new source of events. So you are responsible for retrieving and displaying the data. I would prefer saving my event to an existent source.
Where should I look in the Trac API to save events?
P.S.: my plan is to rely on a remote repository and a remote web interface to the code, like GitHub.
P.P.S.: The timeline has to display commits from the main project git repository and its clones. I don't want to host a copy of every repository that matters to the project.
A:
The timeline API is a level higher than what you need to do. There is a general VCS implementation of it in ChangesetModule, which delegates the changeset (event) retrieval itself to a VCS-specific Repository. So you should implement the versioncontrol API instead.
The API is designed for a “pull model”, in which Trac queries the VCS when constructing a timeline. If you really prefer a “push model” (why?), you could try working off the CacheRepository implementation as a base, injecting your events into the cache, or just writing an event-storing repository from scratch. Be aware that this goes against the grain of the existing design, and will very probably be unnecessary extra effort.
I suggest that you go with the normal pull model instead, it will be easier and cleaner. You could use the Subversion implementation or the Mercurial implementation as a reference, and probably use GitPython to talk to git.
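As a small illustration of the pull model, reading commits with GitPython might look like this (a sketch; the attribute names are from a recent GitPython release, so check them against the version you target):

import git  # GitPython

repo = git.Repo('/path/to/project.git')
# Walk recent commits, newest first, as candidate timeline events
for commit in repo.iter_commits('master', max_count=15):
    print commit.hexsha[:8], commit.authored_date, commit.summary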
Q:
Django workflow when modifying models frequently?
As I usually don't do up-front design of my models in Django projects, I end up modifying the models a lot and thus deleting my test database every time (because "syncdb" won't ever alter the tables automatically for you). Below is my workflow and I'd like to hear about yours. Any thoughts welcome.
Modify the model.
Delete the test database. (always a simple sqlite database for me.)
Run "syncdb".
Generate some test data via code.
goto 1.
A secondary question regarding this: in case your workflow is like the above, how do you execute step 4? Do you generate the test data manually, or is there a proper hook point in Django apps where you can inject the test-data-generating code at server startup?
TIA.
A:
Steps 2 & 3 can be done in one step:
manage.py reset appname
Step 4 is most easily managed, from my understanding, by using fixtures
A:
This is a job for Django's fixtures. They are convenient because they are database independent and the test harness (and manage.py) have built-in support for them.
To use them:
Set up your data in your app (call
it "foo") using the admin tool
Create a fixtures directory in your
"foo" app directory
Type: python manage.py dumpdata --indent=4 foo > foo/fixtures/foo.json
Now, after your syncdb stage, you just type:
python manage.py loaddata foo.json
And your data will be re-created.
If you want them in a test case:
class FooTests(TestCase):
fixtures = ['foo.json']
Note that you will have to recreate or manually update your fixtures if your schema changes drastically.
You can read more about fixtures in the django docs for Fixture Loading
A:
Here's what we do.
Apps are named with a Schema version number. appa_2, appb_1, etc.
Minor changes don't change the number.
Major changes increment the number. Syncdb works. And a "data migration" script can be written.
def migrate_appa_2_to_3():
for a in appa_2.SomeThing.objects.all():
appa_3.AnotherThing.create( a.this, a.that )
appa_3.NewThing.create( a.another, a.yetAnother )
for b in ...
The point is that drop and recreate isn't always appropriate. It's sometimes helpful to move data from the old model to the new model without rebuilding from scratch.
A:
South is the coolest.
Though good ol' reset works best when data doesn't matter.
http://south.aeracode.org/
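For a rough idea, the South cycle after a model change looks like this (command names as in South 0.7-era documentation; earlier releases used startmigration instead):

./manage.py schemamigration myapp --auto
./manage.py migrate myapp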
A:
To add to Matthew's response, I often also use custom SQL to provide initial data as documented here.
Django just looks for files in <app>/sql/<modelname>.sql and runs them after creating tables during syncdb or sqlreset. I use custom SQL when I need to do something like populate my Django tables from other non-Django database tables.
A:
Personally my development db is for a project I'm working on right now is rather large, so I use dmigrations to create db migration scripts to modify the db (rather than wiping out the db everytime like I did in the beginning).
Edit: Actually, I'm using South now :-)
Q:
Unable to make each sentence start at a new line in LaTeX with AWK/Python
I have a long document in LaTeX, which contains paragraphs. The paragraphs contain sentences such that no subsequent sentence starts at a new line.
How can I make each subsequent sentence start at a new line in my .tex file?
My attempt to the problem
We need to put a \n at the end of Sentence B wherever Sentence B has Sentence A before it.
We must not put a \n in situations where the escaped mark \. occurs.
I see that the problem can be solved by AWK and Python.
A:
So you want every sentence in your .tex file to start on a new line, but without introducing extra paragraphs? Is that correct?
Possibly you could go through your file and, every time you see a '.' followed by whitespace and a capital letter, insert a newline.
e.g. in python:
import re
sentence_end = r'\.\s+([A-Z])'
source = open('myfile.tex')
dest = open('myfile-out.tex', 'w')
for line in source:
dest.write(re.sub(sentence_end, '.\n\g<1>', line))
A:
What's wrong with putting a newline after each period? Eg:
awk '{ gsub(/\. +/, ".\n"); print }'
$ echo "abc. 123. xyz." | awk '{ gsub(/\. +/, ".\n"); print }'
abc.
123.
xyz.
A:
If I read your question correctly, what you need is the \newline command. Put it after each sentence. \\ is a shortcut for this.
A regex to do this would be something like
s/\. ([A-Z])/.\\newline\1/
Q:
Best way to remove duplicate characters (words) in a string?
What would be the best way of removing any duplicate characters and sets of characters separated by spaces in a string?
I think this example explains it better:
foo = 'h k k h2 h'
should become:
foo = 'h k h2' # order not important
Other example:
foo = 's s k'
becomes:
foo = 's k'
A:
' '.join(set(foo.split()))
Note that split() by default will split on all whitespace characters. (e.g. tabs, newlines, spaces)
So if you want to split ONLY on a space then you have to use:
' '.join(set(foo.split(' ')))
A:
Do you mean?
' '.join( set( someString.split() ) )
That's the unique space-delimited words in no particular order.
A:
out = []
for word in input.split():
    if word not in out:
        out.append(word)
output_string = " ".join(out)
Longer than using a set, but it keeps the order.
Edit: Nevermind. I missed the part in the question about order not being important. Using a set is better.
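If order did matter, a common order-preserving variant keeps a set purely for the membership tests (a sketch, assuming foo holds the input string):

seen = set()
result = []
for word in foo.split():
    if word not in seen:
        seen.add(word)
        result.append(word)
foo = ' '.join(result)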
Q:
What should I be aware of when moving from asp.net to python for web development?
I'm thinking about converting an app from Asp.net to python. I would like to know: what are the key comparisons to be aware of when moving a asp.net app to python(insert framework)?
Does python have user controls? Master pages?
A:
First, Python is a language, while ASP.NET is a web framework. In fact, you can code ASP.NET applications using IronPython.
If you want to leave ASP.NET behind and go with the Python "stack," then you can choose from several different web application frameworks, including Django and Zope.
Zope, for example, offers a pluggable architecture where you can "add on" things like wikis, blogs, and so on. It also has page templates, which are somewhat similar to the ASP.NET master page.
A:
I second the note by Out Into Space on how python is a language versus a web framework; it's an important observation that underlies pretty much everything you will experience in moving from ASP.NET to Python.
On a similar note, you will also find that the differences in language style and developer community between C#/VB.NET and Python influence the basic approach to developing web frameworks. This would be the same whether you were moving from web frameworks written in java, php, ruby, perl or any other language for that matter.
The old "when you have a hammer, everything looks like a nail" adage really shows in the basic design of the frameworks :-) Because of this, though, you will find yourself with a few paradigm shifts to make when you substitute that hammer for a screwdriver.
For example, Python web frameworks rely much less on declarative configuration than ASP.NET. Django, for example, has only a single config file that really has only a couple dozen lines (once you strip out the comments :-) ). Similarly, URL configuration and the page lifecycle are quite compact compared to ASP.NET, while being just as powerful. There's more "convention" over configuration (though much less so than Rails), and heavy use of the fact that modules in Python are top-level objects in the language... not everything has to be a class. This cuts down on the amount of code involved, and makes the application flow highly readable.
As Out Into Space mentioned, zope's page templates are "somewhat" similar to ASP.NET master page, but not exactly. Django also offers page templates that inherit from each other, and they work very well, but not if you're trying to use them like an ASP.NET template.
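To make the comparison concrete, here is a minimal sketch of Django template inheritance, which plays a role loosely similar to a master page (file names made up):

base.html:

<html><body>
  <div id="main">{% block content %}{% endblock %}</div>
</body></html>

page.html:

{% extends "base.html" %}
{% block content %}<p>Page-specific markup goes here.</p>{% endblock %}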
There also isn't a tradition of user controls in Python web frameworks a la .NET. The configuration machinery, request/response process indirection, handler complexity, and code-library size is just not part of the feel that python developers have for their toolset.
We all argue that you can build the same web application, with probably less code, and more easily debuggable/maintainable using pythonic-tools :-) The main benefit here being that you also get to take advantage of the python language, and a pythonic framework, which is what makes python developers happy to go to work in the morning. YMMV, of course.
All of which to say, you'll find you can do everything you've always done, just differently. Whether or not the differences please or frustrate you will determine if a python web framework is the right tool for you in the long run.
A:
Most Python frameworks have a 'templating' engine which provides functionality similar to ASP.NET's Master Pages and User Controls. :)
Thanks for the replies, Out Of Space and Jarret Hardie.
Q:
What are some strategies to write python code that works in CPython, Jython and IronPython
Having tried to target two of these environments at the same time, I can safely say that if you have to use a database etc. you end up having to write unique code for that environment. Have you got a great way to handle this situation?
A:
If you do find you need to write unique code for an environment, use Python's import aliasing; pick whichever module matches the platform:
import mymodule_jython as mymodule
import mymodule_cpython as mymodule
Have this stuff in a simple module (''module_importer''?) and write your code like this:
from module_importer import mymodule
This way, all you need to do is alter module_importer.py per platform.
A:
@Daren Thomas: I agree, but you should use the platform module to determine which interpreter you're running.
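Combining the two suggestions, module_importer.py could look roughly like this (a sketch; platform.python_implementation() is assumed available, i.e. Python 2.6+):

# module_importer.py
import platform

impl = platform.python_implementation()  # 'CPython', 'Jython', 'IronPython'
if impl == 'Jython':
    import mymodule_jython as mymodule
elif impl == 'IronPython':
    import mymodule_ironpython as mymodule
else:
    import mymodule_cpython as mymodule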
A:
I write code for CPython and IronPython, but this tip should work for Jython as well.
Basically, I write all the platform specific code in separate modules/packages and then import the appropriate one based on platform I'm running on. (see cdleary's comment above)
This is especially important when it comes to the differences between the SQLite implementations and if you are implementing any GUI code.
A:
The #1 thing IMO: Focus on thread safety. CPython's GIL makes writing threadsafe code easy because only one thread can access the interpreter at a time. IronPython and Jython are a little less hand-holding though.
A:
I'm pretty sure you already know this, but unfortunately Jython can't load C extension modules.
A:
There are two major issues at play here...
Firstly, to my knowledge, only CPython has RAII-style deterministic cleanup (via reference counting) - you have to close your own resources in Jython, IronPython, etc.
And Secondly, as has been mentioned, is thread safety.
Q:
Python Profiling in Eclipse
This question is semi-based on this one here:
How can you profile a python script?
I thought that this would be a great idea to run on some of my programs. Although profiling from a batch file as explained in the aforementioned answer is possible, I think it would be even better to have this option in Eclipse. At the same time, making my entire program a function and profiling it would mean I have to alter the source code?
How can I configure eclipse such that I have the ability to run the profile command on my existing programs?
Any tips or suggestions are welcomed!
A:
If you follow the common Python idiom of making all your code, even the "existing programs", importable as modules, you could do exactly what you describe, without any additional hassle.
here is the specific idiom I am talking about, which turns your program's flow "upside-down" since the __name__ == '__main__' will be placed at the bottom of the file, once all your defs are done:
# program.py file
def foo():
""" analogous to a main(). do something here """
pass
# ... fill in rest of function def's here ...
# here is where the code execution and control flow will
# actually originate for your code, when program.py is
# invoked as a program. a very common Pythonism...
if __name__ == '__main__':
foo()
In my experience, it is quite easy to retrofit any existing scripts you have to follow this form, probably a couple minutes at most.
Since there are other benefits to having you program also a module, you'll find most python scripts out there actually do it this way. One benefit of doing it this way: anything python you write is potentially useable in module form, including cProfile-ing of your foo().
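A minimal sketch of such a wrapper, which an Eclipse run configuration could point at (assuming the file above is saved as program.py and is on your path):

# profile_program.py
import cProfile
import program

cProfile.run('program.foo()', sort='cumulative')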
A:
You can always make separate modules that do nothing but profile specific stuff in your other modules. You can organize modules like these in a separate package. That way you don't change your existing code.
Q:
Django with Passenger
I'm trying to get a trivial Django project working with Passenger on Dreamhost, following the instructions here
I've set up the directories exactly as in that tutorial, and ensured that django is on my PYTHONPATH (I can run python and type 'import django' without any errors). However, when I try to access the url in a browser, I get the following message: "An error occurred importing your passenger_wsgi.py". Here is the contents of my passenger_wsgi.py file:
import sys, os
sys.path.append("/path/to/web/root/") # I used the actual path in my file
os.environ['DJANGO_SETTINGS_MODULE'] = ‘myproject.settings’
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
However, when I put the following simple "Hello World" application in passenger_wsgi.py, it works as intended, suggesting Passenger is set up correctly:
def application(environ, start_response):
write = start_response('200 OK', [('Content-type', 'text/plain')])
return ["Hello, world!"]
What am I missing? Seems like some config issue.
A:
Are those fancy quotation marks also in your code?
os.environ['DJANGO_SETTINGS_MODULE'] = ‘myproject.settings’
^ ^
If so, start by fixing them, as they cause a syntax error.
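For clarity, the corrected line with plain ASCII quotes would be:

os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'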
Q:
Paging depending on grouping of items in Django
For a website implemented in Django/Python we have the following requirement:
On a view page, 15 messages are shown per web page. When two or more messages from the same source follow each other on the view, they should be grouped together.
Maybe that's not clear, but the following example might make it so:
An example is (with 5 messages on a page this time):
Message1 Source1
Message2 Source2
Message3 Source2
Message4 Source1
Message5 Source3
...
This should be shown as:
Message1 Source1
Message2 Source2 (click here to see 1 more message from Source2)
Message4 Source1
Message5 Source3
Message6 Source2
So on each page a fixed number of items is shown on page, where some have been regrouped.
We are wondering how we can create a Django or MySQL query to query this data in a optimal and in an easy way. Note that paging is used and that the messages are sorted by time.
PS: I don't think there is a simple solution for this due to the nature of SQL, but sometimes complex problems can be easily solved
A:
I don't see any great way to do what you're trying to do directly. If you're willing to accept a little de-normalization, I would recommend a pre-save signal to mark messages as being at the head.
#In your model
head = models.BooleanField(default=True)
#As a signal plugin:
def check_head(sender, **kwargs):
message = kwargs['instance']
if hasattr(message,'no_check_head') and message.no_check_head:
return
previous_message = Message.objects.filter(time__lt=message.time).order_by('-time')[0]
if message.source == previous_message.source:
message.head = False
next_message = Message.objects.filter(time__gt=message.time).order_by('time')[0]
if message.source == next_message.source:
next_message.head = False
next_message.no_check_head = True
next_message.save()
Then your query becomes magically simple:
messages = Message.objects.filter(head=True).order_by('time')[0:15]
To be quite honest...the signal listener would have to be a bit more complicated than the one I wrote. There are a host of lost synchronization/lost update problems inherent in my approach, the solutions to which will vary depending on your server (if it is single-processed, multi-threaded, then a python Lock object should get you by, but if it is multi-processed, then you will really need to implement locking based on files or database objects). Also, you will certainly also have to write a corresponding delete signal listener.
Obviously this solution involves adding some database hits, but they are on edit as opposed to on view, which might be worthwhile for you. Otherwise, perhaps consider a cruder approach: grab 30 stories, loop through the in the view, knock out the ones you won't display, and if you have 15 left, display them, otherwise repeat. Definitely an awful worst-case scenario, but perhaps not terrible average case?
If you had a server configuration that used a single process that's multi-threaded, a Lock or RLock should do the trick. Here's a possible implementation with a non-reentrant lock:
import thread
lock = thread.allocate_lock()
def check_head(sender, **kwargs):
# This check must come outside the safe zone
# Otherwise, your code will screech to a halt
message = kwargs['instance']
if hasattr(message,'no_check_head') and message.no_check_head:
return
# define safe zone
lock.acquire()
# see code above
....
lock.release()
Again, a corresponding delete signal is critical as well.
EDIT: Many or most server configurations (such as Apache) will prefork, meaning there are several processes going on. The above code will be useless in that case. See this page for ideas on how to get started synchronizing with forked processes.
A:
I have a simple, though not perfect, template-only solution for this. In the template you can regroup the records using the regroup template tag. After regrouping you can hide successive records from the same source:
{% regroup records by source as grouped_records %}
{% for group in grouped_records %}
{% for item in group.list %}
<li{% if not forloop.first %} style="display:none"{% endif %}>
{{ item.message }} {{ item.source }}
{% if forloop.first %}
{% ifnotequal group.list|length 1 %}
<a href="#" onclick="...">Show more from the same source...</a>
{% endifnotequal %}
{% endif %}
</li>
{% endfor %}
{% endfor %}
This would be perfect if it wasn't for one thing: Pagination. If you mean to display 15 items per page, and on one page the first five are fromone source, next five from another, and the last five yet another, there would be only three visible items on the page.
Q:
Default encoding for python for stderr?
I've got a noisy python script that I want to silence by directing its stderr output to /dev/null (using bash BTW).
Like so:
python -u parse.py 1> /tmp/output3.txt 2> /dev/null
but it quickly exits prematurely. Hmm. I can't see the traceback because of course that goes out with stderr. It runs noisily and normally if I don't direct stderr somewhere.
So let's try redirecting it to a file somewhere rather than /dev/null, and take a look at what it's outputting:
python -u parse.py 1> /tmp/output3.txt 2> /tmp/foo || tail /tmp/foo
Traceback (most recent call last):
File "parse.py", line 79, in <module>
parseit('pages-articles.xml')
File "parse.py", line 33, in parseit
print >>sys.stderr, "bad page title", page_title
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-1: ordinal not in range(128)
So, the stderr that's being generated contains utf8, and for some reason python refuses to print non-ascii when it's being redirected, even though it's being directed to /dev/null (though of course python doesn't know that).
How can I silence the stderr of a python script even though it contains utf8? Is there any way to do it without re-writing every print to stderr in this script?
A:
You can silence stderr by binding it to a custom writer:
#!/usr/bin/env python
import codecs, sys
class NullWriter:
def write(self, *args, **kwargs):
pass
if len(sys.argv) == 2:
if sys.argv[1] == '1':
sys.stderr = NullWriter()
elif sys.argv[1] == '2':
#NOTE: sys.stderr.encoding is *read-only*
# therefore the whole stderr should be replaced
# encode all output using 'utf8'
sys.stderr = codecs.getwriter('utf8')(sys.stderr)
print >>sys.stderr, u"\u20AC" # euro sign
print "ok"
Example:
$ python silence_stderr.py
Traceback (most recent call last):
File "silence_stderr.py", line 11, in <module>
print >>sys.stderr, u"\u20AC"
UnicodeEncodeError: 'ascii' codec can't encode character u'\u20ac' in position 0: ordinal not in range(128)
Silenced stderr:
$ python silence_stderr.py 1
ok
Encoded stderr:
$ python silence_stderr.py 2
€
ok
NOTE: I've got the above outputs inside emacs therefore to emulate it in a terminal you could do:
$ python ... 2>out.txt
$ cat out.txt
NOTE: Inside Windows console (after chcp 65001 that switch to 'utf-8' and with truetype font (Lucida Console)) I've got strange results:
C:\> python silence_stderr.py 2
Traceback (most recent call last):
File "silence_stderr.py", line 14, in <module>
print >>sys.stderr, u"\u20AC" # euro sign
File "C:\pythonxy\python\lib\codecs.py", line 304, in write
self.stream.write(data)
IOError: [Errno 13] Permission denied
If the font is not truetype then the exception isn't raised, but the output is wrong.
Perl works for the truetype font:
C:\> perl -E"say qq(\x{20ac})"
Wide character in print at -e line 1.
€
Redirection works though:
C:\>python silence_stderr.py 2 2>tmp.log
ok
C:\>cat tmp.log
€
cat: write error: Permission denied
Re: the comment
From codecs.getwriter documentation:
Look up the codec for the given
encoding and return its StreamWriter
class or factory function. Raises a
LookupError in case the encoding
cannot be found.
An oversimplified view:
class UTF8StreamWriter:
def __init__(self, writer):
self.writer = writer
def write(self, s):
self.writer.write(s.encode('utf-8'))
sys.stderr = UTF8StreamWriter(sys.stderr)
A:
When stderr is not redirected, it takes on the encoding of your terminal. This all goes out the door when you redirect it though. You'll need to use sys.stderr.isatty() in order to detect if it's redirected and encode appropriately.
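A minimal sketch of that approach (this assumes you are happy to force utf-8 whenever stderr is not a terminal -- substitute whatever encoding suits your data):
import codecs, sys

# only re-wrap stderr when it is NOT a terminal, so interactive runs
# keep whatever encoding the terminal already provides
if not sys.stderr.isatty():
    sys.stderr = codecs.getwriter('utf8')(sys.stderr)

print >>sys.stderr, u"\u20AC"  # now safe even under 2> /dev/null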
A:
You could also just encode the string as ASCII, replacing unicode characters that don't map. Then you don't have to worry about what kind of terminal you have.
asciiTitle = page_title.encode("ascii", "backslashreplace")
print >>sys.stderr, "bad page title", asciiTitle
That replaces the characters that can't be encoded with backslash-escapes, i.e. \xfc. There are some other replace options too, described here:
http://docs.python.org/library/stdtypes.html#str.encode
| Default encoding for python for stderr? | I've got a noisy python script that I want to silence by directing its stderr output to /dev/null (using bash BTW).
Like so:
python -u parse.py 1> /tmp/output3.txt 2> /dev/null
but it quickly exits prematurely. Hmm. I can't see the traceback because of course that goes out with stderr. It runs noisily and normally if I don't direct stderr somewhere.
So let's try redirecting it to a file somewhere rather than /dev/null, and take a look at what it's outputting:
python -u parse.py 1> /tmp/output3.txt 2> /tmp/foo || tail /tmp/foo
Traceback (most recent call last):
File "parse.py", line 79, in <module>
parseit('pages-articles.xml')
File "parse.py", line 33, in parseit
print >>sys.stderr, "bad page title", page_title
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-1: ordinal not in range(128)
So, the stderr that's being generated contains utf8, and for some reason python refuses to print non-ascii when it's being redirected, even though it's being directed to /dev/null (though of course python doesn't know that).
How can I silence the stderr of a python script even though it contains utf8? Is there any way to do it without re-writing every print to stderr in this script?
| [
"You can silence stderr by binding it to a custom writer:\n#!/usr/bin/env python\nimport codecs, sys\n\nclass NullWriter:\n def write(self, *args, **kwargs):\n pass\n\nif len(sys.argv) == 2:\n if sys.argv[1] == '1':\n sys.stderr = NullWriter()\n elif sys.argv[1] == '2':\n #NOTE: sys.stderr.encoding is *read-only* \n # therefore the whole stderr should be replaced\n # encode all output using 'utf8'\n sys.stderr = codecs.getwriter('utf8')(sys.stderr)\n\nprint >>sys.stderr, u\"\\u20AC\" # euro sign\nprint \"ok\"\n\nExample:\n$ python silence_stderr.py\nTraceback (most recent call last):\n File \"silence_stderr.py\", line 11, in <module>\n print >>sys.stderr, u\"\\u20AC\"\nUnicodeEncodeError: 'ascii' codec can't encode character u'\\u20ac' in position 0: ordinal not in range(128)\n\nSilenced stderr:\n$ python silence_stderr.py 1\nok\n\nEncoded stderr:\n$ python silence_stderr.py 2\n€\nok\n\nNOTE: I've got the above outputs inside emacs therefore to emulate it in a terminal you could do:\n$ python ... 2>out.txt\n$ cat out.txt\n\nNOTE: Inside Windows console (after chcp 65001 that switch to 'utf-8' and with truetype font (Lucida Console)) I've got strange results:\nC:\\> python silence_stderr.py 2\nTraceback (most recent call last):\n File \"silence_stderr.py\", line 14, in <module>\n print >>sys.stderr, u\"\\u20AC\" # euro sign\n File \"C:\\pythonxy\\python\\lib\\codecs.py\", line 304, in write\n self.stream.write(data)\nIOError: [Errno 13] Permission denied\n\nIf the font is not truetype then the exception doesn't raise but the output is wrong.\nPerl works for the truetype font:\nC:\\> perl -E\"say qq(\\x{20ac})\"\nWide character in print at -e line 1.\n€\n\nRedirection works though:\nC:\\>python silence_stderr.py 2 2>tmp.log\nok\nC:\\>cat tmp.log\n€\ncat: write error: Permission denied\n\nre comment\nFrom codecs.getwriter documentation:\n\nLook up the codec for the given\n encoding and return its StreamWriter\n class or factory function. Raises a\n LookupError in case the encoding\n cannot be found.\n\nAn oversimplified view:\nclass UTF8StreamWriter:\n def __init__(self, writer):\n self.writer = writer\n def write(self, s):\n self.writer.write(s.encode('utf-8'))\n\nsys.stderr = UTF8StreamWriter(sys.stderr)\n\n",
"When stderr is not redirected, it takes on the encoding of your terminal. This all goes out the door when you redirect it though. You'll need to use sys.stderr.isatty() in order to detect if it's redirected and encode appropriately.\n",
"You could also just encode the string as ASCII, replacing unicode characters that don't map. Then you don't have to worry about what kind of terminal you have.\nasciiTitle = page_title.encode(\"ascii\", \"backslashreplace\")\nprint >>sys.stderr, \"bad page title\", asciiTitle\n\nThat replaces the characters that can't be encoded with backslash-escapes, i.e. \\xfc. There are some other replace options too, described here:\nhttp://docs.python.org/library/stdtypes.html#str.encode\n"
] | [
5,
4,
2
] | [] | [] | [
"bash",
"python",
"shell",
"unicode"
] | stackoverflow_0000637396_bash_python_shell_unicode.txt |
Q:
Django Admin relations between tables: save database updates in several tables
I am using Django admin for managing my data.
I have Users, Groups and Domains tables.
Users table has many to many relationship with Groups and Domains tables.
Domains table has one to many relationship with Groups table.
When I save the User data through the admin I also need some additional database updates in the users_group and the users_domains tables.
How do I do this? Where do I put the code?
A:
I think you are looking for InlineModels. They allow you to edit related models in the same page as the parent model. If you are looking for greater control than this, you can override the ModelAdmin save methods.
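For the second route, a minimal sketch of overriding save_model (the app and model names here are assumptions for illustration, not your actual code):
from django.contrib import admin
from myapp.models import User  # assumed location of your model

class UserAdmin(admin.ModelAdmin):
    def save_model(self, request, obj, form, change):
        obj.save()
        # once the user row exists, perform your additional updates
        # to the users_group / users_domains tables here

admin.site.register(User, UserAdmin)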
Also, always check out the Manual when you need something. It really is quite good.
A:
The best way to update other database tables is to perform the necessary get and save operations. However, if you have a many-to-many relationship, by default, both sides of the relationship are accessible from a <lower_case_model_name>_set attribute. That is, user.group_set.all() will give you all Group objects associated with a user, while group.user_set.all() will give you all User objects associated with a group. So if you override the save method (or register a signal listener--whichever option sounds stylistically more pleasing), try:
for group in user.group_set.all():
#play with group object
....
group.save()
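And the signal-listener variant, sketched with Django's post_save (again, the model names are assumed):
from django.db.models.signals import post_save

def user_saved(sender, instance, **kwargs):
    # instance is the User that was just saved
    for group in instance.group_set.all():
        # play with group object
        group.save()

post_save.connect(user_saved, sender=User)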
| Django Admin relations between tables: save database updates in several tables | I am using Django admin for managing my data.
I have Users, Groups and Domains tables.
Users table has many to many relationship with Groups and Domains tables.
Domains table has one to many relationship with Groups table.
When I save the User data through the admin I also need some additional database updates in the users_group and the users_domains tables.
How do I do this? Where do I put the code?
| [
"I think you are looking for InlineModels. They allow you to edit related models in the same page as the parent model. If you are looking for greater control than this, you can override the ModelAdmin save methods.\nAlso, always check out the Manual when you need something. It really is quite good.\n",
"The best way to update other database tables is to perform the necessary get and save operations. However, if you have a many-to-many relationship, by default, both sides of the relationship are accessible from a <lower_case_model_name>_set parameter. That is, user.group_set.all() will give you all Group objects associated with a user, while group.user_set.all() will give you all User objects associated with a group. So if you override the save method (or register a signal listener--whichever option sounds stylistically more pleasing), try:\nfor group in user.group_set.all():\n #play with group object\n ....\n group.save()\n\n"
] | [
2,
0
] | [] | [] | [
"django",
"django_admin",
"python"
] | stackoverflow_0000635048_django_django_admin_python.txt |
Q:
Python time to age, part 2: timezones
Following on from my previous question, Python time to age, I have now come across a problem regarding the timezone, and it turns out that it's not always going to be "+0200". So when strptime tries to parse it as such, it throws up an exception.
I thought about just chopping off the +0200 with [:-6] or whatever, but is there a real way to do this with strptime?
I am using Python 2.5.2 if it matters.
>>> from datetime import datetime
>>> fmt = "%a, %d %b %Y %H:%M:%S +0200"
>>> datetime.strptime("Tue, 22 Jul 2008 08:17:41 +0200", fmt)
datetime.datetime(2008, 7, 22, 8, 17, 41)
>>> datetime.strptime("Tue, 22 Jul 2008 08:17:41 +0300", fmt)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.5/_strptime.py", line 330, in strptime
(data_string, format))
ValueError: time data did not match format: data=Tue, 22 Jul 2008 08:17:41 +0300 fmt=%a, %d %b %Y %H:%M:%S +0200
A:
is there a real way to do this with strptime?
No, but since your format appears to be an RFC822-family date, you can read it much more easily using the email library instead:
>>> import email.utils
>>> email.utils.parsedate_tz('Tue, 22 Jul 2008 08:17:41 +0200')
(2008, 7, 22, 8, 17, 41, 0, 1, 0, 7200)
(7200 = timezone offset from UTC in seconds)
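If you then want a normalised datetime out of that 10-tuple, a small follow-up (mktime_tz folds the offset into a UTC timestamp):
>>> import email.utils, datetime
>>> t = email.utils.parsedate_tz('Tue, 22 Jul 2008 08:17:41 +0200')
>>> datetime.datetime.utcfromtimestamp(email.utils.mktime_tz(t))
datetime.datetime(2008, 7, 22, 6, 17, 41)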
A:
New in version 2.6.
For a naive object, the %z and %Z
format codes are replaced by empty
strings.
It looks like this is implemented only in >= 2.6, and I think you have to manually parse it.
I can't see any solution other than stripping off the time zone data and applying the offset by hand:
from datetime import timedelta, datetime

s = "Tue, 22 Jul 2008 08:17:41 +0300"
try:
    offset = int(s[-5:])
except ValueError:
    print "Error: unrecognised timezone offset"
    raise

# handle offsets that are not whole hours (e.g. +0530) and negative offsets
sign = 1 if offset >= 0 else -1
offset = abs(offset)
delta = sign * timedelta(hours=offset // 100, minutes=offset % 100)

fmt = "%a, %d %b %Y %H:%M:%S"
time = datetime.strptime(s[:-6], fmt)
time -= delta
A:
You can use the dateutil library which is very useful:
from datetime import datetime
from dateutil.parser import parse
dt = parse("Tue, 22 Jul 2008 08:17:41 +0200")
## datetime.datetime(2008, 7, 22, 8, 17, 41, tzinfo=tzoffset(None, 7200)) <- dt
print dt
2008-07-22 08:17:41+02:00
A:
As far as I know, strptime() doesn't recognize numeric time zone codes. If you know that the string is always going to end with a time zone specification of that form (+ or - followed by 4 digits), just chopping it off and parsing it manually seems like a perfectly reasonable thing to do.
A:
It seems that %Z corresponds to time zone names, not offsets.
For example, given:
>>> format = '%a, %d %b %Y %H:%M:%S %Z'
I can parse:
>>> datetime.datetime.strptime('Tue, 22 Jul 2008 08:17:41 GMT', format)
datetime.datetime(2008, 7, 22, 8, 17, 41)
Although it seems that it doesn't do anything with the time zone, merely observing that it exists and is valid:
>>> datetime.datetime.strptime('Tue, 22 Jul 2008 08:17:41 NZDT', format)
datetime.datetime(2008, 7, 22, 8, 17, 41)
I suppose if you wished, you could locate a mapping of offsets to names, convert your input, and then parse it. It might be simpler to just truncate your input, though.
| Python time to age, part 2: timezones | Following on from my previous question, Python time to age, I have now come across a problem regarding the timezone, and it turns out that it's not always going to be "+0200". So when strptime tries to parse it as such, it throws up an exception.
I thought about just chopping off the +0200 with [:-6] or whatever, but is there a real way to do this with strptime?
I am using Python 2.5.2 if it matters.
>>> from datetime import datetime
>>> fmt = "%a, %d %b %Y %H:%M:%S +0200"
>>> datetime.strptime("Tue, 22 Jul 2008 08:17:41 +0200", fmt)
datetime.datetime(2008, 7, 22, 8, 17, 41)
>>> datetime.strptime("Tue, 22 Jul 2008 08:17:41 +0300", fmt)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.5/_strptime.py", line 330, in strptime
(data_string, format))
ValueError: time data did not match format: data=Tue, 22 Jul 2008 08:17:41 +0300 fmt=%a, %d %b %Y %H:%M:%S +0200
| [
"\nis there a real way to do this with strptime?\n\nNo, but since your format appears to be an RFC822-family date, you can read it much more easily using the email library instead:\n>>> import email.utils\n>>> email.utils.parsedate_tz('Tue, 22 Jul 2008 08:17:41 +0200')\n(2008, 7, 22, 8, 17, 41, 0, 1, 0, 7200)\n\n(7200 = timezone offset from UTC in seconds)\n",
"\nNew in version 2.6.\nFor a naive object, the %z and %Z\n format codes are replaced by empty\n strings.\n\nIt looks like this is implemented only in >= 2.6, and I think you have to manually parse it.\nI can't see another solution than to remove the time zone data:\nfrom datetime import timedelta,datetime\ntry:\n offset = int(\"Tue, 22 Jul 2008 08:17:41 +0300\"[-5:])\nexcept:\n print \"Error\"\n\ndelta = timedelta(hours = offset / 100)\n\nfmt = \"%a, %d %b %Y %H:%M:%S\"\ntime = datetime.strptime(\"Tue, 22 Jul 2008 08:17:41 +0200\"[:-6], fmt)\ntime -= delta\n\n",
"You can use the dateutil library which is very useful:\nfrom datetime import datetime\nfrom dateutil.parser import parse\n\ndt = parse(\"Tue, 22 Jul 2008 08:17:41 +0200\")\n## datetime.datetime(2008, 7, 22, 8, 17, 41, tzinfo=tzoffset(None, 7200)) <- dt\n\nprint dt\n2008-07-22 08:17:41+02:00\n\n",
"As far as I know, strptime() doesn't recognize numeric time zone codes. If you know that the string is always going to end with a time zone specification of that form (+ or - followed by 4 digits), just chopping it off and parsing it manually seems like a perfectly reasonable thing to do.\n",
"It seems that %Z corresponds to time zone names, not offsets.\nFor example, given:\n>>> format = '%a, %d %b %Y %H:%M:%S %Z'\n\nI can parse:\n>>> datetime.datetime.strptime('Tue, 22 Jul 2008 08:17:41 GMT', format)\ndatetime.datetime(2008, 7, 22, 8, 17, 41)\n\nAlthough it seems that it doesn't do anything with the time zone, merely observing that it exists and is valid:\n>>> datetime.datetime.strptime('Tue, 22 Jul 2008 08:17:41 NZDT', format)\ndatetime.datetime(2008, 7, 22, 8, 17, 41)\n\nI suppose if you wished, you could locate a mapping of offsets to names, convert your input, and then parse it. It might be simpler to just truncate your input, though.\n"
] | [
40,
28,
18,
1,
0
] | [] | [] | [
"datetime",
"python",
"timezone"
] | stackoverflow_0000526406_datetime_python_timezone.txt |
Q:
Problem Inserting data into MS Access database using ADO via Python
[Edit 2: More information and debugging in answer below...]
I'm writing a python script to export MS Access databases into a series of text files to allow for more meaningful version control (I know - why Access? Why aren't I using existing solutions? Let's just say the restrictions aren't of a technical nature).
I've successfully exported the full contents and structure of the database using ADO and ADOX via the comtypes library, but I'm getting a problem re-importing the data.
I'm exporting the contents of each table into a text file with a list on each line, like so:
[-9, u'No reply']
[1, u'My home is as clean and comfortable as I want']
[2, u'My home could be more clean or comfortable than it is']
[3, u'My home is not at all clean or comfortable']
And the following function to import the said file:
import os
import sys
import datetime
import comtypes.client as client
from ADOconsts import *
from access_consts import *
class Db:
def create_table_contents(self, verbosity = 0):
conn = client.CreateObject("ADODB.Connection")
rs = client.CreateObject("ADODB.Recordset")
conn.ConnectionString = self.new_con_string
conn.Open()
for fname in os.listdir(self.file_path):
if fname.startswith("Table_"):
tname = fname[6:-4]
if verbosity > 0:
print "Filling table %s." % tname
conn.Execute("DELETE * FROM [%s];" % tname)
rs.Open("SELECT * FROM [%s];" % tname, conn,
adOpenDynamic, adLockOptimistic)
f = open(self.file_path + os.path.sep + fname, "r")
data = f.readline()
print repr(data)
while data != '':
data = eval(data.strip())
print data[0]
print rs.Fields.Count
rs.AddNew()
for i in range(rs.Fields.Count):
if verbosity > 1:
print "Into field %s (type %s) insert value %s." % (
rs.Fields[i].Name, str(rs.Fields[i].Type),
data[i])
rs.Fields[i].Value = data[i]
data = f.readline()
print repr(data)
rs.Update()
rs.Close()
conn.Close()
Everything works fine except that numerical values (double and int) are being inserted as zeros. Any ideas on whether the problem is with my code, eval, comtypes, or ADO?
Edit: I've fixed the problem with inserting numbers - casting them as strings(!) seems to solve the problem for both double and integer fields.
However, I now have a different issue that had previously been obscured by the above: the first field in every row is being set to 0 regardless of data type... Any ideas?
A:
And found an answer.
rs = client.CreateObject("ADODB.Recordset")
Needs to be:
rs = client.CreateObject("ADODB.Recordset", dynamic=True)
Now I just need to look into why. Just hope this question saves someone else a few hours...
A:
Is data[i] being treated as a string? What happens if you specifically cast it as an int/double when you set rs.Fields[i].Value?
Also, what happens when you print out the contents of rs.Fields[i].Value after it is set?
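Something along these lines as a quick check (purely illustrative, reusing the names from your own loop):
for i in range(rs.Fields.Count):
    rs.Fields[i].Value = data[i]
    # read the value straight back to see what ADO actually stored
    print rs.Fields[i].Name, repr(rs.Fields[i].Value)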
A:
Not a complete answer yet, but it appears to be a problem during the update. I've added some further debugging code in the insertion process which generates the following (example of a single row being updated):
Inserted into field ID (type 3) insert value 1, field value now 1.
Inserted into field TextField (type 202) insert value u'Blah', field value now Blah.
Inserted into field Numbers (type 5) insert value 55.0, field value now 55.0.
After update: [0, u'Blah', 55.0]
The last value in each "Inserted..." line is the result of calling rs.Fields[i].Value before calling rs.Update(). The "After..." line shows the results of calling rs.Fields[i].Value after calling rs.Update().
What's even more annoying is that it's not reliably failing. Rerunning the exact same code on the same records a few minutes later generated:
Inserted into field ID (type 3) insert value 1, field value now 1.
Inserted into field TextField (type 202) insert value u'Blah', field value now Blah.
Inserted into field Numbers (type 5) insert value 55.0, field value now 55.0.
After update: [1, u'Blah', 2.0]
As you can see, results are reliable until you commit them, then... not.
| Problem Inserting data into MS Access database using ADO via Python | [Edit 2: More information and debugging in answer below...]
I'm writing a python script to export MS Access databases into a series of text files to allow for more meaningful version control (I know - why Access? Why aren't I using existing solutions? Let's just say the restrictions aren't of a technical nature).
I've successfully exported the full contents and structure of the database using ADO and ADOX via the comtypes library, but I'm getting a problem re-importing the data.
I'm exporting the contents of each table into a text file with a list on each line, like so:
[-9, u'No reply']
[1, u'My home is as clean and comfortable as I want']
[2, u'My home could be more clean or comfortable than it is']
[3, u'My home is not at all clean or comfortable']
And the following function to import the said file:
import os
import sys
import datetime
import comtypes.client as client
from ADOconsts import *
from access_consts import *
class Db:
def create_table_contents(self, verbosity = 0):
conn = client.CreateObject("ADODB.Connection")
rs = client.CreateObject("ADODB.Recordset")
conn.ConnectionString = self.new_con_string
conn.Open()
for fname in os.listdir(self.file_path):
if fname.startswith("Table_"):
tname = fname[6:-4]
if verbosity > 0:
print "Filling table %s." % tname
conn.Execute("DELETE * FROM [%s];" % tname)
rs.Open("SELECT * FROM [%s];" % tname, conn,
adOpenDynamic, adLockOptimistic)
f = open(self.file_path + os.path.sep + fname, "r")
data = f.readline()
print repr(data)
while data != '':
data = eval(data.strip())
print data[0]
print rs.Fields.Count
rs.AddNew()
for i in range(rs.Fields.Count):
if verbosity > 1:
print "Into field %s (type %s) insert value %s." % (
rs.Fields[i].Name, str(rs.Fields[i].Type),
data[i])
rs.Fields[i].Value = data[i]
data = f.readline()
print repr(data)
rs.Update()
rs.Close()
conn.Close()
Everything works fine except that numerical values (double and int) are being inserted as zeros. Any ideas on whether the problem is with my code, eval, comtypes, or ADO?
Edit: I've fixed the problem with inserting numbers - casting them as strings(!) seems to solve the problem for both double and integer fields.
However, I now have a different issue that had previously been obscured by the above: the first field in every row is being set to 0 regardless of data type... Any ideas?
| [
"And found an answer.\n rs = client.CreateObject(\"ADODB.Recordset\")\n\nNeeds to be:\n rs = client.CreateObject(\"ADODB.Recordset\", dynamic=True)\n\nNow I just need to look into why. Just hope this question saves someone else a few hours...\n",
"Is data[i] being treated as a string? What happens if you specifically cast it as a int/double when you set rs.Fields[i].Value?\nAlso, what happens when you print out the contents of rs.Fields[i].Value after it is set?\n",
"Not a complete answer yet, but it appears to be a problem during the update. I've added some further debugging code in the insertion process which generates the following (example of a single row being updated):\nInserted into field ID (type 3) insert value 1, field value now 1.\nInserted into field TextField (type 202) insert value u'Blah', field value now Blah.\nInserted into field Numbers (type 5) insert value 55.0, field value now 55.0.\nAfter update: [0, u'Blah', 55.0]\n\nThe last value in each \"Inserted...\" line is the result of calling rs.Fields[i].Value before calling rs.Update(). The \"After...\" line shows the results of calling rs.Fields[i].Value after calling rs.Update().\nWhat's even more annoying is that it's not reliably failing. Rerunning the exact same code on the same records a few minutes later generated:\nInserted into field ID (type 3) insert value 1, field value now 1.\nInserted into field TextField (type 202) insert value u'Blah', field value now Blah.\nInserted into field Numbers (type 5) insert value 55.0, field value now 55.0.\nAfter update: [1, u'Blah', 2.0]\n\nAs you can see, results are reliable until you commit them, then... not.\n"
] | [
4,
0,
0
] | [] | [] | [
"ado",
"comtypes",
"ms_access",
"python"
] | stackoverflow_0000638095_ado_comtypes_ms_access_python.txt |
Q:
Unable to search for names which contain three 7s in random order with AWK/Python/Bash
I need to find names which contain the number 7 exactly three times, in any order.
My attempt
First, we need to find the names which do not contain a seven
ls | grep [^7]
Then, we could remove these matches from the whole space
ls [remove] ls | grep [^7]
The problem in my pseudo-code starts to repeat itself quickly.
How can you find the names which contain three 7s in any order using AWK/Python/Bash?
[edit]
The name can contain any number of other characters, but it must contain exactly three 7s.
A:
I don't understand the part about "random order". How do you differentiate between the "order" when it's the same token that repeats? Is "a7b7" different from "c7d7" in the order of the 7s?
Anyway, this ought to work:
ls *7*7*7*
It just lets the shell solve the problem, but maybe I didn't understand properly.
EDIT: The above is wrong; it also matches names with more than three 7s, which is not wanted. Assuming this is bash, and extended globbing is enabled, this works:
ls *([^7])7*([^7])7*([^7])7*([^7])
This reads as "zero or more characters which are not sevens, followed by a seven, followed by zero or more characters that are not sevens", and so on. It's important to understand that the asterisk is a prefix operator here, operating on the expression ([^7]) which means "any character except 7".
A:
I'm guessing you want to find files that contain exactly three 7's, but no more. Using gnu grep with the extended regexp switch (-E):
ls | grep -E '^([^7]*7){3}[^7]*$'
Should do the trick.
Basically that matches exactly 3 occurrences of "zero or more non-7s followed by a 7", then only non-7s to the end of the string (the ^ and $ anchor the pattern to the whole name).
A:
Something like this (with -F7 each 7 acts as a field separator, so a name with exactly three 7s splits into exactly four fields):
printf '%s\n' * | awk -F7 'NF==4'
A:
A Perl solution:
$ ls | perl -ne 'print if (tr/7/7/ == 3)'
3777
4777
5777
6777
7077
7177
7277
7377
7477
7577
7677
...
(I happen to have a directory with 4-digit numbers. 1777 and 2777 don't exist. :-)
A:
Or instead of doing it in a single grep, use one grep to find files with 3-or-more 7s and another to filter out 4-or-more 7s.
ls -f | egrep '7.*7.*7' | grep -v '7.*7.*7.*7'
You could move some of the work into the shell glob with the shorter
ls -f *7*7*7* | grep -v '7.*7.*7.*7'
though if there are a large number of files which match that pattern then the latter won't work because of built-in limits to the glob size.
The '-f' in the 'ls' is to prevent 'ls' from sorting the results. If there is a huge number of files in the directory then the sort time can be quite noticeable.
This two-step filter process is, I think, more understandable than using the [^7] patterns.
Also, here's the solution as a Python script, since you asked for that as an option.
import os
for filename in os.listdir("."):
    if filename.count("7") == 3:
print filename
This will handle a few cases that the shell commands won't, like (evil) filenames which contain a newline character. Though even here the output in that case would likely still be wrong, or at least unprepared for by downstream programs.
| Unable to search for names which contain three 7s in random order with AWK/Python/Bash | I need to find names which contain the number 7 exactly three times, in any order.
My attempt
First, we need to find the names which do not contain a seven
ls | grep [^7]
Then, we could remove these matches from the whole space
ls [remove] ls | grep [^7]
The problem in my pseudo-code starts to repeat itself quickly.
How can you find the names which contain three 7s in any order using AWK/Python/Bash?
[edit]
The name can contain any number of other characters, but it must contain exactly three 7s.
| [
"I don't understand the part about \"random order\". How do you differentiate between the \"order\" when it's the same token that repeats? Is \"a7b7\" different from \"c7d7\" in the order of the 7s?\nAnyway, this ought to work:\n ls *7*7*7*\n\nIt just let's the shell solve the problem, but maybe I didn't understand properly.\nEDIT: The above is wrong, it includes cases with more than four 7s which is not wanted. Assuming this is bash, and extended globbing is enabled, this works:\nls *([^7])7*([^7])7*([^7])7*([^7])\n\nThis reads as \"zero or more characters which are not sevens, followed by a seven, followed by zero or more characters that are not sevens\", and so on. It's important to understand that the asterisk is a prefix operator here, operating on the expression ([^7]) which means \"any character except 7\".\n",
"I'm guessing you want to find files that contain exactly three 7's, but no more. Using gnu grep with the extends regexp switch (-E):\n\nls | grep -E '^([^7]*7){3}[^7]*$'\n\nShould do the trick.\nBasically that matches 3 occurrences of \"not 7 followed by a 7\", then a bunch of \"not 7\" across the whole string (the ^ and $ at the beginning and end of the pattern respectively).\n",
"Something like this:\nprintf '%s\\n' *|awk -F7 NF==4\n\n",
"A Perl solution:\n$ ls | perl -ne 'print if (tr/7/7/ == 3)'\n3777\n4777\n5777\n6777\n7077\n7177\n7277\n7377\n7477\n7577\n7677\n...\n\n(I happen to have a directory with 4-digit numbers. 1777 and 2777 don't exist. :-)\n",
"Or instead of doing it in a single grep, use one grep to find files with 3-or-more 7s and another to filter out 4-or-more 7s.\nls -f | egrep '7.*7.*7' | grep -v '7.*7.*7.*7'\n\nYou could move some of the work into the shell glob with the shorter\nls -f *7*7*7* | grep -v '7.*7.*7.*7'\n\nthough if there are a large number of files which match that pattern then the latter won't work because of built-in limits to the glob size.\nThe '-f' in the 'ls' is to prevent 'ls' from sorting the results. If there is a huge number of files in the directory then the sort time can be quite noticeable.\nThis two-step filter process is, I think, more understandable than using the [^7] patterns.\nAlso, here's the solution as a Python script, since you asked for that as an option.\nimport os\nfor filename in os.listdir(\".\"):\n if filename.count(\"7\") == 4:\n print filename\n\nThis will handle a few cases that the shell commands won't, like (evil) filenames which contain a newline character. Though even here the output in that case would likely still be wrong, or at least unprepared for by downstream programs.\n"
] | [
7,
5,
2,
2,
1
] | [] | [] | [
"awk",
"bash",
"python",
"regex"
] | stackoverflow_0000639078_awk_bash_python_regex.txt |
Q:
In stackless Python, can you send a channel over a channel?
I do not have stackless currently running, so I can not try this myself.
import stackless
ch1 = stackless.channel()
ch2 = stackless.channel()
ch1.send(ch2)
ch3 = ch1.receive()
Are ch2 and ch3 then the same channel? Say:
text = "Hallo"
ch2.send(text)
assert text == ch3.receive()
This feature reminded me of a talk about Newsqueak that Robert Pike (of Plan9 fame) gave at Google. In Newsqueak you could send channels over channels.
A:
Yes. Just tested.
>>> import stackless
>>> ch1 = stackless.channel()
>>> def a():
... ch2 = stackless.channel()
... ch1.send(ch2)
... ch2.send("Hello")
...
>>> def b():
... ch3 = ch1.receive()
... print ch3.receive()
...
>>> stackless.tasklet(a)()
<stackless.tasklet object at 0x01C6FCB0>
>>> stackless.tasklet(b)()
<stackless.tasklet object at 0x01C6FAB0>
>>> stackless.run()
Hello
A:
Channels send normal Python references so the data you send (channel, string, whatever) is exactly what is received.
One example of sending a channel over a channel is when you use a tasklet as a service, that is, a tasklet listens on a channel for requests, does work, and returns the result. The request needs to include the data for the work and the return channel for the result, so that the result goes to the requestor.
Here's an extreme example I developed for my Stackless talk at PyCon a few years ago. This creates a new tasklet for each function call so I can use a recursive implementation of factorial which doesn't need to worry about Python's stack limit. I allocate a tasklet for each call and it gets the return channel for the result.
import stackless
def call_wrapper(f, args, kwargs, result_ch):
result_ch.send(f(*args, **kwargs))
# ... should also catch and forward exceptions ...
def call(f, *args, **kwargs):
result_ch = stackless.channel()
stackless.tasklet(call_wrapper)(f, args, kwargs, result_ch)
return result_ch.receive()
def factorial(n):
if n <= 1:
return 1
return n * call(factorial, n-1)
print "5! =", factorial(5)
print "1000! / 998! =", factorial(1000)/factorial(998)
The output is:
5! = 120
1000! / 998! = 999000
I have a few other examples of sending channels over channels in my presentation. It's a common thing in Stackless.
| In stackless Python, can you send a channel over a channel? | I do not have stackless currently running, so I can not try this myself.
import stackless
ch1 = stackless.channel()
ch2 = stackless.channel()
ch1.send(ch2)
ch3 = ch1.receive()
Are ch2 and ch3 then the same channel? Say:
text = "Hallo"
ch2.send(text)
assert text == ch3.receive()
This feature reminded me of a talk about Newsqueak that Robert Pike (of Plan9 fame) gave at Google. In Newsqueak you could send channels over channels.
| [
"Yes. Just tested.\n>>> import stackless\n>>> ch1 = stackless.channel()\n>>> def a():\n... ch2 = stackless.channel()\n... ch1.send(ch2)\n... ch2.send(\"Hello\")\n...\n>>> def b():\n... ch3 = ch1.receive()\n... print ch3.receive()\n...\n>>> stackless.tasklet(a)()\n<stackless.tasklet object at 0x01C6FCB0>\n>>> stackless.tasklet(b)()\n<stackless.tasklet object at 0x01C6FAB0>\n>>> stackless.run()\nHello\n\n",
"Channels send normal Python references so the data you send (channel, string, whatever) is exactly what is received.\nOne example of sending a channel over a channel is when you use a tasklet as a service, that is, a tasklet listens on a channel for requests, does work, and returns the result. The request needs to include the data for the work and the return channel for the result, so that the result goes to the requestor.\nHere's an extreme example I developed for my Stackless talk at PyCon a few years ago. This creates a new tasklet for each function call so I can use a recursive implementation of factorial which doesn't need to worry about Python's stack limit. I allocate a tasklet for each call and it gets the return channel for the result. \nimport stackless \n\ndef call_wrapper(f, args, kwargs, result_ch): \n result_ch.send(f(*args, **kwargs)) \n # ... should also catch and forward exceptions ... \n\ndef call(f, *args, **kwargs): \n result_ch = stackless.channel() \n stackless.tasklet(call_wrapper)(f, args, kwargs, result_ch) \n return result_ch.receive() \n\ndef factorial(n): \n if n <= 1: \n return 1 \n return n * call(factorial, n-1) \n\nprint \"5! =\", factorial(5) \nprint \"1000! / 998! =\", factorial(1000)/factorial(998)\n\nThe output is:\n5! = 120 \n1000! / 998! = 999000\n\nI have a few other examples of sending channels over channels in my presentation. It's a common thing in Stackless.\n"
] | [
4,
3
] | [] | [] | [
"python",
"python_stackless",
"stackless"
] | stackoverflow_0000638464_python_python_stackless_stackless.txt |
Q:
Ruby on Rails versus Python
I am in the field of data crunching and very soon might make a move to the world of web programming. I am fascinated by both Python and Ruby, as they seem to have very similar styles when it comes to writing business logic or data-crunching logic.
But when I start googling for web development, I find myself inclining towards Ruby on Rails. My question is: why is the web world so obsessed with Ruby on Rails and ActiveRecord?
There seem to be so many screencasts for learning Ruby on Rails, and a plethora of good books too.
Why is Python not able to pull the same crowd when it comes to screencasts or ORMs like ActiveRecord?
A:
Ruby and Python are languages.
Rails is a framework.
So it is not really sensible to compare Ruby on Rails vs Python.
There are Python Frameworks out there you should take a look at for a more direct comparison - http://wiki.python.org/moin/WebFrameworks (e.g. I know Django gets a lot of love, but there are others)
Edit: I've just had a google, there seem to be loads of Django Screencasts.
A:
Ruby gets more attention than Python simply because Ruby has one clear favourite when it comes to web apps while Python has traditionally had a very splintered approach (Zope, Plone, Django, Pylons, Turbogears). The critical mass of having almost all developers using one system as opposed to a variety of individual ones does a lot for improving documentation, finding and removing bugs, building hype and buzz, and so on.
In actual language terms the two are very similar in all but syntax, and Python is more popular generally. Python's perhaps been hindered by being popular in its own right before web frameworks became a big deal, making it harder for the community to agree to concentrate on any single approach.
A:
If you want Python screencasts, see ShowMeDo.com. I'm a co-founder, it is 3.5 yrs old and has over 400 Python screencasts (most are free) along with 600+ other free open-source topics:
http://showmedo.com/videos/python
In the Python section (linked) you'll see videos for Django, the entire TurboGears v1 DVD (provided freely courtesy Kevin Dangoor, the project founder), Python CGI (old-skool), web-scraping and plenty more.
About 1/10th of the content is subscriber-only, the other 90% is created by 100 open-src authors with 100,000 users/month.
Note that both Kyran and myself (co-founders) are A.I./math researchers in the UK with strong academic connections. Many of the Python videos have some links with starting out in data processing, I'll be creating new series over the coming months focused on math/stats/graphing/science purely for Python to accompany those that are already present.
HTH,
Ian.
A:
Ruby and Python have more similarities than differences; the same is true for Rails and Django, which are the leading web frameworks in the respective languages.
Both languages and both frameworks are likely to be rewarding to work with - in personal, "fun" terms at least - I don't know what the job markets are like in the specific areas.
There are some similar questions in StackOverflow: you could do worse than clicking around the "Related" list in the right-hand sidebar to get more feel.
Best thing is to get and try both: pick a small project and build it both ways. Decide which you like better and go for it!
| Ruby on Rails versus Python | I am in the field of data crunching and very soon might make a move to the world of web programming. I am fascinated by both Python and Ruby, as they seem to have very similar styles when it comes to writing business logic or data-crunching logic.
But when I start googling for web development, I find myself inclining towards Ruby on Rails. My question is: why is the web world so obsessed with Ruby on Rails and ActiveRecord?
There seem to be so many screencasts for learning Ruby on Rails, and a plethora of good books too.
Why is Python not able to pull the same crowd when it comes to screencasts or ORMs like ActiveRecord?
| [
"Ruby and Python are languages.\nRails is a framework.\nSo it is not really sensible to compare Ruby on Rails vs Python.\nThere are Python Frameworks out there you should take a look at for a more direct comparison - http://wiki.python.org/moin/WebFrameworks (e.g. I know Django gets a lot of love, but there are others)\nEdit: I've just had a google, there seem to be loads of Django Screencasts.\n",
"Ruby gets more attention than Python simply because Ruby has one clear favourite when it comes to web apps while Python has traditionally had a very splintered approach (Zope, Plone, Django, Pylons, Turbogears). The critical mass of having almost all developers using one system as opposed to a variety of individual ones does a lot for improving documentation, finding and removing bugs, building hype and buzz, and so on.\nIn actual language terms the two are very similar in all but syntax, and Python is more popular generally. Python's perhaps been hindered by being popular in its own right before web frameworks became a big deal, making it harder for the community to agree to concentrate on any single approach.\n",
"If you want Python screencasts, see ShowMeDo.com. I'm a co-founder, it is 3.5 yrs old and has over 400 Python screencasts (most are free) along with 600+ other free open-source topics:\nhttp://showmedo.com/videos/python\nIn the Python section (linked) you'll see videos for Django, the entire TurboGears v1 DVD (provided freely courtesy Kevin Dangoor, the project founder), Python CGI (old-skool), web-scraping and plenty more.\nAbout 1/10th of the content is subscriber-only, the other 90% is created by 100 open-src authors with 100,000 users/month.\nNote that both Kyran and myself (co-founders) are A.I./math researchers in the UK with strong academic connections. Many of the Python videos have some links with starting out in data processing, I'll be creating new series over the coming months focused on math/stats/graphing/science purely for Python to accompany those that are already present.\nHTH,\nIan.\n",
"Ruby and Python have more similarities than differences; the same is true for Rails and Django, which are the leading web frameworks in the respective languages.\nBoth languages and both frameworks are likely to be rewarding to work with - in personal, \"fun\" terms at least - I don't know what the job markets are like in the specific areas.\nThere are some similar questions in StackOverflow: you could do worse than clicking around the \"Related\" list in the right-hand sidebar to get more feel.\nBest thing is to get and try both: pick a small project and build it both ways. Decide which you like better and go for it!\n"
] | [
25,
16,
9,
3
] | [] | [] | [
"python",
"ruby"
] | stackoverflow_0000638150_python_ruby.txt |
Q:
Are there any pitfalls with using whitespace in Python?
At the moment I have never had a problem with whitespace in Python (although I've only used it in two projects and I was the only programmer). What are some potential pitfalls with whitespace and indentation in Python for someone learning the language?
A:
It can be confusing in some editors when one line is indented with spaces and the next with a tab: the indentation looks the same but causes an error.
Also, when you're copying code, if your editor doesn't have a function to indent entire blocks, fixing all the indentation can be annoying.
But with a good editor and a bit of practice, this shouldn't be a problem. I personally really like the way Python uses white space.
A:
yeah there are some pitfalls, but most of the time, in practice, they turn out to be enemy windmills of the Quixotic style, i.e. imaginary, and nothing to worry about in reality.
I would estimate that the pitfalls one is most likely to encounter are (including mitigating steps identified):
working with others a.k.a. collaboration
a. if you have others who for whatever reason refuse to adhere to PEP 8, then it could become a pain to maintain code. I've never seen this in practice once I point out to them that the almost universal convention for Python is indent level == four spaces
b. get anyone/everyone you work with to accept the convention and have them figure out how to have their editor automatically do it (or better yet, if you use the same editor, show them how to configure it) such that copy-and-paste and stuff just works.
having to invest in a "decent" editor other than your current preferred one, if your current preferred editor is not python friendly -- not really a pitfall, more an investment requirement to avoid the other pitfalls mentioned here associated with copy-and-paste, re-factoring, etc. Stop using Notepad and you'll thank yourself in the morning.
a. your efficiency in editing the code will be much higher under an editor which understands python
b. most modern code editors handle python decently. I myself prefer GNU Emacs, and recent versions come with excellent python-mode support out-of-the-box. There are plenty of other editors to explore, including many free alternatives and IDEs.
c. python itself comes out of the box with a "smart" python editor, IDLE. Check it out if you are not familiar with it, as it is probably already available with your python install, and may even support python better than your current editor. PyCrust is another option for a python editor implemented in python, and comes as part of wxPython.
some code generation or templating environments that incorporate python (think HTML generation or python CGI/WSGI apps) can have quirks
a. most of them, if they touch python, have taken steps to minimize the nature of python as an issue, but it still pops up once in a while.
b. if you encounter this, familiarize yourself with the steps that the framework authors have already taken to minimize the impact, and read their suggestions (and yes they will have some if it has ever been encountered in their project), and it will be simple to avoid the pitfalls related to python on this.
A:
That actually kept me away from Python for a while. Coming from a strong C background, I felt like I was driving without a seat belt.
It was aggravating when I was trying to fill up a snippet library in my editor with boilerplate, frequently used classes. I learn best by example, so I was grabbing as many interesting snippets as I could with the aim of writing a useful program while learning.
After I got in the habit of re-formatting everything that I borrowed, it wasn't so bad. But it still felt really awkward. I had to get used to a dynamically typed language PLUS indentation controlling my code.
It was quite a leap for me :)
A:
When I look at C and Java code, it's always nicely indented.
Always. Nicely. Indented.
Clearly, C and Java folks spend a lot of time getting their whitespace right.
So do Python programmers.
A:
Whitespace block delimiters force a certain amount of code formatting, which seems to irritate some programmers. Some in our shop seem to be of the attitude that they are too busy, or can't be bothered to pay attention to formatting standards, and a language that forces it rubs them raw. Sometimes the same folks gripe when others do not follow the same patterns of putting curly braces on a new line ;)
I find that Python code from the web is more commonly "readable", since this minor formatting requirement is in place. IMO, this requirement is a very useful feature.
IIRC, does not Haskell, OCaml (#light), and F# also use whitespace in the same fashion? For some reason, I have not seen any complaints about these languages.
A:
Long ago, in an environment far, far away, there were languages (such as RPG) that depended on the column structure of punch cards. This was a tedious and annoying system, and led to many errors, and newer languages such as BASIC, Pascal, and so forth were designed without this dependency.
A generation of programmers were trained on these languages and told repeatedly that the freedom to put anything anywhere was a wonderful feature of the newer languages, and they should be grateful. The freedom was used, abused, and celebrated (cf. the IOCCC) for many years.
Now the pendulum has begun to swing back, but many people still remember that forced layout is bad in some vague way, and resist it.
IMHO, the thing to do is to work with languages on their own terms, and not get hung up on tastes-great-less-filling battles.
A:
Some people say that they don't like python indentation, because it can cause errors, which can be immensely hard to detect if tabs and spaces are mixed. For example:
1 if needFrobnicating:
2 frobnicate()
3 update()
Depending on the tab width, line 3 may appear to be in the same block as line 2, or in the enclosing block. This won't cause a runtime or compile error, but the program will do an unexpected thing.
Though I have programmed in Python for 10 years and have never seen an error caused by mixing tabs and spaces
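As an aside: CPython 2 can be made to catch the mixing for you -- run the script with the -tt flag and inconsistent tab/space indentation becomes a hard error instead of a silent reinterpretation (the file name here is just a placeholder):
python -tt frobnicate.py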
A:
When Python programmers don't follow the common convention of "Use 4 spaces per indentation level" defined in PEP 8. (If you're a Python programmer and haven't read it, please do so.)
Then you run into copy-paste issues.
A:
Pick a good editor. You'd want features such as:
Automatic indentation that mimics the last indented line
Automatic indentation that you can control (tabs vs. spaces)
Show whitespace characters
Detection and mimicking of whitespace convention when loading a file
For example, Vim lets me highlight tabs with these settings:
set list
set listchars=tab:\|_
highlight SpecialKey ctermbg=Red guibg=Red
highlight SpecialKey ctermfg=White guifg=White
Which can be turned off at any time using:
set nolist
IMO, I dislike editor settings that convert tabs to spaces or vice versa, because you end up with a mix of tabs and spaces, which can be nasty.
A:
I used to think that the whitespace issue was just a question of getting used to it.
Someone pointed out some serious flaws with Python indentation, and I think they are quite valid; some subconscious understanding of these is what makes experienced programmers nervous about the whole thing:-
Cut and paste just doesn't work anymore! You cannot cut boilerplate code from one app and drop it into another app.
Your editor becomes powerless to help you. With C/Java etc. there are two things going on: the "official" curly-bracket indentation, and the "unofficial" whitespace indentation. Most editors are able to reformat the whitespace indentation to match the curly-bracket nesting -- which gives you a strong visual clue that something is wrong if the indentation is not what you expected. With Python's "space is syntax" paradigm your editor cannot help you.
The sheer pain of introducing another condition into already complex logic. Adding another if/then/else into an existing condition involves lots of silly, error-prone inserting of spaces on many lines by hand.
Refactoring is a nightmare. Moving blocks of code around your classes is so painful it's easier to put up with a "wrong" class structure than refactor it into a better one.
A:
If you use Eclipse as your IDE, you should take a look at PyDev; it handles indentation and spacing automatically. You can copy-paste from mixed-spacing sources, and it will convert them for you. Since I started learning the language, I've never once had to think about spacing.
And it's really a non-issue; sane programmers indent anyway. With Python, you just do what you've always done, minus having to type and match braces.
A:
Pitfalls
It can be annoying posting code snippets on web sites that ignore your indentation.
It's hard to see how multi-line anonymous functions (lambdas) can fit in with the syntax of the language.
It makes it hard to embed Python in HTML files to make templates in the way that PHP or C# can be embedded in PHP or ASP.NET pages. But that's not necessarily the best way to design templates anyway.
If your editor does not have sensible commands for block indent and outdent it will be tedious to realign code.
Advantages
Forces even lazy programmers to produce legible code. I've seen examples of brace-language code that I had to spend hours reformatting to be able to read it...
Python programmers do not need to spend hours discussing whether braces should go at the ends of lines K&R style or on lines on their own in the Microsoft style.
Frees the brace characters for use for dictionary and set expressions.
Is automatically pretty legible
A:
The problem is that in Python, if you use spaces to indent basic blocks in one area of a file, and tabs to indent in another, you get a run-time error. This is quite different from semicolons in C.
This isn't really a programming question, though, is it?
A:
The only trouble I've ever had is minor annoyances when I'm using code I wrote before I settled on whether I liked tabs or spaces, or cutting and pasting code from a website.
I think most decent editors these days have a convert tabs-to-spaces and back option. Textmate certainly does.
Beyond that, the indentation has never caused me any trouble.
A:
If you're using emacs, set a hard tab length of 8 and a soft tab length of 4. This way you will be alerted to any extraneous tab characters. You should always use 4 spaces instead of tabs.
A:
One drawback I experienced as a beginner with Python: forgetting to set soft tabs in my editors gave me lots of trouble.
But after a year of serious use of the language I'm not able to write poorly indented code anymore in any other language.
A:
No, I would say that is one thing in which I can find no downfalls. Yes, it is no doubt irritating to some, but that is just because they have a different formatting habit. Learn it early, and it's gonna stick.
Just look how many discussions we have over style matters in languages like C, C++, Java and such. You don't see those (useless, no doubt) discussions about languages like Python, F77, and some others mentioned here which have a "fixed" formatting style.
The only thing which is variable is spaces vs. tabs (can be changed with a little trouble in any good editor), and the number of spaces a tab accounts for (can be changed with no trouble in any good editor). Voilà! Style discussion complete.
Now I can do something useful :)
| Are there any pitfalls with using whitespace in Python? | At the moment I have never had a problem with whitespace in Python (although I've only used it in two projects and I was the only programmer). What are some potential pitfalls with whitespace and indentation in Python for someone learning the language?
| [
"It can be confusing in some editors where one line is indented with spaces and the next is indented with a tab. This is confusing as the indentation looks the same but causes an error.\nAlso when your copying code, if your editor doesn't have a function to indent entire blocks, it could be annoying fixing all the indentation.\nBut with a good editor and a bit of practice, this shouldn't be a problem. I personally really like the way Python uses white space.\n",
"yeah there are some pitfalls, but most of the time, in practice, they turn out to be enemy windmills of the Quixotic style, i.e. imaginary, and nothing to worry about in reality. \nI would estimate that the pitfalls one is most likely to encounter are (including mitigating steps identified):\n\nworking with others a.k.a. collaboration \na. if you have others which for whatever reason refuse to adhere to PEP 8, then it could become a pain to maintain code. I've never seen this in practice once I point out to them the almost universal convention for python is indent level == four spaces \nb. get anyone/everyone you work with to accept the convention and have them figure out how to have their editor automatically do it (or better yet, if you use the same editor, show them how to configure it) such that copy-and-paste and stuff just works. \nhaving to invest in a \"decent\" editor other than your current preferred one, if your current preferred editor is not python friendly -- not really a pitfall, more an investment requirement to avoid the other pitfalls mentioned associated with copy-and-paste, re-factoring, etc. stop using Notepad and you'll thank yourself in the morning.\na. your efficiency in editing the code will be much higher under an editor which understands python\nb. most modern code editors handle python decently. I myself prefer GNU Emacs, and recent versions come with excellent python-mode support out-of-the-box. The are plenty of other editors to explore, including many free alternatives and IDEs.\nc. python itself comes out of the box with a \"smart\" python editor, idle. Check it out if you are not familiar, as it is probably already available with your python install, and may even support python better than your current editor. PyCrust is another option for a python editor implemented in python, and comes as part of wxPython.\nsome code generation or templating environments that incorporate python (think HTML generation or python CGI/WSGI apps) can have quirks\na. most of them, if they touch python, have taken steps to minimize the nature of python as an issue, but it still pops up once in a while.\nb. if you encounter this, familiarize yourself with the steps that the framework authors have already taken to minimize the impact, and read their suggestions (and yes they will have some if it has ever been encountered in their project), and it will be simple to avoid the pitfalls related to python on this.\n\n",
"That actually kept me away from Python for a while. Coming from a strong C background, I felt like I was driving without a seat belt.\nIt was aggravating when I was trying to fill up a snippet library in my editor with boilerplate, frequently used classes. I learn best by example, so I was grabbing as many interesting snippets as I could with the aim of writing a useful program while learning.\nAfter I got in the habit of re-formatting everything that I borrowed, it wasn't so bad. But it still felt really awkward. I had to get used to a dynamically typed language PLUS indentation controlling my code.\nIt was quite a leap for me :)\n",
"When I look at C and Java code, it's always nicely indented.\nAlways. Nicely. Indented. \nClearly, C and Java folks spend a lot of time getting their whitespace right.\nSo do Python programmers.\n",
"Whitespace block delimiters force a certain amount of code formatting, which seems to irritate some programmers. Some in our shop seem to be of the attitude that they are too busy, or can't be bothered to pay attention to formatting standards, and a language that forces it rubs them raw. Sometimes the same folks gripe when others do not follow the same patterns of putting curly braces on a new line ;)\nI find that Python code from the web is more commonly \"readable\", since this minor formatting requirement is in place. IMO, this requirement is a very useful feature.\nIIRC, does not Haskell, OCaml (#light), and F# also use whitespace in the same fashion? For some reason, I have not seen any complaints about these languages.\n",
"Long ago, in and environment far, far away, there were languages (such as RPG) that depended on the column structure of punch cards. This was a tedious and annoying system, and led to many errors, and newer languages such as BASIC, pascal, and so forth were designed without this dependency.\nA generation of programmers were trained on these languages and told repeatedly that the freedom to put anything anywhere was a wonderful feature of the newer languages, and they should be grateful. The freedom was used, abused, and calibrated (cf the IOCC) for many years.\nNow the pendulum has begun to swing back, but many people still remember that forced layout is bad in some way vague, and resist it.\nIMHO, the thing to do is to work with languages on their own terms, and not get hung up on tastes-great-less-filling battles.\n",
"Some people say that they don't like python indentation, because it can cause errors, which would be immensely hard to detect in case if tabs and spaces are mixed. For example: \n1 if needFrobnicating:\n2 frobnicate()\n3 update()\n\nDepending on the tab width, line 3 may appear to be in the same block as line 2, or in the enclosing block. This won't cause runtime or compile error, but the program would do unexpected thing.\nThough I program in python for 10 years and never seen an error caused by mixing tabs and spaces\n",
"When python programmers don't follow the common convention of \"Use 4 spaces per indentation level\" defined in PEP 8. (If your a python programmer and haven't read it please do so)\nThen you run into copy paste issues.\n",
"Pick a good editor. You'd want features such as:\n\nAutomatic indentation that mimics the last indented line\nAutomatic indentation that you can control (tabs vs. spaces)\nShow whitespace characters\nDetection and mimicking of whitespace convention when loading a file\n\nFor example, Vim lets me highlight tabs with these settings:\nset list\nset listchars=tab:\\|_\nhighlight SpecialKey ctermbg=Red guibg=Red\nhighlight SpecialKey ctermfg=White guifg=White\n\nWhich can be turned off at any time using:\nset nolist\n\nIMO, I dislike editor settings that convert tabs to spaces or vice versa, because you end up with a mix of tabs and spaces, which can be nasty.\n",
"I used to think that the white space issues was just a question of getting used to it.\nSomeone pointed out some serious flaws with Python indentation and I think they are quite valid and some subconcious understanding of these is what makes experienced programs nervious about the whole thing:-\n\nCut and paste just doesnt work anymore! You cannot cut boiler plate code from one app and drop it into another app.\nYour editor becomes powerless to help you. With C/Jave etc. there are two things going on the \"official\" curly brackets indentation, and, the \"unnofficial\" white space indentation. Most editors are able reformat hte white space indentation to match the curly brackets nesting -- which gives you a string visual clue that something is wrong if the indentation is not what you expected. With pythons \"space is syntax\" paradigm your editor cannot help you.\nThe sheer pain of introducing another condition into already complex logic. Adding another if then else into an existing condition involves lots of silly error prone inserting of spaces on many lines line by hand.\nRefactoring is a nightmare. Moving blocks of code around your classes is so painful its easier to put up with a \"wrong\" class structure than refactor it into a better one. \n\n",
"If you use Eclipse as your IDE, you should take a look at PyDev; it handles indentation and spacing automatically. You can copy-paste from mixed-spacing sources, and it will convert them for you. Since I started learning the language, I've never once had to think about spacing.\nAnd it's really a non-issue; sane programmers indent anyway. With Python, you just do what you've always done, minus having to type and match braces.\n",
"Pitfalls\n\nIt can be annoying posting code snippets on web sites that ignore your indentation.\nIts hard to see how multi-line anonymous functions (lambdas) can fit in with the syntax of the language.\nIt makes it hard to embed Python in HTML files to make templates in the way that PHP or C# can be embedded in PHP or ASP.NET pages. But that's not necessarily the best way to design templates anyway.\nIf your editor does not have sensible commands for block indent and outdent it will be tedious to realign code.\n\nAdvantages\n\nForces even lazy programmers to produce legible code. I've seen examples of brace-language code that I had to spend hours reformatting to be able to read it...\nPython programmers do not need to spend hours discussing whether braces should go at the ends of lines K&R style or on lines on their own in the Microsoft style.\nFrees the brace characters for use for dictionary and set expressions.\nIs automatically pretty legible\n\n",
"The problem is that in Python, if you use spaces to indent basic blocks in one area of a file, and tabs to indent in another, you get a run-time error. This is quite different from semicolons in C.\nThis isn't really a programming question, though, is it?\n",
"The only trouble I've ever had is minor annoyances when I'm using code I wrote before I settled on whether I liked tabs or spaces, or cutting and posting code from a website.\nI think most decent editors these days have a convert tabs-to-spaces and back option. Textmate certainly does.\nBeyond that, the indentation has never caused me any trouble.\n",
"If your using emacs, set a hard tab length of 8 and a soft tab length of 4. This way you will be alterted to any extraneous tab characters. You should always uses 4 spaces instead of tabs.\n",
"One drawback I experienced as a beginner whith python was forgetting to set softtabs in my editors gave me lots of trouble.\nBut after a year of serious use of the language I'm not able to write poorly indented code anymore in any other language.\n",
"No, I would say that is one thing to which I can find no downfalls. Yes, it is no doubt irritating to some, but that is just because they have a different habit about their style of formatting. Learn it early, and it's gonna stick.\nJust look how many discussions we have over a style matter in languages like C, Cpp, Java and such. You don't see those (useless, no doubt) discussions about languages like Python, F77, and some others mentioned here which have a \"fixed\" formatting style.\nThe only thing which is variable is spaces vs. tabs (can be changed with a little trouble with any good editor), and amount of spaces tab accounts for (can be changed with no trouble with any good editor). Voila ! Style discussion complete.\nNow I can do something useful :)\n"
] | [
11,
10,
2,
2,
1,
1,
1,
1,
1,
1,
1,
1,
0,
0,
0,
0,
0
] | [] | [] | [
"python",
"whitespace"
] | stackoverflow_0000637295_python_whitespace.txt |
Q:
PHP vs. long-running process (Python, Java, etc.)?
I'd like to have your opinion about writing web apps in PHP vs. a long-running process using tools such as Django or Turbogears for Python.
As far as I know:
- In PHP, pages are fetched from the hard-disk every time (although I assume the OS keeps files in RAM for a while after they've been accessed)
- Pages are recompiled into opcode every time (although tools from eg. Zend can keep a compiled version in RAM)
- Fetching pages every time means reading global and session data every time, and re-opening connections to the DB
So, I guess PHP makes sense on a shared server (multiple sites sharing the same host) to run apps with moderate use, while a long-running process offers higher performance with apps that run on a dedicated server and are under heavy use?
Thanks for any feedback.
A:
After you apply memcache, opcode caching, and connection pooling, the only real difference between PHP and other options is that PHP is short-lived and process-based, while other options are, typically, long-lived and multithreaded.
The advantage PHP has is that it's dirt simple to write scripts. You don't have to worry about memory management (it's always released at the end of the request), and you don't have to worry about concurrency very much.
The major disadvantage, as far as I can see, is that some more advanced (sometimes crazier?) things are harder: pre-computing results, warming caches, reusing existing data, request prioritizing, and asynchronous programming. I'm sure people can think of many more.
Most of the time, though, those disadvantages aren't a big deal. You can scale by adding more machines and using more caching. The average web developer doesn't need to worry about concurrency control or memory management, so taking the minuscule hit from removing them isn't a big deal.
A:
With APC, which is soon to be included by default in PHP, compiled bytecode is kept in RAM.
With mod_php, which is the most popular way to use PHP, the PHP interpreter stays in web server's memory.
With APC data store or memcache, you can have persistent objects in RAM instead of for example always creating them all anew by fetching data from DB.
In real-life deployment you'd use all of the above.
A:
PHP is fine for either use in my opinion; the performance overheads are rarely noticed. It's usually other processes which will delay the program. It's easy to cache PHP programs with something like eAccelerator.
A:
As many others have noted, neither PHP nor Django is going to be your bottleneck. Hitting the hard disk for the bytecode on PHP is irrelevant for a heavily trafficked site because caching will take over at that point. The same is true for Django.
Model/view and user-experience design decisions will affect performance by orders of magnitude more than the language itself.
| PHP vs. long-running process (Python, Java, etc.)? | I'd like to have your opinion about writing web apps in PHP vs. a long-running process using tools such as Django or Turbogears for Python.
As far as I know:
- In PHP, pages are fetched from the hard-disk every time (although I assume the OS keeps files in RAM for a while after they've been accessed)
- Pages are recompiled into opcode every time (although tools from eg. Zend can keep a compiled version in RAM)
- Fetching pages every time means reading global and session data every time, and re-opening connections to the DB
So, I guess PHP makes sense on a shared server (multiple sites sharing the same host) to run apps with moderate use, while a long-running process offers higher performance with apps that run on a dedicated server and are under heavy use?
Thanks for any feedback.
| [
"After you apply memcache, opcode caching, and connection pooling, the only real difference between PHP and other options is that PHP is short-lived, processed based, while other options are, typically, long-lived multithreaded based.\nThe advantage PHP has is that its dirt simple to write scripts. You don't have to worry about memory management (its always released at the end of the request), and you don't have to worry about concurrency very much.\nThe major disadvantage, I can see anyways, is that some more advanced (sometimes crazier?) things are harder: pre-computing results, warming caches, reusing existing data, request prioritizing, and asynchronous programming. I'm sure people can think of many more.\nMost of the time, though, those disadvantages aren't a big deal. You can scale by adding more machines and using more caching. The average web developer doesn't need to worry about concurrency control or memory management, so taking the minuscule hit from removing them isn't a big deal.\n",
"\nWith APC, which is soon to be included by default in PHP compiled bytecode is kept in RAM. \nWith mod_php, which is the most popular way to use PHP, the PHP interpreter stays in web server's memory.\nWith APC data store or memcache, you can have persistent objects in RAM instead of for example always creating them all anew by fetching data from DB.\n\nIn real life deployment you'd use all of above.\n",
"PHP is fine for either use in my opinion, the performance overheads are rarely noticed. It's usually other processes which will delay the program. It's easy to cache PHP programs with something like eAccelerator.\n",
"As many others have noted, PHP nor Django are going to be your bottlenecks. Hitting the hard disk for the bytecode on PHP is irrelevant for a heavily trafficked site because caching will take over at that point. The same is true for Django.\nModel/View and user experience design will have order of magnitude benefits to performance over the language itself.\n"
] | [
3,
2,
1,
0
] | [
"PHP is a language like Java etc. \nOnly your executable is the php binary and not the JVM! You can set another MAX-Runtime for PHP-Scripts without any problems (if your shared hosting provider let you do so).\nWhere your apps are running shouldn't depend on the kind of the server. It should depend on the ressources used by the application (CPU-Time,RAM) and what is given by your Server/Vserver/Shared Host!\nFor performance tuning reasons you should have a look at eAccelerator etc. \nApache supports also modules for connection pooling! See mod_dbd.\nIf you need to scale (like in a cluster) you can use distributed memory caching systems like memcached!\n"
] | [
-1
] | [
"php",
"python"
] | stackoverflow_0000639409_php_python.txt |
Q:
Validating Oracle dates in Python
Our Python CMS stores some date values in a generic "attribute" table's varchar column. Some of these dates are later moved into a table with an actual date column. If the CMS user entered an invalid date, it doesn't get caught until the migration, when the query fails with an "Invalid string date" error.
How can I use Python to make sure that all dates put into our CMS are valid Oracle string date representations?
A:
How can I use Python to make sure that all dates put into our CMS are valid Oracle string date representations?
I'd change the approach a bit. Have Python parse the original date input as forgivingly as possible, then output the date in a known-good representation.
dateutil's liberal parser may be a good place to start:
import dateutil.parser
d= dateutil.parser.parse('1/2/2003')
d.strftime('%d-%b-%y')
I'm not sure '%d-%b-%y' is actually still the right date format for Oracle, but it'll probably be something similar, ideally with four-digit years and no reliance on month names. (Trap: %b is locale-dependent so may return unwanted month names on a non-English OS.) Perhaps “strftime('%Y-%m-%d')” followed by “TO_DATE(..., 'YYYY-MM-DD')” at the Oracle end is needed?
A:
The format of a date string that Oracle recognizes as a date is a configurable property of the database and as such it's considered bad form to rely on implicit conversions of strings to dates.
Typically Oracle dates format to 'DD-MON-YYYY' but you can't always rely on it being set that way.
Personally I would have the CMS write to this "attribute" table in a standard format like 'YYYY-MM-DD', and then whichever job moves that to a DATE column can explicitly cast the value with to_date( value, 'YYYY-MM-DD' ) and you won't have any problems.
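To catch bad dates when the CMS user enters them, rather than at migration time, a minimal sketch of such a validation step (the 'YYYY-MM-DD' format is the convention suggested above; the function name is just for illustration):
import datetime

def is_valid_date(value, fmt='%Y-%m-%d'):
    # strptime raises ValueError for anything that is not a real
    # calendar date in the given format, e.g. '2009-02-30'
    try:
        datetime.datetime.strptime(value, fmt)
        return True
    except ValueError:
        return False

is_valid_date('2009-03-12')  # True
is_valid_date('2009-02-30')  # False: February has no 30th day

A value that passes this check can then be handed safely to Oracle's to_date(value, 'YYYY-MM-DD').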
| Validating Oracle dates in Python | Our Python CMS stores some date values in a generic "attribute" table's varchar column. Some of these dates are later moved into a table with an actual date column. If the CMS user entered an invalid date, it doesn't get caught until the migration, when the query fails with an "Invalid string date" error.
How can I use Python to make sure that all dates put into our CMS are valid Oracle string date representations?
| [
"\nHow can I use Python to make sure that all dates put into our CMS are valid Oracle string date representations?\n\nI'd change the approach a bit. Have Python parse the original date input as forgivingly as possible, then output the date in a known-good representation.\ndateutil's liberal parser may be a good place to start:\nimport dateutil.parser\nd= dateutil.parser.parse('1/2/2003')\nd.strftime('%d-%b-%y')\n\nI'm not sure '%d-%b-%y' is actually still the right date format for Oracle, but it'll probably be something similar, ideally with four-digit years and no reliance on month names. (Trap: %b is locale-dependent so may return unwanted month names on a non-English OS.) Perhaps “strftime('%Y-%m-%d')” followed by “TO_DATE(..., 'YYYY-MM-DD')” at the Oracle end is needed?\n",
"The format of a date string that Oracle recognizes as a date is a configurable property of the database and as such it's considered bad form to rely on implicit conversions of strings to dates.\nTypically Oracle dates format to 'DD-MON-YYYY' but you can't always rely on it being set that way.\nPersonally I would have the CMS write to this \"attribute\" table in a standard format like 'YYYY-MM-DD', and then whichever job moves that to a DATE column can explicitly cast the value with to_date( value, 'YYYY-MM-DD' ) and you won't have any problems.\n"
] | [
5,
1
] | [
"Validate as early as possible. Why don't you store dates as dates in your Python CMS? \nIt is difficult to know what date a string like '03-04-2008' is. Is it 3 april 2008 or 4 march 2008? An American will say 4 march 2008 but a Dutch person will say 3 april 2008. \n"
] | [
-1
] | [
"oracle",
"python",
"validation"
] | stackoverflow_0000639949_oracle_python_validation.txt |
Q:
Is there any linux distribution that comes with python 2.6 yet?
I've heard Ubuntu 9.04 will, but it's still in alpha. Are there any stable distros that come with Python 2.6, or at least don't depend on it so heavily that reinstalling Python would break anything?
A:
Arch Linux - http://www.archlinux.org/
A:
You can install python 2.6 in Ubuntu 8.10 just fine.
./configure --prefix=/usr/local
make
sudo make altinstall
Then just run python with:
python2.6
If you want to use it in a shebang line, just use:
#!/usr/bin/env python2.6
Your scripts will still work when Jaunty 9.04 is around with native python2.6.
A:
openSUSE 11.1 ships Python 2.6 as standard.
A:
DistroWatch will probably be your best place to look; it has lots of details comparing different distros.
http://distrowatch.com/
Karl
A:
For what it's worth, Python 2.6 is available in the Portage tree for Gentoo, but it's hardmasked (that doesn't really count as stable) because apparently there are some programs that don't work with it. My guess is that if you had Gentoo, you could install Python 2.6 and get it to work, but it might not be smart to make it the default version (i.e. you'd want to keep Python 2.5 around as well).
A:
Python 2.6.5 is available in Fedora Rawhide, but Rawhide is the development branch, so that probably doesn't fit your description of "stable" (despite being remarkably stable). It doesn't seem to have made it into Fedora 10 yet, and I don't know if it will or not. It will be in Fedora 11 for sure -- you can get a prerelease of it already, which should be more stable than Rawhide. Alternately, you should be able to grab the package from the prerelease or Rawhide and install it under Fedora 10 without issue, I think.
| Is there any linux distribution that comes with python 2.6 yet? | I've heard Ubuntu 9.04 will, but it's still in alpha. Are there any stable distros that come with Python 2.6, or at least don't depend on it so heavily that reinstalling Python would break anything?
| [
"Arch Linux - http://www.archlinux.org/\n",
"You can install python 2.6 in Ubuntu 8.10 just fine.\n./configure --prefix=/usr/local\nmake\nsudo make altinstall --prefix=/usr/local\n\nThen just run python with:\npython2.6\n\nIf you want to use it in a shebang line, just use:\n#!/usr/bin/env python2.6\n\nYour scripts will still work when Jaunty 9.04 is around with native python2.6.\n",
"openSUSE 11.1 ships Python 2.6 as standard.\n",
"Distrowatch will prob be your best place to look it has lots of details comparing different distros.\nhttp://distrowatch.com/ \nKarl\n",
"For what it's worth, Python 2.6 is available in the Portage tree for Gentoo, but it's hardmasked (that doesn't really count as stable) because apparently there are some programs that don't work with it. My guess is that if you had Gentoo, you could install Python 2.6 and get it to work, but it might not be smart to make it the default version (i.e. you'd want to keep Python 2.5 around as well).\n",
"Python 2.6.5 is available in Fedora Rawhide, but Rawhide is the development branch, so that probably doesn't fit your description of \"stable\" (despite being remarkably stable). It doesn't seem to have made it into Fedora 10 yet, and I don't know if it will or not. It will be in Fedora 11 for sure -- you can get a prerelease of it already, which should be more stable than Rawhide. Alternately, you should be able to grab the package from the prerelease or Rawhide and install it under Fedora 10 without issue, I think.\n"
] | [
9,
6,
4,
3,
1,
0
] | [] | [] | [
"linux",
"python"
] | stackoverflow_0000640191_linux_python.txt |
Q:
Running unit tests with Nose inside a Python environment such as Autodesk Maya?
I'd like to start creating unit tests for my Maya scripts. These scripts must be run inside the Maya environment and rely on the maya.cmds module namespace.
How can I run Nose tests from inside a running environment such as Maya?
A:
Use the mayapy executable included in your maya install instead of the standard python executable.
In order for this to work you'll need to run nose programmatically. Create a python file called runtests.py and put it next to your test files. In it, include the following code:
import sys
sys.path.append('/path/to/site-packages')
import nose
nose.run()
Since mayapy loads its own pythonpath, it doesn't know about the site-packages directory where nose lives, so sys.path is extended manually inside the script before nose is imported. (Assigning to os.environ['PYTHONPATH'] here would not help, because that variable is only read at interpreter startup.) Alternatively, you can set PYTHONPATH as a system environment variable before launching mayapy.
From the command line use the mayapy application to run the runtests.py script:
/path/to/mayapy.exe runtests.py
You may need to import the maya.standalone depending on what your tests do.
import maya.standalone
maya.standalone.initialize(name='python')
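For completeness, a minimal sketch of what one of the test files picked up by nose might look like (the file name and the 'testSphere' object name are hypothetical):
# test_scene.py -- discovered and run by nose under mayapy
import maya.cmds as cmds

def test_create_sphere():
    # polySphere returns the new transform plus its construction node
    transform, history = cmds.polySphere(name='testSphere')
    assert cmds.objExists(transform)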
| Running unit tests with Nose inside a Python environment such as Autodesk Maya? | I'd like to start creating unit tests for my Maya scripts. These scripts must be run inside the Maya environment and rely on the maya.cmds module namespace.
How can I run Nose tests from inside a running environment such as Maya?
| [
"Use the mayapy executable included in your maya install instead of the standard python executable.\nIn order for this work you'll need to run nose programmatically. Create a python file called runtests.py and put it next to your test files. In it, include the following code:\nimport os\nos.environ['PYTHONPATH'] = '/path/to/site-packages'\n\nimport nose\nnose.run()\n\nSince mayapy loads its own pythonpath, it doesn't know about the site-packages directory where nose is. os.environ is used to set this manually inside the script. Optionally you can set this as a system environment variable as well.\nFrom the command line use the mayapy application to run the runtests.py script:\n\n/path/to/mayapy.exe runtests.py\n\nYou may need to import the maya.standalone depending on what your tests do.\nimport maya.standalone\nmaya.standalone.initialize(name='python')\n\n"
] | [
15
] | [] | [] | [
"environment",
"maya",
"nose",
"python",
"unit_testing"
] | stackoverflow_0000639744_environment_maya_nose_python_unit_testing.txt |
Q:
Scalable polling of an AppEngine application from numerous "active" clients?
I'm working on an application that will run on Google AppEngine.
I plan to have the web interface of that application wait, among many other things, for notifications coming from the AppEngine server.
Ideally I would have liked to use an XMLHttpRequest() to make a request to the server that would be waiting until the next notification comes from the application.
However there does not appear to be a way in AppEngine to support this type of logic (correct me if I'm wrong). This means I appear to be limited to polling at periodic intervals.
So the question is:
Does anyone have a good suggestion of how to best design this polling mechanism in order to avoid running into CPU usage quotas of AppEngine? Scalability as the number of "active" clients takes off needs to be considered.
I am specifically interested in suggestions for good management of the polling intervals from the client side and tips for efficient handling of the requests in the AppEngine application as the number of "active" clients grows.
PS: the type of information polled from the server will typically be JSON-encoded information about recently updated/added bits of information (read recently as: in the past few seconds or minutes).
Status Update
Here is a summary of my thoughts so far around this question:
To minimize the CPU load required to answer each individual request generated by the polling approach: use the memcache to minimize the time it takes to collect the reply information. A first sketch of this is shown after this list; pointers to better examples are welcome.
To minimize the number of requests generated to the server by the "active" clients I have several leads:
Make the wait between successive polling requests to the server progressively longer if the user is not actively interacting (i.e. not clicking on anything) within the client web page.
Piggy back on other types of requests to the server, that is include the results of the polling requests into other request results to save on the number of requests.
Comments and pointers to code examples welcome!
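To make the memcache idea above concrete, here is a minimal sketch of the poll handler I have in mind (build_updates_json is a hypothetical helper that queries the datastore and JSON-encodes the recent items; the 30-second cache lifetime is just a placeholder):
from google.appengine.api import memcache
from google.appengine.ext import webapp

class PollHandler(webapp.RequestHandler):
    def get(self):
        # serve the JSON payload from memcache whenever possible
        payload = memcache.get('recent_updates_json')
        if payload is None:
            payload = build_updates_json()  # hypothetical datastore query
            memcache.add('recent_updates_json', payload, time=30)
        self.response.headers['Content-Type'] = 'application/json'
        self.response.out.write(payload)

This way most polls are served straight from memcache, and only about one request per cache lifetime pays the full datastore cost.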
A:
OR...
You might be interested in some pubsub implementation. Like the venerable pubsubhubbub, made by guys from Google and Jaiku.
| Scalable polling of an AppEngine application from numerous "active" clients? | I'm working on an application that will run on Google AppEngine.
I plan to have the web interface of that application wait, among many other things, for notifications coming from the AppEngine server.
Ideally I would have liked to use an XMLHttpRequest() to make a request to the server that would be waiting until the next notification comes from the application.
However there does not appear to be a way in AppEngine to support this type of logic (correct me if I'm wrong). This means I appear to be limited to polling at periodic intervals.
So the question is:
Does anyone have a good suggestion of how to best design this polling mechanism in order to avoid running into CPU usage quotas of AppEngine? Scalability as the number of "active" clients takes off needs to be considered.
I am specifically interested in suggestions for good management of the polling intervals from the client side and tips for efficient handling of the requests in the AppEngine application as the number of "active" clients grows.
PS: the type of information polled from the server will typically be JSON-encoded information about recently updated/added bits of information (read recently as: in the past few seconds or minutes).
Status Update
Here is a summary of my thoughts so far around this question:
To minimize the CPU load required to answer each individual request generated by the polling approach: use the memcache to minimize the time it takes to collect the reply information. Need to find pointers to a good example of that.
To minimize the number of requests generated to the server by the "active" clients I have several leads:
Make the wait between successive polling requests to the server progressively longer if the user is not actively interacting (i.e. not clicking on anything) within the client web page.
Piggy back on other types of requests to the server, that is include the results of the polling requests into other request results to save on the number of requests.
Comments and pointers to code examples welcome!
| [
"OR...\nYou might be interested in some pubsub implementation. Like the venerable pubsubhubbub, made by guys from Google and Jaiku.\n"
] | [
1
] | [] | [] | [
"ajax",
"google_app_engine",
"javascript",
"python"
] | stackoverflow_0000633999_ajax_google_app_engine_javascript_python.txt |
Q:
tell whether python is in -i mode
How can you tell whether python has been started with the -i flag?
According to the docs, you can check the PYTHONINSPECT variable in os.environ, which is the equivalent of -i. But apparently it doesn't work the same way.
Works:
$ PYTHONINSPECT=1 python -c 'import os; print os.environ["PYTHONINSPECT"]'
Doesn't work:
$ python -i -c 'import os; print os.environ["PYTHONINSPECT"]'
The reason I ask is because I have a script that calls sys.exit(-1) if certain conditions fail. This is good, but sometimes I want to manually debug it using -i. I suppose I can just learn to use "PYTHONINSPECT=1 python" instead of "python -i", but it would be nice if there were a universal way of doing this.
A:
How to set inspect mode programmatically
The answer from the link @Jweede provided is imprecise. It should be:
import os
os.environ['PYTHONINSPECT'] = '1'
How to retrieve whether interactive/inspect flags are set
Just another variant of @Brian's answer:
import os
from ctypes import POINTER, c_int, cast, pythonapi
def in_interactive_inspect_mode():
"""Whether '-i' option is present or PYTHONINSPECT is not empty."""
if os.environ.get('PYTHONINSPECT'): return True
iflag_ptr = cast(pythonapi.Py_InteractiveFlag, POINTER(c_int))
#NOTE: in Python 2.6+ ctypes.pythonapi.Py_InspectFlag > 0
# when PYTHONINSPECT set or '-i' is present
return iflag_ptr.contents.value != 0
See the Python's main.c.
A:
I took a look at the source, and although the variable set when -i is provided is stored in Py_InteractiveFlag, it doesn't look like it gets exposed to python.
However, if you don't mind getting your hands a bit dirty with some low-level ctypes inspecting, I think you can get at the value by:
import ctypes, os
def interactive_inspect_mode():
flagPtr = ctypes.cast(ctypes.pythonapi.Py_InteractiveFlag,
ctypes.POINTER(ctypes.c_int))
return flagPtr.contents.value > 0 or bool(os.environ.get("PYTHONINSPECT",False))
[Edit] fix typo and also check PYTHONINSPECT (which doesn't set the variable), as pointed out in comments.
A:
This specifies how to programmatically switch your script to interactive mode.
| tell whether python is in -i mode | How can you tell whether python has been started with the -i flag?
According to the docs, you can check the PYTHONINSPECT variable in os.environ, which is the equivalent of -i. But apparently it doesn't work the same way.
Works:
$ PYTHONINSPECT=1 python -c 'import os; print os.environ["PYTHONINSPECT"]'
Doesn't work:
$ python -i -c 'import os; print os.environ["PYTHONINSPECT"]'
The reason I ask is because I have a script that calls sys.exit(-1) if certain conditions fail. This is good, but sometimes I want to manually debug it using -i. I suppose I can just learn to use "PYTHONINSPECT=1 python" instead of "python -i", but it would be nice if there were a universal way of doing this.
| [
"How to set inspect mode programmatically\nThe answer from the link @Jweede provided is imprecise. It should be:\nimport os\nos.environ['PYTHONINSPECT'] = '1'\n\nHow to retrieve whether interactive/inspect flags are set\nJust another variant of @Brian's answer:\nimport os\nfrom ctypes import POINTER, c_int, cast, pythonapi\n\ndef in_interactive_inspect_mode():\n \"\"\"Whether '-i' option is present or PYTHONINSPECT is not empty.\"\"\"\n if os.environ.get('PYTHONINSPECT'): return True\n iflag_ptr = cast(pythonapi.Py_InteractiveFlag, POINTER(c_int))\n #NOTE: in Python 2.6+ ctypes.pythonapi.Py_InspectFlag > 0\n # when PYTHONINSPECT set or '-i' is present \n return iflag_ptr.contents.value != 0\n\nSee the Python's main.c.\n",
"I took a look at the source, and although the variable set when -i is provided is stored in Py_InteractiveFlag, it doesn't look like it gets exposed to python.\nHowever, if you don't mind getting your hands a bit dirty with some low-level ctypes inspecting, I think you can get at the value by:\nimport ctypes, os\n\ndef interactive_inspect_mode():\n flagPtr = ctypes.cast(ctypes.pythonapi.Py_InteractiveFlag, \n ctypes.POINTER(ctypes.c_int))\n return flagPtr.contents.value > 0 or bool(os.environ.get(\"PYTHONINSPECT\",False))\n\n[Edit] fix typo and also check PYTHONINSPECT (which doesn't set the variable), as pointed out in comments. \n",
"This specifies how to programatically switch your script to interactive mode.\n"
] | [
3,
2,
0
] | [] | [] | [
"python"
] | stackoverflow_0000640389_python.txt |
Q:
Is it pythonic for a function to return multiple values?
In python, you can have a function return multiple values. Here's a contrived example:
def divide(x, y):
quotient = x/y
remainder = x % y
return quotient, remainder
(q, r) = divide(22, 7)
This seems very useful, but it looks like it can also be abused ("Well..function X already computes what we need as an intermediate value. Let's have X return that value also").
When should you draw the line and define a different method?
A:
Absolutely (for the example you provided).
Tuples are first class citizens in Python
There is a builtin function divmod() that does exactly that.
q, r = divmod(x, y) # ((x - x%y)/y, x%y) Invariant: div*y + mod == x
There are other examples: zip, enumerate, dict.items.
for i, e in enumerate([1, 3, 3]):
print "index=%d, element=%s" % (i, e)
# reverse keys and values in a dictionary
d = dict((v, k) for k, v in adict.items()) # or
d = dict(zip(adict.values(), adict.keys()))
BTW, parentheses are not necessary most of the time.
Citation from Python Library Reference:
Tuples may be constructed in a number of ways:
Using a pair of parentheses to denote the empty tuple: ()
Using a trailing comma for a singleton tuple: a, or (a,)
Separating items with commas: a, b, c or (a, b, c)
Using the tuple() built-in: tuple() or tuple(iterable)
Functions should serve single purpose
Therefore they should return a single object. In your case this object is a tuple. Consider tuple as an ad-hoc compound data structure. There are languages where almost every single function returns multiple values (list in Lisp).
Sometimes it is sufficient to return (x, y) instead of Point(x, y).
Named tuples
With the introduction of named tuples in Python 2.6 it is preferable in many cases to return named tuples instead of plain tuples.
>>> import collections
>>> Point = collections.namedtuple('Point', 'x y')
>>> x, y = Point(0, 1)
>>> p = Point(x, y)
>>> x, y, p
(0, 1, Point(x=0, y=1))
>>> p.x, p.y, p[0], p[1]
(0, 1, 0, 1)
>>> for i in p:
... print(i)
...
0
1
A:
Firstly, note that Python allows for the following (no need for the parenthesis):
q, r = divide(22, 7)
Regarding your question, there's no hard and fast rule either way. For simple (and usually contrived) examples, it may seem that it's always possible for a given function to have a single purpose, resulting in a single value. However, when using Python for real-world applications, you quickly run into many cases where returning multiple values is necessary, and results in cleaner code.
So, I'd say do whatever makes sense, and don't try to conform to an artificial convention. Python supports multiple return values, so use it when appropriate.
A:
The example you give is actually a python builtin function, called divmod. So someone, at some point in time, thought that it was pythonic enough to include in the core functionality.
To me, if it makes the code cleaner, it is pythonic. Compare these two code blocks:
seconds = 1234
minutes, seconds = divmod(seconds, 60)
hours, minutes = divmod(minutes, 60)
seconds = 1234
minutes = seconds / 60
seconds = seconds % 60
hours = minutes / 60
minutes = minutes % 60
A:
Yes, returning multiple values (i.e., a tuple) is definitely pythonic. As others have pointed out, there are plenty of examples in the Python standard library, as well as in well-respected Python projects. Two additional comments:
Returning multiple values is sometimes very, very useful. Take, for example, a method that optionally handles an event (returning some value in doing so) and also returns success or failure. This might arise in a chain of responsibility pattern. In other cases, you want to return multiple, closely linked pieces of data---as in the example given. In this setting, returning multiple values is akin to returning a single instance of an anonymous class with several member variables.
Python's handling of method arguments necessitates the ability to directly return multiple values. In C++, for example, method arguments can be passed by reference, so you can assign output values to them, in addition to the formal return value. In Python, arguments are passed "by reference" (but in the sense of Java, not C++). You can't assign new values to method arguments and have it reflected outside method scope. For example:
// C++
void test(int& arg)
{
arg = 1;
}
int foo = 0;
test(foo); // foo is now 1!
Compare with:
# Python
def test(arg):
arg = 1
foo = 0
test(foo) # foo is still 0
A:
It's definitely pythonic. The fact that you can return multiple values from a function saves you the boilerplate you would have in a language like C, where you need to define a struct for every combination of types you return somewhere.
However, if you reach the point where you are returning something crazy like 10 values from a single function, you should seriously consider bundling them in a class because at that point it gets unwieldy.
A:
Returning a tuple is cool. Also note the new namedtuple
which was added in python 2.6 which may make this more palatable for you:
http://docs.python.org/dev/library/collections.html#collections.namedtuple
A:
OT: RSRE's Algol68 has the curious "/:=" operator. eg.
INT quotient:=355, remainder;
remainder := (quotient /:= 113);
Giving a quotient of 3, and a remainder of 16.
Note: typically the value of "(x/:=y)" is discarded as quotient "x" is assigned by reference, but in RSRE's case the returned value is the remainder.
c.f. Integer Arithmetic - Algol68
A:
It's fine to return multiple values using a tuple for simple functions such as divmod. If it makes the code readable, it's Pythonic.
If the return value starts to become confusing, check whether the function is doing too much and split it if it is. If a big tuple is being used like an object, make it an object. Also, consider using named tuples, which will be part of the standard library in Python 2.6.
A:
I'm fairly new to Python, but the tuple technique seems very pythonic to me. However, I've had another idea that may enhance readability. Using a dictionary allows access to the different values by name rather than position. For example:
def divide(x, y):
return {'quotient': x/y, 'remainder':x%y }
answer = divide(22, 7)
print answer['quotient']
print answer['remainder']
| Is it pythonic for a function to return multiple values? | In python, you can have a function return multiple values. Here's a contrived example:
def divide(x, y):
quotient = x/y
remainder = x % y
return quotient, remainder
(q, r) = divide(22, 7)
This seems very useful, but it looks like it can also be abused ("Well..function X already computes what we need as an intermediate value. Let's have X return that value also").
When should you draw the line and define a different method?
| [
"Absolutely (for the example you provided).\nTuples are first class citizens in Python\nThere is a builtin function divmod() that does exactly that.\nq, r = divmod(x, y) # ((x - x%y)/y, x%y) Invariant: div*y + mod == x\n\nThere are other examples: zip, enumerate, dict.items. \nfor i, e in enumerate([1, 3, 3]):\n print \"index=%d, element=%s\" % (i, e)\n\n# reverse keys and values in a dictionary\nd = dict((v, k) for k, v in adict.items()) # or \nd = dict(zip(adict.values(), adict.keys()))\n\nBTW, parentheses are not necessary most of the time.\nCitation from Python Library Reference: \n\nTuples may be constructed in a number of ways:\n\nUsing a pair of parentheses to denote the empty tuple: ()\nUsing a trailing comma for a singleton tuple: a, or (a,)\nSeparating items with commas: a, b, c or (a, b, c)\nUsing the tuple() built-in: tuple() or tuple(iterable)\n\n\nFunctions should serve single purpose\nTherefore they should return a single object. In your case this object is a tuple. Consider tuple as an ad-hoc compound data structure. There are languages where almost every single function returns multiple values (list in Lisp).\nSometimes it is sufficient to return (x, y) instead of Point(x, y).\nNamed tuples\nWith the introduction of named tuples in Python 2.6 it is preferable in many cases to return named tuples instead of plain tuples.\n>>> import collections\n>>> Point = collections.namedtuple('Point', 'x y')\n>>> x, y = Point(0, 1)\n>>> p = Point(x, y)\n>>> x, y, p\n(0, 1, Point(x=0, y=1))\n>>> p.x, p.y, p[0], p[1]\n(0, 1, 0, 1)\n>>> for i in p:\n... print(i)\n...\n0\n1\n\n",
"Firstly, note that Python allows for the following (no need for the parenthesis):\nq, r = divide(22, 7)\n\nRegarding your question, there's no hard and fast rule either way. For simple (and usually contrived) examples, it may seem that it's always possible for a given function to have a single purpose, resulting in a single value. However, when using Python for real-world applications, you quickly run into many cases where returning multiple values is necessary, and results in cleaner code.\nSo, I'd say do whatever makes sense, and don't try to conform to an artificial convention. Python supports multiple return values, so use it when appropriate.\n",
"The example you give is actually a python builtin function, called divmod. So someone, at some point in time, thought that it was pythonic enough to include in the core functionality.\nTo me, if it makes the code cleaner, it is pythonic. Compare these two code blocks:\nseconds = 1234\nminutes, seconds = divmod(seconds, 60)\nhours, minutes = divmod(minutes, 60)\n\nseconds = 1234\nminutes = seconds / 60\nseconds = seconds % 60\nhours = minutes / 60\nminutes = minutes % 60\n\n",
"Yes, returning multiple values (i.e., a tuple) is definitely pythonic. As others have pointed out, there are plenty of examples in the Python standard library, as well as in well-respected Python projects. Two additional comments:\n\nReturning multiple values is sometimes very, very useful. Take, for example, a method that optionally handles an event (returning some value in doing so) and also returns success or failure. This might arise in a chain of responsibility pattern. In other cases, you want to return multiple, closely linked pieces of data---as in the example given. In this setting, returning multiple values is akin to returning a single instance of an anonymous class with several member variables.\nPython's handling of method arguments necessitates the ability to directly return multiple values. In C++, for example, method arguments can be passed by reference, so you can assign output values to them, in addition to the formal return value. In Python, arguments are passed \"by reference\" (but in the sense of Java, not C++). You can't assign new values to method arguments and have it reflected outside method scope. For example:\n// C++\nvoid test(int& arg)\n{\n arg = 1;\n}\n\nint foo = 0;\ntest(foo); // foo is now 1!\n\nCompare with:\n# Python\ndef test(arg):\n arg = 1\n\nfoo = 0\ntest(foo) # foo is still 0\n\n\n",
"It's definitely pythonic. The fact that you can return multiple values from a function the boilerplate you would have in a language like C where you need to define a struct for every combination of types you return somewhere.\nHowever, if you reach the point where you are returning something crazy like 10 values from a single function, you should seriously consider bundling them in a class because at that point it gets unwieldy.\n",
"Returning a tuple is cool. Also note the new namedtuple\nwhich was added in python 2.6 which may make this more palatable for you:\nhttp://docs.python.org/dev/library/collections.html#collections.namedtuple\n",
"OT: RSRE's Algol68 has the curious \"/:=\" operator. eg.\nINT quotient:=355, remainder;\nremainder := (quotient /:= 113);\n\nGiving a quotient of 3, and a remainder of 16. \nNote: typically the value of \"(x/:=y)\" is discarded as quotient \"x\" is assigned by reference, but in RSRE's case the returned value is the remainder.\nc.f. Integer Arithmetic - Algol68\n",
"It's fine to return multiple values using a tuple for simple functions such as divmod. If it makes the code readable, it's Pythonic.\nIf the return value starts to become confusing, check whether the function is doing too much and split it if it is. If a big tuple is being used like an object, make it an object. Also, consider using named tuples, which will be part of the standard library in Python 2.6.\n",
"I'm fairly new to Python, but the tuple technique seems very pythonic to me. However, I've had another idea that may enhance readability. Using a dictionary allows access to the different values by name rather than position. For example:\ndef divide(x, y):\n return {'quotient': x/y, 'remainder':x%y }\n\nanswer = divide(22, 7)\nprint answer['quotient']\nprint answer['remainder']\n\n"
] | [
112,
27,
13,
4,
1,
1,
1,
0,
0
] | [] | [] | [
"function",
"multiple_return_values",
"python",
"return_value"
] | stackoverflow_0000061605_function_multiple_return_values_python_return_value.txt |
Q:
How do I upload a file with mod_python?
I want to create a simple file upload form and I must be completely incapable. I've read docs and tutorials, but for some reason, I'm not getting the submitted form data. I wrote the smallest amount of code I could to test and it still isn't working. Any ideas what's wrong?
def index():
html = '''
<html>
<body>
<form id="fileUpload" action="./result" method="post">
<input type="file" id="file"/>
<input type="submit" value="Upload"/>
</form>
</body>
</html>
'''
return html
def result(req):
try: tmpfile = req.form['file']
except:
return "no file!"
A:
try putting enctype="multipart/form-data" in your form tag. Your mistake is not really mod_python related.
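For reference, a minimal sketch of the corrected pair of handlers. Note that besides the enctype, the file input also needs a name attribute, because browsers only submit named fields and req.form is keyed by name, not id:
def index():
    return '''
    <html>
    <body>
    <form id="fileUpload" action="./result" method="post"
          enctype="multipart/form-data">
    <input type="file" name="file" id="file"/>
    <input type="submit" value="Upload"/>
    </form>
    </body>
    </html>
    '''

def result(req):
    fileitem = req.form['file']
    if fileitem.filename:  # a file was actually selected
        data = fileitem.file.read()
        return "received %d bytes from %s" % (len(data), fileitem.filename)
    return "no file!"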
| How do I upload a file with mod_python? | I want to create a simple file upload form and I must be completely incapable. I've read docs and tutorials,but for some reason, I'm not getting the submitted form data. I wrote the smallest amount of code I could to test and it still isn't working. Any ideas what's wrong?
def index():
html = '''
<html>
<body>
<form id="fileUpload" action="./result" method="post">
<input type="file" id="file"/>
<input type="submit" value="Upload"/>
</form>
</body>
</html>
'''
return html
def result(req):
try: tmpfile = req.form['file']
except:
return "no file!"
| [
"try putting enctype=\"multipart/form-data\" in your form tag. Your mistake is not really mod_python related.\n"
] | [
1
] | [] | [] | [
"mod_python",
"python",
"upload"
] | stackoverflow_0000640310_mod_python_python_upload.txt |
Q:
Python - How to calculate equal parts of two dictionaries?
I have a problem with combining or calculating the common/equal part of these two dictionaries. In my dictionaries, values are lists:
d1 = {0:['11','18','25','38'],
1:['11','18','25','38'],
2:['11','18','25','38'],
3:['11','18','25','38']}
d2 = {0:['05','08','11','13','16','25','34','38','40', '43'],
1:['05', '08', '09','13','15','20','32','36','38', '40','41'],
2:['02', '08', '11', '13', '18', '20', '22','33','36','39'],
3:['06', '11', '12', '25', '26', '27', '28', '30', '31', '37']}
I'd like to check "d2" and know if there are numbers from "d1". If there are some, I'd like to update one of them with new data or receive a 3rd dictionary "d3" with only the values that are identical/equal in both "d1" and "d2" like:
d3 = {0:['11','25','38'], 1:['38'], 2:['11','18'], 3:['11','25']}
Can anyone help me with this?
My fault I forgot to be more specific. I'm looking for a solution in Python.
A:
Assuming this is Python, you want:
dict((x, set(y) & set(d1.get(x, ()))) for (x, y) in d2.iteritems())
to generate the resulting dictionary "d3".
Python 3.0+ version
>>> d3 = {k: list(set(d1.get(k,[])).intersection(v)) for k, v in d2.items()}
>>> d3
{0: ['11', '25', '38'], 1: ['38'], 2: ['11', '18'], 3: ['11', '25']}
The above version (as well as Python 2.x version) allows empty intersections therefore additional filtering is required in general case:
>>> d3 = {k: v for k, v in d3.items() if v}
Combining the above in one pass:
d3 = {}
for k, v in d2.items():
# find common elements for d1 & d2
v3 = set(d1.get(k,[])).intersection(v)
if v3: # whether there are common elements
d3[k] = list(v3)
[Edit: I made this post community wiki so that people can improve it if desired. I concede it might be a little hard to read if you're not used to reading this sort of thing in Python.]
A:
Offering a more readable solution:
d3= {}
for common_key in set(d1) & set(d2):
common_values= set(d1[common_key]) & set(d2[common_key])
d3[common_key]= list(common_values)
EDIT after suggestion:
If you want only keys having at least one common value item:
d3= {}
for common_key in set(d1) & set(d2):
common_values= set(d1[common_key]) & set(d2[common_key])
if common_values:
d3[common_key]= list(common_values)
You could keep the d1 and d2 values as sets instead of lists, if order and duplicates are not important.
A:
The problem boils down to determining the common elements between the two entries. (To obtain the result for all entries, just enclose the code in a loop over all of them.) Furthermore, it looks like each entry is a set (i.e. it has no duplicate elements). Therefore, all you need to do is find the set intersection between these elements. Many languages offer a method or function for doing this; for instance in C++ use the set container and the set_intersection function. This is a lot more efficient than comparing each element in one set against the other, as others have proposed.
A:
If we can assume d1 and d2 have the same keys:
d3 = {}
for k in d1.keys():
intersection = set(d1[k]) & set(d2[k])
d3[k] = [x for x in intersection]
Otherwise, if we can't assume that, then it is a little messier:
d3 = {}
for k in set(d1.keys() + d2.keys()):
intersection = set(d1.get(k, [])) & set(d2.get(k, []))
d3[k] = [x for x in intersection]
Edit: New version taking the comments into account. This one only checks for keys that d1 and d2 have in common, which is what the poster seems to be asking.
d3 = {}
for k in set(d1.keys()) & set(d2.keys()):
intersection = set(d1[k]) & set(d2[k])
d3[k] = list(intersection)
| Python - How to calculate equal parts of two dictionaries? | I have a problem with combining or calculating common/equal part of these two dictionaries. In my dictionaries, values are lists:
d1 = {0:['11','18','25','38'],
1:['11','18','25','38'],
2:['11','18','25','38'],
3:['11','18','25','38']}
d2 = {0:['05','08','11','13','16','25','34','38','40', '43'],
1:['05', '08', '09','13','15','20','32','36','38', '40','41'],
2:['02', '08', '11', '13', '18', '20', '22','33','36','39'],
3:['06', '11', '12', '25', '26', '27', '28', '30', '31', '37']}
I'd like to check "d2" and know if there are numbers from "d1". If there are some, I'd like to update one of them with new data or receive 3rd dictionary "d3" with only the values that are identical/equal in both "d1" and "d2" like:
d3 = {0:['11','25','38'], 1:['38'], 2:['11','18'], 3:['11','25']}
Can anyone help me with this?
My fault I forgot to be more specific. I'm looking for a solution in Python.
| [
"Assuming this is Python, you want:\ndict((x, set(y) & set(d1.get(x, ()))) for (x, y) in d2.iteritems())\n\nto generate the resulting dictionary \"d3\".\nPython 3.0+ version\n>>> d3 = {k: list(set(d1.get(k,[])).intersection(v)) for k, v in d2.items()}\n{0: ['11', '25', '38'], 1: ['38'], 2: ['11', '18'], 3: ['11', '25']}\n\nThe above version (as well as Python 2.x version) allows empty intersections therefore additional filtering is required in general case:\n>>> d3 = {k: v for k, v in d3.items() if v}\n\nCombining the above in one pass:\nd3 = {}\nfor k, v in d2.items():\n # find common elements for d1 & d2\n v3 = set(d1.get(k,[])).intersection(v)\n if v3: # whether there are common elements\n d3[k] = list(v3) \n\n\n[Edit: I made this post community wiki so that people can improve it if desired. I concede it might be a little hard to read if you're not used to reading this sort of thing in Python.]\n",
"Offering a more readable solution:\nd3= {}\nfor common_key in set(d1) & set(d2):\n common_values= set(d1[common_key]) & set(d2[common_key])\n d3[common_key]= list(common_values)\n\nEDIT after suggestion:\nIf you want only keys having at least one common value item:\nd3= {}\nfor common_key in set(d1) & set(d2):\n common_values= set(d1[common_key]) & set(d2[common_key])\n if common_values:\n d3[common_key]= list(common_values)\n\nYou could keep the d1 and d2 values as sets instead of lists, if order and duplicates are not important.\n",
"The problem boils down to determining the common elements between the two entries. (To obtain the result for all entries, just enclose the code in a loop over all of them.) Furthermore, it looks like each entry is a set (i.e. it has not duplicate elements). Therefore, all you need to do is find the set intersection between these elements. Many languages offer a method or function for doing this; for instance in C++ use the set container and the set_intersection function. This is a lot more efficient than comparing each element in one set against the other, as others have proposed.\n",
"If we can assume d1 and d2 have the same keys:\nd3 = {}\nfor k in d1.keys():\n intersection = set(d1[k]) & set(d2[k])\n d3[k] = [x for x in intersection]\n\nOtherwise, if we can't assume that, then it is a little messier:\nd3 = {}\nfor k in set(d1.keys() + d2.keys()):\n intersection = set(d1.get(k, [])) & set(d2.get(k, []))\n d3[k] = [x for x in intersection]\n\nEdit: New version taking the comments into account. This one only checks for keys that d1 and d2 have in common, which is what the poster seems to be asking.\nd3 = {}\nfor k in set(d1.keys()) & set(d2.keys()):\n intersection = set(d1[k]) & set(d2[k])\n d3[k] = list(intersection)\n\n"
] | [
7,
4,
1,
1
] | [
"in pseudocode:\nDictionary d3 = new Dictionary()\nfor (i = 0 to min(d1.size(), d2.size()))\n{\n element shared = getSharedElements(d1[i], d2[i]);\n d3.store(i, shared);\n}\n\nfunction getsharedElements(array e1, array e2)\n{\n element e3 = new element();\n for (int i = 0 to e1.length)\n {\n if (e2.contains(e1[i]))\n {\n e3.add[e1[i]];\n }\n }\n return e3;\n}\n\n"
] | [
-1
] | [
"dictionary",
"list",
"merge",
"python"
] | stackoverflow_0000638360_dictionary_list_merge_python.txt |
Q:
Django, Python Loop Logic Problem
This works, partially. More information may be needed, however, I thought I would post to get advice on anything obvious that might be wrong here.
The problem is that if activity.get_cost() returns a False value, the function seems to exit entirely, returning None.
What I'd like it to do, of course, is accumulate cost Decimal values in the costs = [] and return their sum. Simple, I would have thought... but my novice Python skills are apparently missing something.
More information provided on request. Thank you.
def get_jobrecord_cost(self):
costs = []
for activity in self.activity_set.all():
cost = activity.get_cost()
if cost:
costs.append(cost)
if len(costs):
return sum(costs)
else:
return False
A:
I notice you're returning False if all the costs were None; I don't know if there's a specific reason for that, but it does make it a little bit harder to write. If that's not a requirement, you could write it like this:
def get_jobrecord_cost(self):
costs = [activity.get_cost() or 0 for activity in self.activity_set.all()]
return sum(costs)
A:
I think you can simplify this with:
def get_jobrecord_cost(self):
costs = 0
for activity in self.activity_set.all():
cost = activity.get_cost()
if cost:
costs += cost
return costs
A:
def get_jobrecord_cost(self):
    return sum(activity.get_cost() or 0 for activity in self.activity_set.all())
Depending on how much data you're dealing with, this version is just a bit more efficient than DNS's because it uses a generator expression and doesn't require loading up a whole list into memory. It's functionally equivalent to grieve's except the looping happens in C. Note that this doesn't necessarily mean this is better. This approach is obviously more dense and can be less readable.
| Django, Python Loop Logic Problem | This works, partially. More information may be needed, however, I thought I would post to get advice on anything obvious that might be wrong here.
The problem is that if activity.get_cost() returns a False value, the function seems to exit entirely, returning None.
What I'd like it to do, of course, is accumulate cost Decimal values in the costs = [] and return their sum. Simple, I would have thought... but my novice Python skills are apparently missing something.
More information provided on request. Thank you.
def get_jobrecord_cost(self):
costs = []
for activity in self.activity_set.all():
cost = activity.get_cost()
if cost:
costs.append(cost)
if len(costs):
return sum(costs)
else:
return False
| [
"I notice you're returning False if all the costs were None; I don't know if there's a specific reason for that, but it does make it a little bit harder to write. If that's not a requirement, you could write it like this:\ndef get_jobrecord_cost(self):\n costs = [activity.get_cost() or 0 for activity in self.activity_set.all()]\n return sum(costs)\n\n",
"I think you can simplify this with:\ndef get_jobrecord_cost(self):\n costs = 0\n for activity in self.activity_set.all():\n cost = activity.get_cost()\n if cost:\n costs += cost\n\n return costs\n\n",
"def get_jobrecord_cost(self):\n return sum((activity.get_cost() or 0 for activity in activity_set.all()) or 0)\n\nDepending on how much data you're dealing with, this version is just a bit more efficient than DNS's because it uses a generator comprehension and doesn't require loading up a whole list into memory. It's functionally equivalent to grieve's except the looping happens in C. Note that this doesn't necessarily mean this is better. This approach is obviously more dense and can be less readable.\n"
] | [
3,
2,
1
] | [] | [] | [
"django_models",
"python"
] | stackoverflow_0000641145_django_models_python.txt |
Q:
Writing to the serial port in Vista from Python
How do I write to the serial port in Vista from Python? The termios package only seems to support POSIX.
A:
pyserial does the trick; you'll need the Python extensions for Windows (pywin32) for it to work on Windows.
A:
Seems like it wasn't any harder than this using pyserial:
import serial
ser = serial.Serial(0) # open first serial port with 9600,8,N,1
print ser.portstr # check which port was really used
ser.write('hello')
ser.close()
| Writing to the serial port in Vista from Python | How do I write to the serial port in Vista from Python? The termios package only seems to support POSIX.
| [
"pyserial does the trick, you'll need python extensions for windows for it to work in windows.\n",
"Seems like it wasn't any harder than this using pyserial: \nimport serial\n\nser = serial.Serial(0) # open first serial port with 9600,8,N,1\nprint ser.portstr # check which port was really used\nser.write('hello')\nser.close()\n\n"
] | [
9,
7
] | [] | [] | [
"python",
"windows"
] | stackoverflow_0000640802_python_windows.txt |
Q:
Python for mathematics students?
I need to deliver one and half hour seminar on programming for students at the department of mathematics.
I have chosen python as language.
What should be the content of my presentation?
What are good resources available?
What is the necessity of programming for maths students?
How will knowledge of programming help them?
Thank you !!!
NOTE: I know here is one post but it doesn't solve my problem.
A:
Do the getting started guide to scipy?
http://www.scipy.org/Getting_Started
A:
Sage: http://www.sagemath.org/
A:
Assuming that these students are new to programming (which is quite likely for math students), you'll want to give them a basic introduction to programming (what a function is, what a variable is, how each of these differs from functions and variables in math, etc.).
Show them some example programs, with a view to things that will be helpful for math: numerical methods, matrix multiplication, etc.
Wherever possible, wow them so that they'll get excited about using computers for their own projects.
Some Python/Math resources
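For instance, a minimal Newton's-method square root makes a nice first demo. A sketch only: the stopping tolerance is an arbitrary choice and it assumes a > 0.
def newton_sqrt(a, tolerance=1e-12):
    # Iterate x <- (x + a/x) / 2 until x*x is close enough to a.
    x = float(a)
    while abs(x * x - a) > tolerance:
        x = (x + a / x) / 2
    return x

print newton_sqrt(2)  # 1.41421356237...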
A:
I would bring up using Python as a free & open source option to replace/augment expensive packages like Matlab, IDL, etc via:
scipy - fft's,
ipython - "shell"/debugger
matplotlib - 2d graphing
MayaVi - 3d graphing/visualization
This video may be helpful.
A:
You are going to have to decide what you want to show them. If you want to show them how using a computer can be a useful tool in mathematics, show them Sage and how you can perform numerical methods with it to get answers to hard questions. Then manipulate some algebraic formulas with it. Maybe show how it can whip through hard integrals and derivatives without sweating. They will be nearing the end of some of their first calculus courses, after all.
None of this shows why they need to know how to program, of course. This just shows how useful other people's programming is for them to use. While you do have the full power of Python in Sage, the reality is the odd "for loop" and some "if statements" are really all of the programming most mathematicians will do with Sage most of the time (though there is a significant minority who will do a lot more). If you want to go down this road I would suggest you try to get your hands on one of the Experimental Mathematics books (http://www.experimentalmath.info/). These are the guys who (amongst many other interesting results) came up with BBP-type formulas, which give a way to find arbitrary (hexadecimal) digits of pi. They mostly use Maple and Mathematica, but most of this work translates to Sage.
I would strongly suggest you don't show them how to actually implement numerical methods themselves. Very few mathematicians are writing programs to solve numerical problems. Most just plug their programs into other people's programs. So I don't think showing how they could implement these methods themselves, if only they knew how to program, will excite anyone.
If this were me I think I would probably give a seminar building a simple game plugin for cgsuite (http://cgsuite.sourceforge.net/). I recognize that this is Java and not Python, but there are a lot of advantages to this approach. First, young mathematicians always get excited by combinatorial game theory. You are fundamentally showing them how they can use math to always win at certain games. It's like you are giving them a super power.
Second, you are implementing the rules of a game in a program. Game rules are great ways to learn programming idioms because they translate so directly into programming concepts.
And finally, you end up with a tool that can play your game perfectly. 90 minutes is a long time for a seminar as far as I'm concerned. If you can end on a bang, like with 10 minutes of playing a game against a computer, they will leave excited instead of bored and drained.
A:
I would recommend solving a few different kinds of problems from Project Euler in Python and having a discussion about the solutions, how they could have been done differently to be more efficient, etc., as part of the seminar. Python is a very elegant language for solving mathematical problems and should be easier for mathematics students to understand than most, so I think you made a good choice there.
A:
I'm assuming this is for Freshmen (only because most higher level Math students will likely know how to program)? If so, do something that is fun and relevant. Go through the basics, but maybe walk them through the logic / basic framework for a Game (which are heavily math oriented) or Python-Based Graphing Calculator.
If you want to get them real geeked though, show them Mathematica. I know, it's not what you selected ... but when I was a Sophomore Math major and first saw what you could do with it, I was in love.
A:
Python will work well, but GNU Octave may be better.
A:
What should be the content of my presentation?
The concept of functional programming with Python.
Some introduction to third party modules like NumPy and SciPy.
What are good resources available?
Hans Petter Langtangen, Python Scripting for Computational Science, Springer
What is the necessity of programming for maths students?
None. Usually maths students will have no problem in programming, since most programming languages were developed to solve maths problems.
How will knowledge of programming help them?
The computer was originally developed as a tool for scientists, to help them solve scientific/mathematical problems efficiently in a very short time compared to a human.
A:
http://www.sagemath.org
In our wiki there is a collection of talks; they may help you! http://wiki.sagemath.org/Talks
Also be aware, that Sage contains NumPy, SciPy and SymPy. Therefore all information about these three python libraries hold for Sage.
| Python for mathematics students? | I need to deliver one and half hour seminar on programming for students at the department of mathematics.
I have chosen python as language.
What should be the content of my presentation?
What are good resources available?
What is the necessity of programming for maths students?
How will knowledge of programming help them?
Thank you !!!
NOTE: I know here is one post but it doesn't solve my problem.
| [
"Do the getting started guide to scipy?\nhttp://www.scipy.org/Getting_Started\n",
"Sage: http://www.sagemath.org/\n",
"Assuming that these students are new to programming (which is quite likely for math students), you'll want to give them a basic introduction to programming (what a function is, what a variable is, how each of these differ from functions and variables in math, etc).\nShow them some example programs, with a view to things that will be helpful for math: numerical methods, matrix multiplication, etc.\nWherever possible, wow them so that they'll get excited about using computers for their own projects.\nSome Python/Math resources\n",
"I would bring up using Python as a free & open source option to replace/augment expensive packages like Matlab, IDL, etc via:\n\nscipy - fft's, \nipython - \"shell\"/debugger\nmatplotlib - 2d graphing\nMayaVi - 3d graphing/visualization\n\nThis video may be helpful.\n",
"You are going to have to decide what you want to show them. If you want to show them how to using a computer can be a useful tool in mathematics show them sage and how you can perform numerical methods with it to get answers to hard questions. Then manipulate some algebraic formulas with it. Maybe show how it can whip through hard integrals and derivatives without sweating. They will be nearing the end of some of their first calulus courses after all.\nNone of this displays why they need to know how to program of course. This just shows how useful other people's programming is for them to use. While you do have the full power of python in sage the reality is the odd \"for loop\" and some \"if statements\" is really all of the programming most mathematicians will do with sage most of the time (though there is a significant minority who will do a lot more). If you want to go down this road I would suggest you try to get your hands on one of the Experimental mathematics books(http://www.experimentalmath.info/). These are the guys who (amongst many other interesting results) came up with BBP numbers: which is the way to find arbitrary digits of pi. They mostly use maple and mathematica but most of this work translates to sage.\nI would strongly suggest you don't show them how to actually implement numerical methods themselves. Very few mathematicians are writing programs to solve numerical problems. Most just plug their programs into other people's programs. So I don't think showing how they could implement these methods themselves, if only they knew how to program, will excite anyone.\nIf this were me I think I would probably give a seminar building a simple game plugin for cgsuite (http://cgsuite.sourceforge.net/). I recognize that this is java and not python but their are a lot of advantages to this approach. First young mathematicians always get excited by combinatorial game theory. You are fundamentally showing them how they can use math to always win at certain games. It's like you are giving them a super power.\nSecond, you are implementing the rules of a game in a program. Game rules are great ways to learn programming idioms because they translate so directly into programming concepts.\nAnd finally, you end up with a tool that can play your game perfectly. 90 minutes is a long time for a seminar as far as I'm concerned. If you can end on a bang, like with 10 minutes of playing a game against a computer, they will leave excited instead of bored and drained.\n",
"I would recommend solving a few different kinds of problems from Project Euler in Python and having a discussion about the solutions, how they could have been done differently to be more efficient, etc. as part of the seminar. Python is a very elegant language for solving mathematical problems and should be one of those easier understood than most by mathematics students, so I think you made a good choice there. \n",
"I'm assuming this is for Freshmen (only because most higher level Math students will likely know how to program)? If so, do something that is fun and relevant. Go through the basics, but maybe walk them through the logic / basic framework for a Game (which are heavily math oriented) or Python-Based Graphing Calculator. \nIf you want to get them real geeked though, show them Mathematica. I know, it's not what you selected ... but when I was a Sophomore Math major and first saw what you could do with it, I was in love.\n",
"Python will work well, but GNU Octave may be better.\n",
"What should be content of my presentation ?\n\nThe concept of functional programming with Python.\n Some introduction to third party modules like NumPy and SciPy.\n\nWhat are good resources available ?\n\nHans Petter Langtangen, Python Scripting for Computational Science, Springer\n\nWhat is necessity of programming for maths students?\n\nNone. Usually maths students will have no problem in programming, since most programming language were developed to solve maths problem.\n\nHow will knowledge of programming will help them?\n\nThe computer was earlier developed as a tool for scientist to help them solve scientific/mathematics problems efficiently in a very short time, as compared to human.\n\n",
"http://www.sagemath.org\nIn our wiki is a collection of talks, they may help you! http://wiki.sagemath.org/Talks\nAlso be aware, that Sage contains NumPy, SciPy and SymPy. Therefore all information about these three python libraries hold for Sage. \n"
] | [
9,
7,
4,
2,
1,
1,
0,
0,
0,
0
] | [] | [] | [
"math",
"python"
] | stackoverflow_0000593685_math_python.txt |
Q:
Formatted text in GAE
Google app engine question: What is a good way to take formatted text (it does not have to be rich text) from the user and then store it in a Text or Blob property in the datastore? Mainly what I'm looking for is for it to store newlines and strings of spaces, so that the text comes back looking the same as when it was submitted.
A:
The text will always "come back" the same as how you put it in. You will lose some formatting when rendering to HTML (as you noticed with line endings and spaces). One solution might be to render the text into a <pre> element (which implies preformatted text).
<pre>
This text will
be formatted correctly
</pre>
Another way would be to convert your format into HTML which is well formatted. Typically a wiki might do this: store the text as markup, and render it to HTML. It's probably exactly what this site is doing with its posts etc. If you do choose this route, I can recommend the creoleparser library; it works well on App Engine.
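A minimal sketch of the first approach: escape the stored text, then wrap it in <pre> at render time so newlines and runs of spaces survive (cgi.escape is used here; any HTML escaping works).
import cgi

def render_preformatted(text):
    # Escape HTML metacharacters, then wrap in <pre> so the browser
    # keeps newlines and runs of spaces exactly as submitted.
    return u'<pre>%s</pre>' % cgi.escape(text)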
A:
Other commonly used simplified markups include Textile and Markdown.
| Formatted text in GAE | Google app engine question: What is a good way to take formatted text (it does not have to be rich text) from the user and then store it in a Text or Blob property in the datastore? Mainly what I'm looking for is for it to store newlines and strings of spaces, so that the text comes back looking the same as when it was submitted.
| [
"The text will always \"come back\" the same as how you put it in. You will lose some formatting rendering to HTML (as you noticed line endings and spaces). One solution might be to render the text into a <pre> element (which implies preformatted text).\n<pre>\nThis text will\n\nbe formatted correctly\n</pre>\n\nAnother way would be to convert your format into HTML which is well formatted. Typically a Wiki might do this: store the text as markup, and render it to HTML. It's probably exactly what this site is doing with it's posts etc. If you do choose this route, I can recommend the creoleparser library, and it works well on Appengine.\n",
"Other commonly used simplified markups include Textile and Markdown.\n"
] | [
2,
2
] | [] | [] | [
"google_app_engine",
"google_cloud_datastore",
"python"
] | stackoverflow_0000640733_google_app_engine_google_cloud_datastore_python.txt |
Q:
Function Parser with RegEx in Python
I have some source code in Fortran (almost irrelevant) and I want to parse the function names and arguments.
eg using
(\w+)\([^\(\)]+\)
with
a(b(1 + 2 * 2), c(3,4))
I get the following: (as expected)
b, 1 + 2 * 2
c, 3,4
where I would need
a, b(1 + 2 * 2), c(3,4)
b, 1 + 2 * 2
c, 3,4
Any suggestions?
Thanks for your time...
A:
I don't think this is a job for regular expressions... they can't really handle nested patterns.
This is because regexes are compiled into FSMs (Finite State Machines). In order to parse arbitrarily nested expressions, you can't use a FSM, because you need infinitely many states to keep track of the arbitrary nesting. Also see this SO thread.
A:
This is not a regular language -- you need to be able to recurse on a set of allowed rules. Look at pyparsing to do simple CFG (Context-Free Grammar) parsing via readable specifications.
It's been a while since I've written out CFGs, and I'm probably rusty, so I'll refer you to the Python EBNF to get an idea of how you can construct one for a subset of a language syntax.
Edit: If the example will always be simple, you can code a small state machine class/function that iterates over the tokenized input string, as @Devin Jeanpierre suggests.
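A minimal sketch of that idea, assuming \w+ function names and balanced parentheses: a regex finds each name that is directly followed by '(', and a depth counter finds the matching close. On the question's example it yields exactly the three pairs asked for:
import re

def extract_calls(s):
    # Yield (name, args) for every call in s, outermost first.
    for m in re.finditer(r'\w+(?=\()', s):
        depth = 0
        for j in range(m.end(), len(s)):
            if s[j] == '(':
                depth += 1
            elif s[j] == ')':
                depth -= 1
                if depth == 0:
                    yield m.group(), s[m.end() + 1:j]
                    break

for name, args in extract_calls('a(b(1 + 2 * 2), c(3,4))'):
    print name, '->', args   # a -> b(1 + 2 * 2), c(3,4)   b -> 1 + 2 * 2   c -> 3,4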
A:
It can be done with regular expressions-- use them to tokenize the string, and work with the tokens. i.e. see re.Scanner. Alternatively, just use pyparsing.
A:
You can take a look at PLY (Python Lex-Yacc), it's (in my opinion) very simple to use and well documented, and it comes with a calculator example which could be a good starting point.
A:
You can't do this with regular expressions only; the structure is recursive. You should first match the outermost function and its arguments, print the name of the function, then do the same (match the function name, then its arguments) with each of its arguments. Regexes alone are not enough.
| Function Parser with RegEx in Python | I have some source code in Fortran (almost irrelevant) and I want to parse the function names and arguments.
eg using
(\w+)\([^\(\)]+\)
with
a(b(1 + 2 * 2), c(3,4))
I get the following: (as expected)
b, 1 + 2 * 2
c, 3,4
where I would need
a, b(1 + 2 * 2), c(3,4)
b, 1 + 2 * 2
c, 3,4
Any suggestions?
Thanks for your time...
| [
"I don't think this is a job for regular expressions... they can't really handle nested patterns.\nThis is because regexes are compiled into FSMs (Finite State Machines). In order to parse arbitrarily nested expressions, you can't use a FSM, because you need infinitely many states to keep track of the arbitrary nesting. Also see this SO thread.\n",
"This is a nonlinear grammar -- you need to be able to recurse on a set of allowed rules. Look at pyparsing to do simple CFG (Context Free Grammar) parsing via readable specifications.\nIt's been a while since I've written out CFGs, and I'm probably rusty, so I'll refer you to the Python EBNF to get an idea of how you can construct one for a subset of a language syntax.\nEdit: If the example will always be simple, you can code a small state machine class/function that iterates over the tokenized input string, as @Devin Jeanpierre suggests.\n",
"It can be done with regular expressions-- use them to tokenize the string, and work with the tokens. i.e. see re.Scanner. Alternatively, just use pyparsing.\n",
"You can take a look at PLY (Python Lex-Yacc), it's (in my opinion) very simple to use and well documented, and it comes with a calculator example which could be a good starting point. \n",
"You can't do this with regular expression only. It's sort of recursive. You should match first the most external function and its arguments, print the name of the function, then do the same (match the function name, then its arguments) with all its arguments. Regex alone are not enough.\n"
] | [
2,
2,
2,
2,
1
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0000637773_python_regex.txt |
Q:
Scrape a dynamic website
What is the best method to scrape a dynamic website where most of the content is generated by what appears to be ajax requests? I have previous experience with a Mechanize, BeautifulSoup, and python combo, but I am up for something new.
--Edit--
For more detail: I'm trying to scrape the CNN primary database. There is a wealth of information there, but there doesn't appear to be an api.
A:
This is a difficult problem because you either have to reverse engineer the javascript on a per-site basis, or implement a javascript engine and run the scripts (which has its own difficulties and pitfalls).
It's a heavyweight solution, but I've seen people doing this with Greasemonkey scripts - allow Firefox to render everything and run the JavaScript, and then scrape the elements. You can even initiate user actions on the page if needed.
-Adam
A:
The best solution that I found was to use Firebug to monitor XmlHttpRequests, and then to use a script to resend them.
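A minimal sketch of that resend step, assuming the endpoint spotted in Firebug returns JSON -- the URL, its query parameters and the 'results' key below are hypothetical placeholders for whatever the site actually uses:
import json
import urllib2

url = 'http://example.com/primaries/results.json?state=IA'   # hypothetical XHR URL
req = urllib2.Request(url, headers={'X-Requested-With': 'XMLHttpRequest'})
data = json.load(urllib2.urlopen(req))
for row in data['results']:   # actual structure depends on the endpoint
    print row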
A:
Selenium IDE, a tool for testing, is something I've used for a lot of screen-scraping. There are a few things it doesn't handle well (Javascript window.alert() and popup windows in general), but it does its work on a page by actually triggering the click events and typing into the text boxes. Because the IDE portion runs in Firefox, you don't have to do all of the management of sessions, etc. as Firefox takes care of it. The IDE records and plays tests back.
It also exports C#, PHP, Java, etc. code to build compiled tests/scrapers that are executed on the Selenium server. I've done that for more than a few of my Selenium scripts, which makes things like storing the scraped data in a database much easier.
Scripts are fairly simple to write and alter, being made up of things like ("clickAndWait","submitButton"). Worth a look given what you're describing.
A:
Adam Davis's advice is solid.
I would additionally suggest that you try to "reverse-engineer" what the JavaScript is doing, and instead of trying to scrape the page, you issue the HTTP requests that the JavaScript is issuing and interpret the results yourself (most likely in JSON format, nice and easy to parse). This strategy could be anything from trivial to a total nightmare, depending on the complexity of the JavaScript.
The best possibility, of course, would be to convince the website's maintainers to implement a developer-friendly API. All the cool kids are doing it these days 8-) Of course, they might not want their data scraped in an automated fashion... in which case you can expect a cat-and-mouse game of making their page increasingly difficult to scrape :-(
A:
There is a bit of a learning curve, but tools like Pamie (Python) or Watir (Ruby) will let you latch into the IE web browser and get at the elements. This turns out to be easier than Mechanize and other HTTP-level tools since you don't have to emulate the browser; you just ask the browser for the HTML elements. And it's going to be way easier than reverse-engineering the JavaScript/Ajax calls. If needed you can also use tools like Beautiful Soup in conjunction with Pamie.
A:
Probably the easiest way is to use the IE WebBrowser control in C# (or any other language). You have access to all the stuff inside the browser out of the box, plus you don't need to care about cookies, SSL and so on.
A:
I found the IE WebBrowser control has all kinds of quirks, and the workarounds would justify some high-quality software that takes care of all those inconsistencies, is layered around the shdocvw.dll API and MSHTML, and provides a framework.
A:
This seems like a pretty common problem. I wonder why no one has developed a programmatic browser? I'm envisioning a Firefox you can call from the command line with a URL as an argument; it will load the page, run all of the initial page-load JS events and save the resulting file.
I mean, Firefox and other browsers already do this; why can't we simply strip off the UI stuff?
| Scrape a dynamic website | What is the best method to scrape a dynamic website where most of the content is generated by what appears to be ajax requests? I have previous experience with a Mechanize, BeautifulSoup, and python combo, but I am up for something new.
--Edit--
For more detail: I'm trying to scrape the CNN primary database. There is a wealth of information there, but there doesn't appear to be an api.
| [
"This is a difficult problem because you either have to reverse engineer the javascript on a per-site basis, or implement a javascript engine and run the scripts (which has its own difficulties and pitfalls).\nIt's a heavy weight solution, but I've seen people doing this with greasemonkey scripts - allow Firefox to render everything and run the javascript, and then scrape the elements. You can even initiate user actions on the page if needed.\n-Adam\n",
"The best solution that I found was to use Firebug to monitor XmlHttpRequests, and then to use a script to resend them.\n",
"Selenium IDE, a tool for testing, is something I've used for a lot of screen-scraping. There are a few things it doesn't handle well (Javascript window.alert() and popup windows in general), but it does its work on a page by actually triggering the click events and typing into the text boxes. Because the IDE portion runs in Firefox, you don't have to do all of the management of sessions, etc. as Firefox takes care of it. The IDE records and plays tests back.\nIt also exports C#, PHP, Java, etc. code to build compiled tests/scrapers that are executed on the Selenium server. I've done that for more than a few of my Selenium scripts, which makes things like storing the scraped data in a database much easier.\nScripts are fairly simple to write and alter, being made up of things like (\"clickAndWait\",\"submitButton\"). Worth a look given what you're describing.\n",
"Adam Davis's advice is solid.\nI would additionally suggest that you try to \"reverse-engineer\" what the JavaScript is doing, and instead of trying to scrape the page, you issue the HTTP requests that the JavaScript is issuing and interpret the results yourself (most likely in JSON format, nice and easy to parse). This strategy could be anything from trivial to a total nightmare, depending on the complexity of the JavaScript.\nThe best possibility, of course, would be to convince the website's maintainers to implement a developer-friendly API. All the cool kids are doing it these days 8-) Of course, they might not want their data scraped in an automated fashion... in which case you can expect a cat-and-mouse game of making their page increasingly difficult to scrape :-(\n",
"There is a bit of a learning curve, but tools like Pamie (Python) or Watir (Ruby) will let you latch into the IE web browser and get at the elements. This turns out to be easier than Mechanize and other HTTP level tools since you don't have to emulate the browser, you just ask the browser for the html elements. And it's going to be way easier than reverse engineering the Javascript/Ajax calls. If needed you can also use tools like beatiful soup in conjunction with Pamie.\n",
"Probably the easiest way is to use IE webbrowser control in C# (or any other language). You have access to all the stuff inside browser out of the box + you dont need to care about cookies, SSL and so on.\n",
"i found the IE Webbrowser control have all kinds of quirks and workarounds that would justify some high quality software to take care of all those inconsistencies, layered around the shvwdoc.dll api and mshtml and provide a framework. \n",
"This seems like it's a pretty common problem. I wonder why someone hasn't anyone developed a programmatic browser? I'm envisioning a Firefox you can call from the command line with a URL as an argument and it will load the page, run all of the initial page load JS events and save the resulting file.\nI mean Firefox, and other browsers already do this, why can't we simply strip off the UI stuff? \n"
] | [
7,
7,
4,
3,
2,
1,
1,
0
] | [] | [] | [
"ajax",
"beautifulsoup",
"python",
"screen_scraping"
] | stackoverflow_0000206855_ajax_beautifulsoup_python_screen_scraping.txt |
Q:
Python script at Visual C++ 2005 build step not spawning other processes
I have the following post-build step in a VC++ 2005 project that calls a Python 2.5.1 script:
postbuild.py
postbuild.py does:
import os
os.system('cd') # cd is just a test, could be anything
The process never starts, and it's the same with any other process I try, even using subprocess.call or Popen instead of os.system.
Does anyone know about anything related to problems like this in Python 2.5.1 or in build events in Visual C++ 2005 SP1?
A:
Solved. For some reason, using "postbuild.py" as the post-build step prevents the Python script from spawning other processes, whereas "python.exe postbuild.py" has no problems, and neither does "pythonw.exe postbuild.py". I'm not sure why this happens, as all three methods are valid when used from cmd.exe.
But I would like to know if anyone has an explanation for this.
A:
Be aware that the post build event will only run immediately after a completed build. If the project had already been built (and so does not need building again), then the post build step will not run at all.
If you're editing the python script and then trying to get it to run by building the project, then it's not going to do anything unless you edit a file within the project each time, to force the build to occur.
| Python script at Visual C++ 2005 build step not spawning other processes | I have the following post-build step in a VC++ 2005 project that calls a Python 2.5.1 script:
postbuild.py
postbuild.py does:
import os
os.system('cd') # cd is just a test, could be anything
The process never starts, and it's the same with any other process I try, even using subprocess.call or Popen instead of os.system.
Does anyone know about anything related to problems like this in Python 2.5.1 or in build events in Visual C++ 2005 SP1?
| [
"Solved. For some reason, using \"postbuild.py\" as postbuild step inhibits the python script from spawning other processes, where \"python.exe postbuild.py\" has no problems, and neither \"pythonw.exe postbuild.py\". I'm not sure why this happens, as all three methods are valid when used from cmd.exe.\nBut I would like to know if anyone has an explanation for this.\n",
"Be aware that the post build event will only run immediately after a completed build. If the project had already been built (and so does not need building again), then the post build step will not run at all.\nIf you're editing the python script and then trying to get it to run by building the project, then it's not going to do anything unless you edit a file within the project each time, to force the build to occur.\n"
] | [
2,
0
] | [] | [] | [
"post_build_event",
"python",
"spawn",
"spawning",
"visual_c++"
] | stackoverflow_0000642877_post_build_event_python_spawn_spawning_visual_c++.txt |
Q:
Can I override the html_name for a tabularinline field in the admin interface?
Is it possible to override the html naming of fields in TabularInline admin forms so they won't contain dashes?
I'm trying to apply the knowledge obtained here to create a TabularInline admin form that has the auto-complete feature.
It all works except that Django insists in naming the fields in a tabularinline queryset as something in the lines of:
[model]_set-[index]-[field]
So, if my model is TravelLogClient and my foreign key field is company, the fields in the HTML form for the three entries in the tabularinline queryset will be:
travellogclient_set-0-company
travellogclient_set-1-company
travellogclient_set-2-company
The problem is that javascript dislikes identifiers with dashes in them. So the javascript fails and the autocomplete doesn't work.
THIS IS ONLY A PROBLEM WITH TABULAR INLINE forms! If I use Jannis' autocomplete example on a non tabular admin form field, it works just fine because the field name doesn't have the "..._set-[index]-..." portion in the HTML and javascript.
Rather than submitting a patch to Django's source code changing dashes to underscores in contrib.forms.forms.py and contrib.forms.formsets.py, it occurred to me that this behavior might be overridable somehow.
Failing that, what is the easiest way to make those dashes in the html_name become underscores instead?
Thanks in advance!
A:
Paolo and Guðmundur are right. I modified my usage in the javascript according to Guðmundur's suggestion and things now work as expected - no django intervention needed.
Sorry for the mental lapse...
Thanks!
| Can I override the html_name for a tabularinline field in the admin interface? | Is it possible to override the html naming of fields in TabularInline admin forms so they won't contain dashes?
I'm trying to apply the knowledge obtained here to create a TabularInline admin form that has the auto-complete feature.
It all works except that Django insists in naming the fields in a tabularinline queryset as something in the lines of:
[model]_set-[index]-[field]
So, if my model is TravelLogClient and my foreign key field is company, the fields in the HTML form for the three entries in the tabularinline queryset will be:
travellogclient_set-0-company
travellogclient_set-1-company
travellogclient_set-2-company
The problem is that javascript dislikes identifiers with dashes in them. So the javascript fails and the autocomplete doesn't work.
THIS IS ONLY A PROBLEM WITH TABULAR INLINE forms! If I use Jannis' autocomplete example on a non tabular admin form field, it works just fine because the field name doesn't have the "..._set-[index]-..." portion in the HTML and javascript.
Rather than submitting a patch to Django's source code changing dashes to underscores in contrib.forms.forms.py and contrib.forms.formsets.py, it occurred to me that this behavior might be overridable somehow.
Failing that, what is the easiest way to make those dashes in the html_name become underscores instead?
Thanks in advance!
| [
"Paolo and Guðmundur are right. I modified my usage in the javascript according to Guðmundur's suggestion and things now work as expected - no django intervention needed.\nSorry for the mental lapse...\nThanks!\n"
] | [
0
] | [] | [] | [
"django",
"django_admin",
"django_forms",
"python"
] | stackoverflow_0000640218_django_django_admin_django_forms_python.txt |
Q:
regular expression help with converting exp1^exp2 to pow(exp1, exp2)
I am converting some MATLAB code to C. Currently I have some lines that have powers using the ^ operator, which is rather easy to handle with something along the lines of \(?(\w*)\)?\^\(?(\w*)\)?
This works fine for converting (glambda)^(galpha), using the sub method in Python: pattern.sub(r'pow(\g<1>,\g<2>)', '(glambda)^(galpha)')
My problem comes with nested parenthesis
So I have a string like:
glambdastar^(1-(1-gphi)*galpha)*(glambdaq)^(-(1-gphi)*galpha);
And I can not figure out how to convert that line to:
pow(glambdastar,(1-(1-gphi)*galpha))*pow(glambdaq,-(1-gphi)*galpha);
A:
Unfortunately, regular expressions aren't the right tool for handling nested structures. There are some regular expressions engines (such as .NET) which have some support for recursion, but most — including the Python engine — do not, and can only handle as many levels of nesting as you build into the expression (which gets ugly fast).
What you really need for this is a simple parser. For example, iterate over the string counting parentheses and storing their locations in a list. When you find a ^ character, put the most recently closed parenthesis group into a "left" variable, then watch the group formed by the next opening parenthesis. When it closes, use it as the "right" value and print the pow(left, right) expression.
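A minimal sketch of that scan-and-rewrite idea, kept deliberately simple: it understands identifiers/numbers, parenthesised groups and simple calls, assumes balanced parentheses with no whitespace around ^ (as in the question's strings), and keeps the operands' original parentheses (harmless in C) rather than attempting full precedence handling:
def caret_to_pow(s):
    while '^' in s:
        i = s.index('^')
        # --- scan left for the base operand ---
        j = i - 1
        if s[j] == ')':                 # parenthesised group: walk back to its '('
            depth = 0
            while True:
                if s[j] == ')':
                    depth += 1
                elif s[j] == '(':
                    depth -= 1
                    if depth == 0:
                        break
                j -= 1
        while j > 0 and (s[j - 1].isalnum() or s[j - 1] in '_.'):
            j -= 1                      # identifier/number, or a call's name
        # --- scan right for the exponent operand ---
        k = i + 1
        if s[k] in '+-':                # unary sign
            k += 1
        while k < len(s) and (s[k].isalnum() or s[k] in '_.'):
            k += 1
        if k < len(s) and s[k] == '(':  # parenthesised group or call arguments
            depth = 0
            while True:
                if s[k] == '(':
                    depth += 1
                elif s[k] == ')':
                    depth -= 1
                    if depth == 0:
                        break
                k += 1
            k += 1                      # step past the closing ')'
        s = s[:j] + 'pow(' + s[j:i] + ',' + s[i + 1:k] + ')' + s[k:]
    return s

print caret_to_pow('glambdastar^(1-(1-gphi)*galpha)*(glambdaq)^(-(1-gphi)*galpha);')
# pow(glambdastar,(1-(1-gphi)*galpha))*pow((glambdaq),(-(1-gphi)*galpha));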
A:
I think you can use recursion here.
Once you figure out the Left and Right parts, pass each of those to your function again.
The base case would be that no ^ operator is found, so you will not need to add the pow() function to your result string.
The function will return a string with all the correct pow()'s in place.
I'll come up with an example of this if you want.
A:
Nested parentheses cannot be described by a regexp; they require a full parser (one able to understand a grammar, which is something more powerful than a regexp). I do not think there is a regex-only solution.
A:
See recent discussion function-parser-with-regex-in-python (one of many similar discussions). Then follow the suggestion to pyparsing.
A:
An alternative would be to iterate until all ^ have been exhausted, no?
Ruby code:
# assuming str contains the string of data with the expressions you wish to convert
# (note: the pattern only rewrites \w+ operands, so a '^' adjacent to parentheses
# is never consumed and the loop would then spin forever)
while str.include?('^')
  str.gsub!(/(\w+)\^(\w+)/, 'pow(\1,\2)')
end
| regular expression help with converting exp1^exp2 to pow(exp1, exp2) | I am converting some MATLAB code to C. Currently I have some lines that have powers using the ^ operator, which is rather easy to handle with something along the lines of \(?(\w*)\)?\^\(?(\w*)\)?
This works fine for converting (glambda)^(galpha), using the sub method in Python: pattern.sub(r'pow(\g<1>,\g<2>)', '(glambda)^(galpha)')
My problem comes with nested parenthesis
So I have a string like:
glambdastar^(1-(1-gphi)*galpha)*(glambdaq)^(-(1-gphi)*galpha);
And I can not figure out how to convert that line to:
pow(glambdastar,(1-(1-gphi)*galpha))*pow(glambdaq,-(1-gphi)*galpha);
| [
"Unfortunately, regular expressions aren't the right tool for handling nested structures. There are some regular expressions engines (such as .NET) which have some support for recursion, but most — including the Python engine — do not, and can only handle as many levels of nesting as you build into the expression (which gets ugly fast).\nWhat you really need for this is a simple parser. For example, iterate over the string counting parentheses and storing their locations in a list. When you find a ^ character, put the most recently closed parenthesis group into a \"left\" variable, then watch the group formed by the next opening parenthesis. When it closes, use it as the \"right\" value and print the pow(left, right) expression.\n",
"I think you can use recursion here.\nOnce you figure out the Left and Right parts, pass each of those to your function again.\nThe base case would be that no ^ operator is found, so you will not need to add the pow() function to your result string.\nThe function will return a string with all the correct pow()'s in place.\nI'll come up with an example of this if you want.\n",
"Nested parenthesis cannot be described by a regexp and require a full parser (able to understand a grammar, which is something more powerful than a regexp). I do not think there is a solution.\n",
"See recent discussion function-parser-with-regex-in-python (one of many similar discussions). Then follow the suggestion to pyparsing.\n",
"An alternative would be to iterate until all ^ have been exhausted. no?.\nRuby code:\n# assuming str contains the string of data with the expressions you wish to convert\nwhile str.include?('^')\n str!.gsub!(/(\\w+)\\^(\\w+)/, 'pow(\\1,\\2)')\nend\n\n"
] | [
2,
1,
0,
0,
0
] | [] | [] | [
"c",
"python",
"regex"
] | stackoverflow_0000643173_c_python_regex.txt |
Q:
Django: Uploaded file locked. Can't rename
I'm trying to rename a file after it's uploaded, in the model's save method. I'm renaming the file to a combination of the file's primary key and a slug of the file title.
I have it working when a file is first uploaded, when a new file is uploaded, and when there are no changes to the file or file title.
However, when the title of the file is changed, and the system tries to rename the old file to the new path I get the following error:
WindowsError at /admin/main/file/1/
(32, 'The process cannot access the file because it is being used by another process')
I don't really know how to get around this. I've tried just copying the file to the new path. This works, but then I don't know if I can delete the old version.
Shortened Model:
class File(models.Model):
nzb = models.FileField(upload_to='files/')
name = models.CharField(max_length=256)
name_slug = models.CharField(max_length=256, blank=True, null=True, editable=False)
def save(self):
# Create the name slug.
self.name_slug = re.sub('[^a-zA-Z0-9]', '-', self.name).strip('-').lower()
self.name_slug = re.sub('[-]+', '-', self.name_slug)
# Need the primary key for naming the file.
super(File, self).save()
# Create the system paths we need.
orignal_nzb = u'%(1)s%(2)s' % {'1': settings.MEDIA_ROOT, '2': self.nzb}
renamed_nzb = u'%(1)sfiles/%(2)s_%(3)s.nzb' % {'1': settings.MEDIA_ROOT, '2': self.pk, '3': self.name_slug}
# Rename the file.
        if orignal_nzb != renamed_nzb:  # compare full paths ('not in' would be a substring test)
if os.path.isfile(renamed_nzb):
os.remove(renamed_nzb)
# Fails when name is updated.
os.rename(orignal_nzb, renamed_nzb)
self.nzb = 'files/%(1)s_%(2)s.nzb' % {'1': self.pk, '2': self.name_slug}
super(File, self).save()
I suppose the question is, does anyone know how I can rename an uploaded file when the uploaded file isn't being re-uploaded? That's the only time it appears to be locked/in-use.
Update:
Tyler's approach is working, except when a new file is uploaded the primary key is not available and his technique below is throwing an error.
if not instance.pk:
instance.save()
Error:
maximum recursion depth exceeded while calling a Python object
Is there any way to grab the primary key?
A:
I think you should look more closely at the upload_to field. This would probably be simpler than messing around with renaming during save.
http://docs.djangoproject.com/en/dev/ref/models/fields/#filefield
This may also be a callable, such as a
function, which will be called to
obtain the upload path, including the
filename. This callable must be able
to accept two arguments, and return a
Unix-style path (with forward slashes)
to be passed along to the storage
system. The two arguments that will be
passed are:
A:
My other answer is deprecated, use this instead:
import re

def get_filename(instance, filename):
    # Note: defined at module level, before the field below refers to it.
    if not instance.pk:
        # Forcing a save here to obtain a primary key can recurse, as the
        # question's update shows: the save re-triggers the upload handling.
        instance.save()
    # Create the name slug.
    name_slug = re.sub('[^a-zA-Z0-9]', '-', instance.name).strip('-').lower()
    name_slug = re.sub('[-]+', '-', name_slug)
    return u'files/%s_%s.nzb' % (instance.pk, name_slug)

class File(models.Model):
    nzb = models.FileField(upload_to=get_filename)
    ...
As of 1.0, upload_to can be callable, in which case it is expected to return the filename, including path (relative to MEDIA_ROOT).
A:
Once uploaded, all you have is an image object in memory, right?
You could save this object yourself in the folder of your choice, and then edit the database entry by hand.
You'd be bypassing the whole Django ORM, which is not something I'd do unless I couldn't find a more Django way.
| Django: Uploaded file locked. Can't rename | I'm trying to rename a file after it's uploaded, in the model's save method. I'm renaming the file to a combination of the file's primary key and a slug of the file title.
I have it working when a file is first uploaded, when a new file is uploaded, and when there are no changes to the file or file title.
However, when the title of the file is changed, and the system tries to rename the old file to the new path I get the following error:
WindowsError at /admin/main/file/1/
(32, 'The process cannot access the file because it is being used by another process')
I don't really know how to get around this. I've tried just copying the file to the new path. This works, but then I don't know if I can delete the old version.
Shortened Model:
class File(models.Model):
nzb = models.FileField(upload_to='files/')
name = models.CharField(max_length=256)
name_slug = models.CharField(max_length=256, blank=True, null=True, editable=False)
def save(self):
# Create the name slug.
self.name_slug = re.sub('[^a-zA-Z0-9]', '-', self.name).strip('-').lower()
self.name_slug = re.sub('[-]+', '-', self.name_slug)
# Need the primary key for naming the file.
super(File, self).save()
# Create the system paths we need.
orignal_nzb = u'%(1)s%(2)s' % {'1': settings.MEDIA_ROOT, '2': self.nzb}
renamed_nzb = u'%(1)sfiles/%(2)s_%(3)s.nzb' % {'1': settings.MEDIA_ROOT, '2': self.pk, '3': self.name_slug}
# Rename the file.
        if orignal_nzb != renamed_nzb:  # compare full paths ('not in' would be a substring test)
if os.path.isfile(renamed_nzb):
os.remove(renamed_nzb)
# Fails when name is updated.
os.rename(orignal_nzb, renamed_nzb)
self.nzb = 'files/%(1)s_%(2)s.nzb' % {'1': self.pk, '2': self.name_slug}
super(File, self).save()
I suppose the question is, does anyone know how I can rename an uploaded file when the uploaded file isn't being re-uploaded? That's the only time it appears to be locked/in-use.
Update:
Tyler's approach is working, except when a new file is uploaded the primary key is not available and his technique below is throwing an error.
if not instance.pk:
instance.save()
Error:
maximum recursion depth exceeded while calling a Python object
Is there any way to grab the primary key?
| [
"I think you should look more closely at the upload_to field. This would probably be simpler than messing around with renaming during save.\nhttp://docs.djangoproject.com/en/dev/ref/models/fields/#filefield\n\nThis may also be a callable, such as a\n function, which will be called to\n obtain the upload path, including the\n filename. This callable must be able\n to accept two arguments, and return a\n Unix-style path (with forward slashes)\n to be passed along to the storage\n system. The two arguments that will be\n passed are:\n\n",
"My other answer is deprecated, use this instead:\nclass File(models.Model):\n nzb = models.FileField(upload_to=get_filename)\n ...\n def get_filename(instance, filename):\n if not instance.pk:\n instance.save()\n # Create the name slug.\n name_slug = re.sub('[^a-zA-Z0-9]', '-', instance.name).strip('-').lower()\n name_slug = re.sub('[-]+', '-', name_slug)\n\n filename = u'filess/%(2)s_%(3)s.nzb' % {'2': instance.pk, '3': name_slug}\n\n return filename\n\nAs of 1.0, upload_to can be callable, in which case it is expected to return the filename, including path (relative to MEDIA_ROOT).\n",
"Once uploaded, all you have is an image object in memory, right?\nYou could save this object yourself in the folder of your choice, and then edit the database entry by hand.\nYou'd be bypassing the whole Django ORM, and is not something I'd do unlessI couldn't find a more Django way.\n"
] | [
5,
3,
0
] | [] | [] | [
"django",
"file_io",
"python"
] | stackoverflow_0000637160_django_file_io_python.txt |
Q:
How do I rewrite $x = $hash{blah} || 'default' in Python?
How do I pull an item out of a Python dictionary without triggering a KeyError? In Perl, I would do:
$x = $hash{blah} || 'default'
What's the equivalent Python?
A:
Use the get(key, default) method:
>>> dict().get("blah", "default")
'default'
A:
If you're going to be doing this a lot, it's better to use collections.defaultdict:
import collections
# Define a little "factory" function that just builds the default value when called.
def get_default_value():
return 'default'
# Create a defaultdict, specifying the factory that builds its default value
# (named d here so the built-in dict type isn't shadowed)
d = collections.defaultdict(get_default_value)

# Now we can look up things without checking, and get 'default' if the key is unknown
x = d['blah']
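The factory can also be an inline lambda. One behavioural difference from dict.get worth noting: a defaultdict lookup inserts the missing key as a side effect.
import collections

d = collections.defaultdict(lambda: 'default')
x = d['blah']        # returns 'default' AND stores it under 'blah'
print 'blah' in d    # True -- the key now exists in the dict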
A:
x = hash['blah'] if 'blah' in hash else 'default'
A:
# note: has_key() is deprecated in favour of 'in', and this and/or idiom
# breaks if the stored value is itself falsy (0, '', None, ...)
x = hash.has_key('blah') and hash['blah'] or 'default'
| How do I rewrite $x = $hash{blah} || 'default' in Python? | How do I pull an item out of a Python dictionary without triggering a KeyError? In Perl, I would do:
$x = $hash{blah} || 'default'
What's the equivalent Python?
| [
"Use the get(key, default) method:\n>>> dict().get(\"blah\", \"default\")\n'default'\n\n",
"If you're going to be doing this a lot, it's better to use collections.defaultdict:\nimport collections\n\n# Define a little \"factory\" function that just builds the default value when called.\ndef get_default_value():\n return 'default'\n\n# Create a defaultdict, specifying the factory that builds its default value\ndict = collections.defaultdict(get_default_value)\n\n# Now we can look up things without checking, and get 'default' if the key is unknown\nx = dict['blah']\n\n",
"x = hash['blah'] if 'blah' in hash else 'default'\n\n",
"x = hash.has_key('blah') and hash['blah'] or 'default'\n\n"
] | [
9,
7,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0000643950_python.txt |
Q:
How should I conditionally assign based on the existence of a dictionary key?
I've got some Perl code like this:
my $match = $matches{$key}
? "$matches{$key} was a match!"
: "There was no match."
How is this best rewritten in Python? I'm trying to avoid getting a KeyError.
A:
This:
message = "%s was a match!" % (matches[key],) if key in matches else "There was no match."
| How should I conditionally assign based on the existence of a dictionary key? | I've got some Perl code like this:
my $match = $matches{$key}
? "$matches{$key} was a match!"
: "There was no match."
How is this best rewritten in Python? I'm trying to avoid getting a KeyError.
| [
"This.\nmessage = \"%s was a match\"%(matches[key],) if key in matches else \"There was no match.\"\n\n"
] | [
3
] | [] | [] | [
"python"
] | stackoverflow_0000644062_python.txt |
Q:
help with complex join in Django ORM
class Domains(models.Model):
name = models.CharField(max_length=30)
description = models.CharField(max_length= 60)
user = models.ManyToManyField("Users", blank=True, null=True)
def __unicode__(self):
return self.name
class Groups(models.Model):
domain = models.ForeignKey(Domains)
name = models.CharField(max_length=30)
description = models.CharField(max_length= 60)
def __unicode__(self):
return self.name
class Users(models.Model):
login = models.CharField(max_length=30, unique=True)
group = models.ManyToManyField(Groups, blank=True, null=True)
def __unicode__(self):
return self.login
I have the model above. Needed some assistance working with Django ORM.
How would I build a query that returns all the Group names that belong to only those Domains to which a User belongs?
A:
I second elo80ka's comment about using singular names for your models. To filter the groups by domain and user, try:
Groups.objects.filter(domain__user=u)
This will perform the appropriate join across the many-to-many. As written, the query will return group objects. If you want the name property only, then append the .values_list('name', flat=True) to the query as elo80ka suggests.
A:
You should probably use singular names for your model classes. For example, I'd rewrite the models as:
class Domain(models.Model):
name = models.CharField(max_length=30)
description = models.CharField(max_length= 60)
user = models.ManyToManyField('User', blank=True, null=True)
def __unicode__(self):
return self.name
class Group(models.Model):
domain = models.ForeignKey(Domain, related_name='groups')
name = models.CharField(max_length=30)
description = models.CharField(max_length= 60)
def __unicode__(self):
return self.name
class User(models.Model):
login = models.CharField(max_length=30, unique=True)
group = models.ManyToManyField(Group, related_name='users', blank=True, null=True)
def __unicode__(self):
return self.login
Since you have users directly related to groups, you don't need to involve domains at all. To fetch all group names for a particular user, you'd do:
Group.objects.filter(users__pk=...).values_list('name', flat=True)
Replace '...' with the ID of the user you're interested in.
| help with complex join in Django ORM | class Domains(models.Model):
name = models.CharField(max_length=30)
description = models.CharField(max_length= 60)
user = models.ManyToManyField("Users", blank=True, null=True)
def __unicode__(self):
return self.name
class Groups(models.Model):
domain = models.ForeignKey(Domains)
name = models.CharField(max_length=30)
description = models.CharField(max_length= 60)
def __unicode__(self):
return self.name
class Users(models.Model):
login = models.CharField(max_length=30, unique=True)
group = models.ManyToManyField(Groups, blank=True, null=True)
def __unicode__(self):
return self.login
I have the model above. Needed some assistance working with Django ORM.
How would I build a query that returns all the Group names that belong to only those Domains to which a User belongs?
| [
"I second elo80ka's comment about using singular names for your models. To filter the groups by domain and user, try:\nGroups.objects.filter(domain__user=u)\n\nThis will perform the appropriate join across the many-to-many. As written, the query will return group objects. If you want the name property only, then append the .values_list('name', flat=True) to the query as elo80ka suggests.\n",
"You should probably use singular names for your model classes. For example, I'd rewrite the models as:\nclass Domain(models.Model):\n name = models.CharField(max_length=30)\n description = models.CharField(max_length= 60)\n user = models.ManyToManyField('User', blank=True, null=True)\n\n def __unicode__(self):\n return self.name\n\nclass Group(models.Model):\n domain = models.ForeignKey(Domain, related_name='groups')\n name = models.CharField(max_length=30)\n description = models.CharField(max_length= 60)\n\n def __unicode__(self):\n return self.name\n\nclass User(models.Model):\n login = models.CharField(max_length=30, unique=True)\n group = models.ManyToManyField(Group, related_name='users', blank=True, null=True)\n\n def __unicode__(self):\n return self.login\n\nSince you have users directly related to groups, you don't need to involve domains at all. To fetch all group names for a particular user, you'd do:\nGroup.objects.filter(users__pk=...).values_list('name', flat=True)\n\nReplace '...' with the ID of the user you're interested in.\n"
] | [
4,
2
] | [] | [] | [
"django",
"django_models",
"python"
] | stackoverflow_0000644081_django_django_models_python.txt |
Q:
Concurrency implications of EAFP/LBYL
When writing concurrent/multithreaded code in Python, is it especially important to follow the "Easier to Ask for Forgiveness than Permission" (EAFP) idiom, rather than "Look Before You Leap" (LBYL)? Python's exceptionally dynamic nature means that almost anything (e.g., attribute removal) can happen between looking and leaping---if so, what's the point? For example, consider
# LBYL
if hasattr(foo, 'bar'):
baz = foo.bar
versus
# EAFP
try:
baz = foo.bar
except AttributeError:
pass
In the LBYL example, the bar attribute could disappear from foo before the actual call to foo.bar is made, so do you gain anything from the check? If there's a risk the attribute might disappear, you need locks and/or try/except clauses anyway.
One possible argument here is that this example makes the extremely pessimistic assumption that "antagonistic code" is running that could yank the rug from under you at any moment. In most use cases, this is highly unlikely.
A:
Your thoughts are correct. Some additional points:
If the attribute exists most of the time, try:except: might be much faster than
the LBYL idiom.
If you don't like the try: except: syntax, you can also write:
item = getattr(foo, 'bar', None)
if item is None:
....
else:
....
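If None can itself be a legitimate value for the attribute, the check above is ambiguous; a private sentinel object avoids that -- a small sketch:
_missing = object()   # unique sentinel no real attribute value can be

item = getattr(foo, 'bar', _missing)
if item is _missing:
    pass              # the attribute was genuinely absent
else:
    pass              # the attribute was present, even if its value is None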
| Concurrency implications of EAFP/LBYL | When writing concurrent/multithreaded code in Python, is it especially important to follow the "Easier to Ask for Forgiveness than Permission" (EAFP) idiom, rather than "Look Before You Leap" (LBYL)? Python's exceptionally dynamic nature means that almost anything (e.g., attribute removal) can happen between looking and leaping---if so, what's the point? For example, consider
# LBYL
if hasattr(foo, 'bar'):
baz = foo.bar
versus
# EAFP
try:
baz = foo.bar
except AttributeError:
pass
In the LBYL example, the bar attribute could disappear from foo before the actual call to foo.bar is made, so do you gain anything from the check? If there's a risk the attribute might disappear, you need locks and/or try/except clauses anyway.
One possible argument here is that this example makes the extremely pessimistic assumption that "antagonistic code" is running that could yank the rug from under you at any moment. In most use cases, this is highly unlikely.
| [
"Your thoughts are correct. Some additional points:\nIf the attribute exists most of the time, try:except: might be much faster than\nthe LBYL idiom.\nIf you don't like the try: except: syntax, you can also write:\nitem = getattr(foo, 'bar', None)\nif item is None:\n ....\nelse:\n ....\n\n"
] | [
3
] | [] | [] | [
"multithreading",
"python"
] | stackoverflow_0000644052_multithreading_python.txt |
Q:
Parallel/Async Download of S3 data into EC2 in Python?
I have large data files stored in S3 that I need to analyze. Each batch consists of ~50 files, each of which can be analyzed independently.
I'd like to set up parallel downloads of the S3 data into the EC2 instance, and set up triggers that start the analysis process on each file as it finishes downloading.
Are there any libraries that handle an async download with a trigger-on-complete model?
If not, I'm thinking of setting up multiple download processes with pyprocessing, each of which will download and analyze a single piece of the file. Does that sound reasonable or are there better alternatives?
A:
Answering my own question, I ended up writing a simple modification to the Amazon S3 python library that lets you download the file in chunks or read it line by line. Available here.
A:
It sounds like you're looking for twisted:
"Twisted is an event-driven networking engine written in Python and licensed under the MIT license."
http://twistedmatrix.com/trac/
I've used the twisted python for quite a few asynchronous projects involving both communicating over the Internet and with subprocesses.
A:
I don't know of anything that already exists that does exactly what you're looking for, but even if not it should be reasonably easy to put together with Python. For a threaded approach, you might take a look at this Python recipe that does multi-threaded HTTP downloads for testing download mirrors.
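A minimal sketch of such a threaded setup; download() and analyze() are hypothetical placeholders for the S3 fetch (e.g. via boto) and the per-file analysis:
import Queue
import threading

def worker(q, download, analyze):
    # Drain the queue; analyze each file as soon as its download
    # finishes -- the "trigger on complete" part of the question.
    while True:
        try:
            keyname = q.get_nowait()
        except Queue.Empty:
            return
        path = download(keyname)   # hypothetical: fetch one S3 object
        analyze(path)              # hypothetical: per-file analysis

def run_batch(keynames, download, analyze, nworkers=8):
    q = Queue.Queue()
    for k in keynames:
        q.put(k)
    threads = [threading.Thread(target=worker, args=(q, download, analyze))
               for _ in range(nworkers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()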
EDIT: A few packages that I found that might do the majority of the work for you and be what you're looking for:
spider
HarvestMan
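
The pyprocessing idea from the question is also perfectly reasonable; here is a minimal sketch using its standard-library successor, multiprocessing (the URLs and the analysis routine are placeholders):
from multiprocessing import Pool
import urllib

# Hypothetical batch of ~50 S3 URLs (public or pre-signed).
S3_URLS = ["https://mybucket.s3.amazonaws.com/batch/file%02d" % i
           for i in range(50)]

def analyze(path):
    # Stand-in for the real per-file analysis.
    return len(open(path).read())

def fetch_and_analyze(url):
    # Each worker downloads one file and analyzes it immediately,
    # so analysis starts as soon as each download completes.
    path, _ = urllib.urlretrieve(url)
    return analyze(path)

if __name__ == "__main__":
    pool = Pool(processes=8)
    results = pool.map(fetch_and_analyze, S3_URLS)
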
| Parallel/Async Download of S3 data into EC2 in Python? | I have large data files stored in S3 that I need to analyze. Each batch consists of ~50 files, each of which can be analyzed independently.
I'd like to setup parallel downloads of the S3 data into the EC2 instance, and setup triggers that start the analysis process on each file that downloads.
Are there any libraries that handle an async download, trigger on complete model?
If not, I'm thinking of setting up multiple download processes with pyprocessing, each of which will download and analyze a single piece of the file. Does that sound reasonable or are there better alternatives?
| [
"Answering my own question, I ended up writing a simple modification to the Amazon S3 python library that lets you download the file in chunks or read it line by line. Available here.\n",
"It sounds like you're looking for twisted:\n\"Twisted is an event-driven networking engine written in Python and licensed under the MIT license.\"\nhttp://twistedmatrix.com/trac/\nI've used the twisted python for quite a few asynchronous projects involving both communicating over the Internet and with subprocesses.\n",
"I don't know of anything that already exists that does exactly what you're looking for, but even if not it should be reasonably easy to put together with Python. For a threaded approach, you might take a look at this Python recipe that does multi-threaded HTTP downloads for testing download mirrors.\nEDIT: Few packages that I found that might do the majority of the work for you and be what you're looking for\n\nspider\nHarvestMan\n\n"
] | [
3,
0,
0
] | [] | [] | [
"amazon_ec2",
"amazon_s3",
"python"
] | stackoverflow_0000538875_amazon_ec2_amazon_s3_python.txt |
Q:
How does Python sort a list of tuples?
Empirically, it seems that Python's default list sorter, when passed a list of tuples, will sort by the first element in each tuple. Is that correct? If not, what's the right way to sort a list of tuples by their first elements?
A:
It automatically sorts a list of tuples by the first elements in the tuples, then by the second elements and so on; tuple([1,2,3]) will go before tuple([1,2,4]). If you want to override this behaviour, pass a callable as the second argument to the sort method. This callable should return 1, -1 or 0.
A:
Yes, this is the default. In fact, this is the basis of the classic "DSU" (Decorate-Sort-Undecorate) idiom in Python. See Code Like a Pythonista.
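A tiny sketch of the DSU idiom for illustration: decorate each item with its sort key, sort (tuples compare by their first element), then strip the decoration:
words = ['banana', 'pie', 'Washington', 'book']

decorated = [(len(w), w) for w in words]      # decorate with the key
decorated.sort()                              # tuples sort first-element-first
by_length = [w for (length, w) in decorated]  # undecorate

assert by_length == ['pie', 'book', 'banana', 'Washington']
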
A:
No, tuples are sequence types just like strings. They are sorted the same, by comparing each element in turn:
>>> import random
>>> sorted([(0,0,0,int(random.getrandbits(4))) for x in xrange(10)])
[(0, 0, 0, 0), (0, 0, 0, 4), (0, 0, 0, 5), (0, 0, 0, 7), (0, 0, 0, 8),
(0, 0, 0, 9), (0, 0, 0, 12), (0, 0, 0, 12), (0, 0, 0, 12), (0, 0, 0, 14)]
The three zeroes are only there to show that something other than the first element must be getting inspected.
A:
Try using the internal list sort method and pass a lambda. If your tuples' first elements are integers, this should work.
# l is the list of tuples
l.sort(lambda x, y: x[0] - y[0])
You can use any callable for the compare function, not necessarily a lambda. However it needs to return a negative number (less than), 0 (equal) or a positive number (greater than).
A:
Check out "Devin Jeanpierre" answer to this question sort-a-dictionary-in-python-by-the-value where he says to use a tuple and shows how to sort by the second value
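For completeness, a small sketch of the key-based alternative (available since Python 2.4), which avoids writing a compare function altogether:
from operator import itemgetter

pairs = [(3, 'c'), (1, 'a'), (2, 'b')]

# Sort by the first element only, ignoring the rest of each tuple.
by_first = sorted(pairs, key=itemgetter(0))

# Default behaviour: compare element by element, first then second.
default = sorted(pairs)

assert by_first == default == [(1, 'a'), (2, 'b'), (3, 'c')]
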
| How does Python sort a list of tuples? | Empirically, it seems that Python's default list sorter, when passed a list of tuples, will sort by the first element in each tuple. Is that correct? If not, what's the right way to sort a list of tuples by their first elements?
| [
"It automatically sorts a list of tuples by the first elements in the tuples, then by the second elements and so on tuple([1,2,3]) will go before tuple([1,2,4]). If you want to override this behaviour pass a callable as the second argument to the sort method. This callable should return 1, -1, 0.\n",
"Yes, this is the default. In fact, this is the basis of the classic \"DSU\" (Decorate-Sort-Undecorate) idiom in Python. See Code Like a Pythonista.\n",
"No, tuples are sequence types just like strings. They are sorted the same, by comparing each element in turn:\n>>> import random\n>>> sorted([(0,0,0,int(random.getrandbits(4))) for x in xrange(10)])\n[(0, 0, 0, 0), (0, 0, 0, 4), (0, 0, 0, 5), (0, 0, 0, 7), (0, 0, 0, 8),\n(0, 0, 0, 9), (0, 0, 0, 12), (0, 0, 0, 12), (0, 0, 0, 12), (0, 0, 0, 14)]\n\nThe three zeroes are only there to show that something other than the first element must be getting inspected.\n",
"Try using the internal list sort method and pass a lambda. If your tuples first element is a integer, this should work.\n# l is the list of tuples\nl.sort(lambda x,y: x-y)\n\nYou can use any callable for the compare function, not necessarily a lambda. However it needs to return -1 (less than), 0 (equal) or 1 (greater than).\n",
"Check out \"Devin Jeanpierre\" answer to this question sort-a-dictionary-in-python-by-the-value where he says to use a tuple and shows how to sort by the second value\n"
] | [
81,
8,
8,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0000644170_python.txt |
Q:
How to handle a glade project with many windows
I'm working on a PyGTK/glade application that currently has 16 windows/dialogs and is about 130KB, and will eventually have around 25 windows/dialogs and be around 200KB. Currently, I'm storing all the windows in one monolithic glade file. When I run a window I call it like...
self.wTree = gtk.glade.XML("interface.glade", "WindowXYZ")
I wonder if it would be a better idea to split each window into its own glade file. Instead of one glade file with 25 windows/dialogs, I'd have 25 glade files with one window/dialog each, and call it like so:
self.wTree = gtk.glade.XML("windowxyz.glade")
What do you guys think is the best way to do this? Is one method more resource intensive than another? One thing that would be nice about going to individual glade files is that naming widgets would be easier. For example, I name all my OK buttons "windowxyz_ok", but I could change it to simply "ok" instead. Makes things simpler. The downside is that it may be a bit less convenient to make changes to different windows.
I'm open to any and all arguments. Thanks!
A:
In my projects, I always have one window per glade file. I'd recommend the same for your project.
The following are the two main reasons:
It will be faster and use less memory, since each call to gtk.glade.XML() parses the whole thing. Sure you can pass in the root argument to avoid creating the widget tree for all windows, but you'd still have to parse all the XML, even if you're not interested in it.
Conceptually it's easier to understand if you have one toplevel per window. You know immediately which file a given dialog/window is in just by looking at the filename.
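A minimal sketch of the one-file-per-window approach (the file and widget names are hypothetical), where each window class parses only its own glade file and widget names can stay short:
import gtk
import gtk.glade

class WindowXYZ(object):
    def __init__(self):
        # Parses only this window's XML, not the whole interface.
        self.wTree = gtk.glade.XML("windowxyz.glade")
        self.window = self.wTree.get_widget("windowxyz")
        # No "windowxyz_" prefix needed on widget names any more.
        self.ok_button = self.wTree.get_widget("ok")
        self.wTree.signal_autoconnect(self)

    def on_ok_clicked(self, widget):
        self.window.hide()
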
A:
Did you take some timings to find out whether it makes a difference?
The problem is that, as far as I understand it, Glade always creates all widgets when it parses an XML file, so if you open the XML file and only read a single widget, you are wasting a lot of resources.
The other problem is that you need to re-read the file if you want to have another instance of that widget.
The way I did it before was to put all widgets that were created only once (like the about window, the main window etc) into one glade file, and separate glade files for widgets that needed to be created several times.
A:
I use different glade files for different windows, but I keep dialogs associated with a window in the same glade file. As you said, the naming problem is annoying.
A:
I have one glade file with 2 windows. It's about 450kb in size and I have not seen any slowdowns using libglademm with GTKmm.
| How to handle a glade project with many windows | I'm working on a PyGTK/glade application that currently has 16 windows/dialogs and is about 130KB, and will eventually have around 25 windows/dialogs and be around 200KB. Currently, I'm storing all the windows in one monolithic glade file. When I run a window I call it like...
self.wTree = gtk.glade.XML("interface.glade", "WindowXYZ")
I wonder if it would be a better idea to split each window into it's own glade file. Instead of one glade file with 25 windows/dialogs I'd have 25 glade files with one window/dialog each and call it like so:
self.wTree = gtk.glade.XML("windowxyz.glade")
What do you guys think is the best way to do this? Is one method more resource intensive than another? One thing that would be nice about going to individual glade files is that naming widgets would be easier. For example, I name all my OK buttons "windowxyz_ok", but I could change it to simply "ok" instead. Makes things simpler. The downside is that it may be a bit less convenient to make changes to different windows.
I'm open to any and all arguments. Thanks!
| [
"In my projects, I always have one window per glade file. I'd recommend the same for your project.\nThe following are the two main reasons:\n\nIt will be faster and use less memory, since each call to gtk.glade.XML() parses the whole thing. Sure you can pass in the root argument to avoid creating the widget tree for all windows, but you'd still have to parse all the XML, even if you're not interested in it.\nConceptually its easier to understand if have one toplevel per window. You easily know which filename a given dialog/window is in just by looking at the filename.\n\n",
"Did you take some timings to find out whether it makes a difference? \nThe problem is that, as far as I understand it, Glade always creates all widgets when it parses an XML file, so if you open the XML file and only read a single widget, you are wasting a lot of resources.\nThe other problem is that you need to re-read the file if you want to have another instance of that widget.\nThe way I did it before was to put all widgets that were created only once (like the about window, the main window etc) into one glade file, and separate glade files for widgets that needed to be created several times.\n",
"I use different glade files for different windows. But I keep dialog associated with a window in the same glade file. As you said, the naming problem is annoying. \n",
"I have one glade file with 2 windows. It's about 450kb in size and I have not seen any slowdowns using libglademm with GTKmm.\n"
] | [
9,
2,
0,
0
] | [] | [] | [
"glade",
"gtk",
"pygtk",
"python"
] | stackoverflow_0000336013_glade_gtk_pygtk_python.txt |
Q:
Python: Incrementally marshal / pickle an object?
I have a large object I'd like to serialize to disk. I'm finding marshal works quite well and is nice and fast.
Right now I'm creating my large object then calling marshal.dump . I'd like to avoid holding the large object in memory if possible - I'd like to dump it incrementally as I build it. Is that possible?
The object is fairly simple, a dictionary of arrays.
A:
The bsddb module's 'hashopen' and 'btopen' functions provide a persistent dictionary-like interface. Perhaps you could use one of these, instead of a regular dictionary, to incrementally serialize the arrays to disk?
import bsddb
import marshal
db = bsddb.hashopen('file.db')
db['array1'] = marshal.dumps(array1)
db['array2'] = marshal.dumps(array2)
...
db.close()
To retrieve the arrays:
db = bsddb.hashopen('file.db')
array1 = marshal.loads(db['array1'])
...
A:
If all your object has to do is be a dictionary of lists, then you may be able to use the shelve module. It presents a dictionary-like interface where the keys and values are stored in a database file instead of in memory. One limitation which may or may not affect you is that keys in Shelf objects must be strings. Value storage will be more efficient if you specify protocol=-1 when creating the Shelf object to have it use a more efficient binary representation.
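A short sketch of that shelve approach (the file name and build_array are placeholders): each array is written to disk as soon as it is built, so the whole dictionary never has to sit in memory:
import shelve

def build_array(key):
    # Stand-in for however one array actually gets built.
    return range(1000)

db = shelve.open('big_object.db', protocol=-1)
for key in ('array1', 'array2', 'array3'):
    db[key] = build_array(key)  # serialized to disk immediately
db.close()

# Later: values are loaded lazily, one key at a time.
db = shelve.open('big_object.db')
first = db['array1']
db.close()
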
A:
This very much depends on how you are building the object. Is it an array of sub-objects? You could marshal/pickle each array element as you build it. Is it a dictionary? Same idea applies (marshal/pickle keys).
If it is just a big complex hairy object, you might want to marshal-dump each piece of the object, and then apply whatever your 'building' process is when you read it back in.
A:
You should be able to dump the item piece by piece to the file. The two design questions that need settling are:
How are you building the object when you're putting it in memory?
How do you need your data when it comes out of memory?
If your build process populates the entire array associated with a given key at a time, you might just dump the key:array pair in a file as a separate dictionary:
big_hairy_dictionary['sample_key'] = pre_existing_array
marshal.dump({'sample_key':big_hairy_dictionary['sample_key']},'central_file')
Then on update, each call to marshal.load('central_file') will return a dictionary that you can use to update a central dictionary. But this is really only going to be helpful if, when you need the data back, you want to handle reading 'central_file' once per key.
Alternately, if you are populating arrays element by element in no particular order, maybe try:
big_hairy_dictionary['sample_key'].append(single_element)
marshal.dump(single_element,'marshaled_files/'+'sample_key')
Then, when you load it back, you don't necessarily need to build the entire dictionary to get back what you need; you just call marshal.load('marshaled_files/sample_key') until it returns None, and you have everything associated with the key.
| Python: Incrementally marshal / pickle an object? | I have a large object I'd like to serialize to disk. I'm finding marshal works quite well and is nice and fast.
Right now I'm creating my large object then calling marshal.dump . I'd like to avoid holding the large object in memory if possible - I'd like to dump it incrementally as I build it. Is that possible?
The object is fairly simple, a dictionary of arrays.
| [
"The bsddb module's 'hashopen' and 'btopen' functions provide a persistent dictionary-like interface. Perhaps you could use one of these, instead of a regular dictionary, to incrementally serialize the arrays to disk?\nimport bsddb\nimport marshal\n\ndb = bsddb.hashopen('file.db')\ndb['array1'] = marshal.dumps(array1)\ndb['array2'] = marshal.dumps(array2)\n...\ndb.close()\n\nTo retrieve the arrays:\ndb = bsddb.hashopen('file.db')\narray1 = marshal.loads(db['array1'])\n...\n\n",
"It all your object has to do is be a dictionary of lists, then you may be able to use the shelve module. It presents a dictionary-like interface where the keys and values are stored in a database file instead of in memory. One limitation which may or may not affect you is that keys in Shelf objects must be strings. Value storage will be more efficient if you specify protocol=-1 when creating the Shelf object to have it use a more efficient binary representation.\n",
"This very much depends on how you are building the object. Is it an array of sub objects? You could marshal/pickle each array element as you build it. Is it a dictionary? Same idea applies (marshal/pickle keys)\nIf it is just a big complex harry object, you might want to marshal dump each piece of the object, and then the apply what ever your 'building' process is when you read it back in.\n",
"You should be able to dump the item piece by piece to the file. The two design questions that need settling are:\n\nHow are you building the object when you're putting it in memory?\nHow do you need you're data when it comes out of memory?\n\nIf your build process populates the entire array associated with a given key at a time, you might just dump the key:array pair in a file as a separate dictionary:\nbig_hairy_dictionary['sample_key'] = pre_existing_array\nmarshal.dump({'sample_key':big_hairy_dictionary['sample_key']},'central_file')\n\nThen on update, each call to marshal.load('central_file') will return a dictionary that you can use to update a central dictionary. But this is really only going to be helpful if, when you need the data back, you want to handle reading 'central_file' once per key.\nAlternately, if you are populating arrays element by element in no particular order, maybe try:\nbig_hairy_dictionary['sample_key'].append(single_element)\nmarshal.dump(single_element,'marshaled_files/'+'sample_key')\n\nThen, when you load it back, you don't necessarily need to build the entire dictionary to get back what you need; you just call marshal.load('marshaled_files/sample_key') until it returns None, and you have everything associated with the key.\n"
] | [
4,
4,
0,
0
] | [] | [] | [
"data_structures",
"memory_management",
"python",
"serialization"
] | stackoverflow_0000639821_data_structures_memory_management_python_serialization.txt |
Q:
Discussion of multiple inheritance vs Composition for a project (+other things)
I am writing a python platform for the simulation of distributed sensor swarms. The idea being that the end user can write a custom Node consisting of the SensorNode behaviour (communication, logging, etc) as well as implementing a number of different sensors.
The example below briefly demonstrates the concept.
#prewritten
class Sensor(object):
def __init__(self):
print "Hello from Sensor"
#...
#prewritten
class PositionSensor(Sensor):
def __init__(self):
print "Hello from Position"
Sensor.__init__(self)
#...
#prewritten
class BearingSensor(Sensor):
def __init__(self):
print "Hello from Bearing"
Sensor.__init__(self)
#...
#prewritten
class SensorNode(object):
def __init__(self):
print "Hello from SensorNode"
#...
#USER WRITTEN
class MySensorNode(SensorNode,BearingSensor,PositionSensor):
def CustomMethod(self):
LogData={'Position':position(), 'Bearing':bearing()} #position() from PositionSensor, bearing() from BearingSensor
Log(LogData) #Log() from SensorNode
NEW EDIT:
Firstly an overview of what I am trying to achieve:
I am writing a simulator to simulate swarm intelligence algorithms with particular focus on mobile sensor networks. These networks consist of many small robots communicating individual sensor data to build a complex sensory map of the environment.
The underlying goal of this project is to develop a simulation platform that provides abstracted interfaces to sensors such that the same user-implemented functionality can be directly ported to a robotic swarm running embedded Linux. As robotic implementation is the goal, I need to design such that the software node behaves the same, and only has access to information that a physical node would have.
As part of the simulation engine, I will be providing a set of classes modelling different types of sensors and different types of sensor node. I wish to abstract all this complexity away from the user such that all the user must do is define which sensors are present on the node, and what type of sensor node (mobile, fixed position) is being implemented.
My initial thinking was that every sensor would provide a read() method which would return the relevant values, however having read the responses to the question, I see that perhaps more descriptive method names would be beneficial (.distance(), .position(), .bearing(), etc).
I initially wanted to use separate classes for the sensors (with common ancestors) so that a more technical user can easily extend one of the existing classes to create a new sensor if they wish. For example:
Sensor
|
DistanceSensor(designed for 360 degree scan range)
| | |
IR Sensor Ultrasonic SickLaser
(narrow) (wider) (very wide)
The reason I was initially thinking of Multiple Inheritance (although it semi-breaks the IS-A relationship of inheritance) was due to the underlying principle behind the simulation system. Let me explain:
The user-implemented MySensorNode should not have direct access to its position within the environment (akin to a robot, the access is indirect through a sensor interface), similarly, the sensors should not know where they are. However, this lack of direct knowledge poses a problem, as the return values of the sensors are all dependent on their position and orientation within the environment (which needs to be simulated to return the correct values).
SensorNode, as a class implemented within the simulation libraries, is responsible for drawing the MySensorNode within the pygame environment - thus, it is the only class that should have direct access to the position and orientation of the sensor node within the environment.
SensorNode is also responsible for translation and rotation within the environment, however this translation and rotation is a side effect of motor actuation.
What I mean by this is that robots cannot directly alter their position within the world; all they can do is provide power to motors, and movement within the world is a side-effect of the motors' interaction with the environment. I need to model this accurately within the simulation.
So, to move, the user-implemented functionality may use:
motors(50,50)
This call will, as a side-effect, alter the position of the node within the world.
If SensorNode was implemented using composition, SensorNode.motors(...) would not be able to directly alter instance variables (such as position), nor would MySensorNode.draw() be resolved to SensorNode.draw(), so SensorNode imo should be implemented using inheritance.
In terms of the sensors, the benefit of composition for a problem like this is obvious: MySensorNode is composed of a number of sensors - enough said.
However the problem as I see it is that the Sensors need access to their position and orientation within the world, and if you use composition you will end up with a call like:
>>> PosSensor.position((123,456))
(123,456)
Then again, on reflection, you could pass self to the sensor upon initialisation, e.g.:
PosSensor = PositionSensor(self)
then later
PosSensor.position()
however this PosSensor.position() would then need to access information local to the instance (passed as self during init()), so why call PosSensor at all when you can access the information locally? Also passing your instance to an object you are composed of just seems not quite right, crossing the boundaries of encapsulation and information hiding (even though python doesn't do much to support the idea of information hiding).
If the solution was implemented using multiple inheritance, these problems would disappear:
class MySensorNode(SensorNode,PositionSensor,BearingSensor):
def Think():
while bearing()>0:
# bearing() is provided by BearingSensor and in the simulator
# will simply access local variables provided by SensorNode
# to return the bearing. In robotic implementation, the
# bearing() method will instead access C routines to read
# the actual bearing from a compass sensor
motors(100,-100)
# spin on the spot, will as a side-effect alter the return
# value of bearing()
(Ox,Oy)=position() #provided by PositionSensor
while True:
(Cx,Cy)=position()
if Cx>=Ox+100:
break
else:
motors(100,100)
#full speed ahead!will alter the return value of position()
Hopefully this edit has clarified some things, if you have any questions I'm more than happy to try and clarify them
OLD THINGS:
When an object of type MySensorNode is constructed, all constructors from the superclasses need to be called. I do not want to burden the user with having to write a custom constructor for MySensorNode that calls the constructor of each superclass. Ideally, what I would like to happen is:
mSN = MySensorNode()
# at this point, the __init__() method is searched for
# and SensorNode.__init__() is called given the order
# of inheritance in MySensorNode.__mro__
# Somehow, I would also like to call all the other constructors
# that were not executed (ie BearingSensor and PositionSensor)
Any insight or general comments would be appreciated,
Cheers :)
OLD EDIT:
Doing something like:
#prewritten
class SensorNode(object):
def __init__(self):
print "Hello from SensorNode"
for clss in type(self).__mro__:
if clss!=SensorNode and clss!=type(self):
clss.__init__(self)
This works, as self is an instance of MySensorNode. However this solution is messy.
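For comparison, a minimal sketch of the cooperative super() alternative to the MRO-walking hack above (same class names as the question; the answers below elaborate on this): each __init__ calls super() exactly once, the MRO guarantees every base runs once, and the user's MySensorNode needs no constructor at all:
class Sensor(object):
    def __init__(self):
        super(Sensor, self).__init__()
        print "Hello from Sensor"

class PositionSensor(Sensor):
    def __init__(self):
        super(PositionSensor, self).__init__()
        print "Hello from Position"

class BearingSensor(Sensor):
    def __init__(self):
        super(BearingSensor, self).__init__()
        print "Hello from Bearing"

class SensorNode(object):
    def __init__(self):
        super(SensorNode, self).__init__()
        print "Hello from SensorNode"

class MySensorNode(SensorNode, BearingSensor, PositionSensor):
    pass  # no user-written constructor needed

mSN = MySensorNode()
# Prints: Sensor, Position, Bearing, SensorNode - each exactly once.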
A:
The sensor architecture can be solved by using composition if you want to stick to your original map-of-data design. You seem to be new to Python so I'll try to keep idioms to a minimum.
class IRSensor:
def read(self): return {'ir_amplitude': 12}
class UltrasonicSensor:
def read(self): return {'ultrasonic_amplitude': 63}
class SickLaserSensor:
def read(self): return {'laser_amplitude': 55}
class CompositeSensor:
"""Wrap multiple component sensors, coalesce the results, and return
the composite readout.
"""
component_sensors = []
def __init__(self, component_sensors=None):
component_sensors = component_sensors or self.component_sensors
self.sensors = [cls() for cls in component_sensors]
def read(self):
measurements = {}
for sensor in self.sensors:
measurements.update(sensor.read())
return measurements
class MyCompositeSensor(CompositeSensor):
component_sensors = [UltrasonicSensor, IRSensor]
composite_sensor = MyCompositeSensor()
measurement_map = composite_sensor.read()
assert measurement_map['ultrasonic_amplitude'] == 63
assert measurement_map['ir_amplitude'] == 12
The architectural problem you're describing with the actuators is solved by using mixins and proxying (via __getattr__) rather than inheritance. (Proxying can be a nice alternative to inheritance because objects to proxy to can be bound/unbound at runtime. Also, you don't have to worry about handling all initialization in a single constructor using this technique.)
class MovementActuator:
def __init__(self, x=0, y=0):
self.x, self.y = (x, y)
def move(self, x, y):
print 'Moving to', x, y
self.x, self.y = (x, y)
def get_position(self):
return (self.x, self.y)
class CommunicationActuator:
def communicate(self):
return 'Hey you out there!'
class CompositeActuator:
component_actuators = []
def __init__(self, component_actuators=None):
component_actuators = component_actuators \
or self.component_actuators
self.actuators = [cls() for cls in component_actuators]
def __getattr__(self, attr_name):
"""Look for value in component sensors."""
for actuator in self.actuators:
if hasattr(actuator, attr_name):
return getattr(actuator, attr_name)
raise AttributeError(attr_name)
class MyCompositeActuator(CompositeActuator):
component_actuators = [MovementActuator, CommunicationActuator]
composite_actuator = MyCompositeActuator()
assert composite_actuator.get_position() == (0, 0)
assert composite_actuator.communicate() == 'Hey you out there!'
And finally, you can throw it all together with a simple node declaration:
from sensors import *
from actuators import *
class AbstractNode:
sensors = [] # Set of classes.
actuators = [] # Set of classes.
def __init__(self):
self.composite_sensor = CompositeSensor(self.sensors)
self.composite_actuator = CompositeActuator(self.actuators)
class MyNode(AbstractNode):
sensors = [UltrasonicSensor, SickLaserSensor]
actuators = [MovementActuator, CommunicationActuator]
def think(self):
measurement_map = self.composite_sensor.read()
while self.composite_actuator.get_position()[1] >= 0:
self.composite_actuator.move(100, -100)
my_node = MyNode()
my_node.think()
That should give you an idea of the alternatives to the rigid type system. Note that you don't have to rely on the type hierarchy at all -- just implement to a (potentially implicit) common interface.
LESS OLD:
After reading the question more carefully, I see that what you have is a classic example of diamond inheritance, which is the evil that makes people flee towards single inheritance.
You probably don't want this to begin with, since class hierarchy means squat in Python. What you want to do is make a SensorInterface (minimum requirements for a sensor) and have a bunch of "mixin" classes that have totally independent functionality that can be invoked through methods of various names. In your sensor framework you shouldn't say things like isinstance(sensor, PositionSensor) -- you should say things like "can this sensor geo-locate?" in the following form:
def get_position(sensor):
try:
return sensor.geolocate()
except AttributeError:
return None
This is the heart of duck-typing philosophy and EAFP (Easier to Ask for Forgiveness than Permission), both of which the Python language embraces.
You should probably describe what methods these sensors will actually implement so we can describe how you can use mixin classes for your plugin architecture.
OLD:
If they write the code in a module that gets put in a plugin package or what have you, you can magically instrument the classes for them when you import their plugin modules. Something along the lines of this snippet (untested):
import inspect
import types
from sensors import Sensor
def is_class(obj):
return type(obj) in (types.ClassType, types.TypeType)
def instrumented_init(self, *args, **kwargs):
Sensor.__init__(self, *args, **kwargs)
for module in plugin_modules: # Get this from somewhere...
classes = inspect.getmembers(module, predicate=is_class)
for name, cls in classes:
if hasattr(cls, '__init__'):
# User specified own init, may be deriving from something else.
continue
if cls.__bases__ != tuple([Sensor]):
continue # Class doesn't singly inherit from sensor.
cls.__init__ = instrumented_init
You can find the modules within a package with another function.
A:
super calls the next class in the MRO list. This works even if you leave out the __init__ from some class.
class A(object):
def __init__(self):
super(A,self).__init__()
print "Hello from A!"
class B(A):
def __init__(self):
super(B,self).__init__()
print "Hello from B!"
class C(A):
def __init__(self):
super(C,self).__init__()
print "Hello from C!"
class D(B,C):
def __init__(self):
super(D,self).__init__()
print "Hello from D!"
class E(B,C):
pass
Example:
>>> x = D()
Hello from A!
Hello from C!
Hello from B!
Hello from D!
>>> y = E()
Hello from A!
Hello from C!
Hello from B!
>>>
Edit: Rewrote the answer. (again)
A:
Here's a partial solution:
class NodeMeta(type):
def __init__(cls, name, bases, d):
setattr(cls, '__inherits__', bases)
class Node(object):
__metaclass__ = NodeMeta
def __init__(self):
for cls in self.__inherits__:
cls.cls_init(self)
class Sensor(Node):
def cls_init(self):
print "Sensor initialized"
class PositionSensor(Sensor):
def cls_init(self):
print "PositionSensor initialized"
self._bearing = 0
def bearing(self):
# calculate bearing:
return self._bearing
class BearingSensor(Sensor):
def cls_init(self):
print "BearingSensor initialized"
self._position = (0, 0)
def position(self):
# calculate position:
return self._position
# -------- custom sensors --------
class CustomSensor(PositionSensor, BearingSensor):
def think(self):
print "Current position:", self.position()
print "Current bearing:", self.bearing()
class CustomSensor2(PositionSensor, BearingSensor, Sensor):
pass
>>> s = CustomSensor()
PositionSensor initialized
BearingSensor initialized
>>> s.think()
Current position: (0, 9)
Current bearing: 0
You'll have to move your __init__ code from the Node subclasses into some other method (I used cls_init).
Edit: I posted this before I saw your updates; I'll re-read your question, and if necessary, update this solution.
| Discussion of multiple inheritance vs Composition for a project (+other things) | I am writing a python platform for the simulation of distributed sensor swarms. The idea being that the end user can write a custom Node consisting of the SensorNode behaviour (communication, logging, etc) as well as implementing a number of different sensors.
The example below briefly demonstrates the concept.
#prewritten
class Sensor(object):
def __init__(self):
print "Hello from Sensor"
#...
#prewritten
class PositionSensor(Sensor):
def __init__(self):
print "Hello from Position"
Sensor.__init__(self)
#...
#prewritten
class BearingSensor(Sensor):
def __init__(self):
print "Hello from Bearing"
Sensor.__init__(self)
#...
#prewritten
class SensorNode(object):
def __init__(self):
print "Hello from SensorNode"
#...
#USER WRITTEN
class MySensorNode(SensorNode,BearingSensor,PositionSensor):
def CustomMethod(self):
LogData={'Position':position(), 'Bearing':bearing()} #position() from PositionSensor, bearing() from BearingSensor
Log(LogData) #Log() from SensorNode
NEW EDIT:
Firstly an overview of what I am trying to achieve:
I am writing a simulator to simulate swarm intelligence algorithms with particular focus on mobile sensor networks. These networks consist of many small robots communicating individual sensor data to build a complex sensory map of the environment.
The underlying goal of this project is to develop a simulation platform that provides abstracted interfaces to sensors such that the same user-implemented functionality can be directly ported to a robotic swarm running embedded linux. As robotic implementation is the goal, I need to design such that the software node behaves the same, and only has access to information that an physical node would have.
As part of the simulation engine, I will be providing a set of classes modelling different types of sensors and different types of sensor node. I wish to abstract all this complexity away from the user such that all the user must do is define which sensors are present on the node, and what type of sensor node (mobile, fixed position) is being implemented.
My initial thinking was that every sensor would provide a read() method which would return the relevant values, however having read the responses to the question, I see that perhaps more descriptive method names would be beneficial (.distance(), .position(), .bearing(), etc).
I initially wanted use separate classes for the sensors (with common ancestors) so that a more technical user can easily extend one of the existing classes to create a new sensor if they wish. For example:
Sensor
|
DistanceSensor(designed for 360 degree scan range)
| | |
IR Sensor Ultrasonic SickLaser
(narrow) (wider) (very wide)
The reason I was initially thinking of Multiple Inheritance (although it semi-breaks the IS-A relationship of inheritance) was due to the underlying principle behind the simulation system. Let me explain:
The user-implemented MySensorNode should not have direct access to its position within the environment (akin to a robot, the access is indirect through a sensor interface), similarly, the sensors should not know where they are. However, this lack of direct knowledge poses a problem, as the return values of the sensors are all dependent on their position and orientation within the environment (which needs to be simulated to return the correct values).
SensorNode, as a class implemented within the simulation libraries, is responsible for drawing the MySensorNode within the pygame environment - thus, it is the only class that should have direct access to the position and orientation of the sensor node within the environment.
SensorNode is also responsible for translation and rotation within the environment, however this translation and rotation is a side effect of motor actuation.
What I mean by this is that robots cannot directly alter their position within the world, all they can do is provide power to motors, and movement within the world is a side-effect of the motors interaction with the environment. I need to model this accurately within the simulation.
So, to move, the user-implemented functionality may use:
motors(50,50)
This call will, as a side-effect, alter the position of the node within the world.
If SensorNode was implemented using composition, SensorNode.motors(...) would not be able to directly alter instance variables (such as position), nor would MySensorNode.draw() be resolved to SensorNode.draw(), so SensorNode imo should be implemented using inheritance.
In terms of the sensors, the benefit of composition for a problem like this is obvious, MySensorNode is composed of a number of sensors - enough said.
However the problem as I see it is that the Sensors need access to their position and orientation within the world, and if you use composition you will end up with a call like:
>>> PosSensor.position((123,456))
(123,456)
Then again - thinking, you could pass self to the sensor upon initialisation, eg:
PosSensor = PositionSensor(self)
then later
PosSensor.position()
however this PosSensor.position() would then need to access information local to the instance (passed as self during init()), so why call PosSensor at all when you can access the information locally? Also passing your instance to an object you are composed of just seems not quite right, crossing the boundaries of encapsulation and information hiding (even though python doesn't do much to support the idea of information hiding).
If the solution was implemented using multiple inheritance, these problems would disappear:
class MySensorNode(SensorNode,PositionSensor,BearingSensor):
def Think():
while bearing()>0:
# bearing() is provided by BearingSensor and in the simulator
# will simply access local variables provided by SensorNode
# to return the bearing. In robotic implementation, the
# bearing() method will instead access C routines to read
# the actual bearing from a compass sensor
motors(100,-100)
# spin on the spot, will as a side-effect alter the return
# value of bearing()
(Ox,Oy)=position() #provided by PositionSensor
while True:
(Cx,Cy)=position()
if Cx>=Ox+100:
break
else:
motors(100,100)
#full speed ahead!will alter the return value of position()
Hopefully this edit has clarified some things, if you have any questions I'm more than happy to try and clarify them
OLD THINGS:
When an object of type MySensorNode is constructed, all constructors from the superclasses need to be called. I do not want to complicate the user with having to write a custom constructor for MySensorNode which calls the constructor from each superclass. Ideally, what I would like to happen is:
mSN = MySensorNode()
# at this point, the __init__() method is searched for
# and SensorNode.__init__() is called given the order
# of inheritance in MySensorNode.__mro__
# Somehow, I would also like to call all the other constructors
# that were not executed (ie BearingSensor and PositionSensor)
Any insight or general comments would be appreciated,
Cheers :)
OLD EDIT:
Doing something like:
#prewritten
class SensorNode(object):
def __init__(self):
print "Hello from SensorNode"
for clss in type(self).__mro__:
if clss!=SensorNode and clss!=type(self):
clss.__init__(self)
This works, as self is an instance of MySensorNode. However this solution is messy.
| [
"The sensor architecture can be solved by using composition if you want to stick to your original map-of-data design. You seem to be new to Python so I'll try to keep idioms to a minimum.\nclass IRSensor:\n def read(self): return {'ir_amplitude': 12}\n\nclass UltrasonicSensor:\n def read(self): return {'ultrasonic_amplitude': 63}\n\nclass SickLaserSensor:\n def read(self): return {'laser_amplitude': 55}\n\nclass CompositeSensor:\n \"\"\"Wrap multiple component sensors, coalesce the results, and return\n the composite readout.\n \"\"\"\n component_sensors = []\n\n def __init__(self, component_sensors=None):\n component_sensors = component_sensors or self.component_sensors\n self.sensors = [cls() for cls in component_sensors]\n\n def read(self):\n measurements = {}\n for sensor in self.sensors:\n measurements.update(sensor.read())\n return measurements\n\nclass MyCompositeSensor(CompositeSensor):\n component_sensors = [UltrasonicSensor, IRSensor]\n\n\ncomposite_sensor = MyCompositeSensor()\nmeasurement_map = composite_sensor.read()\nassert measurement_map['ultrasonic_amplitude'] == 63\nassert measurement_map['ir_amplitude'] == 12\n\nThe architectural problem you're describing with the actuators is solved by using mixins and proxying (via __getattr__) rather than inheritance. (Proxying can be a nice alternative to inheritance because objects to proxy to can be bound/unbound at runtime. Also, you don't have to worry about handling all initialization in a single constructor using this technique.)\nclass MovementActuator:\n def __init__(self, x=0, y=0):\n self.x, self.y = (x, y)\n\n def move(self, x, y):\n print 'Moving to', x, y\n self.x, self.y = (x, y)\n\n def get_position(self):\n return (self.x, self.y)\n\nclass CommunicationActuator:\n def communicate(self):\n return 'Hey you out there!'\n\nclass CompositeActuator:\n component_actuators = []\n\n def __init__(self, component_actuators=None):\n component_actuators = component_actuators \\\n or self.component_actuators\n self.actuators = [cls() for cls in component_actuators]\n\n def __getattr__(self, attr_name):\n \"\"\"Look for value in component sensors.\"\"\"\n for actuator in self.actuators:\n if hasattr(actuator, attr_name):\n return getattr(actuator, attr_name)\n raise AttributeError(attr_name)\n\n\nclass MyCompositeActuator(CompositeActuator):\n component_actuators = [MovementActuator, CommunicationActuator]\n\ncomposite_actuator = MyCompositeActuator()\nassert composite_actuator.get_position() == (0, 0)\nassert composite_actuator.communicate() == 'Hey you out there!'\n\nAnd finally, you can throw it all together with a simple node declaration:\nfrom sensors import *\nfrom actuators import *\n\nclass AbstractNode:\n sensors = [] # Set of classes.\n actuators = [] # Set of classes.\n def __init__(self):\n self.composite_sensor = CompositeSensor(self.sensors)\n self.composite_actuator = CompositeActuator(self.actuators)\n\nclass MyNode(AbstractNode):\n sensors = [UltrasonicSensor, SickLaserSensor]\n actuators = [MovementActuator, CommunicationActuator]\n\n def think(self):\n measurement_map = self.composite_sensor.read()\n while self.composite_actuator.get_position()[1] >= 0:\n self.composite_actuator.move(100, -100)\n\nmy_node = MyNode()\nmy_node.think()\n\nThat should give you an idea of the alternatives to the rigid type system. 
Note that you don't have to rely on the type hierarchy at all -- just implement to a (potentially implicit) common interface.\nLESS OLD:\nAfter reading the question more carefully, I see that what you have is a classic example of diamond inheritance, which is the evil that makes people flee towards single inheritance.\nYou probably don't want this to begin with, since class hierarchy means squat in Python. What you want to do is make a SensorInterface (minimum requirements for a sensor) and have a bunch of \"mixin\" classes that have totally independent functionality that can be invoked through methods of various names. In your sensor framework you shouldn't say things like isinstance(sensor, PositionSensor) -- you should say things like \"can this sensor geo-locate?\" in the following form:\ndef get_position(sensor):\n try:\n return sensor.geolocate()\n except AttributeError:\n return None\n\nThis is the heart of duck-typing philosophy and EAFP (Easier to Ask for Forgiveness than Permission), both of which the Python language embraces.\nYou should probably describe what methods these sensors will actually implement so we can describe how you can use mixin classes for your plugin architecture.\nOLD:\nIf they write the code in a module that gets put in a plugin package or what have you, you can magically instrument the classes for them when you import their plugin modules. Something along the lines of this snippet (untested):\n import inspect\n import types\n\n from sensors import Sensor\n\n def is_class(obj):\n return type(obj) in (types.ClassType, types.TypeType)\n\n def instrumented_init(self, *args, **kwargs):\n Sensor.__init__(self, *args, **kwargs)\n\n for module in plugin_modules: # Get this from somewhere...\n classes = inspect.getmembers(module, predicate=is_class)\n for name, cls in classes:\n if hasattr(cls, '__init__'):\n # User specified own init, may be deriving from something else.\n continue \n if cls.__bases__ != tuple([Sensor]):\n continue # Class doesn't singly inherit from sensor.\n cls.__init__ = instrumented_init\n\nYou can find the modules within a package with another function.\n",
"super calls the next class in the mro-list. This works even if you leave out the __init__ form some class.\nclass A(object):\n def __init__(self):\n super(A,self).__init__()\n print \"Hello from A!\"\n\nclass B(A):\n def __init__(self):\n super(B,self).__init__()\n print \"Hello from B!\"\n\nclass C(A):\n def __init__(self):\n super(C,self).__init__()\n print \"Hello from C!\"\n\nclass D(B,C):\n def __init__(self):\n super(D,self).__init__()\n print \"Hello from D!\"\n\nclass E(B,C):\n pass\n\nExample:\n>>> x = D()\nHello from A!\nHello from C!\nHello from B!\nHello from D!\n>>> y = E()\nHello from A!\nHello from C!\nHello from B!\n>>> \n\nEdit: Rewrote the answer. (again)\n",
"Here's a partial solution:\nclass NodeMeta(type):\n def __init__(cls, name, bases, d):\n setattr(cls, '__inherits__', bases)\n\nclass Node(object):\n __metaclass__ = NodeMeta\n\n def __init__(self):\n for cls in self.__inherits__:\n cls.cls_init(self)\n\nclass Sensor(Node):\n def cls_init(self):\n print \"Sensor initialized\"\n\nclass PositionSensor(Sensor):\n def cls_init(self):\n print \"PositionSensor initialized\"\n self._bearing = 0\n\n def bearing(self):\n # calculate bearing:\n return self._bearing\n\nclass BearingSensor(Sensor):\n def cls_init(self):\n print \"BearingSensor initialized\"\n self._position = (0, 0)\n\n def position(self):\n # calculate position:\n return self._position\n\n# -------- custom sensors --------\n\nclass CustomSensor(PositionSensor, BearingSensor):\n def think(self):\n print \"Current position:\", self.position()\n print \"Current bearing:\", self.bearing()\n\nclass CustomSensor2(PositionSensor, BearingSensor, Sensor):\n pass\n\n>>> s = CustomSensor()\nPositionSensor initialized\nBearingSensor initialized\n>>> s.think()\nCurrent position: (0, 9)\nCurrent bearing: 0\n\nYou'll have to move your __init__ code from the Node subclasses into some other method (I used cls_init).\nEdit: I posted this before I saw your updates; I'll re-read your question, and if necessary, update this solution.\n"
] | [
12,
1,
1
] | [] | [] | [
"constructor",
"multiple_inheritance",
"oop",
"python"
] | stackoverflow_0000645493_constructor_multiple_inheritance_oop_python.txt |
Q:
Is there a way to build a C-like DLL from a Python module?
I have a Python module with nothing but regular global functions. I need to call it from another business-domain scripting environment that can only call out to C DLLs. Is there any way to build my Python module so that, to other code, it can be called like a standard C function that's exported from a DLL? This is for a Windows environment. I'm aware of IronPython, but as far as I know it can only build .NET assemblies, which are not callable as C DLL functions.
A:
Take a look at this Codeproject article. One way would be to wrap your Python functions in a C DLL and expose this to the callee.
COM is a binary protocol that solves this issue, but you will have to wrap this Python DLL in a COM wrapper, and add some code on the calling side as well.
A:
The standard solution is to embed the Python interpreter (which is already a C DLL) in your application.
https://docs.python.org/extending/windows.html#using-dlls-in-practice
http://docs.python.org/extending/embedding.html
A:
Py2exe can generate COM dlls from python code, by compiling and embedding python code + interpreter. It does not, AFAIK, support regular DLLs yet. For that, see dirkgently's answer about embedding python yourself.
| Is there a way to build a C-like DLL from a Python module? | I have a Python module with nothing but regular global functions. I need to call it from another business-domain scripting environment that can only call out to C DLLs. Is there anyway to build my Python modules so that to other code it can be called like a standard C function that's exported from a DLL? This is for a Windows environment. I'm aware of IronPython, but as far as I know it can only build .NET Assemblies, which are not callable as C DLL functions.
| [
"Take a look at this Codeproject article. One way would be wrap your python functions in a C dll and expose this to the callee.\nCOM is a binary protocol to solve this issue. But you will have to wrap this python dll in a COM wrapper. And add some code on the calling side as well.\n",
"The standard solution is to embed the Python interpreter (which is already a C DLL) in your application.\nhttps://docs.python.org/extending/windows.html#using-dlls-in-practice\nhttp://docs.python.org/extending/embedding.html\n",
"Py2exe can generate COM dlls from python code, by compiling and embedding python code + interpreter. It does not, AFAIK, support regular DLLs yet. For that, see dirkgently's answer about embedding python yourself.\n"
] | [
6,
3,
2
] | [] | [] | [
"python"
] | stackoverflow_0000645892_python.txt |
Q:
How do you make a case for Django [or Ruby on Rails] to non-technical clients
Businessmen typically want a web application developed. They are aware of .NET or J2EE by name, without much knowledge of either.
Although Rails and Django offer a much better and faster development stack, it is a big task to convince businessmen to use these platforms.
The task begins with introducing Django (or Rails), quoting some blog/research, and then making a case for the use of the framework for the specific project.
A lot of the task is repetitive. What are the sources/blogs/whitepapers and other materials you use to make a case for Django (or Rails)?
Don't you think there should be a common brochure developed that many development agencies could use to make the same case, over and over again? Are there any such ones now?
There seems to be enough discussion of Django vs Rails, whereas the need, at least when making a business case, is (Django and Rails) vs (.NET and J2EE). Both represent faster, pragmatic web development in a dynamic language.
A:
It's easier to ask forgiveness than permission.
First, build the initial release in Django. Quickly. Build the model well (really well!). But use as much default admin functionality as you can.
Spend time only on reporting and display pages where the HTML might actually matter to the presentation.
Show this and they'll only want more. Once they've gotten addicted to fast turnaround and correct out-of-the box operation, you can discuss technology with them. By then it won't matter any more.
A:
You need to speak the language of business: money.
"If we do it Rails, it will cost you 50% less than the same functionality in Java."
Your percentage may vary, and you might need to also include hosting and upkeep costs, to show how it balances out.
When you're convincing other programmers, sure, talk about development speed and automation of repetitive tasks. But talk bottom-line cost to a business person.
A:
Before you begin making the case for Django or Rails, you have to be convinced it's the right stack first in the context of the business person's needs. If the business person is an entrepreneur, he may have other factors that go beyond how quickly can the solution be developed. For example:
If it's an enterprise play that's being developed (something like SalesForce.com, SugarCRM, etc.) it may make sense to have it written in Java because this makes acquisitions and mergers easier with potential Java-based suitors.
If it's an internal IT play for a custom solution in a large company, they may already have a significant amount of MS infrastructure in place. It may not make sense to have your client install SQL Server or complicate their stack further with a Rails/Django-friendly stack.
If you've crossed this chasm and are convinced you have the client's best interest in mind, then I would look for examples on the Internet where the same application has been authored in both Java and Rails/Django. Here's an example of the Pet Store implemented in Rails.
http://www.anassina.com/projects/railspetstore/
You can download the source code and demonstrate to your client how much less code is needed to achieve the same result.
Explain to the client why less code is valuable: the less code you write, the fewer bugs you will have.
A:
The first 2 arguments from the top of my mind:
Easier and faster development = cheaper product, less time to market.
SEO out of the box.
A:
While many of you made some good suggestions, with regard to talks/resources for using these frameworks, you may also want to have a look at a talk on redesigning Yellow Pages in RoR:
Summary from the site:
This talk explains how
YELLOWPAGES.COM, one of the
highest-traffic websites in the U.S.,
was written using Ruby on Rails, how
it was scaled to handle the traffic
and how the software architecture
evolved. Also: the reasons for
choosing Ruby on Rails.
A:
The best case to be made for either of these frameworks is their ability to automate repetitive and time-consuming tasks. This allows developers to be faster and more productive which in turn means projects are delivered faster.
A:
The problem with a "brochure" approach is that it doesn't address the clients needs. Putting the language/platform of choice into a presentation that addresses the clients goals is much more likely to sell them - both on the tools you want to use, as well as you as a provider. As long as you can show that your approach will solve the problem (preferably with the least amount of expense), you'll have fewer objections and less of the "but I've heard that xxx is the best".
| How do you make a case for Django [or Ruby on Rails] to non-technical clients | Businessmen typically want a web application developed. They are aware of .net or J2EE by names, without much knowledge about either.
Altho' Rails and Django offer for a much better and faster development stack, it is a big task to convince businessmen to use these platforms.
The task begins with introducing Django (or Rails), quoting some blog/research. Then making a case for the use of the framework for the specific project.
Lot of the task is repetitive. What are the sources/blogs/whitepapers and other materials you use to make a case for django (or Rails)
Don't you think there should be a common brochure developed that many development agencies could use to make the same case, over and again. Are there any such ones, now?
There seems to be enough discussion on Django vs Rails. Whereas the need is (Django and Rails) vs (.net and J2EE), at least so, while making a business case. Both represent a faster pragmatic web development in a dynamic language.
| [
"It's easier to ask forgiveness than permission.\nFirst, build the initial release in Django. Quickly. Build the model well (really well!). But use as much default admin functionality as you can.\nSpend time only only reporting and display pages where the HTML might actually matter to the presentation.\nShow this and they'll only want more. Once they've gotten addicted to fast turnaround and correct out-of-the box operation, you can discuss technology with them. By then it won't matter any more.\n",
"You need to speak the language of business: money. \n\"If we do it Rails, it will cost you 50% less than the same functionality in Java.\" \nYour percentage may vary, and you might need to also include hosting and upkeep costs, to show how it balances out. \nWhen you're convincing other programmers, sure, talk about development speed and automation of repetitive tasks. But talk bottom-line cost to a business person. \n",
"Before you begin making the case for Django or Rails, you have to be convinced it's the right stack first in the context of the business person's needs. If the business person is an entrepreneur, he may have other factors that go beyond how quickly can the solution be developed. For example:\n\nIf its an enterprise play that's being developed (something like SalesForce.com, SugarCRM, etc.) it may make sense to have it written in Java because this makes acquisitions and mergers easier with potential Java-based suitors.\nIf its an internal IT play for a custom solution in a large company, they may already have a significant amount MS infrastructure in place. It may not make sense to have your client install SQLServer or complicate their stack further with a Rails/Django friendly stack.\n\nIf you've cross this chasm and are convinced you have the client's best interest in mind, then I would look for examples on the Internet where the same application has been authored in both Java and Rails/Django. Here's an example of the Pet Store implemented in Rails. \nhttp://www.anassina.com/projects/railspetstore/\nYou can download the source code and demonstrate to your client how much less code is needed to achieve the same result.\nExplain to the client why less code is valuable: the less code you write, the fewer bugs you will have.\n",
"The first 2 arguments from the top of my mind:\n\nEasier and faster development = cheaper product, less time to market.\nSO optimization out of the box.\n\n",
"While many of you made some good suggestions, WRT the talks/resources for using these frameworks, you may also note to have a look at talk on redesigning yellow pages in ROR:\nSummary from the site:\n\nThis talk explains how\n YELLOWPAGES.COM, one of the\n highest-traffic websites in the U.S.,\n was written using Ruby on Rails, how\n it was scaled to handle the traffic\n and how the software architecture\n evolved. Also: the reasons for\n choosing Ruby on Rails.\n\n",
"The best case to be made for either of these frameworks is their ability to automate repetitive and time-consuming tasks. This allows developers to be faster and more productive which in turn means projects are delivered faster.\n",
"The problem with a \"brochure\" approach is that it doesn't address the clients needs. Putting the language/platform of choice into a presentation that addresses the clients goals is much more likely to sell them - both on the tools you want to use, as well as you as a provider. As long as you can show that your approach will solve the problem (preferably with the least amount of expense), you'll have fewer objections and less of the \"but I've heard that xxx is the best\".\n"
] | [
21,
16,
5,
2,
2,
1,
1
] | [] | [] | [
"django",
"python",
"ruby",
"ruby_on_rails"
] | stackoverflow_0000644237_django_python_ruby_ruby_on_rails.txt |
Q:
Workflow for configuring apache on a webfaction account via ssh and ftp. (django/python)
I'm new at this; when it comes to configuring mod_python/Apache or WSGI/Apache, I struggle.
I've been able to use the Python debugger tool pdb.set_trace() with success, especially when using the Django development server, i.e. it outputs all of the server activity to the terminal, including the pdb interface.
So, how does one do something like this when trying to deploy a Django website on a host such as WebFaction?
Other than FTPing in to read the error_log after a failure, how can I interact with the system as it's happening?
Hopefully I'm clear enough here.
Btw, following is the file that I'm trying to configure.
import os
import sys
from os.path import abspath, dirname, join
from site import addsitedir
from django.core.handlers.modpython import ModPythonHandler
import pdb
class PinaxModPythonHandler(ModPythonHandler):
def __call__(self, req):
# mod_python fakes the environ, and thus doesn't process SetEnv.
# This fixes that. Django will call this again since there is no way
# of overriding __call__ to just process the request.
os.environ.update(req.subprocess_env)
from django.conf import settings
sys.path.insert(0, abspath(join(dirname(__file__), "../../")))
sys.path.insert(0, os.path.join(settings.PINAX_ROOT, "apps/external_apps"))
sys.path.insert(0, os.path.join(settings.PINAX_ROOT, "apps/local_apps"))
sys.path.insert(0, join(settings.PINAX_ROOT, "apps"))
sys.path.insert(0, join(settings.PROJECT_ROOT, "apps"))
pdb.set_trace()
return super(PinaxModPythonHandler, self).__call__(req)
def handler(req):
# mod_python hooks into this function.
return PinaxModPythonHandler()(req)
and here's the resulting error page via http:
MOD_PYTHON ERROR
ProcessId: 318
Interpreter: 'web25.webfaction.com'
ServerName: 'web25.webfaction.com'
DocumentRoot: '/etc/httpd/htdocs'
URI: '/'
Location: '/'
Directory: None
Filename: '/etc/httpd/htdocs'
PathInfo: '/'
Phase: 'PythonHandler'
Handler: 'bc.deploy.modpython'
Traceback (most recent call last):
File "/home/dalidada/webapps/birthconfidence/lib/python2.5/mod_python/importer.py", line 1537, in HandlerDispatch
default=default_handler, arg=req, silent=hlist.silent)
File "/home/dalidada/webapps/birthconfidence/lib/python2.5/mod_python/importer.py", line 1229, in _process_target
result = _execute_target(config, req, object, arg)
File "/home/dalidada/webapps/birthconfidence/lib/python2.5/mod_python/importer.py", line 1128, in _execute_target
result = object(arg)
File "/home/dalidada/webapps/birthconfidence/bc/deploy/modpython.py", line 33, in handler
return PinaxModPythonHandler()(req)
File "/home/dalidada/webapps/birthconfidence/bc/deploy/modpython.py", line 29, in __call__
return super(PinaxModPythonHandler, self).__call__(req)
File "/home/dalidada/webapps/birthconfidence/lib/python2.5/django/core/handlers/modpython.py", line 191, in __call__
self.load_middleware()
File "/home/dalidada/webapps/birthconfidence/lib/python2.5/django/core/handlers/base.py", line 40, in load_middleware
raise exceptions.ImproperlyConfigured, 'Error importing middleware %s: "%s"' % (mw_module, e)
ImproperlyConfigured: Error importing middleware django_openid.consumer: "No module named django_openid.consumer"
A:
How to use pdb with mod_wsgi is documented on the mod_wsgi site. See:
http://code.google.com/p/modwsgi/wiki/DebuggingTechniques#Python_Interactive_Debugger
Other debugging techniques are shown on the same page.
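For reference, here is a minimal sketch of the interactive-debugger technique described on that page. Assumptions: Apache is run in single-process foreground mode (e.g. httpd -X) so pdb can talk to a real terminal, and _application stands for whatever WSGI handler you are wrapping:
import pdb
import sys

def application(environ, start_response):
    # attach pdb to the real terminal rather than Apache's captured streams
    debugger = pdb.Pdb(stdin=sys.__stdin__, stdout=sys.__stdout__)
    return debugger.runcall(_application, environ, start_response)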
| Workflow for configuring apache on a webfaction account via ssh and ftp. (django/python) | I'm new at this, however, when it comes to configuring mod_python/apache or wsgi/apache I suffer.
I've been able to use the python debugger tool.. pdb.set_trace() to success, especially when using the django development server, i.e. it out puts to the terminal all of the server activity, including the pdb interface.
So, how does one do something like this when trying to deploy a django website on an host such as webfaction?
Other than ftp into the error_log and read about it post failure, be able to interact with the system, as its happening?
Hopefully I'm clear enough here.
Btw, following is the file that I'm trying to configure.
import os
import sys
from os.path import abspath, dirname, join
from site import addsitedir
from django.core.handlers.modpython import ModPythonHandler
import pdb
class PinaxModPythonHandler(ModPythonHandler):
def __call__(self, req):
# mod_python fakes the environ, and thus doesn't process SetEnv.
# This fixes that. Django will call this again since there is no way
# of overriding __call__ to just process the request.
os.environ.update(req.subprocess_env)
from django.conf import settings
sys.path.insert(0, abspath(join(dirname(__file__), "../../")))
sys.path.insert(0, os.path.join(settings.PINAX_ROOT, "apps/external_apps"))
sys.path.insert(0, os.path.join(settings.PINAX_ROOT, "apps/local_apps"))
sys.path.insert(0, join(settings.PINAX_ROOT, "apps"))
sys.path.insert(0, join(settings.PROJECT_ROOT, "apps"))
pdb.set_trace()
return super(PinaxModPythonHandler, self).__call__(req)
def handler(req):
# mod_python hooks into this function.
return PinaxModPythonHandler()(req)
and here's the resulting error page via http:
MOD_PYTHON ERROR
ProcessId: 318
Interpreter: 'web25.webfaction.com'
ServerName: 'web25.webfaction.com'
DocumentRoot: '/etc/httpd/htdocs'
URI: '/'
Location: '/'
Directory: None
Filename: '/etc/httpd/htdocs'
PathInfo: '/'
Phase: 'PythonHandler'
Handler: 'bc.deploy.modpython'
Traceback (most recent call last):
File "/home/dalidada/webapps/birthconfidence/lib/python2.5/mod_python/importer.py", line 1537, in HandlerDispatch
default=default_handler, arg=req, silent=hlist.silent)
File "/home/dalidada/webapps/birthconfidence/lib/python2.5/mod_python/importer.py", line 1229, in _process_target
result = _execute_target(config, req, object, arg)
File "/home/dalidada/webapps/birthconfidence/lib/python2.5/mod_python/importer.py", line 1128, in _execute_target
result = object(arg)
File "/home/dalidada/webapps/birthconfidence/bc/deploy/modpython.py", line 33, in handler
return PinaxModPythonHandler()(req)
File "/home/dalidada/webapps/birthconfidence/bc/deploy/modpython.py", line 29, in __call__
return super(PinaxModPythonHandler, self).__call__(req)
File "/home/dalidada/webapps/birthconfidence/lib/python2.5/django/core/handlers/modpython.py", line 191, in __call__
self.load_middleware()
File "/home/dalidada/webapps/birthconfidence/lib/python2.5/django/core/handlers/base.py", line 40, in load_middleware
raise exceptions.ImproperlyConfigured, 'Error importing middleware %s: "%s"' % (mw_module, e)
ImproperlyConfigured: Error importing middleware django_openid.consumer: "No module named django_openid.consumer"
| [
"How to use pdb with mod_wsgi is documented on the mod_wsgi site. See:\nhttp://code.google.com/p/modwsgi/wiki/DebuggingTechniques#Python_Interactive_Debugger\nOther debugging techniques are shown on the same page.\n"
] | [
1
] | [] | [] | [
"apache",
"django",
"mod_python",
"python"
] | stackoverflow_0000647005_apache_django_mod_python_python.txt |
Q:
Python: How to detect debug interpreter
How can I detect in my Python script if it's being run by the debug interpreter (i.e. python_d.exe rather than python.exe)? I need to change the paths to some DLLs that I pass to an extension.
E.g. I'd like to do something like this at the start of my Python script:
#get paths to graphics dlls
if debug_build:
d3d9Path = "bin\\debug\\direct3d9.dll"
d3d10Path = "bin\\debug\\direct3d10.dll"
openGLPath = "bin\\debug\\openGL2.dll"
else:
d3d9Path = "bin\\direct3d9.dll"
d3d10Path = "bin\\direct3d10.dll"
openGLPath = "bin\\openGL2.dll"
I thought about adding an "IsDebug()" method to the extension which would return true if it is the debug build (i.e. was built with "#define DEBUG") and false otherwise. But this seems a bit of a hack for something I'm sure I can get Python to tell me...
A:
Distutils uses sys.gettotalrefcount to detect a debug Python build:
# ...
if hasattr(sys, 'gettotalrefcount'):
plat_specifier += '-pydebug'
this method doesn't rely on an executable name '*_d.exe'. It works for any name.
this method is cross-platform. It doesn't depend on the '_d.pyd' suffix.
See Debugging Builds and Misc/SpecialBuilds.txt
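Applied to the question's example, a minimal sketch (the DLL paths are the asker's own):
import sys

debug_build = hasattr(sys, 'gettotalrefcount')  # True only on a pydebug build

if debug_build:
    d3d9Path = "bin\\debug\\direct3d9.dll"
else:
    d3d9Path = "bin\\direct3d9.dll"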
A:
Better, because it also works when you are running an embedded Python interpreter is to check the return value of
imp.get_suffixes()
For a debug build it contains a tuple starting with '_d.pyd':
# debug build:
[('_d.pyd', 'rb', 3), ('.py', 'U', 1), ('.pyw', 'U', 1), ('.pyc', 'rb', 2)]
# release build:
[('.pyd', 'rb', 3), ('.py', 'U', 1), ('.pyw', 'U', 1), ('.pyc', 'rb', 2)]
A:
An easy way, if you don't mind relying on the file name:
if sys.executable.endswith("_d.exe"):
print "running on debug interpreter"
You can read more about the sys module and its various facilities here.
| Python: How to detect debug interpreter | How can I detect in my python script if its being run by the debug interpreter (ie python_d.exe rather than python.exe)? I need to change the paths to some dlls that I pass to an extension.
eg Id like to do something like this at the start of my python script:
#get paths to graphics dlls
if debug_build:
d3d9Path = "bin\\debug\\direct3d9.dll"
d3d10Path = "bin\\debug\\direct3d10.dll"
openGLPath = "bin\\debug\\openGL2.dll"
else:
d3d9Path = "bin\\direct3d9.dll"
d3d10Path = "bin\\direct3d10.dll"
openGLPath = "bin\\openGL2.dll"
I thought about adding an "IsDebug()" method to the extension which would return true if it is the debug build (ie was built with "#define DEBUG") and false otherwise. But this seems a bit of a hack for somthing Im sure I can get python to tell me...
| [
"Distutils use sys.gettotalrefcount to detect a debug python build:\n# ...\nif hasattr(sys, 'gettotalrefcount'):\n plat_specifier += '-pydebug'\n\n\nthis method doesn't rely on an executable name '*_d.exe'. It works for any name.\nthis method is cross-platform. It doesn't depend on '_d.pyd' suffix.\n\nSee Debugging Builds and Misc/SpecialBuilds.txt \n",
"Better, because it also works when you are running an embedded Python interpreter is to check the return value of\nimp.get_suffixes()\n\nFor a debug build it contains a tuple starting with '_d.pyd':\n# debug build:\n[('_d.pyd', 'rb', 3), ('.py', 'U', 1), ('.pyw', 'U', 1), ('.pyc', 'rb', 2)]\n\n# release build:\n[('.pyd', 'rb', 3), ('.py', 'U', 1), ('.pyw', 'U', 1), ('.pyc', 'rb', 2)]\n\n",
"An easy way, if you don't mind relying on the file name:\nif sys.executable.endswith(\"_d.exe\"):\n print \"running on debug interpreter\"\n\nYou can read more about the sys module and its various facilities here.\n"
] | [
15,
3,
2
] | [] | [] | [
"debugging",
"python"
] | stackoverflow_0000646518_debugging_python.txt |
Q:
How do I forward a complete email without downloading attachments?
Hello (and thanks in advance!)
I'm working in Python and I got IMAP and SMTP to work with my Gmail account.
I now need to forward select messages to another account (after reading their body).
How do I do this without downloading the attachments and recreating the entire message?
Thanks!
Tal.
A:
Look at the IMAP and SMTP Lemonade extensions. There is support for forwarding messages without downloading. It's a very new extension, so not many IMAP servers support it yet; I'm not sure if Gmail is one of them.
Section 2 of RFC 4550 contains the technical details on how this works.
| How do I forward a complete email without downloading attachments? | Hello (and thanks in advance!)
I'm working in Python and I got IMAP and SMTP to work with my Gmail account.
I now need to forward select messages to another account (after reading their body).
How do I do this without downloading the attachments and recreating the entire message?
Thanks!
Tal.
| [
"Look at the IMAP and SMTP Lemonade extensions. There is support for forwarding messages without downloading. It's a very new extension, so not many IMAP servers support it yet; I'm not sure if Gmail is one of them.\nSection 2 of RFC 4550 contains the technical details on how this works.\n"
] | [
1
] | [] | [] | [
"forward",
"gmail",
"imap",
"python",
"smtp"
] | stackoverflow_0000647500_forward_gmail_imap_python_smtp.txt |
Q:
Why can I not paste the output of Python's REPL without manual editing?
A huge amount of example Python code shows the output of the Python REPL, for example:
>>> class eg(object):
... def __init__(self, name):
... self.name = name
... def hi(self):
... print "Hi %s" % (self.name)
...
>>> greeter = eg("Bob")
>>> greeter.hi()
Hi Bob
>>>
Now, the obvious thing you want to do is run the above code... so, I run "python" and paste the above text in...
>>> >>> class eg(object):
File "<stdin>", line 1
>>> class eg(object):
^
SyntaxError: invalid syntax
>>> ... def __init__(self, name):
File "<stdin>", line 1
... def __init__(self, name):
^
The code is broken!?..
To get it to run, I would have to either..
copy-and-paste the lines one at a time, making sure I copy all the indentation correctly. If you screw it up (say, you miss a leading space), you have to start all over again
use a text editor to remove >>> and ..., then paste again
It's not a huge issue, but given how much example code is presented in this format, it seems strange you have to do this..
A:
How to run/adapt "the output of Python's REPL"
Use IPython shell
In [99]: %cpaste
Pasting code; enter '--' alone on the line to stop.
:>>> class eg(object):
:... def __init__(self, name):
:... self.name = name
:... def hi(self):
:... print "Hi %s" % (self.name)
:...
:>>> greeter = eg("Bob")
:>>> greeter.hi()
:--
Hi Bob
Use a capable text editor (e.g., C-x r k kills rectangular region in Emacs)
Use doctest module
Copy without the shell prompt in the first place (though I don't know how to do it on Google Chrome, for example).
Why the doctest format is used
Save the following to documentation.txt:
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do
eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad
minim veniam, quis nostrud exercitation ullamco laboris nisi ut
aliquip ex ea commodo consequat.
>>> class eg(object):
... def __init__(self, name):
... self.name = name
... def hi(self):
... print "Hi %s" % (self.name)
...
>>> greeter = eg("Bob")
>>> greeter.hi()
Hi Bob
>>>
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum
dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non
proident, sunt in culpa qui officia deserunt mollit anim id est
laborum.
Run:
$ python -c "import doctest; doctest.testfile('documentation.txt')" -v
Output:
Trying:
class eg(object):
def __init__(self, name):
self.name = name
def hi(self):
print "Hi %s" % (self.name)
Expecting nothing
ok
Trying:
greeter = eg("Bob")
Expecting nothing
ok
Trying:
greeter.hi()
Expecting:
Hi Bob
ok
1 items passed all tests:
  3 tests in documentation.txt
3 tests in 1 items.
3 passed and 0 failed.
Test passed.
If you add the following snippet at the end of your module it will test all code in its docstrings:
if __name__=="__main__":
import doctest; doctest.testmod()
QED
A:
I don't know if there's a good solution out there for this. Ideally, there'd be some way to modify the behavior of the interpreter to accept copy/paste input of this sort. Here are some alternate suggestions:
Use triple quoting to save the example to a string. Then, use exec:
>>> def chomp_prompt(s): return '\n'.join(ln[4:] for ln in s.splitlines())
...
>>> dirty = """>>> class eg(object):
... ... def __init__(self, name):
... ... self.name = name
... ... def hi(self):
... ... print "Hi %s" % (self.name)
... ...
... >>> greeter = eg("Bob")
... >>> greeter.hi()
... """
>>> clean = chomp_prompt(dirty)
>>> exec clean
Hi Bob
>>>
Not only does my solution all fit on one line (so it'll be easy for you to copy/paste it in the interpreter), it works on the above example :D :
>>> s = r'''>>> def chomp_prompt(s): return '\n'.join(ln[4:] for ln in s.splitlines())
... ...
... >>> dirty = """>>> class eg(object):
... ... ... def __init__(self, name):
... ... ... self.name = name
... ... ... def hi(self):
... ... ... print "Hi %s" % (self.name)
... ... ...
... ... >>> greeter = eg("Bob")
... ... >>> greeter.hi()
... ... """
... >>> clean = chomp_prompt(dirty)
... >>> exec clean'''
>>> s2 = chomp_prompt(s)
>>> exec s2
Hi Bob
My second suggestion is to look at ipython's ability to open an editor for you and execute what you entered there after you're done editing:
http://ipython.scipy.org/doc/rel-0.9.1/html/interactive/tutorial.html#source-code-handling-tips
If you set emacs as your editor, I know it has the ability to delete a rectangle of text (you can probably guess the command: M-x delete-rectangle), which would work perfectly for getting rid of those pesky prompts. I'm sure many other editors have this as well.
A:
"Why" questions rarely have useful answers.
For example, if I said that the reason why was to avoid a complex intellectual property infringement lawsuit, what does that do? Nothing. You still have to stop copying and pasting and start thinking and typing.
Or, for example, if I said that the reason why was given here, there's nothing actionable. The problem is that examples have to be typed instead of cut-and-pasted. And that problem is not solved by this information.
Indeed, the problem is really "I want to copy-and-paste without so much thinking and typing, how can I do that?" and the answer is the same.
You can't copy and paste the interactive session (except in doctest comments). You have to type it. Sorry.
A:
The code is presented this way because it is meant to be a step-by-step process. The three characters you see (">>>") are the prompt of the Python interactive interpreter, but it seems you know that already. When you have access to a console or a shell and type python, you will get something like this.
% python
Python 2.5.1 (r251:54863, Jan 13 2009, 10:26:13)
[GCC 4.0.1 (Apple Inc. build 5465)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
So really take it as an educational tool. :)
| Why can I not paste the output of Pythons REPL without manual-editing? | A huge amount of example Python code shows the output of the Python REPL, for example:
>>> class eg(object):
... def __init__(self, name):
... self.name = name
... def hi(self):
... print "Hi %s" % (self.name)
...
>>> greeter = eg("Bob")
>>> greeter.hi()
Hi Bob
>>>
Now, the obvious thing you want to do is run the above code.. so, I run "python" and paste the above text in..
>>> >>> class eg(object):
File "<stdin>", line 1
>>> class eg(object):
^
SyntaxError: invalid syntax
>>> ... def __init__(self, name):
File "<stdin>", line 1
... def __init__(self, name):
^
The code is broken!?..
To get it to run, I would have to either..
copy-and-paste the lines one at a time, making sure I copy all the indentation correctly. If you screw it up (say, miss a leading space, you have to start all over again)
use a text editor to remove >>> and ..., then paste again
It's not a huge issue, but given how much example code is presented in this format, it seems strange you have to do this..
| [
"How to run/adopt \"the output of Pythons REPL\"\n\nUse IPython shell\nIn [99]: %cpaste\nPasting code; enter '--' alone on the line to stop.\n:>>> class eg(object):\n:... def __init__(self, name):\n:... self.name = name\n:... def hi(self):\n:... print \"Hi %s\" % (self.name)\n:...\n:>>> greeter = eg(\"Bob\")\n:>>> greeter.hi()\n:--\nHi Bob\n\nUse a capable text editor (e.g., C-x r k kills rectangular region in Emacs)\nUse doctest module\n\nCopy without the shell prompt in the first place (though I don't know how to do it on Google Chrome, for example).\nWhy the doctest format is used\nSave the following to documentation.txt:\n\nLorem ipsum dolor sit amet, consectetur adipisicing elit, sed do\neiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad\nminim veniam, quis nostrud exercitation ullamco laboris nisi ut\naliquip ex ea commodo consequat. \n\n>>> class eg(object):\n... def __init__(self, name):\n... self.name = name\n... def hi(self):\n... print \"Hi %s\" % (self.name)\n... \n>>> greeter = eg(\"Bob\")\n>>> greeter.hi()\nHi Bob\n>>>\n\nDuis aute irure dolor in reprehenderit in voluptate velit esse cillum\ndolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non\nproident, sunt in culpa qui officia deserunt mollit anim id est\nlaborum.\n\nRun:\n$ python -c \"import doctest; doctest.testfile('documentation.txt')\" -v\n\nOutput:\nTrying:\n class eg(object):\n def __init__(self, name):\n self.name = name\n def hi(self):\n print \"Hi %s\" % (self.name)\nExpecting nothing\nok\nTrying:\n greeter = eg(\"Bob\")\nExpecting nothing\nok\nTrying:\n greeter.hi()\nExpecting:\n Hi Bob\nok\n1 items passed all tests:\n 3 tests in doctest.txt\n3 tests in 1 items.\n3 passed and 0 failed.\nTest passed.\n\nIf you add the following snippet at the end of your module it will test all code in its docstrings:\nif __name__==\"__main__\":\n import doctest; doctest.testmod()\n\nQED\n",
"I don't know if there's a good solution out there for this. Ideally, there'd be some way to modify the behavior of the interpretter to accept copy/paste input of this sort. Here are some alternate suggestions:\nUse triple quoting to save the example to a string. Then, use exec:\n>>> def chomp_prompt(s): return '\\n'.join(ln[4:] for ln in s.splitlines())\n...\n>>> dirty = \"\"\">>> class eg(object):\n... ... def __init__(self, name):\n... ... self.name = name\n... ... def hi(self):\n... ... print \"Hi %s\" % (self.name)\n... ...\n... >>> greeter = eg(\"Bob\")\n... >>> greeter.hi()\n... \"\"\"\n>>> clean = chomp_prompt(dirty)\n>>> exec clean\nHi Bob\n>>>\n\nNot only does my solution all fit on one line (so it'll be easy for you to copy/paste it in the interpreter), it works on the above example :D :\n>>> s = r'''>>> def chomp_prompt(s): return '\\n'.join(ln[4:] for ln in s.splitlines())\n... ...\n... >>> dirty = \"\"\">>> class eg(object):\n... ... ... def __init__(self, name):\n... ... ... self.name = name\n... ... ... def hi(self):\n... ... ... print \"Hi %s\" % (self.name)\n... ... ...\n... ... >>> greeter = eg(\"Bob\")\n... ... >>> greeter.hi()\n... ... \"\"\"\n... >>> clean = chomp_prompt(dirty)\n... >>> exec clean'''\n>>> s2 = chomp_prompt(s)\n>>> exec s2\nHi Bob\n\nMy second suggestion is to look at ipython's ability to open an editor for you and execute what you entered there after you're done editing:\nhttp://ipython.scipy.org/doc/rel-0.9.1/html/interactive/tutorial.html#source-code-handling-tips\nIf you set emacs as your editor, I know it has the ability to delete a rectangle of text (you can probably guess the command: M-x delete-rectangle), which would work perfectly for getting rid of those pesky prompts. I'm sure many other editors have this as well.\n",
"\"Why\" questions rarely have useful answers.\nFor example, if I said that the reason why was to avoid a complex intellectual property infringement lawsuit, what does that do? Nothing. You still have to stop copying and pasting and start thinking and typing.\nOr, for example, if I said that the reason why was given here, there's nothing actionable. The problem is that examples have to be typed instead of cut-and-pasted. And that problem is not solved by this information.\nIndeed, the problem is really \"I want to copy-and-paste without so much thinking and typing, how can I do that?\" and the answer is the same. \nYou can't copy and paste the interactive session (except in doctest comments). You have to type it. Sorry.\n",
"The code is presented this way, because it is meant to be a step by step process. The three characters you see \">>>\" are the ones of the Python IDE, but it seems you know that already. When you have access to a console or a shell, and type python, you will get something like this.\n% python\nPython 2.5.1 (r251:54863, Jan 13 2009, 10:26:13) \n[GCC 4.0.1 (Apple Inc. build 5465)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>>\n\nSo really take it as an educational tool. :)\n"
] | [
9,
2,
1,
0
] | [] | [] | [
"python",
"read_eval_print_loop",
"user_interface"
] | stackoverflow_0000647142_python_read_eval_print_loop_user_interface.txt |
Q:
socket trouble in python
I have a server that's written in C, and I want to write a client in python. The python client will send a string "send some_file" when it wants to send a file, followed by the file's contents, and the string "end some_file". Here is my client code :
file = sys.argv[1]
host = sys.argv[2]
port = int(sys.argv[3])
sock = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
sock.connect((host,port))
send_str = "send %s" % file
end_str = "end %s" % file
sock.send(send_str)
sock.send("\n")
sock.send(open(file).read())
sock.send("\n")
sock.send(end_str)
sock.send("\n")
The problem is this:
the server receives the "send some_file" string from a recv
at the second recv, the file's contents and the "end file" string arrive together
In the server code, the buffer's size is 4096 bytes. I first noticed this bug when trying to send a file that's less than 4096 bytes.
How can I make sure that the server receives the strings independently?
A:
With socket programming, even if you do 2 independent sends, it doesn't mean that the other side will receive them as 2 independent recvs.
One simple solution that works for both strings and binary data is to: First send the number of bytes in the message, then send the message.
Here is what you should do for each message whether it is a file or a string:
Sender side:
Send 4 bytes that holds the number of bytes in the following send
Send the actual data
Receiver side:
From the receiver side do a loop that blocks on a read for 4 bytes
Then do a block on a read for the number of characters specified in the preceding 4 bytes to get the data.
Along with the 4-byte length header I mentioned above, you could also add a constant size command type header (integer again) that describes what's in the following recv.
You could also consider using a protocol like HTTP which already does a lot of the work for you and has nice wrapper libraries.
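Here is a minimal sketch of that scheme, assuming a 4-byte big-endian length header (the C side would read the same header and byte-swap it with ntohl):
import struct

def send_msg(sock, data):
    # 4-byte length header followed by the payload
    sock.sendall(struct.pack('>I', len(data)) + data)

def recv_exact(sock, n):
    # recv() may return fewer bytes than requested, so loop
    buf = ''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise EOFError("socket closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock):
    (length,) = struct.unpack('>I', recv_exact(sock, 4))
    return recv_exact(sock, length)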
A:
There are two much simpler ways I can think of in which you can solve this. Both involve some changes in the behaviors of both the client and the server.
The first is to use padding. Let's say you're sending a file. What you would do is read the file, encode this into a simpler format like Base64, then send enough space characters to fill up the rest of the 4096-byte 'chunk'. What you would do is something like this:
from cStringIO import StringIO
import base64
import socket
import sys
CHUNK_SIZE = 4096 # bytes
# Extract the socket data from the file arguments
filename = sys.argv[1]
host = sys.argv[2]
port = int(sys.argv[3])
# Make the socket
sock = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
sock.connect((host,port))
# Prepare the message to send
send_str = "send %s" % (filename,)
end_str = "end %s" % (filename,)
data = open(filename).read()
encoded_data = base64.b64encode(data)
encoded_fp = StringIO(encoded_data)
sock.send(send_str + '\n')
chunk = encoded_fp.read(CHUNK_SIZE)
while chunk:
sock.send(chunk)
if len(chunk) < CHUNK_SIZE:
sock.send(' ' * (CHUNK_SIZE - len(chunk)))
chunk = encoded_fp.read(CHUNK_SIZE)
sock.send('\n' + end_str + '\n')
This example seems a little more involved, but it will ensure that the server can keep reading data in 4096-byte chunks, and all it has to do is Base64-decode the data on the other end (a C library for which is available here. The Base64 decoder ignores the extra spaces, and the format can handle both binary and text files (what would happen, for example, if a file contained the "end filename" line? It would confuse the server).
The other approach is to prefix the sending of the file with the file's length. So for example, instead of sending send filename you might say send 4192 filename to specify that the length of the file is 4192 bytes. The client would have to build the send_str based on the length of the file (as read into the data variable in the code above), and would not need to use Base64 encoding as the server would not try to interpret any end filename syntax appearing in the body of the sent file. This is what happens in HTTP; the Content-length HTTP header is used to specify how long the sent data is. An example client might look like this:
import socket
import sys
# Extract the socket data from the file arguments
filename = sys.argv[1]
host = sys.argv[2]
port = int(sys.argv[3])
# Make the socket
sock = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
sock.connect((host,port))
# Prepare the message to send
data = open(filename).read()
send_str = "send %d %s" % (len(data), filename)
end_str = "end %s" % (filename,)
sock.send(send_str + '\n')
sock.send(data)
sock.send('\n' + end_str + '\n')
Either way, you're going to have to make changes to both the server and the client. In the end it would probably be easier to implement a rudimentary HTTP server (or to get one which has already been implemented) in C, as it seems that's what you're doing here. The encoding/padding solution is quick but creates a lot of redundantly-sent data (as Base64 typically causes a 33% increase in the quantity of data sent), the length prefix solution is also easy from the client side but may be more difficult on the server.
A:
Possibly using
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
will help send each packet as you want it as this disables Nagle's algorithm, as most TCP stacks use this to join several packets of small-sized data together (and is on by default I believe)
| socket trouble in python | I have a server that's written in C, and I want to write a client in python. The python client will send a string "send some_file" when it wants to send a file, followed by the file's contents, and the string "end some_file". Here is my client code :
file = sys.argv[1]
host = sys.argv[2]
port = int(sys.argv[3])
sock = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
sock.connect((host,port))
send_str = "send %s" % file
end_str = "end %s" % file
sock.send(send_str)
sock.send("\n")
sock.send(open(file).read())
sock.send("\n")
sock.send(end_str)
sock.send("\n")
The problem is this :
the server receives the "send some_file" string from a recv
at the second recv, the file's content and the "end file" strings are sent together
In the server code, the buffer's size is 4096. I first noticed this bug when trying to send a file that's less than 4096k.
How can I make sure that the server receives the strings independently?
| [
"With socket programming, even if you do 2 independent sends, it doesn't mean that the other side will receive them as 2 independent recvs.\nOne simple solution that works for both strings and binary data is to: First send the number of bytes in the message, then send the message.\nHere is what you should do for each message whether it is a file or a string:\nSender side:\n\nSend 4 bytes that holds the number of bytes in the following send\nSend the actual data\n\nReceiver side:\n\nFrom the receiver side do a loop that blocks on a read for 4 bytes\nThen do a block on a read for the number of characters specified in the preceding 4 bytes to get the data.\n\nAlong with the 4-byte length header I mentioned above, you could also add a constant size command type header (integer again) that describes what's in the following recv.\nYou could also consider using a protocol like HTTP which already does a lot of the work for you and has nice wrapper libraries.\n",
"There are two much simpler ways I can think of in which you can solve this. Both involve some changes in the behaviors of both the client and the server.\nThe first is to use padding. Let's say you're sending a file. What you would do is read the file, encode this into a simpler format like Base64, then send enough space characters to fill up the rest of the 4096-byte 'chunk'. What you would do is something like this:\nfrom cStringIO import StringIO\nimport base64\nimport socket\nimport sys\n\nCHUNK_SIZE = 4096 # bytes\n\n# Extract the socket data from the file arguments\nfilename = sys.argv[1]\nhost = sys.argv[2]\nport = int(sys.argv[3])\n# Make the socket\nsock = socket.socket(socket.AF_INET,socket.SOCK_STREAM)\nsock.connect((host,port))\n# Prepare the message to send\nsend_str = \"send %s\" % (filename,)\nend_str = \"end %s\" % (filename,)\ndata = open(filename).read()\nencoded_data = base64.b64encode(data)\nencoded_fp = StringIO(encoded_data)\nsock.send(send_str + '\\n')\nchunk = encoded_fp.read(CHUNK_SIZE)\nwhile chunk:\n sock.send(chunk)\n if len(chunk) < CHUNK_SIZE:\n sock.send(' ' * (CHUNK_SIZE - len(chunk)))\n chunk = encoded_fp.read(CHUNK_SIZE)\nsock.send('\\n' + end_str + '\\n')\n\nThis example seems a little more involved, but it will ensure that the server can keep reading data in 4096-byte chunks, and all it has to do is Base64-decode the data on the other end (a C library for which is available here. The Base64 decoder ignores the extra spaces, and the format can handle both binary and text files (what would happen, for example, if a file contained the \"end filename\" line? It would confuse the server).\nThe other approach is to prefix the sending of the file with the file's length. So for example, instead of sending send filename you might say send 4192 filename to specify that the length of the file is 4192 bytes. The client would have to build the send_str based on the length of the file (as read into the data variable in the code above), and would not need to use Base64 encoding as the server would not try to interpret any end filename syntax appearing in the body of the sent file. This is what happens in HTTP; the Content-length HTTP header is used to specify how long the sent data is. An example client might look like this:\nimport socket\nimport sys\n\n# Extract the socket data from the file arguments\nfilename = sys.argv[1]\nhost = sys.argv[2]\nport = int(sys.argv[3])\n# Make the socket\nsock = socket.socket(socket.AF_INET,socket.SOCK_STREAM)\nsock.connect((host,port))\n# Prepare the message to send\ndata = open(filename).read()\nsend_str = \"send %d %s\" % (len(data), filename)\nend_str = \"end %s\" % (filename,)\nsock.send(send_str + '\\n')\nsock.send(data)\nsock.send('\\n' + end_str + '\\n')\n\nEither way, you're going to have to make changes to both the server and the client. In the end it would probably be easier to implement a rudimentary HTTP server (or to get one which has already been implemented) in C, as it seems that's what you're doing here. The encoding/padding solution is quick but creates a lot of redundantly-sent data (as Base64 typically causes a 33% increase in the quantity of data sent), the length prefix solution is also easy from the client side but may be more difficult on the server.\n",
"Possibly using\nsock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)\n\nwill help send each packet as you want it as this disables Nagle's algorithm, as most TCP stacks use this to join several packets of small-sized data together (and is on by default I believe)\n"
] | [
9,
1,
0
] | [
"TCP/IP data is buffered, more-or-less randomly.\nIt's just a \"stream\" of bytes. If you want, you can read it as though it's delimited by '\\n' characters. However, it is not broken into meaningful chunks; nor can it be. It must be a continuous stream of bytes.\nHow are you reading it in C? Are you reading up to a '\\n'? Or are you simply reading everything in the buffer?\nIf you're reading everything in the buffer, you should see the lines buffered more-or-less randomly.\nIf you read up to a '\\n', however, you'll see each line one at a time.\nIf you want this to really work, you should read http://www.w3.org/Protocols/rfc959/. This shows how to transfer files simply and reliably: use two sockets. One for the commands, the other for the data.\n"
] | [
-3
] | [
"python",
"sockets"
] | stackoverflow_0000647813_python_sockets.txt |
Q:
File handling in Django when posting image from service call
I am using PyAMF to transfer a dynamically generated large image from Flex to Django.
On the Django side i receive the encodedb64 data as a parameter:
My Item model as an imagefield.
What i have trouble to do is saving the data as the File Django Field.
def save_item(request, uname, data):
""" Save a new item """
item = Item()
img = cStringIO.StringIO()
img.write(base64.b64decode(data))
myFile = File(img)
item.preview.save('fakename.jpg', myFile, save=False)
That would not work because my File object from StringIO is missing some properties such as mode, name, etc.
I also think that using StringIO will load the image data completely into memory, which is bad, so I may just give up on AMF for this particular case and use POST.
What do you think ?
A:
In django.core.files.base you can find the class ContentFile. That class extends the basic Django File class, so you do not need StringIO (which ContentFile though uses internally). The modified save method looks like this:
from django.core.files.base import ContentFile
def save_item(request, uname, data):
item = Item()
myFile = ContentFile(base64.b64decode(data))
item.preview.save('fakename.jpg', myFile, save=False)
| File handling in Django when posting image from service call | I am using PyAMF to transfer a dynamically generated large image from Flex to Django.
On the Django side i receive the encodedb64 data as a parameter:
My Item model as an imagefield.
What i have trouble to do is saving the data as the File Django Field.
def save_item(request, uname, data):
""" Save a new item """
item = Item()
img = cStringIO.StringIO()
img.write(base64.b64decode(data))
myFile = File(img)
item.preview.save('fakename.jpg', myFile, save=False)
That would not work because my File object from StringIO misses some properties such as mode, name etc.
I also think that using StringIO will load the image data completely in memory which is bad so i may just give up on the AMF for this particular case and use POST.
What do you think ?
| [
"In django.core.files.base you can find the class ContentFile. That class extends the basic Django File class, so you do not need StringIO (which ContentFile though uses internally). The modified save method looks like this:\nfrom django.core.files.base import ContentFile\ndef save_item(request, uname, data):\n item = Item()\n myFile = ContentFile(base64.b64decode(data))\n item.preview.save('fakename.jpg', myFile, save=False)\n\n"
] | [
9
] | [] | [] | [
"apache_flex",
"django",
"pyamf",
"python"
] | stackoverflow_0000647888_apache_flex_django_pyamf_python.txt |
Q:
How to make a list comprehension with the group() method in python?
I'm trying to write a little script to clean my directories. In fact I have:
pattern = re.compile(format[i])
...
current_f.append(pattern.search(str(ls)))
and I want to use a list comprehension but when I try:
In [25]: [i for i in current_f.group(0)]
I get:
AttributeError: 'list' object has no attribute 'group'
So how to make a list comprehension using group()? Is there another way to do what I want?
A:
Are you trying to do this?:
[f.group(0) for f in current_f]
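One caveat: pattern.search() returns None when nothing matches, and None has no group() method, so you may want to filter those out:
[m.group(0) for m in current_f if m is not None]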
| How to make a list comprehension with the group() method in python? | I'm trying to write a little script to clean my directories. In fact I have:
pattern = re.compile(format[i])
...
current_f.append(pattern.search(str(ls)))
and I want to use a list comprehension but when I try:
In [25]: [i for i in current_f.group(0)]
I get:
AttributeError: 'list' object has no attribute 'group'
So how to make a list comprehension using group()? Is there another way to do what I want?
| [
"Are you trying to do this?:\n[f.group(0) for f in current_f]\n\n"
] | [
7
] | [] | [] | [
"list",
"list_comprehension",
"python"
] | stackoverflow_0000648276_list_list_comprehension_python.txt |
Q:
Why don't Django admin "Today" and "Now" buttons show up in Safari?
I'm developing a Django application that contains a model with a date/time field. On my local copy of the application, the admin page for that particular model shows this for the date/time field:
(screenshot of the expected field, with "Today" and "Now" shortcuts: http://www.cs.wm.edu/~mpd/images/bugs/django-date-local.png)
This is as expected. However, when I deploy to my webserver and use the application from there, I get this:
(screenshot of the field as rendered on the server, missing the shortcuts: http://www.cs.wm.edu/~mpd/images/bugs/django-date-server.png)
The application on the server is exactly the same as my local copy, except that I have debugging disabled on the server (but I don't think that should matter...should it?). Why does the admin app on the server differ from the local admin app?
Update
The issue seems localized to Safari. The "Today" and "Now" buttons appear when the admin site is accessed via Firefox. It looks like Safari can't download some of the JavaScript files necessary to show these widgets (strange that Firefox can, though).
I noticed that Safari is receiving a "304 Not Modified" code for the following files, but I'm not sure what that means, or how to fix it. Obviously, these are the JavaScript files and images that control the date/time widget:
RelatedObjectLookup.js
DateTimeShortcuts.js
icon_calendar.gif
icon_clock.gif
A:
I think you have to look at what is different between your Firefox configuration and your Safari config.
Off the top of my head:
One could be configured to use a proxy (messing with the traffic) and the other not. Make sure the configuration is the same in both.
Safari could have cached the error; clear the cache before testing again.
Try to access the gif files directly from the browser (by inputting the full URL of the images) and run Wireshark on the wire, comparing both GET requests and responses. Something WILL be different that will help you track down the problem.
A:
If you're getting 304 on those files, flush your browser's cache and try again.
If it doesn't load anyway, make sure you are getting 200 OK.
A:
It seems like you have admin media missing (hence the JS and images aren't loading). I generally do the following.
in settings.py
ADMIN_MEDIA_PREFIX = '/media/admin/'
Then I symlink path of django.contrib.admin.media within my media dir. Say:
ln -s /var/lib/python-support/python2.5/django/contrib/admin/media/ /var/www/media/admin
The development server serves admin media automatically. But on production servers one generally prefers to serve static stuff directly from Apache (or whatever server).
A:
Check the media location, permissions and setup on your deployment server.
http://www.djangobook.com/en/1.0/chapter20/
A:
Have you tried checking out Firebug's NET tab to see if the admin javascript/css/image files are all loading correctly?
I had that problem once.
Compare all those files from the dev server against the production server.
| Why don't Django admin "Today" and "Now" buttons show up in Safari? | I'm developing a Django application that contains a model with a date/time field. On my local copy of the application, the admin page for that particular model shows this for the date/time field:
alt text http://www.cs.wm.edu/~mpd/images/bugs/django-date-local.png
This is as expected. However, when I deploy to my webserver and use the application from there, I get this:
alt text http://www.cs.wm.edu/~mpd/images/bugs/django-date-server.png
The application on the server is exactly the same as my local copy, except that I have debugging disabled on the server (but I don't think that should matter...should it?). Why does the admin app on the server differ from the local admin app?
Update
The issue seems localized to Safari. The "Today" and "Now" buttons appear when the admin site is accessed via Firefox. It looks like Safari can't download some of the JavaScript files necessary to show these widgets (strange that Firefox can, though).
I noticed that Safari is receiving a "304 Not Modified" code for the following files, but I'm not sure what that means, or how to fix it. Obviously, these are the JavaScript files and images that control the date/time widget:
RelatedObjectLookup.js
DateTimeShortcuts.js
icon_calendar.gif
icon_clock.gif
| [
"I think you have to look at what is different between your firefox configuration and safary config\nOff the top of my head:\n\nOne could be configured to use a proxy (messing with the trafic) the other not. Make sure the configuration is the same in both.\nSafari could have cached the error clear the cache before testing again.\nTry to access the gif files directly from the browser (by inputting the full url of the images) and run wireshark on the wire comparing both GET requests and responses. Something WILL be different that will help you to track the problem.\n\n",
"If you're getting 304 on those files. Flush your browser's cache and try again.\nIf it doesn't load again anyway, make sure you are getting 200 OK.\n",
"It seems like you have admin media missing (hence js and images aren't loading). I generally do following.\nin settings.py\nADMIN_MEDIA_PREFIX = '/media/admin/'\n\nThen I symlink path of django.contrib.admin.media within my media dir. Say:\nln -s /var/lib/python-support/python2.5/django/contrib/admin/media/ /var/www/media/admin\n\nDevelopment server serves admin media automatically. But on production servers one generally prefers to server static stuff directly from apache (or whatever server).\n",
"Check the media location, permissions and setup on your deployment server.\nhttp://www.djangobook.com/en/1.0/chapter20/\n",
"Have you tried checking out firebug's NET tab to see if the admin javascript/css/image files are all loading correctly?\nI had that problem once.\nCompare all those files from the dev server against the production server.\n"
] | [
3,
2,
1,
0,
0
] | [] | [] | [
"django",
"javascript",
"python",
"safari",
"webkit"
] | stackoverflow_0000443920_django_javascript_python_safari_webkit.txt |
Q:
Print out list of function parameters in Python
Is there a way to print out a function's parameter list?
For example:
def func(a, b, c):
pass
print_func_parameters(func)
Which will produce something like:
["a", "b", "c"]
A:
Use the inspect module.
>>> import inspect
>>> inspect.getargspec(func)
(['a', 'b', 'c'], None, None, None)
The first part of the returned tuple is what you're looking for.
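So a minimal sketch of the print_func_parameters helper the question asks for might be:
import inspect

def print_func_parameters(func):
    # args is the first element of the (args, varargs, keywords, defaults) tuple
    print inspect.getargspec(func)[0]

print_func_parameters(func)   # prints ['a', 'b', 'c']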
A:
Read the source. Seriously. Python programs and libraries are provided as source. You can read the source.
A:
You might also try the built-in help() function, which will provide you not only with a list of the named parameters, but also a description of func() if you provided a docstring:
>>> def func(a, b, c):
... """do x to a,b,c and return the result"""
... pass
...
>>> help(func)
gives you
Help on function func in module __main__:
func(a, b, c)
do x to a,b,c and return the result
Most modules are provided with at least some sort of built-in documentation.
| Print out list of function parameters in Python | Is there a way to print out a function's parameter list?
For example:
def func(a, b, c):
pass
print_func_parametes(func)
Which will produce something like:
["a", "b", "c"]
| [
"Use the inspect module.\n>>> import inspect\n>>> inspect.getargspec(func)\n(['a', 'b', 'c'], None, None, None)\n\nThe first part of returned tuple is what you're looking for.\n",
"Read the source. Seriously. Python programs and libraries are provided as source. You can read the source.\n",
"You might also try the built-in help() function, which will provide you not only with a list of the named parameters, but also a description of func() if you provided a docstring:\n>>> def func(a, b, c):\n... \"\"\"do x to a,b,c and return the result\"\"\"\n... pass\n... \n>>> help(func)\n\ngives you \n \nHelp on function func in module __main__:\n\nfunc(a, b, c)\n do x to a,b,c and return the result\n\nMost modules are provided with at least some sort of built-in documentation.\n"
] | [
17,
6,
2
] | [] | [] | [
"python"
] | stackoverflow_0000648679_python.txt |
Q:
Why are Python objects of different types ordered by type names?
From Python docs: http://docs.python.org/library/stdtypes.html#comparisons
Implementation note: Objects of different types except numbers are ordered by their type names; objects of the same types that don’t support proper comparison are ordered by their address.
Is there any reason for choosing to do this over raising an exception?
A:
About four lines up from that line you quoted:
Objects of different types, except
different numeric types and different
string types, never compare equal;
such objects are ordered consistently
but arbitrarily (so that sorting a
heterogeneous array yields a
consistent result).
You don't want to raise exceptions when sorting a list of differently typed objects.
A:
It can be useful for objects of different types to be collected into a single, sorted list, in a definite order. Giving all objects a stable sort order makes this the default behavior.
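For illustration, sorting a heterogeneous list under CPython 2 (the exact order is implementation-defined: numbers sort before other types, then other types sort by type name):
>>> sorted([3, 'b', 1.5, (1, 2), 'a'])
[1.5, 3, 'a', 'b', (1, 2)]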
| Why are Python objects of different types ordered by type names? | From Python docs: http://docs.python.org/library/stdtypes.html#comparisons
Implementation note: Objects of different types except numbers are ordered by their type names; objects of the same types that don’t support proper comparison are ordered by their address.
Is there any reason for choosing to do this over raising an exception?
| [
"About four lines up from that line you quoted:\n\nObjects of different types, except\n different numeric types and different\n string types, never compare equal;\n such objects are ordered consistently\n but arbitrarily (so that sorting a\n heterogeneous array yields a\n consistent result).\n\nYou don't want to raise exceptions when sorting a list of differently typed objects.\n",
"It can be useful for objects of different types to be collected into a single, sorted list, in a definite order. By giving all objects a stable sort order, this behavior is default.\n"
] | [
5,
1
] | [] | [] | [
"python"
] | stackoverflow_0000649191_python.txt |
Q:
In Python, is it possible to access the class which contains a method, given only a method object?
I'm pretty new to Python, and haven't been able to find an answer to this question from searching online.
Here is an example decorator that does nothing (yet)
def my_decorator(text):
def wrap(f):
# grab magic f.parent_class_object.my_var and append text
def wrap_f(*args, **kwargs):
f(*args, **kwargs)
return wrap_f
return wrap
Here is an example class
class MyClass:
my_var = []
@my_decorator('sometext')
    def my_func(self):
# do some super cool thing
In my decorator i'd like to access the class object for MyClass and add in 'sometext' to the MyClass.my_var list. My goal is to populate my_var with decorated values at module load time, not function call time.
Is there a way i can navigate from f to MyClass in order to do this? I know that *args[0] is the instance of MyClass, but that is only available when the function is called.
A:
It is not possible to read it from a function inside a decorator, since the "methods" are still normal functions while the class is being compiled. Also, the class name has not been defined while it is being compiled.
What you could do is supply the my_var list to the decorator.
class MyClass:
my_var = []
@my_decorator(my_var, 'sometext')
    def my_func(self):
# do some super cool thing
my_var is still a normal variable by then.
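To complete the picture, a minimal sketch of the decorator side under that approach - the append runs while the class body executes, i.e. at module load time, which matches the asker's goal:
def my_decorator(lst, text):
    def wrap(f):
        lst.append(text)              # runs as the class body is executed
        def wrap_f(*args, **kwargs):
            return f(*args, **kwargs)
        return wrap_f
    return wrap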
| In Python, is it possible to access the class which contains a method, given only a method object? | I'm pretty new to Python, and haven't been able to find an answer to this question from searching online.
Here is an example decorator that does nothing (yet)
def my_decorator(text):
def wrap(f):
# grab magic f.parent_class_object.my_var and append text
def wrap_f(*args, **kwargs):
f(*args, **kwargs)
return wrap_f
return wrap
Here is an example class
class MyClass:
my_var = []
@my_decorator('sometext')
def my_func()
# do some super cool thing
In my decorator i'd like to access the class object for MyClass and add in 'sometext' to the MyClass.my_var list. My goal is to populate my_var with decorated values at module load time, not function call time.
Is there a way i can navigate from f to MyClass in order to do this? I know that *args[0] is the instance of MyClass, but that is only available when the function is called.
| [
"It is not possible to read it from a function inside a class decorator, since the \"methods\" are still normal function while the class is being compiled. Also, the class name has not been defined while it is being compiled.\nWhat you could do is supply the my_var list to the decorator.\nclass MyClass:\n my_var = []\n\n @my_decorator(my_var, 'sometext')\n def my_func()\n # do some super cool thing\n\nmy_var is still a normal variable by then.\n"
] | [
3
] | [] | [] | [
"decorator",
"python"
] | stackoverflow_0000649329_decorator_python.txt |
Q:
wxPython auinotebook.GetSelection() return index to the first page
Why does 'GetSelection()' return the index of the first page and not the last one created in 'init' and 'new_panel'? It does return the correct index in the 'click' method.
The output should be 0 0 1 1 2 2, but mine is 0 0 0 0 0 0.
Running the latest version of Python and wxPython on Arch Linux.
Ørjan Pettersen
#!/usr/bin/python
#12_aui_notebook1.py
import wx
import wx.lib.inspection
class MyFrame(wx.Frame):
def __init__(self, *args, **kwds):
wx.Frame.__init__(self, *args, **kwds)
self.nb = wx.aui.AuiNotebook(self)
self.new_panel('Page 1')
print self.nb.GetSelection()
self.new_panel('Page 2')
print self.nb.GetSelection()
self.new_panel('Page 3')
print self.nb.GetSelection()
def new_panel(self, nm):
pnl = wx.Panel(self)
pnl.identifierTag = nm
self.nb.AddPage(pnl, nm)
self.sizer = wx.BoxSizer()
self.sizer.Add(self.nb, 1, wx.EXPAND)
self.SetSizer(self.sizer)
pnl.SetFocus() # Have focused the last panel.
print self.nb.GetSelection()
pnl.Bind(wx.EVT_LEFT_DOWN, self.click)
def click(self, event):
print 'Mouse click'
print self.nb.GetSelection()
print self.nb.GetPageText(self.nb.GetSelection())
class MyApp(wx.App):
def OnInit(self):
frame = MyFrame(None, -1, '12_aui_notebook1.py')
frame.Show()
self.SetTopWindow(frame)
return 1
if __name__ == "__main__":
app = MyApp(0)
# wx.lib.inspection.InspectionTool().Show()
app.MainLoop()
A:
The solution was pretty simple. The problem seemed to be that creating the new page didn't generate a page change event.
The solution is:
self.nb.AddPage(pnl, nm, select=True)
Adding 'select=True' will trigger a page change event. So problem solved.
Another solution is to add this line:
self.nb.SetSelection(self.nb.GetPageCount()-1)
They both do the same. Trigger a page change event to the last added page.
def new_panel(self, nm):
pnl = wx.Panel(self)
pnl.identifierTag = nm
self.nb.AddPage(pnl, nm, select=True)
self.sizer = wx.BoxSizer()
self.sizer.Add(self.nb, 1, wx.EXPAND)
self.SetSizer(self.sizer)
#self.nb.SetSelection(self.nb.GetPageCount()-1)
pnl.SetFocus() # Have focused the last panel.
print self.nb.GetSelection()
A:
I ran your example and got the correct output:
0
0
1
1
2
2
I'm using the latest windows release of wxPython
| wxPython auinotebook.GetSelection() return index to the first page | Why do 'GetSelection()' return the index to the first page and not the last created in 'init' and 'new_panel'? It do return correct index in the 'click' method.
The output should be 0 0 1 1 2 2, but mine is 0 0 0 0 0 0.
Running latest version of python and wxpython in ArchLinux.
Ørjan Pettersen
#!/usr/bin/python
#12_aui_notebook1.py
import wx
import wx.lib.inspection
class MyFrame(wx.Frame):
def __init__(self, *args, **kwds):
wx.Frame.__init__(self, *args, **kwds)
self.nb = wx.aui.AuiNotebook(self)
self.new_panel('Page 1')
print self.nb.GetSelection()
self.new_panel('Page 2')
print self.nb.GetSelection()
self.new_panel('Page 3')
print self.nb.GetSelection()
def new_panel(self, nm):
pnl = wx.Panel(self)
pnl.identifierTag = nm
self.nb.AddPage(pnl, nm)
self.sizer = wx.BoxSizer()
self.sizer.Add(self.nb, 1, wx.EXPAND)
self.SetSizer(self.sizer)
pnl.SetFocus() # Have focused the last panel.
print self.nb.GetSelection()
pnl.Bind(wx.EVT_LEFT_DOWN, self.click)
def click(self, event):
print 'Mouse click'
print self.nb.GetSelection()
print self.nb.GetPageText(self.nb.GetSelection())
class MyApp(wx.App):
def OnInit(self):
frame = MyFrame(None, -1, '12_aui_notebook1.py')
frame.Show()
self.SetTopWindow(frame)
return 1
if __name__ == "__main__":
app = MyApp(0)
# wx.lib.inspection.InspectionTool().Show()
app.MainLoop()
| [
"The solution was pretty simple. The problem seemed to be that creating the new page didn't generate a page change event.\nThe solution is:\nself.nb.AddPage(pnl, nm, select=True)\n\nAdding 'select=True' will trigger a page change event. So problem solved.\nAnother solution is to add this line:\nself.nb.SetSelection(self.nb.GetPageCount()-1)\n\nThey both do the same. Trigger a page change event to the last added page.\ndef new_panel(self, nm):\n pnl = wx.Panel(self)\n pnl.identifierTag = nm\n self.nb.AddPage(pnl, nm, select=True) \n self.sizer = wx.BoxSizer()\n self.sizer.Add(self.nb, 1, wx.EXPAND)\n self.SetSizer(self.sizer)\n #self.nb.SetSelection(self.nb.GetPageCount()-1)\n pnl.SetFocus() # Have focused the last panel.\n print self.nb.GetSelection()\n\n",
"I ran your example and got the correct output:\n0\n0\n1\n1\n2\n2\n\nI'm using the latest windows release of wxPython\n"
] | [
1,
0
] | [] | [] | [
"python",
"wxpython",
"wxwidgets"
] | stackoverflow_0000647906_python_wxpython_wxwidgets.txt |
Q:
Directory checksum with python?
So I'm in the middle of web-based filesystem abstraction layer development.
Just like a file browser, except it has some extra features like freaky permissions etc.
I would like users to be notified somehow about directory changes.
So, e.g. when someone uploads a new file via FTP, certain users should get a proper message. It is not required for the message to be extra detailed; I don't really need to show the exact resource changed. The parent directory name should be enough.
What approach would you recommend?
A:
If your server is Linux you can do this with something like inotify
If the only updates are coming from FTP, then another solution I've used in the past is to write an add-on module to ProFTPD that performs the "notification" once upload is complete.
A:
See this question: How to quickly find added / removed files?
But if you can control the upload somehow (i.e. use HTTP POST instead of FTP), you could simply send a notification after the upload has completed. This has the additional benefit that it would be simple to make sure users never see a partial file.
A:
A simple approach would be to monitor/check the last modification date of the working directory (using os.stat() for example).
Whenever a file in a directory is modified, the working directory's (the directory the file is in) last modification date changes as well.
At least this works on the filesystems I am working on (ufs, ext3). I'm not sure if all filesystems do it this way.
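A minimal sketch of that polling approach (the path, interval and print-based notification below are assumptions for illustration, not part of the original answer):
import os, time

def watch_mtime(path, interval=5):
    # poll the directory's mtime and report when it changes
    last = os.stat(path).st_mtime
    while True:
        time.sleep(interval)
        current = os.stat(path).st_mtime
        if current != last:
            print 'directory changed:', path  # hook your user notification in here
            last = current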
| Directory checksum with python? | So I'm in the middle of web-based filesystem abstraction layer development.
Just like a file browser, except it has some extra features like freaky permissions etc.
I would like users to be notified somehow about directory changes.
So, e.g. when someone uploads a new file via FTP, certain users should get a proper message. It is not required for the message to be extra detailed; I don't really need to show the exact resource changed. The parent directory name should be enough.
What approach would you recommend?
| [
"If your server is Linux you can do this with something like inotify\nIf the only updates are coming from FTP, then another solution I've used in the past is to write an add-on module to ProFTPD that performs the \"notification\" once upload is complete.\n",
"See this question: How to quickly find added / removed files?\nBut if you can control the upload somehow (i.e. use HTTP POST instead of FTP), you could simply send a notification after the upload has completed. This has the additional benefit that it would be simple to make sure users never see a partial file.\n",
"A simple approach would be to monitor/check the last modification date of the working directory (using os.stat() for example). \nWhenever a file in a directory is modified, the working directory's (the directory the file is in) last modification date changes as well.\nAt least this works on the filesystems I am working on (ufs, ext3). I'm not sure if all filesystems do it this way.\n"
] | [
2,
1,
0
] | [] | [] | [
"checksum",
"file",
"filesystems",
"python"
] | stackoverflow_0000649623_checksum_file_filesystems_python.txt |
Q:
Do Python lists have an equivalent to dict.get?
I have a list of integers. I want to know whether the number 13 appears in it and, if so, where. Do I have to search the list twice, as in the code below?
if 13 in intList:
i = intList.index(13)
In the case of dictionaries, there's a get function which will ascertain membership and perform look-up with the same search. Is there something similar for lists?
A:
You answered it yourself, with the index() method. That will throw an exception if the index is not found, so just catch that:
def getIndexOrMinusOne(a, x):
try:
return a.index(x)
except ValueError:
return -1
A:
It looks like you'll just have to catch the exception...
try:
i = intList.index(13)
except ValueError:
i = some_default_value
A:
No, there isn't a direct match for what you asked for. There was a discussion a while back on the Python mailing list about this, and people reached the conclusion that it was probably a code smell if you needed this. Consider using a dict or set instead if you need to test membership that way.
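To illustrate that suggestion (a sketch, not part of the original answer): a set gives constant-time membership tests, and a dict can map each value to an index, mimicking dict.get:
intset = set(intList)
if 13 in intset:
    print '13 is present'

positions = dict((v, i) for i, v in enumerate(intList))
i = positions.get(13, -1)  # index of 13, or -1 if absent
# note: for duplicated values this keeps the last index, unlike list.index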
A:
You can catch the ValueError exception, or you can do:
i = intList.index(13) if 13 in intList else -1
(Python 2.5+)
BTW. if you're going to do a big batch of similar operations, you might consider building inverse dictionary value -> index.
intList = [13,1,2,3,13,5,13]
indexDict = defaultdict(list)
for value, index in zip(intList, range(len(intList))):
indexDict[value].append(index)
indexDict[13]
[0, 4, 6]
| Do Python lists have an equivalent to dict.get? | I have a list of integers. I want to know whether the number 13 appears in it and, if so, where. Do I have to search the list twice, as in the code below?
if 13 in intList:
i = intList.index(13)
In the case of dictionaries, there's a get function which will ascertain membership and perform look-up with the same search. Is there something similar for lists?
| [
"You answered it yourself, with the index() method. That will throw an exception if the index is not found, so just catch that:\ndef getIndexOrMinusOne(a, x):\n try:\n return a.index(x)\n except ValueError:\n return -1\n\n",
"It looks like you'll just have to catch the exception...\ntry:\n i = intList.index(13)\nexcept ValueError:\n i = some_default_value\n\n",
"No, there isn't a direct match for what you asked for. There was a discussion a while back on the Python mailing list about this, and people reached the conclusion that it was probably a code smell if you needed this. Consider using a dict or set instead if you need to test membership that way.\n",
"You can catch the ValueError exception, or you can do:\ni = intList.index(13) if 13 in intList else -1\n\n(Python 2.5+)\nBTW. if you're going to do a big batch of similar operations, you might consider building inverse dictionary value -> index.\nintList = [13,1,2,3,13,5,13]\nindexDict = defaultdict(list)\nfor value, index in zip(intList, range(len(intList))):\n indexDict[value].append(index)\n\nindexDict[13]\n[0, 4, 6]\n\n"
] | [
14,
8,
4,
2
] | [
"Just put what you got in a function and use it:) \nYou can either use if i in list: return list.index(i) or the try/except, depending on your preferences.\n"
] | [
-1
] | [
"list",
"python"
] | stackoverflow_0000650340_list_python.txt |
Q:
How do you substitute a Python capture followed by a number character?
When using re.sub, how do you handle a situation where you need a capture followed by a number in the replacement string? For example, you cannot use "\10" for capture 1 followed by a '0' character because it will be interpreted as capture 10.
A:
\g<1>0
http://docs.python.org/library/re.html#re.sub
\g<number> uses the corresponding
group number; \g<2> is therefore
equivalent to \2, but isn’t ambiguous
in a replacement such as \g<2>0. \20
would be interpreted as a reference to
group 20, not a reference to group 2
followed by the literal character '0'.
| How do you substitute a Python capture followed by a number character? | When using re.sub, how do you handle a situation where you need a capture followed by a number in the replacement string? For example, you cannot use "\10" for capture 1 followed by a '0' character because it will be interpreted as capture 10.
| [
"\\g<1>0\n\nhttp://docs.python.org/library/re.html#re.sub\n\n\\g<number> uses the corresponding\n group number; \\g<2> is therefore\n equivalent to \\2, but isn’t ambiguous\n in a replacement such as \\g<2>0. \\20\n would be interpreted as a reference to\n group 20, not a reference to group 2\n followed by the literal character '0'.\n\n"
] | [
6
] | [] | [] | [
"python",
"regex",
"string"
] | stackoverflow_0000650926_python_regex_string.txt |
Q:
Is it possible, with the Python Standard Library (say version 2.5) to perform MS-SQL queries which are parameterized?
While the particular data I'm working with right now will not be user-generated, and will be sanitized within an inch of its life during my usual validation routines, I would like to learn how to do your basic INSERT, SELECT, etc. SQL queries while protecting myself against SQL injection attacks, just for future reference. I'd rather learn how to do things the "right" way, through parameterized queries.
Sanitization is always nice, but I am pitting my pitiful intellect against that of seasoned hackers. Manually escaping means I am probably overlooking things, since blacklists are not as robust as whitelists. For additional clarification, I do not mean using the (%s) notation to pass as a parameter for building a string possibly named sqlstatement. I think one of the magic words I need to know is "binding."
I am also hoping to avoid anything outside of the Python Standard Library.
The application in question requires Microsoft SQL 2005, if that is relevant. I am using ActiveState Python and the modules dbi and odbc. Since this is Someone Else's Database, stored procedures are out.
A:
PEP 249 (DB API 2.0) defines 5 paramstyles, PyMSSQL uses paramstyle == pyformat. But although it looks like string interpolation, it is actually binding.
Note difference between binding:
cur.execute('SELECT * FROM persons WHERE salesrep=%s', 'John Doe')
and interpolating (this is how it should NOT be done):
cur.execute('SELECT * FROM persons WHERE salesrep=%s' % 'John Doe')
See also http://wiki.python.org/moin/DbApiFaq
"I am also hoping to avoid anything
outside of the Python Standard
Library."
You're out of luck here. The only RDBMS driver that comes built-in in Python is SQLite.
A:
Try pyodbc
But if you want to have things really easy (plus tons of powerful features), take a look at sqlalchemy (which by the way uses pyodbc as the default "driver" for mssql)
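For example, pyodbc uses the qmark paramstyle, so a bound query might look roughly like this (the connection string here is an assumption for your own server):
import pyodbc

conn = pyodbc.connect('DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;UID=user;PWD=secret')
cursor = conn.cursor()
# the ? placeholder is bound, not interpolated into the SQL string
cursor.execute('SELECT * FROM persons WHERE salesrep = ?', ('John Doe',))
for row in cursor.fetchall():
    print row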
| Is it possible, with the Python Standard Library (say version 2.5) to perform MS-SQL queries which are parameterized? | While the particular data I'm working with right now will not be user-generated, and will be sanitized within an inch of its life during my usual validation routines, I would like to learn how to do your basic INSERT, SELECT, etc. SQL queries while protecting myself against SQL injection attacks, just for future reference. I'd rather learn how to do things the "right" way, through parameterized queries.
Sanitization is always nice, but I am pitting my pitiful intellect against that of seasoned hackers. Manually escaping means I am probably overlooking things, since blacklists are not as robust as whitelists. For additional clarification, I do not mean using the (%s) notation to pass as a parameter for building a string possibly named sqlstatement. I think one of the magic words I need to know is "binding."
I am also hoping to avoid anything outside of the Python Standard Library.
The application in question requires Microsoft SQL 2005, if that is relevant. I am using ActiveState Python and the modules dbi and odbc. Since this is Someone Else's Database, stored procedures are out.
| [
"PEP 249 (DB API 2.0) defines 5 paramstyles, PyMSSQL uses paramstyle == pyformat. But although it looks like string interpolation, it is actually binding.\nNote difference between binding:\ncur.execute('SELECT * FROM persons WHERE salesrep=%s', 'John Doe')\n\nand interpolating (this is how it should NOT be done):\ncur.execute('SELECT * FROM persons WHERE salesrep=%s' % 'John Doe')\n\nSee also http://wiki.python.org/moin/DbApiFaq\n\n\n\"I am also hoping to avoid anything\n outside of the Python Standard\n Library.\"\n\nYou're out of luck here. The only RDBMS driver that comes built-in in Python is SQLite. \n",
"Try pyodbc\nBut if you want to have things really easy (plus tons of powerful features), take a look at sqlalchemy (which by the way uses pyodbc as the default \"driver\" for mssql)\n"
] | [
5,
2
] | [] | [] | [
"binding",
"parameters",
"python",
"sql",
"sql_injection"
] | stackoverflow_0000650979_binding_parameters_python_sql_sql_injection.txt |
Q:
Bimodal distribution in C or Python
What's the easiest way to generate random values according to a bimodal distribution in C or Python?
I could implement something like the Ziggurat algorithm or a Box-Muller transform, but if there's a ready-to-use library, or a simpler algorithm I don't know about, that'd be better.
A:
Aren't you just picking values from either of two modal distributions?
http://docs.python.org/library/random.html#random.triangular
Sounds like you just toggle back and forth between two sets of parameters for your call to triangular.
import random

def bimodal( low1, high1, mode1, low2, high2, mode2 ):
toss = random.choice( (1, 2) )
if toss == 1:
return random.triangular( low1, high1, mode1 )
else:
return random.triangular( low2, high2, mode2 )
This may do everything you need.
A:
There's always the old-fashioned straight-forward accept-reject algorithm. If it was good enough for Johnny von Neumann it should be good enough for you ;-).
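A sketch of accept-reject for a bimodal target made of two Gaussian bumps (the density, bounds and ceiling below are assumptions for illustration):
import math, random

def target(x):
    # unnormalized bimodal density: Gaussian bumps centred at -2 and +2
    return math.exp(-(x + 2) ** 2) + math.exp(-(x - 2) ** 2)

def sample(lo=-6.0, hi=6.0, ceiling=1.1):
    # propose uniformly on [lo, hi]; accept with probability target(x)/ceiling
    while True:
        x = random.uniform(lo, hi)
        if random.uniform(0.0, ceiling) < target(x):
            return x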
| Bimodal distribution in C or Python | What's the easiest way to generate random values according to a bimodal distribution in C or Python?
I could implement something like the Ziggurat algorithm or a Box-Muller transform, but if there's a ready-to-use library, or a simpler algorithm I don't know about, that'd be better.
| [
"Aren't you just picking values either of two modal distributions?\nhttp://docs.python.org/library/random.html#random.triangular\nSounds like you just toggle back and forth between two sets of parameters for your call to triangular.\ndef bimodal( low1, high1, mode1, low2, high2, mode2 ):\n toss = random.choice( (1, 2) )\n if toss == 1:\n return random.triangular( low1, high1, mode1 ) \n else:\n return random.triangular( low2, high2, mode2 )\n\nThis may do everything you need.\n",
"There's always the old-fashioned straight-forward accept-reject algorithm. If it was good enough for Johnny von Neumann it should be good enough for you ;-).\n"
] | [
5,
2
] | [] | [] | [
"c",
"python",
"random"
] | stackoverflow_0000651421_c_python_random.txt |
Q:
What's the best way to replace the ternary operator in Python?
Possible Duplicate:
Ternary conditional operator in Python
If I have some code like:
x = foo ? 1 : 2
How should I translate it to Python? Can I do this?
if foo:
x = 1
else:
x = 2
Will x still be in scope outside the if / then blocks? Or do I have to do something like this?
x = None
if foo:
x = 1
else:
x = 2
A:
Use the ternary operator(formally conditional expression) in Python 2.5+.
x = 1 if foo else 2
A:
A nice python trick is using this:
foo = ["ifFalse","ifTrue"][booleanCondition]
It creates a 2 membered list, and the boolean becomes either 0 (false) or 1 (true), which picks the correct member.
Not very readable, but pythony :)
A:
The Ternary operator mentioned is only available from Python 2.5. From the WeekeePeedeea:
Though it had been delayed for several
years by disagreements over syntax, a
ternary operator for Python was
approved as Python Enhancement
Proposal 308 and was added to the 2.5
release in September 2006.
Python's ternary operator differs from
the common ?: operator in the order of
its operands; the general form is op1
if condition else op2. This form
invites considering op1 as the normal
value and op2 as an exceptional case.
Before 2.5, one could use the ugly
syntax (lambda x:op2,lambda
x:op1)[condition]() which also takes
care of only evaluating expressions
which are actually needed in order to
prevent side effects.
A:
Duplicate of this one.
I use this (although I'm waiting for somebody to downvote or comment if it is incorrect):
x = foo and 1 or 2
A:
I'm still using 2.4 in one of my projects and have come across this a few times. The most elegant solution I've see for this is:
x = {True: 1, False: 2}[foo is not None]
I like this because it represents a more clear boolean test than using a list with the index values 0 and 1 to get your return value.
A:
You could use something like:
val = float(raw_input("Age: "))
status = ("working","retired")[val>65]
print "You should be",status
though it is not very pythonic
(the other options are closer to C/PERL, but this involves more tuple magic)
| What's the best way to replace the ternary operator in Python? |
Possible Duplicate:
Ternary conditional operator in Python
If I have some code like:
x = foo ? 1 : 2
How should I translate it to Python? Can I do this?
if foo:
x = 1
else:
x = 2
Will x still be in scope outside the if / then blocks? Or do I have to do something like this?
x = None
if foo:
x = 1
else:
x = 2
| [
"Use the ternary operator(formally conditional expression) in Python 2.5+.\nx = 1 if foo else 2\n\n",
"A nice python trick is using this:\nfoo = [\"ifFalse\",\"ifTrue\"][booleanCondition]\n\nIt creates a 2 membered list, and the boolean becomes either 0 (false) or 1 (true), which picks the correct member.\nNot very readable, but pythony :)\n",
"The Ternary operator mentioned is only available from Python 2.5. From the WeekeePeedeea:\n\nThough it had been delayed for several\n years by disagreements over syntax, a\n ternary operator for Python was\n approved as Python Enhancement\n Proposal 308 and was added to the 2.5\n release in September 2006. \nPython's ternary operator differs from\n the common ?: operator in the order of\n its operands; the general form is op1\n if condition else op2. This form\n invites considering op1 as the normal\n value and op2 as an exceptional case. \nBefore 2.5, one could use the ugly\n syntax (lambda x:op2,lambda\n x:op1)[condition]() which also takes\n care of only evaluating expressions\n which are actually needed in order to\n prevent side effects.\n\n",
"Duplicate of this one.\nI use this (although I'm waiting for somebody to downvote or comment if it is incorrect):\nx = foo and 1 or 2\n\n",
"I'm still using 2.4 in one of my projects and have come across this a few times. The most elegant solution I've see for this is:\nx = {True: 1, False: 2}[foo is not None]\n\nI like this because it represents a more clear boolean test than using a list with the index values 0 and 1 to get your return value.\n",
"You could use something like:\nval = float(raw_input(\"Age: \"))\nstatus = (\"working\",\"retired\")[val>65]\nprint \"You should be\",status\n\nthough it is not very pythonic\n(the other options are closer to C/PERL, but this involves more tuple magic)\n"
] | [
32,
10,
5,
3,
2,
0
] | [] | [] | [
"python"
] | stackoverflow_0000643983_python.txt |
Q:
What's the best way to initialize a dict of dicts in Python?
A lot of times in Perl, I'll do something like this:
$myhash{foo}{bar}{baz} = 1
How would I translate this to Python? So far I have:
if not 'foo' in myhash:
myhash['foo'] = {}
if not 'bar' in myhash['foo']:
myhash['foo']['bar'] = {}
myhash['foo']['bar']['baz'] = 1
Is there a better way?
A:
If the amount of nesting you need is fixed, collections.defaultdict is wonderful.
e.g. nesting two deep:
myhash = collections.defaultdict(dict)
myhash[1][2] = 3
myhash[1][3] = 13
myhash[2][4] = 9
If you want to go another level of nesting, you'll need to do something like:
myhash = collections.defaultdict(lambda : collections.defaultdict(dict))
myhash[1][2][3] = 4
myhash[1][3][3] = 5
myhash[1][2]['test'] = 6
edit: MizardX points out that we can get full genericity with a simple function:
import collections
def makehash():
return collections.defaultdict(makehash)
Now we can do:
myhash = makehash()
myhash[1][2] = 4
myhash[1][3] = 8
myhash[2][5][8] = 17
# etc
A:
class AutoVivification(dict):
"""Implementation of perl's autovivification feature."""
def __getitem__(self, item):
try:
return dict.__getitem__(self, item)
except KeyError:
value = self[item] = type(self)()
return value
Testing:
a = AutoVivification()
a[1][2][3] = 4
a[1][3][3] = 5
a[1][2]['test'] = 6
print a
Output:
{1: {2: {'test': 6, 3: 4}, 3: {3: 5}}}
A:
Is there a reason it needs to be a dict of dicts? If there's no compelling reason for that particular structure, you could simply index the dict with a tuple:
mydict = {('foo', 'bar', 'baz'):1} # Initializes dict with a key/value pair
mydict[('foo', 'bar', 'baz')] # Returns 1
mydict[('foo', 'unbar')] = 2 # Sets a value for a new key
The parentheses are required if you initialize the dict with a tuple key, but you can omit them when setting/getting values using []:
mydict = {} # Initialized the dict
mydict['foo', 'bar', 'baz'] = 1 # Sets a value
mydict['foo', 'bar', 'baz'] # Returns 1
A:
I guess the literal translation would be:
mydict = {'foo' : { 'bar' : { 'baz':1}}}
Calling:
>>> mydict['foo']['bar']['baz']
gives you 1.
That looks a little gross to me, though.
(I'm no perl guy, though, so I'm guessing at what your perl does)
| What's the best way to initialize a dict of dicts in Python? | A lot of times in Perl, I'll do something like this:
$myhash{foo}{bar}{baz} = 1
How would I translate this to Python? So far I have:
if not 'foo' in myhash:
myhash['foo'] = {}
if not 'bar' in myhash['foo']:
myhash['foo']['bar'] = {}
myhash['foo']['bar']['baz'] = 1
Is there a better way?
| [
"If the amount of nesting you need is fixed, collections.defaultdict is wonderful.\ne.g. nesting two deep:\nmyhash = collections.defaultdict(dict)\nmyhash[1][2] = 3\nmyhash[1][3] = 13\nmyhash[2][4] = 9\n\nIf you want to go another level of nesting, you'll need to do something like:\nmyhash = collections.defaultdict(lambda : collections.defaultdict(dict))\nmyhash[1][2][3] = 4\nmyhash[1][3][3] = 5\nmyhash[1][2]['test'] = 6\n\nedit: MizardX points out that we can get full genericity with a simple function:\nimport collections\ndef makehash():\n return collections.defaultdict(makehash)\n\nNow we can do:\nmyhash = makehash()\nmyhash[1][2] = 4\nmyhash[1][3] = 8\nmyhash[2][5][8] = 17\n# etc\n\n",
"class AutoVivification(dict):\n \"\"\"Implementation of perl's autovivification feature.\"\"\"\n def __getitem__(self, item):\n try:\n return dict.__getitem__(self, item)\n except KeyError:\n value = self[item] = type(self)()\n return value\n\nTesting:\na = AutoVivification()\n\na[1][2][3] = 4\na[1][3][3] = 5\na[1][2]['test'] = 6\n\nprint a\n\nOutput:\n{1: {2: {'test': 6, 3: 4}, 3: {3: 5}}}\n\n",
"Is there a reason it needs to be a dict of dicts? If there's no compelling reason for that particular structure, you could simply index the dict with a tuple:\nmydict = {('foo', 'bar', 'baz'):1} # Initializes dict with a key/value pair\nmydict[('foo', 'bar', 'baz')] # Returns 1\n\nmydict[('foo', 'unbar')] = 2 # Sets a value for a new key\n\nThe parentheses are required if you initialize the dict with a tuple key, but you can omit them when setting/getting values using []:\nmydict = {} # Initialized the dict\nmydict['foo', 'bar', 'baz'] = 1 # Sets a value\nmydict['foo', 'bar', 'baz'] # Returns 1\n\n",
"I guess the literal translation would be:\n mydict = {'foo' : { 'bar' : { 'baz':1}}}\n\nCalling:\n >>> mydict['foo']['bar']['baz']\n\ngives you 1.\nThat looks a little gross to me, though.\n(I'm no perl guy, though, so I'm guessing at what your perl does)\n"
] | [
127,
110,
15,
2
] | [] | [] | [
"autovivification",
"python"
] | stackoverflow_0000651794_autovivification_python.txt |
Q:
Simultaneously inserting and extending a list?
Is there a better way of simultaneously inserting and extending a list? Here is an ugly example of how I'm currently doing it. (let's say I want to insert '2.4' and '2.6' after the '2' element):
>>> a = ['1', '2', '3', '4']
>>> b = a[:a.index('2')+1] + ['2.4', '2.6'] + a[a.index('2'):]
>>> b
<<< ['1', '2', '2.4', '2.6', '3', '4']
A:
>>> a = ['1', '2', '3', '4']
>>> a
['1', '2', '3', '4']
>>> i = a.index('2') + 1 # after the item '2'
>>> a[i:i] = ['2.4', '2.6']
>>> a
['1', '2', '2.4', '2.6', '3', '4']
>>>
A:
You can easily insert a single element using list.insert(i, x), which Python defines as s[i:i] = [x].
a = ['1', '2', '3', '4']
for elem in reversed(['2.4', '2.6']):
a.insert(a.index('2')+1, elem)
If you want to insert a list, you can make your own function that omits the []:
def iextend(lst, i, x):
lst[i:i] = x
a = ['1', '2', '3', '4']
iextend(a, a.index('2')+1, ['2.4', '2.6'])
# a = ['1', '2', '2.4', '2.6', '3', '4']
A:
I'm not entirely clear on what you're doing; if you want to add values, and have the list remain in order, it's cleaner (and probably still faster) to just sort the whole thing:
a.extend(['2.4', '2.6'])
a.sort()
A:
Have a look at the bisect module. I think it does what you want.
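For instance, bisect.insort keeps an already-sorted list sorted as you insert (a sketch; note that string comparison happens to match numeric order for these particular values):
import bisect

a = ['1', '2', '3', '4']
for elem in ('2.4', '2.6'):
    bisect.insort(a, elem)
# a is now ['1', '2', '2.4', '2.6', '3', '4']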
| Simultaneously inserting and extending a list? | Is there a better way of simultaneously inserting and extending a list? Here is an ugly example of how I'm currently doing it. (let's say I want to insert '2.4' and '2.6' after the '2' element):
>>> a = ['1', '2', '3', '4']
>>> b = a[:a.index('2')+1] + ['2.4', '2.6'] + a[a.index('2'):]
>>> b
<<< ['1', '2', '2.4', '2.6', '3', '4']
| [
">>> a = ['1', '2', '3', '4']\n>>> a\n['1', '2', '3', '4']\n>>> i = a.index('2') + 1 # after the item '2'\n>>> a[i:i] = ['2.4', '2.6']\n>>> a\n['1', '2', '2.4', '2.6', '3', '4']\n>>>\n\n",
"You can easily insert a single element using list.insert(i, x), which Python defines as s[i:i] = [x].\na = ['1', '2', '3', '4']\nfor elem in reversed(['2.4', '2.6']):\n a.insert(a.index('2')+1, elem))\n\nIf you want to insert a list, you can make your own function that omits the []:\ndef iextend(lst, i, x):\n lst[i:i] = x\n\na = ['1', '2', '3', '4']\niextend(a, a.index('2')+1, ['2.4', '2.6']\n# a = ['1', '2', '2.4', '2.6', '3', '4']\n\n",
"I'm not entirely clear on what you're doing; if you want to add values, and have the list remain in order, it's cleaner (and probably still faster) to just sort the whole thing:\na.extend(['2.4', '2.6'])\na.sort()\n\n",
"Have a look at the bisect module. I think it does what you want.\n"
] | [
15,
5,
2,
2
] | [] | [] | [
"python"
] | stackoverflow_0000652184_python.txt |
Q:
Check if value exists in nested lists
in my list:
animals = [ ['dog', ['bite'] ],
['cat', ['bite', 'scratch'] ],
['bird', ['peck', 'bite'] ], ]
add('bird', 'peck')
add('bird', 'screech')
add('turtle', 'hide')
The add function should check that the animal and action haven't been added before adding them to the list. Is there a way to accomplish this without nesting a loop for each step into the list?
A:
You're using the wrong data type. Use a dict of sets instead:
def add(key, value, userdict):
userdict.setdefault(key, set())
userdict[key].add(value)
Usage:
animaldict = {}
add('bird', 'peck', animaldict)
add('bird', 'screech', animaldict)
add('turtle', 'hide', animaldict)
A:
While it is possible to construct a generic function that finds the animal in the list using a.index or testing with "dog" in animals, you really want a dictionary here, otherwise the add function will scale abysmally as more animals are added:
animals = {'dog':set(['bite']),
'cat':set(['bite', 'scratch'])}
You can then "one-shot" the add function using setdefault:
animals.setdefault('dog', set()).add('bite')
It will create the 'dog' key if it doesn't exist, and since setdefault returns the set that either exists or was just created, you can then add the bite action. Sets ensure that there are no duplicates automatically.
A:
Based on recursive's solution, in Python 2.5 or newer you can use the defaultdict class, something like this:
from collections import defaultdict
a = defaultdict(set)
def add(animal, behavior):
a[animal].add(behavior)
add('bird', 'peck')
add('bird', 'screech')
add('turtle', 'hide')
A:
animals_dict = dict(animals)
def add(key, action):
animals_dict.setdefault(key, [])
if action not in animals_dict[key]:
animals_dict[key].append(action)
(Updated to use setdefault - nice one @recursive)
A:
You really should use a dictionary for this purpose. Or alternatively a class Animal.
You could improve your code like this:
if not any((animal[0] == "bird") for animal in animals):
# append "bird" to animals
A:
While I agree with the others re. your choice of data structure, here is an answer to your question:
def add(name, action):
for animal in animals:
if animal[0] == name:
if action not in animal[1]:
animal[1].append(action)
return
else:
animals.append([name, [action]])
The for loop is an inevitable consequence of your data structure, which is why everyone is advising you to consider dictionaries instead.
| Check if value exists in nested lists | in my list:
animals = [ ['dog', ['bite'] ],
['cat', ['bite', 'scratch'] ],
['bird', ['peck', 'bite'] ], ]
add('bird', 'peck')
add('bird', 'screech')
add('turtle', 'hide')
The add function should check that the animal and action haven't been added before adding them to the list. Is there a way to accomplish this without nesting a loop for each step into the list?
| [
"You're using the wrong data type. Use a dict of sets instead:\ndef add(key, value, userdict):\n userdict.setdefault(key, set())\n userdict[key].add(value)\n\nUsage:\nanimaldict = {}\nadd('bird', 'peck', animaldict)\nadd('bird', 'screech', animaldict)\nadd('turtle', 'hide', animaldict)\n\n",
"While it is possible to construct a generic function that finds the animal in the list using a.index or testing with \"dog\" in animals, you really want a dictionary here, otherwise the add function will scale abysmally as more animals are added:\nanimals = {'dog':set(['bite']),\n 'cat':set(['bite', 'scratch'])}\n\nYou can then \"one-shot\" the add function using setdefault:\nanimals.setdefault('dog', set()).add('bite')\n\nIt will create the 'dog' key if it doesn't exist, and since setdefault returns the set that either exists or was just created, you can then add the bite action. Sets ensure that there are no duplicates automatically.\n",
"Based on recursive's solution, in Python 2.5 or newer you can use the defaultdict class, something like this:\nfrom collections import defaultdict\n\na = defaultdict(set)\n\ndef add(animal, behavior):\n a[animal].add(behavior)\n\nadd('bird', 'peck')\nadd('bird', 'screech')\nadd('turtle', 'hide')\n\n",
"animals_dict = dict(animals)\n\ndef add(key, action):\n animals_dict.setdefault(key, [])\n if action not in animals_dict[key]:\n animals_dict[key].append(action)\n\n(Updated to use setdefault - nice one @recursive)\n",
"You really should use a dictionary for this purpose. Or alternatively a class Animal.\nYou could improve your code like this:\nif not any((animal[0] == \"bird\") for animal in animals):\n # append \"bird\" to animals\n\n",
"While I agree with the others re. your choice of data structure, here is an answer to your question:\ndef add(name, action):\n for animal in animals:\n if animal[0] == name:\n if action not in animal[1]:\n animal[1].append(action)\n return\n else:\n animals.append([name, [action]])\n\nThe for loop is an inevitable consequence of your data structure, which is why everyone is advising you to consider dictionaries instead.\n"
] | [
6,
4,
4,
0,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0000652423_python.txt |
Q:
Writing a binary buffer to a file in python
I have some python code that:
Takes a BLOB from a database which is compressed.
Calls an uncompression routine in C that uncompresses the data.
Writes the uncompressed data to a file.
It uses ctypes to call the C routine, which is in a shared library.
This mostly works, except for the actual writing to the file. To uncompress, I get the data uncompressed into a python buffer, created using the ctypes create_string_buffer method:
c_uncompData_p = create_string_buffer(64000)
so the uncompression call is like this:
c_uncompSize = mylib.explodeCharBuffer (c_data_p, c_data_len, c_uncompData_p)
The size of the resulting uncompressed data is returned as the return value.
But... I have no idea how to force Python to write only c_uncompSize bytes out - if I do:
myfile.write (c_uncompData_p.raw)
it writes the whole 64k buffer out (the data is binary - so it is not null terminated).
So, my question is - using Python 2.5 how do I get c_uncompSize bytes printed out, rather than the whole 64k?
Thanks
Jamie
A:
Slicing works for c_char_Arrays too:
myfile.write(c_uncompData_p[:c_uncompSize])
A:
buffer() might help to avoid unnecessary copying (caused by slicing as in @elo80ka's answer):
myfile.write(buffer(c_uncompData_p.raw, 0, c_uncompSize))
In your example it doesn't matter (due to c_uncompData_p is written only once and it is small) but in general it could be useful.
Just for the sake of exercise here's the answer that uses C stdio's fwrite():
from ctypes import *
# load C library
try: libc = cdll.msvcrt # Windows
except AttributeError:
libc = CDLL("libc.so.6") # Linux
# fopen()
libc.fopen.restype = c_void_p
def errcheck(res, func, args):
if not res: raise IOError
return res
libc.fopen.errcheck = errcheck
# errcheck() could be similarly defined for `fwrite`, `fclose`
# write data
file_p = libc.fopen("output.bin", "wb")
sizeof_item = 1 # bytes
nitems = libc.fwrite(c_uncompData_p, sizeof_item, c_uncompSize, file_p)
retcode = libc.fclose(file_p)
if nitems != c_uncompSize: # not all data were written
pass
if retcode != 0: # the file was NOT successfully closed
pass
| Writing a binary buffer to a file in python | I have some python code that:
Takes a BLOB from a database which is compressed.
Calls an uncompression routine in C that uncompresses the data.
Writes the uncompressed data to a file.
It uses ctypes to call the C routine, which is in a shared library.
This mostly works, except for the actual writing to the file. To uncompress, I get the data uncompressed into a python buffer, created using the ctypes create_string_buffer method:
c_uncompData_p = create_string_buffer(64000)
so the uncompression call is like this:
c_uncompSize = mylib.explodeCharBuffer (c_data_p, c_data_len, c_uncompData_p)
The size of the resulting uncompressed data is returned as the return value.
But... I have no idea how to force Python to write only c_uncompSize bytes out - if I do:
myfile.write (c_uncompData_p.raw)
it writes the whole 64k buffer out (the data is binary - so it is not null terminated).
So, my question is - using Python 2.5 how do I get c_uncompSize bytes printed out, rather than the whole 64k?
Thanks
Jamie
| [
"Slicing works for c_char_Arrays too:\nmyfile.write(c_uncompData_p[:c_uncompSize])\n\n",
"buffer() might help to avoid unnecessary copying (caused by slicing as in @elo80ka's answer):\nmyfile.write(buffer(c_uncompData_p.raw, 0, c_uncompSize))\n\nIn your example it doesn't matter (due to c_uncompData_p is written only once and it is small) but in general it could be useful.\n\nJust for the sake of exercise here's the answer that uses C stdio's fwrite():\nfrom ctypes import *\n\n# load C library\ntry: libc = cdll.msvcrt # Windows\nexcept AttributeError:\n libc = CDLL(\"libc.so.6\") # Linux\n\n# fopen()\nlibc.fopen.restype = c_void_p\ndef errcheck(res, func, args):\n if not res: raise IOError\n return res\nlibc.fopen.errcheck = errcheck\n# errcheck() could be similarly defined for `fwrite`, `fclose` \n\n# write data\nfile_p = libc.fopen(\"output.bin\", \"wb\")\nsizeof_item = 1 # bytes\nnitems = libc.fwrite(c_uncompData_p, sizeof_item, c_uncompSize, file_p)\nretcode = libc.fclose(file_p)\nif nitems != c_uncompSize: # not all data were written\n pass\nif retcode != 0: # the file was NOT successfully closed\n pass\n\n"
] | [
6,
6
] | [] | [] | [
"binary",
"io",
"python"
] | stackoverflow_0000652535_binary_io_python.txt |
Q:
Effective way to iteratively append to a string in Python?
I'm writing a Python function to split text into words, ignoring specified punctuation. Here is some working code. I'm not convinced that constructing strings out of lists (buf = [] in the code) is efficient though. Does anyone have a suggestion for a better way to do this?
def getwords(text, splitchars=' \t|!?.;:"'):
"""
Generator to get words in text by splitting text along specified splitchars
and stripping out the splitchars::
>>> list(getwords('this is some text.'))
['this', 'is', 'some', 'text']
>>> list(getwords('and/or'))
['and', 'or']
>>> list(getwords('one||two'))
['one', 'two']
>>> list(getwords(u'hola unicode!'))
[u'hola', u'unicode']
"""
splitchars = set(splitchars)
buf = []
for char in text:
if char not in splitchars:
buf.append(char)
else:
if buf:
yield ''.join(buf)
buf = []
# All done. Yield last word.
if buf:
yield ''.join(buf)
A:
http://www.skymind.com/~ocrow/python_string/ talks about several ways of concatenating strings in Python and assesses their performance as well.
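The usual takeaway from such comparisons is that collecting parts in a list and joining once beats repeated concatenation; a rough sketch:
words = ['this', 'is', 'some', 'text']  # any list of strings

# repeated concatenation re-copies the growing string on every step
s = ''
for w in words:
    s = s + w

# building the result with a single join avoids the quadratic copying
s = ''.join(words)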
A:
You don't want to use re.split?
import re
re.split("[,; ]+", "coucou1 , coucou2;coucou3")
A:
You can use re.split
re.split('[\s|!\?\.;:"]', text)
However, if the text is very large, the resulting array may consume too much memory. Then you may consider re.finditer:
import re
def getwords(text, splitchars=' \t|!?.;:"'):
words_iter = re.finditer(
"([%s]+)" % "".join([("^" + c) for c in splitchars]),
text)
for word in words_iter:
yield word.group()
# a quick test
s = "a:b cc? def...a||"
words = [x for x in getwords(s)]
assert ["a", "b", "cc", "def", "a"] == words, words
A:
You can split the input using re.split():
>>> splitchars=' \t|!?.;:"'
>>> re.split("[%s]" % splitchars, "one\ttwo|three?four")
['one', 'two', 'three', 'four']
>>>
EDIT: If your splitchars may contain special chars like ] or ^, you can use re.escape()
>>> re.escape(splitchars)
'\\ \\\t\\|\\!\\?\\.\\;\\:\\"'
>>> re.split("[%s]" % re.escape(splitchars), "one\ttwo|three?four")
['one', 'two', 'three', 'four']
>>>
| Effective way to iteratively append to a string in Python? | I'm writing a Python function to split text into words, ignoring specified punctuation. Here is some working code. I'm not convinced that constructing strings out of lists (buf = [] in the code) is efficient though. Does anyone have a suggestion for a better way to do this?
def getwords(text, splitchars=' \t|!?.;:"'):
"""
Generator to get words in text by splitting text along specified splitchars
and stripping out the splitchars::
>>> list(getwords('this is some text.'))
['this', 'is', 'some', 'text']
>>> list(getwords('and/or'))
['and', 'or']
>>> list(getwords('one||two'))
['one', 'two']
>>> list(getwords(u'hola unicode!'))
[u'hola', u'unicode']
"""
splitchars = set(splitchars)
buf = []
for char in text:
if char not in splitchars:
buf.append(char)
else:
if buf:
yield ''.join(buf)
buf = []
# All done. Yield last word.
if buf:
yield ''.join(buf)
| [
"http://www.skymind.com/~ocrow/python_string/ talks about several ways of concatenating strings in Python and assesses their performance as well. \n",
"You don't want to use re.split?\nimport re\nre.split(\"[,; ]+\", \"coucou1 , coucou2;coucou3\")\n\n",
"You can use re.split\nre.split('[\\s|!\\?\\.;:\"]', text)\n\nHowever if the text is very large the resulting array may be consuming too much memory. Then you may consider re.finditer:\nimport re\ndef getwords(text, splitchars=' \\t|!?.;:\"'):\n words_iter = re.finditer(\n \"([%s]+)\" % \"\".join([(\"^\" + c) for c in splitchars]),\n text)\n for word in words_iter:\n yield word.group()\n\n# a quick test\ns = \"a:b cc? def...a||\"\nwords = [x for x in getwords(s)]\nassert [\"a\", \"b\", \"cc\", \"def\", \"a\"] == words, words\n\n",
"You can split the input using re.split():\n>>> splitchars=' \\t|!?.;:\"'\n>>> re.split(\"[%s]\" % splitchars, \"one\\ttwo|three?four\")\n['one', 'two', 'three', 'four']\n>>> \n\nEDIT: If your splitchars may contain special chars like ] or ^, you can use re.escpae()\n>>> re.escape(splitchars)\n'\\\\ \\\\\\t\\\\|\\\\!\\\\?\\\\.\\\\;\\\\:\\\\\"'\n>>> re.split(\"[%s]\" % re.escape(splitchars), \"one\\ttwo|three?four\")\n['one', 'two', 'three', 'four']\n>>> \n\n"
] | [
5,
4,
3,
1
] | [] | [] | [
"append",
"generator",
"python",
"split",
"string"
] | stackoverflow_0000653259_append_generator_python_split_string.txt |
Q:
For Python programmers, is there anything equivalent to Perl's CPAN?
I'm learning Python now because of the Django framework. I have been a Perl programmer for a number of years and I'm so used to Perl's tools. One of the things that I really miss is Perl's CPAN and its tools. Is there anything equivalent in Python? I would like to be able to search, install and maintain Python modules as easily as with CPAN. Also, a system that can handle dependencies automatically. I tried to install a module in Python by downloading a zip file from a website, unzipping it, and then doing:
sudo python setup.py install
but it's looking for another module. Now, lazy as I am, I don't like chasing dependencies and such; is there an easy way?
A:
sammy, have a look at pip, which will let you do "pip install foo", and will download and install its dependencies (as long as they're on PyPI). There's also EasyInstall, but pip is intended to replace that.
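For example (the package name is just an illustration):
pip install simplejson
and pip will fetch the package and its dependencies from PyPI.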
A:
It might be useful to note that pip and easy_install both use the Python Package Index (PyPI), sometimes called the "Cheeseshop", to search for packages. Easy_install is currently the most universally supported, as it works with both setuptools and distutils style packaging, completely. See James Bennett's commentary on python packaging for good reasons to use pip, and Ian Bicking's reply for some clarifications on the differences.
A:
If you do use easy_install, I'd suggest installing packages by doing..
easy_install -v -Z package_name | tee date-package.log
-Z (short for --always-unzip) unzips the .egg files to directories so you can then..
less *.egg/EGG-INFO/requires.txt
less *.egg/EGG-INFO/PKG-INFO
egrep '^(Name|Version|Sum|...)' *.egg/EGG-INFO/PKG-INFO
On Sammy's original question, a couple of package indexes other than PyPI are:
Scipy
and Scipy docs for scientific computing
ohloh with code metrics.
| For Python programmers, is there anything equivalent to Perl's CPAN? | I'm learning Python now because of the Django framework. I have been a Perl programmer for a number of years and I'm so used to Perl's tools. One of the things that I really miss is Perl's CPAN and its tools. Is there anything equivalent in Python? I would like to be able to search, install and maintain Python modules as easily as with CPAN. Also, a system that can handle dependencies automatically. I tried to install a module in Python by downloading a zip file from a website, unzipping it, and then doing:
sudo python setup.py install
but it's looking for another module. Now, lazy as I am, I don't like chasing dependencies and such; is there an easy way?
| [
"sammy, have a look at pip, which will let you do \"pip install foo\", and will download and install its dependencies (as long as they're on PyPI). There's also EasyInstall, but pip is intended to replace that.\n",
"It might be useful to note that pip and easy_install both use the Python Package Index (PyPI), sometimes called the \"Cheeseshop\", to search for packages. Easy_install is currently the most universally supported, as it works with both setuptools and distutils style packaging, completely. See James Bennett's commentary on python packaging for good reasons to use pip, and Ian Bicking's reply for some clarifications on the differences.\n",
"If you do use easy_install, I'd suggest installing packages by doing..\neasy_install -v -Z package_name | tee date-package.log\n\n-Z (short for --always-unzip) unzips the .egg files to directories so you can then..\nless *.egg/EGG-INFO/requires.txt \nless *.egg/EGG-INFO/PKG-INFO \negrep '^(Name|Version|Sum|...)' *.egg/EGG-INFO/PKG-INFO\n\nOn Sammy's original question, a couple of package indexes other than PyPI are:\nScipy\nand Scipy docs for scientific computing\nohloh with code metrics.\n"
] | [
33,
11,
2
] | [] | [] | [
"perl",
"python"
] | stackoverflow_0000410163_perl_python.txt |
Q:
A simple freeze behavior decorator
I'm trying to write a freeze decorator for Python.
The idea is as follows:
(In response to the two comments)
I might be wrong, but I think there are two main uses of
test cases.
One is test-driven development:
Ideally, developers write test cases before writing the code.
It usually helps define the architecture, because this discipline
forces you to define the real interfaces before development.
One may even consider that in some cases the person who
dispatches jobs between developers writes the test cases and
uses them to efficiently illustrate the specification he has in mind.
I don't have any experience with test cases used like that.
The second is the idea that every project of a decent
size, with several programmers, suffers from broken code.
Something that used to work may get broken by a change
that looked like an innocent refactoring.
Though good architecture and loose coupling between components may
help to fight this phenomenon, you will sleep better
at night if you have written some test cases to make sure
that nothing will break your program's behavior.
HOWEVER,
Nobody can deny the overhead of writing test cases. In the
first case one may argue that the test cases are actually guiding
development and are therefore not to be considered an overhead.
Frankly speaking, I'm a pretty young programmer and if I were
you, my word on this subject would not be really valuable...
Anyway, I think that most companies/projects are not working
like that, and that unit tests are mainly used in the second
case...
In other words, rather than ensuring that the program is
working correctly, they aim at checking that it will
work the same in the future.
This need can be met without the cost of writing tests,
by using this freezing decorator.
Let's say you have a function
def pow(n,k):
if k == 0: return 1
else: return n * pow(n,k-1)
It is perfectly nice, and you want to rewrite it as an optimized version.
It is part of a big project. You want it to give back the same result
for a few values.
Rather than going through the pain of test cases, one could use some
kind of freeze decorator.
Something such that the first time the decorator is run,
the decorator runs the function with the defined args (see the calls below)
and saves the result in a map ( f --> args --> result )
@freeze(2,0)
@freeze(1,3)
@freeze(3,5)
@freeze(0,0)
def pow(n,k):
if k == 0: return 1
else: return n * pow(n,k-1)
Next time the program is executed, the decorator will load this map and check
that the result of this function for these args has not changed.
I already quickly wrote the decorator (see below), but hit a few problems about
which I need your advice...
from __future__ import with_statement
from collections import defaultdict
from types import GeneratorType
import cPickle
def __id_from_function(f):
return ".".join([f.__module__, f.__name__])
def generator_firsts(g, N=100):
try:
if N==0:
return []
else:
return [g.next()] + generator_firsts(g, N-1)
except StopIteration :
return []
def __post_process(v):
specialized_postprocess = [
(GeneratorType, generator_firsts),
(Exception, str),
]
try:
val_mro = v.__class__.mro()
for ( ancestor, specialized ) in specialized_postprocess:
if ancestor in val_mro:
return specialized(v)
raise ""
except:
print "Cannot accept this as a value"
return None
def __eval_function(f):
def aux(args, kargs):
try:
return ( True, __post_process( f(*args, **kargs) ) )
except Exception, e:
return ( False, __post_process(e) )
return aux
def __compare_behavior(f, past_records):
for (args, kargs, result) in past_records:
assert __eval_function(f)(args,kargs) == result
def __record_behavior(f, past_records, args, kargs):
registered_args = [ (a, k) for (a, k, r) in past_records ]
if (args, kargs) not in registered_args:
res = __eval_function(f)(args, kargs)
past_records.append( (args, kargs, res) )
def __open_frz():
try:
with open(".frz", "r") as __open_frz:
return cPickle.load(__open_frz)
except:
return defaultdict(list)
def __save_frz(past_records):
with open(".frz", "w") as __open_frz:
return cPickle.dump(past_records, __open_frz)
def freeze_behavior(*args, **kvargs):
def freeze_decorator(f):
past_records = __open_frz()
f_id = __id_from_function(f)
f_past_records = past_records[f_id]
__compare_behavior(f, f_past_records)
__record_behavior(f, f_past_records, args, kvargs)
__save_frz(past_records)
return f
return freeze_decorator
Dumping and comparing results is not trivial for all types. Right now I'm thinking about using a function (I call it postprocess here) to solve this problem.
Basically, instead of storing res I store postprocess(res), and I compare postprocess(res1)==postprocess(res2) instead of comparing res1 and res2.
It is important to let the user overload the predefined postprocess function.
My first question is:
Do you know a way to check if an object is dumpable or not?
Defining a key for the decorated function is a pain. In the following snippets
I am using the function's module and its name.
** Can you think of a smarter way to do that? **
The snippet below kind of works, but it opens and closes the file when testing and when recording. This is just a stupid prototype... but do you know a nice way to open the file, process the decorator for all functions, and close the file?
I intend to add some functionality to this. For instance, add the possibility of defining
an iterable to browse a set of arguments, recording arguments from real use, etc.
What would you expect from such a decorator?
In general, would you use such a feature, knowing its limitations... especially when trying to use it with OOP?
A:
"In general, would you use such a feature, knowing its limitation...?"
Frankly speaking -- never.
There are no circumstances under which I would "freeze" results of a function in this way.
The use case appears to be based on two wrong ideas: (1) that unit testing is either hard or complex or expensive; and (2) it could be simpler to write the code, "freeze" the results and somehow use the frozen results for refactoring. This isn't helpful. Indeed, the very real possibility of freezing wrong answers makes this a bad idea.
First, on "consistency vs. correctness". This is easier to preserve with a simple mapping than with a complex set of decorators.
Do this instead of writing a freeze decorator.
print "frozen_f=", dict( (i,f(i)) for i in range(100) )
The dictionary object that's created will work perfectly as a frozen result set. No decorator. No complexity to speak of.
Second, on "unit testing".
The point of a unit test is not to "freeze" some random results. The point of a unit test is to compare real results with results developed another (simpler, more obvious, poorly-performing way). Usually unit tests compare hand-developed results. Other times unit tests use obvious but horribly slow algorithms to produce a few key results.
The point of having test data around is not that it's a "frozen" result. The point of having test data is that it is an independent result. Done differently -- sometimes by different people -- that confirms that the function works.
Sorry. This appears to me to be a bad idea; it looks like it subverts the intent of unit testing.
"HOWEVER, Nobody can deny the overhead of writting test cases"
Actually, many folks would deny the "overhead". It isn't "overhead" in the sense of wasted time and effort. For some of us, unittests are essential. Without them, the code may work, but only by accident. With them, we have ample evidence that it actually works; and the specific cases for which it works.
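To make that concrete, here is a minimal unittest along those lines, with hand-computed expectations (a sketch only, reusing the question's pow):
import unittest

class TestPow(unittest.TestCase):
    def test_known_values(self):
        # expectations computed by hand, independently of the implementation
        self.assertEqual(pow(2, 0), 1)
        self.assertEqual(pow(3, 5), 243)
        self.assertEqual(pow(1, 3), 1)

if __name__ == '__main__':
    unittest.main()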
A:
Are you looking to implement invariants or post conditions?
You should specify the result explicitly; this will remove most of your problems.
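If post conditions are what you're after, here is a tiny decorator sketch (all names are made up for illustration):
def postcondition(check):
    # assert that check(result) holds for every return value
    def decorate(f):
        def wrapper(*args, **kwargs):
            result = f(*args, **kwargs)
            assert check(result), 'post condition failed for %r' % (result,)
            return result
        return wrapper
    return decorate

@postcondition(lambda r: r >= 1)
def pow(n, k):
    if k == 0: return 1
    else: return n * pow(n, k - 1)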
| A simple freeze behavior decorator | I'm trying to write a freeze decorator for Python.
The idea is as follows:
(In response to the two comments)
I might be wrong, but I think there are two main uses of
test cases.
One is test-driven development:
Ideally, developers write test cases before writing the code.
It usually helps define the architecture, because this discipline
forces you to define the real interfaces before development.
One may even consider that in some cases the person who
dispatches jobs between developers writes the test cases and
uses them to efficiently illustrate the specification he has in mind.
I don't have any experience with test cases used like that.
The second is the idea that every project of a decent
size, with several programmers, suffers from broken code.
Something that used to work may get broken by a change
that looked like an innocent refactoring.
Though good architecture and loose coupling between components may
help to fight this phenomenon, you will sleep better
at night if you have written some test cases to make sure
that nothing will break your program's behavior.
HOWEVER,
Nobody can deny the overhead of writing test cases. In the
first case one may argue that the test cases are actually guiding
development and are therefore not to be considered an overhead.
Frankly speaking, I'm a pretty young programmer and if I were
you, my word on this subject would not be really valuable...
Anyway, I think that most companies/projects are not working
like that, and that unit tests are mainly used in the second
case...
In other words, rather than ensuring that the program is
working correctly, they aim at checking that it will
work the same in the future.
This need can be met without the cost of writing tests,
by using this freezing decorator.
Let's say you have a function
def pow(n,k):
if k == 0: return 1
else: return n * pow(n,k-1)
It is perfectly nice, and you want to rewrite it as an optimized version.
It is part of a big project. You want it to give back the same result
for a few values.
Rather than going through the pain of test cases, one could use some
kind of freeze decorator.
Something such that the first time the decorator is run,
the decorator runs the function with the defined args (see the calls below)
and saves the result in a map ( f --> args --> result )
@freeze(2,0)
@freeze(1,3)
@freeze(3,5)
@freeze(0,0)
def pow(n,k):
if k == 0: return 1
else: return n * pow(n,k-1)
Next time the program is executed, the decorator will load this map and check
that the result of this function for these args has not changed.
I already quickly wrote the decorator (see below), but hit a few problems about
which I need your advice...
from __future__ import with_statement
from collections import defaultdict
from types import GeneratorType
import cPickle
def __id_from_function(f):
return ".".join([f.__module__, f.__name__])
def generator_firsts(g, N=100):
try:
if N==0:
return []
else:
return [g.next()] + generator_firsts(g, N-1)
except StopIteration :
return []
def __post_process(v):
specialized_postprocess = [
(GeneratorType, generator_firsts),
(Exception, str),
]
try:
val_mro = v.__class__.mro()
for ( ancestor, specialized ) in specialized_postprocess:
if ancestor in val_mro:
return specialized(v)
raise ""
except:
print "Cannot accept this as a value"
return None
def __eval_function(f):
def aux(args, kargs):
try:
return ( True, __post_process( f(*args, **kargs) ) )
except Exception, e:
return ( False, __post_process(e) )
return aux
def __compare_behavior(f, past_records):
for (args, kargs, result) in past_records:
assert __eval_function(f)(args,kargs) == result
def __record_behavior(f, past_records, args, kargs):
registered_args = [ (a, k) for (a, k, r) in past_records ]
if (args, kargs) not in registered_args:
res = __eval_function(f)(args, kargs)
past_records.append( (args, kargs, res) )
def __open_frz():
    try:
        with open(".frz", "r") as frz_file:
            return cPickle.load(frz_file)
    except:
        return defaultdict(list)

def __save_frz(past_records):
    with open(".frz", "w") as frz_file:
        return cPickle.dump(past_records, frz_file)
def freeze_behavior(*args, **kvargs):
def freeze_decorator(f):
past_records = __open_frz()
f_id = __id_from_function(f)
f_past_records = past_records[f_id]
__compare_behavior(f, f_past_records)
__record_behavior(f, f_past_records, args, kvargs)
__save_frz(past_records)
return f
return freeze_decorator
Dumping and comparing results is not trivial for all types. Right now I'm thinking about using a function (I call it postprocess here) to solve this problem.
Basically, instead of storing res I store postprocess(res), and I compare postprocess(res1)==postprocess(res2) instead of comparing res1 and res2.
It is important to let the user overload the predefined postprocess function.
My first question is:
Do you know a way to check if an object is dumpable or not?
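One idea (a minimal sketch, not part of the prototype above) is to simply attempt the dump and catch the failure:
import cPickle

def is_dumpable(obj):
    # An object is "dumpable" if cPickle can serialize it; the only
    # reliable test is to try it and catch the error.
    try:
        cPickle.dumps(obj)
        return True
    except (cPickle.PicklingError, TypeError):
        return False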
Defining a key for the decorated function is a pain. In the snippet above
I am using the function's module and its name.
** Can you think of a smarter way to do that? **
The snippet above kind of works, but opens and closes the file both when testing and when recording. This is just a rough prototype... but do you know a nice way to open the file, process the decorator for all functions, and then close the file?
I intend to add some functionality to this. For instance, add the possibility to define
an iterable to browse a set of arguments, record arguments from real use, etc.
What would you expect from such a decorator?
In general, would you use such a feature, knowing its limitations... especially when trying to use it with OOP?
| [
"\"In general, would you use such a feature, knowing its limitation...?\"\nFrankly speaking -- never.\nThere are no circumstances under which I would \"freeze\" results of a function in this way.\nThe use case appears to be based on two wrong ideas: (1) that unit testing is either hard or complex or expensive; and (2) it could be simpler to write the code, \"freeze\" the results and somehow use the frozen results for refactoring. This isn't helpful. Indeed, the very real possibility of freezing wrong answers makes this a bad idea.\nFirst, on \"consistency vs. correctness\". This is easier to preserve with a simple mapping than with a complex set of decorators.\nDo this instead of writing a freeze decorator.\nprint \"frozen_f=\", dict( (i,f(i)) for i in range(100) )\n\nThe dictionary object that's created will work perfectly as a frozen result set. No decorator. No complexity to speak of.\nSecond, on \"unit testing\".\nThe point of a unit test is not to \"freeze\" some random results. The point of a unit test is to compare real results with results developed another (simpler, more obvious, poorly-performing way). Usually unit tests compare hand-developed results. Other times unit tests use obvious but horribly slow algorithms to produce a few key results.\nThe point of having test data around is not that it's a \"frozen\" result. The point of having test data is that it is an independent result. Done differently -- sometimes by different people -- that confirms that the function works.\nSorry. This appears to me to be a bad idea; it looks like it subverts the intent of unit testing.\n\n\"HOWEVER, Nobody can deny the overhead of writting test cases\"\nActually, many folks would deny the \"overhead\". It isn't \"overhead\" in the sense of wasted time and effort. For some of us, unittests are essential. Without them, the code may work, but only by accident. With them, we have ample evidence that it actually works; and the specific cases for which it works.\n",
"Are you looking to implement invariants or post conditions?\nYou should specify the result explicitly, this wil remove most of you problems.\n"
] | [
3,
0
] | [] | [] | [
"decorator",
"freeze",
"python",
"testing"
] | stackoverflow_0000653783_decorator_freeze_python_testing.txt |
Q:
sorting a list of dictionary values by date in python
I have a list and I am appending a dictionary to it as I loop through my data...and I would like to sort by one of the dictionary keys.
ex:
data = "data from database"
list = []
for x in data:
    dict = {'title': x.title, 'date': x.created_on}
list.append(dict)
I want to sort the list in reverse order by value of 'date'
A:
You can do it this way:
list.sort(key=lambda item:item['date'], reverse=True)
A:
from operator import itemgetter
your_list.sort(key=itemgetter('date'), reverse=True)
Related notes
don't use list, dict as variable names, they are builtin names in Python. It makes your code hard to read.
you might need to replace the dictionary with a tuple, collections.namedtuple, or a custom struct-like class depending on the context
from collections import namedtuple
from operator import itemgetter
Row = namedtuple('Row', 'title date')
rows = [Row(row.title, row.created_on) for row in data]
rows.sort(key=itemgetter(1), reverse=True)
Example:
>>> lst = [Row('a', 1), Row('b', 2)]
>>> lst.sort(key=itemgetter(1), reverse=True)
>>> lst
[Row(title='b', date=2), Row(title='a', date=1)]
Or
>>> from operator import attrgetter
>>> lst = [Row('a', 1), Row('b', 2)]
>>> lst.sort(key=attrgetter('date'), reverse=True)
>>> lst
[Row(title='b', date=2), Row(title='a', date=1)]
Here's how namedtuple looks inside:
>>> Row = namedtuple('Row', 'title date', verbose=True)
class Row(tuple):
'Row(title, date)'
__slots__ = ()
_fields = ('title', 'date')
def __new__(cls, title, date):
return tuple.__new__(cls, (title, date))
@classmethod
def _make(cls, iterable, new=tuple.__new__, len=len):
'Make a new Row object from a sequence or iterable'
result = new(cls, iterable)
if len(result) != 2:
raise TypeError('Expected 2 arguments, got %d' % len(result))
return result
def __repr__(self):
return 'Row(title=%r, date=%r)' % self
def _asdict(t):
'Return a new dict which maps field names to their values'
return {'title': t[0], 'date': t[1]}
def _replace(self, **kwds):
'Return a new Row object replacing specified fields with new values'
result = self._make(map(kwds.pop, ('title', 'date'), self))
if kwds:
raise ValueError('Got unexpected field names: %r' % kwds.keys())
return result
def __getnewargs__(self):
return tuple(self)
title = property(itemgetter(0))
date = property(itemgetter(1))
A:
I actually had this almost exact question yesterday and solved it using search. The best answer applied to your question is this:
from operator import itemgetter
list.sort(key=itemgetter('date'), reverse=True)
A:
Sort the data (or a copy of the data) directly and build the list of dicts afterwards. Sort using the function sorted with an appropriate key function (operator.attrgetter, probably).
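That approach might look like this (a small sketch, assuming each row exposes the .title and .created_on attributes used in the question):
from operator import attrgetter

rows = sorted(data, key=attrgetter('created_on'), reverse=True)
results = [{'title': r.title, 'date': r.created_on} for r in rows]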
A:
If you're into the whole brevity thing:
data = "data from database"
sorted_data = sorted(
[{'title': x.title, 'date': x.created_on} for x in data],
key=operator.itemgetter('date'),
reverse=True)
| sorting a list of dictionary values by date in python | I have a list and I am appending a dictionary to it as I loop through my data...and I would like to sort by one of the dictionary keys.
ex:
data = "data from database"
list = []
for x in data:
dict = {'title':title, 'date': x.created_on}
list.append(dict)
I want to sort the list in reverse order by value of 'date'
| [
"You can do it this way:\nlist.sort(key=lambda item:item['date'], reverse=True)\n\n",
"from operator import itemgetter\n\nyour_list.sort(key=itemgetter('date'), reverse=True)\n\nRelated notes\n\ndon't use list, dict as variable names, they are builtin names in Python. It makes your code hard to read.\nyou might need to replace dictionary by tuple or collections.namedtuple or custom struct-like class depending on the context\nfrom collections import namedtuple\nfrom operator import itemgetter\n\nRow = namedtuple('Row', 'title date')\nrows = [Row(row.title, row.created_on) for row in data]\nrows.sort(key=itemgetter(1), reverse=True)\n\n\nExample:\n>>> lst = [Row('a', 1), Row('b', 2)]\n>>> lst.sort(key=itemgetter(1), reverse=True)\n>>> lst\n[Row(title='b', date=2), Row(title='a', date=1)]\n\nOr \n>>> from operator import attrgetter\n>>> lst = [Row('a', 1), Row('b', 2)]\n>>> lst.sort(key=attrgetter('date'), reverse=True)\n>>> lst\n[Row(title='b', date=2), Row(title='a', date=1)]\n\nHere's how namedtuple looks inside:\n>>> Row = namedtuple('Row', 'title date', verbose=True)\n\nclass Row(tuple):\n 'Row(title, date)'\n\n __slots__ = ()\n\n _fields = ('title', 'date')\n\n def __new__(cls, title, date):\n return tuple.__new__(cls, (title, date))\n\n @classmethod\n def _make(cls, iterable, new=tuple.__new__, len=len):\n 'Make a new Row object from a sequence or iterable'\n result = new(cls, iterable)\n if len(result) != 2:\n raise TypeError('Expected 2 arguments, got %d' % len(result))\n return result\n\n def __repr__(self):\n return 'Row(title=%r, date=%r)' % self\n\n def _asdict(t):\n 'Return a new dict which maps field names to their values'\n return {'title': t[0], 'date': t[1]}\n\n def _replace(self, **kwds):\n 'Return a new Row object replacing specified fields with new values'\n\n result = self._make(map(kwds.pop, ('title', 'date'), self))\n if kwds:\n raise ValueError('Got unexpected field names: %r' % kwds.keys())\n\n return result\n\n def __getnewargs__(self):\n return tuple(self)\n\n title = property(itemgetter(0))\n date = property(itemgetter(1))\n\n",
"I actually had this almost exact question yesterday and solved it using search. The best answer applied to your question is this:\nfrom operator import itemgetter\nlist.sort(key=itemgetter('date'), reverse=True)\n\n",
"Sort the data (or a copy of the data) directly and build the list of dicts afterwards. Sort using the function sorted with an appropiate key function (operator.attrgetter probably)\n",
"If you're into the whole brevity thing:\ndata = \"data from database\"\nsorted_data = sorted(\n [{'title': x.title, 'date': x.created_on} for x in data], \n key=operator.itemgetter('date'),\n reverse=True)\n\n"
] | [
81,
24,
4,
2,
2
] | [] | [] | [
"python"
] | stackoverflow_0000652291_python.txt |
Q:
How do I create a unique value for each key using dict.fromkeys?
First, I'm new to Python, so I apologize if I've overlooked something, but I would like to use dict.fromkeys (or something similar) to create a dictionary of lists, the keys of which are provided in another list. I'm performing some timing tests and I'd like for the key to be the input variable and the list to contain the times for the runs:
def benchmark(input):
...
return time_taken
runs = 10
inputs = (1, 2, 3, 5, 8, 13, 21, 34, 55)
results = dict.fromkeys(inputs, [])
for run in range(0, runs):
for i in inputs:
results[i].append(benchmark(i))
The problem I'm having is that all the keys in the dictionary appear to share the same list, and each run simply appends to it. Is there any way to generate a unique empty list for each key using fromkeys? If not, is there another way to do this without generating the resulting dictionary by hand?
A:
The problem is that in
results = dict.fromkeys(inputs, [])
[] is evaluated only once, right there.
I'd rewrite this code like that:
runs = 10
inputs = (1, 2, 3, 5, 8, 13, 21, 34, 55)
results = {}
for run in range(runs):
for i in inputs:
results.setdefault(i,[]).append(benchmark(i))
Other option is:
runs = 10
inputs = (1, 2, 3, 5, 8, 13, 21, 34, 55)
results = dict([(i,[]) for i in inputs])
for run in range(runs):
for i in inputs:
results[i].append(benchmark(i))
A:
Check out defaultdict (requires Python 2.5 or greater).
from collections import defaultdict
def benchmark(input):
...
return time_taken
runs = 10
inputs = (1, 2, 3, 5, 8, 13, 21, 34, 55)
results = defaultdict(list) # Creates a dict where the default value for any key is an empty list
for run in range(0, runs):
for i in inputs:
results[i].append(benchmark(i))
A:
You can also do this if you don't want to learn anything new (although I recommend you do!) I'm curious as to which method is faster?
results = dict.fromkeys(inputs)
for run in range(0, runs):
for i in inputs:
if not results[i]:
results[i] = []
results[i].append(benchmark(i))
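As a rough way to settle the speed question (a sketch using timeit; the numbers will vary with Python version and data):
import timeit

setup = "from collections import defaultdict; inputs = range(100)"
t1 = timeit.Timer("d = defaultdict(list)\nfor i in inputs: d[i].append(i)", setup)
t2 = timeit.Timer("d = {}\nfor i in inputs: d.setdefault(i, []).append(i)", setup)
print t1.timeit(10000), t2.timeit(10000)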
| How do I create a unique value for each key using dict.fromkeys? | First, I'm new to Python, so I apologize if I've overlooked something, but I would like to use dict.fromkeys (or something similar) to create a dictionary of lists, the keys of which are provided in another list. I'm performing some timing tests and I'd like for the key to be the input variable and the list to contain the times for the runs:
def benchmark(input):
...
return time_taken
runs = 10
inputs = (1, 2, 3, 5, 8, 13, 21, 34, 55)
results = dict.fromkeys(inputs, [])
for run in range(0, runs):
for i in inputs:
results[i].append(benchmark(i))
The problem I'm having is that all the keys in the dictionary appear to share the same list, and each run simply appends to it. Is there any way to generate a unique empty list for each key using fromkeys? If not, is there another way to do this without generating the resulting dictionary by hand?
| [
"The problem is that in \nresults = dict.fromkeys(inputs, [])\n\n[] is evaluated only once, right there. \nI'd rewrite this code like that:\nruns = 10\ninputs = (1, 2, 3, 5, 8, 13, 21, 34, 55)\nresults = {}\n\nfor run in range(runs):\n for i in inputs:\n results.setdefault(i,[]).append(benchmark(i))\n\nOther option is:\nruns = 10\ninputs = (1, 2, 3, 5, 8, 13, 21, 34, 55)\nresults = dict([(i,[]) for i in inputs])\n\nfor run in range(runs):\n for i in inputs:\n results[i].append(benchmark(i))\n\n",
"Check out defaultdict (requires Python 2.5 or greater).\nfrom collections import defaultdict\n\ndef benchmark(input):\n ...\n return time_taken\n\nruns = 10\ninputs = (1, 2, 3, 5, 8, 13, 21, 34, 55)\nresults = defaultdict(list) # Creates a dict where the default value for any key is an empty list\n\nfor run in range(0, runs):\n for i in inputs:\n results[i].append(benchmark(i))\n\n",
"You can also do this if you don't want to learn anything new (although I recommend you do!) I'm curious as to which method is faster?\nresults = dict.fromkeys(inputs)\n\nfor run in range(0, runs):\n for i in inputs:\n if not results[i]:\n results[i] = []\n results[i].append(benchmark(i))\n\n"
] | [
13,
12,
2
] | [] | [] | [
"dictionary",
"fromkeys",
"python"
] | stackoverflow_0000654646_dictionary_fromkeys_python.txt |
Q:
Python Hash Functions
What is a good way of hashing a hierarchy (similar to a file structure) in python?
I could convert the whole hierarchy into a dotted string and then hash that, but is there a better (or more efficient) way of doing this without going back and forth all the time?
An example of a structure I might want to hash is:
a -> b1 -> c -> 1 -> d
a -> b2 -> c -> 2 -> d
a -> c -> 1 -> d
A:
If you have access to your hierarchy components as a tuple, just hash it - tuples are hashable. You may not gain a lot over conversion to and from a delimited string, but it's a start.
If this doesn't help, perhaps you could provide more information about how you store the hierarchy/path information.
A:
How do you want to access your hierarchy?
If you're always going to be checking for a full path, then as suggested, use a tuple:
eg:
>>> d["a","b1","c",1,"d"] = value
However, if you're going to be doing things like "quickly find all the items below a -> b1", it may make more sense to store it as a nested hashtable (otherwise you must iterate through all items to find those you're interested in).
For this, a defaultdict is probably the easiest way to store it. For example:
from collections import defaultdict
def new_dict(): return defaultdict(new_dict)
d = defaultdict(new_dict)
d["a"]["b1"]["c"][1]["d"] = "test"
d["a"]["b2"]["c"][2]["d"] = "test2"
d["a"]["c"][1]["d"] = "test3"
print d["a"]["c"][1]["d"] # Prints test3
print d["a"].keys() # Prints ["c", "b1", "b2"]
A:
You can make any object hashable by implementing the __hash__() method
So you can simply add a suitable __hash__() method to the objects storing your hierarchy, e.g. compute the hash recursively, etc.
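For example, a minimal sketch of that idea (the Node class and its fields are illustrative, not from the question):
class Node(object):
    def __init__(self, name, children=()):
        self.name = name
        self.children = tuple(children)  # tuples are hashable

    def __eq__(self, other):
        return (self.name, self.children) == (other.name, other.children)

    def __hash__(self):
        # Hashing the tuple recursively hashes every child Node.
        return hash((self.name, self.children))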
| Python Hash Functions | What is a good way of hashing a hierarchy (similar to a file structure) in python?
I could convert the whole hierarchy into a dotted string and then hash that, but is there a better (or more efficient) way of doing this without going back and forth all the time?
An example of a structure I might want to hash is:
a -> b1 -> c -> 1 -> d
a -> b2 -> c -> 2 -> d
a -> c -> 1 -> d
| [
"If you have access to your hierarchy components as a tuple, just hash it - tuples are hashable. You may not gain a lot over conversion to and from a delimited string, but it's a start.\nIf this doesn't help, perhaps you could provide more information about how you store the hierarchy/path information.\n",
"How do you want to access your hierarchy? \nIf you're always going to be checking for a full path, then as suggested, use a tuple:\neg:\n>>> d[\"a\",\"b1\",\"c\",1,\"d\"] = value\n\nHowever, if you're going to be doing things like \"quickly find all the items below \"a -> b1\", it may make more sense to store it as a nested hashtable (otherwise you must iterate through all items to find those you're intereted in).\nFor this, a defaultdict is probably the easiest way to store. For example:\nfrom collections import defaultdict\n\ndef new_dict(): return defaultdict(new_dict)\nd = defaultdict(new_dict)\n\nd[\"a\"][\"b1\"][\"c\"][1][\"d\"] = \"test\"\nd[\"a\"][\"b2\"][\"c\"][2][\"d\"] = \"test2\"\nd[\"a\"][\"c\"][1][\"d\"] = \"test3\"\n\nprint d[\"a\"][\"c\"][1][\"d\"] # Prints test3\nprint d[\"a\"].keys() # Prints [\"c\", \"b1\", \"b2\"]\n\n",
"You can make any object hashable by implementing the __hash__() method\nSo you can simply add a suitable __hash__() method to the objects storing your hierarchy, e.g. compute the hash recursively, etc.\n"
] | [
8,
4,
1
] | [] | [] | [
"hash",
"python"
] | stackoverflow_0000654128_hash_python.txt |
Q:
Sending Rich Text Format email using Outlook 2003
I am trying to send a Rich Text Format email message using Outlook 2003.
The following code results in the RTF/HTML source code being dumped into the mail message body.
What should I do in order to fix that, and make Outlook display the formatted data and not the source HTML?
import win32com.client
RTFTEMPLATE = """<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD>
<META HTTP-EQUIV=3D"Content-Type" CONTENT=3D"text/html; =
charset=3Dus-ascii">
<META NAME=3D"Generator" CONTENT=3D"MS Exchange Server version =
08.00.0681.000">
<TITLE>%s</TITLE>
</HEAD>
<BODY>
<!-- Converted from text/rtf format -->
<P DIR=3DLTR><SPAN LANG=3D"en-us"><FONT =
FACE=3D"Calibri">%s</FONT></SPAN><SPAN =
LANG=3D"en-us"></SPAN></P>
</BODY>
</HTML>"""
Format = { 'UNSPECIFIED' : 0, 'PLAIN' : 1, 'HTML' : 2, 'RTF' : 3}
profile = "Outlook"
subject="Subject"
body = "Test Message"
session = win32com.client.Dispatch("Mapi.Session")
outlook = win32com.client.Dispatch("Outlook.Application")
session.Logon(profile)
mainMsg = outlook.CreateItem(0)
mainMsg.To = "test@test.test"
mainMsg.Subject = subject
mainMsg.BodyFormat = Format['RTF']
mainMsg.Body = RTFTEMPLATE % (subject,body)
mainMsg.Send()
EDIT: When using HTMLBody instead of Body, Outlook detects the message as HTML and not as RTF.
A:
If you must use RTF, you will need to convert your HTML to RTF format. Check out the zopyx package.
To use HTML, change the line:
mainMsg.Body = RTFTEMPLATE % (subject,body)
to:
mainMsg.HTMLBody = RTFTEMPLATE % (subject,body)
| Sending Rich Text Format email using Outlook 2003 | I am trying to send a Rich Text Format email message using Outlook 2003.
The following code results the RTF HTML source code to be dumped into the mail message body.
What should I do in order to fix that, and make Outlook display the formatted data and not the source HTML ?
import win32com.client
RTFTEMPLATE = """<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD>
<META HTTP-EQUIV=3D"Content-Type" CONTENT=3D"text/html; =
charset=3Dus-ascii">
<META NAME=3D"Generator" CONTENT=3D"MS Exchange Server version =
08.00.0681.000">
<TITLE>%s</TITLE>
</HEAD>
<BODY>
<!-- Converted from text/rtf format -->
<P DIR=3DLTR><SPAN LANG=3D"en-us"><FONT =
FACE=3D"Calibri">%s</FONT></SPAN><SPAN =
LANG=3D"en-us"></SPAN></P>
</BODY>
</HTML>"""
Format = { 'UNSPECIFIED' : 0, 'PLAIN' : 1, 'HTML' : 2, 'RTF' : 3}
profile = "Outlook"
subject="Subject"
body = "Test Message"
session = win32com.client.Dispatch("Mapi.Session")
outlook = win32com.client.Dispatch("Outlook.Application")
session.Logon(profile)
mainMsg = outlook.CreateItem(0)
mainMsg.To = "test@test.test"
mainMsg.Subject = subject
mainMsg.BodyFormat = Format['RTF']
mainMsg.Body = RTFTEMPLATE % (subject,body)
mainMsg.Send()
EDIT: When using HTMLBody instead of Body, Outlook detects the message as HTML and not as RTF.
| [
"If you must use RTF, you will need to convert your HTML to RTF format. Check out the zopyx package.\nTo use HTML, change the line:\nmainMsg.Body = RTFTEMPLATE % (subject,body)\n\nto:\nmainMsg.HTMLBody = RTFTEMPLATE % (subject,body)\n\n"
] | [
0
] | [] | [] | [
"email",
"outlook",
"python"
] | stackoverflow_0000655180_email_outlook_python.txt |
Q:
splitting a ManyToManyField over multiple form fields in a ModelForm
So I have a model with a ManyToManyField called tournaments. I have a ModelForm with two tournament fields:
pay_tourns = forms.ModelMultipleChoiceField(
queryset=Tourn.objects.all().active().pay_tourns(),
widget=forms.CheckboxSelectMultiple())
rep_tourns = forms.ModelMultipleChoiceField(
queryset=Tourn.objects.all().active().rep_tourns(),
widget=forms.CheckboxSelectMultiple())
The methods after all() there are from a subclassed QuerySet. When I'm saving the form in my view I do this:
post.tournaments = (post_form.cleaned_data.get('pay_tourns')
+ post_form.cleaned_data.get('rep_tourns'))
Anyway, this all works fine. What I can't figure out how to do is fill these form fields out when I'm loading an existing post. That is, when I pass instance=post to the form. Any ideas?
A:
You could do something like this to the ModelForm:
def __init__(self, *args, **kwargs):
super(MyForm, self).__init__(*args, **kwargs)
instance = kwargs.get('instance')
if instance:
        self.fields['pay_tourns'].queryset = self.fields['pay_tourns'].queryset.filter(post=instance)
        self.fields['rep_tourns'].queryset = self.fields['rep_tourns'].queryset.filter(post=instance)
I don't see why that wouldn't work, but I'm going to test it just to make sure...
EDIT: Tested and it works.
A:
Paolo Bergantino was on the right track, and helped me find it. This was the solution:
def __init__(self, *args, **kwargs):
super(MyForm, self).__init__(*args, **kwargs)
instance = kwargs.get('instance')
if instance:
self.fields['pay_tourns'].initial = [ o.id for o in instance.tournaments.all().active().pay_tourns()]
self.fields['rep_tourns'].initial = [ o.id for o in instance.tournaments.all().active().rep_tourns()]
| splitting a ManyToManyField over multiple form fields in a ModelForm | So I have a model with a ManyToManyField called tournaments. I have a ModelForm with two tournament fields:
pay_tourns = forms.ModelMultipleChoiceField(
queryset=Tourn.objects.all().active().pay_tourns(),
widget=forms.CheckboxSelectMultiple())
rep_tourns = forms.ModelMultipleChoiceField(
queryset=Tourn.objects.all().active().rep_tourns(),
widget=forms.CheckboxSelectMultiple())
The methods after all() there are from a subclassed QuerySet. When I'm saving the form in my view I do thus:
post.tournaments = (post_form.cleaned_data.get('pay_tourns')
+ post_form.cleaned_data.get('rep_tourns'))
Anyway, this all works fine. What I can't figure out how to do is fill these form fields out when I'm loading an existing post. That is, when I pass instance=post to the form. Any ideas?
| [
"You could do something like this to the ModelForm: \ndef __init__(self, *args, **kwargs):\n super(MyForm, self).__init__(*args, **kwargs)\n\n instance = kwargs.get('instance')\n if instance:\n self.fields['pay_tourns'].queryset.filter(post=instance)\n self.fields['rep_tourns'].queryset.filter(post=instance)\n\nI don't see why that wouldn't work, but I'm going to test it just to make sure...\nEDIT: Tested and it works.\n",
"Paolo Bergantino was on the right track, and helped me find it. This was the solution:\ndef __init__(self, *args, **kwargs):\n super(MyForm, self).__init__(*args, **kwargs)\n\n instance = kwargs.get('instance')\n if instance:\n self.fields['pay_tourns'].initial = [ o.id for o in instance.tournaments.all().active().pay_tourns()]\n self.fields['rep_tourns'].initial = [ o.id for o in instance.tournaments.all().active().rep_tourns()]\n\n"
] | [
2,
1
] | [] | [] | [
"django",
"django_forms",
"modelform",
"python"
] | stackoverflow_0000654576_django_django_forms_modelform_python.txt |
Q:
python regular expression for retweets
I'm working on a regex that will extract retweet keywords and user names from tweets. Here's an example, with a rather terrible regex to do the job:
tweet='foobar RT@one, @two: @three barfoo'
m=re.search(r'(RT|retweet|from|via)\b\W*@(\w+)\b\W*@(\w+)\b\W*@(\w+)\b\W*',tweet)
m.groups()
('RT', 'one', 'two', 'three')
What I'd like is to condense the repeated \b\W*@(\w+)\b\W* patterns and make their number variable, so that if @four were added after @three, it would also be extracted. I've tried many permutations to repeat this with a + unsuccessfully.
I'd also like this to work for something like
tweet='foobar RT@one, RT @two: RT @three barfoo';
which can be achieved with re.finditer if the patterns don't overlap. (I have a version where the patterns do overlap, and so only the first RT gets picked up.)
Any help is greatly appreciated. Thanks.
A:
Try
(RT|retweet|from|via)(?:\b\W*@(\w+))+

Enclosing the \b\W*@(\w+) in (?:...) allows you to group the terms for repetition without capturing the aggregate.
I'm not sure I'm following the second part of your question, but I think you may be looking for something involving a construct like:
(?:(?!RT|@).)
which will match any character that isn't an "@" or the start of "RT", again without capturing it.
In that case, how about:
(RT|retweet|from|via)((?:\b\W*@\w+)+)
and then post process
re.split(r'@(\w+)', m.groups()[1])
To get the individual handles?
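Putting that together on the sample tweet (a quick sketch; it uses re.findall rather than re.split to pull out just the handles):
import re

tweet = 'foobar RT@one, @two: @three barfoo'
m = re.search(r'(RT|retweet|from|via)((?:\b\W*@\w+)+)', tweet)
if m:
    keyword = m.group(1)                         # 'RT'
    handles = re.findall(r'@(\w+)', m.group(2))  # ['one', 'two', 'three']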
| python regular expression for retweets | i'm working on a regex that will extract retweet keywords and user names from tweets. here's an example, with a rather terrible regex to do the job:
tweet='foobar RT@one, @two: @three barfoo'
m=re.search(r'(RT|retweet|from|via)\b\W*@(\w+)\b\W*@(\w+)\b\W*@(\w+)\b\W*',tweet)
m.groups()
('RT', 'one', 'two', 'three')
what i'd like is to condense the repeated \b\W*@(\w+)\b\W* patterns and make them of a variable number, so that if @four were added after @three, it would also be extracted. i've tried many permutations to repeat this with a + unsuccessfully.
i'd also like this to work for something like
tweet='foobar RT@one, RT @two: RT @three barfoo';
which can be achieved with a re.finditer if the patterns don't overlap. (i have a version where the patterns do overlap, and so only the first RT gets picked up.)
any help is greatly appreciated. thanks.
| [
"Try\n(RT|retweet|from|via)(?:\\b\\W*@(\\w+))+'\n\nEnclosing the \\b\\W*@(\\w+) in '(?:...)` allows you to group the terms for repetition without capturing the aggregate.\nI'm not sure I'm following the second part of your question, but I think you may be looking for something involving a construct like:\n(?:(?!RT|@).)\n\nwhich will match any character that isn't an \"@\" or the start of \"RT\", again without capturing it.\nIn that case, how about:\n(RT|retweet|from|via)((?:\\b\\W*@\\w+)+)\n\nand then post process\nre.split(r'@(\\w+)' ,m.groups()[1])\n\nTo get the individual handles?\n"
] | [
3
] | [] | [] | [
"python",
"regex",
"twitter"
] | stackoverflow_0000655903_python_regex_twitter.txt |
Q:
Rotation based on end points
I'm using pygame to draw a line between two arbitrary points. I also want to append arrows at the end of the lines that face outward in the directions the line is traveling.
It's simple enough to stick an arrow image at the end, but I have no clue how to calculate the degrees of rotation to keep the arrows pointing in the right direction.
A:
Here is the complete code to do it. Note that when using pygame, the y co-ordinate is measured from the top, and so we take the negative when using math functions.
import pygame
import math
import random
pygame.init()
screen=pygame.display.set_mode((300,300))
screen.fill((255,255,255))
pos1=random.randrange(300), random.randrange(300)
pos2=random.randrange(300), random.randrange(300)
pygame.draw.line(screen, (0,0,0), pos1, pos2)
arrow=pygame.Surface((50,50))
arrow.fill((255,255,255))
pygame.draw.line(arrow, (0,0,0), (0,0), (25,25))
pygame.draw.line(arrow, (0,0,0), (0,50), (25,25))
arrow.set_colorkey((255,255,255))
angle=math.atan2(-(pos1[1]-pos2[1]), pos1[0]-pos2[0])
##Note that in pygame y=0 represents the top of the screen
##So it is necessary to invert the y coordinate when using math
angle=math.degrees(angle)
def drawAng(angle, pos):
nar=pygame.transform.rotate(arrow,angle)
nrect=nar.get_rect(center=pos)
screen.blit(nar, nrect)
drawAng(angle, pos1)
angle+=180
drawAng(angle, pos2)
pygame.display.flip()
A:
We're assuming that 0 degrees means the arrow is pointing to the right, 90 degrees means pointing straight up and 180 degrees means pointing to the left.
There are several ways to do this, the simplest is probably using the atan2 function.
If your starting point is (x1,y1) and your end point is (x2,y2), then the angle in degrees of the line between the two is:
import math
deg=math.degrees(math.atan2(y2-y1,x2-x1))
This will give you an angle in the range -180 to 180, so if you need it in the range 0 to 360 you have to take care of that yourself.
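For instance, one common way to normalize it (a small sketch):
deg = math.degrees(math.atan2(y2-y1, x2-x1)) % 360  # wraps negatives into [0, 360)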
A:
I would have to look up the exact functions to use, but how about making a right triangle where the hypotenuse is the line in question and the legs are axis-aligned, and using some basic trigonometry to calculate the angle of the line based on the lengths of the sides of the triangle? Of course, you will have to special-case lines that are already axis-aligned, but that should be trivial.
Also, this Wikipedia article on slope may give you some ideas.
A:
just to append to the above code, you'd probably want an event loop so it wouldn't quit right away:
...
clock = pygame.time.Clock()
running = True
while running:
    clock.tick(30)
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
| Rotation based on end points | I'm using pygame to draw a line between two arbitrary points. I also want to append arrows at the end of the lines that face outward in the directions the line is traveling.
It's simple enough to stick an arrow image at the end, but I have no clue how the calculate the degrees of rotation to keep the arrows pointing in the right direction.
| [
"Here is the complete code to do it. Note that when using pygame, the y co-ordinate is measured from the top, and so we take the negative when using math functions.\nimport pygame\nimport math\nimport random\npygame.init()\n\nscreen=pygame.display.set_mode((300,300))\nscreen.fill((255,255,255))\n\npos1=random.randrange(300), random.randrange(300)\npos2=random.randrange(300), random.randrange(300)\n\npygame.draw.line(screen, (0,0,0), pos1, pos2)\n\narrow=pygame.Surface((50,50))\narrow.fill((255,255,255))\npygame.draw.line(arrow, (0,0,0), (0,0), (25,25))\npygame.draw.line(arrow, (0,0,0), (0,50), (25,25))\narrow.set_colorkey((255,255,255))\n\nangle=math.atan2(-(pos1[1]-pos2[1]), pos1[0]-pos2[0])\n##Note that in pygame y=0 represents the top of the screen\n##So it is necessary to invert the y coordinate when using math\nangle=math.degrees(angle)\n\ndef drawAng(angle, pos):\n nar=pygame.transform.rotate(arrow,angle)\n nrect=nar.get_rect(center=pos)\n screen.blit(nar, nrect)\n\ndrawAng(angle, pos1)\nangle+=180\ndrawAng(angle, pos2)\npygame.display.flip()\n\n",
"We're assuming that 0 degrees means the arrow is pointing to the right, 90 degrees means pointing straight up and 180 degrees means pointing to the left.\nThere are several ways to do this, the simplest is probably using the atan2 function.\nif your starting point is (x1,y1) and your end point is (x2,y2) then the angle in degrees of the line between the two is:\nimport math\ndeg=math.degrees(math.atan2(y2-y1,x2-x1))\n\nthis will you an angle in the range -180 to 180 so you need it from 0 to 360 you have to take care of that your self.\n",
"I would have to look up the exact functions to use, but how about making a right triangle where the hypotenuse is the line in question and the legs are axis-aligned, and using some basic trigonometry to calculate the angle of the line based on the lengths of the sides of the triangle? Of course, you will have to special-case lines that are already axis-aligned, but that should be trivial.\nAlso, this Wikipedia article on slope may give you some ideas.\n",
"just to append to the above code, you'd probably want an event loop so it wouldn't quit right away:\n...\nclock = pygame.time.Clock()\nrunning = True\n\nwhile (running):\n clock.tick()\n\n"
] | [
9,
2,
1,
1
] | [] | [] | [
"geometry",
"pygame",
"python"
] | stackoverflow_0000650646_geometry_pygame_python.txt |
Q:
Implementation: How to retrieve and send emails for different Gmail accounts?
I need advice on how to go about setting up a simple service for my users. I would like to add a new feature where users can send and receive emails from their Gmail account. I have seen this done several times and I know it's possible.
There used to be a project called "Libgmailer" at SourceForge but I think it was abandoned. Is anyone aware of anything similar?
I have found that Gmail has a Python API but my site is making use of PHP.
I really need ideas on how to best go about this!
Thanks all for any input
A:
Any library/source that works with IMAP or POP will work.
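For instance, from Python a minimal IMAP check against Gmail might look like this (a sketch; it assumes the user has enabled IMAP in their Gmail settings):
import imaplib

conn = imaplib.IMAP4_SSL('imap.gmail.com', 993)
conn.login('user@gmail.com', 'password')
conn.select('INBOX')
typ, data = conn.search(None, 'ALL')
print typ, len(data[0].split()), 'messages'
conn.logout()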
A:
Well, if Google didn't come up with anything, personally I'd see if I could reverse engineer the Python API by implementing it and watching it with a packet sniffer. My guess is it's just accessing some web service, which should be pretty easy to mimic regardless of the language you're using.
A:
Just a thought: Gmail supports POP/IMAP access. Could you do it using those protocols? It would mean asking your users to go into their Gmail and enable it, though.
| Implementation: How to retrieve and send emails for different Gmail accounts? | I need advice and how to got about setting up a simple service for my users. I would like to add a new feature where users can send and receive emails from their gmail account. I have seen this done several times and I know its possible.
There use to be a project for "Libgmailer" at sourceforge but I think it was abandoned. Is anyone aware of anything similar?
I have found that Gmail has a Python API but my site is making use of PHP.
I really need ideas on how to best go about this!
Thanks all for any input
| [
"any library/source that works with imap or pop will work.\n",
"Well if Google didn't come up with anything personally I'd see if I could reverse engineer the Python API by implementing it and watching it with a packet sniffer. My guess is it's just accessing some web service which should be pretty easy to mimic regardless of the language you're using.\n",
"Just a thought, Gmail supports POP/IMAP access. Could you do it using those protocols? It would mean asking your users to go into their gmail and enable it though.\n"
] | [
6,
0,
0
] | [] | [] | [
"email",
"gmail",
"php",
"python"
] | stackoverflow_0000656180_email_gmail_php_python.txt |
Q:
How to implement Google Suggest in your own web application (e.g. using Python)
In my website, users have the possibility to store links.
During typing the internet address into the designated field I would like to display a suggest/autocomplete box similar to Google Suggest or the Chrome Omnibar.
Example:
User is typing as URL:
http://www.sta
Suggestions which would be displayed:
http://www.staples.com
http://www.starbucks.com
http://www.stackoverflow.com
How can I achieve this while not reinventing the wheel? :)
A:
You could try with
http://google.com/complete/search?output=toolbar&q=keyword
and then parse the xml result.
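For example, a rough sketch of fetching and parsing it (the element and attribute names are assumptions based on the toolbar output format):
import urllib
from xml.etree import ElementTree

url = 'http://google.com/complete/search?output=toolbar&q=' + urllib.quote('www.sta')
tree = ElementTree.parse(urllib.urlopen(url))
suggestions = [s.get('data') for s in tree.findall('.//suggestion')]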
A:
I did this once before in a Django server. There's two parts - client-side and server-side.
Client side you will have to send out XmlHttpRequests to the server as the user is typing, and then display the information when it comes back. This part will require a decent amount of JavaScript, including some tricky parts like callbacks and keypress handlers.
Server side you will have to handle the XmlHttpRequests, which will contain what the user has typed so far, e.g. a URL of
www.yoursite.com/suggest?typed=www.sta
and then respond with the suggestions encoded in some way. (I'd recommend JSON-encoding the suggestions.) You also have to actually get the suggestions from your database, this could be just a simple SQL call or something else depending on your framework.
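A tiny Django view along those lines might look like this (a sketch; Link and its url field are made-up names for whatever model stores the saved links):
from django.http import HttpResponse
from django.utils import simplejson

def suggest(request):
    typed = request.GET.get('typed', '')
    urls = Link.objects.filter(url__istartswith=typed).values_list('url', flat=True)[:10]
    return HttpResponse(simplejson.dumps(list(urls)), mimetype='application/json')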
But the server-side part is pretty simple. The client-side part is trickier, I think. I found this article helpful
He's writing things in PHP, but the client-side work is pretty much the same. In particular you might find his CSS helpful.
A:
Yahoo has a good autocomplete control.
They have a sample here.
Obviously this does nothing to help you out in getting the data - but it looks like you have your own source and aren't actually looking to get data from Google.
A:
If you want the auto-complete to use data from your own database, you'll need to do the search yourself and update the suggestions using AJAX as users type. For the search part, you might want to look at Lucene.
A:
That control is often called a word wheel. MSDN has a recent walkthrough on writing one with LINQ. There are two critical aspects: deferred execution and lazy evaluation. The article has source code too.
| How to implement Google Suggest in your own web application (e.g. using Python) | In my website, users have the possibility to store links.
During typing the internet address into the designated field I would like to display a suggest/autocomplete box similar to Google Suggest or the Chrome Omnibar.
Example:
User is typing as URL:
http://www.sta
Suggestions which would be displayed:
http://www.staples.com
http://www.starbucks.com
http://www.stackoverflow.com
How can I achieve this while not reinventing the wheel? :)
| [
"You could try with\nhttp://google.com/complete/search?output=toolbar&q=keyword\nand then parse the xml result.\n",
"I did this once before in a Django server. There's two parts - client-side and server-side.\nClient side you will have to send out XmlHttpRequests to the server as the user is typing, and then when the information comes back, display it. This part will require a decent amount of javascript, including some tricky parts like callbacks and keypress handlers.\nServer side you will have to handle the XmlHttpRequests which will be something that contains what the user has typed so far. Like a url of\nwww.yoursite.com/suggest?typed=www.sta\n\nand then respond with the suggestions encoded in some way. (I'd recommend JSON-encoding the suggestions.) You also have to actually get the suggestions from your database, this could be just a simple SQL call or something else depending on your framework.\nBut the server-side part is pretty simple. The client-side part is trickier, I think. I found this article helpful\nHe's writing things in php, but the client side work is pretty much the same. In particular you might find his CSS helpful.\n",
"Yahoo has a good autocomplete control.\nThey have a sample here..\nObviously this does nothing to help you out in getting the data - but it looks like you have your own source and arent actually looking to get data from Google.\n",
"If you want the auto-complete to use date from your own database, you'll need to do the search yourself and update the suggestions using AJAX as users type. For the search part, you might want to look at Lucene.\n",
"That control is often called a word wheel. MSDN has a recent walkthrough on writing one with LINQ. There are two critical aspects: deferred execution and lazy evaluation. The article has source code too. \n"
] | [
8,
2,
1,
0,
0
] | [] | [] | [
"autocomplete",
"autosuggest",
"python"
] | stackoverflow_0000255700_autocomplete_autosuggest_python.txt |
Q:
Why did Python 2.6 add a global next() function?
I noticed that Python 2.6 added a next() to its list of global functions.
next(iterator[, default])
Retrieve the next item from the iterator by calling its next() method.
If default is given, it is returned if
the iterator is exhausted, otherwise
StopIteration is raised.
What was the motivation for adding this?
What can you do with next(iterator) that you can't do with iterator.next() and an except clause to handle the StopIteration?
A:
It's just for consistency with functions like len(). I believe next(i) calls i.__next__() internally.
See http://www.python.org/dev/peps/pep-3114/
A:
Note that in Python 3.0+ the next method has been renamed to __next__. This is because of consistency. next is a special method and special methods are named by convention (PEP 8) with double leading and trailing underscore. Special methods are not meant to be called directly, that's why the next built in function was introduced.
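A small illustration of what the built-in adds over calling the method directly:
>>> it = iter([1, 2])
>>> next(it)
1
>>> next(it)
2
>>> next(it, None)  # returns the default instead of raising StopIteration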
A:
It calls __next__ internally, but it makes it look more 'functional' than 'object-oriented'. Mind you that's just my opinion, but I don't like next(i) rather than i.next(). But as Steve Mc said, it also helps slightly with consistency.
| Why did Python 2.6 add a global next() function? | I noticed that Python2.6 added a next() to it's list of global functions.
next(iterator[, default])
Retrieve the next item from the iterator by calling its next() method.
If default is given, it is returned if
the iterator is exhausted, otherwise
StopIteration is raised.
What was the motivation for adding this?
What can you do with next(iterator) that you can't do with iterator.next() and an except clause to handle the StopIteration?
| [
"It's just for consistency with functions like len(). I believe next(i) calls i.__next__() internally.\nSee http://www.python.org/dev/peps/pep-3114/\n",
"Note that in Python 3.0+ the next method has been renamed to __next__. This is because of consistency. next is a special method and special methods are named by convention (PEP 8) with double leading and trailing underscore. Special methods are not meant to be called directly, that's why the next built in function was introduced.\n",
"It calls __next__ internally, but it makes it look more 'functional' than 'object-oriented'. Mind you that's just my opinion, but I don't like next(i) rather than i.next(). But as Steve Mc said, it also helps slightly with consistency.\n"
] | [
18,
10,
0
] | [] | [] | [
"python",
"python_2.6"
] | stackoverflow_0000656155_python_python_2.6.txt |
Q:
Django form fails validation on a unique field
I have a simple model that is defined as:
class Article(models.Model):
slug = models.SlugField(max_length=50, unique=True)
title = models.CharField(max_length=100, unique=False)
and the form:
class ArticleForm(ModelForm):
class Meta:
model = Article
The validation here fails when I try to update an existing row:
if request.method == 'POST':
form = ArticleForm(request.POST)
if form.is_valid(): # POOF
form.save()
Creating a new entry is fine, however, when I try to update any of these fields, the validation no longer passes.
The "errors" property had nothing, but I dropped into the debugger and deep within the Django guts I saw this:
slug: "Article with this None already exists"
So it looks like is_valid() fails on a unique value check, but all I want to do is update the row.
I can't just do:
form.save(force_update=True)
... because the form will fail on validation.
This looks like something very simple, but I just can't figure it out.
I am running Django 1.0.2
What croaks is BaseModelForm.validate_unique() which is called on form initialization.
A:
I don't think you are actually updating an existing article, but instead creating a new one, presumably with more or less the same content, especially the slug, and thus you will get an error. It is a bit strange that you don't get better error reporting, but also I do not know what the rest of your view looks like.
What if you were to try something along these lines (I have included a bit more of a possible view function, change it to fit your needs); I haven't actually tested my code, so I am sure I've made at least one mistake, but you should at least get the general idea:
def article_update(request, id):
    article = get_object_or_404(Article, pk=id)
if request.method == 'POST':
form = ArticleForm(request.POST, instance=article)
if form.is_valid():
form.save()
return HttpResponseRedirect(to-some-suitable-url)
else:
form = ArticleForm(instance=article)
return render_to_response('article_update.html', { 'form': form })
The thing is, as taurean noted, you should instantiate your model form with the object you wish to update, otherwise you will get a new one.
A:
I was also searching for a way to update an existing record, and even tried form.save(force_update=True) but received errors.
Finally, by trial & error, I managed to update an existing record. The code below is tested and working. Hope this helps...
models.py from djangobook
class Author(models.Model):
first_name = models.CharField(max_length=30)
last_name = models.CharField(max_length=40)
email = models.EmailField(blank=True, verbose_name='e-mail')
objects = models.Manager()
    sel_objects = AuthorManager()  # custom manager defined elsewhere in the djangobook example
def __unicode__(self):
return self.first_name+' '+ self.last_name
class AuthorForm(ModelForm):
class Meta:
model = Author
# views.py
# add new record
def authorcontact(request):
if request.method == 'POST':
form = AuthorForm(request.POST)
if form.is_valid():
form.save()
return HttpResponseRedirect('/contact/created')
else:
form = AuthorForm()
return render_to_response('author_form.html', {'form': form})
update existing record
def authorcontactupd(request,id):
if request.method == 'POST':
a=Author.objects.get(pk=int(id))
form = AuthorForm(request.POST, instance=a)
if form.is_valid():
form.save()
return HttpResponseRedirect('/contact/created')
else:
a=Author.objects.get(pk=int(id))
form = AuthorForm(instance=a)
return render_to_response('author_form.html', {'form': form})
A:
All I can guess is that you are getting an object to fill a form, and trying to save it again.
Try using a ModelForm, and instantiate it with the desired object.
A:
It appears that your SlugField is returning None, and because a null/blank slug already exists somewhere in the database, it's giving an 'already exists' error. It seems like your slug field isn't saving correctly at all.
| Django form fails validation on a unique field | I have a simple model that is defined as:
class Article(models.Model):
slug = models.SlugField(max_length=50, unique=True)
title = models.CharField(max_length=100, unique=False)
and the form:
class ArticleForm(ModelForm):
class Meta:
model = Article
The validation here fails when I try to update an existing row:
if request.method == 'POST':
form = ArticleForm(request.POST)
if form.is_valid(): # POOF
form.save()
Creating a new entry is fine, however, when I try to update any of these fields, the validation no longer passes.
The "errors" property had nothing, but I dropped into the debugger and deep within the Django guts I saw this:
slug: "Article with this None already exists"
So it looks like is_valid() fails on a unique value check, but all I want to do is update the row.
I can't just do:
form.save(force_update=True)
... because the form will fail on validation.
This looks like something very simple, but I just can't figure it out.
I am running Django 1.0.2
What croaks is BaseModelForm.validate_unique() which is called on form initialization.
| [
"I don't think you are actually updating an existing article, but instead creating a new one, presumably with more or less the same content, especially the slug, and thus you will get an error. It is a bit strange that you don't get better error reporting, but also I do not know what the rest of your view looks like.\nWhat if you where to try something along these lines (I have included a bit more of a possible view function, change it to fit your needs); I haven't actually tested my code, so I am sure I've made at least one mistake, but you should at least get the general idea:\ndef article_update(request, id):\n article = get_objects_or_404(Article, pk=id)\n\n if request.method == 'POST':\n form = ArticleForm(request.POST, instance=article)\n\n if form.is_valid():\n form.save()\n\n return HttpResponseRedirect(to-some-suitable-url)\n\n else:\n form = ArticleForm(instance=article)\n\n return render_to_response('article_update.html', { 'form': form })\n\nThe thing is, as taurean noted, you should instantiate your model form with the object you wish to update, otherwise you will get a new one.\n",
"I was also searching for a way to update an existing record, even tried form.save(force_update=True) but received errors??\nFinally by trial & error managed to update existing record. Below codes tested working. Hope this helps...\nmodels.py from djangobook\nclass Author(models.Model):\n first_name = models.CharField(max_length=30)\n\n last_name = models.CharField(max_length=40)\n\n email = models.EmailField(blank=True, verbose_name='e-mail')\n\n objects = models.Manager()\n\n sel_objects=AuthorManager()\n\n def __unicode__(self):\n return self.first_name+' '+ self.last_name\n\nclass AuthorForm(ModelForm):\n class Meta:\n model = Author\n\n\n# views.py\n# add new record\n\ndef authorcontact(request):\n\n if request.method == 'POST':\n\n form = AuthorForm(request.POST)\n\n if form.is_valid():\n\n form.save()\n\n return HttpResponseRedirect('/contact/created')\n\n else:\n\n form = AuthorForm()\n\n return render_to_response('author_form.html', {'form': form})\n\nupdate existing record\ndef authorcontactupd(request,id):\n\n if request.method == 'POST':\n\n a=Author.objects.get(pk=int(id))\n\n form = AuthorForm(request.POST, instance=a)\n\n if form.is_valid():\n\n form.save()\n\n return HttpResponseRedirect('/contact/created')\n\n else:\n a=Author.objects.get(pk=int(id))\n\n form = AuthorForm(instance=a)\n\n return render_to_response('author_form.html', {'form': form})\n\n",
"All i can guess is that you are getting an object to fill a form, and trying to save it again. \nTry using a ModelForm, and intantiate it with desired object.\n",
"It appears that your SlugField is returning None and because a null/blank slug already exists somewhere in the database, its giving an 'already exists' error. It seems like your slug field isn't saving correctly at all.\n"
] | [
28,
5,
2,
1
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000526457_django_python.txt |
Q:
Accessing Microsoft Automation Objects from Python
I have a set of macros that I have turned into an add-in in Excel. The macros allow me to interact with another program that has what are called Microsoft Automation Objects that provide some control over what the other program does. For example, I have a filter tool in the add-in that filters the list provided by the other program to match a list in the Excel workbook. This is slow though. I might have fifty thousand lines in the other program and want to filter out all of the lines that don't match a list of three thousand lines in Excel. This type of matching takes about 30-40 minutes. I have begun wondering if there is a way to do this with Python instead, since I suspect the matching process could be done in seconds.
Edited:
Thanks - based on the suggestion to look at Hammond's book I found a number of resources. However, though I am still exploring, it looks like many of these are old. For example, Hammond's book was published in 2000, which means the writing was finished almost a decade ago. Correction: I just found the package called PyWin32 with a 2/2009 build.
This should get me started. Thanks
A:
You will probably need the win32com package.
This is a sample exemple I found at : http://www.markcarter.me.uk/computing/python/excel.html which shows how to use com with Excel. This might be a good start.
# this example starts Excel, creates a new workbook,
# puts some text in the first and second cell
# closes the workbook without saving the changes
# and closes Excel. This happens really fast, so
# you may want to comment out some lines and add them
# back in one at a time ... or do the commands interactively
from win32com.client import Dispatch
xlApp = Dispatch("Excel.Application")
xlApp.Visible = 1
xlApp.Workbooks.Add()
xlApp.ActiveSheet.Cells(1,1).Value = 'Python Rules!'
xlApp.ActiveWorkbook.ActiveSheet.Cells(1,2).Value = 'Python Rules 2!'
xlApp.ActiveWorkbook.Close(SaveChanges=0) # see note 1
xlApp.Quit()
xlApp.Visible = 0 # see note 2
del xlApp
# raw_input("press Enter ...")
A:
Mark Hammond and Andy Robinson have written the book on accessing Windows COM objects from Python.
Here is an example using Excel.
A:
As far as I know it is possible to create COM objects (which is what Automation objects are) in Python on Windows. Then assuming you can get out the lists via automation it should be easy to do what you want in python.
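Once the rows are in Python, the matching itself is cheap with a set (a sketch; excel_keys and program_rows stand in for whatever you pull over COM):
wanted = set(excel_keys)   # the ~3,000 values from the workbook
matches = [row for row in program_rows if row in wanted]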
A:
However, though I am still exploring it looks like many of these are old.
COM is old. The interface hasn't changed since at least 1993.
I also don't see the package on the Python.org website. I searched for COM packages but didn't see anything useful.
http://python.net/crew/mhammond/win32/
http://sourceforge.net/projects/pywin32/
The latest update was Feb 2009, which include Python 3.0 support.
| Accessing Microsoft Automation Objects from Python | I have a set of macros that I have turned into an add-in in excel. The macros allow me to interact with another program that has what are called Microsoft Automation Objects that provide some control over what the other program does. For example, I have a filter tool in the add-in that filters the list provided by the other program to match a list in the Excel workbook. This is slow though. I might have fifty thousand lines in the other program and want to filter out all of the lines that don't match a list of three thousand lines in Excel. This type of matching takes about 30-40 minutes. I have begun wondering if there is way to do this with Python instead since I suspect the matching process could be done in seconds.
Edited:
Thanks- Based on the suggestion to look at Hammond's book I found out a number of resources. However, though I am still exploring it looks like many of these are old. For example, Hammond's book was published in 2000, which means the writing was finished almost a decade ago. Correction I just found the package called PyWin32 with a 2/2009 build.
This should get me started. Thanks
| [
"You will probably need the win32com package.\nThis is a sample exemple I found at : http://www.markcarter.me.uk/computing/python/excel.html which shows how to use com with Excel. This might be a good start.\n# this example starts Excel, creates a new workbook, \n# puts some text in the first and second cell\n# closes the workbook without saving the changes\n# and closes Excel. This happens really fast, so\n# you may want to comment out some lines and add them\n# back in one at a time ... or do the commands interactively\n\n\nfrom win32com.client import Dispatch\n\n\nxlApp = Dispatch(\"Excel.Application\")\nxlApp.Visible = 1\nxlApp.Workbooks.Add()\nxlApp.ActiveSheet.Cells(1,1).Value = 'Python Rules!'\nxlApp.ActiveWorkbook.ActiveSheet.Cells(1,2).Value = 'Python Rules 2!'\nxlApp.ActiveWorkbook.Close(SaveChanges=0) # see note 1\nxlApp.Quit()\nxlApp.Visible = 0 # see note 2\ndel xlApp\n\n# raw_input(\"press Enter ...\")\n\n",
"Mark Hammond and Andy Robinson have written the book on accessing Windows COM objects from Python.\nHere is an example using Excel.\n",
"As far as I know it is possible to create COM objects (which is what Automation objects are) in Python on Windows. Then assuming you can get out the lists via automation it should be easy to do what you want in python.\n",
"\nHowever, though I am still exploring it looks like many of these are old. \n\nCOM is old. The interface hasn't changed since at least 1993. \n\nI also don't see the package on the Python.org website. I searched for COM packages but didn't see anything useful.\n\nhttp://python.net/crew/mhammond/win32/\nhttp://sourceforge.net/projects/pywin32/\nThe latest update was Feb 2009, which include Python 3.0 support.\n"
] | [
15,
5,
0,
0
] | [] | [] | [
"automation",
"object",
"python"
] | stackoverflow_0000659018_automation_object_python.txt |
Q:
pinging mysql using mysql alchemy and python
How do I ping MySQL using SQLAlchemy and Python?
A:
Use mysqlshow to see if MySQL is running as expected.
http://dev.mysql.com/doc/refman/5.1/en/mysqlshow.html
Ensure that SQLAlchemy has support for MySQL.
http://www.sqlalchemy.org/docs/05/dbengine.html#supported-dbapis
Use a simple query through SQLAlchemy.
http://www.sqlalchemy.org/docs/05/ormtutorial.html
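For the last step, a minimal sketch of such a "ping" might look like this (the connection URL is a placeholder -- substitute your own user, password, host and database):

from sqlalchemy import create_engine

# assumed URL format for the MySQL dialect; adjust credentials as needed
engine = create_engine('mysql://user:password@localhost/dbname')
try:
    result = engine.execute('SELECT 1')   # forces a round-trip to the server
    print 'MySQL is reachable:', result.scalar()
except Exception, exc:
    print 'MySQL is unreachable:', exc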
 | pinging mysql using mysql alchemy and python | How do I ping MySQL using SQLAlchemy and Python?
| [
"Use mysqlshow to see if MySQL is running as expected.\nhttp://dev.mysql.com/doc/refman/5.1/en/mysqlshow.html\nAssure that SQLAlchemy has supprt for MySQL.\nhttp://www.sqlalchemy.org/docs/05/dbengine.html#supported-dbapis\nUse a simple query through SQLAlchemy.\nhttp://www.sqlalchemy.org/docs/05/ormtutorial.html\n"
] | [
2
] | [] | [] | [
"mysql",
"python"
] | stackoverflow_0000658888_mysql_python.txt |
Q:
Python sequence naming convention
Since there is no explicit typing in Python, I want to be able to tell the difference between sequences and non-sequences using a naming convention. I have been programming with Python for a little while now, and I still haven't found any logical/practical way to name sequences. Of course, I went through the famous PEP8, and did some research on Google, and it seems that the accepted convention is to add the letter "s" at the end of the variable name.
Let's assume we have a sequence of "weight values", therefore the variable name for the sequence should be weights. So far that's fine, but there will be cases where some word ends with "s" and happens to be the more logical way to name a variable which is not a sequence. Or let's say you have sequences of weights themselves stored in a sequence. The "s" naming convention would name the variable weightss, which is ugly. I am sure there must be a better naming convention for sequences.
What naming convention for sequences would you advise?
A:
In general, avoid this kind of behaviour. Notice from PEP8
A Foolish Consistency is the Hobgoblin
of Little Minds
which is exactly what calling a variable weightss would be doing. So in general, name your variables according to what they are, rather than according to some naming convention:
weights = [44, 66, 88]
weight_groups = [[44, 66, 88], ...]
etc.
From the same section of the PEP8:
But most importantly: know when to be
inconsistent -- sometimes the style
guide just doesn't apply. When in
doubt, use your best judgment. Look
at other examples and decide what
looks best. And don't hesitate to
ask!
A:
The "s" naming convention would name the variable weightss, which is ugly. I am sure there must be a better naming convention for sequences.
I think the convention you're describing is meant to be interpreted as "whenever you have a list of something, make it clear that it's a list by pluralizing it". For example, if you have a list of instances of grass, you would call this grasses, not grasss. I don't think it's meant to be taken as literally as you're taking it.
PEP 8 always advises you to take your own approach if that is more readable and useful. As Ali mentioned, one of the guiding principles of PEP 8 is that you shouldn't fall prey to foolish consistencies.
A:
Whatever your little heart desires....
Just kidding, but I wouldn't get too hung up on it. If it's ugly, do something to make it more readable like seq_weight and seq_weights.
A:
Why not just thing_list or thing_seq?
 | Python sequence naming convention | Since there is no explicit typing in Python, I want to be able to tell the difference between sequences and non-sequences using a naming convention. I have been programming with Python for a little while now, and I still haven't found any logical/practical way to name sequences. Of course, I went through the famous PEP8, and did some research on Google, and it seems that the accepted convention is to add the letter "s" at the end of the variable name.
Let's assume we have a sequence of "weight values", therefore the variable name for the sequence should be weights. So far that's fine, but there will be cases where some word ends with "s" and happens to be the more logical way to name a variable which is not a sequence. Or let's say you have sequences of weights themselves stored in a sequence. The "s" naming convention would name the variable weightss, which is ugly. I am sure there must be a better naming convention for sequences.
What naming convention for sequences would you advise?
| [
"In general, avoid this kind of behaviour. Notice from PEP8\n\nA Foolish Consistency is the Hobgoblin\n of Little Minds\n\nwhich is exactly what calling a variable weightss would be doing. So in general have your variables describing what they are, not according to some naming convention:\nweights = [44, 66, 88]\nweight_groups = [[44, 66, 88], ...]\n\netc.\nFrom the same section of the PEP8:\n\nBut most importantly: know when to be\n inconsistent -- sometimes the style\n guide just doesn't apply. When in\n doubt, use your best judgment. Look\n at other examples and decide what\n looks best. And don't hesitate to\n ask!\n\n",
"\nThe \"s\" naming convention would name the variable weightss, which is ugly. I am sure there is be a better naming convention for sequences.\n\nI think the convention you're describing is meant to be interpreted as \"whenever you have list of something, make it clear that it's a list by pluralizing it\". For example, if you have a list of instances of grass, you would call this grasses, not grasss. I don't think it's meant to be taken as literally as you're taking it.\nPEP always advises you to take your own approach if that is more readable and useful. As Ali mentioned, one of the guiding principles of PEP is that you shouldn't fall prey to foolish consistencies.\n",
"Whatever you little heart desires....\nJust kidding, but I wouldn't get to hung up on it. If it's ugly, do something to make it more readable like seq_weight and seq_weights\n",
"Why not just thing_list or thing_seq?\n"
] | [
20,
10,
0,
0
] | [] | [] | [
"naming_conventions",
"python",
"sequence"
] | stackoverflow_0000659415_naming_conventions_python_sequence.txt |
Q:
How to get distinct Django apps on same subdomain to share session cookie?
We have a couple of Django applications deployed on the same subdomain. A few power users need to jump between these applications. I noticed that each time they bounce between applications their session cookie receives a new session ID from Django.
I don't use the Django session table much except in one complex workflow. If the user bounces between applications while in this workflow they lose their session and have to start over.
I dug through the Django session code and discovered that the:
django.conf.settings.SECRET_KEY
is used to perform an integrity check on the sessions on each request. If the integrity check fails, a new session is created. Realizing this, I changed the secret key in each of these applications to use the same value, thinking this would allow the integrity check to pass and allow them to share Django sessions. However, it didn't seem to work.
Is there a way to do this? Am I missing something else?
Thanks in advance
A:
I would instead advise you to set SESSION_COOKIE_NAME to different values for the two apps. Your users will still have to log in twice initially, but their sessions won't conflict - if they log in to app A, then app B, and return to A, they'll still have their A session.
Sharing sessions between Django instances is probably not a good idea. If you want some kind of single-sign-on, look into something like django-cas. You'll still have 2 sessions (as you should), but the user will only log in once.
A:
I agree that sharing sessions between Django instances is probably not a good idea. If you really wanted to, you could:
make sure the two django applications share the same SECRET_KEY
make sure the two django applications share the same SESSION_COOKIE_NAME
make sure the SESSION_COOKIE_DOMAIN is set to something that lets the two instances share cookies. (If they really share the same subdomain, your current setting is probably fine.)
make sure both Django instances use the same session backend (the same database, the same file directory, the same memcached config, etc.)
make sure that anything put into the session makes sense in both Django databases: at the very least, that'll include the user id, since Django auth uses that to remember which user is logged in.
All that said, I haven't actually tried all this, so you may still have trouble!
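For reference, here is a hedged sketch of the settings both applications would have to agree on -- every value below is a placeholder, not a recommendation:

# settings.py, identical in BOTH Django applications
SECRET_KEY = 'the-same-secret-in-both-apps'              # must match for the integrity check
SESSION_COOKIE_NAME = 'shared_sessionid'                 # must match so both read the same cookie
SESSION_COOKIE_DOMAIN = '.example.com'                   # a domain both apps live under
SESSION_ENGINE = 'django.contrib.sessions.backends.db'   # same backend, pointed at the same store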
| How to get distinct Django apps on same subdomain to share session cookie? | We have a couple of Django applications deployed on the same subdomain. A few power users need to jump between these applications. I noticed that each time they bounce between applications their session cookie receives a new session ID from Django.
I don't use the Django session table much except in one complex workflow. If the user bounces between applications while in this workflow they lose their session and have to start over.
I dug through the Django session code and discovered that the:
django.conf.settings.SECRET_KEY
is used to perform an integrity check on the sessions on each request. If the integrity check fails, a new session is created. Realizing this, I changed the secret key in each of these applications to use the same value, thinking this would allow the integrity check to pass and allow them to share Django sessions. However, it didn't seem to work.
Is there a way to do this? Am I missing something else?
Thanks in advance
| [
"I would instead advise you to set SESSION_COOKIE_NAME to different values for the two apps. Your users will still have to log in twice initially, but their sessions won't conflict - if they log in to app A, then app B, and return to A, they'll still have their A session.\nSharing sessions between Django instances is probably not a good idea. If you want some kind of single-sign-on, look into something like django-cas. You'll still have 2 sessions (as you should), but the user will only log in once.\n",
"I agree that sharing sessions between Django instances is probably not a good idea. If you really wanted to, you could:\n\nmake sure the two django applications share the same SECRET_KEY\nmake sure the two django applications share the same SeSSON_COOKIE_NAME\nmake sure the SESSION_COOKIE_DOMAIN is set to something that lets the two instances share cookies. (If they really share the same subdomain, your current setting is probably fine.)\nmake sure both Django instances use the same session backend (the same database, the same file directory, the same memcached config, etc.)\nmake sure that anything put into the session makes sense in both Django databases: at the very least, that'll include the user id, since Django auth uses that to remember which user is logged in.\n\nAll that said, I haven't actually tried all this, so you may still have trouble! \n"
] | [
19,
9
] | [] | [] | [
"cookies",
"deployment",
"django",
"python",
"session"
] | stackoverflow_0000556907_cookies_deployment_django_python_session.txt |
Q:
How do I find the modules that are available for import from within a package?
Is there a way of knowing which modules are available to import from inside a package?
A:
Many packages will include a list called __all__, which lists the member modules. This is used when python does from x import *. You can read more about that here.
If the package does not define __all__, you'll have to do something like the answer to a question I asked earlier, here.
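Putting the two approaches together, a small sketch (using the standard library's xml package purely as an example):

import os
import xml   # the package you want to inspect; xml is just an example

# Many packages define __all__ explicitly:
print getattr(xml, '__all__', None)

# Otherwise, fall back to listing the package's directory:
pkg_dir = os.path.dirname(xml.__file__)
print sorted(name[:-3] for name in os.listdir(pkg_dir)
             if name.endswith('.py') and name != '__init__.py')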
| How do I find the modules that are available for import from within a package? | Is there a way of knowing which modules are available to import from inside a package?
| [
"Many packages will include a list called __all__, which lists the member modules. This is used when python does from x import *. You can read more about that here.\nIf the package does not define __all__, you'll have to do something like the answer to a question I asked earlier, here.\n"
] | [
3
] | [
"You have the source.\nLook at the files inside the package directory. Those modules are available for you to import.\n",
"dir([object]);\nWithout arguments, dir() return the list of names in the current local scope. With an argument, attempt to return a list of valid attributes for that object.\nSo, in the case of a module, such as 'sys':\n>>> import sys\n>>> dir(sys)\n['__displayhook__', '__doc__', '__excepthook__', '__name__', '__stderr__', '__stdin__', '__stdout__', '_current_frames', '_getframe', 'api_version', 'argv', 'builtin_module_names', 'byteorder', 'call_tracing', 'callstats', 'copyright', 'displayhook', 'exc_clear', 'exc_info', 'exc_type', 'excepthook', 'exec_prefix', 'executable', 'exit', 'getcheckinterval', 'getdefaultencoding', 'getdlopenflags', 'getfilesystemencoding', 'getrecursionlimit', 'getrefcount', 'hexversion', 'maxint', 'maxunicode', 'meta_path', 'modules', 'path', 'path_hooks', 'path_importer_cache', 'platform', 'prefix', 'ps1', 'ps2', 'pydebug', 'setcheckinterval', 'setdlopenflags', 'setprofile', 'setrecursionlimit', 'settrace', 'stderr', 'stdin', 'stdout', 'subversion', 'version', 'version_info', 'warnoptions']\n\nThat's all there is to it.\n",
"import fred\nprint dir(fred)\n"
] | [
-1,
-1,
-2
] | [
"import",
"python"
] | stackoverflow_0000657868_import_python.txt |
Q:
How to manage a CPU intensive process on a server
I need to run a CPU- and memory-heavy Python script (analyzing and altering a lengthy WAV file) as a background process on my web server (a VPS), between HTTP requests.
The script takes up to 20 seconds to run and I am concerned about the performance on my server. Is there a good approach to either lower the priority of the process, periodically cede control to the OS, or otherwise protect the performance of my modest server?
A:
Assuming it's a UNIX server, you could use the nice command to lower its priority. That should do the trick.
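If you would rather do it from inside the script itself, os.nice is a hedged equivalent (Unix only):

import os

os.nice(19)   # raise this process's niceness to the lowest priority
# ... then run the heavy WAV analysis ...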
A:
You can use cpulimit on a Linux-based server. It will allow you to limit the CPU usage (specify the limit as a percentage) even of scripts that have already started running, and its usage is pretty straightforward.
It's available in the Debian repository, so you can install it easily using apt-get:
apt-get install cpulimit
Typical ways to use cpulimit include:
# To limit CPU usage to 75% of program called foo:
cpulimit -e foo -l 75
# To limit CPU usage to 50% of program with pid = 1582
cpulimit -p 1582 -l 50
| How to manage a CPU intensive process on a server | I need to run a CPU- and memory-heavy Python script (analyzing and altering a lengthy WAV file) as a background process on my web server (a VPS), between HTTP requests.
The script takes up to 20 seconds to run and I am concerned about the performance on my server. Is there a good approach to either lower the priority of the process, periodically cede control to the OS, or otherwise protect the performance of my modest server?
| [
"Assuming it's a UNIX server, you could use the nice command to lower its priority. That should do the trick.\n",
"You can use cpulimit on a linux based server. It will allow you to limit the CPU usage (specify the limit as a percentage) even of scripts that have already started running, and its usage is pretty straightforward.\nIt's available on the Debian repository, so you can install it easily using aptitude:\napt-get install cpulimit\n\nTypical ways to use cpulimit includes:\n# To limit CPU usage to 75% of program called foo:\ncpulimit -e foo -l 75\n\n# To limit CPU usage to 50% of program with pid = 1582\ncpulimit -p 1582 -l 50\n\n"
] | [
7,
5
] | [] | [] | [
"performance",
"python",
"signal_processing"
] | stackoverflow_0000660059_performance_python_signal_processing.txt |
Q:
.NET Framework equivalent for Python's imghdr
Python's imghdr module determines the type of image contained in a file or byte stream.
Is there an equivalent for python's imghdr module in the .Net Framework?
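For anyone unfamiliar with the Python side: imghdr sniffs the file's leading bytes rather than trusting the extension. A tiny example ('photo.dat' is just a placeholder path):

import imghdr

print imghdr.what('photo.dat')   # e.g. 'jpeg', 'png', 'gif', or None if unrecognized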
A:
Just recently I needed to determine the mime type of a file. I don't know the exact logic behind these Windows API calls, but I suspect they look inside the file to get an idea of its mime type. Hope this will help.
using System;
using System.IO;
using System.Runtime.InteropServices;
namespace SomeNamespace
{
/// <summary>
/// This will work only on windows
/// </summary>
public class MimeTypeFinder
{
[DllImport(@"urlmon.dll", CharSet = CharSet.Auto)]
private extern static UInt32 FindMimeFromData(
UInt32 pBC,
[MarshalAs(UnmanagedType.LPStr)] String pwzUrl,
[MarshalAs(UnmanagedType.LPArray)] byte[] pBuffer,
UInt32 cbSize,
[MarshalAs(UnmanagedType.LPStr)]String pwzMimeProposed,
UInt32 dwMimeFlags,
out UInt32 ppwzMimeOut,
UInt32 dwReserverd
);
public string GetMimeFromFile(string filename)
{
if (!File.Exists(filename))
throw new FileNotFoundException(filename + " not found");
var buffer = new byte[256];
using (var fs = new FileStream(filename, FileMode.Open))
{
if (fs.Length >= 256)
fs.Read(buffer, 0, 256);
else
fs.Read(buffer, 0, (int)fs.Length);
}
try
{
UInt32 mimetype;
FindMimeFromData(0, null, buffer, 256, null, 0, out mimetype, 0);
var mimeTypePtr = new IntPtr(mimetype);
var mime = Marshal.PtrToStringUni(mimeTypePtr);
Marshal.FreeCoTaskMem(mimeTypePtr);
return mime;
}
catch (Exception)
{
return "unknown/unknown";
}
}
}
}
A:
If you can trust the file's extension, you can do something like the rails plugin mimetype-fu.
This plugin has a yaml list of extensions and their known mime types. It is fairly exhaustive. We found a yaml parser for .net and simply used mimetype-fu's yaml. This made it both fast to build and fast performing.
If you are dealing with streams only and don't have a filename, the above may work better for you.
| .NET Framework equivalent for Python's imghdr | Python's imghdr module determines the type of image contained in a file or byte stream.
Is there an equivalent for python's imghdr module in the .Net Framework?
| [
"Just recently I needed to determine mime type used in file. I don't know the exact logic behind this windows API calls, but I suspect it goes inside the file to get idea of it's mime type. Hope this will help\nusing System;\nusing System.IO;\nusing System.Runtime.InteropServices;\n\nnamespace SomeNamespace\n{\n /// <summary>\n /// This will work only on windows\n /// </summary>\n public class MimeTypeFinder\n {\n [DllImport(@\"urlmon.dll\", CharSet = CharSet.Auto)]\n private extern static UInt32 FindMimeFromData(\n UInt32 pBC,\n [MarshalAs(UnmanagedType.LPStr)] String pwzUrl,\n [MarshalAs(UnmanagedType.LPArray)] byte[] pBuffer,\n UInt32 cbSize,\n [MarshalAs(UnmanagedType.LPStr)]String pwzMimeProposed,\n UInt32 dwMimeFlags,\n out UInt32 ppwzMimeOut,\n UInt32 dwReserverd\n );\n\n public string GetMimeFromFile(string filename)\n {\n if (!File.Exists(filename))\n throw new FileNotFoundException(filename + \" not found\");\n\n var buffer = new byte[256];\n using (var fs = new FileStream(filename, FileMode.Open))\n {\n if (fs.Length >= 256)\n fs.Read(buffer, 0, 256);\n else\n fs.Read(buffer, 0, (int)fs.Length);\n }\n try\n {\n UInt32 mimetype;\n FindMimeFromData(0, null, buffer, 256, null, 0, out mimetype, 0);\n var mimeTypePtr = new IntPtr(mimetype);\n var mime = Marshal.PtrToStringUni(mimeTypePtr);\n Marshal.FreeCoTaskMem(mimeTypePtr);\n return mime;\n }\n catch (Exception)\n {\n return \"unknown/unknown\";\n }\n }\n }\n}\n\n",
"If you can trust the file's extension, you can do something like the rails plugin mimetype-fu.\nThis plugin has a yaml list of extensions and their known mime types. It is fairly exhaustive. We found a yaml parser for .net and simply used mimetype-fu's yaml. This made it both fast to build and fast performing.\nIf you are dealing with streams only and don't have a filename, the above may work better for you.\n"
] | [
0,
0
] | [] | [] | [
".net",
"c#",
"python"
] | stackoverflow_0000660057_.net_c#_python.txt |
Q:
Linking to Python import library in Visual Studio 2005
I have a C++ application that has embedded Python. I'm building with Visual Studio 2005. When I try to link to python26.lib, I get a number of unresolved symbols, all of which begin with "__imp":
error LNK2019: unresolved external symbol __imp__Py_Initialize referenced in function _main
python26.lib is an import library (installed by the Python 2.6 installer). What do I have to do to resolve these symbols? They do exist in the import library (dumpbin /all shows them). Thanks.
A:
Looks like I was trying to link a 64-bit Python library to a 32-bit application. I wish the linker would tell me something other than "unresolved symbol." Linking to the 32-bit library fixes the problem.
A:
Try to include C:\WINDOWS\system32\python26.dll in your references. python26.lib contains the symbol names for the main DLL.
| Linking to Python import library in Visual Studio 2005 | I have a C++ application that has embedded Python. I'm building with Visual Studio 2005. When I try to link to python26.lib, I get a number of unresolved symbols, all of which begin with "__imp":
error LNK2019: unresolved external symbol __imp__Py_Initialize referenced in function _main
python26.lib is an import library (installed by the Python 2.6 installer). What do I have to do to resolve these symbols? They do exist in the import library (dumpbin /all shows them). Thanks.
| [
"Looks like I was trying to link a 64-bit Python library to a 32-bit application. I wish the linker would tell me something other than \"unresolved symbol.\" Linking to the 32-bit library fixes the problem. \n",
"Try to include C:\\WINDOWS\\system32\\python26.dll in your references. python26.lib contains the symbol names for the main DLL.\n"
] | [
13,
2
] | [] | [] | [
"import",
"linker",
"python",
"visual_studio"
] | stackoverflow_0000658879_import_linker_python_visual_studio.txt |
Q:
Checking file attributes in python
I'd like to check the archive bit for each file in a directory using Python. So far I've got the following but I can't get it to work properly. The idea of the script is to be able to see all the files that have the archive bit on.
Thanks
# -*- coding: latin-1 -*-
import os , win32file, win32con
from time import *
start = clock()
ext = [ '.txt' , '.doc' ]
def fileattributeisset(filename, fileattr):
return bool(win32file.GetFileAttributes(filename) & fileattr)
for root, dirs, files in os.walk('d:\\Pruebas'):
print ("root", root)
print ("dirs", dirs)
print ("files", files)
for i in files:
if i[ - 4:] in ext:
print('...', root, '\\', i, end=' ')
fattrs = win32file.GetFileAttributes(i)
if fattrs & win32con.FILE_ATTRIBUTE_ARCHIVE:
print('A isSet',fattrs)
#print( fileattributeisset(i, win32con.FILE_ATTRIBUTE_ARCHIVE))
print ('####')
EDIT: all files appear to have the archive bit on; running 'attrib' shows that all files have no attribute bits on.
A:
The file names returned from os.walk are not fully qualified paths, so when you call
win32file.GetFileAttributes(i)
it can't find the file and returns an error code, which happens to be -1. So the operation
fattrs & win32con.FILE_ATTRIBUTE_ARCHIVE
is always true.
You need to join the root to the filename so that GetFileAttributes will succeed:
fattrs = win32file.GetFileAttributes(os.path.join(root, i))
Also, when you are checking the extension, it is probably better to use os.path.splitext(path) to retrieve the extension rather than relying on it being 3 characters long as you do.
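Putting both fixes together, a hedged rewrite of the loop from the question might look like this:

import os, win32file, win32con

ext = ('.txt', '.doc')
for root, dirs, files in os.walk('d:\\Pruebas'):
    for name in files:
        if os.path.splitext(name)[1].lower() in ext:
            path = os.path.join(root, name)             # fully qualified path
            fattrs = win32file.GetFileAttributes(path)
            if fattrs & win32con.FILE_ATTRIBUTE_ARCHIVE:
                print(path, 'has the archive bit set')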
 | Checking file attributes in python | I'd like to check the archive bit for each file in a directory using Python. So far I've got the following but I can't get it to work properly. The idea of the script is to be able to see all the files that have the archive bit on.
Thanks
# -*- coding: latin-1 -*-
import os , win32file, win32con
from time import *
start = clock()
ext = [ '.txt' , '.doc' ]
def fileattributeisset(filename, fileattr):
return bool(win32file.GetFileAttributes(filename) & fileattr)
for root, dirs, files in os.walk('d:\\Pruebas'):
print ("root", root)
print ("dirs", dirs)
print ("files", files)
for i in files:
if i[ - 4:] in ext:
print('...', root, '\\', i, end=' ')
fattrs = win32file.GetFileAttributes(i)
if fattrs & win32con.FILE_ATTRIBUTE_ARCHIVE:
print('A isSet',fattrs)
#print( fileattributeisset(i, win32con.FILE_ATTRIBUTE_ARCHIVE))
print ('####')
EDIT: all files appear to have the archive bit on; running 'attrib' shows that all files have no attribute bits on.
| [
"The file list returned from os.walk are not fully qualified paths, so when you call\nwin32file.GetFileAttributes(i)\n\nit can't find the file and returns an error code; which happens to be -1. So the operation\nfattrs & win32con.FILE_ATTRIBUTE_ARCHIVE \n\nis always true.\nYou need to join the root to the filename so that GetFileAttributes will succeed:\nfattrs = win32file.GetFileAttributes(os.path.join(root, i))\n\nAlso, when you are checking the extension, it is probably better to use os.path.splitext(path) to retrieve the extension rather than relying on them be 3 characters long as you do.\n"
] | [
7
] | [] | [] | [
"python"
] | stackoverflow_0000659847_python.txt |
Q:
Python lib to Read a Flash swf Format File
I'm interested in using Python to hack on the data in Flash swf files. There is good documentation available on the format of swf files, and I am considering writing my own Python lib to parse that data out using the standard Python struct lib.
Does anybody know of a Python project that already does this? I would also be interested in any available solutions that use Perl, Ruby, Haskell, etc.
A:
Well, unless you're doing it for fun (in which case, go for it!), why not use Ming? It supposedly has python wrappers...
A:
I found another option in SWF Tools. They provide a Python wrapper that supports generating SWF files in Python.
I'm not sure if either SWF Tools or Ming actually supports parsing in and modifying an existing swf file, however. Both seem geared more towards generating swf files from scratch.
| Python lib to Read a Flash swf Format File | I'm interested in using Python to hack on the data in Flash swf files. There is good documentation available on the format of swf files, and I am considering writing my own Python lib to parse that data out using the standard Python struct lib.
Does anybody know of a Python project that already does this? I would also be interested in any available solutions that use Perl, Ruby, Haskell, etc.
| [
"Well, unless you're doing it for fun (in which case, go for it!), why not use Ming? It supposedly has python wrappers...\n",
"I found another option in SWF Tools. They provide a Python wrapper that supports generating SWF files in Python.\nI'm not sure if either SWF Tools or Ming actually supports parsing in and modifying an existing swf file, however. Both seem geared more towards generating swf files from scratch.\n"
] | [
3,
1
] | [] | [] | [
"flash",
"python"
] | stackoverflow_0000656704_flash_python.txt |
Q:
Python: decorator specific argument (unrelated to wrapped function)?
I'm looking to build a caching decorator that, given a function, caches the result of the function to a location specified in the decorator. Something like this:
@cacheable('/path/to/cache/file')
def my_function(a, b, c):
return 'something'
The argument to the decorator is completely separate from the argument to the function it's wrapping. I've looked at quite a few examples but I'm not quite getting how to do this - is it possible to have an argument for the decorator that's unrelated to and not passed to the wrapped function?
A:
The idea is that your decorator is a function returning a decorator.
FIRST Write your decorator as if you knew your argument was a global variable. Let's say something like:
-
def decorator(f):
    def decorated(*args, **kwargs):
        cache = Cache(cachepath)               # Cache is a placeholder class
        if cache.iscached(*args, **kwargs):
            res = cache.get(*args, **kwargs)   # fetch the stored result
        else:
            res = f(*args, **kwargs)
            cache.store((args, kwargs), res)   # key the result by the call arguments
        return res
    return decorated
THEN Write a function that takes cachepath as an arg and returns your decorator.
-
def cache(cachepath):
    def decorator(f):
        def decorated(*args, **kwargs):
            cache = Cache(cachepath)               # Cache is a placeholder class
            if cache.iscached(*args, **kwargs):
                res = cache.get(*args, **kwargs)   # fetch the stored result
            else:
                res = f(*args, **kwargs)
                cache.store((args, kwargs), res)   # key the result by the call arguments
            return res
        return decorated
    return decorator
A:
Yes it is. As you know, a decorator is a function. When written in the form:
def mydecorator(func):
def wrapper(*args, **kwargs):
return func(*args, **kwargs)
return wrapper
@mydecorator
def foo(a, b, c):
pass
the argument passed to mydecorator is the function foo itself.
When the decorator accepts an argument, the call @mydecorator('/path/to') is actually going to call the mydecorator function with '/path/to' first. Then the result of that call will itself be called with the function foo. You're effectively defining a dynamic wrapper function.
In a nutshell, you need another layer of decorator functions.
Here is this slightly silly example:
def addint(val):
def decorator(func):
def wrapped(*args, **kwargs):
result = func(*args, **kwargs)
return result + val
return wrapped # returns the decorated function "add_together"
return decorator # returns the definition of the decorator "addint"
# specifically built to return an extra 5 to the sum
@addint(5)
def add_together(a, b):
return a + b
print add_together(1, 2)
# prints 8, not 3
A:
Paul's answer is good; I would move the cache object so it doesn't need to be built every time, and design your cache so that it raises KeyError when there is a cache miss:
def cache(filepath):
def decorator(f):
        f._cache = Cache(filepath)
def decorated(*args,**kwargs):
try:
key = (args, kwargs)
res = f._cache.get(key)
except KeyError:
res = f(*args, **kwargs)
f._cache.put(key, res)
return res
return decorated
return decorator
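One small, hedged refinement that applies to any of the versions above: wrapping the inner function with functools.wraps (Python 2.5+) preserves the decorated function's name and docstring, which helps debugging:

import functools

def cache(filepath):
    def decorator(f):
        @functools.wraps(f)             # keeps f.__name__ and f.__doc__ intact
        def decorated(*args, **kwargs):
            return f(*args, **kwargs)   # caching logic goes here, as above
        return decorated
    return decorator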
 | Python: decorator specific argument (unrelated to wrapped function)? | I'm looking to build a caching decorator that, given a function, caches the result of the function to a location specified in the decorator. Something like this:
@cacheable('/path/to/cache/file')
def my_function(a, b, c):
return 'something'
The argument to the decorator is completely separate from the argument to the function it's wrapping. I've looked at quite a few examples but I'm not quite getting how to do this - is it possible to have an argument for the decorator that's unrelated to and not passed to the wrapped function?
| [
"The idea is that your decorator is a function returning a decorator.\nFIRST Write your decorator as if you knew your argument was a global variable. Let's say something like:\n-\ndef decorator(f):\n def decorated(*args,**kwargs):\n cache = Cache(cachepath)\n if cache.iscached(*args,**kwargs):\n ...\n else:\n res = f(*args,**kwargs)\n cache.store((*args,**kwargs), res)\n return res\n return decorated\n\nTHEN Write a function that takes cachepath as an arg and return your decorator.\n-\ndef cache(filepath)\n def decorator(f):\n def decorated(*args,**kwargs):\n cache = Cache(cachepath)\n if cache.iscached(*args,**kwargs):\n ...\n else:\n res = f(*args,**kwargs)\n cache.store((*args,**kwargs), res)\n return res\n return decorated\n return decorator\n\n",
"Yes it is. As you know, a decorator is a function. When written in the form:\ndef mydecorator(func):\n def wrapper(*args, **kwargs):\n return func(*args, **kwargs)\n return wrapper\n\n@mydecorator\ndef foo(a, b, c):\n pass\n\nthe argument passed to mydecorator is the function foo itself.\nWhen the decorator accepts an argument, the call @mydecorator('/path/to') is actually going to call the mydecorator function with '/path/to' first. Then the result of the call to mydecorator(path) will be called to receive the function foo. You're effectively defining a dynamic wrapper function.\nIn a nutshell, you need another layer of decorator functions.\nHere is this slightly silly example:\ndef addint(val):\n def decorator(func):\n def wrapped(*args, **kwargs):\n result = func(*args, **kwargs)\n return result + val\n return wrapped # returns the decorated function \"add_together\"\n return decorator # returns the definition of the decorator \"addint\"\n # specifically built to return an extra 5 to the sum\n\n@addint(5)\ndef add_together(a, b):\n return a + b\n\nprint add_together(1, 2)\n# prints 8, not 3\n\n",
"Paul's answer is good, I would move the cache object so it doesn't need to be built every time, and design your cache so that it raises KeyError when there is a cache miss:\ndef cache(filepath):\n def decorator(f):\n f._cache = Cache(cachepath)\n def decorated(*args,**kwargs):\n try:\n key = (args, kwargs)\n res = f._cache.get(key)\n except KeyError:\n res = f(*args, **kwargs)\n f._cache.put(key, res)\n return res\n return decorated\n return decorator\n\n"
] | [
9,
5,
3
] | [] | [] | [
"decorator",
"python"
] | stackoverflow_0000660727_decorator_python.txt |
Q:
Python Django loop logic: Error says 'int' is not iterable - check my syntax?
return sum(jobrecord.get_cost() or 0
for jobrecord in self.project.jobrecord_set.filter(
date__lte=date,
date__gte=self.start_date) or 0)
A:
After a small rewrite
query = self.project.jobrecord_set.filter(
date__lte=date,
date__gte=self.start_date)
values= ( jobrecord.get_cost() or 0 for jobrecord in query or 0 )
return sum( values )
Look closely at the values= ( jobrecord.get_cost() or 0 for jobrecord in query or 0 )
What happens when the query is empty?
You're evaluating jobrecord.get_cost() or 0 for jobrecord in 0
A:
0 is indeed not iterable. I think you want to drop that last or 0. When the filter query matches no elements, it will return an empty queryset, and your sum will just be 0, since sum([]) is zero.
If there's some reason why the query might raise an exception (invalid dates or some such), an or clause won't catch that either. [][1] or 0 still raises an exception.
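Concretely, the fixed version of the original expression -- same queryset, minus the trailing or 0 -- would be:

records = self.project.jobrecord_set.filter(
    date__lte=date,
    date__gte=self.start_date)
return sum(jobrecord.get_cost() or 0 for jobrecord in records)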
| Python Django loop logic: Error says 'int' is not iterable - check my syntax? | return sum(jobrecord.get_cost() or 0
for jobrecord in self.project.jobrecord_set.filter(
date__lte=date,
date__gte=self.start_date) or 0)
| [
"After a small rewrite\nquery = self.project.jobrecord_set.filter(\n date__lte=date,\n date__gte=self.start_date)\nvalues= ( jobrecord.get_cost() or 0 for jobrecord in query or 0 )\nreturn sum( values )\n\nLook closely at the values= ( jobrecord.get_cost() or 0 for jobrecord in query or 0 )\nWhat happens when the query is empty?\nYou're evaluating jobrecord.get_cost() or 0 for jobrecord in 0\n",
"0 is indeed not iterable. I think you want to drop that last or 0. when the filter query matches no elements, it will return an empty query, and your sum will just be 0, since sum([]) is zero.\nIf there's some reason why the query might raise an exception (invalid dates or some such), an or clause wont catch that either. [][1] or 0 still raises an exception.\n"
] | [
3,
1
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000660773_django_python.txt |
Q:
Progress bar not updating during operation
In my Python program to upload a file to the internet, I'm using a GTK progress bar to show the upload progress. But the problem I'm facing is that the progress bar does not show any activity until the upload is complete, and then it abruptly indicates the upload is complete. I'm using pycurl to make the HTTP requests... my question is -
do I need to have a multi-threaded application to upload the file and simultaneously update the GUI? Or is there some other mistake that I'm making?
Thanks in advance!
A:
I'm going to quote the PyGTK FAQ:
You have created a progress bar inside a window, then you start running a loop that does some work:
while work_left:
...do something...
progressbar.set_fraction(...)
You will notice that the window doesn't even show up, or if it does the progress bar stays frozen until the end of the task. The explanation is simple: gtk is event driven, and you are stealing control away from the gtk main loop, thus preventing it from processing normal GUI update events.
The simplest solution consists on temporarily giving control back to gtk every time the progress is changed:
while work_left:
...do something...
progressbar.set_fraction(...)
while gtk.events_pending():
gtk.main_iteration()
Notice that with this solution, the user cannot quit the application (gtk.main_quit would not work because of new loop [gtk.main_iteration()]) until your heavy_work is done.
Another solution consists on using gtk idle functions, which are called by the gtk main loop whenever it has nothing to do. Therefore, gtk is in control, and the idle function has to do a bit of work. It should return True if there's more work to be done, otherwise False.
The best solution (it has no drawbacks) was pointed out by James Henstridge. It is taking advantage of python's generators as idle functions, to make python automatically preserve the state for us. It goes like this:
def my_task(data):
...some work...
while heavy_work_needed:
...do heavy work here...
progress_label.set_text(data) # here we update parts of UI
# there's more work, return True
yield True
# no more work, return False
yield False
def on_start_my_task_button_click(data):
task = my_task(data)
gobject.idle_add(task.next)
The 'while' above is just an example. The only rules are that it should yield True after doing a bit of work and there's more work to do, and it must yield False when the task is done.
A:
More than likely the issue is that in your progress callback, which is where I presume you're updating the progress bar, you're not making a call to manually update the display, i.e. run through the GUI's event loop. This is just speculation though; if you can provide more code, it might be easier to narrow it down further.
The reason you need to manually update the display is that your main thread is also performing the upload, which is where it's blocking.
A:
In python 2.x integer operands result in integer division. Try this:
#Callback function invoked when download/upload has progress
def progress(download_t, download_d, upload_t, upload_d):
print 'in fileupload progress'
mainwin.mainw.prog_bar.set_fraction(float(upload_d) / upload_t)
A:
Yes, you probably need concurrency, and yes threads are one approach, but if you do use threads, please use an method like this one: http://unpythonic.blogspot.com/2007/08/using-threads-in-pygtk.html which will abstract away the pain, and allow you to focus on the important aspects.
(I have not repeated everything in that blog post through laziness, hence community wiki).
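In that spirit, here is a hedged sketch of the worker-thread pattern; do_chunk_of_upload and progressbar are placeholders for your pycurl call and your widget:

import threading
import gobject
import gtk

gobject.threads_init()   # must be called before any threads start

def upload(progressbar):
    for i in range(100):
        do_chunk_of_upload()                 # placeholder: one slice of the real upload
        # hand the widget update back to the GTK main loop:
        gobject.idle_add(progressbar.set_fraction, (i + 1) / 100.0)

threading.Thread(target=upload, args=(progressbar,)).start()
gtk.main()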
A:
One option, if you are not married to pycurl, is to use GObject's IO watchers.
http://pygtk.org/pygtk2reference/gobject-functions.html#function-gobject--io-add-watch
Using this you can interleave the file upload with the normal PyGTK event loop, and even do the set_progress call in your IO watch callback. If you are offloading all the work for uploading onto pycurl this is not really feasible, but if you're just uploading a file over HTTP, io_add_watch will make using a socket for this much less painful as well.
 | Progress bar not updating during operation | In my Python program to upload a file to the internet, I'm using a GTK progress bar to show the upload progress. But the problem I'm facing is that the progress bar does not show any activity until the upload is complete, and then it abruptly indicates the upload is complete. I'm using pycurl to make the HTTP requests... my question is -
do I need to have a multi-threaded application to upload the file and simultaneously update the GUI? Or is there some other mistake that I'm making?
Thanks in advance!
| [
"I'm going to quote the PyGTK FAQ:\n\nYou have created a progress bar inside a window, then you start running a loop that does some work:\n\nwhile work_left:\n ...do something...\n progressbar.set_fraction(...)\n\n\nYou will notice that the window doesn't even show up, or if it does the progress bar stays frozen until the end of the task. The explanation is simple: gtk is event driven, and you are stealing control away from the gtk main loop, thus preventing it from processing normal GUI update events.\nThe simplest solution consists on temporarily giving control back to gtk every time the progress is changed:\n\nwhile work_left:\n ...do something...\n progressbar.set_fraction(...)\n while gtk.events_pending():\n gtk.main_iteration()\n\n\nNotice that with this solution, the user cannot quit the application (gtk.main_quit would not work because of new loop [gtk.main_iteration()]) until your heavy_work is done.\nAnother solution consists on using gtk idle functions, which are called by the gtk main loop whenever it has nothing to do. Therefore, gtk is in control, and the idle function has to do a bit of work. It should return True if there's more work to be done, otherwise False.\nThe best solution (it has no drawbacks) was pointed out by James Henstridge. It is taking advantage of python's generators as idle functions, to make python automatically preserve the state for us. It goes like this:\n\ndef my_task(data):\n ...some work...\n while heavy_work_needed:\n ...do heavy work here...\n progress_label.set_text(data) # here we update parts of UI\n # there's more work, return True\n yield True\n # no more work, return False\n yield False\n\ndef on_start_my_task_button_click(data):\n task = my_task(data)\n gobject.idle_add(task.next)\n\n\nThe 'while' above is just an example. The only rules are that it should yield True after doing a bit of work and there's more work to do, and it must yield False when the task is done. \n\n",
"More than likely the issue is that in your progress callback, which is where I presume you're updating the progress bar, you're not making a call to manually update the display i.e. run through the GUI's event loop. This is just speculation though, if you can provide more code, it might be easier to narrow it down further.\nThe reason you need to manually update the display is because your main thread is also performing the upload, which is where it's blocking.\n",
"In python 2.x integer operands result in integer division. Try this:\n#Callback function invoked when download/upload has progress\ndef progress(download_t, download_d, upload_t, upload_d):\n print 'in fileupload progress'\n mainwin.mainw.prog_bar.set_fraction(float(upload_d) / upload_t)\n\n",
"Yes, you probably need concurrency, and yes threads are one approach, but if you do use threads, please use an method like this one: http://unpythonic.blogspot.com/2007/08/using-threads-in-pygtk.html which will abstract away the pain, and allow you to focus on the important aspects.\n(I have not repeated everything in that blog post through laziness, hence community wiki).\n",
"One option, if you are not married to pycurl, is to use GObject's IO watchers.\nhttp://pygtk.org/pygtk2reference/gobject-functions.html#function-gobject--io-add-watch\nUsing this you can interleave the file upload with the normal PyGTK event loop, and even do the set_progress call in your IO watch callback. If you are offloading all the work for uploading onto pycurl this is not really feasible, but if you're just uploading a file over HTTP, io_add_watch will make using a socket for this much less painful as well.\n"
] | [
13,
1,
0,
0,
0
] | [] | [] | [
"gtk",
"progress_bar",
"pygtk",
"python",
"user_interface"
] | stackoverflow_0000496814_gtk_progress_bar_pygtk_python_user_interface.txt |
Q:
Can I make pdb start debugging right away?
I want to debug a Python project.
The problem is, I don't know where to set a breakpoint;
what I want to do is be able to call a method
SomeClass( some_ctor_arguments ).some_method()
and have the debugger fire right away.
How do I do that?
I tried pdb.run( string_command ) but it doesn't seem to work right
>>> import pdb
>>> import <some-package>
>>> pdb.run( .... )
> <string>(1)<module>()
(Pdb) s
NameError: "name '<some-package>' is not defined"
A:
Found it ..
pdb.runcall( object.method )
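A quick hedged example, reusing the placeholder names from the question; note that any arguments for the method are passed to runcall itself:

import pdb

obj = SomeClass(some_ctor_arguments)   # placeholders from the question
pdb.runcall(obj.some_method)           # debugger stops at the first line of some_method
# with arguments: pdb.runcall(obj.some_method, arg1, kw=value)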
A:
pdb.set_trace()
will start the debugger at this point.
Place it at the beginning of the method you want to debug.
 | Can I make pdb start debugging right away? | I want to debug a Python project.
The problem is, I don't know where to set a breakpoint;
what I want to do is be able to call a method
SomeClass( some_ctor_arguments ).some_method()
and have the debugger fire right away.
How do I do that?
I tried pdb.run( string_command ) but it doesn't seem to work right
>>> import pdb
>>> import <some-package>
>>> pdb.run( .... )
> <string>(1)<module>()
(Pdb) s
NameError: "name '<some-package>' is not defined"
| [
"Found it ..\npdb.runcall( object.method )\n\n",
"pdb.set_trace()\n\nwill start the debugger at this point.\nPlace it at the beginning of the method you want to debug.\n"
] | [
5,
4
] | [] | [] | [
"debugging",
"pdb",
"python"
] | stackoverflow_0000661034_debugging_pdb_python.txt |