content (stringlengths 85–101k) | title (stringlengths 0–150) | question (stringlengths 15–48k) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (stringlengths 35–137)
---|---|---|---|---|---|---|---|---
Q:
Code to utilize memory more than 70%
Please tell me C++/Java code which utilizes more than 70% of memory.
For example, we have 3 virtual machines, and for the memory resources we want to test the
memory utilization as per the memory resources allocated by the user.
A:
Which memory? On a 64 bit platform, a 64 bit process can use far more than 4GB. You'd be filling swap for hours before you hit those limits.
If you want to test "70% of physical RAM", you might discover that you cannot allocate 70% of the 32-bit address space. A significant amount is already claimed by the OS.
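As a rough illustration in Python, here is a minimal sketch of grabbing about 70% of physical RAM. It assumes a Linux-like platform where os.sysconf knows SC_PHYS_PAGES and SC_PAGE_SIZE; the 70% figure is just the number from the question:
import os

# Assumption: these sysconf names are available (true on Linux, not portable).
phys_bytes = os.sysconf('SC_PHYS_PAGES') * os.sysconf('SC_PAGE_SIZE')
target = int(phys_bytes * 0.7)

# bytearray writes to every page, so the memory really gets committed
# instead of merely being reserved in the address space.
hog = bytearray(target)
raw_input('Holding roughly 70% of physical RAM; press Enter to release...')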
A:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MB 512

int main(int argc, char **argv)
{
    int i;
    for (i = 0; i < MB; i++)
    {
        /* Touch the block, or the OS may never commit the pages. */
        memset(malloc(1024 * 1024), 0, 1024 * 1024);
    }
    getchar();   /* hold the memory until Enter is pressed */
    return 0;
}
Hit Enter to release the memory; set the MB constant to how much memory you want your program to take.
My C is a little rusty so if someone comes here and walks all over me, 1000 apologies, my forte is C#.
A:
I want to test the memory utilization, but after executing the code I am unable to test it.
As I am new to this, please help me further.
Say we have 3 virtual machines V1, V2, V3:
For V1 - set shared resources to High
For V2 - set shared resources to Normal
For V3 - set shared resources to Normal
So if the total is 2 GB, then V1 gets 1 GB and V2 and V3 get 512 MB each. I want to test, using a program, how it works if someone changes the Shares, Reservation, or Limit.
| Code to utilize memory more than 70% | Please tell me C++/Java code which utilizes more than 70% of memory.
For example, we have 3 virtual machines, and for the memory resources we want to test the
memory utilization as per the memory resources allocated by the user.
| [
"Which memory? On a 64 bit platform, a 64 bit process can use far more than 4GB. You'd be filling swap for hours before you hit those limits.\nIf you want to test \"70% of physical RAM\", you might discover that you cannot allocate 70% of the 32-bit address space. A significant amount is already claimed by the OS.\n",
"#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#define MB 512\n\nint main(int argc, char **argv)\n{\n    int i;\n    for (i = 0; i < MB; i++)\n    {\n        /* Touch the block, or the OS may never commit the pages. */\n        memset(malloc(1024 * 1024), 0, 1024 * 1024);\n    }\n    getchar();   /* hold the memory until Enter is pressed */\n    return 0;\n}\n\nHit Enter to release the memory; set the MB constant to how much memory you want your program to take.\nMy C is a little rusty so if someone comes here and walks all over me, 1000 apologies, my forte is C#.\n",
"I want to test the memory utilization, but after executing the code I am unable to test it.\nAs I am new to this, please help me further.\nSay we have 3 virtual machines V1, V2, V3:\nFor V1 - set shared resources to High\nFor V2 - set shared resources to Normal\nFor V3 - set shared resources to Normal\nSo if the total is 2 GB, then V1 gets 1 GB and V2 and V3 get 512 MB each. I want to test, using a program, how it works if someone changes the Shares, Reservation, or Limit.\n"
] | [
4,
3,
0
] | [] | [] | [
"c",
"c++",
"java",
"python"
] | stackoverflow_0000456926_c_c++_java_python.txt |
Q:
Simulating a 'local static' variable in python
Consider the following code:
def CalcSomething(a):
if CalcSomething._cache.has_key(a):
return CalcSomething._cache[a]
CalcSomething._cache[a] = ReallyCalc(a)
return CalcSomething._cache[a]
CalcSomething._cache = { }
This is the easiest way I can think of for simulating a 'local static' variable in python.
What bothers me is that CalcSomething._cache is mentioned outside the function's definition, but the alternative would be something like this:
if not hasattr(CalcSomething, "_cache"):
setattr(CalcSomething, "_cache", { } )
inside the function's definition, which is really cumbersome.
Is there a more elegant way?
[EDIT]
Just to clarify, this question is not about local function caches, as the example above might suggest. Here is another short example where a 'static local' might be handy:
def ParseString(s):
return ParseString._parser.parse(s)
# Create a Parser object once, which will be used for all parsings.
# Assuming a Parser object is heavy on resources, for the sake of this example.
ParseString._parser = Parser()
A:
Turn it into a callable object (since that's what it really is.)
class CalcSomething(object):
def __init__(self):
self._cache = {}
def __call__(self, a):
if a not in self._cache:
self._cache[a] = self.reallyCalc(a)
return self._cache[a]
def reallyCalc(self, a):
return # a real answer
calcSomething = CalcSomething()
Now you can use calcSomething as if it were a function. But it remains tidy and self-contained.
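A hypothetical usage sketch, assuming reallyCalc is filled in to return a real value:
print calcSomething(3)   # computed via reallyCalc and stored in self._cache
print calcSomething(3)   # answered straight from the cache this time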
A:
Turn it into a decorator.
def static_var(var_name, initial_value):
def _set_var(obj):
setattr(obj, var_name, initial_value)
return obj
return _set_var
@static_var("_cache", {})
def CalcSomething(a):
...
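One possible body for the decorated function, reusing ReallyCalc from the question (just a sketch; the elided part above is up to you):
@static_var("_cache", {})
def CalcSomething(a):
    if a not in CalcSomething._cache:
        CalcSomething._cache[a] = ReallyCalc(a)
    return CalcSomething._cache[a]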
A:
Consider writing a decorator that will maintain the cache, so your function won't be contaminated by caching code:
def cacheResults(aFunc):
    '''This decorator function binds a map between the tuple of arguments
and results computed by aFunc for those arguments'''
def cachedFunc(*args):
if not hasattr(aFunc, '_cache'):
aFunc._cache = {}
if args in aFunc._cache:
return aFunc._cache[args]
newVal = aFunc(*args)
aFunc._cache[args] = newVal
return newVal
return cachedFunc
@cacheResults
def ReallyCalc(a):
'''This function does only actual computation'''
return pow(a, 42)
Maybe it doesn't look great at first, but you can use cacheResults() anywhere you don't need keyword parameters. It is possible to create a similar decorator that would also work for keyword params, but that didn't seem necessary this time.
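For completeness, one possible sketch of that keyword-capable variant; it simply folds the sorted kwargs into the cache key (untested, and the name is made up):
def cacheResultsKW(aFunc):
    '''Like cacheResults, but keyword arguments become part of the key.'''
    def cachedFunc(*args, **kwargs):
        if not hasattr(aFunc, '_cache'):
            aFunc._cache = {}
        key = (args, tuple(sorted(kwargs.items())))
        if key not in aFunc._cache:
            aFunc._cache[key] = aFunc(*args, **kwargs)
        return aFunc._cache[key]
    return cachedFunc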
A:
One option is to abuse default parameters. ie:
def CalcSomething(a, _cache={}):
if _cache.has_key(a):
This has the advantage that you don't need to qualify the name, and will get fast local access to the variables rather than doing two slow dict lookups. However, it still has the problem that it is mentioned outside the function (in fact it's worse, since it's now in the function signature).
To prevent this, a better solution would be to wrap the function in a closure containing your statics:
@apply
def CalcSomething():
cache = {} # statics go here
def CalcSomething(a):
if cache.has_key(a):
return cache[a]
cache[a] = ReallyCalc(a)
return cache[a]
return CalcSomething
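Note that @apply relies on the apply() builtin, which was removed in Python 3. The same closure trick works without it by calling a factory function immediately (the factory name here is made up):
def _make_calc_something():
    cache = {}  # statics go here
    def CalcSomething(a):
        if a not in cache:
            cache[a] = ReallyCalc(a)
        return cache[a]
    return CalcSomething

CalcSomething = _make_calc_something()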
A:
The solution proposed by S.Lott is the solution I would propose too.
There are useful "memoize" decorators around, too, like:
Memoize decorator function with cache size limit
Memoize decorator with O(1) length-limited LRU cache, supports mutable types
Given all that, I'm providing an alternative for your initial attempt at a function and a "static local", which is standalone:
def calc_something(a):
try:
return calc_something._cache[a]
except AttributeError: # _cache is not there
calc_something._cache= {}
except KeyError: # the result is not there
pass
# compute result here
calc_something._cache[a]= result
return result
| Simulating a 'local static' variable in python | Consider the following code:
def CalcSomething(a):
if CalcSomething._cache.has_key(a):
return CalcSomething._cache[a]
CalcSomething._cache[a] = ReallyCalc(a)
return CalcSomething._cache[a]
CalcSomething._cache = { }
This is the easiest way I can think of for simulating a 'local static' variable in python.
What bothers me is that CalcSomething._cache is mentioned outside the function's definition, but the alternative would be something like this:
if not hasattr(CalcSomething, "_cache"):
setattr(CalcSomething, "_cache", { } )
inside the function's definition, which is really cumbersome.
Is there a more elegant way?
[EDIT]
Just to clarify, this question is not about local function caches, as the example above might suggest. Here is another short example where a 'static local' might be handy:
def ParseString(s):
return ParseString._parser.parse(s)
# Create a Parser object once, which will be used for all parsings.
# Assuming a Parser object is heavy on resources, for the sake of this example.
ParseString._parser = Parser()
| [
"Turn it into a callable object (since that's what it really is.)\nclass CalcSomething(object):\n def __init__(self):\n self._cache = {}\n def __call__(self, a):\n if a not in self._cache: \n self._cache[a] = self.reallyCalc(a)\n return self._cache[a]\n def reallyCalc(self, a):\n return # a real answer\ncalcSomething = CalcSomething()\n\nNow you can use calcSomething as if it were a function. But it remains tidy and self-contained. \n",
"Turn it into a decorator.\ndef static_var(var_name, initial_value):\n def _set_var(obj):\n setattr(obj, var_name, initial_value)\n return obj\n return _set_var\n\n@static_var(\"_cache\", {})\ndef CalcSomething(a):\n ...\n\n",
"Consider writing decorator that will maintain cache and your function won't be contaminated by caching code:\ndef cacheResults(aFunc):\n '''This decorator funcion binds a map between the tuple of arguments \n and results computed by aFunc for those arguments'''\n def cachedFunc(*args):\n if not hasattr(aFunc, '_cache'):\n aFunc._cache = {}\n if args in aFunc._cache:\n return aFunc._cache[args]\n newVal = aFunc(*args)\n aFunc._cache[args] = newVal\n return newVal\n return cachedFunc\n\n@cacheResults\ndef ReallyCalc(a):\n '''This function does only actual computation'''\n return pow(a, 42)\n\nMaybe it doesn't look great at first, but you can use cacheResults() anywhere you don't need keyword parameters. It is possible to create similar decorator that would work also for keyword params, but that didn't seem necessary this time.\n",
"One option is to abuse default parameters. ie:\ndef CalcSomething(a, _cache={}):\n if _cache.has_key(a):\n\nThis has the advantage that you don't need to qualify the name, and will get fast local access to the variables rather than doing two slow dict lookups. However it still has the problem that it is mentioned outside the function (in fact it's worse since its now in the function signature.)\nTo prevent this, a better solution would be to wrap the function in a closure containing your statics:\n@apply\ndef CalcSomething():\n cache = {} # statics go here\n\n def CalcSomething(a):\n if cache.has_key(a):\n return cache[a]\n cache[a] = ReallyCalc(a)\n return cache[a]\n return CalcSomething\n\n",
"The solution proposed by S.Lott is the solution I would propose too.\nThere are useful \"memoize\" decorators around, too, like:\n\nMemoize decorator function with cache size limit\nMemoize decorator with O(1) length-limited LRU cache, supports mutable types \n\nGiven all that, I'm providing an alternative for your initial attempt at a function and a \"static local\", which is standalone:\ndef calc_something(a):\n\n try:\n return calc_something._cache[a]\n except AttributeError: # _cache is not there\n calc_something._cache= {}\n except KeyError: # the result is not there\n pass\n\n # compute result here\n\n calc_something._cache[a]= result\n return result\n\n"
] | [
56,
17,
11,
4,
4
] | [] | [] | [
"python"
] | stackoverflow_0000460586_python.txt |
Q:
Python with Netbeans 6.5
Can you give me some links or explain how to configure an existing Python project in NetBeans?
I'm trying it these days, but it keeps crashing; code navigation doesn't work well, and I have problems with debugging. Surely these problems are related to my low experience with Python, and I also need support with trivial things such as organizing source folders, imports, etc. Thank you very much.
Valerio
A:
Python support is in beta, and as someone who has worked with NB for the past 2 years, I can say that even release versions are buggy and sometimes crash. Early Ruby support was also very shaky.
| Python with Netbeans 6.5 | Can you give me some links or explain how to configure an existing Python project in NetBeans?
I'm trying it these days, but it keeps crashing; code navigation doesn't work well, and I have problems with debugging. Surely these problems are related to my low experience with Python, and I also need support with trivial things such as organizing source folders, imports, etc. Thank you very much.
Valerio
| [
"Python support is in beta, and as someone who has worked with NB for the past 2 years, I can say that even release versions are buggy and sometimes crash. Early Ruby support was also very shaky.\n"
] | [
1
] | [] | [] | [
"netbeans",
"project",
"python"
] | stackoverflow_0000462068_netbeans_project_python.txt |
Q:
Perl or Python script to remove user from group
I am putting together a Samba-based server as a Primary Domain Controller, and ran into a cute little problem that should have been solved many times over. But a number of searches did not yield a result. I need to be able to remove an existing user from an existing group with a command line script. It appears that the usermod easily allows me to add a user to a supplementary group with this command:
usermod -a -G supgroup1,supgroup2 username
Without the "-a" option, if the user is currently a member of a group which is not listed, the user will be removed from the group. Does anyone have a Perl (or Python) script that allows the specification of a user and group for removal? Am I missing an obvious existing command or a well-known solution for this? Thanks in advance!
Thanks to J.J. for the pointer to the Unix::Group module, which is part of Unix-ConfigFile. It looks like the command deluser would do what I want, but it was not in any of my existing repositories. I went ahead and wrote the Perl script using the Unix::Group module. Here is the script for your sysadmining pleasure.
#!/usr/bin/perl
#
# Usage: removegroup.pl login group
# Purpose: Removes a user from a group while retaining current primary and
# supplementary groups.
# Notes: There is a Debian specific utility that can do this called deluser,
# but I did not want any cross-distribution dependencies
#
# Date: 25 September 2008
# Validate Arguments (correct number, format etc.)
if ( ($#ARGV < 1) || (2 < $#ARGV) ) {
print "\nUsage: removegroup.pl login group\n\n";
print "EXIT VALUES\n";
print "    The removegroup.pl script exits with the following values:\n\n";
print " 0 success\n\n";
print " 1 Invalid number of arguments\n\n";
print " 2 Login or Group name supplied greater than 16 characters\n\n";
print " 3 Login and/or Group name contains invalid characters\n\n";
exit 1;
}
# Check for well formed group and login names
if ((16 < length($ARGV[0])) ||(16 < length($ARGV[1])))
{
print "Usage: removegroup.pl login group\n";
print "ERROR: Login and Group names must be at most 16 characters\n";
exit 2;
}
if ( ( $ARGV[0] !~ m{^[a-z_]+[a-z0-9_-]*$}) || ( $ARGV[1] !~ m{^[a-z_]+[a-z0-9_-]*$} ) )
{
print "Usage: removegroup.pl login group\n";
print "ERROR: Login and/or Group name contains invalid characters\n";
exit 3;
}
# Set some variables for readability
$login=$ARGV[0];
$group=$ARGV[1];
# Requires the GroupFile interface from perl-Unix-Configfile
use Unix::GroupFile;
$grp = new Unix::GroupFile "/etc/group";
$grp->remove_user("$group", "$login");
$grp->commit();
undef $grp;
exit 0;
A:
I found This for you. It should do what you need. As far as I can tell Perl does not have any built in functions for removing users from a group. It has several for seeing the group id of a user or process.
A:
Web Link: http://www.ibm.com/developerworks/linux/library/l-roadmap4/
To add members to the group, use the gpasswd command with the -a switch and the user id you wish to add:
gpasswd -a userid mygroup
Remove users from a group with the same command, but a -d switch rather than -a:
gpasswd -d userid mygroup
"man gpasswd" for more info...
I looked for ages to find this. Sometimes it takes too much effort not to reinvent the wheel...
A:
It looks like deluser --group [groupname] should do it.
If not, the groups command lists the groups that a user belongs to. It should be fairly straightforward to come up with some Perl to capture that list into an array (or map it into a hash), delete the unwanted group(s), and feed that back to usermod.
A:
Here's a very simple little Perl script that should give you the list of groups you need:
my $user = 'user';
my $groupNoMore = 'somegroup';
my $groups = join ',', grep { $_ ne $groupNoMore } split /\s/, `groups $user`;
Getting and sanitizing the required arguments is left as an exercise for the reader.
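Since the question also asks for Python, here is a hedged sketch of the same group-list computation using the standard grp module. It only reads the group database; writing the change back is still left to usermod, and the function name is made up for illustration:
import grp

def remaining_groups(user, group_to_drop):
    # Names of all groups that list `user` as a supplementary member,
    # minus the one being removed.
    return [g.gr_name for g in grp.getgrall()
            if user in g.gr_mem and g.gr_name != group_to_drop]

# e.g. feed ','.join(remaining_groups(login, group)) to usermod -G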
| Perl or Python script to remove user from group | I am putting together a Samba-based server as a Primary Domain Controller, and ran into a cute little problem that should have been solved many times over. But a number of searches did not yield a result. I need to be able to remove an existing user from an existing group with a command line script. It appears that the usermod easily allows me to add a user to a supplementary group with this command:
usermod -a -G supgroup1,supgroup2 username
Without the "-a" option, if the user is currently a member of a group which is not listed, the user will be removed from the group. Does anyone have a Perl (or Python) script that allows the specification of a user and group for removal? Am I missing an obvious existing command or a well-known solution for this? Thanks in advance!
Thanks to J.J. for the pointer to the Unix::Group module, which is part of Unix-ConfigFile. It looks like the command deluser would do what I want, but it was not in any of my existing repositories. I went ahead and wrote the Perl script using the Unix::Group module. Here is the script for your sysadmining pleasure.
#!/usr/bin/perl
#
# Usage: removegroup.pl login group
# Purpose: Removes a user from a group while retaining current primary and
# supplementary groups.
# Notes: There is a Debian specific utility that can do this called deluser,
# but I did not want any cross-distribution dependencies
#
# Date: 25 September 2008
# Validate Arguments (correct number, format etc.)
if ( ($#ARGV < 1) || (2 < $#ARGV) ) {
print "\nUsage: removegroup.pl login group\n\n";
print "EXIT VALUES\n";
print "    The removegroup.pl script exits with the following values:\n\n";
print " 0 success\n\n";
print " 1 Invalid number of arguments\n\n";
print " 2 Login or Group name supplied greater than 16 characters\n\n";
print " 3 Login and/or Group name contains invalid characters\n\n";
exit 1;
}
# Check for well formed group and login names
if ((16 < length($ARGV[0])) ||(16 < length($ARGV[1])))
{
print "Usage: removegroup.pl login group\n";
print "ERROR: Login and Group names must be at most 16 characters\n";
exit 2;
}
if ( ( $ARGV[0] !~ m{^[a-z_]+[a-z0-9_-]*$}) || ( $ARGV[1] !~ m{^[a-z_]+[a-z0-9_-]*$} ) )
{
print "Usage: removegroup.pl login group\n";
print "ERROR: Login and/or Group name contains invalid characters\n";
exit 3;
}
# Set some variables for readability
$login=$ARGV[0];
$group=$ARGV[1];
# Requires the GroupFile interface from perl-Unix-Configfile
use Unix::GroupFile;
$grp = new Unix::GroupFile "/etc/group";
$grp->remove_user("$group", "$login");
$grp->commit();
undef $grp;
exit 0;
| [
"I found This for you. It should do what you need. As far as I can tell Perl does not have any built in functions for removing users from a group. It has several for seeing the group id of a user or process.\n",
"Web Link: http://www.ibm.com/developerworks/linux/library/l-roadmap4/\nTo add members to the group, use the gpasswd command with the -a switch and the user id you wish to add:\ngpasswd -a userid mygroup\nRemove users from a group with the same command, but a -d switch rather than -a:\ngpasswd -d userid mygroup \n\"man gpasswd\" for more info...\nI looked for ages to find this. Sometimes it takes too much effort not to reinvent the wheel...\n",
"It looks like deluser --group [groupname] should do it.\nIf not, the groups command lists the groups that a user belongs to. It should be fairly straightforward to come up with some Perl to capture that list into an array (or map it into a hash), delete the unwanted group(s), and feed that back to usermod.\n",
"Here's a very simple little Perl script that should give you the list of groups you need:\nmy $user = 'user';\nmy $groupNoMore = 'somegroup';\nmy $groups = join ',', grep { $_ ne $groupNoMore } split /\\s/, `groups $user`;\n\nGetting and sanitizing the required arguments is left as an execrcise for the reader.\n"
] | [
2,
2,
1,
1
] | [] | [] | [
"centos",
"perl",
"python",
"redhat",
"sysadmin"
] | stackoverflow_0000128933_centos_perl_python_redhat_sysadmin.txt |
Q:
UnicodeEncodeError with BeautifulSoup 3.1.0.1 and Python 2.5.2
I am using BeautifulSoup 3.1.0.1 with Python 2.5.2 to parse a web page in French. However, as soon as I call findAll, I get the following error:
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 1146: ordinal not in range(128)
Below is the code I am currently running:
import urllib2
from BeautifulSoup import BeautifulSoup
page = urllib2.urlopen("http://fr.encarta.msn.com/encyclopedia_761561798/Paris.html")
soup = BeautifulSoup(page, fromEncoding="latin1")
r = soup.findAll("table")
print r
Does anybody have an idea why?
Thanks!
UPDATE: As requested, below is the full Traceback
Traceback (most recent call last):
File "[...]\test.py", line 6, in <module>
print r
UnicodeEncodeError: 'ascii' codec can't encode characters in position 1146-1147: ordinal not in range(128)
A:
Here is another idea. Your terminal is not capable of displaying a Unicode string from Python. The interpreter tries to convert it to ASCII first. You should encode it explicitly before printing. I don't know the exact semantics of soup.findAll(). But it is probably something like:
for t in soup.findAll("table"):
print t.encode('latin1')
That is, if t really is a string. Maybe it's just another object from which you have to build the data that you want to display.
| UnicodeEncodeError with BeautifulSoup 3.1.0.1 and Python 2.5.2 | I am using BeautifulSoup 3.1.0.1 with Python 2.5.2 to parse a web page in French. However, as soon as I call findAll, I get the following error:
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 1146: ordinal not in range(128)
Below is the code I am currently running:
import urllib2
from BeautifulSoup import BeautifulSoup
page = urllib2.urlopen("http://fr.encarta.msn.com/encyclopedia_761561798/Paris.html")
soup = BeautifulSoup(page, fromEncoding="latin1")
r = soup.findAll("table")
print r
Does anybody have an idea why?
Thanks!
UPDATE: As requested, below is the full Traceback
Traceback (most recent call last):
File "[...]\test.py", line 6, in <module>
print r
UnicodeEncodeError: 'ascii' codec can't encode characters in position 1146-1147: ordinal not in range(128)
| [
"Here is another idea. Your terminal is not capable of displaying a Unicode string from Python. The interpreter tries to convert it to ASCII first. You should encode it explicitly before printing. I don't know the exact semantics of soup.findAll(). But it is probably something like:\nfor t in soup.findAll(\"table\"):\n    print t.encode('latin1')\n\nThat is, if t really is a string. Maybe it's just another object from which you have to build the data that you want to display.\n"
] | [
11
] | [] | [] | [
"beautifulsoup",
"encoding",
"python",
"screen_scraping"
] | stackoverflow_0000463215_beautifulsoup_encoding_python_screen_scraping.txt |
Q:
Distributing a stand-alone Python web-based application to non-technical users
I'm writing a web application in Python, intended for use by teachers and pupils in a classroom. It'll run from a hosted website, but I also want people to be able to download a self-contained application they can install locally if they want more performance or they simply won't have an Internet connection available in the classroom.
The users aren't going to be able to manage instructions like "first install Python, then install dependencies, download the .tar.gz archive and type these commands into the command line...". I need to be able to create an all-in-one type installer that can potentially install Python, dependencies (Python-LDAP), some Python code, and register a Python-based web server as a Windows Service.
I've had a look through previous questions, but none quite seem relevant. I'm not concerned about the security of source code (my application will be open source, I'll sell content to go with it), I just need non-technical Windows users to be able to download and use my application with no fuss.
My current thoughts are to use NSIS to create an installer that includes Python and Python-LDAP as MSIs, then registers my own simple Python-based web server as a Windows service and puts a shortcut in the start menu / on the desktop linking to http://localhost. Is this doable with NSIS - can NSIS check for currently installed copies of Python, for instance? Is there a better way of doing this - is there a handy framework available that lets me shove my code in a folder and bundle it up to make an installer?
A:
Using NSIS is great (I use it too), but I would suggest using a "packager" like PyInstaller (my personal favorite; alternatives are bbfreeze and py2exe) to create an exe before using NSIS.
The primary benefit you get by doing this is:
Your download is smaller, as you're not bundling the whole Python standard library and extra stuff your app won't need, and you get an exe file to boot!
A:
You can try the Bitnami Stack for Django that includes Apache, MySQL,Python, etc in an all-in-one installer. It is free/open source
| Distributing a stand-alone Python web-based application to non-technical users | I'm writing a web application in Python, intended for use by teachers and pupils in a classroom. It'll run from a hosted website, but I also want people to be able to download a self-contained application they can install locally if they want more performance or they simply won't have an Internet connection available in the classroom.
The users aren't going to be able to manage instructions like "first install Python, then install dependencies, download the .tar.gz archive and type these commands into the command line...". I need to be able to create an all-in-one type installer that can potentially install Python, dependencies (Python-LDAP), some Python code, and register a Python-based web server as a Windows Service.
I've had a look through previous questions, but none quite seem relevant. I'm not concerned about the security of source code (my application will be open source, I'll sell content to go with it), I just need non-technical Windows users to be able to download and use my application with no fuss.
My current thoughts are to use NSIS to create an installer that includes Python and Python-LDAP as MSIs, then registers my own simple Python-based web server as a Windows service and puts a shortcut in the start menu / on the desktop linking to http://localhost. Is this doable with NSIS - can NSIS check for currently installed copies of Python, for instance? Is there a better way of doing this - is there a handy framework available that lets me shove my code in a folder and bundle it up to make an installer?
| [
"Using NSIS is great (I use it too), but I would suggest using a \"packager\" like PyInstaller (my personal favorite; alternatives are bbfreeze and py2exe) to create an exe before using NSIS.\nThe primary benefit you get by doing this is:\nYour download is smaller, as you're not bundling the whole Python standard library and extra stuff your app won't need, and you get an exe file to boot!\n",
"You can try the Bitnami Stack for Django that includes Apache, MySQL,Python, etc in an all-in-one installer. It is free/open source\n"
] | [
4,
0
] | [] | [] | [
"installation",
"python"
] | stackoverflow_0000210461_installation_python.txt |
Q:
Algorithm to generate spanning set
Given this input: [1,2,3,4]
I'd like to generate the set of spanning sets:
[1] [2] [3] [4]
[1] [2] [3,4]
[1] [2,3] [4]
[1] [3] [2,4]
[1,2] [3] [4]
[1,3] [2] [4]
[1,4] [2] [3]
[1,2] [3,4]
[1,3] [2,4]
[1,4] [2,3]
[1,2,3] [4]
[1,2,4] [3]
[1,3,4] [2]
[2,3,4] [1]
[1,2,3,4]
Every set has all the elements of the original set, permuted to appear in unique subsets. What is the algorithm that produces these sets? I've tried Python generator functions using choose, permutation, combination, power set, and so on, but can't get the right combination.
20 Jan 2009
This is not a homework question. This is an improved answer I was working on for www.projecteuler.net problem # 118. I already had a slow solution but came up with a better way -- except I could not figure out how to do the spanning set.
I'll post my code when I get back from an Inauguration Party.
21 Jan 2009
This is the eventual algorithm I used:
def spanningsets(items):
if len(items) == 1:
yield [items]
else:
left_set, last = items[:-1], [items[-1]]
for cc in spanningsets(left_set):
yield cc + [last]
for i,elem in enumerate(cc):
yield cc[:i] + [elem + last] + cc[i+1:]
@Yuval F: I know how to do a powerset. Here's a straightforward implementation:
def powerset(s) :
length = len(s)
for i in xrange(0, 2**length) :
yield [c for j, c in enumerate(s) if (1 << j) & i]
return
A:
This should work, though I haven't tested it enough.
def spanningsets(items):
if not items: return
if len(items) == 1:
yield [[items[-1]]]
else:
for cc in spanningsets(items[:-1]):
yield cc + [[items[-1]]]
for i in range(len(cc)):
yield cc[:i] + [cc[i] + [items[-1]]] + cc[i+1:]
for sset in spanningsets([1, 2, 3, 4]):
print ' '.join(map(str, sset))
Output:
[1] [2] [3] [4]
[1, 4] [2] [3]
[1] [2, 4] [3]
[1] [2] [3, 4]
[1, 3] [2] [4]
[1, 3, 4] [2]
[1, 3] [2, 4]
[1] [2, 3] [4]
[1, 4] [2, 3]
[1] [2, 3, 4]
[1, 2] [3] [4]
[1, 2, 4] [3]
[1, 2] [3, 4]
[1, 2, 3] [4]
[1, 2, 3, 4]
A:
What about this? I haven't tested it yet, but I'll try it later…
I think this technique is called Dynamic Programming:
Take the first element [1]
What can you create with it? Only [1]
Take the second one [2]
Now you've got two possibilities: [1,2] and [1] [2]
Take the third one [3]
With the first of number 2 [1,2] one can create [1,2,3] and [1,2] [3]
With the second of number 2 [1] [2] one can create [1,3] [2] and [1] [2,3] and [1] [2] [3]
I hope it is clear enough what I tried to show. (If not, drop a comment!)
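That incremental idea translates fairly directly into Python. A sketch mirroring the steps above (untested, and the function name is just for illustration):
def spanningsets_dp(items):
    partitions = [[[items[0]]]]      # step 1: only [1]
    for x in items[1:]:
        nxt = []
        for p in partitions:
            nxt.append(p + [[x]])    # x starts a new subset...
            for i in range(len(p)):  # ...or joins each existing subset
                nxt.append(p[:i] + [p[i] + [x]] + p[i+1:])
        partitions = nxt
    return partitions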
A:
Here's The SUNY algorithm repository page on the problem. Maybe you can translate one of the code references to python.
Edit: This was a similar problem. Here is the SUNY repository page about generating partitions, which I believe is the correct problem.
A:
I think the following method is the best way to generate them for the Euler problem, as you can replace the return value with the number of prime spanning subsets, and it will be trivial to do the multiplication (especially with memoization):
GenerateSubsets(list)
partitions = { x | x is subset of list and x contains the lowest element of list }
    foreach (partition in partitions)
        if partition == list
            yield { partition }
        else
            yield { partition } x GenerateSubsets(list - partition)
The key part is to make sure that the recursive side always has the leftmost element, this way, you don't get duplicates.
I have some messy C# code that does this:
IEnumerable<IEnumerable<List<int>>> GenerateSubsets(List<int> list)
{
int p = (1 << (list.Count)) - 2;
List<int> lhs = new List<int>();
List<int> rhs = new List<int>();
while (p >= 0)
{
for (int i = 0; i < list.Count; i++)
if ((p & (1 << i)) == 0)
lhs.Add(list[i]);
else
rhs.Add(list[i]);
if (rhs.Count > 0)
foreach (var rhsSubset in GenerateSubsets(rhs))
yield return Combine(lhs, rhsSubset);
else
yield return Combine(lhs, null);
lhs.Clear();
rhs.Clear();
p -= 2;
}
}
IEnumerable<List<int>> Combine(List<int> list, IEnumerable<List<int>> rest)
{
yield return list;
if (rest != null)
foreach (List<int> x in rest)
yield return x;
}
| Algorithm to generate spanning set | Given this input: [1,2,3,4]
I'd like to generate the set of spanning sets:
[1] [2] [3] [4]
[1] [2] [3,4]
[1] [2,3] [4]
[1] [3] [2,4]
[1,2] [3] [4]
[1,3] [2] [4]
[1,4] [2] [3]
[1,2] [3,4]
[1,3] [2,4]
[1,4] [2,3]
[1,2,3] [4]
[1,2,4] [3]
[1,3,4] [2]
[2,3,4] [1]
[1,2,3,4]
Every set has all the elements of the original set, permuted to appear in unique subsets. What is the algorithm that produces these sets? I've tried Python generator functions using choose, permutation, combination, power set, and so on, but can't get the right combination.
20 Jan 2009
This is not a homework question. This is an improved answer I was working on for www.projecteuler.net problem # 118. I already had a slow solution but came up with a better way -- except I could not figure out how to do the spanning set.
I'll post my code when I get back from an Inauguration Party.
21 Jan 2009
This is the eventual algorithm I used:
def spanningsets(items):
if len(items) == 1:
yield [items]
else:
left_set, last = items[:-1], [items[-1]]
for cc in spanningsets(left_set):
yield cc + [last]
for i,elem in enumerate(cc):
yield cc[:i] + [elem + last] + cc[i+1:]
@Yuval F: I know how to do a powerset. Here's a straightforward implementation:
def powerset(s) :
length = len(s)
for i in xrange(0, 2**length) :
yield [c for j, c in enumerate(s) if (1 << j) & i]
return
| [
"This should work, though I haven't tested it enough.\ndef spanningsets(items):\n if not items: return\n if len(items) == 1:\n yield [[items[-1]]]\n else:\n for cc in spanningsets(items[:-1]):\n yield cc + [[items[-1]]]\n for i in range(len(cc)):\n yield cc[:i] + [cc[i] + [items[-1]]] + cc[i+1:]\n\nfor sset in spanningsets([1, 2, 3, 4]):\n print ' '.join(map(str, sset))\n\nOutput:\n[1] [2] [3] [4]\n[1, 4] [2] [3]\n[1] [2, 4] [3]\n[1] [2] [3, 4]\n[1, 3] [2] [4]\n[1, 3, 4] [2]\n[1, 3] [2, 4]\n[1] [2, 3] [4]\n[1, 4] [2, 3]\n[1] [2, 3, 4]\n[1, 2] [3] [4]\n[1, 2, 4] [3]\n[1, 2] [3, 4]\n[1, 2, 3] [4]\n[1, 2, 3, 4]\n\n",
"What about this? I haven't tested it yet, but I'll try it later…\nI think this technique is called Dynamic Programming:\n\nTake the first element [1]\nWhat can you create with it? Only [1]\nTake the second one [2]\nNow you've got two possibilities: [1,2] and [1] [2]\nTake the third one [3]\nWith the first of number 2 [1,2] one can create [1,2,3] and [1,2] [3]\nWith the second of number 2 [1] [2] one can create [1,3] [2] and [1] [2,3] and [1] [2] [3]\n\nI hope it is clear enough what I tried to show. (If not, drop a comment!)\n",
"Here's The SUNY algorithm repository page on the problem. Maybe you can translate one of the code references to python.\nEdit: This was a similar problem. Here is the SUNY repository page about generating partitions, which I believe is the correct problem.\n",
"I think the following method is the best way to generate them for the euler problem, as you can replace the return value with the number of prime spanning subsets, and it will be trivial to do the multiplication (especially with memoization):\nGenerateSubsets(list)\n partitions = { x | x is subset of list and x contains the lowest element of list }\n foreach (parition in partitions)\n if set == list\n yield { partition }\n else\n yield { partition } x GenerateSubsets(list - part)\n\nThe key part is to make sure that the recursive side always has the leftmost element, this way, you don't get duplicates.\nI have some messy C# code that does this:\n IEnumerable<IEnumerable<List<int>>> GenerateSubsets(List<int> list)\n {\n int p = (1 << (list.Count)) - 2;\n List<int> lhs = new List<int>();\n List<int> rhs = new List<int>();\n while (p >= 0)\n {\n for (int i = 0; i < list.Count; i++)\n if ((p & (1 << i)) == 0)\n lhs.Add(list[i]);\n else\n rhs.Add(list[i]);\n\n if (rhs.Count > 0)\n foreach (var rhsSubset in GenerateSubsets(rhs))\n yield return Combine(lhs, rhsSubset);\n else\n yield return Combine(lhs, null);\n\n lhs.Clear();\n rhs.Clear();\n p -= 2;\n }\n }\n\n IEnumerable<List<int>> Combine(List<int> list, IEnumerable<List<int>> rest)\n {\n yield return list;\n if (rest != null)\n foreach (List<int> x in rest)\n yield return x;\n }\n\n"
] | [
11,
6,
0,
0
] | [
"The result sets together with the empty set {} looks like the results of the powerset (or power set), but it is not the same thing.\nI started a post about a similar problem which has a few implementations (although in C#) and geared more for speed than clarity in some cases. The first example should be easy to translate. Maybe it will give a few ideas anyway.\nThey work on the principle that emmumerating the combinations is similar to counting in binary (imagine counting from 0 to 16). You do not state if the order is important, or just generating all the combinations, so a quick tidy up may be in order afterwards.\nHave a look here (ignore the odd title, the discussion took another direction)\n"
] | [
-1
] | [
"algorithm",
"python"
] | stackoverflow_0000460479_algorithm_python.txt |
Q:
Python Psycopg error and connection handling (v MySQLdb)
Is there a way to make psycopg and postgres deal with errors without having to reestablish the connection, like MySQLdb? The commented version of the below works with MySQLdb, the comments make it work with Psycopg2:
results = {'felicitas': 3, 'volumes': 8, 'acillevs': 1, 'mosaics': 13, 'perat\xe9': 1, 'representative': 6....}
for item in sorted(results):
try:
cur.execute("""insert into resultstab values ('%s', %d)""" % (item, results[item]))
print item, results[item]
# conn.commit()
except:
# conn=psycopg2.connect(user='bvm', database='wdb', password='redacted')
# cur=conn.cursor()
print 'choked on', item
continue
This must slow things down, could anyone give a suggestion for passing over formatting errors? Obviously the above chokes on apostrophes, but is there a way to make it pass over that without getting something like the following, or committing, reconnecting, etc?:
agreement 19
agreements 1
agrees 1
agrippa 9
choked on agrippa's
choked on agrippina
A:
I think your code looks like this at the moment:
l = "a very long ... text".split()
for e in l:
cursor.execute("INSERT INTO yourtable (yourcol) VALUES ('" + e + "')")
So try to change it into something like this:
l = "a very long ... text".split()
for e in l:
cursor.execute("INSERT INTO yourtable (yourcol) VALUES (%s)", (e,))
So never forget to pass your parameters in the parameter list; then you don't have to care about quoting, and it is also more secure. You can read more about it at http://www.python.org/dev/peps/pep-0249/
Also have a look at the method .executemany(), which is specially designed to execute the same statement multiple times.
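For the dictionary in the question, that could look something like this (a sketch reusing the question's names):
rows = [(item, results[item]) for item in sorted(results)]
cur.executemany("insert into resultstab values (%s, %s)", rows)
conn.commit()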
A:
First of all, you should let psycopg do the escaping for you by passing the parameters to the execute() method instead of doing the formatting yourself with '%'. That is:
cur.execute("insert into resultstab values (%s, %s)", (item, results[item]))
Note how we use "%s" as a marker even for non-string values and avoid quotes in the query. psycopg will do all the quoting for us.
Then, if you want to ignore some errors, just rollback and continue.
try:
cur.execute("SELECT this is an error")
except:
conn.rollback()
That's all. psycopg will rollback and start a new transaction on your next statement.
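Putting both answers together, the original loop might become something like this (a sketch using the question's names):
for item in sorted(results):
    try:
        cur.execute("insert into resultstab values (%s, %s)",
                    (item, results[item]))
        conn.commit()
    except Exception:
        conn.rollback()   # abort the failed transaction...
        print 'choked on', item
        continue          # ...and carry on with the next item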
| Python Psycopg error and connection handling (v MySQLdb) | Is there a way to make psycopg and postgres deal with errors without having to reestablish the connection, like MySQLdb? The commented version of the below works with MySQLdb, the comments make it work with Psycopg2:
results = {'felicitas': 3, 'volumes': 8, 'acillevs': 1, 'mosaics': 13, 'perat\xe9': 1, 'representative': 6....}
for item in sorted(results):
try:
cur.execute("""insert into resultstab values ('%s', %d)""" % (item, results[item]))
print item, results[item]
# conn.commit()
except:
# conn=psycopg2.connect(user='bvm', database='wdb', password='redacted')
# cur=conn.cursor()
print 'choked on', item
continue
This must slow things down, could anyone give a suggestion for passing over formatting errors? Obviously the above chokes on apostrophes, but is there a way to make it pass over that without getting something like the following, or committing, reconnecting, etc?:
agreement 19
agreements 1
agrees 1
agrippa 9
choked on agrippa's
choked on agrippina
| [
"I think your code looks like this at the moment:\nl = \"a very long ... text\".split()\nfor e in l:\n cursor.execute(\"INSERT INTO yourtable (yourcol) VALUES ('\" + e + \"')\")\n\nSo try to change it into something like this:\nl = \"a very long ... text\".split()\nfor e in l:\n cursor.execute(\"INSERT INTO yourtable (yourcol) VALUES (%s)\", (e,))\n\nso never forget to pass your parameters in the parameters list, then you don't have to care about your quotes and stuff, it is also more secure. You can read more about it at http://www.python.org/dev/peps/pep-0249/\nalso have a look there at the method .executemany() which is specially designed to execute the same statement multiple times.\n",
"First of all you should let psycopg do the escaping for you by passing to the execute() method the parameters instead of doing the formatting yourself with '%'. That is:\ncur.execute(\"insert into resultstab values (%s, %s)\", (item, results[item]))\n\nNote how we use \"%s\" as a marker even for non-string values and avoid quotes in the query. psycopg will do all the quoting for us.\nThen, if you want to ignore some errors, just rollback and continue.\ntry:\n cur.execute(\"SELECT this is an error\")\nexcept:\n conn.rollback()\n\nThat's all. psycopg will rollback and start a new transaction on your next statement.\n"
] | [
2,
2
] | [] | [] | [
"mysql",
"psycopg2",
"python"
] | stackoverflow_0000070681_mysql_psycopg2_python.txt |
Q:
Python - Doing absolute imports from a subfolder
Basically I'm asking the same question as this guy: How to do relative imports in Python?
But no one gave him a correct answer. Given that you are inside a subfolder and you want to go up a directory and then into ANOTHER subfolder, doing what they suggested does not work (as the OP pointed out in his comments to their answers).
I know that you can do this by using sys.path, but I would prefer a cleaner method.
Example:
App
__init__.py
Package_A
--__init__.py
--Module_A.py
Package_B
--__init__.py
--Module_B.py
How would I import Module_A into Module_B?
A:
main.py
setup.py
app/ ->
__init__.py
package_a/ ->
__init__.py
module_a.py
package_b/ ->
__init__.py
module_b.py
You run python main.py.
main.py does: import app.package_a.module_a
module_a.py does import app.package_b.module_b
Alternatively 2 or 3 could use: from app.package_a import module_a
That will work as long as you have app in your PYTHONPATH. main.py could be anywhere then.
So you write a setup.py to copy (install) the whole app package and subpackages to the target system's python folders, and main.py to target system's script folders.
A:
If I'm reading correctly, in Python 2.5 or higher:
from ..Package_A import Module_A
I thought I was well-versed in Python but I had no idea that was possible in version 2.5.
A:
If you are then importing Module_B into App, you would do the following:
Module_B.py:
import ModuleA
App.py (which also imports ModuleA which is now by default in your Pythonpath)
import Module_B.Module_B
Another alternative is to update __init__.py (the one in the App folder) to:
import os
import sys
sys.path.append(os.path.join(os.getcwd(), '..'))  # extend() would add the string one character at a time
import ModuleA
Another alternative is to add your folder to the PYTHONPATH environment var.
| Python - Doing absolute imports from a subfolder | Basically I'm asking the same question as this guy: How to do relative imports in Python?
But no one gave him a correct answer. Given that you are inside a subfolder and you want to go up a directory and then into ANOTHER subfolder, doing what they suggested does not work (as the OP pointed out in his comments to their answers).
I know that you can do this by using sys.path, but I would prefer a cleaner method.
Example:
App
__init__.py
Package_A
--__init__.py
--Module_A.py
Package_B
--__init__.py
--Module_B.py
How would I import Module_A into Module_B?
| [
"main.py\nsetup.py\napp/ ->\n __init__.py\n package_a/ ->\n __init__.py\n module_a.py\n package_b/ ->\n __init__.py\n module_b.py\n\n\nYou run python main.py.\nmain.py does: import app.package_a.module_a\nmodule_a.py does import app.package_b.module_b\n\nAlternatively 2 or 3 could use: from app.package_a import module_a\nThat will work as long as you have app in your PYTHONPATH. main.py could be anywhere then.\nSo you write a setup.py to copy (install) the whole app package and subpackages to the target system's python folders, and main.py to target system's script folders.\n",
"If I'm reading correctly, in Python 2.5 or higher:\nfrom ..Package_A import Module_A\n\nI thought I was well-versed in Python but I had no idea that was possible in version 2.5.\n",
"If you are then importing Module_B into App, you would do the following:\nModule_B.py:\n import ModuleA\nApp.py (which also imports ModuleA, which is now by default in your Pythonpath)\nimport Module_B.Module_B\n\nAnother alternative is to update __init__.py (the one in the App folder) to:\nimport os\nimport sys\nsys.path.append(os.path.join(os.getcwd(), '..'))  # extend() would add the string one character at a time\nimport ModuleA\n\nAnother alternative is to add your folder to the PYTHONPATH environment var.\n"
] | [
12,
2,
0
] | [] | [] | [
"python",
"python_import"
] | stackoverflow_0000463643_python_python_import.txt |
Q:
Django missing translation of some strings. Any idea why?
I have a medium sized Django project, (running on AppEngine if it makes any difference), and have all the strings living in .po files like they should.
I'm seeing strange behavior where certain strings just don't translate. They show up in the .po file when I run makemessages, with the correct file locations marked where my {% trans %} tags are. The translations are in place and look correct compared to other strings on either side of them. But when I display the page in question, about 1/4 of the strings simply don't translate.
Digging into the relevant generated .mo file, I don't see either the msgid or the msgstr present.
Has anybody seen anything similar to this? Any idea what might be happening?
trans tags look correct
.po files look correct
no errors during compilemessages
A:
Ugh. Django, you're killing me.
Here's what was happening:
http://blog.e-shell.org/124
For some reason only Django knows, it decided to decorate some of my translations with the comment '# fuzzy'. It seems to have chosen which ones to mark randomly.
Anyway, #fuzzy means this: "don't translate this, even though here's the translation:"
I'll leave this here in case some other poor soul comes across it in the future.
A:
The fuzzy marker is added to the .po file by makemessages. When you have a new string (with no translations), it looks for similar strings, and includes them as the translation, with the fuzzy marker. This means, this is a crude match, so don't display it to the user, but it could be a good start for the human translator.
It isn't a Django behavior, it comes from the gettext facility.
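For reference, a fuzzy entry in a .po file looks like the snippet below (the strings are made-up examples). Deleting the #, fuzzy line, or fixing the translation, and then recompiling makes the string translate again:
#, fuzzy
msgid "Welcome"
msgstr "Bienvenue"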
| Django missing translation of some strings. Any idea why? | I have a medium sized Django project, (running on AppEngine if it makes any difference), and have all the strings living in .po files like they should.
I'm seeing strange behavior where certain strings just don't translate. They show up in the .po file when I run makemessages, with the correct file locations marked where my {% trans %} tags are. The translations are in place and look correct compared to other strings on either side of them. But when I display the page in question, about 1/4 of the strings simply don't translate.
Digging into the relevant generated .mo file, I don't see either the msgid or the msgstr present.
Has anybody seen anything similar to this? Any idea what might be happening?
trans tags look correct
.po files look correct
no errors during compilemessages
| [
"Ugh. Django, you're killing me.\nHere's what was happening:\nhttp://blog.e-shell.org/124\nFor some reason only Django knows, it decided to decorate some of my translations with the comment '# fuzzy'. It seems to have chosen which ones to mark randomly.\nAnyway, #fuzzy means this: \"don't translate this, even though here's the translation:\"\nI'll leave this here in case some other poor soul comes across it in the future.\n",
"The fuzzy marker is added to the .po file by makemessages. When you have a new string (with no translations), it looks for similar strings, and includes them as the translation, with the fuzzy marker. This means, this is a crude match, so don't display it to the user, but it could be a good start for the human translator.\nIt isn't a Django behavior, it comes from the gettext facility.\n"
] | [
11,
11
] | [] | [] | [
"django",
"internationalization",
"python",
"translation"
] | stackoverflow_0000463714_django_internationalization_python_translation.txt |
Q:
Python - Hits per minute implementation?
This seems like such a trivial problem, but I can't seem to pin down how I want to do it. Basically, I want to be able to produce a figure from a socket server that at any time can give the number of packets received in the last minute. How would I do that?
I was thinking of maybe summing a dictionary that uses the current second as a key, and when receiving a packet it increments that value by one, as well as setting the second+1 key above it to 0, but this just seems sloppy. Any ideas?
A:
A common pattern for solving this in other languages is to let the thing being measured simply increment an integer. Then you leave it to the listening client to determine intervals and frequencies.
So you basically do not let the socket server know about stuff like "minutes", because that's a feature the observer calculates. Then you can also support multiple listeners with different interval resolution.
I suppose you want some kind of ring-buffer structure to do the rolling logging.
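A minimal sketch of that ring-buffer idea in Python, with 60 one-second buckets (class and method names are made up for illustration):
import time

class HitCounter(object):
    def __init__(self, seconds=60):
        self.counts = [0] * seconds
        self.stamps = [None] * seconds
    def hit(self):
        now = int(time.time())
        i = now % len(self.counts)
        if self.stamps[i] != now:   # stale bucket from a previous minute: recycle it
            self.stamps[i] = now
            self.counts[i] = 0
        self.counts[i] += 1
    def last_minute(self):
        now = int(time.time())
        return sum(c for s, c in zip(self.stamps, self.counts)
                   if s is not None and now - s < len(self.counts))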
A:
For what it's worth, your implementation above won't work if you don't receive a packet every second, as the next second entry won't necessarily be reset to 0.
Either way, AFAIK the "correct" way to do this, as in log analysis, is to keep a limited record of all the queries you receive. So just chuck the query, time received, etc. into a database, and then simple database queries will give you the use over a minute, or any minute in the past. Not sure whether this is too heavyweight for you, though.
A:
When you say the last minute, do you mean the exact last 60 seconds or the last full minute from x:00 to x:59? The latter will be easier to implement and would probably give accurate results. You have one prev variable holding the value of the hits for the previous minute. Then you have a current value that increments every time there is a new hit. You return the value of prev to the users. At the change of the minute you swap prev with current and reset current.
If you want higher analysis you could split the minute in 2 to 6 slices. You need a variable or list entry for every slice. Let's say you have 6 slices of 10 seconds. You also have an index variable pointing to the current slice (0..5). For every hit you increment a temp variable. When the slice is over, you replace the value of the indexed variable with the value of temp, reset temp and move the index forward. You return the sum of the slice variables to the users.
| Python - Hits per minute implementation? | This seems like such a trivial problem, but I can't seem to pin down how I want to do it. Basically, I want to be able to produce a figure from a socket server that at any time can give the number of packets received in the last minute. How would I do that?
I was thinking of maybe summing a dictionary that uses the current second as a key, and when receiving a packet it increments that value by one, as well as setting the second+1 key above it to 0, but this just seems sloppy. Any ideas?
| [
"A common pattern for solving this in other languages is to let the thing being measured simply increment an integer. Then you leave it to the listening client to determine intervals and frequencies.\nSo you basically do not let the socket server know about stuff like \"minutes\", because that's a feature the observer calculates. Then you can also support multiple listeners with different interval resolution.\nI suppose you want some kind of ring-buffer structure to do the rolling logging. \n",
"For what it's worth, your implementation above won't work if you don't receive a packet every second, as the next second entry won't necessarily be reset to 0.\nEither way, afaik the \"correct\" way to do this, ala logs analysis, is to keep a limited record of all the queries you receive. So just chuck the query, time received etc. into a database, and then simple database queries will give you the use over a minute, or any minute in the past. Not sure whether this is too heavyweight for you, though.\n",
"When you say the last minute, do you mean the exact last seconds or the last full minute from x:00 to x:59? The latter will be easier to implement and would probably give accurate results. You have one prev variable holding the value of the hits for the previous minute. Then you have a current value that increments every time there is a new hit. You return the value of prev to the users. At the change of the minute you swap prev with current and reset current.\nIf you want higher analysis you could split the minute in 2 to 6 slices. You need a variable or list entry for every slice. Let's say you have 6 slices of 10 seconds. You also have an index variable pointing to the current slice (0..5). For every hit you increment a temp variable. When the slice is over, you replace the value of the indexed variable with the value of temp, reset temp and move the index forward. You return the sum of the slice variables to the users.\n"
] | [
3,
1,
1
] | [] | [] | [
"python"
] | stackoverflow_0000464314_python.txt |
Q:
Python regular expressions - how to capture multiple groups from a wildcard expression?
I have a Python regular expression that contains a group which can occur zero or many times - but when I retrieve the list of groups afterwards, only the last one is present. Example:
re.search("(\w)*", "abcdefg").groups()
this returns the list ('g',)
I need it to return ('a','b','c','d','e','f','g',)
Is that possible? How can I do it?
A:
re.findall(r"\w","abcdefg")
A:
In addition to Douglas Leeder's solution, here is the explanation:
In regular expressions the group count is fixed. Placing a quantifier behind a group does not increase the group count (imagine all other group indexes incrementing because an earlier group matched more than once).
Groups with quantifiers are the way of making a complex sub-expression atomic, when there is need to match it more than once. The regex engine has no other way than saving the last match only to the group. In short: There is no way to achieve what you want with a single "unarmed" regular expression, and you have to find another way.
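If the repeated group sits inside a larger pattern where findall alone is not enough, re.finditer lets you walk every match and read its group, for example:
>>> import re
>>> [m.group(1) for m in re.finditer(r"(\w)", "abcdefg")]
['a', 'b', 'c', 'd', 'e', 'f', 'g']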
| Python regular expressions - how to capture multiple groups from a wildcard expression? | I have a Python regular expression that contains a group which can occur zero or many times - but when I retrieve the list of groups afterwards, only the last one is present. Example:
re.search("(\w)*", "abcdefg").groups()
this returns the list ('g',)
I need it to return ('a','b','c','d','e','f','g',)
Is that possible? How can I do it?
| [
"re.findall(r\"\\w\",\"abcdefg\")\n\n",
"In addition to Douglas Leeder's solution, here is the explanation:\nIn regular expressions the group count is fixed. Placing a quantifier behind a group does not increase the group count (imagine all other group indexes incrementing because an earlier group matched more than once).\nGroups with quantifiers are the way of making a complex sub-expression atomic, when there is need to match it more than once. The regex engine has no other way than saving the last match only to the group. In short: There is no way to achieve what you want with a single \"unarmed\" regular expression, and you have to find another way.\n"
] | [
41,
33
] | [] | [] | [
"lexical_analysis",
"python",
"regex"
] | stackoverflow_0000464736_lexical_analysis_python_regex.txt |
Q:
Parsing Functions
I'm making a script parser in Python and I'm a little stuck. I am not quite sure how to parse a line for all its functions (or even just one function at a time) and then search for a function with that name and, if it exists, execute that function, short of writing a massive if/elif/else block....
EDIT
This is for my own scripting language that I'm making. It's nothing very complex, but I have a standard library of 8 or so functions that need to be able to run. How can I parse a line and run the function named in the line?
A:
Once you get the name of the function, use a dispatch dict to run the function:
def mysum(...): ...
def myotherstuff(...): ...
# create dispatch dict:
myfunctions = {'sum': mysum, 'stuff': myotherstuff}
# run your parser:
function_name, parameters = parse_result(line)
# run the function:
myfunctions[function_name](parameters)
Alternatively create a class with the commands:
class Commands(object):
def do_sum(self, ...): ...
def do_stuff(self, ...): ...
def run(self, funcname, params):
getattr(self, 'do_' + funcname)(params)
cmd = Commands()
function_name, parameters = parse_result(line)
cmd.run(function_name, parameters)
You could also look at the cmd module in the stdlib to do your class. It can provide you with a command-line interface for your language, with tab command completion, automatically.
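For illustration, a minimal cmd-module sketch (the command names are assumptions):

import cmd

class MyShell(cmd.Cmd):
    prompt = '> '

    def do_sum(self, line):
        "sum A B ... -- add the given numbers"
        print sum(int(x) for x in line.split())

    def do_quit(self, line):
        return True  # returning True stops cmdloop()

MyShell().cmdloop()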
A:
Check out PyParsing, it allows for definition of the grammar directly in Python code:
Assuming a function call is just somename():
>>> from pyparsing import *
>>> grammar = Word(alphas + "_", alphanums + "_")("func_name") + "()" + StringEnd()
>>> grammar.parseString("ab()\n")["func_name"]
"ab"
A:
Take a look at PLY. It should help you keep your parser specification clean.
A:
It all depends on what code you are parsing.
If you are parsing Python syntax, use the parser module from Python:
http://docs.python.org/library/parser.html
A quite complete list of parser libraries available for Python you can find at: http://nedbatchelder.com/text/python-parsers.html
| Parsing Functions | I'm making a script parser in Python and I'm a little stuck. I am not quite sure how to parse a line for all its functions (or even just one function at a time) and then search for a function with that name and, if it exists, execute that function, short of writing a massive if/elif/else block....
EDIT
This is for my own scripting language that I'm making. It's nothing very complex, but I have a standard library of 8 functions or so that I need to be able to run. How can I parse a line and run the function named in the line?
| [
"Once you get the name of the function, use a dispatch dict to run the function:\ndef mysum(...): ...\ndef myotherstuff(...): ...\n\n# create dispatch dict:\nmyfunctions = {'sum': mysum, 'stuff': myotherstuff}\n\n# run your parser:\nfunction_name, parameters = parse_result(line)\n\n# run the function:\nmyfunctions[function_name](parameters)\n\nAlternatively create a class with the commands:\nclass Commands(object):\n def do_sum(self, ...): ...\n def do_stuff(self, ...): ...\n def run(self, funcname, params):\n getattr(self, 'do_' + funcname)(params)\n\ncmd = Commands()\nfunction_name, parameters = parse_result(line)\ncmd.run(function_name, parameters)\n\nYou could also look at the cmd module in the stdlib to do your class. It can provide you with a command-line interface for your language, with tab command completion, automatically.\n",
"Check out PyParsing, it allows for definition of the grammar directly in Python code:\nAssuming a function call is just somename():\n>>> from pyparsing import *\n>>> grammar = Word(alphas + \"_\", alphanums + \"_\")(\"func_name\") + \"()\" + StringEnd()\n>>> grammar.parseString(\"ab()\\n\")[\"func_name\"]\n\"ab\" \n\n",
"Take a look at PLY. It should help you keep your parser specification clean.\n",
"It all depends on what code you are parsing. \nIf you are parsing Python syntax, use the parser module from Python: \nhttp://docs.python.org/library/parser.html\nA quite complete list of parser libraries available for Python you can find at: http://nedbatchelder.com/text/python-parsers.html\n"
] | [
3,
2,
0,
0
] | [] | [] | [
"parsing",
"python"
] | stackoverflow_0000464970_parsing_python.txt |
Q:
How do I take the output of one program and use it as the input of another?
I've looked at this and it wasn't much help.
I have a Ruby program that puts a question to the cmd line and I would like to write a Python program that can return an answer. Does anyone know of any links or in general how I might go about doing this?
Thanks for your help.
EDIT
Thanks to the guys that mentioned piping. I haven't used it too much and was glad it was brought up, since it forced me to look into it more.
A:
p = subprocess.Popen(['ruby', 'ruby_program.rb'], stdin=subprocess.PIPE,
stdout=subprocess.PIPE)
ruby_question = p.stdout.readline()
answer = calculate_answer(ruby_question)
p.stdin.write(answer)
print p.communicate()[0] # prints further info ruby may show.
The last 2 lines could be made into one:
print p.communicate(answer)[0]
A:
If you're on unix / linux you can use piping:
question.rb | answer.py
Then the output of question.rb becomes the input of answer.py
I've not tried it recently, but I have a feeling the same syntax might work on Windows as well.
A:
Pexpect
http://www.noah.org/wiki/Pexpect
Pexpect is a pure Python expect-like
module. Pexpect makes Python a better
tool for controlling other
applications.
Pexpect is a pure Python module for
spawning child applications;
controlling them; and responding to
expected patterns in their output.
Pexpect works like Don Libes' Expect.
Pexpect allows your script to spawn a
child application and control it as if
a human were typing commands.
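For illustration, a minimal Pexpect sketch (the script name and prompt text are assumptions):

import pexpect

child = pexpect.spawn('ruby question.rb')
child.expect('question')        # wait until the prompt text appears
child.sendline('the answer')    # type the answer as a user would
child.expect(pexpect.EOF)
print child.before              # whatever the program printed afterwards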
A:
First of all check this out:
[Unix piping][1]
It works on Windows or Unix, but it's slightly different. First, the programs:
question.rb:
puts "This is the question"
answer.rb:
question = gets
#calculate answer
puts "This is the answer"
Then the command line:
In unix:
question.rb | answer.rb
In windows:
ruby question.rb | ruby answer.rb
Output:
This is the question
This is the answer
A:
There are two ways (off the top of my head) to do this. The simplest if you're in a Unix environment is to use piping. Simple example:
cat .profile .shrc | more
This will send the output of the first command (cat .profile .shrc) to the more command using the pipe character |.
The second way is to call one program from the other in your source code. I don't know how Ruby handles this, but in Python you can run a program and get its output by using the popen function. See this example chapter from Learning Python, then Ctrl-F for "popen" for some example code.
| How do I take the output of one program and use it as the input of another? | I've looked at this and it wasn't much help.
I have a Ruby program that puts a question to the cmd line and I would like to write a Python program that can return an answer. Does anyone know of any links or in general how I might go about doing this?
Thanks for your help.
EDIT
Thanks to the guys that mentioned piping. I haven't used it too much and was glad it was brought up, since it forced me to look into it more.
| [
"p = subprocess.Popen(['ruby', 'ruby_program.rb'], stdin=subprocess.PIPE, \n stdout=subprocess.PIPE)\nruby_question = p.stdout.readline()\nanswer = calculate_answer(ruby_question)\np.stdin.write(answer)\nprint p.communicate()[0] # prints further info ruby may show.\n\nThe last 2 lines could be made into one:\nprint p.communicate(answer)[0]\n\n",
"If you're on unix / linux you can use piping:\nquestion.rb | answer.py\n\nThen the output of question.rb becomes the input of answer.py\nI've not tried it recently, but I have a feeling the same syntax might work on Windows as well.\n",
"Pexpect\nhttp://www.noah.org/wiki/Pexpect\n\nPexpect is a pure Python expect-like\n module. Pexpect makes Python a better\n tool for controlling other\n applications.\nPexpect is a pure Python module for\n spawning child applications;\n controlling them; and responding to\n expected patterns in their output.\n Pexpect works like Don Libes' Expect.\n Pexpect allows your script to spawn a\n child application and control it as if\n a human were typing commands.\n\n",
"First of all check this out: \n[Unix piping][1]\nIt works on windows or unix but it's slighly dufferent, first the programs:\nquestion.rb:\nputs \"This is the question\"\n\nanswer.rb:\nquestion = gets\n#calculate answer\nputs \"This is the answer\"\n\nThen the command line: \nIn unix:\nquestion.rb | answer.rb\n\nIn windows:\nruby question.rb | ruby answer.rb\n\nOutput:\nThis is the question\nThis is the answer\n\n",
"There are two ways (off the top of my head) to do this. The simplest if you're in a Unix environment is to use piping. Simple example:\ncat .profile .shrc | more\n\nThis will send the output of the first command (cat .profile .shrc) to the more command using the pipe character |.\nThe second way is to call one program from the other in your source code. I'm don't know how Ruby handles this, but in Python you can run a program and get it's output by using the popen function. See this example chapter from Learning Python, then Ctrl-F for \"popen\" for some example code.\n"
] | [
10,
4,
3,
3,
1
] | [] | [] | [
"io",
"python",
"ruby"
] | stackoverflow_0000465421_io_python_ruby.txt |
Q:
Tools for creating text as bitmaps (anti-aliased text, custom spacing, transparent background)
I need to batch create images with text. Requirements:
arbitrary size of bitmap
PNG format
transparent background
black text anti-aliased against transparency
adjustable character spacing
adjustable text position (x and y coordinates where text begins)
TrueType and/or Type1 support
Unix command line tool or Python library
So far I've evaluated the following:
Python Imaging Library: fails 5.
ImageMagick ("caption" option): hard to figure out 6.
PyCairo: fails 5.
SVG + ImageMagick convert: most promising, although requires multiple tools
The problem with PIL is that e.g. the default spacing for Verdana is way too sparse. I need the text to be a bit tighter, but there's no way to adjust it in PIL.
In ImageMagick I haven't found an easy way to specify where in the image the text begins (I'm using -size WIDTHxHEIGHT and caption:'TEXT'). Adding a transparent border will move the text away from the corner it's anchored to, but
image size needs to be adjusted accordingly since border adds to the extents
it's not possible to adjust horizontal and vertical offset independently
Have I missed some obvious alternatives or failed to find necessary features from the above mentioned?
A:
(5) indeed looks tricky, short of inserting dummy narrow-spaces into the string (which will break kerning) or using something much higher-level like the SVG or HTML/CSS renderer.
However, if you don't mind getting your hands dirty, it looks quite easy to hack PIL's freetype renderer into adding horizontal space. See _imagingft.c; after the following code in both font_getsize and font_render:
if (kerning && last_index && index) {
FT_Vector delta;
FT_Get_Kerning(self->face, last_index, index, ft_kerning_default,
&delta);
x += delta.x >> 6;
}
Add:
if (last_index && index) {
x += tracking;
}
Try it with a plain integer for tracking (probably quite large judging by that '>>6') first; compile and see if it works. The next step would be to get the tracking value into the C function from Python, for which you would have to change the ParseTuple call in font_render to:
long tracking;
if (!PyArg_ParseTuple(args, "Ol|il:render", &string, &id, &mask, &tracking))
return NULL;
And in font_getsize:
long tracking;
if (!PyArg_ParseTuple(args, "O|l:getsize", &string, &tracking))
return NULL;
Then look at what Python interface you want. This is a trivial but quite tedious case of adding the extra 'tracking' argument through each level of the interface, for example:
def truetype(filename, size, index=0, encoding="", tracking= 0): # added optional tracking
"Load a truetype font file."
try:
return FreeTypeFont(filename, size, index, encoding, tracking) # added tracking
...
class FreeTypeFont:
"FreeType font wrapper (requires _imagingft service)"
def __init__(self, file, size, index=0, encoding="", tracking= 0): # added tracking
import _imagingft
self.font = _imagingft.getfont(file, size, index, encoding)
self.tracking= tracking # add this line
...
def getmask2(self, text, mode="", fill=Image.core.fill):
size, offset = self.font.getsize(text, self.tracking) # use tracking
im = fill("L", size, 0)
self.font.render(text, im.id, mode=="1", self.tracking) # use tracking
return im, offset
I haven't tested any of this! If it works, might be worth submitting as a patch.
A:
Here's the SVG + ImageMagick solution:
Programmatically create SVG documents based on this template, replacing "TEXT HERE" with the desired text content:
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE svg PUBLIC
"-//W3C//DTD SVG 1.0//EN"
"http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd">
<svg version="1.0" width="152px" height="50px">
<text style="font-size: 22px; font-weight:bold; font-family: Verdana-Bold;
letter-spacing: -1.3%;">
<tspan x="10" y="39">TEXT HERE</tspan>
</text>
</svg>
Convert the documents to background-transparent PNGs with ImageMagick's convert:
$ convert -background none input.svg output.png
A:
From a quick glance, Pango has support for letter spacing. Pango has Python bindings and is integrated with Cairo.
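For illustration, a rough, untested PyGTK-era sketch of that route (module and markup attribute names as I recall the PyGTK bindings; Pango letter_spacing is in 1024ths of a point, so the value here is an assumption):

import cairo
import pango
import pangocairo

surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 152, 50)
context = pangocairo.CairoContext(cairo.Context(surface))
layout = context.create_layout()
# negative letter_spacing tightens the text
layout.set_markup('<span font_desc="Verdana Bold 22" letter_spacing="-1300">TEXT HERE</span>')
context.move_to(10, 10)
context.show_layout(layout)
surface.write_to_png('output.png')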
| Tools for creating text as bitmaps (anti-aliased text, custom spacing, transparent background) | I need to batch create images with text. Requirements:
arbitrary size of bitmap
PNG format
transparent background
black text anti-aliased against transparency
adjustable character spacing
adjustable text position (x and y coordinates where text begins)
TrueType and/or Type1 support
Unix command line tool or Python library
So far I've evaluated the following:
Python Imaging Library: fails 5.
ImageMagick ("caption" option): hard to figure out 6.
PyCairo: fails 5.
SVG + ImageMagick convert: most promising, although requires multiple tools
The problem with PIL is that e.g. the default spacing for Verdana is way too sparse. I need the text to be a bit tighter, but there's no way to adjust it in PIL.
In ImageMagick I haven't found an easy way to specify where in the image the text begins (I'm using -size WIDTHxHEIGHT and caption:'TEXT'). Adding a transparent border will move the text away from the corner it's anchored to, but
image size needs to be adjusted accordingly since border adds to the extents
it's not possible to adjust horizontal and vertical offset independently
Have I missed some obvious alternatives or failed to find necessary features from the above mentioned?
| [
"(5) indeed looks tricky, short of inserting dummy narrow-spaces into the string (which will break kerning) or using something much higher-level like the SVG or HTML/CSS renderer.\nHowever, if you don't mind getting your hands dirty, it looks quite easy to hack PIL's freetype renderer into adding horizontal space. See _imagingft.c; after the following code in both font_getsize and font_render:\nif (kerning && last_index && index) {\n FT_Vector delta;\n FT_Get_Kerning(self->face, last_index, index, ft_kerning_default,\n &delta);\n x += delta.x >> 6;\n}\n\nAdd:\nif (last_index && index) {\n x += tracking;\n}\n\nTry it with a plain integer for tracking (probably quite large judging by that '>>6') first; compile and see if it works. The next step would be to get the tracking value into the C function from Python, for which you would have to change the ParseTuple call in font_render to:\nlong tracking;\nif (!PyArg_ParseTuple(args, \"Ol|il:render\", &string, &id, &mask, &tracking))\n return NULL;\n\nAnd in font_getsize:\nlong tracking;\nif (!PyArg_ParseTuple(args, \"O|l:getsize\", &string, &tracking))\n return NULL;\n\nThen look at what Python interface you want. This is a trivial but quite tedious case of adding the extra 'tracking' argument through each level of the interface, for example:\ndef truetype(filename, size, index=0, encoding=\"\", tracking= 0): # added optional tracking\n \"Load a truetype font file.\"\n try:\n return FreeTypeFont(filename, size, index, encoding, tracking) # added tracking\n ...\n\nclass FreeTypeFont:\n \"FreeType font wrapper (requires _imagingft service)\"\n\n def __init__(self, file, size, index=0, encoding=\"\", tracking= 0): # added tracking\n import _imagingft\n self.font = _imagingft.getfont(file, size, index, encoding)\n self.tracking= tracking # add this line\n\n ...\n\n def getmask2(self, text, mode=\"\", fill=Image.core.fill):\n size, offset = self.font.getsize(text, self.tracking) # use tracking\n im = fill(\"L\", size, 0)\n self.font.render(text, im.id, mode==\"1\", self.tracking) # use tracking\n return im, offset\n\nI haven't tested any of this! If it works, might be worth submitting as a patch.\n",
"Here's the SVG + ImageMagick solution:\nProgrammatically create SVG documents based on this template, replacing \"TEXT HERE\" with the desired text content:\n<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n<!DOCTYPE svg PUBLIC\n \"-//W3C//DTD SVG 1.0//EN\"\n \"http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd\">\n<svg version=\"1.0\" width=\"152px\" height=\"50px\">\n <text style=\"font-size: 22px; font-weight:bold; font-family: Verdana-Bold;\n letter-spacing: -1.3%;\">\n <tspan x=\"10\" y=\"39\">TEXT HERE</tspan>\n </text>\n</svg>\n\nConvert the documents to background-transparent PNGs with ImageMagick's convert:\n$ convert -background none input.svg output.png\n\n",
"From a quick glance, Pango has support for letter spacing. Pango has Python bindings and is integrated with Cairo.\n"
] | [
4,
3,
2
] | [] | [] | [
"bitmap",
"fonts",
"python",
"spacing",
"unix"
] | stackoverflow_0000465144_bitmap_fonts_python_spacing_unix.txt |
Q:
Writing unit tests in Django / Python
I've not used Unit Tests before other than a quick introduction in a Uni course. I'm currently writing an application though and would like to teach myself TDD in the process. The problem is, I've no idea what to test or really how.
I'm writing a Django application, and so far have only created the models (and customised the admin application). This is how I've written the skeletons of my tests so far:
class ModelTests(TestCase):
fixtures = ['initial_data.json',]
def setUp(self):
pass
def testSSA(self):
ssa = SSA.objects.create(name="sdfsdf", cost_center=1111, street_num=8,
street_name="dfsdfsf Street", suburb="sdfsdfsdf",
post_code=3333)
def testResident(self):
pass
def testSSA_Client(self):
pass
I planned to write a function to test each model within the ModelTests class. Is this a good way of writing tests? Also, what exactly should I be testing for? That creating a model with all of the fields completed works? That a half-complete model fails? That any special cases are tested (like a null and is_required=False)? I trust the ORM, which as far as I'm aware is heavily tested, so I shouldn't need to test all of the methods, should I?
What do I need to test for a web application written in Django/Python? Some examples would be nice.
A:
Is a function to test each model within the ModelTests class a good way of writing tests?
No.
What exactly should I be testing for?
That creating a model with all of the fields completed works?
That a half-complete model fails?
That any special cases are tested (like a null and is_required=False)?
I trust the ORM, which as far as I'm aware is heavily tested, so I shouldn't need to test all of the methods, should I?
Not much of that.
You might test validation rules, but that isn't meaningful until you've defined some Form objects. Then you have something to test -- does the form enforce all the rules. You'll need at least one TestCase class for each form. A function will be a scenario -- different combinations of inputs that are allowed or not allowed.
For each Model class, you'll need at least one TestCase class definition. TestCases are cheap, define lots of them.
Your model embodies your "business entity" definitions. Your models will have methods that implement business rules. Your methods will do things like summarize, filter, calculate, aggregate, reduce, all kinds of things. You'll have functions for each of these features of a model class.
You're not testing Django. You're testing how your business rules actually work in Django.
Later, when you have more stuff in your application (forms, views, urls, etc.) you'll want to use the Django unittest client to exercise each method for each url. Again, one TestCase per url.
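For illustration, a minimal form-validation TestCase (the form class and field names are assumptions based on the SSA model in the question):

from django.test import TestCase
from myapp.forms import SSAForm  # hypothetical ModelForm for SSA

class SSAFormTests(TestCase):
    def test_rejects_missing_name(self):
        form = SSAForm(data={'cost_center': 1111, 'post_code': 3333})
        self.assertFalse(form.is_valid())
        self.assertTrue('name' in form.errors)

    def test_accepts_complete_data(self):
        form = SSAForm(data={'name': 'x', 'cost_center': 1111,
                             'street_num': 8, 'street_name': 'y',
                             'suburb': 'z', 'post_code': 3333})
        self.assertTrue(form.is_valid())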
A:
I'm not exactly sure of the specifics of what you're trying to test here, I'd need more code snippets for this, but I can give you some general advice.
First, read the unit testing chapter of "Dive into Python" (it's free online! http://diveintopython3.ep.io/unit-testing.html), it's a great explanation of unit testing in general, what you need to do, and why.
Second, with regards to TDD, it is a valuable practice, but be careful about growing too dependent on it, as I've found it can lead to over-specifying software and, further on, to having software that cannot be re-developed and adapted to new tasks. This is just my experience, mind. Also, provided you don't use it dogmatically, TDD is valuable.
Third, it strikes me that the best piece of advice for your specific situation is to strive to test your logic, but not the logic of frameworks that you depend on. That means that testing that half-complete models fail, etc., may often not be appropriate, since that is not your logic, but Django's, and so should already be tested. More valuable would be to test a few expected cases (instantiations that you expect, exceptions that you expect, etc.) to make sure your model specification is sound, and then move on to the more substantial logic of your application.
A:
Presumably you've already read Testing Django Applications.
Start testing the normal use cases of your application: creating a new user, adding a blog entry, etc. Just your typical CRUD operations first, then move out to the edge cases. Basically you're building confidence in your application that anything you later change will not break how you expect the application to behave.
Simulate GET/POST requests on your URLs and observe the responses (headers, status codes and content). Did your application render the correct view? Using the correct template? In the sections where your application throws exceptions, attempt to trigger them (e.g. view/edit a non-existent record to raise ObjectDoesNotExist).
It's usually worth putting in a tracking system (e.g. Trac), so you can add a new test for every logged defect.
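For illustration, a minimal test-client sketch of the GET checks above (the URLs are assumptions):

from django.test import TestCase

class ViewTests(TestCase):
    def test_index(self):
        response = self.client.get('/')
        self.assertEqual(response.status_code, 200)

    def test_missing_record_404s(self):
        response = self.client.get('/ssa/9999/')   # hypothetical detail URL
        self.assertEqual(response.status_code, 404)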
| Writing unit tests in Django / Python | I've not used Unit Tests before other than a quick introduction in a Uni course. I'm currently writing an application though and would like to teach myself TDD in the process. The problem is, I've no idea what to test or really how.
I'm writing a Django application, and so far have only created the models (and customised the admin application). This is how I've written the skeletons of my tests so far:
class ModelTests(TestCase):
fixtures = ['initial_data.json',]
def setUp(self):
pass
def testSSA(self):
ssa = SSA.objects.create(name="sdfsdf", cost_center=1111, street_num=8,
street_name="dfsdfsf Street", suburb="sdfsdfsdf",
post_code=3333)
def testResident(self):
pass
def testSSA_Client(self):
pass
I planned to write a function to test each model within the ModelTests class. Is this a good way of writing tests? Also, what exactly should I be testing for? That creating a model with all of the fields completed works? That a half-complete model fails? That any special cases are tested (like a null and is_required=False)? I trust the ORM, which as far as I'm aware is heavily tested, so I shouldn't need to test all of the methods, should I?
What do I need to test for a web application written in Django/Python? Some examples would be nice.
| [
"Is a function to test each model within the ModelTests class a good way of writing tests?\nNo.\nWhat exactly should I be testing for?\n\nThat creating a model with all of the fields completed works? \nThat a half complete model fails? \nThat any special cases are tested (like a null and is_required=False)? \nI've trust in the ORM, which as far as I'm aware is heavily tested, so I shouldn't need to test all of the methods should I?\n\nNot much of that.\nYou might test validation rules, but that isn't meaningful until you've defined some Form objects. Then you have something to test -- does the form enforce all the rules. You'll need at least one TestCase class for each form. A function will be a scenario -- different combinations of inputs that are allowed or not allowed.\nFor each Model class, you'll need at least one TestCase class definition. TestCases are cheap, define lots of them. \nYour model embodies your \"business entity\" definitions. Your models will have methods that implement business rules. Your methods will do things like summarize, filter, calculate, aggregate, reduce, all kinds of things. You'll have functions for each of these features of a model class.\nYou're not testing Django. You're testing how your business rules actually work in Django.\nLater, when you have more stuff in your application (forms, views, urls, etc.) you'll want to use the Django unittest client to exercise each method for each url. Again, one TestCase per \n",
"I'm not exactly sure of the specifics of what you're trying to test here, I'd need more code snippets for this, but I can give you some general advice.\nFirst, read the unit testing chapter of \"Dive into Python\" (it's free online! http://diveintopython3.ep.io/unit-testing.html), it's a great explanation of unit testing in general, what you need to do, and why.\nSecond, with regards to TDD, it is a valuable practice, but be careful about growing too dependent on it as I've found it can lead to over-specifying software and further on to having software that can not be re-developed and adapted to new tasks. This is just my experience, mind. Also, providing you don't use it dogmatically TDD is valuable.\nThird, it strikes me that the best piece of advice for your specific situation is to strive to test your logic, but not the logic of frameworks that you depend on. That means that often testing half-complete models fail etc. etc. may not be appropriate, since that is not your logic, but django's, and so should already be tested. More valuable would be to test a few expected cases, instantiations that you expect, exceptions that you expect etc. to make sure your model specification is sound, and then move on to the more substantial logic of your application.\n",
"Presumably you've already read Testing Django Applications.\nStart testing the normal use cases of your application, creating a new user, adding a blog entry, etc. Just your typical CRUD operations first, then move out to the edge cases. Basically your building confidence in your application that anything you later change will not break how I expect the application to behave.\nSimulate GET/POST requests on your URLs and observe the responses (headers, status codes and content). Did your application render the correct view? Using the correct template? In the sections where your application throws exceptions, attempt to trigger them (e.g. view/edit a non-existent record to raise ObjectDoesNotExist).\nIt's usually worth putting in a tracking system (e.g. Trac), so you can add a new test for every logged defect.\n"
] | [
37,
10,
4
] | [] | [] | [
"django",
"python",
"unit_testing"
] | stackoverflow_0000465065_django_python_unit_testing.txt |
Q:
Delphi-like GUI designer for Python
Is there any GUI toolkit for Python with a form designer similar to Delphi's, e.g. where one can drag and drop controls onto a form, move them around, etc.?
A:
I recommend PyQt (now from Nokia), which uses Qt Designer. Qt designer produces XML files (.ui) which you can either convert to Python modules using a utility called pyuic, or load dynamically from your Python program.
You do have to write your Python code in a different editor, i.e. Designer is only the GUI designer part and not a complete IDE. They have an IDE in beta called Qt Creator, but I don't think it supports Python very well at this stage.
If you'd rather go with wxPython, wxGlade will output Python code.
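For illustration, the dynamic .ui loading route mentioned above (PyQt4; the filename is an assumption):

import sys
from PyQt4 import QtGui, uic

app = QtGui.QApplication(sys.argv)
window = uic.loadUi('mainwindow.ui')   # the file Qt Designer saved
window.show()
sys.exit(app.exec_())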
A:
Use Glade + PyGTK to do GUI programming in Python. Glade is a tool which allows you to create graphical interfaces by dragging and dropping widgets. In turn Glade generates the interface definition in XML which you can hook up with your code using libglade. Check the website of Glade for more info.
A:
If you're using wxPython, check out BoaConstructor; it is a complete Python IDE with a GUI designer.
| Delphi-like GUI designer for Python | Is there any GUI toolkit for Python with a form designer similar to Delphi's, e.g. where one can drag and drop controls onto a form, move them around, etc.?
| [
"I recommend PyQt (now from Nokia), which uses Qt Designer. Qt designer produces XML files (.ui) which you can either convert to Python modules using a utility called pyuic, or load dynamically from your Python program.\nYou do have to write your Python code in a different editor, i.e. Designer is only the GUI designer part and not a complete IDE. They have an IDE in beta called Qt Creator, but I don't think it supports Python very well at this stage.\nIf you'd rather go with wxPython, wxGlade will output Python code.\n",
"Use Glade + PyGTk to do GUI programming in Python. Glade is a tool which allows you to create graphical interfaces by dragging and dropping widgets. In turn Glade generates the interface definition in XML which you can hook up with your code using libglade. Check the website of Glade for more info. \n",
"If your using wxPython check out BoaConstructor, it is a complete Python IDE with a GUI designer.\n"
] | [
7,
2,
2
] | [] | [] | [
"form_designer",
"python",
"user_interface"
] | stackoverflow_0000465814_form_designer_python_user_interface.txt |
Q:
Possible to integrate Google AppEngine and Google Code for continuous integration?
Anyone have any thoughts on how/if it is possible to integrate Google Code commits to cause a Google AppEngine deployment of the most recent code?
I have a simple Google AppEngine project's source hosted on Google Code and would love it if, every time I committed to Subversion, AppEngine would reflect the latest commit. I don't mind if things are broken on the live site since the project is for personal use mainly and for learning.
Anyone have any thoughts on how to tie into the Subversion commit for the Code repository and/or how to kick off the deployment to AppEngine? Ideally the solution would not require anything manual from me nor any type of server/listener software on my machine.
A:
Made By Sofa had a blog post about their workflow with Google App Engine. In the second-last paragraph they have attached a subversion hook so that when someone commits code it will automatically deploy to Google App Engine. It would take a little bit of tweaking (because it works on the server side, not the client) but you could do the same.
A:
Google Code Project Hosting now supports Post-Commit Web Hooks, which ping a project-owner-specified URL after every commit. This would eliminate the need to regularly poll your Google Code repository.
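For illustration, a rough App Engine handler for such a hook (the URL and wiring are assumptions, and the payload's HMAC header still needs verifying per the Google Code docs; note a GAE app cannot redeploy itself, so the hook can only record or relay the event):

from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class PostCommitHook(webapp.RequestHandler):
    def post(self):
        payload = self.request.body   # JSON describing the commit
        # verify the HMAC header, then record/relay the commit event here
        self.response.out.write('ok')

application = webapp.WSGIApplication([('/postcommit', PostCommitHook)])
run_wsgi_app(application)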
A:
You'd probably have to have some glue on another computer which monitored SVN commits and deployed a new version for you. Google Code has yet to develop and release an API (which they need to do soon if they're serious about this whole development thing), but GAE can be deployed to with relative automated ease, so I wouldn't have thought it should be that difficult. The deployment process, however, will vary with each project, so that's something you need to sort out yourself (you might wanna take a look at the fabric deployment system). Then, just set a cron job going which updates a local SVN checkout on the middle machine, and you're done.
A:
Very interesting, but not yet possible, AFAIK. I have been looking for that option in Google Code with no success.
The only solution I can figure out is to install something in your machine that checks for changes in your SVN repository.
I'll be happy to hear about other approaches.
A:
For those of us who are using Github, this feature from the GAE team would make us all seriously consider switching to Google Code...
| Possible to integrate Google AppEngine and Google Code for continuous integration? | Anyone have any thoughts on how/if it is possible to integrate Google Code commits to cause a Google AppEngine deployment of the most recent code?
I have a simple Google AppEngine project's source hosted on Google Code and would love it if, every time I committed to Subversion, AppEngine would reflect the latest commit. I don't mind if things are broken on the live site since the project is for personal use mainly and for learning.
Anyone have any thoughts on how to tie into the Subversion commit for the Code repository and/or how to kick off the deployment to AppEngine? Ideally the solution would not require anything manual from me nor any type of server/listener software on my machine.
| [
"Made By Sofa had a blog post about their workflow with Google App Engine. In the second last paragraph they have attached a subversion hook that when when someone commits code it will automatically deploy to Google App Engine. It would take a little bit of tweaking (because it works on the server side not the client) but you could do the same.\n",
"Google Code Project Hosting now supports Post-Commit Web Hooks, which ping a project-owner-specified URL after every commit. This would eliminate the need to regularly poll your Google Code repository.\n",
"You'd probably have to have some glue on another computer which monitored SVN commits and deployed a new version for you. Google Code has yet to develop and release an API (which they need to do soon if they're serious about this whole development thing), but GAE can be deployed to with relative automated ease, so I wouldn't have thought it should be that difficult. The deployment process, however, will vary with each project, so that's something you need to sort out yourself (you might wanna take a look at the fabric deployment system). Then, just set a cron job going which updates a local SVN checkout on the middle machine, and you're done.\n",
"Very interesting, but not yet possible, AFAIK. I have been looking for that option in Google Code with no success.\nThe only solution I can figure out is to install something in your machine that checks for changes in your SVN repository.\nI'll be happy to hear about other approaches.\n",
"For those of us who are using Github, this feature from the GAE team would make us all seriously consider switching to Google Code...\n"
] | [
5,
5,
2,
1,
1
] | [] | [] | [
"continuous_integration",
"google_app_engine",
"google_code",
"python",
"svn"
] | stackoverflow_0000241007_continuous_integration_google_app_engine_google_code_python_svn.txt |
Q:
How can I hide the console window in a PyQt app running on Windows?
Surely this is possible? I have been hunting through PyQt tutorials and documentation but cannot find the answer to it. Probably I just need to phrase my search query differently.
[Edit]
Thanks PEZ for the answer - more details including use of the .pyw extension in Python Programming on Win32 chapter 20
A:
I think you should be able to run your app with pythonw.exe.
A:
An easy way to do this is to give your script a .pyw extension instead of the usual .py.
This has the same effect as PEZ's answer (runs the script using pythonw.exe).
| How can I hide the console window in a PyQt app running on Windows? | Surely this is possible? I have been hunting through PyQt tutorials and documentation but cannot find the answer to it. Probably I just need to phrase my search query differently.
[Edit]
Thanks PEZ for the answer - more details including use of the .pyw extension in Python Programming on Win32 chapter 20
| [
"I think you should be able to run your app with pythonw.exe.\n",
"An easy way to do this is to give your script a .pyw extension instead of the usual .py.\nThis has the same effect as PEZ's answer (runs the script using pythonw.exe).\n"
] | [
26,
17
] | [] | [] | [
"command_line",
"pyqt",
"python",
"windows"
] | stackoverflow_0000466203_command_line_pyqt_python_windows.txt |
Q:
How can I set it up so that threads communicate they're complete with their task?
Conceptually, I would like to accomplish the following but have had trouble understand how to code it properly in Python:
from threading import Thread
for i in range(0,3):
t = Thread(target=myfunction)
t.start()
# wait until threads have finished executing
print 'complete!'
A:
add the threads to a list and join() them.
from threading import Thread
tlist = []
for i in range(3):
t = Thread(target=some_function)
t.start()
tlist.append(t)
# wait until threads have finished executing
for t in tlist:
t.join()
print 'complete!'
| How can I set it up so that threads communicate they're complete with their task? | Conceptually, I would like to accomplish the following but have had trouble understanding how to code it properly in Python:
from threading import Thread
for i in range(0,3):
t = Thread(target=myfunction)
t.start()
# wait until threads have finished executing
print 'complete!'
| [
"add the threads to a list and join() them.\nfrom threading import Thread\ntlist = []\nfor i in range(3):\n t = Thread(target=some_function)\n t.start()\n tlist.append(t)\n\n# wait until threads have finished executing\nfor t in tlist:\n t.join()\n\nprint 'complete!'\n\n"
] | [
6
] | [
"I have never used python, but I think the concept you are looking for is a \"semaphore\".\nGoogle turned up this:\nhttp://www.python.org/doc/2.5.2/lib/semaphore-objects.html\n"
] | [
-4
] | [
"multithreading",
"python"
] | stackoverflow_0000466525_multithreading_python.txt |
Q:
How to prevent overwriting an object someone else has modified
I would like to find a generic way of preventing to save an object if it is saved after I checked it out.
We can assume the object has a timestamp field that contains last modification time. If I had checked out (visited a view using a ModelForm for instance) at t1 and the object is saved again at t2, given t2 > t1 I shouldn't be able to save it.
A:
Overwrite the save method that would first check the last timestamp:
def save(self):
if(self.id):
foo = Foo.objects.get(pk=self.id)
if(foo.timestamp > self.timestamp):
raise Exception, "trying to save outdated Foo"
super(Foo, self).save()
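For illustration, a minimal sketch of what the caller then sees (names follow the answer above; note the check-then-save is not atomic, so a transaction is still needed to fully close the race):

foo = Foo.objects.get(pk=1)      # checked out at t1
# ... meanwhile another user saves the same Foo at t2 > t1 ...
foo.name = 'edited offline'
try:
    foo.save()                   # the overridden save() compares timestamps
except Exception:
    print 'edit conflict: reload the object and retry'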
| How to prevent overwriting an object someone else has modified | I would like to find a generic way of preventing an object from being saved if it has been saved by someone else after I checked it out.
We can assume the object has a timestamp field that contains last modification time. If I had checked out (visited a view using a ModelForm for instance) at t1 and the object is saved again at t2, given t2 > t1 I shouldn't be able to save it.
| [
"Overwrite the save method that would first check the last timestamp:\ndef save(self):\n if(self.id):\n foo = Foo.objects.get(pk=self.id)\n if(foo.timestamp > self.timestamp):\n raise Exception, \"trying to save outdated Foo\" \n super(Foo, self).save()\n\n"
] | [
3
] | [] | [] | [
"blocking",
"django",
"django_models",
"locking",
"python"
] | stackoverflow_0000467134_blocking_django_django_models_locking_python.txt |
Q:
python: list comprehension tactics
I'm looking to take a string and create a list of strings that build up the original string.
e.g.:
"asdf" => ["a", "as", "asd", "asdf"]
I'm sure there's a "pythonic" way to do it; I think I'm just losing my mind. What's the best way to get this done?
A:
One possibility:
>>> st = 'asdf'
>>> [st[:n+1] for n in range(len(st))]
['a', 'as', 'asd', 'asdf']
A:
If you're going to be looping over the elements of your "list", you may be better off using a generator rather than list comprehension:
>>> text = "I'm a little teapot."
>>> textgen = (text[:i + 1] for i in xrange(len(text)))
>>> textgen
<generator object <genexpr> at 0x0119BDA0>
>>> for item in textgen:
... if re.search("t$", item):
... print item
I'm a lit
I'm a litt
I'm a little t
I'm a little teapot
>>>
This code never creates a list object, nor does it ever (delta garbage collection) create more than one extra string (in addition to text).
| python: list comprehension tactics | I'm looking to take a string and create a list of strings that build up the original string.
e.g.:
"asdf" => ["a", "as", "asd", "asdf"]
I'm sure there's a "pythonic" way to do it; I think I'm just losing my mind. What's the best way to get this done?
| [
"One possibility:\n>>> st = 'asdf'\n>>> [st[:n+1] for n in range(len(st))]\n['a', 'as', 'asd', 'asdf']\n\n",
"If you're going to be looping over the elements of your \"list\", you may be better off using a generator rather than list comprehension:\n>>> text = \"I'm a little teapot.\"\n>>> textgen = (text[:i + 1] for i in xrange(len(text)))\n>>> textgen\n<generator object <genexpr> at 0x0119BDA0>\n>>> for item in textgen:\n... if re.search(\"t$\", item):\n... print item\nI'm a lit\nI'm a litt\nI'm a little t\nI'm a little teapot\n>>>\n\nThis code never creates a list object, nor does it ever (delta garbage collection) create more than one extra string (in addition to text).\n"
] | [
19,
17
] | [] | [] | [
"list_comprehension",
"python"
] | stackoverflow_0000467094_list_comprehension_python.txt |
Q:
WX Python and Raw Input on Windows (WM_INPUT)
Does anyone know how to use the Raw Input facility on Windows from a WX Python application?
What I need to do is be able to differentiate the input from multiple keyboards. So if there is another way of achieving that, that would work too.
A:
Have you tried using ctypes?
>>> import ctypes
>>> ctypes.windll.user32.RegisterRawInputDevices
<_FuncPtr object at 0x01FCFDC8>
It would be a little work setting up the Python version of the necessary structures, but you may be able to query the Win32 API directly this way without going through wxPython.
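For illustration, a rough, untested ctypes sketch of the registration step (constants are from winuser.h; the window-handle line assumes a wx frame):

import ctypes
from ctypes import wintypes

class RAWINPUTDEVICE(ctypes.Structure):
    _fields_ = [('usUsagePage', ctypes.c_ushort),
                ('usUsage', ctypes.c_ushort),
                ('dwFlags', ctypes.c_ulong),
                ('hwndTarget', wintypes.HWND)]

RIDEV_INPUTSINK = 0x00000100          # receive input even when unfocused

hwnd = frame.GetHandle()              # HWND of your wx window (hypothetical frame)
dev = RAWINPUTDEVICE(0x01, 0x06, RIDEV_INPUTSINK, hwnd)  # generic desktop / keyboard
if not ctypes.windll.user32.RegisterRawInputDevices(
        ctypes.byref(dev), 1, ctypes.sizeof(RAWINPUTDEVICE)):
    raise ctypes.WinError()

# WM_INPUT messages now arrive at the window; each carries a device
# handle that lets you tell the keyboards apart.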
A:
Theres a nice looking library here
http://code.google.com/p/pymultimouse/
It's not wx-python specific - but it does use raw input in python with ctypes (and worked in my test with 2 mice)
| WX Python and Raw Input on Windows (WM_INPUT) | Does anyone know how to use the Raw Input facility on Windows from a WX Python application?
What I need to do is be able to differentiate the input from multiple keyboards. So if there is another way of achieving that, that would work too.
| [
"Have you tried using ctypes?\n>>> import ctypes\n>>> ctypes.windll.user32.RegisterRawInputDevices\n<_FuncPtr object at 0x01FCFDC8>\n\nIt would be a little work setting up the Python version of the necessary structures, but you may be able to query the Win32 API directly this way without going through wxPython.\n",
"Theres a nice looking library here\nhttp://code.google.com/p/pymultimouse/\nIt's not wx-python specific - but it does use raw input in python with ctypes (and worked in my test with 2 mice)\n"
] | [
4,
3
] | [] | [] | [
"python",
"raw_input",
"windows",
"wxpython"
] | stackoverflow_0000285869_python_raw_input_windows_wxpython.txt |
Q:
Multiple mouse pointers?
Is there a way to accept input from more than one mouse separately? I'm interested in making a multi-user application and I thought it would be great if I could have 2 or more users holding wireless mice each interacting with the app individually with a separate mouse arrow.
Is this something I should try to farm out to some other application/driver/os_magic? or is there a library I can use to accomplish this? Language isn't a HUGE deal, but C, C++, and Python are preferrable.
Thanks :)
edit:
Found this multi-pointer toolkit for linux (it's actually a multi-pointer x server):
http://wearables.unisa.edu.au/mpx/
A:
You could try the Microsoft Windows MultiPoint Software Development Kit 1.1
or the new
Microsoft Windows MultiPoint Software Development Kit 1.5
and the main Microsoft Multipoint site
A:
Yes. I know of at least one program that does this, KidPad. I think it's written in Java and was developed by Juan Pablo Hourcade, now at the University of Iowa. You'd have to ask him how it was implemented.
A:
http://code.google.com/p/pymultimouse/ is a library using Windows raw input; it worked in a test with 2 mice.
A:
You could use DirectInput with C/C++ (there's probably also bindings in other languages). You use IDirectInput8::EnumDevices() (using DX8; same function, different interface in other versions of DirectX) to get a list of all attached devices. Then, you create the devices and poll them IDirectInputDevice8::Poll(). This should almost definitely work with any number of mice, keyboards, and other input devices. MSDN has really good documentation on this.
A:
I have this vague feeling that BeOS used to let one pair a mouse and keyboard and have separate active windows and inputs. Wow... that was a long time ago. I thought that it would be very interesting for "paired" programming.
A:
See my answer here (avoid the JNI stuff): How can I handle multiple mouse inputs in Java?
| Multiple mouse pointers? | Is there a way to accept input from more than one mouse separately? I'm interested in making a multi-user application and I thought it would be great if I could have 2 or more users holding wireless mice each interacting with the app individually with a separate mouse arrow.
Is this something I should try to farm out to some other application/driver/os_magic? or is there a library I can use to accomplish this? Language isn't a HUGE deal, but C, C++, and Python are preferrable.
Thanks :)
edit:
Found this multi-pointer toolkit for linux (it's actually a multi-pointer x server):
http://wearables.unisa.edu.au/mpx/
| [
"You could try the Microsoft Windows MultiPoint Software Development Kit 1.1\nor the new\nMicrosoft Windows MultiPoint Software Development Kit 1.5\nand the main Microsoft Multipoint site\n",
"Yes. I know of at least one program that does this, KidPad. I think it's written in Java and was developed by Juan Pablo Hourcade, now at the University of Iowa. You'd have to ask him how it was implemented.\n",
"http://code.google.com/p/pymultimouse/ is a library using windows raw input, it worked in a test with 2 mice. \n",
"You could use DirectInput with C/C++ (there's probably also bindings in other languages). You use IDirectInput8::EnumDevices() (using DX8; same function, different interface in other versions of DirectX) to get a list of all attached devices. Then, you create the devices and poll them IDirectInputDevice8::Poll(). This should almost definitely work with any number of mice, keyboards, and other input devices. MSDN has really good documentation on this.\n",
"I have this vague feeling that BeOS used to let one pair a mouse and keyboard and have separate active windows and inputs. Wow... that was a long time ago. I thought that it would be very interesting for \"paired\" programming.\n",
"See my answer here (avoid the JNI stuff): How can I handle multiple mouse inputs in Java?\n"
] | [
8,
5,
2,
1,
1,
1
] | [] | [] | [
"mouse",
"multi_user",
"python",
"user_interface"
] | stackoverflow_0000237155_mouse_multi_user_python_user_interface.txt |
Q:
How to integrate the StringTemplate engine into the CherryPy web server
I love the StringTemplate engine, and I love the CherryPy web server, and I know that they can be integrated.
Who has done it? How?
EDIT: The TurboGears framework takes the CherryPy web server and bundles other related components such as a template engine, data access tools, JavaScript kit, etc. I am interested in MochiKit, demand CherryPy, but I don't want any other template engine than StringTemplate (architecture is critical--I don't want another broken/bad template engine).
Therefore, it would be acceptable to answer this question by addressing how to integrate StringTemplate with TurboGears.
It may also be acceptable to answer this question by addressing how to use CherryPy and StringTemplate in the Google App Engine.
Thanks.
A:
Based on the tutorials for both, it looks pretty straightforward:
import stringtemplate
import cherrypy
class HelloWorld(object):
def index(self):
hello = stringtemplate.StringTemplate("Hello, $name$")
hello["name"] = "World"
return str(hello)
index.exposed = True
cherrypy.quickstart(HelloWorld())
You'll probably want to have the CherryPy functions find the StringTemplate files in some location on disk instead, but the general idea will be like this.
Django is conceptually similar: url's are mapped to python functions, and the python functions generally build up a context dictionary, render a template with that context object, and return the result.
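For illustration, one simple way to load the templates from disk as suggested above (the path layout is an assumption; this only relies on the StringTemplate constructor already shown):

import stringtemplate

def render(name, **attrs):
    # load a .st file from disk and fill in its attributes
    text = open('templates/%s.st' % name).read()
    st = stringtemplate.StringTemplate(text)
    for key, value in attrs.items():
        st[key] = value
    return str(st)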
A:
Rob,
There's reason behind people's selection of tools. StringTemplate is not terribly popular for Python, there are templating engines that are much better supported and with a much wider audience. If you don't like Kid, there's also Django's templating, Jinja, Cheetah and others. Perhaps you can find in one of them the features you like so much in StringTemplate and live happily ever after.
| How to integrate the StringTemplate engine into the CherryPy web server | I love the StringTemplate engine, and I love the CherryPy web server, and I know that they can be integrated.
Who has done it? How?
EDIT: The TurboGears framework takes the CherryPy web server and bundles other related components such as a template engine, data access tools, JavaScript kit, etc. I am interested in MochiKit, demand CherryPy, but I don't want any other template engine than StringTemplate (architecture is critical--I don't want another broken/bad template engine).
Therefore, it would be acceptable to answer this question by addressing how to integrate StringTemplate with TurboGears.
It may also be acceptable to answer this question by addressing how to use CherryPy and StringTemplate in the Google App Engine.
Thanks.
| [
"Based on the tutorials for both, it looks pretty straightforward:\n\nimport stringtemplate\nimport cherrypy\n\nclass HelloWorld(object):\n def index(self):\n hello = stringtemplate.StringTemplate(\"Hello, $name$\")\n hello[\"name\"] = \"World\"\n return str(hello)\n index.exposed = True\n\ncherrypy.quickstart(HelloWorld())\n\nYou'll probably want to have the CherryPy functions find the StringTemplate's in some location on disk instead, but the general idea will be like this.\nDjango is conceptually similar: url's are mapped to python functions, and the python functions generally build up a context dictionary, render a template with that context object, and return the result.\n",
"Rob,\nThere's reason behind people's selection of tools. StringTemplate is not terribly popular for Python, there are templating engines that are much better supported and with a much wider audience. If you don't like Kid, there's also Django's templating, Jinja, Cheetah and others. Perhaps you can find in one of them the features you like so much in StringTemplate and live happily ever after.\n"
] | [
4,
0
] | [] | [] | [
"cherrypy",
"python",
"stringtemplate"
] | stackoverflow_0000379338_cherrypy_python_stringtemplate.txt |
Q:
Python piping on Windows: Why does this not work?
I'm trying something like this
Output.py
print "Hello"
Input.py
greeting = raw_input("Give me the greeting. ")
print "The greeting is:", greeting
At the cmd line
Output.py | Input.py
But it returns an EOFError. Can someone tell me what I am doing wrong?
Thanks for your help.
EDIT
Patrick Harrington's solution works, but I don't know why...
A:
I tested this on my Windows machine and it works if you specify the Python exe:
C:\>C:\Python25\python.exe output.py | C:\Python25\python.exe input.py
Give me the greeting. The greeting is: hello
But I get an EOFError also if running the commands directly as:
output.py | input.py
I'm not sure exactly why that is, I'm still looking into this one but at least this should provide you with a workaround for now. It may have something to do with the way the file handler is invoked for .py files.
UPDATE: well, what do you know. Looks like this is actually a bug in Windows where stdin/stdout redirection may not work properly when started from a file association. So the workaround is as noted by myself and Patrick, you need to specify "python" will be running input.py, otherwise it will not redirect stdout from output.py to the stdin for input.py correctly.
Reference:
http://mail.python.org/pipermail/python-bugs-list/2004-August/024923.html
http://support.microsoft.com/default.aspx?kbid=321788
UPDATE 2:
To change this behavior and make Windows pipes work as expected for stdin/stdout redirection, you can add this value to the registry (tested on my box and verified this works as desired).
Start Registry Editor.
Locate and then click the following key in the registry:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer
On the Edit menu, click Add Value, and then add the following
registry value:
Value name: InheritConsoleHandles
Data type: REG_DWORD
Radix: Decimal
Value data: 1
Quit Registry Editor.
A:
Change it to:
Output.py | python Input.py
The output will be:
Give me the greeting. The greeting is: hello
A:
Here's why you get the EOFError (from the documentation on raw_input):
The function then reads a line from
input, converts it to a string
(stripping a trailing newline), and
returns that. When EOF is read,
EOFError is raised.
http://docs.python.org/library/functions.html?highlight=raw_input#raw_input
You may want to use sys.stdin, it provides a file object from which you can use the read, readlines methods.
import sys
for greeting_line in sys.stdin.readlines():
print "The greeting is:", greeting_line.strip()
| Python piping on Windows: Why does this not work? | I'm trying something like this
Output.py
print "Hello"
Input.py
greeting = raw_input("Give me the greeting. ")
print "The greeting is:", greeting
At the cmd line
Output.py | Input.py
But it returns an EOFError. Can someone tell me what I am doing wrong?
Thanks for your help.
EDIT
Patrick Harrington's solution works, but I don't know why...
| [
"I tested this on my Windows machine and it works if you specify the Python exe: \nC:\\>C:\\Python25\\python.exe output.py | C:\\Python25\\python.exe input.py\nGive me the greeting. The greeting is: hello\n\nBut I get an EOFError also if running the commands directly as: \noutput.py | input.py \n\nI'm not sure exactly why that is, I'm still looking into this one but at least this should provide you with a workaround for now. It may have something to do with the way the file handler is invoked for .py files. \nUPDATE: well, what do you know. Looks like this is actually a bug in Windows where stdin/stdout redirection may not work properly when started from a file association. So the workaround is as noted by myself and Patrick, you need to specify \"python\" will be running input.py, otherwise it will not redirect stdout from output.py to the stdin for input.py correctly. \nReference:\nhttp://mail.python.org/pipermail/python-bugs-list/2004-August/024923.html \nhttp://support.microsoft.com/default.aspx?kbid=321788\nUPDATE 2: \nTo change this behavior and make Windows pipes work as expected for stdin/stdout redirection, you can add this value to the registry (tested on my box and verified this works as desired).\n\n\nStart Registry Editor.\nLocate and then click the following key in the registry:\nHKEY_LOCAL_MACHINE\\Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer\nOn the Edit menu, click Add Value, and then add the following\n registry value:\nValue name: InheritConsoleHandles\n Data type: REG_DWORD\n Radix: Decimal\n Value data: 1\nQuit Registry Editor.\n\n\n",
"Change it to:\nOutput.py | python Input.py\n\nThe output will be:\n\nGive me the greeting. The greeting is: hello\n\n",
"Here's why you get the EOFError (from the documentation on raw_input):\n\nThe function then reads a line from\n input, converts it to a string\n (stripping a trailing newline), and\n returns that. When EOF is read,\n EOFError is raised.\n\nhttp://docs.python.org/library/functions.html?highlight=raw_input#raw_input\nYou may want to use sys.stdin, it provides a file object from which you can use the read, readlines methods.\nimport sys\nfor greeting_line in sys.stdin.readlines():\n print \"The greeting is:\", greeting_line.strip()\n\n"
] | [
23,
4,
0
] | [] | [] | [
"piping",
"python",
"windows"
] | stackoverflow_0000466801_piping_python_windows.txt |
Q:
Is there a Perl equivalent of Python's re.findall/re.finditer (iterative regex results)?
In Python compiled regex patterns have a findall method that does the following:
Return all non-overlapping matches of
pattern in string, as a list of
strings. The string is scanned
left-to-right, and matches are
returned in the order found. If one or
more groups are present in the
pattern, return a list of groups; this
will be a list of tuples if the
pattern has more than one group. Empty
matches are included in the result
unless they touch the beginning of
another match.
What's the canonical way of doing this in Perl? A naive algorithm I can think of is along the lines of "while a search and replace with the empty string is successful, do [suite]". I'm hoping there's a nicer way. :-)
Thanks in advance!
A:
Use the /g modifier in your match. From the perlop manual:
The "/g" modifier specifies global pattern matching--that is, matching as many times as possible within the string. How it behaves depends on the context. In list context, it returns a list of the substrings matched by any capturing parentheses in the regular expression. If there are no parentheses, it returns a list of all the matched strings, as if there were parentheses around the whole pattern.
In scalar context, each execution of "m//g" finds the next match, returning true if it matches, and false if there is no further match. The position after the last match can be read or set using the pos() function; see "pos" in perlfunc. A failed match normally resets the search position to the beginning of the string, but you can avoid that by adding the "/c" modifier (e.g. "m//gc"). Modifying the target string also resets the search position.
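For comparison, the finditer side mentioned in the question title can be sketched in Python like this (the pattern and string are purely illustrative):
import re

# finditer is the lazy counterpart of findall: it yields match objects
# one at a time, much like Perl's scalar-context m//g loop
for m in re.finditer(r'([aeiou])([nrs])', 'I had a sandwich for lunch'):
    print m.groups()   # ('a', 'n'), then ('o', 'r'), then ('u', 'n')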
A:
To build on Chris' response, it's probably most relevant to encase the //g regex in a while loop, like:
my @matches;
while ( 'foobarbaz' =~ m/([aeiou])/g )
{
    push @matches, $1;
}
Pasting some quick Python I/O:
>>> import re
>>> re.findall(r'([aeiou])([nrs])','I had a sandwich for lunch')
[('a', 'n'), ('o', 'r'), ('u', 'n')]
To get something comparable in Perl, the construct could be something like:
my $matches = [];
while ( 'I had a sandwich for lunch' =~ m/([aeiou])([nrs])/g )
{
    push @$matches, [$1,$2];
}
But in general, whatever function you're iterating for, you can probably do within the while loop itself.
A:
Nice beginner reference with similar content to @kyle's answer: Perl Tutorial: Using regular expressions
| Is there a Perl equivalent of Python's re.findall/re.finditer (iterative regex results)? | In Python compiled regex patterns have a findall method that does the following:
Return all non-overlapping matches of
pattern in string, as a list of
strings. The string is scanned
left-to-right, and matches are
returned in the order found. If one or
more groups are present in the
pattern, return a list of groups; this
will be a list of tuples if the
pattern has more than one group. Empty
matches are included in the result
unless they touch the beginning of
another match.
What's the canonical way of doing this in Perl? A naive algorithm I can think of is along the lines of "while a search and replace with the empty string is successful, do [suite]". I'm hoping there's a nicer way. :-)
Thanks in advance!
| [
"Use the /g modifier in your match. From the perlop manual:\n\nThe \"/g\" modifier specifies global pattern matching--that is, matching as many times as possible within the string. How it behaves depends on the context. In list context, it returns a list of the substrings matched by any capturing parentheses in the regular expression. If there are no parentheses, it returns a list of all the matched strings, as if there were parentheses around the whole pattern.\nIn scalar context, each execution of \"m//g\" finds the next match, returning true if it matches, and false if there is no further match. The position after the last match can be read or set using the pos() function; see \"pos\" in perlfunc. A failed match normally resets the search position to the beginning of the string, but you can avoid that by adding the \"/c\" modifier (e.g. \"m//gc\"). Modifying the target string also resets the search position.\n\n",
"To build on Chris' response, it's probably most relevant to encase the //g regex in a while loop, like:\nmy @matches;\nwhile ( 'foobarbaz' =~ m/([aeiou])/g )\n{\n push @matches, $1;\n}\n\nPasting some quick Python I/O:\n>>> import re\n>>> re.findall(r'([aeiou])([nrs])','I had a sandwich for lunch')\n[('a', 'n'), ('o', 'r'), ('u', 'n')]\n\nTo get something comparable in Perl, the construct could be something like:\nmy $matches = [];\nwhile ( 'I had a sandwich for lunch' =~ m/([aeiou])([nrs])/g )\n{\n push @$matches, [$1,$2];\n}\n\nBut in general, whatever function you're iterating for, you can probably do within the while loop itself.\n",
"Nice beginner reference with similar content to @kyle's answer: Perl Tutorial: Using regular expressions\n"
] | [
13,
8,
2
] | [] | [] | [
"iterator",
"perl",
"python",
"regex"
] | stackoverflow_0000467800_iterator_perl_python_regex.txt |
Q:
Permutations in python, with a twist
I have a list of objects (for the sake of example, let's say 5). I want a list of some of the possible permutations. Specifically, given that some pairs are not together, and some triples don't make sandwiches, how can I generate all other permutations? I realize that I could generate all of them first and check that they work, but I think it would be faster to not even consider the pairs and triples that don't work.
Am I wrong that it would be faster to check first and generate later?
How would I do it?
A:
You would have to find an algorithm that cuts off more than one unwanted permutation after a single check, in order to gain anything. The obvious strategy is to build the permutations sequentially, for example, in a tree. Each cut then eliminates a whole branch.
edit:
Example: in the set (A B C D), let's say that B and C, and A and D are not allowed to be neighbours.
(A) (B) (C) (D)
/ | \ / | \ / | \ / | \
AB AC AD BA BC BD CA CB CD DA DB DC
| \ | \ X / \ X / \ / \ X / \ X / \ / \
ABC ABD ACB ACD BAC BAD BDA BDC CAB CAD CDA CDB DBA DBC DCA DCB
X | X | | X X | | X X | | X | X
ABDC ACDB BACD BDCA CABD CDBA DBAC DCAB
v v v v v v v v
Each of the strings without parentheses needs a check. As you see, the Xs (where subtrees have been cut off) save checks, one if they are in the third row, but four if they are in the second row. We saved 24 of 60 checks here and got down to 36. However, there are only 24 permutations overall anyway, so if checking the restrictions (as opposed to building the lists) is the bottleneck, we would have been better off to just construct all the permutations and check them at the end... IF the checks couldn't be optimized when we go this way.
Now, as you see, the checks only need to be performed on the new part of each list. This makes the checks much leaner; actually, we divide the check that would be needed for a full permutation into small chunks. In the above example, we only have to look whether the added letter is allowed to stand besides the last one, not all the letters before.
However, also if we first construct, then filter, the checks could be cut short as soon as a no-no is encountered. So, on checking, there is no real gain compared to the first-build-then-filter algorithm; there is rather the danger of further overhead through more function calls.
What we do save is the time to build the lists, and the peak memory consumption. Building a list is generally rather fast, but peak memory consumption might be a consideration if the number of objects gets larger. For the first-build-then-filter, both grow linearly with the number of objects. For the tree version, it grows slower, depending on the constraints. From a certain number of objects and rules on, there is also actual check saving.
In general, I think that you would need to try it out and time the two algorithms. If you really have only 5 objects, stick to the simple (filter rules (build-permutations set)). If your number of objects gets large, the tree algorithm will at some point perform noticably better (you know, big O).
Um. Sorry, I got into lecture mode; bear with me.
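For concreteness, a minimal Python sketch of the tree idea above (the allowed predicate is a hypothetical stand-in for whatever pair rules you have):
def constrained_perms(items, allowed):
    # Recursive tree build: extend a partial permutation one element
    # at a time, pruning the whole subtree as soon as a rule fails.
    def extend(partial, remaining):
        if not remaining:
            yield partial
            return
        for i, item in enumerate(remaining):
            if partial and not allowed(partial[-1], item):
                continue  # cut this branch, skipping all its leaves
            for perm in extend(partial + [item], remaining[:i] + remaining[i + 1:]):
                yield perm
    return extend([], list(items))

# B/C and A/D may not be neighbours, as in the example tree
bad = set([('B', 'C'), ('C', 'B'), ('A', 'D'), ('D', 'A')])
for p in constrained_perms('ABCD', lambda a, b: (a, b) not in bad):
    print ''.join(p)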
| Permutations in python, with a twist | I have a list of objects (for the sake of example, let's say 5). I want a list of some of the possible permutations. Specifically, given that some pairs are not together, and some triples don't make sandwiches, how can I generate all other permutations? I realize that I could generate all of them first and check that they work, but I think it would be faster to not even consider the pairs and triples that don't work.
Am I wrong that it would be faster to check first and generate later?
How would I do it?
| [
"You would have to find an algorithm that cuts off more than one unwanted permutation after a single check, in order to gain anything. The obvious strategy is to build the permutations sequentially, for example, in a tree. Each cut then eliminates a whole branch.\nedit:\nExample: in the set (A B C D), let's say that B and C, and A and D are not allowed to be neighbours.\n\n (A) (B) (C) (D)\n / | \\ / | \\ / | \\ / | \\\n AB AC AD BA BC BD CA CB CD DA DB DC\n | \\ | \\ X / \\ X / \\ / \\ X / \\ X / \\ / \\\nABC ABD ACB ACD BAC BAD BDA BDC CAB CAD CDA CDB DBA DBC DCA DCB\n X | X | | X X | | X X | | X | X\n ABDC ACDB BACD BDCA CABD CDBA DBAC DCAB\n v v v v v v v v\n\nEach of the strings without parentheses needs a check. As you see, the Xs (where subtrees have been cut off) save checks, one if they are in the third row, but four if they are in the second row. We saved 24 of 60 checks here and got down to 36. However, there are only 24 permutations overall anyway, so if checking the restrictions (as opposed to building the lists) is the bottleneck, we would have been better off to just construct all the permutations and check them at the end... IF the checks couldn't be optimized when we go this way.\nNow, as you see, the checks only need to be performed on the new part of each list. This makes the checks much leaner; actually, we divide the check that would be needed for a full permutation into small chunks. In the above example, we only have to look whether the added letter is allowed to stand besides the last one, not all the letters before.\nHowever, also if we first construct, then filter, the checks could be cut short as soon as a no-no is encountered. So, on checking, there is no real gain compared to the first-build-then-filter algorithm; there is rather the danger of further overhead through more function calls.\nWhat we do save is the time to build the lists, and the peak memory consumption. Building a list is generally rather fast, but peak memory consumption might be a consideration if the number of object gets larger. For the first-build-then-filter, both grow linearly with the number of objects. For the tree version, it grows slower, depending on the constraints. From a certain number of objects and rules on, there is also actual check saving.\nIn general, I think that you would need to try it out and time the two algorithms. If you really have only 5 objects, stick to the simple (filter rules (build-permutations set)). If your number of objects gets large, the tree algorithm will at some point perform noticably better (you know, big O).\nUm. Sorry, I got into lecture mode; bear with me.\n"
] | [
5
] | [] | [] | [
"python"
] | stackoverflow_0000467878_python.txt |
Q:
Sample a running Python app
I'm used to sampling C-based apps, where a sampler checks every few milliseconds what function stack is being called at that moment.
This allows me to see where most of the time is spent in an app so I can optimize it.
When using python, however, sample isn't so helpful, since it's sampling the C functions of the python interpreter, not the python code itself.
Is there a useful sampling tool for python?
A:
Python includes a built-in set of profiling tools. In particular, you can run cProfile on an arbitrary python script from the command-line:
$ python -m cProfile myscript.py
Much more elaborate usage is available by calling the API directly. Note that the cProfile module was added in Python 2.5. In earlier versions, you can use the pure-Python, but slower "profile" module.
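If you need more control than the one-liner gives, the same profiler can be driven from code (a small sketch; main is a placeholder for your own entry point):
import cProfile
import pstats

def main():
    sum(i * i for i in xrange(100000))  # stand-in workload

cProfile.run('main()', 'profile.out')           # dump raw stats to a file
stats = pstats.Stats('profile.out')
stats.sort_stats('cumulative').print_stats(10)  # top 10 by cumulative time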
| Sample a running Python app | I'm used to sampling C-based apps, where a sampler checks every few milliseconds what function stack is being called at that moment.
This allows me to see where most of the time is spent in an app so I can optimize it.
When using python, however, sample isn't so helpful, since it's sampling the C functions of the python interpreter, not the python code itself.
Is there a useful sampling tool for python?
| [
"Python includes a built-in set of profiling tools. In particular, you can run cProfile on an arbitrary python script from the command-line:\n$ python -m cProfile myscript.py\n\nMuch more elaborate usage is available by calling the API directly. Note that the cProfile module was added in Python 2.5. In earlier versions, you can use the pure-Python, but slower \"profile\" module.\n"
] | [
4
] | [] | [] | [
"performance",
"python",
"sample"
] | stackoverflow_0000467925_performance_python_sample.txt |
Q:
Python: convert alphabetically spelled out numbers to numerics?
I'm looking for a library, service, or code suggestions to turn spelled out numbers and amounts (eg. "thirty five dollars and fifteen cents", "one point five") into numerics ($35.15, 1.5) . Suggestions?
A:
I wrote some code to do this for integers a while ago: http://github.com/ghewgill/text2num
Feel free to fork and hack.
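If a dependency is not an option, the core of the trick is just a lookup table plus scale handling; a rough sketch for simple cases (far less complete than the library above, and the word tables here are deliberately partial):
UNITS = {'zero': 0, 'one': 1, 'two': 2, 'three': 3, 'four': 4,
         'five': 5, 'six': 6, 'seven': 7, 'eight': 8, 'nine': 9,
         'ten': 10, 'fifteen': 15, 'twenty': 20, 'thirty': 30, 'fifty': 50}
SCALES = {'hundred': 100, 'thousand': 1000, 'million': 1000000}

def text2int(text):
    total = current = 0
    for word in text.replace('-', ' ').lower().split():
        if word in UNITS:
            current += UNITS[word]
        elif word in SCALES:
            current *= SCALES[word]
            if SCALES[word] >= 1000:   # flush completed groups
                total += current
                current = 0
        # anything else ('and', 'dollars', ...) is silently skipped
    return total + current

print text2int('thirty five')            # 35
print text2int('one hundred and five')   # 105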
| Python: convert alphabetically spelled out numbers to numerics? | I'm looking for a library, service, or code suggestions to turn spelled out numbers and amounts (eg. "thirty five dollars and fifteen cents", "one point five") into numerics ($35.15, 1.5) . Suggestions?
| [
"I wrote some code to do this for integers a while ago: http://github.com/ghewgill/text2num\nFeel free to fork and hack.\n"
] | [
5
] | [] | [] | [
"parsing",
"python"
] | stackoverflow_0000468241_parsing_python.txt |
Q:
Python code for sorting files into folders
Python 2.5.1
http://www.cgsecurity.org/wiki/After_Using_PhotoRec
I've just run PhotoRec and the code given as a way to sort file types into their own folder is coming back with this error. Any suggestions on how to alter? Thanks :
[EDIT2: Two points:
This question was voted down because it was a 'usage' of code, somehow not a programming question. Does it qualify as a coding question? I argue yes.
I've gone back and edited the page where the code came from to clarify the need for parameters for the benefit of others.]
gyaresu$ python recovery.py
Traceback (most recent call last):
  File "recovery.py", line 8, in <module>
    source = sys.argv[1]
IndexError: list index out of range
Script:
#!/usr/bin/env python
import os
import os.path
import shutil
import string
import sys
source = sys.argv[1]
destination = sys.argv[2]
while os.path.exists(source) != True:
    source = raw_input('Enter a valid source directory\n')
while os.path.exists(destination) != True:
    destination = raw_input('Enter a valid destination directory\n')
for root, dirs, files in os.walk(source, topdown=False):
    for file in files:
        extension = string.upper(os.path.splitext(file)[1][1:])
        destinationPath = os.path.join(destination,extension)
        if os.path.exists(destinationPath) != True:
            os.mkdir(destinationPath)
        if os.path.exists(os.path.join(destinationPath,file)):
            print 'WARNING: this file was not copied :' + os.path.join(root,file)
        else:
            shutil.copy2(os.path.join(root,file), destinationPath)
A:
It simply means that the program is expecting two command line arguments: source and destination. If you wish to use the same code in another function, replace sys.argv[1] and [2] with your own variables.
A:
Or you can modify the original script and add
if len(sys.argv) != 3:
    print "Require 2 arguments: %s <source> <destination>" %(sys.argv[0])
    sys.exit(1)
after the import statements for proper error handling.
A:
Since the script is going to ask for paths if they don't exist, you could make the program arguments optional.
Change
source = sys.argv[1]
destination = sys.argv[2]
to
source = sys.argv[1] if len(sys.argv) > 1 else ""
destination = sys.argv[2] if len(sys.argv) > 2 else ""
| Python code for sorting files into folders | Python 2.5.1
http://www.cgsecurity.org/wiki/After_Using_PhotoRec
I've just run PhotoRec and the code given as a way to sort file types into their own folder is coming back with this error. Any suggestions on how to alter? Thanks :
[EDIT2: Two points:
This question was voted down because it was a 'usage' of code, somehow not a programming question. Does it qualify as a coding question? I argue yes.
I've gone back and edited the page where the code came from to clarify the need for parameters for the benefit of others.]
gyaresu$ python recovery.py
Traceback (most recent call last):
  File "recovery.py", line 8, in <module>
    source = sys.argv[1]
IndexError: list index out of range
Script:
#!/usr/bin/env python
import os
import os.path
import shutil
import string
import sys
source = sys.argv[1]
destination = sys.argv[2]
while os.path.exists(source) != True:
    source = raw_input('Enter a valid source directory\n')
while os.path.exists(destination) != True:
    destination = raw_input('Enter a valid destination directory\n')
for root, dirs, files in os.walk(source, topdown=False):
    for file in files:
        extension = string.upper(os.path.splitext(file)[1][1:])
        destinationPath = os.path.join(destination,extension)
        if os.path.exists(destinationPath) != True:
            os.mkdir(destinationPath)
        if os.path.exists(os.path.join(destinationPath,file)):
            print 'WARNING: this file was not copied :' + os.path.join(root,file)
        else:
            shutil.copy2(os.path.join(root,file), destinationPath)
| [
"It simply means that the program is expecting two command line arguments: source and destination. If you wish to use the same code in another function, replace sys.argv[1] and [2] with your own variables.\n",
"Or you can modify the original script and add\nif len(sys.argv) != 3:\n print \"Require 2 arguments: %s <source> <destination>\" %(sys.argv[0])\n sys.exit(1)\n\nafter the import statements for proper error handling.\n",
"Since the script is going to ask for paths if they don't exist, you could make the program arguments optional.\nChange\nsource = sys.argv[1]\ndestination = sys.argv[2]\n\nto\nsource = sys.argv[1] if len(sys.argv > 1) else \"\"\ndestination = sys.argv[2] if len(sys.argv > 2) else \"\"\n\n"
] | [
2,
2,
0
] | [] | [] | [
"python",
"recovery",
"scripting"
] | stackoverflow_0000468383_python_recovery_scripting.txt |
Q:
Tracking file load progress in Python
A lot of modules I use import entire files into memory or trickle a file's contents in while they process it. I'm wondering if there's any way to track this sort of loading progress? Possibly a wrapper class that takes a callback?
A:
I would do this by determining the size of the file, and then simply dividing the number of bytes read by the total. Like this:
import os

def show_progress(file_name, chunk_size=1024):
    fh = open(file_name, "r")
    total_size = os.path.getsize(file_name)
    total_read = 0
    while True:
        chunk = fh.read(chunk_size)
        if not chunk:
            fh.close()
            break
        total_read += len(chunk)
        print "Progress: %s percent" % (total_read * 100 / total_size)
        yield chunk

for chunk in show_progress("my_file.txt"):
    # Process the chunk
    pass
Edit: I know it isn't the best code, but I just wanted to show the concept.
A:
If you actually mean "import" (not "read") then you can override the import module definitions. You can add timing capabilities.
See the imp module.
If you mean "read", then you can trivially wrap Python files with your own file-like wrapper. Files don't expose too many methods. You can override the interesting ones to get timing data.
>>> class MyFile(file):
... def read(self,*args,**kw):
... # start timing
... result= super(MyFile,self).read(*args,**kw)
... # finish timing
... return result
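Putting the two answers together, a wrapper that takes a callback (as the question suggests) could look roughly like this (untested sketch; only read is instrumented, readline and friends would need the same treatment):
import os

class ProgressFile(file):
    # callback receives (bytes_read_so_far, total_size)
    def __init__(self, path, callback, mode='r'):
        file.__init__(self, path, mode)
        self._callback = callback
        self._read = 0
        self._total = os.path.getsize(path)
    def read(self, *args, **kw):
        data = file.read(self, *args, **kw)
        self._read += len(data)
        self._callback(self._read, self._total)
        return data

def report(done, total):
    print '%d of %d bytes read' % (done, total)

fh = ProgressFile('my_file.txt', report)
contents = fh.read()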
| Tracking file load progress in Python | A lot of modules I use import entire files into memory or trickle a file's contents in while they process it. I'm wondering if there's any way to track this sort of loading progress? Possibly a wrapper class that takes a callback?
| [
"I would do by this by determining the size of the file, and then simply dividing the total by the number of bytes read. Like this:\nimport os\n\ndef show_progress(file_name, chunk_size=1024):\n fh = open(file_name, \"r\")\n total_size = os.path.getsize(file_name)\n total_read = 0\n while True:\n chunk = fh.read(chunk_size)\n if not chunk: \n fh.close()\n break\n total_read += len(chunk)\n print \"Progress: %s percent\" % (total_read/total_size)\n yield chunk\n\nfor chunk in show_progress(\"my_file.txt\"):\n # Process the chunk\n pass \n\nEdit: I know it isn't the best code, but I just wanted to show the concept.\n",
"If you actually mean \"import\" (not \"read\") then you can override the import module definitions. You can add timing capabilities.\nSee the imp module.\nIf you mean \"read\", then you can trivially wrap Python files with your own file-like wrapper. Files don't expose too many methods. You can override the interesting ones to get timing data.\n>>> class MyFile(file):\n... def read(self,*args,**kw):\n... # start timing\n... result= super(MyFile,self).read(*args,**kw)\n... # finish timing\n... return result\n\n"
] | [
7,
3
] | [] | [] | [
"file",
"load",
"progress",
"python"
] | stackoverflow_0000468238_file_load_progress_python.txt |
Q:
What is the best approach to implement configuration app with Django?
I need to program a kind of configuration registry for a Django-based application.
Requirements:
Most likely param_name : param_value structure
Editable via admin interface
Has to work with syncdb. How to deal with a situation in which other apps depend on the configuration model and the model itself has not been initialized yet in the DB? Let's say I would like to have configurable model field properties, e.g. the default value setting.
Any ideas or suggestions would be appreciated.
A:
I have found djblets.siteconfig very useful. Works great with the Admin app, and very easy to use. Highly recommended.
A:
A while ago (about a year) I used dbsettings to have some sort of business configuration accessible via the admin interface, but I can't say how well it fits today.
A:
I think you'll have trouble if you make other apps depend (at interpretation/app-loading time) on values set in your config app. Can you use some kind of placeholder value in Python code at interpretation time, and then pull in the real config data on the post_syncdb signal?
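For reference, a hand-rolled minimum of the param_name : param_value idea looks like this (sketch only: string values, no caching or type coercion; names are made up):
# models.py
from django.db import models

class ConfigParam(models.Model):
    name = models.CharField(max_length=100, unique=True)
    value = models.CharField(max_length=255)

    def __unicode__(self):
        return u'%s = %s' % (self.name, self.value)

def get_param(name, default=None):
    try:
        return ConfigParam.objects.get(name=name).value
    except ConfigParam.DoesNotExist:
        return default

# admin.py
from django.contrib import admin
admin.site.register(ConfigParam)

Calling get_param() lazily at request time, rather than at import time, also sidesteps the syncdb ordering problem the question mentions.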
| What is the best approach to implement configuration app with Django? | I need to program a kind of configuration registry for a Django-based application.
Requirements:
Most likely param_name : param_value structure
Editable via admin interface
Has to work with syncdb. How to deal with a situation in which other apps depend on the configuration model and the model itself has not been initialized yet in the DB? Let's say I would like to have configurable model field properties, e.g. the default value setting.
Any ideas or suggestions would be appreciated.
| [
"I have found djblets.siteconfig very useful. Works great with the Admin app, and very easy to use. Highly recommended.\n",
"Once a while (year ago) I used dbsettings to have some sort of business configuration accessible via admin interface, but I cann't say how it fits today.\n",
"I think you'll have trouble if you make other apps depend (at interpretation/app-loading time) on values set in your config app. Can you use some kind of placeholder value in Python code at interpretation time, and then pull in the real config data on the post_syncdb signal?\n"
] | [
5,
1,
0
] | [] | [] | [
"configuration",
"django",
"django_admin",
"django_models",
"python"
] | stackoverflow_0000442355_configuration_django_django_admin_django_models_python.txt |
Q:
Templates within templates. How to avoid rendering twice?
I've got a CMS that takes some dynamic content and renders it using a standard template. However I am now using template tags in the dynamic content itself so I have to do a render_to_string and then pass the results of that as a context variable to render_to_response. This seems wasteful.
What's a better way to do this?
A:
"This seems wasteful" Why does it seem that way?
Every template is a mix of tags and text. In your case some block of text has already been visited by a template engine. So what? Once it's been transformed it's just text and passes through the next template engine very, very quickly.
Do you have specific performance problems? Are you not meeting your transaction throughput requirements? Is there a specific problem?
Is the code too complex? Is it hard to maintain? Does it break all the time?
I think your solution is adequate. I'm not sure template tags in dynamic content is good from a debugging point of view, but from a basic "template rendering" point of view, it is fine.
A:
What you're doing sounds fine, but the question could be asked: Why not put the templatetag references directly in your template instead of manually rendering them?
<div>
{% if object matches some criteria %}
{% render_type1_object object %}
{% else %}
{% render_type2_object object %}
{% endif %
... etc ...
</div>
Or, better yet, have one central templatetag for rendering an object (or list of objects), which encapsulates the mapping of object types to templatetags. Then all of your templates simply reference the one templatetag, with no type-knowledge necessary in the templates themselves.
The key being that you're moving knowledge of how to render individual objects out of your views.
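A sketch of what that central tag could look like (template paths and model names are made up; the mapping dict is the point):
# templatetags/object_tags.py
from django import template
from django.template.loader import render_to_string

register = template.Library()

TEMPLATE_FOR_TYPE = {
    'Article': 'cms/article_snippet.html',
    'Gallery': 'cms/gallery_snippet.html',
}

@register.simple_tag
def render_object(obj):
    # dispatch on the object's class, falling back to a default template
    name = TEMPLATE_FOR_TYPE.get(obj.__class__.__name__, 'cms/default_snippet.html')
    return render_to_string(name, {'object': obj})

Templates then just say {% render_object object %} and never need to know the type.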
| Templates within templates. How to avoid rendering twice? | I've got a CMS that takes some dynamic content and renders it using a standard template. However I am now using template tags in the dynamic content itself so I have to do a render_to_string and then pass the results of that as a context variable to render_to_response. This seems wasteful.
What's a better way to do this?
| [
"\"This seems wasteful\" Why does it seem that way?\nEvery template is a mix of tags and text. In your case some block of text has already been visited by a template engine. So what? Once it's been transformed it's just text and passes through the next template engine very, very quickly.\nDo you have specific performance problems? Are you not meeting your transaction throughput requirements? Is there a specific problem?\nIs the code too complex? Is it hard to maintain? Does it break all the time?\nI think your solution is adequate. I'm not sure template tags in dynamic content is good from a debugging point of view, but from a basic \"template rendering\" point of view, it is fine.\n",
"What you're doing sounds fine, but the question could be asked: Why not put the templatetag references directly in your template instead of manually rendering them?\n<div>\n {% if object matches some criteria %}\n {% render_type1_object object %}\n {% else %}\n {% render_type2_object object %}\n {% endif %\n ... etc ...\n</div>\n\nOr, better yet, have one central templatetag for rendering an object (or list of objects), which encapsulates the mapping of object types to templatetags. Then all of your templates simply reference the one templatetag, with no type-knowledge necessary in the templates themselves.\nThe key being that you're moving knowledge of how to render individual objects out of your views.\n"
] | [
2,
0
] | [] | [] | [
"django",
"django_templates",
"python"
] | stackoverflow_0000468736_django_django_templates_python.txt |
Q:
problem using an instance in a with_statement
I've recently started to learn Python, and I reached the with statement. I've tried to use it with a class instance, but I think I'm doing something wrong. Here is the code:
from __future__ import with_statement
import pdb
class Geo:
    def __init__(self,text):
        self.text = text
    def __enter__(self):
        print "entering"
    def __exit__(self,exception_type,exception_value,exception_traceback):
        print "exiting"
    def ok(self):
        print self.text
    def __get(self):
        return self.text

with Geo("line") as g:
    g.ok()
The thing is that when the interpreter reaches the ok method inside the with statement, the following exception is raised:
Traceback (most recent call last):
  File "dec.py", line 23, in <module>
    g.ok()
AttributeError: 'NoneType' object has no attribute 'ok'
Why does the g object have the type NoneType? How can I use an instance with the with statement?
A:
Your __enter__ method needs to return the object that should be used for the "as g" part of the with statement. See the documentation, where it states:
If a target was included in the with statement, the return value from __enter__() is assigned to it.
Currently, it has no return statement, so g gets bound to None (the default return value)
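So the fix is a one-line change, having __enter__ hand the instance back (minimal corrected version):
class Geo:
    def __init__(self, text):
        self.text = text
    def __enter__(self):
        print "entering"
        return self   # this is what gets bound to g in "with ... as g"
    def __exit__(self, exception_type, exception_value, exception_traceback):
        print "exiting"
    def ok(self):
        print self.text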
| problem using an instance in a with_statement | I've recently started to learn Python, and I reached the with statement. I've tried to use it with a class instance, but I think I'm doing something wrong. Here is the code:
from __future__ import with_statement
import pdb
class Geo:
    def __init__(self,text):
        self.text = text
    def __enter__(self):
        print "entering"
    def __exit__(self,exception_type,exception_value,exception_traceback):
        print "exiting"
    def ok(self):
        print self.text
    def __get(self):
        return self.text

with Geo("line") as g:
    g.ok()
The thing is that when the interpreter reaches the ok method inside the with statement, the following exception is raised:
Traceback (most recent call last):
  File "dec.py", line 23, in <module>
    g.ok()
AttributeError: 'NoneType' object has no attribute 'ok'
Why does the g object have the type NoneType? How can I use an instance with the with statement?
| [
"Your __enter__ method needs to return the object that should be used for the \"as g\" part of the with statement. See the documentation, where it states:\n\nIf a target was included in the with statement, the return value from __enter__() is assigned to it.\n\nCurrently, it has no return statement, so g gets bound to None (the default return value)\n"
] | [
12
] | [] | [] | [
"python",
"with_statement"
] | stackoverflow_0000469950_python_with_statement.txt |
Q:
Why does 1+++2 = 3?
How does Python evaluate the expression 1+++2?
However many + I put in between, it prints 3 as the answer. Can anyone please explain this behavior?
And for 1--2 it prints 3, and for 1---2 it prints -1.
A:
Your expression is the same as:
1+(+(+2))
Any numeric expression can be preceded by - to make it negative, or + to do nothing (the option is present for symmetry). With negative signs:
1-(-(2)) = 1-(-2)
         = 1+2
         = 3
and
1-(-(-2)) = 1-(2)
          = -1
I see you clarified your question to say that you come from a C background. In Python, there are no increment operators like ++ and -- in C, which was probably the source of your confusion. To increment or decrement a variable i or j in Python use this style:
i += 1
j -= 1
A:
The extra +'s are not incrementors (like ++a or a++ in c++). They are just showing that the number is positive.
There is no such ++ operator. There is a unary + operator and a unary - operator though. The unary + operator has no effect on its argument. The unary - operator negates its operand, or multiplies it by -1.
+1
-> 1
++1
-> 1
This is the same as +(+(1))
1+++2
-> 3
Because it's the same as 1 + (+(+2))
Likewise you can do --1 to mean - (-1) which is +1.
--1
-> 1
For completeness there is no unary * operator. So *1 is an error. But there is a **
operator, which is power of; it takes 2 arguments.
2**3
-> 8
A:
1+(+(+2)) = 3
1 - (-2) = 3
1 - (-(-2)) = -1
A:
Trying Unary Plus and Unary minus:
The unary - (minus) operator yields the negation of its numeric argument.
The unary + (plus) operator yields its numeric argument unchanged.
>>> +2
2
>>> ++2
2
>>> +++2
2
>>> -2
-2
>>> --2
2
>>> ---2
-2
>>> 1+(++2)
3
A:
Think of it as 1 + (+1*(+1*2)). The first + is an operator and the following plus signs are the sign of the second operand (= 2).
Just like 1---2 is the same as 1 - (-(-2)) or 1 - (-1*(-1*2)).
A:
I believe it's being parsed as, the first + as a binary operation (add), and the rest as unary operations (make positive).
1 + (+(+2))
| Why does 1+++2 = 3? | How does Python evaluate the expression 1+++2?
However many + I put in between, it prints 3 as the answer. Can anyone please explain this behavior?
And for 1--2 it prints 3, and for 1---2 it prints -1.
| [
"Your expression is the same as:\n1+(+(+2))\n\nAny numeric expression can be preceded by - to make it negative, or + to do nothing (the option is present for symmetry). With negative signs:\n1-(-(2)) = 1-(-2)\n = 1+2\n = 3\n\nand\n1-(-(-2)) = 1-(2)\n = -1\n\nI see you clarified your question to say that you come from a C background. In Python, there are no increment operators like ++ and -- in C, which was probably the source of your confusion. To increment or decrement a variable i or j in Python use this style:\ni += 1\nj -= 1\n\n",
"The extra +'s are not incrementors (like ++a or a++ in c++). They are just showing that the number is positive.\nThere is no such ++ operator. There is a unary + operator and a unary - operator though. The unary + operator has no effect on its argument. The unary - operator negates its operator or mulitplies it by -1. \n+1\n\n-> 1\n++1\n\n-> 1\nThis is the same as +(+(1))\n 1+++2\n\n-> 3\nBecause it's the same as 1 + (+(+(2))\nLikewise you can do --1 to mean - (-1) which is +1.\n --1\n\n-> 1\nFor completeness there is no * unary opeartor. So *1 is an error. But there is a ** \noperator which is power of, it takes 2 arguments. \n 2**3\n\n-> 8\n",
"1+(+(+2)) = 3\n1 - (-2) = 3\n1 - (-(-2)) = -1\n",
"Trying Unary Plus and Unary minus:\n\nThe unary - (minus) operator yields the negation of its numeric argument.\nThe unary + (plus) operator yields its numeric argument unchanged.\n\n>>> +2\n2\n>>> ++2\n2\n>>> +++2\n2\n>>> -2\n-2\n>>> --2\n2\n>>> ---2\n-2\n>>> 1+(++2)\n3\n\n",
"Think it as 1 + (+1*(+1*2))). The first + is operator and following plus signs are sign of second operand (= 2).\nJust like 1---2 is same as 1 - -(-(2)) or 1- (-1*(-1*(2))\n",
"I believe it's being parsed as, the first + as a binary operation (add), and the rest as unary operations (make positive). \n 1 + (+(+2))\n\n"
] | [
63,
15,
4,
4,
1,
1
] | [] | [] | [
"evaluation",
"operator_precedence",
"python"
] | stackoverflow_0000470139_evaluation_operator_precedence_python.txt |
Q:
In Django how do I return the total number of items that are related to a model?
In Django how can I return the total number of items (count) that are related to another model, e.g. the way Stack Overflow shows a list of questions and then, next to each question, the count of answers related to it.
This is easy if I have the question id, since I can return all answers related to that question, but when I am displaying the entire list of questions it becomes a bit tricky to display the total answer count next to each one.
I don't know if I am being clear, but just think of how Stack Overflow displays its questions with answer and view counts next to each question!
A:
QuerySet.count()
See also an example of how to build QuerySets of related models.
A:
If you're willing to use trunk, you can take advantage of the brand new annotate() QuerySet method added just a week or so ago, which solves this exact problem:
http://docs.djangoproject.com/en/dev/topics/db/aggregation/
If you want to stick with Django 1.0, you can achieve this in a slightly less elegant way using the select argument of the extra() QuerySet method. There's an example of exactly what you are talking about using extra() here:
http://docs.djangoproject.com/en/dev/ref/models/querysets/#extra-select-none-where-none-params-none-tables-none-order-by-none-select-params-none
Finally, if you need this to be really high performance you can denormalise the count in to a separate column. I've got some examples of how to do this in the unit testing part of my presentation here:
http://www.slideshare.net/simon/advanced-django
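For the annotate() route, the pattern is roughly this (model names are hypothetical, assuming an Answer model with a ForeignKey to Question):
from django.db.models import Count

# one query; every question gains a num_answers attribute
questions = Question.objects.annotate(num_answers=Count('answer'))
for q in questions:
    print q.title, q.num_answers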
| In Django how do I return the total number of items that are related to a model? | In Django how can I return the total number of items (count) that are related to another model, e.g. the way Stack Overflow shows a list of questions and then, next to each question, the count of answers related to it.
This is easy if I have the question id, since I can return all answers related to that question, but when I am displaying the entire list of questions it becomes a bit tricky to display the total answer count next to each one.
I don't know if I am being clear, but just think of how Stack Overflow displays its questions with answer and view counts next to each question!
| [
"QuerySet.count()\nSee also an example how to build QuerySets of related models.\n",
"If you're willing to use trunk, you can take advantage of the brand new annotate() QuerySet method added just a week or so ago, which solves this exact problem:\nhttp://docs.djangoproject.com/en/dev/topics/db/aggregation/\nIf you want to stick with Django 1.0, you can achieve this in a slightly less elegant way using the select argument of the extra() QuerySet method. There's an example of exactly what you are talking about using extra() here:\nhttp://docs.djangoproject.com/en/dev/ref/models/querysets/#extra-select-none-where-none-params-none-tables-none-order-by-none-select-params-none\nFinally, if you need this to be really high performance you can denormalise the count in to a separate column. I've got some examples of how to do this in the unit testing part of my presentation here:\nhttp://www.slideshare.net/simon/advanced-django\n"
] | [
5,
1
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000469110_django_python.txt |
Q:
How do I get PIL to work when built on mingw/cygwin?
I'm trying to build PIL 1.1.6 against cygwin or mingw whilst running against a windows install of python. When I do either the build works but I get the following failure when trying to save files.
$ python25
Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from PIL.Image import open
>>> im = open('test.gif')
>>> im.save('output1.gif')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "c:\Python25\Lib\site-packages\PIL\Image.py", line 1405, in save
    save_handler(self, fp, filename)
  File "c:\Python25\Lib\site-packages\PIL\GifImagePlugin.py", line 291, in _save
    ImageFile._save(imOut, fp, [("gif", (0,0)+im.size, 0, rawmode)])
  File "c:\Python25\Lib\site-packages\PIL\ImageFile.py", line 491, in _save
    s = e.encode_to_file(fh, bufsize)
IOError: [Errno 0] Error
>>>
I'm not compiling with the libraries for jpeg or zip support but I don't think this should be relevant here.
The failing line seems to be a write in encode_to_file in encode.c.
I'm suspicious that this occurs because a file descriptor is being passed from Python (which was built under Visual Studio 2003) to _imaging.pyd, but that the file descriptors don't match, because on Windows file descriptors are an abstraction on top of the operating system. Does anyone know anything about this?
A:
As far as I can tell from some cursory Google searching, you need to rebase the DLLs after building PIL in order for it to work properly on Cygwin.
References:
http://jetfar.com/cygwin-install-python-imaging-library/
http://www.cygwin.com/ml/cygwin/2003-06/msg01121.html
| How do I get PIL to work when built on mingw/cygwin? | I'm trying to build PIL 1.1.6 against cygwin or mingw whilst running against a windows install of python. When I do either the build works but I get the following failure when trying to save files.
$ python25
Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from PIL.Image import open
>>> im = open('test.gif')
>>> im.save('output1.gif')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "c:\Python25\Lib\site-packages\PIL\Image.py", line 1405, in save
    save_handler(self, fp, filename)
  File "c:\Python25\Lib\site-packages\PIL\GifImagePlugin.py", line 291, in _save
    ImageFile._save(imOut, fp, [("gif", (0,0)+im.size, 0, rawmode)])
  File "c:\Python25\Lib\site-packages\PIL\ImageFile.py", line 491, in _save
    s = e.encode_to_file(fh, bufsize)
IOError: [Errno 0] Error
>>>
I'm not compiling with the libraries for jpeg or zip support but I don't think this should be relevant here.
The failing line seems to be a write in encode_to_file in encode.c.
I'm suspicious that this occurs because a file descriptor is being passed from Python (which was built under Visual Studio 2003) to _imaging.pyd, but that the file descriptors don't match, because on Windows file descriptors are an abstraction on top of the operating system. Does anyone know anything about this?
| [
"As far as I can tell from some cursory Google searching, you need to rebase the DLLs after building PIL in order for it to work properly on Cygwin.\nReferences: \n\nhttp://jetfar.com/cygwin-install-python-imaging-library/\nhttp://www.cygwin.com/ml/cygwin/2003-06/msg01121.html\n\n"
] | [
1
] | [] | [] | [
"cygwin",
"python",
"python_imaging_library",
"windows"
] | stackoverflow_0000380731_cygwin_python_python_imaging_library_windows.txt |
Q:
Pros and cons of IronPython and IronPython Studio
We are ready in our company to move everything to Python instead of C#. We are a consulting company and we usually write small projects in C#; we don't do huge projects, and our work is based more on complex mathematical models than on complex software structures. So we believe IronPython is a good platform for us, because it provides standard GUI functionality on Windows and access to all of the .NET libraries.
I know IronPython Studio is not complete, and in fact I had a hard time adding my references, but I was wondering if someone could list some of the pros and cons of this migration for us, considering that Python code is easier for our clients to read and that we usually deliver a proof-of-concept prototype instead of fully functional code; our clients usually go ahead and implement the application themselves.
A:
My company, Resolver Systems, develops what is probably the biggest application written in IronPython yet. (It's called Resolver One, and it's a Pythonic spreadsheet). We are also hosting the Ironclad project (to run CPython extensions under IronPython) and that is going well (we plan to release a beta of Resolver One & numpy soon).
The reason we chose IronPython was the .NET integration - our clients want 100% integration on Windows and the easiest way to do that right now is .NET.
We design our GUI (without behaviour) in Visual Studio, compile it into a DLL and subclass it from IronPython to add behaviour.
We have found that IronPython is faster at some cases and slower at some others. However, the IronPython team is very responsive, whenever we report a regression they fix it and usually backport it to the bugfix release. If you worry about performance, you can always implement a critical part in C# (we haven't had to do that yet).
If you have experience with C#, then IronPython will be natural for you, and easier than C#, especially for prototypes.
Regarding IronPython studio, we don't use it. Each of us has his editor of choice (TextPad, Emacs, Vim & Wing), and everything works fine.
A:
There are a lot of reasons why you might want to switch from C# to Python; I did this myself recently. After a lot of investigating, here are the reasons why I stick to CPython:

Performance: There are some articles out there stating that there are always cases where IronPython is slower, so keep this in mind if performance is an issue.
Take the original: many people argue that new features etc. are always integrated in CPython first and you have to wait until they are implemented in IronPython.
Licensing: Some people argue this is a timebomb: nobody knows how the licensing of IronPython/Mono might change in the near future.
Extensions: one of the strengths of Python is the thousands of extensions which are all usable by CPython; as you mentioned mathematical problems, numpy might be a suitable fast package for you which might not run as expected under IronPython (although there is Ironclad).
Especially under Windows you have a native GUI toolkit with wxPython, which also looks great under several other platforms, and there are PyQt and a lot of other toolkits. They have nice designers like wxGlade, but here the Visual Studio C# designer is easier to use.
Platform independence (if this is an issue): CPython is ported to really a lot of platforms, whereas IronPython can only be used on the major platforms (I recently read that a developer was sad that he couldn't get Mono to run under his AIX).

IronPython is a great piece of work, and if I had a special .NET library I had to use, IronPython might be the choice, but for general-purpose problems people seem to suggest using the original CPython, unless Guido changes his mind.
A:
The way you describe things, it sounds like your company is switching to Python simply for the sake of Python. Is there some specific reason you want to use Python? Is a more dynamic language necessary? Is the functional programming going to help you at all? If you've got a perfectly good working set of tools in C#, why bother switching?
If you're set on switching, you may want to consider starting with standard Python unless you're specifically tied to the .NET libraries. You can write cross-platform GUIs using a number of different frameworks like wxPython, pyQt, etc. That said, Visual Studio has a far superior GUI designer to just about any of the tools out there for creating Python windowed layouts.
| Pros and cons of IronPython and IronPython Studio | We are ready in our company to move everything to Python instead of C#. We are a consulting company and we usually write small projects in C#; we don't do huge projects, and our work is based more on complex mathematical models than on complex software structures. So we believe IronPython is a good platform for us, because it provides standard GUI functionality on Windows and access to all of the .NET libraries.
I know IronPython Studio is not complete, and in fact I had a hard time adding my references, but I was wondering if someone could list some of the pros and cons of this migration for us, considering that Python code is easier for our clients to read and that we usually deliver a proof-of-concept prototype instead of fully functional code; our clients usually go ahead and implement the application themselves.
| [
"My company, Resolver Systems, develops what is probably the biggest application written in IronPython yet. (It's called Resolver One, and it's a Pythonic spreadsheet). We are also hosting the Ironclad project (to run CPython extensions under IronPython) and that is going well (we plan to release a beta of Resolver One & numpy soon).\nThe reason we chose IronPython was the .NET integration - our clients want 100% integration on Windows and the easiest way to do that right now is .NET. \nWe design our GUI (without behaviour) in Visual Studio, compile it into a DLL and subclass it from IronPython to add behaviour.\nWe have found that IronPython is faster at some cases and slower at some others. However, the IronPython team is very responsive, whenever we report a regression they fix it and usually backport it to the bugfix release. If you worry about performance, you can always implement a critical part in C# (we haven't had to do that yet).\nIf you have experience with C#, then IronPython will be natural for you, and easier than C#, especially for prototypes.\nRegarding IronPython studio, we don't use it. Each of us has his editor of choice (TextPad, Emacs, Vim & Wing), and everything works fine.\n",
"There are a lot of reasons why you want to switch from C# to python, i did this myself recently. After a lot of investigating, here are the reasons why i stick to CPython:\n\nPerformance: There are some articles out there stating that there are always cases where ironpython is slower, so if performance is an issue\nTake the original: many people argue that new features etc. are always integrated in CPython first and you have to wait until they are implemented in ironpython.\nLicensing: Some people argue this is a timebomb: nobody knows how the licensing of ironpython/mono might change in near future\nExtensions: one of the strengths of python are the thousands of extensions which are all usable by CPython, as you mentioned mathematical problems: numpy might be a suitable fast package for you which might not run as expected under IronPython (although Ironclad)\nEspecially under Windows you have a native GUI-toolkit with wxPython which also looks great under several other platforms and there are pyQT and a lot of other toolkits. They have nice designer like wxGlade, but here VisualStudio C# Designer is easier to use.\nPlatform independence (if this is an issue): CPython is ported to really a lot of platforms, whereas ironpython can only be used on the major platforms (recently read a developer was sad that he couldn't get mono to run under his AIX)\n\nIronpython is a great work, and if i had a special .NET library i would have to use, IronPython might be the choice, but for general purpose problems, people seem to suggest using the original CPython, unless Guido changes his mind.\n",
"The way you describe things, it sounds like you're company is switching to Python simple for the sake of Python. Is there some specific reason you want to use Python? Is a more dynamic language necessary? Is the functional programming going to help you at all? If you've got a perfectly good working set of tools in C#, why bother switching?\nIf you're set on switching, you may want to consider starting with standard Python unless you're specifically tied to the .NET libraries. You can write cross platform GUIs using a number of different frameworks like wxPython, pyQt, etc. That said, Visual Studio has a far superior GUI designer to just about any of the tools out there for creating Python windowed layouts.\n"
] | [
18,
9,
7
] | [] | [] | [
"ironpython",
"ironpython_studio",
"python"
] | stackoverflow_0000471712_ironpython_ironpython_studio_python.txt |
Q:
Incorrect answer in dll import in Python
In my Python script I'm importing a dll written in VB.NET.
I'm calling an initialisation function in my script. It takes 2 arguments: a path to an XML file and a string. It returns an integer: 0 for success, anything else for an error. The second argument is passed by reference, so on success it will get updated with a success message; otherwise it gets updated with an error message.
When my script receives the integer, I should print the message in the second variable. I'm not able to do that.
A:
Python strings are immutable. There is no way the string can be changed inside the function.
So what you really want is to pass a char buffer of some sort. You can create those in python using the ctypes module.
Please edit the question and paste a minimal snippet of the code so we can test and give more information.
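If the DLL exposes a plain C-callable entry point, the buffer idea looks roughly like this (everything here is hypothetical: the DLL name, the function name and its signature; a VB.NET assembly may need to be exposed through COM before ctypes can reach it):
import ctypes

dll = ctypes.windll.LoadLibrary('MyLib.dll')   # hypothetical DLL
msg = ctypes.create_string_buffer(256)         # mutable, unlike a Python str

# assumed signature: int Initialise(const char *xmlPath, char *message)
result = dll.Initialise('config.xml', msg)
if result == 0:
    print 'Success:', msg.value
else:
    print 'Error:', msg.value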
A:
Python does not support the concept of passing a string "by reference" in the same way that VB.NET does. So it might not be possible to do this without some more work.
However, without seeing your code it's definitely not possible to tell you what's wrong.
| Incorrect answer in dll import in Python | In my Python script I'm importing a dll written in VB.NET.
I'm calling an initialisation function in my script. It takes 2 arguments: a path to an XML file and a string. It returns an integer: 0 for success, anything else for an error. The second argument is passed by reference, so on success it will get updated with a success message; otherwise it gets updated with an error message.
When my script receives the integer, I should print the message in the second variable. I'm not able to do that.
| [
"Python strings are immutable. There is no way the string can be changed inside the function.\nSo what you really want is to pass a char buffer of some sort. You can create those in python using the ctypes module. \nPlease edit the question and paste a minimal snippet of the code so we can test and give more information.\n",
"Python does not support the concept of passing a string \"by reference\" in the same way that VB.NET does. So it might not be possible to do this without some more work.\nHowever, without seeing your code it's definitely not possible to tell you what's wrong.\n"
] | [
3,
1
] | [] | [] | [
"import",
"python"
] | stackoverflow_0000472170_import_python.txt |
Q:
TypeError: 'tuple' object is not callable
I was doing the tutorial from the book Teach Yourself Django in 24 Hours, and in Part 1, Hour 4, I got stuck on this error.
Traceback (most recent call last):
  File "C:\Python25\lib\site-packages\django\core\servers\basehttp.py", line 278, in run
    self.result = application(self.environ, self.start_response)
  File "C:\Python25\lib\site-packages\django\core\servers\basehttp.py", line 635, in __call__
    return self.application(environ, start_response)
  File "C:\Python25\lib\site-packages\django\core\handlers\wsgi.py", line 239, in __call__
    response = self.get_response(request)
  File "C:\Python25\lib\site-packages\django\core\handlers\base.py", line 67, in get_response
    response = middleware_method(request)
  File "C:\Python25\Lib\site-packages\django\middleware\common.py", line 56, in process_request
    if (not _is_valid_path(request.path_info) and
  File "C:\Python25\Lib\site-packages\django\middleware\common.py", line 142, in _is_valid_path
    urlresolvers.resolve(path)
  File "C:\Python25\Lib\site-packages\django\core\urlresolvers.py", line 254, in resolve
    return get_resolver(urlconf).resolve(path)
  File "C:\Python25\Lib\site-packages\django\core\urlresolvers.py", line 181, in resolve
    for pattern in self.url_patterns:
  File "C:\Python25\Lib\site-packages\django\core\urlresolvers.py", line 205, in _get_url_patterns
    patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
  File "C:\Python25\Lib\site-packages\django\core\urlresolvers.py", line 200, in _get_urlconf_module
    self._urlconf_module = __import__(self.urlconf_name, {}, {}, [''])
  File "c:\projects\iFriends\..\iFriends\urls.py", line 17, in <module>
    (r'^admin/', include('django.contribute.admin.urls'))
TypeError: 'tuple' object is not callable
Can someone help me, please?
url.py
from django.conf.urls.defaults import *
####Uncomment the next two lines to enable the admin:
#### from django.contrib import admin
#### admin.autodiscover()
urlpatterns = patterns('',
    (r'^People/$', 'iFriends.People.views.index'),
    (r'^admin/', include('django.contrib.admin.urls')),
    # Example:
    # (r'^iFriends/', include('iFriends.foo.urls')),
    # Uncomment the admin/doc line below and add 'django.contrib.admindocs'
    # to INSTALLED_APPS to enable admin documentation:
    # (r'^admin/doc/', include('django.contrib.admindocs.urls')),
    # Uncomment the next line to enable the admin:
)
A:
You somehow set some function to a tuple. Please edit the question and paste your urls.py code, so we can point you to the error.
I can try a wild guess:
File "c:\projects\iFriends\..\iFriends\urls.py", line 17, in <module>
(r'^admin/', include('django.contribute.admin.urls'))
This somehow tells me that you missed a comma on line 16, so:
16. (r'^/', 'some_stuff....') # <-- missed comma here
17. (r'^admin/', include('django.contribute.admin.urls'))
Just put the comma and it will work. If that's not the case, I'll send my crystal ball for maintenance. Paste the code.
EDIT
Seems like you have pasted the urls.py as an answer. Please edit the question and paste urls.py there.
Anyway, the error has changed. What did you do? In this new error, urls.py is not found anymore so maybe you've renamed it? Have you changed the way you run the application?
The file you pasted is not the one that is running. Are you pasting url.py and django is reading urls.py? The code in the error doesn't match the code you pasted! Please paste the correct file, i.e. the same one that gives the error, or we can't help.
| TypeError: 'tuple' object is not callable | I was doing the tutorial from the book Teach Yourself Django in 24 Hours, and in Part 1, Hour 4, I got stuck on this error.
Traceback (most recent call last):
  File "C:\Python25\lib\site-packages\django\core\servers\basehttp.py", line 278, in run
    self.result = application(self.environ, self.start_response)
  File "C:\Python25\lib\site-packages\django\core\servers\basehttp.py", line 635, in __call__
    return self.application(environ, start_response)
  File "C:\Python25\lib\site-packages\django\core\handlers\wsgi.py", line 239, in __call__
    response = self.get_response(request)
  File "C:\Python25\lib\site-packages\django\core\handlers\base.py", line 67, in get_response
    response = middleware_method(request)
  File "C:\Python25\Lib\site-packages\django\middleware\common.py", line 56, in process_request
    if (not _is_valid_path(request.path_info) and
  File "C:\Python25\Lib\site-packages\django\middleware\common.py", line 142, in _is_valid_path
    urlresolvers.resolve(path)
  File "C:\Python25\Lib\site-packages\django\core\urlresolvers.py", line 254, in resolve
    return get_resolver(urlconf).resolve(path)
  File "C:\Python25\Lib\site-packages\django\core\urlresolvers.py", line 181, in resolve
    for pattern in self.url_patterns:
  File "C:\Python25\Lib\site-packages\django\core\urlresolvers.py", line 205, in _get_url_patterns
    patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
  File "C:\Python25\Lib\site-packages\django\core\urlresolvers.py", line 200, in _get_urlconf_module
    self._urlconf_module = __import__(self.urlconf_name, {}, {}, [''])
  File "c:\projects\iFriends\..\iFriends\urls.py", line 17, in <module>
    (r'^admin/', include('django.contribute.admin.urls'))
TypeError: 'tuple' object is not callable
Can someone help me, please?
url.py
from django.conf.urls.defaults import *
####Uncomment the next two lines to enable the admin:
#### from django.contrib import admin
#### admin.autodiscover()
urlpatterns = patterns('',
    (r'^People/$', 'iFriends.People.views.index'),
    (r'^admin/', include('django.contrib.admin.urls')),
    # Example:
    # (r'^iFriends/', include('iFriends.foo.urls')),
    # Uncomment the admin/doc line below and add 'django.contrib.admindocs'
    # to INSTALLED_APPS to enable admin documentation:
    # (r'^admin/doc/', include('django.contrib.admindocs.urls')),
    # Uncomment the next line to enable the admin:
)
| [
"You somehow set some function to a tuple. Please edit the question and paste your urls.py code, so we can point you to the error.\nI can try a wild guess:\nFile \"c:\\projects\\iFriends\\..\\iFriends\\urls.py\", line 17, in <module>\n (r'^admin/', include('django.contribute.admin.urls'))\n\nThis somehow tells me that you missed a comma on line 16, so:\n16. (r'^/', 'some_stuff....') # <-- missed comma here\n17. (r'^admin/', include('django.contribute.admin.urls'))\n\nJust put the comma and it will work. If that's not the case, I'll send my cristal ball for mainantance. Paste the code.\nEDIT\nSeems like you have pasted the urls.py as an answer. Please edit the question and paste urls.py there.\nAnyway, the error has changed. What did you do? In this new error, urls.py is not found anymore so maybe you've renamed it? Have you changed the way you run the application?\nThe file you pasted is not the one that is running. Are you pasting url.py and django is reading urls.py? The code in the error doesn't match the code you pasted! Please paste the correct file, i.e. the same that gives the error, or we can't help.\n"
] | [
21
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000472503_django_python.txt |
Q:
Customizing an Admin form in Django while also using autodiscover
I want to modify a few tiny details of Django's built-in django.contrib.auth module. Specifically, I want a different form that makes username an email field (and email an alternate email address). (I'd rather not modify auth any more than necessary -- a simple form change seems to be all that's needed.)
When I use autodiscover with a customized ModelAdmin for auth I wind up conflicting with auth's own admin interface and get an "already registered" error.
It looks like I have to create my own admin site, enumerating all of my Models. It's only 18 classes, but it seems like a DRY problem -- every change requires both adding to the Model and adding to the customized admin site.
Or, should I write my own version of "autodiscover with exclusions" to essentially import all the admin modules except auth?
A:
None of the above. Just use admin.site.unregister(). Here's how I recently added filtering Users on is_active in the admin (n.b. is_active filtering is now on the User model by default in Django core; still works here as an example), all DRY as can be:
from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from django.contrib.auth.models import User
class MyUserAdmin(UserAdmin):
list_filter = UserAdmin.list_filter + ('is_active',)
admin.site.unregister(User)
admin.site.register(User, MyUserAdmin)
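If you also want the email-as-username form change from the question, here is a minimal sketch building on the same unregister/register pattern; the form and admin class names are made up, but UserChangeForm is the stock auth form:
from django import forms
from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from django.contrib.auth.forms import UserChangeForm
from django.contrib.auth.models import User

class EmailUsernameChangeForm(UserChangeForm):  # hypothetical name
    username = forms.EmailField(label='Email address')

class EmailUserAdmin(UserAdmin):
    form = EmailUsernameChangeForm

admin.site.unregister(User)
admin.site.register(User, EmailUserAdmin)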
A:
I think it might be easier to do this with a custom auth backend and thus remove the need for a customized ModelAdmin.
I did something similar with this snippet:
http://www.djangosnippets.org/snippets/74/
| Customizing an Admin form in Django while also using autodiscover | I want to modify a few tiny details of Django's built-in django.contrib.auth module. Specifically, I want a different form that makes username an email field (and email an alternate email address). (I'd rather not modify auth any more than necessary -- a simple form change seems to be all that's needed.)
When I use autodiscover with a customized ModelAdmin for auth I wind up conflicting with auth's own admin interface and get an "already registered" error.
It looks like I have to create my own admin site, enumerating all of my Models. It's only 18 classes, but it seems like a DRY problem -- every change requires both adding to the Model and adding to the customized admin site.
Or, should I write my own version of "autodiscover with exclusions" to essentially import all the admin modules except auth?
| [
"None of the above. Just use admin.site.unregister(). Here's how I recently added filtering Users on is_active in the admin (n.b. is_active filtering is now on the User model by default in Django core; still works here as an example), all DRY as can be:\nfrom django.contrib import admin\nfrom django.contrib.auth.admin import UserAdmin\nfrom django.contrib.auth.models import User\n\nclass MyUserAdmin(UserAdmin):\n list_filter = UserAdmin.list_filter + ('is_active',)\n\nadmin.site.unregister(User)\nadmin.site.register(User, MyUserAdmin)\n\n",
"I think it might be easier to do this with a custom auth backend and thus remove the need for a customized ModelAdmin.\nI did something similar with this snippet:\nhttp://www.djangosnippets.org/snippets/74/\n"
] | [
53,
2
] | [] | [] | [
"customization",
"django",
"django_admin",
"forms",
"python"
] | stackoverflow_0000471550_customization_django_django_admin_forms_python.txt |
Q:
How to specify uniqueness for a tuple of fields in a Django model
Is there a way to specify a Model in Django such that it ensures that a pair of fields is unique in the table, in a way similar to the "unique=True" attribute for a single field?
Or do I need to check this constraint in the clean() method?
A:
There is a Meta option called unique_together. For example:
class MyModel(models.Model):
field1 = models.BlahField()
field2 = models.FooField()
field3 = models.BazField()
class Meta:
unique_together = ("field1", "field2")
More info on the Django documentation page.
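As a hedged illustration of the enforcement at runtime (the values are made up), saving a second row with the same pair raises an IntegrityError from the database:
from django.db import IntegrityError

MyModel.objects.create(field1='a', field2='b', field3='x')
try:
    MyModel.objects.create(field1='a', field2='b', field3='y')
except IntegrityError:
    pass  # the duplicate (field1, field2) pair is rejected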
| How to specify uniqueness for a tuple of fields in a Django model | Is there a way to specify a Model in Django such that it ensures that a pair of fields is unique in the table, in a way similar to the "unique=True" attribute for a single field?
Or do I need to check this constraint in the clean() method?
| [
"There is a META option called unique_together. For example:\nclass MyModel(models.Model):\n field1 = models.BlahField()\n field2 = models.FooField()\n field3 = models.BazField()\n\n class Meta:\n unique_together = (\"field1\", \"field2\")\n\nMore info on the Django documentation page.\n"
] | [
42
] | [] | [] | [
"django",
"django_models",
"python"
] | stackoverflow_0000472392_django_django_models_python.txt |
Q:
reading/writing XMP metadata on pdf files through pypdf
I can read XMP metadata through pyPdf with this code:
a = pyPdf.PdfFileReader(open(self.fileName))
b = a.getXmpMetadata()
c = b.pdf_keywords
but: is this the best way?
And if I don't use the pdf_keywords property?
And is there any way to set this metadata with pyPdf?
A:
As far as I can see, this is the best way to do so - and there is no way to change the metadata with pyPDF.
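As for reading keywords without the pdf_keywords property, one hedged alternative is the classic document Info dictionary, which pyPdf also exposes; the filename and the presence of a /Keywords entry are assumptions:
import pyPdf

reader = pyPdf.PdfFileReader(open('example.pdf', 'rb'))
info = reader.getDocumentInfo()  # may be None for some files
keywords = info.get('/Keywords') if info else None
print keywords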
| reading/writing XMP metadata on pdf files through pypdf | I can read XMP metadata through pyPdf with this code:
a = pyPdf.PdfFileReader(open(self.fileName))
b = a.getXmpMetadata()
c = b.pdf_keywords
but: is this the best way?
And if I don't use the pdf_keywords property?
And is there any way to set this metadata with pyPdf?
| [
"As far as I can see, this is the best way to do so - and there is no way to change the metadata with pyPDF.\n"
] | [
3
] | [] | [] | [
"metadata",
"pdf",
"pypdf",
"python",
"xmp"
] | stackoverflow_0000466692_metadata_pdf_pypdf_python_xmp.txt |
Q:
Subtract from an input appended list with a running balance output
Noob
I am trying to write a script that gives a running balance. I am messing up on the elementary declared functions of python.
I need it to:
accept a balance via input
append a list of transactions
take those out one by one in the order they were input
print a running total
use PyHtmlTable to make the output into HTML-table-ready form for copying and pasting
Code:
# transaction posting on available balance
import PyHtmlTable
import twodarr
import string,re
import copy
import sys
posting_trans = [] #creating a list of posting debits here
avail_bal = int(input('What is the balance available to pay transactions?')) #getting the starting balance
while True: #building up the list of transactions
ans = input('Please enter the debits in order of posting one at a time. If there is no more, please enter 0:')
if int(ans) == 0:
break
if ans > 0: # to get out of loop
posting_trans.append(ans)
num_trans = int(len(posting_trans)) #counting the number of transactions
print "<b> Beginning available balance of",avail_bal," </b> " # start of the html table
tabledict = {'width':'400','border':2,'bgcolor':'white'}
t = PyHtmlTable.PyHtmlTable( 2, 1 , tabledict )
t.setCellcontents(0,0,"Transactions") #header cells
t.setCellcontents(1,0,"Available Balance")
while True: #trying to create the rest of a dynamic table
if countdown == 0:
break
for countdown in range(1,num_trans):
t.add_row(1)
def newer_bal():
newer_bal(avail_bal - posting_trans[countdown])
t.setCellcontents(0, 1, posting_trans[countdown])
t.setCellcontents(1, 1, newer_bal)
t.display()
A:
Something like that?
# transaction posting on available balance
import PyHtmlTable
posting_trans = [] #creating a list of posting debits here
#getting the starting balance
print 'What is the balance available to pay transactions? '
avail_bal = float(raw_input('Value: '))
while True: #building up the list of transactions
print 'Please enter the debits in order of posting one at a time.'
print 'If there is no more, please enter 0:'
ans = float(raw_input('Value: '))
if ans == 0:
break # to get out of loop
posting_trans.append(ans)
# start of the html table
print "<b> Beginning available balance of %.2f</b>" % avail_bal
tabledict = {'width': '400', 'border': 2, 'bgcolor': 'white'}
t = PyHtmlTable.PyHtmlTable(2, 1, tabledict)
t.setCellcontents(0, 0, "Transaction Value") #header cells
t.setCellcontents(0, 1, "Available Balance")
for line, trans in enumerate(posting_trans):
avail_bal -= trans
t.setCellcontents(line + 1, 0, '%.2f' % trans)
t.setCellcontents(line + 1, 1, '%.2f' % avail_bal)
t.display()
Hints:
Don't use input(). Use raw_input() instead. It has been renamed to input() in python 3.0.
You don't need to store the values in the list. You could store them in the table already, that is the point into using PyHtmlTable. I left the list for didactic purposes.
Read a tutorial. Read documentation. Write lots of code.
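A sketch of hint 2 above, writing each transaction straight into the table instead of keeping a list (same PyHtmlTable calls as in the code earlier):
line = 1
while True:
    ans = float(raw_input('Value: '))
    if ans == 0:
        break
    avail_bal -= ans
    t.setCellcontents(line, 0, '%.2f' % ans)        # transaction value
    t.setCellcontents(line, 1, '%.2f' % avail_bal)  # running balance
    line += 1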
| Subtract from an input appended list with a running balance output | Noob
I am trying to write a script that gives a running balance. I am messing up on the elementary declared functions of python.
I need it to:
accept a balance via input
append a list of transactions
take those out one by one in the order they were input
print a running total
use PyHtmlTable to make the output into HTML-table-ready form for copying and pasting
Code:
# transaction posting on available balance
import PyHtmlTable
import twodarr
import string,re
import copy
import sys
posting_trans = [] #creating a list of posting debits here
avail_bal = int(input('What is the balance available to pay transactions?')) #getting the starting balance
while True: #building up the list of transactions
ans = input('Please enter the debits in order of posting one at a time. If there is no more, please enter 0:')
if int(ans) == 0:
break
if ans > 0: # to get out of loop
posting_trans.append(ans)
num_trans = int(len(posting_trans)) #counting the number of transactions
print "<b> Beginning available balance of",avail_bal," </b> " # start of the html table
tabledict = {'width':'400','border':2,'bgcolor':'white'}
t = PyHtmlTable.PyHtmlTable( 2, 1 , tabledict )
t.setCellcontents(0,0,"Transactions") #header cells
t.setCellcontents(1,0,"Available Balance")
while True: #trying to create the rest of a dynamic table
if countdown == 0:
break
for countdown in range(1,num_trans):
t.add_row(1)
def newer_bal():
newer_bal(avail_bal - posting_trans[countdown])
t.setCellcontents(0, 1, posting_trans[countdown])
t.setCellcontents(1, 1, newer_bal)
t.display()
| [
"Something like that?\n# transaction posting on available balance\nimport PyHtmlTable \n\nposting_trans = [] #creating a list of posting debits here\n\n#getting the starting balance\nprint 'What is the balance available to pay transactions? '\navail_bal = float(raw_input('Value: ')) \n\nwhile True: #building up the list of transactions\n print 'Please enter the debits in order of posting one at a time.'\n print 'If there is no more, please enter 0:'\n ans = float(raw_input('Value: '))\n if ans == 0:\n break # to get out of loop\n posting_trans.append(ans)\n\n# start of the html table\nprint \"<b> Beginning available balance of %.2f</b>\" % avail_bal\n\ntabledict = {'width': '400', 'border': 2, 'bgcolor': 'white'}\nt = PyHtmlTable.PyHtmlTable(2, 1, tabledict)\n\nt.setCellcontents(0, 0, \"Transaction Value\") #header cells\nt.setCellcontents(0, 1, \"Available Balance\")\n\n\nfor line, trans in enumerate(posting_trans):\n avail_bal -= trans\n t.setCellcontents(line + 1, 0, '%.2f' % trans)\n t.setCellcontents(line + 1, 1, '%.2f' % avail_bal) \n\nt.display()\n\nHints:\n\nDon't use input(). Use raw_input() instead. It has been renamed to input() in python 3.0.\nYou don't need to store the values in the list. You could store them in the table already, that is the point into using PyHtmlTable. I left the list for didactic purposes.\nRead a tutorial. Read documentation. Write lots of code.\n\n"
] | [
2
] | [] | [] | [
"loops",
"python",
"running_balance"
] | stackoverflow_0000472839_loops_python_running_balance.txt |
Q:
file upload status information
I'm making a small Python script to upload files on the net. The script is working correctly, and now I want to add a simple progress bar that indicates the amount of uploading left. My question is: how do I get the upload status information from the server where I'm uploading the file, assuming it is possible... I am using curl and pycurl to make the HTTP requests in Python.
Any help will be much appreciated, Thanks!!
A:
Check out the documentation here: http://pycurl.sourceforge.net/doc/callbacks.html for callbacks. Best of luck!
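For reference, a minimal hedged sketch of a pycurl upload progress callback (the URL and filename are assumptions); PROGRESSFUNCTION receives download and upload byte totals:
import pycurl

def progress(download_t, download_d, upload_t, upload_d):
    if upload_t:
        print '%.0f%% uploaded' % (100.0 * upload_d / upload_t)

c = pycurl.Curl()
c.setopt(pycurl.URL, 'http://example.com/upload')
c.setopt(pycurl.UPLOAD, 1)
c.setopt(pycurl.READFUNCTION, open('file.bin', 'rb').read)
c.setopt(pycurl.NOPROGRESS, 0)  # the progress meter is off by default
c.setopt(pycurl.PROGRESSFUNCTION, progress)
c.perform()
c.close()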
| file upload status information | I'm making a small Python script to upload files on the net. The script is working correctly, and now I want to add a simple progress bar that indicates the amount of uploading left. My question is: how do I get the upload status information from the server where I'm uploading the file, assuming it is possible... I am using curl and pycurl to make the HTTP requests in Python.
Any help will be much appreciated, Thanks!!
| [
"Check out the documentation here: http://pycurl.sourceforge.net/doc/callbacks.html for callbacks. Best of luck!\n"
] | [
2
] | [] | [] | [
"curl",
"python",
"upload"
] | stackoverflow_0000473937_curl_python_upload.txt |
Q:
Best way to create a "runner" script in Python?
I have a bunch of Python modules in a directory, each containing a derived class. I need a "runner" script that, for each module, instantiates the class that is inside it (the actual class name can be built from the module file name) and then calls the "go" method on each of them.
I don't know how many modules are there, but I can list all of them globbing the directory via something like "bot_*.py"
I think this is something about "meta programming", but what could be the best (most elegant) way to do it?
A:
You could use __import__() to load each module, use dir() to find all objects in each module, find all objects which are classes, instantiate them, and run the go() method:
import types
for module_name in list_of_modules_to_load:
module = __import__(module_name)
for name in dir(module):
object = module.__dict__[name]
if type(object) == types.ClassType:
object().go()
A:
def run_all(path):
import glob, os
print "Exploring %s" % path
for filename in glob.glob(path + "/*.py"):
# modulename = "bot_paperino"
modulename = os.path.splitext(os.path.split(filename)[-1])[0]
# classname = "Paperino"
classname = modulename.split("bot_")[-1].capitalize()
# package = "path.bot_paperino"
package = filename.replace("\\", "/").replace("/", ".")[:-3]
mod = __import__(package)
if classname in mod.__dict__[modulename].__dict__.keys():
obj = mod.__dict__[modulename].__dict__[classname]()
if hasattr(obj, "go"):
obj.go()
if __name__ == "__main__":
import sys
# Run on each directory passed on command line
for path in sys.argv[1:]:
run_all(path)
You need a __init__.py in each path you want to "run".
Change "bot_" at your will.
Run on windows and linux.
A:
Here is one way to do this off the top of my head where I have to presume the structure of your modules a bit:
mainDir/
runner.py
package/
__init__.py
bot_moduleA.py
bot_moduleB.py
bot_moduleC.py
In runner you could find this:
import types
import package
for moduleName in dir(package):
module = package.__dict__[moduleName]
if type(module) != types.ModuleType:
continue
for klassName in dir(module):
klass = module.__dict__[klassName]
if type(klass) != types.ClassType:
continue
klass().go()
A:
I would try:
import glob
import os
filelist = glob.glob('bot_*.py')
for f in filelist:
context = {}
exec(open(f).read(), context)
klassname = os.path.basename(f)[:-3]
klass = context[klassname]
klass().go()
This will only run classes similarly named to the module, which I think is what you want. It also doesn't have the requirement of the top level directory to be a package.
Beware that glob returns the complete path, including preceding directories, hence the use of os.path.basename(f)[:-3] to get the class name.
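One more hedged variant, using inspect.isclass, which (unlike types.ClassType in the first answer) also matches new-style classes; the hasattr check guards against helper classes without a go() method:
import glob
import inspect
import os

for filename in glob.glob('bot_*.py'):
    module = __import__(os.path.splitext(filename)[0])
    for name, obj in inspect.getmembers(module, inspect.isclass):
        if hasattr(obj, 'go'):
            obj().go()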
| Best way to create a "runner" script in Python? | I have a bunch of Python modules in a directory, each containing a derived class. I need a "runner" script that, for each module, instantiates the class that is inside it (the actual class name can be built from the module file name) and then calls the "go" method on each of them.
I don't know how many modules are there, but I can list all of them globbing the directory via something like "bot_*.py"
I think this is something about "meta programming", but what could be the best (most elegant) way to do it?
| [
"You could use __import__() to load each module, use dir() to find all objects in each module, find all objects which are classes, instantiate them, and run the go() method:\nimport types\nfor module_name in list_of_modules_to_load:\n module = __import__(module_name)\n for name in dir(module):\n object = module.__dict__[name]\n if type(object) == types.ClassType:\n object().go()\n\n",
"def run_all(path):\n import glob, os\n print \"Exploring %s\" % path\n for filename in glob.glob(path + \"/*.py\"):\n # modulename = \"bot_paperino\"\n modulename = os.path.splitext(os.path.split(filename)[-1])[0]\n # classname = \"Paperino\"\n classname = modulename.split(\"bot_\")[-1].capitalize()\n # package = \"path.bot_paperino\"\n package = filename.replace(\"\\\\\", \"/\").replace(\"/\", \".\")[:-3]\n mod = __import__(package)\n if classname in mod.__dict__[modulename].__dict__.keys():\n obj = mod.__dict__[modulename].__dict__[classname]()\n if hasattr(obj, \"go\"):\n obj.go()\n\nif __name__ == \"__main__\":\n import sys\n # Run on each directory passed on command line\n for path in sys.argv[1:]:\n run_all(sys.argv[1])\n\nYou need a __init__.py in each path you want to \"run\".\nChange \"bot_\" at your will.\nRun on windows and linux.\n",
"Here is one way to do this off the top of my head where I have to presume the structure of your modules a bit:\nmainDir/\n runner.py\n package/\n __init__.py\n bot_moduleA.py\n bot_moduleB.py\n bot_moduleC.py\nIn runner you could find this:\n\nimport types\nimport package\n\nfor moduleName in dir(package):\n module = package.__dict__[moduleName]\n if type(module) != types.ModuleType:\n continue\n\n for klassName in dir(module):\n klass = module.__dict__[klassName]\n if type(klass) != types.ClassType:\n continue\n klass().go()\n\n",
"I would try:\nimport glob\nimport os\n\nfilelist = glob.glob('bot_*.py')\nfor f in filelist:\n context = {}\n exec(open(f).read(), context)\n klassname = os.path.basename(f)[:-3] \n klass = context[klassname]\n klass().go()\n\nThis will only run classes similarly named to the module, which I think is what you want. It also doesn't have the requirement of the top level directory to be a package.\nBeware that glob returns the complete path, including preceding directories, hence the use os.path.basename(f)[:-3] to get the class name.\n"
] | [
4,
3,
1,
1
] | [] | [] | [
"metaprogramming",
"python"
] | stackoverflow_0000473961_metaprogramming_python.txt |
Q:
Getting attributes of a Python package that I don't have the name of, until runtime
In a Python package, I have a string containing (presumably) the name of a subpackage. From that subpackage, I want to retrieve a tuple of constants...I'm really not even sure how to proceed in doing this, though.
#!/usr/bin/python
"" The Alpha Package
Implements functionality of a base package under the 'alpha' namespace
""
def get_params(packagename):
# Here, I want to get alpha.<packagename>.REQUIRED_PARAMS
pass
So, later in my code I might have:
#!/usr/bin/python
import alpha
alpha.get_params('bravo') # should return alpha.bravo.REQUIRED_PARAMS
alpha.get_params('charlie') # should return alpha.charlie.REQUIRED_PARAMS
A:
If I correctly understand what you want, I think something roughly like this should work:
def get_params(packagename):
    # fromlist makes __import__ return the submodule itself rather than
    # the top-level 'alpha' package
    module = __import__('alpha.%s' % packagename, fromlist=['REQUIRED_PARAMS'])
    return module.REQUIRED_PARAMS
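Hypothetical usage, assuming alpha/bravo.py defines a REQUIRED_PARAMS tuple:
print get_params('bravo')  # e.g. ('host', 'port')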
| Getting attributes of a Python package that I don't have the name of, until runtime | In a Python package, I have a string containing (presumably) the name of a subpackage. From that subpackage, I want to retrieve a tuple of constants...I'm really not even sure how to proceed in doing this, though.
#!/usr/bin/python
"" The Alpha Package
Implements functionality of a base package under the 'alpha' namespace
""
def get_params(packagename):
# Here, I want to get alpha.<packagename>.REQUIRED_PARAMS
pass
So, later in my code I might have:
#!/usr/bin/python
import alpha
alpha.get_params('bravo') # should return alpha.bravo.REQUIRED_PARAMS
alpha.get_params('charlie') # should return alpha.charlie.REQUIRED_PARAMS
| [
"If I correctly understand what you want, I think something roughly like this should work:\ndef get_params(packagename):\n module = __import__('alpha.%s' % packagename)\n return module.__dict__['REQUIRED_PARAMS']\n\n"
] | [
5
] | [] | [] | [
"package",
"python"
] | stackoverflow_0000474331_package_python.txt |
Q:
Python: Use the codecs module or use string function decode?
I have a text file that is encoded in UTF-8. I'm reading it in to analyze and plot some data. I would like the file to be read in as ascii. Would it be best to use the codecs module or use the builtin string decode method? Also, the file is divided up as a csv, so could the csv module also be a valid solution?
Thanks for your help.
A:
Do you mean that your file is encoded in UTF-8? ("Unicode" is not an encoding... Required reading: http://www.joelonsoftware.com/articles/Unicode.html) I'm not 100% sure but I think you should be able to read a UTF-8 encoded file with the csv module, and you can convert the strings which contain special characters to Python's unicode strings (edit: if you need to) after reading.
There are a few examples of using csv with UTF-8 encoded data at http://docs.python.org/library/csv.html#csv-examples; it might help you to look at them.
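For instance, a minimal sketch of reading a UTF-8 CSV with the stdlib csv module and decoding each cell to a unicode string (the filename is an assumption; csv wants binary mode on Python 2):
import csv

f = open('data.csv', 'rb')
for row in csv.reader(f):
    row = [cell.decode('utf-8') for cell in row]
    print row
f.close()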
| Python: Use the codecs module or use string function decode? | I have a text file that is encoded in UTF-8. I'm reading it in to analyze and plot some data. I would like the file to be read in as ascii. Would it be best to use the codecs module or use the builtin string decode method? Also, the file is divided up as a csv, so could the csv module also be a valid solution?
Thanks for your help.
| [
"Do you mean that your file is encoded in UTF-8? (\"Unicode\" is not an encoding... Required reading: http://www.joelonsoftware.com/articles/Unicode.html) I'm not 100% sure but I think you should be able to read a UTF-8 encoded file with the csv module, and you can convert the strings which contain special characters to Python's unicode strings (edit: if you need to) after reading.\nThere are a few examples of using csv with UTF-8 encoded data at http://docs.python.org./library/csv.html#csv-examples; it might help you to look at them.\n"
] | [
5
] | [] | [] | [
"codec",
"csv",
"decode",
"python",
"unicode"
] | stackoverflow_0000474373_codec_csv_decode_python_unicode.txt |
Q:
Non-ascii string in verbose_name argument when declaring DB field in Django
I declare this:
#This file is using encoding:utf-8
...
class Buddy(models.Model):
name=models.CharField('ФИО',max_length=200)
...
... in models.py. manage.py syncdb works smoothly. However when I go to the admin interface and try to add a new Buddy I catch a DjangoUnicodeDecodeError, which says: "'utf8' codec can't decode bytes in position 0-1: invalid data. You passed in '\xd4\xc8\xce' (<type 'str'>)".
I'm using sqlite3, so all strings are stored as bytestrings encoded in utf8 there. Django's encoding is also utf8. Seen django's docs on this topic, no idea.
UPD: Eventually I figured out what the problem was. It turned out to be that I'd saved my source in ANSI encoding.
Solution: I saved the source in UTF-8 and it worked wonders.
A:
First, I would explicitly define your description as a Unicode string:
class Buddy(models.Model):
name=models.CharField(u'ФИО',max_length=200)
Note the 'u' in u'ФИО'.
Secondly, do you have a __unicode__() function defined on your model? If so, make sure that it returns a Unicode string. It's very likely you're getting this error when the Admin interface tries to access the unicode representation of the model, not when it's added to the database. If you're returning a non-unicode string from __unicode__(), it may cause this problem.
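A sketch of such a __unicode__ method returning a unicode string, reusing the field from the question's model:
class Buddy(models.Model):
    name = models.CharField(u'ФИО', max_length=200)

    def __unicode__(self):
        return self.name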
| Non-ascii string in verbose_name argument when declaring DB field in Django | I declare this:
#This file is using encoding:utf-8
...
class Buddy(models.Model):
name=models.CharField('ФИО',max_length=200)
...
... in models.py. manage.py syncdb works smoothly. However when I go to the admin interface and try to add a new Buddy I catch a DjangoUnicodeDecodeError, which says: "'utf8' codec can't decode bytes in position 0-1: invalid data. You passed in '\xd4\xc8\xce' (<type 'str'>)".
I'm using sqlite3, so all strings are stored as bytestrings encoded in utf8 there. Django's encoding is also utf8. Seen django's docs on this topic, no idea.
UPD: Eventually I figured out what the problem was. It turned out to be that I'd saved my source in ANSI encoding.
Solution: I saved the source in UTF-8 and it worked wonders.
| [
"First, I would explicitly define your description as a Unicode string:\nclass Buddy(models.Model):\n name=models.CharField(u'ФИО',max_len)\n\nNote the 'u' in u'ФИО'.\nSecondly, do you have a __unicode__() function defined on your model? If so, make sure that it returns a Unicode string. It's very likely you're getting this error when the Admin interface tries to access the unicode representation of the model, not when it's added to the database. If you're returning a non-unicode string from __unicode__(), it may cause this problem.\n"
] | [
5
] | [] | [] | [
"django",
"python",
"unicode"
] | stackoverflow_0000475073_django_python_unicode.txt |
Q:
How do I construct the packets for this UDP protocol?
Valve Software's Steam Server Query protocol as documented here allows you to query their game servers for various data. This is a little out of my depth and I'm looking for a little guidance as to what I need to learn.
I'm assuming I'll need socket and struct, correct?
I'm comfortable with basic UDP tasks like these, so I guess my main question is how do I construct my data with struct, as I'm completely unfamiliar with it.?
A:
I found an answer to my own question. Yay.
SRCDS.py has this implemented already and I figured it out by looking it over.
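For the record, a hedged sketch of the classic A2S_INFO request from that protocol (the server address is an assumption); the 4-byte -1 header is packed with struct:
import socket
import struct

packet = struct.pack('<i', -1) + 'TSource Engine Query\x00'
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3.0)
sock.sendto(packet, ('server.example.com', 27015))
data, addr = sock.recvfrom(1400)
print repr(data)  # raw A2S_INFO reply, to be unpacked with struct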
| How do I construct the packets for this UDP protocol? | Valve Software's Steam Server Query protocol as documented here allows you to query their game servers for various data. This is a little out of my depth and I'm looking for a little guidance as to what I need to learn.
I'm assuming I'll need socket and struct, correct?
I'm comfortable with basic UDP tasks like these, so I guess my main question is how do I construct my data with struct, as I'm completely unfamiliar with it.?
| [
"I found an answer to my own question. Yay.\nSRCDS.py has this implemented already and I figured it out by looking it over.\n"
] | [
1
] | [] | [] | [
"network_programming",
"network_protocols",
"python"
] | stackoverflow_0000474934_network_programming_network_protocols_python.txt |
Q:
A QuerySet by aggregate field value
Let's say I have the following model:
class Contest(models.Model):
title = models.CharField( max_length = 200 )
description = models.TextField()
class Image(models.Model):
title = models.CharField( max_length = 200 )
description = models.TextField()
contest = models.ForeignKey( Contest )
user = models.ForeignKey( User )
def score( self ):
return self.vote_set.all().aggregate( models.Sum( 'value' ) )[ 'value__sum' ]
class Vote(models.Model):
value = models.SmallIntegerField()
user = models.ForeignKey( User )
image = models.ForeignKey( Image )
The users of a site can contribute their images to several contests. Then other users can vote them up or down.
Everything works fine, but now I want to display a page on which users can see all contributions to a certain contest. The images shall be ordered by their score.
Therefore I have tried the following:
Contest.objects.get( pk = id ).image_set.order_by( 'score' )
As I feared it doesn't work since 'score' is no database field that could be used in queries.
A:
Oh, of course, I forgot about the new aggregation support in Django and its annotate functionality.
So the query may look like this:
Contest.objects.get(pk=id).image_set.annotate(score=Sum('vote__value')).order_by( 'score' )
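A short usage sketch (reusing id from the question): each annotated image then carries a score attribute computed in the database:
from django.db.models import Sum

images = (Contest.objects.get(pk=id)
          .image_set.annotate(score=Sum('vote__value'))
          .order_by('-score'))  # highest score first
for image in images:
    print image.title, image.score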
A:
You can write your own sort in Python very simply.
def getScore( anObject ):
return anObject.score()
objects= list(Contest.objects.get( pk = id ).image_set.all())
objects.sort( key=getScore )
This works nicely because we sorted the list, which we're going to provide to the template.
A:
The db-level order_by cannot sort a queryset by a model's Python method.
The solution is to introduce a score field on the Image model and recalculate it on every Vote update (some sort of denormalization). Then you will be able to sort by it.
| A QuerySet by aggregate field value | Let's say I have the following model:
class Contest(models.Model):
title = models.CharField( max_length = 200 )
description = models.TextField()
class Image(models.Model):
title = models.CharField( max_length = 200 )
description = models.TextField()
contest = models.ForeignKey( Contest )
user = models.ForeignKey( User )
def score( self ):
return self.vote_set.all().aggregate( models.Sum( 'value' ) )[ 'value__sum' ]
class Vote(models.Model):
value = models.SmallIntegerField()
user = models.ForeignKey( User )
image = models.ForeignKey( Image )
The users of a site can contribute their images to several contests. Then other users can vote them up or down.
Everything works fine, but now I want to display a page on which users can see all contributions to a certain contest. The images shall be ordered by their score.
Therefore I have tried the following:
Contest.objects.get( pk = id ).image_set.order_by( 'score' )
As I feared it doesn't work since 'score' is no database field that could be used in queries.
| [
"Oh, of course I forget about new aggregation support in Django and its annotate functionality.\nSo query may look like this:\nContest.objects.get(pk=id).image_set.annotate(score=Sum('vote__value')).order_by( 'score' )\n\n",
"You can write your own sort in Python very simply.\ndef getScore( anObject ):\n return anObject.score()\nobjects= list(Contest.objects.get( pk = id ).image_set)\nobjects.sort( key=getScore )\n\nThis works nicely because we sorted the list, which we're going to provide to the template.\n",
"The db-level order_by cannot sort queryset by model's python method.\nThe solution is to introduce score field to Image model and recalculate it on every Vote update. Some sort of denormalization. When you will can to sort by it.\n"
] | [
47,
9,
2
] | [] | [] | [
"database",
"django",
"python"
] | stackoverflow_0000476017_database_django_python.txt |
Q:
python IPC (Inter Process Communication) for Vista UAC (User Access Control)
I am writing a Filemanager in (wx)python - a lot already works. When copying files there is already a progress dialog, overwrite handling etc.
Now in Vista when the user wants to copy a file to certain directories (eg %Program Files%) the application/script needs elevation, which cannot be asked for at runtime. So i have to start another app/script elevated, which does the work, but needs to communicate with the main app, so latter can update the progress etc.
I searched and found a lot of articles saying shared memory and pipes are the easiest way. So what I am looking for is a 'high level' platform independent ipc library with python bindings using shared mem or pipes.
I already found omniORB, fnorb, etc. They look very interesting, but use TCP/IP, is there an equivalent lib using shared mem or pipes? Since the ipc-client is always on the same machine, sockets seem not to be necessary here. And I am also afraid the user would have to allow ipc-socket-communications on his/her personal firewall.
EDIT: I really mean high level: it would be great to be able to just call some functions like when using omniORB instead of sending strings to stdin/stdout.
A:
How about just communicating with the second process using stdin/stdout?
There are some caveats due to input and output buffering, but take a look at this Python Cookbook recipe, and also Pexpect, for ideas on how to do this.
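A minimal hedged sketch of that approach with subprocess (the worker script name and one-line message format are assumptions):
import subprocess

child = subprocess.Popen(['python', 'elevated_worker.py'],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
child.stdin.write('COPY src dst\n')  # made-up command protocol
child.stdin.flush()
print child.stdout.readline()  # e.g. a progress line from the worker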
| python IPC (Inter Process Communication) for Vista UAC (User Access Control) | I am writing a Filemanager in (wx)python - a lot already works. When copying files there is already a progress dialog, overwrite handling etc.
Now in Vista when the user wants to copy a file to certain directories (eg %Program Files%) the application/script needs elevation, which cannot be asked for at runtime. So i have to start another app/script elevated, which does the work, but needs to communicate with the main app, so latter can update the progress etc.
I searched and found a lot of articles saying shared memory and pipes are the easiest way. So what I am looking for is a 'high level' platform independent ipc library with python bindings using shared mem or pipes.
I already found omniORB, fnorb, etc. They look very interesting, but use TCP/IP, is there an equivalent lib using shared mem or pipes? Since the ipc-client is always on the same machine, sockets seem not to be necessary here. And I am also afraid the user would have to allow ipc-socket-communications on his/her personal firewall.
EDIT: I really mean high level: it would be great to be able to just call some functions like when using omniORB instead of sending strings to stdin/stdout.
| [
"How about just communicating with the second process using stdin/stdout?\nThere are some caveats due to input and output buffering, but take a look at this Python Cookbook recipe, and also Pexpect, for ideas on how to do this. \n"
] | [
2
] | [] | [] | [
"ipc",
"python",
"vista_security",
"windows_vista",
"wxpython"
] | stackoverflow_0000475928_ipc_python_vista_security_windows_vista_wxpython.txt |
Q:
PostgreSQL procedural languages: to choose?
I have been working with PostgreSQL, playing around with Wikipedia's millions of hyperlinks and such, for 2 years now. I either do my thing directly by sending SQL commands, or I write a client side script in python to manage a million queries when this cannot be done productively (efficiently and effectively) manually.
I would run my python script on my 32bit laptop and have it communicate with a $6000 64bit server running PostgreSQL; I would hence have an extra 2.10 Ghz, 3 GB of RAM, psyco and a multithreaded SQL query manager.
I now realize that it is time for me to level up. I need to learn to server-side script using a procedural language (PL); I really need to reduce network traffic and its inherent serializing overhead.
Now, I really do not feel like researching all the PLs. Knowing that I already know python, and that I am looking for the means between effort and language efficiency, what PL do you guys fancy I should install, learn and use, and why and how?
A:
Since you already know python, PL/Python should be something to look at. And you sound like you write SQL for your database queries, so PL/SQL is a natural extension of that.
PL/SQL feels like SQL, just with all the stuff that you would expect from SQL anyway, like variables for whole rows and the usual control structures of procedural languages. It fits the way you usually interact with the database, but it's not the most elegant language of all time. I can't say anything about PL/Python, since I have never used it, but since you know python it should be easy to flip through some examples and see if you like it.
A:
Why can't you run your Python on the database server? That has the fewest complexities -- you can run the program you already have.
A:
I was in the exact same situation as you and went with PL/Python after giving up on PL/SQL after a while. It was a good decision, looking back. Some things that bit me were unicode issues (client encoding, byte sequence) and specific postgres data types (bytea).
| PostgreSQL procedural languages: to choose? | I have been working with PostgreSQL, playing around with Wikipedia's millions of hyperlinks and such, for 2 years now. I either do my thing directly by sending SQL commands, or I write a client side script in python to manage a million queries when this cannot be done productively (efficiently and effectively) manually.
I would run my python script on my 32bit laptop and have it communicate with a $6000 64bit server running PostgreSQL; I would hence have an extra 2.10 Ghz, 3 GB of RAM, psyco and a multithreaded SQL query manager.
I now realize that it is time for me to level up. I need to learn to server-side script using a procedural language (PL); I really need to reduce network traffic and its inherent serializing overhead.
Now, I really do not feel like researching all the PLs. Knowing that I already know python, and that I am looking for the means between effort and language efficiency, what PL do you guys fancy I should install, learn and use, and why and how?
| [
"Since you already known python, PL/Python should be something to look at. And you sound like you write SQL for your database queries, so PL/SQL is a natural extension of that.\nPL/SQL feels like SQL, just with all the stuff that you would expect from SQL anyway, like variables for whole rows and the usual control structures of procedural languages. It fits the way you usually interact with the database, but it's not the most elegant language of all time. I can't say anything about PL/Python, since I have never used it, but since you know python it should be easy to flip through some examples and see if you like it.\n",
"Why can't you run your Python on the database server? That has the fewest complexities -- you can run the program you already have.\n",
"I was in the exact same situation as you and went with PL/Python after giving up on PL/SQL after a while. It was a good decision, looking back. Some things that bit me where unicode issues (client encoding, byte sequence) and specific postgres data types (bytea).\n"
] | [
6,
2,
1
] | [] | [] | [
"postgresql",
"python"
] | stackoverflow_0000475302_postgresql_python.txt |
Q:
group by in django
How can i create simple group by query in trunk version of django?
I need something like
SELECT name
FROM mytable
GROUP BY name
Actually, what I want to do is simply get all entries with distinct names.
A:
If you need all the distinct names, just do this:
Foo.objects.values('name').distinct()
And you'll get a list of dictionaries, each one with a name key. If you need other data, just add more attribute names as parameters to the .values() call. Of course, if you add in attributes that may vary between rows with the same name, you'll break the .distinct().
This won't help if you want to get complete model objects back. But getting distinct names and getting full data are inherently incompatible goals anyway; how do you know which row with a given name you want returned in its entirety? If you want to calculate some sort of aggregate data for all the rows with a given name, aggregation support was recently added to Django trunk and can take care of that for you.
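As a hedged sketch of that aggregation route, reusing the Foo model from above, values() plus annotate() groups the rows by name:
from django.db.models import Count

Foo.objects.values('name').annotate(num_rows=Count('id'))
# -> one dict per distinct name, e.g. [{'name': ..., 'num_rows': ...}, ...]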
A:
Add .distinct() to your queryset:
Entries.objects.filter(something='xxx').distinct()
A:
This will not work because every row has a unique id, so every record is distinct.
To solve my problem I used
foo = Foo.objects.all()
foo.query.group_by = ['name']
but this is not an official API.
| group by in django | How can I create a simple group-by query in the trunk version of Django?
I need something like
SELECT name
FROM mytable
GROUP BY name
Actually, what I want to do is simply get all entries with distinct names.
| [
"If you need all the distinct names, just do this:\nFoo.objects.values('name').distinct()\n\nAnd you'll get a list of dictionaries, each one with a name key. If you need other data, just add more attribute names as parameters to the .values() call. Of course, if you add in attributes that may vary between rows with the same name, you'll break the .distinct().\nThis won't help if you want to get complete model objects back. But getting distinct names and getting full data are inherently incompatible goals anyway; how do you know which row with a given name you want returned in its entirety? If you want to calculate some sort of aggregate data for all the rows with a given name, aggregation support was recently added to Django trunk and can take care of that for you.\n",
"Add .distinct to your queryset:\nEntries.objects.filter(something='xxx').distinct()\n\n",
"this will not work because every row have unique id. So every record is distinct.. \nTo solve my problem i used \nfoo = Foo.objects.all()\nfoo.query.group_by = ['name']\n\nbut this is not official API.\n"
] | [
12,
3,
2
] | [] | [] | [
"django",
"django_models",
"python"
] | stackoverflow_0000475552_django_django_models_python.txt |
Q:
Queryset API distinct() does not work?
class Message(models.Model):
subject = models.CharField(max_length=100)
pub_date = models.DateTimeField(default=datetime.now())
class Topic(models.Model):
title = models.CharField(max_length=100)
message = models.ManyToManyField(Message, verbose_name='Discussion')
I want to get all the topics ordered according to the latest Message object attached to each topic.
I executed this query, but it does not give a distinct queryset.
>> Topic.objects.order_by('-message__pub_date').distinct()
A:
You don't need distinct() here, what you need is aggregation. This query will do what you want:
from django.db.models import Max
Topic.objects.annotate(Max('message__pub_date')).order_by('-message__pub_date__max')
Though if this is production code, you'll probably want to follow akaihola's advice and denormalize "last_message_posted" onto the Topic model directly.
Also, there's an error in your default value for Message.pub_date. As you have it now, whenever you first run the server and this code is loaded, datetime.now() will be executed once and that value will be used as the pub_date for all Messages. Use this instead to pass the callable itself so it isn't called until each Message is created:
pub_date = models.DateTimeField(default=datetime.now)
A:
You'll find the explanation in the documentation for .distinct().
I would de-normalize by adding a modified_date field to the Topic model and updating it whenever a Message is saved or deleted.
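A hedged sketch of that denormalization via a post_save signal; it assumes a modified_date field added to Topic, and uses the reverse many-to-many manager from the question's models:
from django.db.models.signals import post_save

def touch_topics(sender, instance, **kwargs):
    # bump every topic this message belongs to
    instance.topic_set.update(modified_date=instance.pub_date)

post_save.connect(touch_topics, sender=Message)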
| Queryset API distinct() does not work? | class Message(models.Model):
subject = models.CharField(max_length=100)
pub_date = models.DateTimeField(default=datetime.now())
class Topic(models.Model):
title = models.CharField(max_length=100)
message = models.ManyToManyField(Message, verbose_name='Discussion')
I want to get all the topics ordered according to the latest Message object attached to each topic.
I executed this query, but it does not give a distinct queryset.
>> Topic.objects.order_by('-message__pub_date').distinct()
| [
"You don't need distinct() here, what you need is aggregation. This query will do what you want:\nfrom django.db.models import Max\nTopic.objects.annotate(Max('message__pub_date')).order_by('-message__pub_date__max')\n\nThough if this is production code, you'll probably want to follow akaihola's advice and denormalize \"last_message_posted\" onto the Topic model directly.\nAlso, there's an error in your default value for Message.pub_date. As you have it now, whenever you first run the server and this code is loaded, datetime.now() will be executed once and that value will be used as the pub_date for all Messages. Use this instead to pass the callable itself so it isn't called until each Message is created:\npub_date = models.DateTimeField(default=datetime.now)\n\n",
"You'll find the explanation in the documentation for .distinct().\nI would de-normalize by adding a modified_date field to the Topic model and updating it whenever a Message is saved or deleted.\n"
] | [
7,
3
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000453477_django_python.txt |
Q:
Trimming Mako output
I really like the Mako templating system that's used in Pylons and a couple other Python frameworks, and my only complaint is how much whitespace (WS) leaks through even a simple inheritance scheme.
Is there any way to accomplish the below without creating such huge WS gaps, or packing my code in like I started to do with base.mako?
Otherwise, to get a grip on what I'm trying to accomplish with the below:
Base is kind of like an interface class for all views for the entire application, layout is just a prototype idea for 3-4 different layout files ( tables, pure CSS, etc ), and controller/action is a test to make sure my ideas are sane.
Short summary of question: How to cut out the WS created in my Mako scheme?
Update: the following is not a solution, because it involves seeding all of my mako files with \'s:
http://www.makotemplates.org/docs/syntax.html#syntax_newline
/base.mako
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head><%def name="headtags()"></%def>${self.headtags()}</head>
<body>
<%def name="header()"></%def>${self.header()}${next.body()}<%def name="footer()"></%def>${self.footer()}
</body>
</html>
/layout.mako
<%inherit file="/base.mako"/>
<%def name="headtags()">
${parent.headtags()}
<script src="http://ajax.googleapis.com/ajax/libs/prototype/1.6.0.3/prototype.js"></script>
</%def>
<%def name="header()">
<h1>My Blogination</h1>
</%def>
<div id="content">${next.body()}</div>
/controller/action.mako
<%inherit file="/layout.mako" />
<%def name="headtags()">
<title> Hello world, templating system is 1 percent done</title>
${parent.headtags()}
</%def>
Hello ${c.name}!
rendered output:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
<title> Hello world, templating system is 1 percent done</title>
<script src="http://ajax.googleapis.com/ajax/libs/prototype/1.6.0.3/prototype.js"></script>
</head>
<body>
<h1>My Blogination</h1>
<div id="content">
Hello Anonymous!
</div>
</body>
</html>
A:
Found my own answer
http://docs.makotemplates.org/en/latest/filtering.html
It still required some trial and error, but using
t = TemplateLookup(directories=['/tmp'], default_filters=['trim'])
dramatically cut down on whitespace bleed. Additional savings can be found by checking the compiled templates and looking for any writes that are just pushing ' ' or similar.
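A minimal sketch of that setup (the template directory and name are assumptions):
from mako.lookup import TemplateLookup

lookup = TemplateLookup(directories=['/path/to/templates'],
                        default_filters=['trim'])
print lookup.get_template('page.mako').render()  # whitespace trimmed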
| Trimming Mako output | I really like the Mako templating system that's used in Pylons and a couple other Python frameworks, and my only complaint is how much whitespace (WS) leaks through even a simple inheritance scheme.
Is there any way to accomplish the below without creating such huge WS gaps, or packing my code in like I started to do with base.mako?
Otherwise, to get a grip on what I'm trying to accomplish with the below:
Base is kind of like an interface class for all views for the entire application, layout is just a prototype idea for 3-4 different layout files ( tables, pure CSS, etc ), and controller/action is a test to make sure my ideas are sane.
Short summary of question: How to cut out the WS created in my Mako scheme?
Update: the following is not a solution, because it involves seeding all of my mako files with \'s:
http://www.makotemplates.org/docs/syntax.html#syntax_newline
/base.mako
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head><%def name="headtags()"></%def>${self.headtags()}</head>
<body>
<%def name="header()"></%def>${self.header()}${next.body()}<%def name="footer()"></%def>${self.footer()}
</body>
</html>
/layout.mako
<%inherit file="/base.mako"/>
<%def name="headtags()">
${parent.headtags()}
<script src="http://ajax.googleapis.com/ajax/libs/prototype/1.6.0.3/prototype.js"></script>
</%def>
<%def name="header()">
<h1>My Blogination</h1>
</%def>
<div id="content">${next.body()}</div>
/controller/action.mako
<%inherit file="/layout.mako" />
<%def name="headtags()">
<title> Hello world, templating system is 1 percent done</title>
${parent.headtags()}
</%def>
Hello ${c.name}!
rendered output:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
<title> Hello world, templating system is 1 percent done</title>
<script src="http://ajax.googleapis.com/ajax/libs/prototype/1.6.0.3/prototype.js"></script>
</head>
<body>
<h1>My Blogination</h1>
<div id="content">
Hello Anonymous!
</div>
</body>
</html>
| [
"Found my own answer\nhttp://docs.makotemplates.org/en/latest/filtering.html\nIt still required some trial and error, but using\nt = TemplateLookup(directories=['/tmp'], default_filters=['trim'])\n\ndramatically cut down on whitespace bleed. Additional savings can be found by checking the compiled template's and looking for any writes that are just pushing ' ' or similar. \n"
] | [
2
] | [] | [] | [
"layout",
"mako",
"python",
"template_engine"
] | stackoverflow_0000476324_layout_mako_python_template_engine.txt |
Q:
Python Regular Expression to add links to urls
I'm trying to make a regular expression that will correctly capture URLs, including ones that are wrapped in parentheses as in (http://example.com) and spoken about on coding horror at https://blog.codinghorror.com/the-problem-with-urls/
I'm currently using the following to create HTML A tags in python for links that start with http and www.
r1 = r"(\b(http|https)://([-A-Za-z0-9+&@#/%?=~_()|!:,.;]*[-A-Za-z0-9+&@#/%=~_()|]))"
r2 = r"((^|\b)www\.([-A-Za-z0-9+&@#/%?=~_()|!:,.;]*[-A-Za-z0-9+&@#/%=~_()|]))"
return re.sub(r2,r'<a rel="nofollow" target="_blank" href="http://\1">\1</a>',re.sub(r1,r'<a rel="nofollow" target="_blank" href="\1">\1</a>',text))
this works well except for the case where someone wraps the url in parens. Does anyone have a better way?
A:
Problem is, URLs could have parentheses as part of them... (http://en.wikipedia.org/wiki/Tropical_Storm_Alberto_(2006)) . You can't treat that with regexp alone, since it doesn't have state. You need a parser. So your best chance would be to use a parser, and try to guess the correct close parenthesis. That is error-prone (the url could open a parenthesis and never close it) so I guess you're out of luck anyway.
See also http://en.wikipedia.org/wiki/, or (http://en.wikipedia.org/wiki/)) and other similar valid URLs.
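A hedged heuristic sketch in the spirit of the Coding Horror post: strip one trailing ')' from a match only when the parentheses inside it are unbalanced:
def fix_trailing_paren(url):
    if url.endswith(')') and url.count('(') < url.count(')'):
        return url[:-1]  # the ')' belonged to the prose, not the URL
    return url

print fix_trailing_paren('http://example.com/foo)')                # stripped
print fix_trailing_paren('http://en.wikipedia.org/wiki/A_(2006)')  # kept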
| Python Regular Expression to add links to urls | I'm trying to make a regular expression that will correctly capture URLs, including ones that are wrapped in parentheses as in (http://example.com) and spoken about on coding horror at https://blog.codinghorror.com/the-problem-with-urls/
I'm currently using the following to create HTML A tags in python for links that start with http and www.
r1 = r"(\b(http|https)://([-A-Za-z0-9+&@#/%?=~_()|!:,.;]*[-A-Za-z0-9+&@#/%=~_()|]))"
r2 = r"((^|\b)www\.([-A-Za-z0-9+&@#/%?=~_()|!:,.;]*[-A-Za-z0-9+&@#/%=~_()|]))"
return re.sub(r2,r'<a rel="nofollow" target="_blank" href="http://\1">\1</a>',re.sub(r1,r'<a rel="nofollow" target="_blank" href="\1">\1</a>',text))
this works well except for the case where someone wraps the url in parens. Does anyone have a better way?
| [
"Problem is, URLs could have parenthesis as part of them... (http://en.wikipedia.org/wiki/Tropical_Storm_Alberto_(2006)) . You can't treat that with regexp alone, since it doesn't have state. You need a parser. So your best chance would be to use a parser, and try to guess the correct close parenthesis. That is error-prone (the url could open parenthesis and never close it) so I guess you're out of luck anyway.\nSee also http://en.wikipedia.org/wiki/, or (http://en.wikipedia.org/wiki/)) and other similar valid URLs.\n"
] | [
4
] | [] | [] | [
"python",
"regex",
"url"
] | stackoverflow_0000476478_python_regex_url.txt |
Q:
Is there a way to install the scipy special module without the rest of scipy?
I'm writing some Python numerical code and would like to use some functions from the special module. So far, my code only depends on numpy, which I've found very easy to install in a variety of Python environments. Installing scipy, on the other hand, has generally been an exercise in frustration. Is there a way to get just the special module?
Note, I see now that there is a downloadable scipy package for the Mac, but that hasn't always been the case
A:
The scipy subpackages can usually be installed individually. Try cd-ing to the "special" directory and running your normal "python setup.py install". The namespace for importing should now be special and not scipy.special.
A:
I'm not familiar with scipy in particular, but in general, modules for software packages depend on the installation of the package itself. So, yes, I'm pretty sure that you need to install scipy to use the special module.
A:
If you already have the right version of Numpy installed then you could try to just take the source for the special module, stick it in a directory somewhere, add it to your PYTHONPATH, and see if it works. It all depends on its dependencies. If it has dependencies beyond Numpy, then you'll have to install those, and if those have dependencies...
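If you try that, note the namespace point from the first answer; a hedged import fallback covering both layouts:
try:
    import special  # standalone install of the subpackage
except ImportError:
    from scipy import special  # full scipy install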
| Is there a way to install the scipy special module without the rest of scipy? | I'm writing some Python numerical code and would like to use some functions from the special module. So far, my code only depends on numpy, which I've found very easy to install in a variety of Python environments. Installing scipy, on the other hand, has generally been an exercise in frustration. Is there a way to get just the special module?
Note, I see now that there is a downloadable scipy package for the Mac, but that hasn't always been the case
| [
"The scipy subpackages can usually be installed individually. Try cd-ing to the \"special\" directory and running your normal \"python setup.py install\". The name space for importing should now be special and now scipy.special.\n",
"I'm not familiar with scipy in particular, but in general, modules for software packages depend on the installation of the package itself. So, yes, I'm pretty sure that you need to install scipy to use the special module.\n",
"If you already have the right version of Numpy installed then you could try to just take the source for the special module, stick it in a directory somewhere, add it to your PYTHONPATH, and see if it works. It all depends on its dependencies. If it has dependencies beyond Numpy, then you'll have to install those, and if those have dependencies...\n"
] | [
2,
0,
0
] | [] | [] | [
"numpy",
"package",
"python",
"scipy"
] | stackoverflow_0000476369_numpy_package_python_scipy.txt |
Q:
Generating and submitting a dynamic number of objects in a form with Django
I want to be able to update a dynamic number of objects within a single form using Django and I'm wondering what the best way to do this would be. An example of a similar situation may help.
Model:
class Customer(models.Model):
name = models.CharField(max_length=100)
active = models.BooleanField()
Form (I know I'm mixing view and template code here which doesn't work but this is a general idea for what the form is supposed to do):
customers = Customer.objects.all()
for c in customers:
print <li> {{ c.name }} <input type="checkbox" value="{{ c.active }}" name="?" />
How would I go about submitting a list of these objects? Would the best bet be to attach the id of the customer into each 'row' and then process based on the id? Is there a mechanism for submitting a list of tuples? What would be the ideal solution?
A:
Formsets!
Also, the equivalent for forms generated directly from models is model formsets.
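A minimal sketch of a model formset for the Customer model above (field names follow the question; this assumes a Django version that provides modelformset_factory):
from django.forms.models import modelformset_factory

# one form per Customer row, exposing just the fields we want to edit
CustomerFormSet = modelformset_factory(Customer, fields=("name", "active"), extra=0)

def edit_customers(request):
    if request.method == "POST":
        formset = CustomerFormSet(request.POST)
        if formset.is_valid():
            formset.save()  # updates every submitted Customer in one go
    else:
        formset = CustomerFormSet(queryset=Customer.objects.all())
    # render the formset in the template; {{ formset.management_form }} is required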
| Generating and submitting a dynamic number of objects in a form with Django | I want to be able to update a dynamic number of objects within a single form using Django and I'm wondering what the best way to do this would be. An example of a similar situation may help.
Model:
class Customer(models.Model):
name = models.CharField(max_length=100)
active = models.BooleanField()
Form (I know I'm mixing view and template code here which doesn't work but this is a general idea for what the form is supposed to do):
customers = Customer.objects.all()
for c in customers:
print <li> {{ c.name }} <input type="checkbox" value="{{ c.active }}" name="?" />
How would I go about submitting a list of these objects? Would the best bet be to attach the id of the customer into each 'row' and then process based on the id? Is there a mechanism for submitting a list of tuples? What would be the ideal solution?
| [
"Formsets!\nAlso, the equivalent for forms generated directly models are model formsets.\n"
] | [
8
] | [] | [] | [
"django",
"forms",
"html",
"python",
"web_applications"
] | stackoverflow_0000477183_django_forms_html_python_web_applications.txt |
Q:
Most suitable language(s) for simulations in modeling?
I will participate in a modeling competition, which lasts three days.
I need a language which is fast and designed for modeling, such as 2D/3D models.
I have considered these languages:
Python
Sage
Which languages would you use?
A:
You should use the language that you know best and that has good-enough tools for the task at hand. Depending on when the competition is you may have no time to learn a new language/environment.
A:
have a look at http://www.processing.org/ -- it is a programming language (similar to java) and IDE especially developed for simulation and data visualization. given that it was developed in a teaching context, it will be easy to use and will give you great results in no time -- i have seen amazing applications (e.g. webpages as graphs, complexification)
| Most suitable language(s) for simulations in modeling? | I will participate in a modeling competition, which lasts three days.
I need a language which is fast and designed for modeling, such as 2D/3D models.
I have considered these languages:
Python
Sage
Which languages would you use?
| [
"You should use the language that you know best and that has good-enough tools for the task at hand. Depending on when the competition is you may have no time to learn a new language/environment.\n",
"have a look at http://www.processing.org/ -- it is a programming language (similar to java) and IDE especially developed for simulation and data visualization. given that it was developed in a teaching context, it will be easy to use and will give you great results in no time -- i have seen amazing applications (e,g, webpages as graphs, complexification)\n"
] | [
4,
2
] | [
"I would use C++, since it spends three days I would have time to write C++ code, and it's a lot faster then python, which would be my choice if it were a one day competition. So I would probably use C++ with OpenGL and SDL for the models. The simulations would I first write in C++, and if I had time at the end I would try to implement them in a shader if it were possible.\n"
] | [
-1
] | [
"python",
"sage"
] | stackoverflow_0000477335_python_sage.txt |
Q:
How to read Unicode input and compare Unicode strings in Python?
I work in Python and would like to read user input (from the command line) in Unicode format, i.e. a Unicode equivalent of raw_input?
Also, I would like to test Unicode strings for equality and it looks like a standard == does not work.
A:
raw_input() returns strings as encoded by the OS or UI facilities. The difficulty is knowing which encoding that is. You might attempt the following:
import sys, locale
text= raw_input().decode(sys.stdin.encoding or locale.getpreferredencoding(True))
which should work correctly in most of the cases.
We need more data about not working Unicode comparisons in order to help you. However, it might be a matter of normalization. Consider the following:
>>> a1= u'\xeatre'
>>> a2= u'e\u0302tre'
a1 and a2 are equivalent but not equal:
>>> print a1, a2
être être
>>> print a1 == a2
False
So you might want to use the unicodedata.normalize() method:
>>> import unicodedata as ud
>>> ud.normalize('NFC', a1)
u'\xeatre'
>>> ud.normalize('NFC', a2)
u'\xeatre'
>>> ud.normalize('NFC', a1) == ud.normalize('NFC', a2)
True
If you give us more information, we might be able to help you more, though.
A:
It should work. raw_input returns a byte string which you must decode using the correct encoding to get your unicode object. For example, the following works for me under Python 2.5 / Terminal.app / OSX:
>>> bytes = raw_input()
日本語 Ελληνικά
>>> bytes
'\xe6\x97\xa5\xe6\x9c\xac\xe8\xaa\x9e \xce\x95\xce\xbb\xce\xbb\xce\xb7\xce\xbd\xce\xb9\xce\xba\xce\xac'
>>> uni = bytes.decode('utf-8') # substitute the encoding of your terminal if it's not utf-8
>>> uni
u'\u65e5\u672c\u8a9e \u0395\u03bb\u03bb\u03b7\u03bd\u03b9\u03ba\u03ac'
>>> print uni
日本語 Ελληνικά
As for comparing unicode strings: can you post an example where the comparison doesn't work?
A:
I'm not really sure which format you mean by "Unicode format"; there are several. UTF-8? UTF-16? In any case you should be able to read a normal string with raw_input and then decode it using the string's decode method:
raw = raw_input("Please input some funny characters: ")
decoded = raw.decode("utf-8")
If you have a different input encoding just use "utf-16" or whatever instead of "utf-8". Also see the codecs module's docs for different kinds of encodings.
Comparing then should work just fine with ==. If you have string literals containing special characters you should prefix them with "u" to mark them as unicode:
if decoded == u"äöü":
print "Do you speak German?"
And if you want to output these strings again, you probably want to encode them again in the desired encoding:
print decoded.encode("utf-8")
A:
In the general case, it's probably not possible to compare unicode strings. The problem is that there are several ways to compose the same characters. A simple example is accented roman characters. Although there are codepoints for basically all of the commonly used accented characters, it is also correct to compose them from unaccented base letters and a non-spacing accent. This issue is more significant in many non-roman alphabets.
| How to read Unicode input and compare Unicode strings in Python? | I work in Python and would like to read user input (from command line) in Unicode format, ie a Unicode equivalent of raw_input?
Also, I would like to test Unicode strings for equality and it looks like a standard == does not work.
| [
"raw_input() returns strings as encoded by the OS or UI facilities. The difficulty is knowing which is that decoding. You might attempt the following:\nimport sys, locale\ntext= raw_input().decode(sys.stdin.encoding or locale.getpreferredencoding(True))\n\nwhich should work correctly in most of the cases.\nWe need more data about not working Unicode comparisons in order to help you. However, it might be a matter of normalization. Consider the following:\n>>> a1= u'\\xeatre'\n>>> a2= u'e\\u0302tre'\n\na1 and a2 are equivalent but not equal:\n>>> print a1, a2\nêtre être\n>>> print a1 == a2\nFalse\n\nSo you might want to use the unicodedata.normalize() method:\n>>> import unicodedata as ud\n>>> ud.normalize('NFC', a1)\nu'\\xeatre'\n>>> ud.normalize('NFC', a2)\nu'\\xeatre'\n>>> ud.normalize('NFC', a1) == ud.normalize('NFC', a2)\nTrue\n\nIf you give us more information, we might be able to help you more, though.\n",
"It should work. raw_input returns a byte string which you must decode using the correct encoding to get your unicode object. For example, the following works for me under Python 2.5 / Terminal.app / OSX:\n>>> bytes = raw_input()\n日本語 Ελληνικά\n>>> bytes\n'\\xe6\\x97\\xa5\\xe6\\x9c\\xac\\xe8\\xaa\\x9e \\xce\\x95\\xce\\xbb\\xce\\xbb\\xce\\xb7\\xce\\xbd\\xce\\xb9\\xce\\xba\\xce\\xac'\n\n>>> uni = bytes.decode('utf-8') # substitute the encoding of your terminal if it's not utf-8\n>>> uni\nu'\\u65e5\\u672c\\u8a9e \\u0395\\u03bb\\u03bb\\u03b7\\u03bd\\u03b9\\u03ba\\u03ac'\n\n>>> print uni\n日本語 Ελληνικά\n\nAs for comparing unicode strings: can you post an example where the comparison doesn't work?\n",
"I'm not really sure, which format you mean by \"Unicode format\", there are several. UTF-8? UTF-16? In any case you should be able to read a normal string with raw_input and then decode it using the strings decode method:\nraw = raw_input(\"Please input some funny characters: \")\ndecoded = raw.decode(\"utf-8\")\n\nIf you have a different input encoding just use \"utf-16\" or whatever instead of \"utf-8\". Also see the codecs modules docs for different kinds of encodings.\nComparing then should work just fine with ==. If you have string literals containing special characters you should prefix them with \"u\" to mark them as unicode:\nif decoded == u\"äöü\":\n print \"Do you speak German?\"\n\nAnd if you want to output these strings again, you probably want to encode them again in the desired encoding:\nprint decoded.encode(\"utf-8\")\n\n",
"In the general case, it's probably not possible to compare unicode strings. The problem is that there are several ways to compose the same characters. A simple example is accented roman characters. Although there are codepoints for basically all of the commonly used accented characters, it is also correct to compose them from unaccented base letters and a non-spacing accent. This issue is more significant in many non-roman alphabets. \n"
] | [
54,
16,
4,
1
] | [] | [] | [
"python",
"python_2.7",
"unicode"
] | stackoverflow_0000477061_python_python_2.7_unicode.txt |
Q:
How to convert XML to JSON in Python
Possible Duplicate:
Converting XML to JSON using Python?
I am importing an XML feed and trying to convert it to JSON for output. I'm getting this error:
TypeError: <xml.dom.minidom.Document instance at 0x72787d8> is not JSON serializable
Unfortunately I know next to nothing about Python. I'm developing this on the Google App Engine. I could use some help, because my little 2 hour hack that was going so well is now on its 3rd day.
XML data:
<?xml version="1.0" ?><eveapi version="2">
<currentTime>2009-01-25 15:03:27</currentTime>
<result>
<rowset columns="name,characterID,corporationName,corporationID" key="characterID" name="characters">
<row characterID="999999" corporationID="999999" corporationName="filler data" name="someName"/>
</rowset>
</result>
<cachedUntil>2009-01-25 15:04:55</cachedUntil>
</eveapi>
My code:
class doproxy(webapp.RequestHandler):
def get(self):
apiurl = 'http://api.eve-online.com'
path = self.request.get('path');
type = self.request.get('type');
args = '&'+self.request.get('args');
#assemble api url
url = apiurl+path
#do GET request
if type == 'get':
result = urlfetch.fetch(url,'','get');
#do POST request
if type == 'post':
result = urlfetch.fetch(url,args,'post');
if result.status_code == 200:
dom = minidom.parseString( result.content ) #.encode( "utf-8" ) )
dom2json = simplejson.dump(dom,"utf-8")
A:
I'm quickly coming to the opinion that
Python is potentially a great
language, but that none of its users
know how to actually document anything
in a clear and concise way.
The attitude of the question isn't going to help with getting answers from these same Python users.
As is mentioned in the answers to this related question, there is no 1-to-1 correspondence between XML and JSON so the conversion can't be done automatically.
In the documentation for simplejson you can find the list of types that it's able to serialize, which are basically the native Python types (dict, list, unicode, int, float, True/False, None).
So, you have to create a Python data structure containing only these types, which you will then give to simplejson.dump().
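For the feed in the question, such a structure can be built by hand. A sketch, assuming the XML shape shown above:
from xml.dom import minidom
import simplejson

def row_to_dict(row):
    # a <row> carries its data as attributes, which map naturally to a dict
    return dict(row.attributes.items())

dom = minidom.parseString(result.content)  # result.content as in the question's handler
data = {
    "currentTime": dom.getElementsByTagName("currentTime")[0].firstChild.nodeValue,
    "characters": [row_to_dict(r) for r in dom.getElementsByTagName("row")],
}
json_text = simplejson.dumps(data)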
| How to convert XML to JSON in Python |
Possible Duplicate:
Converting XML to JSON using Python?
I am importing an XML feed and trying to convert it to JSON for output. I'm getting this error:
TypeError: <xml.dom.minidom.Document instance at 0x72787d8> is not JSON serializable
Unfortunately I know next to nothing about Python. I'm developing this on the Google App Engine. I could use some help, because my little 2 hour hack that was going so well is now on its 3rd day.
XML data:
<?xml version="1.0" ?><eveapi version="2">
<currentTime>2009-01-25 15:03:27</currentTime>
<result>
<rowset columns="name,characterID,corporationName,corporationID" key="characterID" name="characters">
<row characterID="999999" corporationID="999999" corporationName="filler data" name="someName"/>
</rowset>
</result>
<cachedUntil>2009-01-25 15:04:55</cachedUntil>
</eveapi>
My code:
class doproxy(webapp.RequestHandler):
def get(self):
apiurl = 'http://api.eve-online.com'
path = self.request.get('path');
type = self.request.get('type');
args = '&'+self.request.get('args');
#assemble api url
url = apiurl+path
#do GET request
if type == 'get':
result = urlfetch.fetch(url,'','get');
#do POST request
if type == 'post':
result = urlfetch.fetch(url,args,'post');
if result.status_code == 200:
dom = minidom.parseString( result.content ) #.encode( "utf-8" ) )
dom2json = simplejson.dump(dom,"utf-8")
| [
"\nI'm quickly coming to the opinion that\n Python is potentially a great\n language, but that none of its users\n know how to actually document anything\n in a clear and concise way.\n\nThe attitude of the question isn't going to help with getting answers from these same Python users.\nAs is mentioned in the answers to this related question, there is no 1-to-1 correspondence between XML and JSON so the conversion can't be done automatically.\nIn the documentation for simplejson you can find the list of types that it's able to serialize, which are basically the native Python types (dict, list, unicode, int, float, True/False, None).\nSo, you have to create a Python data structure containing only these types, which you will then give to simplejson.dump(). \n"
] | [
9
] | [] | [] | [
"json",
"python",
"xml"
] | stackoverflow_0000477794_json_python_xml.txt |
Q:
How to list the files in a static directory?
I am playing with Google App Engine and Python and I cannot list the files of a static directory. Below is the code I currently use.
app.yaml
- url: /data
static_dir: data
Python code to list the files
myFiles = []
for root, dirs, files in os.walk(os.path.join(os.path.dirname(__file__), 'data/') ):
for name in files:
full_name = os.path.join(root, name)
myFiles.append('%s;%s\n' % (name, datetime.fromtimestamp(os.stat(full_name).st_mtime)))
When I run this code locally on my machine, everything is alright. I have my Python script at the root of the directory and it walks the files under the data directory. However, when I upload and run the exact same code in GAE, it doesn't work. It seems to me that the directory structure of my application is not exactly replicated in Google App Engine. Where are the static files?
Thanks!
A:
https://developers.google.com/appengine/docs/python/config/appconfig#Python_app_yaml_Static_file_handlers
They're not where you think they are: GAE puts static content into GoogleFS, which is the equivalent of a CDN. The idea is that static content is meant to be served directly to your users and not act as a file store you can manipulate. Furthermore, GAE has a 1,000-file limit, and it would be difficult to police this rule if you could manipulate your static file store.
A:
Here's a project that lets you browse your static files:
http://code.google.com/p/appfilesbrowser/
And here is a must-see list of recipes for appengine:
http://appengine-cookbook.appspot.com/
(I found out about that project here some time ago)
A:
You can't access files uploaded as static content programmatically - they're not installed on the server along with your app, rather they're served up directly. If you really need to access them, you can remove the static file handler and serve them up yourself.
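If you go that route, here is a minimal sketch of serving the files yourself after removing the static_dir entry from app.yaml (the handler name and URL mapping are illustrative):
import os
from google.appengine.ext import webapp

class DataFile(webapp.RequestHandler):
    def get(self, name):
        # basename() blocks ../ tricks; files served this way are part of
        # the app, so os.walk() can see them too
        path = os.path.join(os.path.dirname(__file__), 'data', os.path.basename(name))
        self.response.headers['Content-Type'] = 'application/octet-stream'
        self.response.out.write(open(path, 'rb').read())

# mapped with something like ('/data/(.*)', DataFile) in the WSGIApplication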
| How to list the files in a static directory? | I am playing with Google App Engine and Python and I cannot list the files of a static directory. Below is the code I currently use.
app.yaml
- url: /data
static_dir: data
Python code to list the files
myFiles = []
for root, dirs, files in os.walk(os.path.join(os.path.dirname(__file__), 'data/') ):
for name in files:
full_name = os.path.join(root, name)
myFiles.append('%s;%s\n' % (name, datetime.fromtimestamp(os.stat(full_name).st_mtime)))
When I run this code locally on my machine, everything is alright. I have my Python script at the root of the directory and it walks the files under the data directory. However, when I upload and run the exact same code in GAE, it doesn't work. It seems to me that the directory structure of my application is not exactly replicated in Google App Engine. Where are the static files?
Thanks!
| [
"https://developers.google.com/appengine/docs/python/config/appconfig#Python_app_yaml_Static_file_handlers\nThey're not where you think they are, GAE puts static content into GoogleFS which is equivalent of a CDN. The idea is that static content is meant to be served directly to your users and not act as a file store you can manipulate. Furthermore GAE has 1K file limit and it would be difficult to police this rule if you could manipulate your static file store.\n",
"Here´s a project that let you browse your static files:\nhttp://code.google.com/p/appfilesbrowser/\nAnd here is must see list of recipes for appengine:\nhttp://appengine-cookbook.appspot.com/\n(I found about that project here sometime ago)\n",
"You can't access files uploaded as static content programmatically - they're not installed on the server along with your app, rather they're served up directly. If you really need to access them, you can remove the static file handler and serve them up yourself.\n"
] | [
7,
2,
0
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0000477135_google_app_engine_python.txt |
Q:
Python and os.chroot
I'm writing a web-server in Python as a hobby project. The code is targeted at *NIX machines. I'm new to developing on Linux and even newer to Python itself.
I am worried about people breaking out of the folder that I'm using to serve up the web-site. The most obvious way to do this is to filter requests for documents like /../../etc/passwd. However, I'm worried that there might be clever ways to go up the directory tree that I'm not aware of and that my filter consequently won't catch.
I'm considering adding using the os.chroot so that the root directory is the web-site itself. Is this is a safe way of protecting against these jail breaking attacks? Are there any potential pitfalls to doing this that will hurt me down the road?
A:
Yes there are pitfalls. Security wise:
If you run as root, there are always ways to break out. So first chroot(), then PERMANENTLY drop privileges to another user (a sketch of this order of operations follows after this list).
Put nothing which isn't absolutely required into the chroot tree. Especially no suid/sgid files, named pipes, unix domain sockets and device nodes.
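A minimal sketch of that order of operations, assuming a Unix system and a process that starts as root (the user name and path are illustrative):
import os, pwd

def chroot_and_drop(root_dir, username):
    pw = pwd.getpwnam(username)
    os.chroot(root_dir)   # jail first, while we still have the privilege
    os.chdir('/')         # don't keep a working directory outside the jail
    os.setgroups([])      # drop supplementary group memberships
    os.setgid(pw.pw_gid)
    os.setuid(pw.pw_uid)  # must come last; after this there is no way back

chroot_and_drop('/srv/mysite', 'www-data')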
Python-wise, your whole module loading gets screwed up. Python is simply not made for such scenarios. If your application is moderately complex you will run into module loading issues.
I think much more important than chrooting is running as a non privileged user and simply using the file system permissions to keep that user from reading anything of importance.
A:
Check out Twisted. twistd supports privilege shedding and chroot operation out of the box. Additionally it has a whole framework for writing network services, daemons, and pretty much everything.
| Python and os.chroot | I'm writing a web-server in Python as a hobby project. The code is targeted at *NIX machines. I'm new to developing on Linux and even newer to Python itself.
I am worried about people breaking out of the folder that I'm using to serve up the web-site. The most obvious way to do this is to filter requests for documents like /../../etc/passwd. However, I'm worried that there might be clever ways to go up the directory tree that I'm not aware of and that my filter consequently won't catch.
I'm considering adding using the os.chroot so that the root directory is the web-site itself. Is this is a safe way of protecting against these jail breaking attacks? Are there any potential pitfalls to doing this that will hurt me down the road?
| [
"Yes there are pitfalls. Security wise:\n\nIf you run as root, there are always ways to break out. So first chroot(), then PERMANENTLY drop privileges to an other user.\nPut nothing which isn't absolutely required into the chroot tree. Especially no suid/sgid files, named pipes, unix domain sockets and device nodes.\n\nPython wise your whole module loading gets screwed up. Python is simply not made for such scenarios. If your application is moderately complex you will run into module loading issues.\nI think much more important than chrooting is running as a non privileged user and simply using the file system permissions to keep that user from reading anything of importance.\n",
"Check out Twisted. twistd supports privilege shedding and chroot operation out of the box. Additionally it has a whole framework for writing network services, daemons, and pretty much everything.\n"
] | [
7,
3
] | [] | [] | [
"chroot",
"linux",
"python"
] | stackoverflow_0000478359_chroot_linux_python.txt |
Q:
How do I build and install P4Python for Mac OS X?
I've been unable to build P4Python for an Intel Mac OS X 10.5.5.
These are my steps:
I downloaded p4python.tgz (from
http://filehost.perforce.com/perforce/r07.3/tools/) and expanded
it into "P4Python-2007.3".
I downloaded p4api.tar (from
http://filehost.perforce.com/perforce/r07.3/bin.macosx104x86/)
and expanded it into "p4api-2007.3.143793".
I placed "p4api-2007.3.143793" into "P4Python-2007.3" and edited
setup.cfg to set "p4_api=./p4api-2007.3.143793".
I added the line 'extra_link_args = ["-framework", "Carbon"]' to
setup.py after:
elif unameOut[0] == "Darwin":
unix = "MACOSX"
release = "104"
platform = self.architecture(unameOut[4])
I ran python setup.py build and got:
$ python setup.py build
API Release 2007.3
running build
running build_py
running build_ext
building 'P4API' extension
gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -DID_OS="MACOSX104X86" -DID_REL="2007.3" -DID_PATCH="151416" -DID_API="2007.3" -DID_Y="2008" -DID_M="04" -DID_D="09" -I./p4api-2007.3.143793 -I./p4api-2007.3.143793/include/p4 -I/build/toolchain/mac32/python-2.4.3/include/python2.4 -c P4API.cpp -o build/temp.darwin-9.5.0-i386-2.4/P4API.o -DOS_MACOSX -DOS_MACOSX104 -DOS_MACOSXX86 -DOS_MACOSX104X86
cc1plus: warning: command line option "-Wstrict-prototypes" is valid for C/ObjC but not for C++
P4API.cpp: In function 'int P4Adapter_init(P4Adapter*, PyObject*, PyObject*)':
P4API.cpp:105: error: 'Py_ssize_t' was not declared in this scope
P4API.cpp:105: error: expected `;' before 'pos'
P4API.cpp:107: error: 'pos' was not declared in this scope
P4API.cpp: In function 'PyObject* P4Adapter_run(P4Adapter*, PyObject*)':
P4API.cpp:177: error: 'Py_ssize_t' was not declared in this scope
P4API.cpp:177: error: expected `;' before 'i'
P4API.cpp:177: error: 'i' was not declared in this scope
error: command 'gcc' failed with exit status 1
which gcc returns /usr/bin/gcc and gcc -v returns:
Using built-in specs.
Target: i686-apple-darwin9
Configured with: /var/tmp/gcc/gcc-5465~16/src/configure
--disable-checking -enable-werror --prefix=/usr --mandir=/share/man
--enable-languages=c,objc,c++,obj-c++
--program-transform-name=/^[cg][^.-]*$/s/$/-4.0/
--with-gxx-include-dir=/include/c++/4.0.0 --with-slibdir=/usr/lib
--build=i686-apple-darwin9 --with-arch=apple --with-tune=generic
--host=i686-apple-darwin9 --target=i686-apple-darwin9
Thread model: posix
gcc version 4.0.1 (Apple Inc. build 5465)
python -V returns Python 2.4.3.
A:
From http://bugs.mymediasystem.org/?do=details&task_id=676 suggests that Py_ssize_t was added in python 2.5, so it won't work (without some modifications) with python 2.4.
Either install/compile your own copy of python 2.5/2.6, or work out how to change P4Python, or look for an alternative python-perforce library.
A:
The newer version 2008.1 will build with Python 2.4.
I had posted the minor changes required to do that on my P4Python page, but they were rolled in to the official version.
Robert
A:
Very outdated, but maybe you can use http://public.perforce.com:8080/@md=d&cd=//guest/miki_tebeka/p4py/&c=5Fm@//guest/miki_tebeka/p4py/main/?ac=83 for now
| How do I build and install P4Python for Mac OS X? | I've been unable to build P4Python for an Intel Mac OS X 10.5.5.
These are my steps:
I downloaded p4python.tgz (from
http://filehost.perforce.com/perforce/r07.3/tools/) and expanded
it into "P4Python-2007.3".
I downloaded p4api.tar (from
http://filehost.perforce.com/perforce/r07.3/bin.macosx104x86/)
and expanded it into "p4api-2007.3.143793".
I placed "p4api-2007.3.143793" into "P4Python-2007.3" and edited
setup.cfg to set "p4_api=./p4api-2007.3.143793".
I added the line 'extra_link_args = ["-framework", "Carbon"]' to
setup.py after:
elif unameOut[0] == "Darwin":
unix = "MACOSX"
release = "104"
platform = self.architecture(unameOut[4])
I ran python setup.py build and got:
$ python setup.py build
API Release 2007.3
running build
running build_py
running build_ext
building 'P4API' extension
gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -DID_OS="MACOSX104X86" -DID_REL="2007.3" -DID_PATCH="151416" -DID_API="2007.3" -DID_Y="2008" -DID_M="04" -DID_D="09" -I./p4api-2007.3.143793 -I./p4api-2007.3.143793/include/p4 -I/build/toolchain/mac32/python-2.4.3/include/python2.4 -c P4API.cpp -o build/temp.darwin-9.5.0-i386-2.4/P4API.o -DOS_MACOSX -DOS_MACOSX104 -DOS_MACOSXX86 -DOS_MACOSX104X86
cc1plus: warning: command line option "-Wstrict-prototypes" is valid for C/ObjC but not for C++
P4API.cpp: In function 'int P4Adapter_init(P4Adapter*, PyObject*, PyObject*)':
P4API.cpp:105: error: 'Py_ssize_t' was not declared in this scope
P4API.cpp:105: error: expected `;' before 'pos'
P4API.cpp:107: error: 'pos' was not declared in this scope
P4API.cpp: In function 'PyObject* P4Adapter_run(P4Adapter*, PyObject*)':
P4API.cpp:177: error: 'Py_ssize_t' was not declared in this scope
P4API.cpp:177: error: expected `;' before 'i'
P4API.cpp:177: error: 'i' was not declared in this scope
which gcc returns /usr/bin/gcc and gcc -v returns:
Using built-in specs.
Target: i686-apple-darwin9
Configured with: /var/tmp/gcc/gcc-5465~16/src/configure
--disable-checking -enable-werror --prefix=/usr --mandir=/share/man
--enable-languages=c,objc,c++,obj-c++
--program-transform-name=/^[cg][^.-]*$/s/$/-4.0/
--with-gxx-include-dir=/include/c++/4.0.0 --with-slibdir=/usr/lib
--build=i686-apple-darwin9 --with-arch=apple --with-tune=generic
--host=i686-apple-darwin9 --target=i686-apple-darwin9
Thread model: posix
gcc version 4.0.1 (Apple Inc. build 5465)
python -V returns Python 2.4.3.
| [
"From http://bugs.mymediasystem.org/?do=details&task_id=676 suggests that Py_ssize_t was added in python 2.5, so it won't work (without some modifications) with python 2.4.\nEither install/compile your own copy of python 2.5/2.6, or work out how to change P4Python, or look for an alternative python-perforce library.\n",
"The newer version 2008.1 will build with Python 2.4.\nI had posted the minor changes required to do that on my P4Python page, but they were rolled in to the official version.\nRobert\n",
"Very outdated, but maybe you can use http://public.perforce.com:8080/@md=d&cd=//guest/miki_tebeka/p4py/&c=5Fm@//guest/miki_tebeka/p4py/main/?ac=83 for now\n"
] | [
1,
1,
0
] | [] | [] | [
"macos",
"p4python",
"perforce",
"python"
] | stackoverflow_0000168273_macos_p4python_perforce_python.txt |
Q:
Need to route instance calls inside a python class
The problem is that I need to take the arguments into account before choosing the responder. Here is my attempt so far.
from responders import A, B, C
class RandomResponder(object):
    def __init__(self, *args, **kwargs):
        self.args = args
        self.kwargs = kwargs

    def __getattr__(self, name):
        # pick a responder based on the args the function was called with
        # I don't know how to do this part
        # for the sake of argument let's say the args a function was called with lead me to pick responder A
        r = A
        responder = r(*self.args, **self.kwargs)
        return responder.__getattr__(name)
The desired effect would be:
r = RandomResponder()
r.doSomething(1)
#returns A.doSomething()
r.doSomething(2)
#returns B.doSomething()
r.doSomething(3)
#return C.doSomething()
r.doSomethingElse(1)
#returns A.doSomethingElse()
r.doSomethingElse(2)
#returns B.doSomethingElse()
r.doSomethingElse(3)
#returns C.doSomethingElse()
I will not know ahead of time all the functions contained within the responders A, B, and C.
A:
When you do this
r.doSomething(1)
what happens is, in order:
r.__getattr__ is called, and returns an object
this object is called with an argument "1"
At the time when __getattr__ is called, you have no way of knowing what arguments the object you return is going to get called with, or even if it's going to be called at all...
So, to get the behavior that you want, __getattr__ has to return a callable object that makes the decision itself based on the arguments it's called with. For example
from responders import A, B, C
class RandomResponder(object):
def __getattr__(self, name):
def func(*args, **kwds):
resp = { 1:A, 2:B, 3:C }[args[0]] # Decide which responder to use (example)
return getattr(resp, name)() # Call the function on the responder
return func
A:
Try this:
import random

class RandomResponder(object):
    choices = [A, B, C]

    @classmethod
    def which(cls):
        return random.choice(cls.choices)

    def __getattr__(self, attr):
        return getattr(self.which(), attr)
which() randomly selects an option from the choices, and __getattr__ then uses getattr on it to get the attribute.
EDIT: it actually looks like you want something more like this.
class RandomResponder(object):
choices = [A, B, C]
def __getattr__(self, attr):
# we define a function that actually gets called
# which takes up the first positional argument,
# the rest are left to args and kwargs
def doCall(which, *args, **kwargs):
# get the attribute of the appropriate one, call with passed args
return getattr(self.choices[which], attr)(*args, **kwargs)
return doCall
This could be written using lambda, but I'll just leave it like this so it's clearer.
A:
What about:
RandomResponder = [A, B, C]
RandomResponder[0].doSomething() # returns A.doSomething()
RandomResponder[1].doSomething() # returns B.doSomething()
RandomResponder[2].doSomething() # returns C.doSomething()
# etc
A:
If you specify args (without the asterisk), it's just a List of values (strings). Similarly, kwargs is a Dict of matching keys (strings) to values (strings).
This is one of the first results I found after Googling args kwargs.
Edit: I actually don't know quite what you're looking for, so this is just a guess.
A:
Are you trying to do this?
from responders import A, B, C
class RandomResponder(object):

    def pickAResponder(self, someName):
        """Use a simple mapping."""
        return {'nameA': A, 'nameB': B, 'nameC': C}[someName]

    def __init__(self, *args, **kwargs):
        self.args = args
        self.kwargs = kwargs

    def __getattr__(self, name):
        """pick a responder based on the args[0]"""
        r = self.pickAResponder(self.args[0])
        responder = r(*self.args, **self.kwargs)
        return responder.__getattr__(name)
Your responder classes (A, B, C) are just objects. You can manipulate a class using mappings, lists, if-statements, whatever Python coding you want in the pickAResponder method.
| Need to route instance calls inside a python class | The problem is that I need to take the arguments into account before choosing the responder. Here is my attempt so far.
from responders import A, B, C
class RandomResponder(object):
    def __init__(self, *args, **kwargs):
        self.args = args
        self.kwargs = kwargs

    def __getattr__(self, name):
        # pick a responder based on the args the function was called with
        # I don't know how to do this part
        # for the sake of argument let's say the args a function was called with lead me to pick responder A
        r = A
        responder = r(*self.args, **self.kwargs)
        return responder.__getattr__(name)
The desired effect would be:
r = RandomResponder()
r.doSomething(1)
#returns A.doSomething()
r.doSomething(2)
#returns B.doSomething()
r.doSomething(3)
#return C.doSomething()
r.doSomethingElse(1)
#returns A.doSomethingElse()
r.doSomethingElse(2)
#returns B.doSomethingElse()
r.doSomethingElse(3)
#returns C.doSomethingElse()
I will not know ahead of time all the functions contained within the responders A, B, and C.
| [
"When you do this\nr.doSomething(1)\n\nwhat happens is, in order:\n\nr.__getattr__ is called, and returns an object\nthis object is called with an argument \"1\" \n\nAt the time when __getattr__ is called, you have no way of knowing what arguments the object you return is going to get called with, or even if it's going to be called at all... \nSo, to get the behavior that you want, __getattr__ has to return a callable object that makes the decision itself based on the arguments it's called with. For example\nfrom responders import A, B, C\n\nclass RandomResponder(object):\n def __getattr__(self, name):\n def func(*args, **kwds):\n resp = { 1:A, 2:B, 3:C }[args[0]] # Decide which responder to use (example)\n return getattr(resp, name)() # Call the function on the responder\n return func\n\n",
"Try this:\nclass RandomResponder(object):\n choices = [A, B, C]\n\n @classmethod\n def which(cls):\n return random.choice(cls.choices)\n\n def __getattr__(self, attr):\n return getattr(self.which(), attr)\n\nwhich() randomly selects an option from the choices, and which getattr uses to get the attribute.\nEDIT: it actually looks like you want something more like this.\nclass RandomResponder(object):\n choices = [A, B, C]\n\n def __getattr__(self, attr):\n # we define a function that actually gets called\n # which takes up the first positional argument,\n # the rest are left to args and kwargs\n def doCall(which, *args, **kwargs):\n # get the attribute of the appropriate one, call with passed args\n return getattr(self.choices[which], attr)(*args, **kwargs)\n return doCall\n\nThis could be written using lambda, but I'll just leave it like this so it's clearer.\n",
"What about:\nRandomResponder = [A, B, C]\nRandomResponder[0].doSomething() # returns A.doSomething()\nRandomResponder[1].doSomething() # returns B.doSomething()\nRandomResponder[2].doSomething() # returns C.doSomething()\n# etc\n\n",
"If you specify args (without the asterisk), it's just a List of values (strings). Similarly, kwargs is a Dict of matching keys (strings) to values (strings).\nThis is one of the first results I found after Googling args kwargs.\nEdit: I actually don't know quite what you're looking for, so this is just a guess.\n",
"Are you trying to do this?\nfrom responders import A, B, C\n\nclass RandomResponder(object)\n\n def pickAResponder( self, someName ):\n \"\"\"Use a simple mapping.\"\"\"\n return { 'nameA': A, 'nameB': B, 'nameC': C }[someName]\n\n def __init__(self, *args, *kwargs):\n self.args = args\n self.kwargs = kwargs\n\n def __getattr__(self, name):\n \"\"\"pick a responder based on the args[0]\"\"\"\n r = self.pickAResponder(self.args[0])\n responder = r(*self.args, **self.kwargs)\n return responder.__getattr__(name)\n\nYour responder classes (A, B, C) are just objects. You can manipulate a class using mappings, lists, if-statements, whatever Python coding you want in the pickAResponder method.\n"
] | [
3,
2,
1,
0,
0
] | [] | [] | [
"class",
"instance",
"python"
] | stackoverflow_0000478655_class_instance_python.txt |
Q:
is there a string method to capitalize acronyms in python?
This is good:
import string
string.capwords("proper name")
Out: 'Proper Name'
This is not so good:
string.capwords("I.R.S")
Out: 'I.r.s'
Is there no string method to do capwords so that it accommodates acronyms?
A:
This might work:
import re
def _callback(match):
""" This is a simple callback function for the regular expression which is
in charge of doing the actual capitalization. It is designed to only
capitalize words which aren't fully uppercased (like acronyms).
"""
word = match.group(0)
if word == word.upper():
return word
else:
return word.capitalize()
def capwords(data):
""" This function converts `data` into a capitalized version of itself. This
    function accommodates acronyms.
"""
return re.sub("[\w\'\-\_]+", _callback, data)
Here is a test:
print capwords("This is an IRS test.") # Produces: "This Is An IRS Test."
print capwords("This is an I.R.S. test.") # Produces: "This Is An I.R.S. Test."
A:
No, there is no such method in the standard library.
A:
Even if there were such a function, what would it do when asked to process "IRS"? Even the IRS call themselves "IRS" with no dots.
| is there a string method to capitalize acronyms in python? | This is good:
import string
string.capwords("proper name")
Out: 'Proper Name'
This is not so good:
string.capwords("I.R.S")
Out: 'I.r.s'
Is there no string method to do capwords so that it accommodates acronyms?
| [
"This might work:\nimport re\n\ndef _callback(match):\n \"\"\" This is a simple callback function for the regular expression which is \n in charge of doing the actual capitalization. It is designed to only \n capitalize words which aren't fully uppercased (like acronyms).\n \"\"\"\n word = match.group(0)\n if word == word.upper():\n return word\n else:\n return word.capitalize()\n\ndef capwords(data):\n \"\"\" This function converts `data` into a capitalized version of itself. This \n function accomidates acronyms.\n \"\"\"\n return re.sub(\"[\\w\\'\\-\\_]+\", _callback, data)\n\nHere is a test:\nprint capwords(\"This is an IRS test.\") # Produces: \"This Is An IRS Test.\"\nprint capwords(\"This is an I.R.S. test.\") # Produces: \"This Is An I.R.S. Test.\"\n\n",
"No, there is no such method in the standard library.\n",
"Even if there were such a function, what would it do when asked to process \"IRS\"? Even the IRS call themselves \"IRS\" with no dots.\n"
] | [
8,
2,
1
] | [
"I just used a list comprehension: [ \".\".join( [ string.capwords(l) for l in entry.split(\".\") ] ) for entry in original_list ]\n"
] | [
-1
] | [
"acronym",
"capitalize",
"python",
"string"
] | stackoverflow_0000479043_acronym_capitalize_python_string.txt |
Q:
ISO encoded attachment names and python
First of all I don't have the code example on this computer, but I have an example that is quite similar.
(http://docs.python.org/library/email-examples.html)
The 4th one.
My issue lies within this bit of code
counter = 1
for part in msg.walk():
# multipart/* are just containers
if part.get_content_maintype() == 'multipart':
continue
# Applications should really sanitize the given filename so that an
# email message can't be used to overwrite important files
filename = part.get_filename()
if not filename:
ext = mimetypes.guess_extension(part.get_content_type())
if not ext:
# Use a generic bag-of-bits extension
ext = '.bin'
filename = 'part-%03d%s' % (counter, ext)
counter += 1
fp = open(os.path.join(opts.directory, filename), 'wb')
fp.write(part.get_payload(decode=True))
fp.close()
When I fetch emails that do not have ISO or UTF encoded filenames, this code works fine.
But when the attachment name is ISO encoded, the filename is not within the get_filename, but the filename is in encoded form within part["Content-type"] (I believe).
The above example tries to guess the extension and if it can't find the filename, it just gives it a part filename. What I would like is the filename.
Has anyone dealt with issues like these, and what did you do to fix it?
A:
I found the issue, it was with
mimetypes.guess_extension(part.get_content_type())
And images with "image/pjpeg" as the content type
@S.Lott I have changed the code to resemble the above example, but I added this to fix the pjpeg issue.
if not filename:
ext = mimetypes.guess_extension(part.get_content_type())
if not ext:
guess = part["Content-Type"].split(";")
if guess[0] == "image/pjpeg":
guess[0] = "image/jpeg"
ext = mimetypes.guess_extension(guess[0])
if not ext:
ext = ".bin"
| ISO encoded attachment names and python | First of all I don't have the code example on this computer, but I have an example that is quite similar.
(http://docs.python.org/library/email-examples.html)
The 4th one.
My issue lies within this bit of code
counter = 1
for part in msg.walk():
# multipart/* are just containers
if part.get_content_maintype() == 'multipart':
continue
# Applications should really sanitize the given filename so that an
# email message can't be used to overwrite important files
filename = part.get_filename()
if not filename:
ext = mimetypes.guess_extension(part.get_content_type())
if not ext:
# Use a generic bag-of-bits extension
ext = '.bin'
filename = 'part-%03d%s' % (counter, ext)
counter += 1
fp = open(os.path.join(opts.directory, filename), 'wb')
fp.write(part.get_payload(decode=True))
fp.close()
When I fetch emails that do not have ISO or UTF encoded filenames, this code works fine.
But when the attachment name is ISO encoded, the filename is not within the get_filename, but the filename is in encoded form within part["Content-type"] (I believe).
The above example tries to guess the extension and if it can't find the filename, it just gives it a part filename. What I would like is the filename.
Has anyone dealt with issues like these, and what did you do to fix it?
| [
"I found the issue, it was with \nmimetypes.guess_extension(part.get_content_type())\n\nAnd images with \"image/pjpeg\" as the content type\n@S.Lott i have changed the code to resemble the above example, but i added this to fix the pjpeg issue.\nif not filename:\n ext = mimetypes.guess_extension(part.get_content_type())\n\nif not ext:\n guess = part[\"Content-Type\"].split(\";\")\n\n if guess[0] == \"image/pjpeg\":\n guess[0] = \"image/jpeg\"\n\n ext = mimetypes.guess_extension(guess[0])\n\nif not ext:\n ext = \".bin\"\n\n"
] | [
0
] | [] | [] | [
"attachment",
"email",
"python"
] | stackoverflow_0000470567_attachment_email_python.txt |
Q:
All nodeValue fields are None when parsing XML
I'm building a simple web-based RSS reader in Python, but I'm having trouble parsing the XML. I started out by trying some stuff in the Python command line.
>>> from xml.dom import minidom
>>> import urllib2
>>> url ='http://www.digg.com/rss/index.xml'
>>> xmldoc = minidom.parse(urllib2.urlopen(url))
>>> channelnode = xmldoc.getElementsByTagName("channel")
>>> channelnode = xmldoc.getElementsByTagName("channel")
>>> titlenode = channelnode[0].getElementsByTagName("title")
>>> print titlenode[0]
<DOM Element: title at 0xb37440>
>>> print titlenode[0].nodeValue
None
I played around with this for a while, but the nodeValue of everything seems to be None. Yet if you look at the XML, there definitely are values there. What am I doing wrong?
A:
For RSS feeds you should try the Universal Feed Parser library. It simplifies the handling of RSS feeds immensly.
import feedparser
d = feedparser.parse('http://www.digg.com/rss/index.xml')
title = d.channel.title
A:
This is the syntax you are looking for:
>>> print titlenode[0].firstChild.nodeValue
digg.com: Stories / Popular
Note that the node value is a logical descendant of the node itself.
| All nodeValue fields are None when parsing XML | I'm building a simple web-based RSS reader in Python, but I'm having trouble parsing the XML. I started out by trying some stuff in the Python command line.
>>> from xml.dom import minidom
>>> import urllib2
>>> url ='http://www.digg.com/rss/index.xml'
>>> xmldoc = minidom.parse(urllib2.urlopen(url))
>>> channelnode = xmldoc.getElementsByTagName("channel")
>>> channelnode = xmldoc.getElementsByTagName("channel")
>>> titlenode = channelnode[0].getElementsByTagName("title")
>>> print titlenode[0]
<DOM Element: title at 0xb37440>
>>> print titlenode[0].nodeValue
None
I played around with this for a while, but the nodeValue of everything seems to be None. Yet if you look at the XML, there definitely are values there. What am I doing wrong?
| [
"For RSS feeds you should try the Universal Feed Parser library. It simplifies the handling of RSS feeds immensly.\nimport feedparser\nd = feedparser.parse('http://www.digg.com/rss/index.xml')\ntitle = d.channel.title\n\n",
"This is the syntax you are looking for:\n>>> print titlenode[0].firstChild.nodeValue\ndigg.com: Stories / Popular\n\nNote that the node value is a logical descendant of the node itself.\n"
] | [
17,
10
] | [] | [] | [
"minidom",
"python",
"rss",
"xml"
] | stackoverflow_0000479751_minidom_python_rss_xml.txt |
Q:
The OLE way of doing drag&drop in wxPython
I have wxPython app which is running on MS Windows and I'd like it to support drag&drop between its instances (so the user opens my app 3 times and drags data from one instance to another).
The simple drag&drop in wxPython works that way:
User initiates drag: The source window packs necessary data in wx.DataObject(), creates new wx.DropSource, sets its data and calls dropSource.DoDragDrop()
User drops data onto target window: The drop target calls library function GetData() which transfers actual data to its wx.DataObject instance and finally - dataObject.GetData() unpacks the actual data.
I'd like to have some more sophisticated drag&drop which would allow user to choose what data is dragged after he drops.
Scenario of my dreams:
User initiates drag: Only some pointer to the source window is packed (some function or object).
User drops data onto target window: Nice dialog is displayed which asks user which drag&drop mode he chooses (like - dragging only song title, or song title and the artists name or whole album of the dragged artist).
Users chooses drag&drop mode: Drop target calls some function on the dragged data object, which then retrieves data from the drag source and transfers it to the drop target.
The scenario of my dreams seems doable in MS Windows, but the docs for wxWidgets and wxPython are pretty complex and ambigious. Not all wx.DataObject classes are available in wxPython (only wx.PySimpleDataObject), so I'd like someone to share his experience with such approach. Can such behaviour be implemented in wxPython without having to code it directly in winAPI?
EDIT:
Toni Ruža gave an answer with a working drag&drop example, but that's not exactly the scenario of my dreams. His code manipulates data when it's dropped (the HandleDrop() shows a popup menu), but data is prepared when the drag is initiated (in On_ElementDrag()). In my application there should be three different drag&drop modes, and some of them require time-consuming data preparation. That's why I want to postpone data retrieval to the moment the user drops data and chooses a (potentially costly) d&d mode.
And as for the memory protection issue - I want to use OLE mechanisms for inter-process communication, like MS Office does. You can copy an Excel diagram and paste it into MS-Word where it will behave like an image (well, sort of). Since it works I believe it can be done in winAPI. I just don't know if I can code it in wxPython.
A:
Since you can't use one of the standard data formats to store references to python objects I would recommend you use a text data format for storing the parameters you need for your method calls rather than making a new data format. And anyway, it would be no good to pass a reference to an object from one app to another as the object in question would not be accessible (remember memory protection?).
Here is a simple example for your requirements:
import wx
class TestDropTarget(wx.TextDropTarget):
def OnDropText(self, x, y, text):
wx.GetApp().TopWindow.HandleDrop(text)
def OnDragOver(self, x, y, d):
return wx.DragCopy
class Test(wx.Frame):
def __init__(self):
wx.Frame.__init__(self, None)
self.numbers = wx.ListCtrl(self, style = wx.LC_ICON | wx.LC_AUTOARRANGE)
self.field = wx.TextCtrl(self)
sizer = wx.FlexGridSizer(2, 2, 5, 5)
sizer.AddGrowableCol(1)
sizer.AddGrowableRow(0)
self.SetSizer(sizer)
sizer.Add(wx.StaticText(self, label="Drag from:"))
sizer.Add(self.numbers, flag=wx.EXPAND)
sizer.Add(wx.StaticText(self, label="Drag to:"), flag=wx.ALIGN_CENTER_VERTICAL)
sizer.Add(self.field)
for i in range(100):
self.numbers.InsertStringItem(self.numbers.GetItemCount(), str(i))
self.numbers.Bind(wx.EVT_LIST_BEGIN_DRAG, self.On_ElementDrag)
self.field.SetDropTarget(TestDropTarget())
menu_id1 = wx.NewId()
menu_id2 = wx.NewId()
self.menu = wx.Menu()
self.menu.AppendItem(wx.MenuItem(self.menu, menu_id1, "Simple copy"))
self.menu.AppendItem(wx.MenuItem(self.menu, menu_id2, "Mess with it"))
self.Bind(wx.EVT_MENU, self.On_SimpleCopy, id=menu_id1)
self.Bind(wx.EVT_MENU, self.On_MessWithIt, id=menu_id2)
def On_ElementDrag(self, event):
data = wx.TextDataObject(self.numbers.GetItemText(event.Index))
source = wx.DropSource(self.numbers)
source.SetData(data)
source.DoDragDrop()
def HandleDrop(self, text):
self._text = text
self.PopupMenu(self.menu)
def On_SimpleCopy(self, event):
self.field.Value = self._text
def On_MessWithIt(self, event):
self.field.Value = "<-%s->" % "".join([int(c)*c for c in self._text])
app = wx.PySimpleApp()
app.TopWindow = Test()
app.TopWindow.Show()
app.MainLoop()
Methods like On_SimpleCopy and On_MessWithIt get executed after the drop so any lengthy operations you might want to do you can do there based on the textual or some other standard type of data you transferred with the drag (self._text in my case), and look... no OLE :)
A:
Ok, it seems that it can't be done the way I wanted it.
Possible solutions are:
Pass some parameters in d&d and do some inter-process communication on your own, after user drops data in target processes window.
Use DataObjectComposite to support multiple drag&drop formats and keyboard modifiers to choose current format. Scenario:
User initiates drag. State of CTRL, ALT and SHIFT is checked, and depending on it the d&d format is selected. DataObjectComposite is created, and has set data in chosen format.
User drops data in target window. Drop target asks dropped DataObject for supported format and retrieves data, knowing what format it is in.
I'm choosing the solution 2., because it doesn't require hand crafting communication between processes and it allows me to avoid unnecessary data retrieval when user wants to drag only the simplest data.
Anyway - Toni, thanks for your answer! Played with it a little and it made me think of d&d and of changing my approach to the problem.
| The OLE way of doing drag&drop in wxPython | I have wxPython app which is running on MS Windows and I'd like it to support drag&drop between its instances (so the user opens my app 3 times and drags data from one instance to another).
The simple drag&drop in wxPython works that way:
User initiates drag: The source window packs necessary data in wx.DataObject(), creates new wx.DropSource, sets its data and calls dropSource.DoDragDrop()
User drops data onto target window: The drop target calls library function GetData() which transfers actual data to its wx.DataObject instance and finally - dataObject.GetData() unpacks the actual data.
I'd like to have some more sophisticated drag&drop which would allow user to choose what data is dragged after he drops.
Scenario of my dreams:
User initiates drag: Only some pointer to the source window is packed (some function or object).
User drops data onto target window: Nice dialog is displayed which asks user which drag&drop mode he chooses (like - dragging only song title, or song title and the artists name or whole album of the dragged artist).
Users chooses drag&drop mode: Drop target calls some function on the dragged data object, which then retrieves data from the drag source and transfers it to the drop target.
The scenario of my dreams seems doable in MS Windows, but the docs for wxWidgets and wxPython are pretty complex and ambigious. Not all wx.DataObject classes are available in wxPython (only wx.PySimpleDataObject), so I'd like someone to share his experience with such approach. Can such behaviour be implemented in wxPython without having to code it directly in winAPI?
EDIT:
Toni Ruža gave an answer with a working drag&drop example, but that's not exactly the scenario of my dreams. His code manipulates data when it's dropped (the HandleDrop() shows a popup menu), but data is prepared when the drag is initiated (in On_ElementDrag()). In my application there should be three different drag&drop modes, and some of them require time-consuming data preparation. That's why I want to postpone data retrieval to the moment the user drops data and chooses a (potentially costly) d&d mode.
And as for the memory protection issue - I want to use OLE mechanisms for inter-process communication, like MS Office does. You can copy an Excel diagram and paste it into MS-Word where it will behave like an image (well, sort of). Since it works I believe it can be done in winAPI. I just don't know if I can code it in wxPython.
| [
"Since you can't use one of the standard data formats to store references to python objects I would recommend you use a text data format for storing the parameters you need for your method calls rather than making a new data format. And anyway, it would be no good to pass a reference to an object from one app to another as the object in question would not be accessible (remember memory protection?).\nHere is a simple example for your requirements:\nimport wx\n\n\nclass TestDropTarget(wx.TextDropTarget):\n def OnDropText(self, x, y, text):\n wx.GetApp().TopWindow.HandleDrop(text)\n\n def OnDragOver(self, x, y, d):\n return wx.DragCopy\n\n\nclass Test(wx.Frame):\n def __init__(self):\n wx.Frame.__init__(self, None)\n\n self.numbers = wx.ListCtrl(self, style = wx.LC_ICON | wx.LC_AUTOARRANGE)\n self.field = wx.TextCtrl(self)\n\n sizer = wx.FlexGridSizer(2, 2, 5, 5)\n sizer.AddGrowableCol(1)\n sizer.AddGrowableRow(0)\n self.SetSizer(sizer)\n sizer.Add(wx.StaticText(self, label=\"Drag from:\"))\n sizer.Add(self.numbers, flag=wx.EXPAND)\n sizer.Add(wx.StaticText(self, label=\"Drag to:\"), flag=wx.ALIGN_CENTER_VERTICAL)\n sizer.Add(self.field)\n\n for i in range(100):\n self.numbers.InsertStringItem(self.numbers.GetItemCount(), str(i))\n\n self.numbers.Bind(wx.EVT_LIST_BEGIN_DRAG, self.On_ElementDrag)\n self.field.SetDropTarget(TestDropTarget())\n\n menu_id1 = wx.NewId()\n menu_id2 = wx.NewId()\n self.menu = wx.Menu()\n self.menu.AppendItem(wx.MenuItem(self.menu, menu_id1, \"Simple copy\"))\n self.menu.AppendItem(wx.MenuItem(self.menu, menu_id2, \"Mess with it\"))\n self.Bind(wx.EVT_MENU, self.On_SimpleCopy, id=menu_id1)\n self.Bind(wx.EVT_MENU, self.On_MessWithIt, id=menu_id2)\n\n def On_ElementDrag(self, event):\n data = wx.TextDataObject(self.numbers.GetItemText(event.Index))\n source = wx.DropSource(self.numbers)\n source.SetData(data)\n source.DoDragDrop()\n\n def HandleDrop(self, text):\n self._text = text\n self.PopupMenu(self.menu)\n\n def On_SimpleCopy(self, event):\n self.field.Value = self._text\n\n def On_MessWithIt(self, event):\n self.field.Value = \"<-%s->\" % \"\".join([int(c)*c for c in self._text])\n\n\napp = wx.PySimpleApp()\napp.TopWindow = Test()\napp.TopWindow.Show()\napp.MainLoop()\n\nMethods like On_SimpleCopy and On_MessWithIt get executed after the drop so any lengthy operations you might want to do you can do there based on the textual or some other standard type of data you transfered with the drag (self._text in my case), and look... no OLE :)\n",
"Ok, it seems that it can't be done the way I wanted it.\nPossible solutions are: \n\nPass some parameters in d&d and do some inter-process communication on your own, after user drops data in target processes window. \nUse DataObjectComposite to support multiple drag&drop formats and keyboard modifiers to choose current format. Scenario:\n\n\nUser initiates drag. State of CTRL, ALT and SHIFT is checked, and depending on it the d&d format is selected. DataObjectComposite is created, and has set data in chosen format.\nUser drops data in target window. Drop target asks dropped DataObject for supported format and retrieves data, knowing what format it is in.\n\n\nI'm choosing the solution 2., because it doesn't require hand crafting communication between processes and it allows me to avoid unnecessary data retrieval when user wants to drag only the simplest data.\nAnyway - Toni, thanks for your answer! Played with it a little and it made me think of d&d and of changing my approach to the problem.\n"
] | [
3,
0
] | [] | [] | [
"drag_and_drop",
"ole",
"python",
"windows",
"wxpython"
] | stackoverflow_0000476142_drag_and_drop_ole_python_windows_wxpython.txt |
Q:
How can I build a recursive function in python?
How can I build a recursive function in python?
A:
I'm wondering whether you meant "recursive". Here is a simple example of a recursive function to compute the factorial function:
def factorial(n):
if n == 0:
return 1
else:
return n * factorial(n - 1)
The two key elements of a recursive algorithm are:
The termination condition: n == 0
The reduction step where the function calls itself with a smaller number each time: factorial(n - 1)
A:
Recursion in Python works just as recursion in any other language, with the recursive construct defined in terms of itself:
For example a recursive class could be a binary tree (or any tree):
class tree():
def __init__(self):
'''Initialise the tree'''
self.Data = None
self.Count = 0
self.LeftSubtree = None
self.RightSubtree = None
def Insert(self, data):
'''Add an item of data to the tree'''
if self.Data == None:
self.Data = data
self.Count += 1
elif data < self.Data:
if self.LeftSubtree == None:
                # tree is a recursive class definition
self.LeftSubtree = tree()
# Insert is a recursive function
self.LeftSubtree.Insert(data)
elif data == self.Data:
self.Count += 1
elif data > self.Data:
if self.RightSubtree == None:
self.RightSubtree = tree()
self.RightSubtree.Insert(data)
if __name__ == '__main__':
T = tree()
# The root node
T.Insert('b')
# Will be put into the left subtree
T.Insert('a')
# Will be put into the right subtree
T.Insert('c')
As already mentioned, a recursive structure must have a termination condition. In this class it is less obvious: recursion only happens while a new element is being inserted, each call descending one level deeper until the right spot is found.
Also worth noting, python by default has a limit to the depth of recursion available, to avoid absorbing all of the computer's memory. On my computer this is 1000. I don't know if this changes depending on hardware, etc. To see yours:
import sys
sys.getrecursionlimit()
and to set it:
import sys  # (if you haven't already)
sys.setrecursionlimit(1500)  # takes the new limit as an argument
edit: I can't guarantee that my binary tree is the most efficient design ever. If anyone can improve it, I'd be happy to hear how
A:
Let's say you want to build:
u(n+1)=f(u(n)) with u(0)=u0
One solution is to define a simple recursive function:
u0 = ...
def f(x):
...
def u(n):
if n==0: return u0
return f(u(n-1))
Unfortunately, if you want to calculate high values of u, you will run into a stack overflow error.
Another solution is a simple loop:
def u(n):
ux = u0
for i in xrange(n):
ux=f(ux)
return ux
But if you want multiple values of u for different values of n, this is suboptimal. You could cache all values in an array, but you may run into an out of memory error. You may want to use generators instead:
def u(n):
ux = u0
for i in xrange(n):
ux=f(ux)
yield ux
for val in u(1000):
print val
There are many other options, but I guess these are the main ones.
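A variation on the generator idea (my own sketch, reusing u0 and f from above): make the generator unbounded and pick out just the value you need with itertools.islice, so nothing else is stored:

import itertools

def u_all():
    ux = u0
    while True:
        ux = f(ux)
        yield ux

# the 1000th value, without keeping the 999 before it
val = itertools.islice(u_all(), 999, None).next()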
A:
Recursive function example:
def recursive(string, num):
print "#%s - %s" % (string, num)
    recursive(string, num+1)  # note: no base case, so this recurses until the recursion limit is hit
Run it with:
recursive("Hello world", 0)
| How can I build a recursive function in python? | How can I build a recursive function in python?
| [
"I'm wondering whether you meant \"recursive\". Here is a simple example of a recursive function to compute the factorial function:\ndef factorial(n):\n if n == 0:\n return 1\n else:\n return n * factorial(n - 1)\n\nThe two key elements of a recursive algorithm are:\n\nThe termination condition: n == 0\nThe reduction step where the function calls itself with a smaller number each time: factorial(n - 1)\n\n",
"Recursion in Python works just as recursion in an other language, with the recursive construct defined in terms of itself:\nFor example a recursive class could be a binary tree (or any tree):\nclass tree():\n def __init__(self):\n '''Initialise the tree'''\n self.Data = None\n self.Count = 0\n self.LeftSubtree = None\n self.RightSubtree = None\n\n def Insert(self, data):\n '''Add an item of data to the tree'''\n if self.Data == None:\n self.Data = data\n self.Count += 1\n elif data < self.Data:\n if self.LeftSubtree == None:\n # tree is a recurive class definition\n self.LeftSubtree = tree()\n # Insert is a recursive function\n self.LeftSubtree.Insert(data)\n elif data == self.Data:\n self.Count += 1\n elif data > self.Data:\n if self.RightSubtree == None:\n self.RightSubtree = tree()\n self.RightSubtree.Insert(data)\n\nif __name__ == '__main__':\n T = tree()\n # The root node\n T.Insert('b')\n # Will be put into the left subtree\n T.Insert('a')\n # Will be put into the right subtree\n T.Insert('c')\n\nAs already mentioned a recursive structure must have a termination condition. In this class, it is not so obvious because it only recurses if new elements are added, and only does it a single time extra.\nAlso worth noting, python by default has a limit to the depth of recursion available, to avoid absorbing all of the computer's memory. On my computer this is 1000. I don't know if this changes depending on hardware, etc. To see yours :\nimport sys\nsys.getrecursionlimit()\n\nand to set it :\nimport sys #(if you haven't already)\nsys.setrecursionlimit()\n\nedit: I can't guarentee that my binary tree is the most efficient design ever. If anyone can improve it, I'd be happy to hear how\n",
"Let's say you want to build:\nu(n+1)=f(u(n)) with u(0)=u0\nOne solution is to define a simple recursive function: \nu0 = ...\n\ndef f(x):\n ...\n\ndef u(n):\n if n==0: return u0\n return f(u(n-1))\n\nUnfortunately, if you want to calculate high values of u, you will run into a stack overflow error.\nAnother solution is a simple loop:\ndef u(n):\n ux = u0\n for i in xrange(n):\n ux=f(ux)\n return ux\n\nBut if you want multiple values of u for different values of n, this is suboptimal. You could cache all values in an array, but you may run into an out of memory error. You may want to use generators instead:\ndef u(n):\n ux = u0\n for i in xrange(n):\n ux=f(ux)\n yield ux\n\nfor val in u(1000):\n print val\n\nThere are many other options, but I guess these are the main ones.\n",
"Recursive function example:\ndef recursive(string, num):\n print \"#%s - %s\" % (string, num)\n recursive(string, num+1)\n\nRun it with:\nrecursive(\"Hello world\", 0)\n\n"
] | [
81,
10,
5,
2
] | [] | [] | [
"python",
"recursion"
] | stackoverflow_0000479343_python_recursion.txt |
Q:
Python 2.2: How to get the lower 32 bits out of a 64 bit number?
I have a 64 bit number comprised of various bit fields and I'm writing a simple python utility to parse the number. The problem I'm facing is that the lower 32 bits form one field, and using some combination of bit shifts or bit masking doesn't give just the 32 bits.
big_num = 0xFFFFFFFFFFFFFFFF
some_field = (big_num & 0x00FFFF0000000000) # works as expected
field_i_need = big_num & 0x00000000FFFFFFFF # doesn't work
What happens is that field_i_need is equal to big_num, not the lower 32 bits.
Am I missing something simple here?
Thanks!
Matthew
A:
>>> big_num = 0xFFFFFFFFFFFFFFFF
>>> some_field = (big_num & 0x00FFFF0000000000) # works as expected
>>> field_i_need = big_num & 0x00000000FFFFFFFF # doesn't work
>>> big_num
18446744073709551615L
>>> field_i_need
4294967295L
It seems to work, or I am missing the question. I'm using Python 2.6.1, anyway.
For your information, I asked a somehow-related question some time ago.
A:
You need to use long integers.
foo = 0xDEADBEEFCAFEBABEL
fooLow = foo & 0xFFFFFFFFL
A:
Matthew here, I noticed I had left out one piece of information, I'm using python version 2.2.3. I managed to try this out on another machine w/ version 2.5.1 and everything works as expected. Unfortunately I need to use the older version.
Anyway, thank you all for your responses. Appending 'L' seems to do the trick, and this is a one-off so I feel comfortable with this approach.
Thanks,
Matthew
A:
Obviously, if it's a one-off, then using long integers by appending 'L' to your literals is the quick answer, but for more complicated cases, you might find that you can write clearer code if you look at Does Python have a bitfield type? since this is how you seem to be using your bitmasks.
I think my personal preference would probably be http://docs.python.org/library/ctypes.html#ctypes-bit-fields-in-structures-unions though, since it's in the standard library, and has a pretty clear API.
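To illustrate the ctypes angle with a concrete sketch - this is my own example, not from the linked docs, and it needs a Python where ctypes is available - a union lets you name the two 32-bit halves of a 64-bit value:

import ctypes

class Halves(ctypes.Structure):
    _fields_ = [("low", ctypes.c_uint32),   # low half comes first on little-endian machines
                ("high", ctypes.c_uint32)]

class Value64(ctypes.Union):
    _fields_ = [("halves", Halves),
                ("raw", ctypes.c_uint64)]

v = Value64()
v.raw = 0xFFFFFFFF12345678
print hex(v.halves.low)   # 0x12345678
print hex(v.halves.high)  # 0xffffffff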
| Python 2.2: How to get the lower 32 bits out of a 64 bit number? | I have a 64 bit number comprised of various bit fields and I'm writing a simple python utility to parse the number. The problem I'm facing is that the lower 32 bits form one field, and using some combination of bit shifts or bit masking doesn't give just the 32 bits.
big_num = 0xFFFFFFFFFFFFFFFF
some_field = (big_num & 0x00FFFF0000000000) # works as expected
field_i_need = big_num & 0x00000000FFFFFFFF # doesn't work
What happens is that field_i_need is equal to big_num, not the lower 32 bits.
Am I missing something simple here?
Thanks!
Matthew
| [
">>> big_num = 0xFFFFFFFFFFFFFFFF\n>>> some_field = (big_num & 0x00FFFF0000000000) # works as expected\n>>> field_i_need = big_num & 0x00000000FFFFFFFF # doesn't work\n>>> big_num\n18446744073709551615L\n>>> field_i_need\n4294967295L\n\nIt seems to work, or I am missing the question. I'm using Python 2.6.1, anyway.\nFor your information, I asked a somehow-related question some time ago.\n",
"You need to use long integers.\nfoo = 0xDEADBEEFCAFEBABEL\nfooLow = foo & 0xFFFFFFFFL\n\n",
"Matthew here, I noticed I had left out one piece of information, I'm using python version 2.2.3. I managed to try this out on another machine w/ version 2.5.1 and everything works as expected. Unfortunately I need to use the older version.\nAnyway, thank you all for your responses. Appending 'L' seems to do the trick, and this is a one-off so I feel comfortable with this approach.\nThanks,\nMatthew\n",
"Obviously, if it's a one-off, then using long integers by appending 'L' to your literals is the quick answer, but for more complicated cases, you might find that you can write clearer code if you look at Does Python have a bitfield type? since this is how you seem to be using your bitmasks.\nI think my personal preference would probably be http://docs.python.org/library/ctypes.html#ctypes-bit-fields-in-structures-unions though, since it's in the standard library, and has a pretty clear API.\n"
] | [
4,
2,
2,
0
] | [] | [] | [
"python"
] | stackoverflow_0000474949_python.txt |
Q:
Is there a way in python to apply a list of regex patterns that are stored in a list to a single string?
I have a list of regex patterns (stored in a list type) that I would like to apply to a string.
Does anyone know a good way to:
Apply every regex pattern in the list to the string
and
Call a different function that is associated with that pattern in the list if it matches.
I would like to do this in python if possible
thanks in advance.
A:
import re
def func1(s):
print s, "is a nice string"
def func2(s):
print s, "is a bad string"
funcs = {
r".*pat1.*": func1,
r".*pat2.*": func2
}
s = "Some string with both pat1 and pat2"
for pat, func in funcs.items():
if re.search(pat, s):
func(s)
The above code will call both functions for the string s because both patterns are matched.
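Since the question mentions the patterns being stored in a list, the same idea also works with a list of (pattern, function) pairs, compiling each regex once up front (a sketch reusing func1 and func2 from above):

import re

handlers = [
    (re.compile(r"pat1"), func1),
    (re.compile(r"pat2"), func2),
]

def dispatch(s):
    # apply every pattern and call the associated function on a match
    for pattern, func in handlers:
        if pattern.search(s):
            func(s)

dispatch("Some string with both pat1 and pat2")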
| Is there a way in python to apply a list of regex patterns that are stored in a list to a single string? | I have a list of regex patterns (stored in a list type) that I would like to apply to a string.
Does anyone know a good way to:
Apply every regex pattern in the list to the string
and
Call a different function that is associated with that pattern in the list if it matches.
I would like to do this in python if possible
thanks in advance.
| [
"import re\n\ndef func1(s):\n print s, \"is a nice string\"\n\ndef func2(s):\n print s, \"is a bad string\"\n\nfuncs = {\n r\".*pat1.*\": func1,\n r\".*pat2.*\": func2\n}\ns = \"Some string with both pat1 and pat2\"\n\nfor pat, func in funcs.items():\n if re.search(pat, s):\n func(s)\n\nThe above code will call both functions for the string s because both patterns are matched.\n"
] | [
11
] | [] | [] | [
"list",
"python",
"regex"
] | stackoverflow_0000481266_list_python_regex.txt |
Q:
Installing certain packages using virtualenv
So, I want to start using virtualenv this year. I like the no-site-packages option, that is nice. However I was wondering how to install certain packages into each virtualenv. For example, let's say I want to install django into each virtualenv... is this possible, and if so, how? Does buildout address this?
Well it's not so much django, more like the django applications... I don't mind installing a version of django into each virtualenv... I was just wondering if there was some intermediate option to 'no-site-packages'
A:
I know where you're coming from with the no-site-packages option. I want to use pip freeze to generate requirements lists and don't want a lot of extra cruft in site-packages. I also need to use multiple versions of django as I have legacy projects I haven't upgraded (some old svn checkouts (pre1.0), some 1.0, and some new svn checkouts). Installing Django in the global site-packages isn't really an option.
Instead I have a django folder with releases and a couple of different svn versions and just symlink to the appropriate version in the local site-packages. For ease of use I link to the local site-packages at the same level as the environment and then link in the appropriate django directory and any other "system" style packages I need (usually just PIL). So:
$ virtualenv pyenv
$ ln -s ./pyenv/lib/python2.5/site-packages ./installed
$ ln -s /usr/lib/python2.5/site-packages/PIL ./installed
$ ln -s /opt/django/django1.0svn/trunk/django ./installed
Now the following works:
$ source pyenv/bin/activate
$ python
> import django
> import PIL
A:
If you want django to be installed on EACH virtualenv, you might as well install it in the site-packages directory? Just a thought.
A:
I'd suggest using virtualenv's bootstrapping support. This allows you to execute arbitrary Python after the virtualenv is created, such as installing new packages.
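For reference, a rough sketch of how that could look (the Django install step, the paths and the output file name are my own assumptions):

import virtualenv

# after_install() is the hook the generated bootstrap script calls once
# the new environment exists; here it installs Django into that env.
extra = """
import os, subprocess

def after_install(options, home_dir):
    easy_install = os.path.join(home_dir, 'bin', 'easy_install')  # Scripts/ on Windows
    subprocess.call([easy_install, 'Django'])
"""

open('bootstrap.py', 'w').write(virtualenv.create_bootstrap_script(extra))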
A:
The other option (one I've used) is to easy_install Django after you've created the virtual environment. This is easily scripted. The penalty you pay is waiting for Django installation in each of your virtual environments.
I'm with Toby, though: Unless there's a compelling reason why you have to have a separate copy of Django in each virtual environment, you should just consider installing it in your main Python area, and allowing each virtual environment to use it from there.
A:
I want to check out this project:
http://www.stereoplex.com/two-voices/fez-djangoskel-django-projects-and-apps-as-eggs
Might be my answer....
| Installing certain packages using virtualenv | So, I want to start using virtualenv this year. I like the no-site-packages option, that is nice. However I was wondering how to install certain packages into each virtualenv. For example, let's say I want to install django into each virtualenv... is this possible, and if so, how? Does buildout address this?
Well it's not so much django, more like the django applications... I don't mind installing a version of django into each virtualenv... I was just wondering if there was some intermediate option to 'no-site-packages'
| [
"I know where you're coming from with the no-sites-option. I want to use pip freeze to generate requirements lists and don't want a lot of extra cruft in site-packages. I also need to use multiple versions of django as I have legacy projects I haven't upgraded (some old svn checkouts (pre1.0), some 1.0, and some new svn checkouts). Installing Django in the global site-packages isn't really an option.\nInstead I have a django folder with releases and a couple of different svn versions and just symlink to the appropriate version in the local site-packages. For ease of use I link to the local site-packages at the same level as the environment and then link in the appropriate django directory and any other \"system\" style packages I need (usually just PIL). So:\n$ virtualenv pyenv\n$ ln -s ./pyenv/lib/python2.5/site-packages ./installed\n$ ln -s /usr/lib/python2.5/site-packages/PIL ./installed\n$ ln -s /opt/django/django1.0svn/trunk/django ./installed\n\nNow the following works:\n$ source pyenv/bin/activate\n$ python\n> import django\n> import PIL\n\n",
"If you want django to be installed on EACH virtualenv, you might as well install it in the site-packages directory? Just a thought.\n",
"I'd suggest using virtualenv's bootstrapping support. This allows you to execute arbitrary Python after the virtualenv is created, such as installing new packages.\n",
"The other option (one I've used) is to easy_install Django after you've created the virtual environment. This is easily scripted. The penalty you pay is waiting for Django installation in each of your virtual environments.\nI'm with Toby, though: Unless there's a compelling reason why you have to have a separate copy of Django in each virtual environment, you should just consider installing it in your main Python area, and allowing each virtual environment to use it from there.\n",
"I want to check out this project:\nhttp://www.stereoplex.com/two-voices/fez-djangoskel-django-projects-and-apps-as-eggs\nMight be my answer....\n"
] | [
6,
2,
1,
0,
0
] | [] | [] | [
"buildout",
"python",
"virtualenv"
] | stackoverflow_0000434407_buildout_python_virtualenv.txt |
Q:
What does the LDAP response tuple (97, []) mean?
I am using python-ldap to try to authenticate against an existing Active Directory, and when I use the following code:
import ldap
l = ldap.initialize('LDAP://example.com')
m = l.simple_bind_s('username@example.com', password)
I get the following back:
print m
(97, [])
What does the 97 and empty list signify coming from a Microsoft Active Directory server?
I gather this is a successful authentication since it doesn't error (which it does if you use the wrong password or non-existent username), but I'd like to know if the tuple means something useful.
A:
The first item is a status code (97=success) followed by a list of messages from the server.
See here in the section Binding.
A:
According to the documentation, this is:
LDAP_REFERRAL_LIMIT_EXCEEDED 0x61 The referral limit was exceeded.
Probably
ldap.set_option(ldap.OPT_REFERRALS, 0)
could help.
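Putting that together with the bind from the question (hostname and credentials are the question's placeholders):

import ldap

ldap.set_option(ldap.OPT_REFERRALS, 0)  # set before binding
l = ldap.initialize('LDAP://example.com')
result_type, result_data = l.simple_bind_s('username@example.com', password)
# result_type is the 97 from the question, result_data the empty list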
A:
here is a forum thread that explains the error and provides work around.
http://www.velocityreviews.com/forums/t612838-pythonldap-operations-error.html
| What does the LDAP response tuple (97, []) mean? | I am using python-ldap to try to authenticate against an existing Active Directory, and when I use the following code:
import ldap
l = ldap.initialize('LDAP://example.com')
m = l.simple_bind_s('username@example.com', password)
I get the following back:
print m
(97, [])
What does the 97 and empty list signify coming from a Microsoft Active Directory server?
I gather this is a successful authentication since it doesn't error (which it does if you use the wrong password or non-existent username), but I'd like to know if the tuple means something useful.
| [
"The first item is a status code (97=success) followed by a list of messages from the server.\nSee here in the section Binding. \n",
"According to the documentation, this is:\nLDAP_REFERRAL_LIMIT_EXCEEDED 0x61 The referral limit was exceeded.\n\nProbably\nldap.set_option(ldap.OPT_REFERRALS, 0)\n\ncould help.\n",
"here is a forum thread that explains the error and provides work around. \nhttp://www.velocityreviews.com/forums/t612838-pythonldap-operations-error.html\n"
] | [
6,
5,
0
] | [] | [] | [
"active_directory",
"ldap",
"python"
] | stackoverflow_0000481995_active_directory_ldap_python.txt |
Q:
Python - what are all the built-in decorators?
I know of @staticmethod, @classmethod, and @property, but only through scattered documentation. What are all the function decorators that are built into Python? Is that in the docs? Is there an up-to-date list maintained somewhere?
A:
I don't think so. Decorators don't differ from ordinary functions, you only call them in a fancier way.
For finding all of them try searching Built-in functions list, because as you can see in Python glossary the decorator syntax is just a syntactic sugar, as the following two definitions create equal functions (copied this example from glossary):
def f(...):
...
f = staticmethod(f)
@staticmethod
def f(...):
So any built-in function that returns another function can be used as a decorator. Question is - does it make sense to use it that way? :-)
functools module contains some functions that can be used as decorators, but they aren't built-ins you asked for.
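One example from there is functools.wraps, which is itself applied as a decorator inside decorators you write; a small sketch of my own:

import functools

def logged(func):
    @functools.wraps(func)  # copies func's __name__ and docstring onto wrapper
    def wrapper(*args, **kwargs):
        print "calling", func.__name__
        return func(*args, **kwargs)
    return wrapper

@logged
def add(a, b):
    return a + b

add(1, 2)  # prints "calling add" and returns 3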
A:
They're not built-in, but this library of example decorators is very good.
As Abgan says, the built-in function list is probably the best place to look. Although, since decorators can also be implemented as classes, it's not guaranteed to be comprehensive.
A:
Decorators aren't even required to return a function. I've used @atexit.register before.
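For instance, a minimal sketch of that pattern:

import atexit

@atexit.register  # registers goodbye() to run at interpreter shutdown, returns it unchanged
def goodbye():
    print "shutting down"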
| Python - what are all the built-in decorators? | I know of @staticmethod, @classmethod, and @property, but only through scattered documentation. What are all the function decorators that are built into Python? Is that in the docs? Is there an up-to-date list maintained somewhere?
| [
"I don't think so. Decorators don't differ from ordinary functions, you only call them in a fancier way. \nFor finding all of them try searching Built-in functions list, because as you can see in Python glossary the decorator syntax is just a syntactic sugar, as the following two definitions create equal functions (copied this example from glossary):\ndef f(...):\n ...\nf = staticmethod(f)\n\n@staticmethod\ndef f(...):\n\nSo any built-in function that returns another function can be used as a decorator. Question is - does it make sense to use it that way? :-)\nfunctools module contains some functions that can be used as decorators, but they aren't built-ins you asked for.\n",
"They're not built-in, but this library of example decorators is very good.\nAs Abgan says, the built-in function list is probably the best place to look. Although, since decorators can also be implemented as classes, it's not guaranteed to be comprehensive.\n",
"Decorators aren't even required to return a function. I've used @atexit.register before.\n"
] | [
44,
23,
1
] | [
"There is no such thing as a list of all decorators. There's no list of all functions. There's no list of all classes.\nDecorators are a handy tool for defining a common aspect across functions, methods, or classes. There are the built-in decorators. Plus there are any number of cool and useless decorators. In the same way there are any number of cool and useless classes. \n"
] | [
-4
] | [
"decorator",
"python"
] | stackoverflow_0000480178_decorator_python.txt |
Q:
Replace Nested For Loops... or not
I have a script that loops through a series of four (or less) characters strings. For example:
aaaa
aaab
aaac
aaad
I have been able to implement it with nested for loops like so:
chars = string.digits + string.uppercase + string.lowercase
for a in chars:
print '%s' % a
for b in chars:
print '%s%s' % (a, b)
for c in chars:
print '%s%s%s' % (a, b, c)
for d in chars:
print '%s%s%s%s' % (a, b, c, d)
Is this sort of loop nesting a bad thing, and if so, what would be a better way of accomplishing what I am doing?
A:
import string
import itertools
chars = string.digits + string.letters
MAX_CHARS = 4
for nletters in range(MAX_CHARS):
for word in itertools.product(chars, repeat=nletters + 1):
print (''.join(word))
That'll print all 15018570 words you're looking for. If you want more/less words just change the MAX_CHARS variable. It will still have just two fors for any number of chars, and you don't have to repeat yourself. And is pretty readable. .
A:
I'm going to submit my answer as the most readable and least scalable :)
import string
chars = [''] + list(string.lowercase)
strings = (a+b+c+d for a in chars
for b in chars
for c in chars
for d in chars)
for string in strings:
print string
EDIT: Actually, this is incorrect, as it will produce duplicates of all strings of length<4. Removing the empty string from the chars array would just produce 4-char strings.
Normally I'd delete this answer, but I still kinda like it if you need to generate strings of the same length.
A:
Write for the programmer first - the computer second.
If it's clear and obvious to understand then its correct.
If speed matters AND the compiler doesn't optimise it anyway AND if you measure it AND it is the problem - then think of a faster cleverer way!
A:
I don't think it's a bad thing, provided you understand (and document :-) it. I don't doubt there may be a more pythonic way or clever solution (with lambdas or whatnot) but I've always favored readability over cleverness.
Since you have to generate all possibilities of 1-, 2-, 3- and 4-character "words", this method is as good as any. I'm not sure how long it would take since you're effectively generating (very roughly) 14 million lines of output (but probably every solution would have that problem).
Pre-calculating the common prefixes may provide a speed boost but you'd be better off measuring it to check (always check, never assume):
chars = string.digits + string.uppercase + string.lowercase
for a in chars:
print a
for b in chars:
ab = '%s%s' % (a, b)
print ab
for c in chars:
abc = '%s%s' % (ab, c)
print abc
for d in chars:
print '%s%s' % (abc, d)
EDIT: I actually did some benchmarks (with Windows-Python 2.6.1) - this version takes about 2.25 time units compared to the original 2.84 so it's 26% faster. I think that might warrant its use (again, as long as it's documented clearly what it's trying to achieve).
A:
@nosklo's and @Triptych's solutions produce different results:
>>> list(map(''.join, itertools.chain.from_iterable(itertools.product("ab",
... repeat=r) for r in range(4)))) # @nosklo's
['', 'a', 'b', 'aa', 'ab', 'ba', 'bb', 'aaa', 'aab', 'aba', 'abb', 'baa',
'bab', 'bba', 'bbb']
>>> ab = ['']+list("ab")
>>> list(map(''.join, (a+b+c for a in ab for b in ab for c in ab)))
['', 'a', 'b', 'a', 'aa', 'ab', 'b', 'ba', 'bb', 'a', 'aa', 'ab', 'aa',
'aaa', 'aab', 'ab', 'aba', 'abb', 'b', 'ba', 'bb', 'ba', 'baa', 'bab',
'bb', 'bba', 'bbb']
Here's a modified version of @Triptych's solution that produces the same output as @nosklo's:
>>> ab = "ab"
>>> list(map(''.join, itertools.chain([''], ab, (a+b for a in ab for b in ab),
... (a+b+c for a in ab for b in ab for c in ab))))
['', 'a', 'b', 'aa', 'ab', 'ba', 'bb', 'aaa', 'aab', 'aba', 'abb', 'baa',
'bab', 'bba', 'bbb']
A:
There are many algorithms for generating every permutation of a set. What you want here is a related problem, but not directly analogous. Suggested Reading
A:
It doesn't exactly answer the question, but this would return the nth combination for the given maximum length and characters in the alphabet to use:
#!/usr/bin/python
def nth_combination(n, maxlen=4, alphabet='abc'):
"""
>>> print ','.join(nth_combination(n, 1, 'abc') for n in range(3))
a,b,c
>>> print ','.join(nth_combination(n, 2, 'abc') for n in range(12))
a,aa,ab,ac,b,ba,bb,bc,c,ca,cb,cc
>>> import string ; alphabet = string.ascii_letters + string.digits
>>> print ','.join(nth_combination(n, 4, alphabet) for n in range(16))
a,aa,aaa,aaaa,aaab,aaac,aaad,aaae,aaaf,aaag,aaah,aaai,aaaj,aaak,aaal,aaam
>>> print ','.join(nth_combination(n, 4, alphabet)
... for n in range(0, 14000000, 10**6))
a,emiL,iyro,mKz2,qWIF,u8Ri,zk0U,Dxav,HJi9,LVrM,P7Ap,UjJ1,YvSE,2H1h
"""
if maxlen == 1:
return alphabet[n]
offset, next_n = divmod(n, 1 + len(alphabet)**(maxlen-1))
if next_n == 0:
return alphabet[offset]
return alphabet[offset] + nth_combination(next_n-1, maxlen-1, alphabet)
if __name__ == '__main__':
from doctest import testmod
testmod()
This of course makes sense only if you need random access to the set of combinations instead of always iterating through them all.
If maxlen is high, some speed optimization could be achieved e.g. by getting rid of string concatenation and re-calculating the length of alphabet and maxlen-1 at each level of the recursion. A non-recursive approach might make sense, too.
| Replace Nested For Loops... or not | I have a script that loops through a series of four (or less) characters strings. For example:
aaaa
aaab
aaac
aaad
I have been able to implement it with nested for loops like so:
chars = string.digits + string.uppercase + string.lowercase
for a in chars:
print '%s' % a
for b in chars:
print '%s%s' % (a, b)
for c in chars:
print '%s%s%s' % (a, b, c)
for d in chars:
print '%s%s%s%s' % (a, b, c, d)
Is this sort of loop nesting a bad thing, and if so, what would be a better way of accomplishing what I am doing?
| [
"import string\nimport itertools\n\nchars = string.digits + string.letters\nMAX_CHARS = 4\nfor nletters in range(MAX_CHARS):\n for word in itertools.product(chars, repeat=nletters + 1):\n print (''.join(word))\n\nThat'll print all 15018570 words you're looking for. If you want more/less words just change the MAX_CHARS variable. It will still have just two fors for any number of chars, and you don't have to repeat yourself. And is pretty readable. .\n",
"I'm going to submit my answer as the most readable and least scalable :)\nimport string\nchars = [''] + list(string.lowercase)\n\nstrings = (a+b+c+d for a in chars\n for b in chars\n for c in chars\n for d in chars)\n\nfor string in strings:\n print string\n\nEDIT: Actually, this is incorrect, as it will produce duplicates of all strings of length<4. Removing the empty string from the chars array would just produce 4-char strings. \nNormally I'd delete this answer, but I still kinda like it if you need to generate strings of the same length.\n",
"Write for the programmer first - the computer second.\nIf it's clear and obvious to understand then its correct.\nIf speed matters AND the compiler doesn't optimise it anyway AND if you measure it AND it is the problem - then think of a faster cleverer way!\n",
"I don't think it's a bad thing, provided you understand (and document :-) it. I don't doubt there may be a more pythonic way or clever solution (with lambdas or whatnot) but I've always favored readability over cleverness.\nSince you have to generate all possibilities of 1-, 2-, 3- and 4-character \"words\", this method is as good as any. I'm not sure how long it would take since you're effectively generating (very roughly) 14 million lines of output (but probably every solution would have that problem).\nPre-calculating the common prefixes may provide a speed boost but you'd be better off measuring it to check (always check, never assume):\nchars = string.digits + string.uppercase + string.lowercase\nfor a in chars:\n print a\n for b in chars:\n ab = '%s%s' % (a, b)\n print ab\n for c in chars:\n abc = '%s%s' % (ab, c)\n print abc\n for d in chars:\n print '%s%s' % (abc, d)\n\nEDIT: I actually did some benchmarks (with Windows-Python 2.6.1) - this version takes about 2.25 time units compared to the original 2.84 so it's 26% faster. I think that might warrant its use (again, as long as it's documented clearly what it's trying to achieve).\n",
"@nosklo's and @Triptych's solutions produce different results:\n>>> list(map(''.join, itertools.chain.from_iterable(itertools.product(\"ab\", \n... repeat=r) for r in range(4)))) # @nosklo's \n\n\n['', 'a', 'b', 'aa', 'ab', 'ba', 'bb', 'aaa', 'aab', 'aba', 'abb', 'baa', \n 'bab', 'bba', 'bbb']\n\n>>> ab = ['']+list(\"ab\")\n>>> list(map(''.join, (a+b+c for a in ab for b in ab for c in ab))) \n\n\n['', 'a', 'b', 'a', 'aa', 'ab', 'b', 'ba', 'bb', 'a', 'aa', 'ab', 'aa', \n 'aaa', 'aab', 'ab', 'aba', 'abb', 'b', 'ba', 'bb', 'ba', 'baa', 'bab', \n 'bb', 'bba', 'bbb']\n\nHere's modified @Triptych's solution that produce the same output as the @nosklo's one:\n>>> ab = \"ab\"\n>>> list(map(''.join, itertools.chain([''], ab, (a+b for a in ab for b in ab),\n... (a+b+c for a in ab for b in ab for c in ab))))\n\n\n['', 'a', 'b', 'aa', 'ab', 'ba', 'bb', 'aaa', 'aab', 'aba', 'abb', 'baa', \n 'bab', 'bba', 'bbb']\n\n",
"There are many algorithms for generating every permutation of a set. What you want here is a related problem, but not directly analagous. Suggested Reading\n",
"It doesn't exactly answer the question, but this would return the nth combination for the given maximum length and characters in the alphabet to use:\n#!/usr/bin/python\n\ndef nth_combination(n, maxlen=4, alphabet='abc'):\n \"\"\"\n >>> print ','.join(nth_combination(n, 1, 'abc') for n in range(3))\n a,b,c\n >>> print ','.join(nth_combination(n, 2, 'abc') for n in range(12))\n a,aa,ab,ac,b,ba,bb,bc,c,ca,cb,cc\n >>> import string ; alphabet = string.ascii_letters + string.digits\n >>> print ','.join(nth_combination(n, 4, alphabet) for n in range(16))\n a,aa,aaa,aaaa,aaab,aaac,aaad,aaae,aaaf,aaag,aaah,aaai,aaaj,aaak,aaal,aaam\n >>> print ','.join(nth_combination(n, 4, alphabet)\n ... for n in range(0, 14000000, 10**6))\n a,emiL,iyro,mKz2,qWIF,u8Ri,zk0U,Dxav,HJi9,LVrM,P7Ap,UjJ1,YvSE,2H1h\n \"\"\"\n if maxlen == 1:\n return alphabet[n]\n offset, next_n = divmod(n, 1 + len(alphabet)**(maxlen-1))\n if next_n == 0:\n return alphabet[offset]\n return alphabet[offset] + nth_combination(next_n-1, maxlen-1, alphabet)\n\nif __name__ == '__main__':\n from doctest import testmod\n testmod()\n\nThis of course makes sense only if you need random access to the set of combinations instead of always iterating through them all.\nIf maxlen is high, some speed optimization could be achieved e.g. by getting rid of string concatenation and re-calculating the length of alphabet and maxlen-1 at each level of the recursion. A non-recursive approach might make sense, too.\n"
] | [
16,
6,
4,
3,
2,
1,
1
] | [] | [] | [
"for_loop",
"loops",
"nested_loops",
"python"
] | stackoverflow_0000482146_for_loop_loops_nested_loops_python.txt |
Q:
How to stop a Python script without error messages on the shell?
I want to stop a Python script on seeing an error message.
I don't want any error message on the shell, like exit().
How can I do it?
A:
When you send CTRL+C to a Python script, it raises the KeyboardInterrupt exception, so you can do something like
import sys

try:
... Work goes here ...
except KeyboardInterrupt:
sys.exit(0)
| How to stop a Python script without error messages on the shell? | I want to stop a Python script on seeing an error message.
I don't want any error message on the shell, like exit().
How can I do it?
| [
"When you send CTRL+C to a Python script, it raises the KeyboardInterrupt exception, so you can do something like\ntry:\n ... Work goes here ...\nexcept KeyboardInterrupt:\n sys.exit(0)\n\n"
] | [
9
] | [] | [] | [
"error_handling",
"python"
] | stackoverflow_0000483208_error_handling_python.txt |
Q:
How come my class is behaving like a static class?
I have a module (a single .py file, actually), with a class called HashedDir.
When I import the file and instantiate 2 instances of that class, and then check the objects' fields, they're always the same, even though the two objects should be different.
Eg:
h1 = HashedDir('/path/to/dir')
print h1.getList()['files'] # /path/to/dir
h2 = HashedDir('some/other/path')
print h1.getList()['files'] # some/other/path
print h2.getList()['files'] # some/other/path
Any idea?
This is the class:
from os import walk
from os import path
from hashlib import md5
import re
class HashedDir:
"""
    A list of files with associated md5 hashes generated through a
    recursive walk of the directory tree starting from a provided root
directory. Also stores the dirs in each dir
"""
# {'files': [
# ('/path/to/file1', '52bc309e11259af15e4623c7a0abc28c'),
# ('/path/to/file2', '52bc309e11259af15e4623c7a0abc28c'),
# ('/path/to/dir/file3', '52bc309e11259af15e4623c7a0abc28c')
# ],
# 'dirs': ['/path/to/dir1', '/path/to/dir2']
# }
fileList = {'files': [], 'dirs': []}
ignoreList = []
def __init__(self, rootDir, ignoreList=[]):
"""
ignoreList is a list of regular expressions. If a file or a dir matches
that regular expression, don't count it
"""
self.ignoreList = ignoreList
for dirpath, dirnames, filenames in walk(rootDir):
for fileName in filenames:
completeName = path.join(dirpath,fileName)
hash = md5(open(completeName).read()).hexdigest()
relativePath = self._relativePath(completeName, rootDir)
if not self._toBeIgnored(relativePath):
self.fileList['files'].append((relativePath, hash))
for dirName in dirnames:
completeName = path.join(dirpath, dirName)
relativePath = self._relativePath(completeName, rootDir)
if not self._toBeIgnored(relativePath):
self.fileList['dirs'].append(relativePath)
def _relativePath(self, path, base):
return path.replace(base, '')
def _toBeIgnored(self, path):
for regex in self.ignoreList:
if re.compile(regex).search(path) != None:
return True
return False
def getList(self):
return self.fileList
Thanks in advance
A:
There are two kinds of variables in a class:
class variables, defined at the class level, and common to all instances
instance variables, defined within a class method (usually __init__) and qualified by the instance (usually self.).
Example
class SomeClass( object ):
classVariable = 0
def __init__( self ):
self.instanceVariable= 0
The variable named classVariable is part of the class, common to all instances. Because of the way Python does search, it's available as a member of self.classVariable, as well as SomeClass.classVariable.
The variable named instanceVariable is part of the instance (self.) and is unique to each instance.
Note. There's a third kind, global, but that's not what you're asking about.
A:
Is it fileList you're talking about? You have it as a class variable, to make it an instance variable you need to do:
self.fileList = {'files': [], 'dirs': []}
in your __init__ function.
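In other words, the top of the class could look roughly like this (a sketch showing only the first lines of __init__):

class HashedDir:
    def __init__(self, rootDir, ignoreList=None):
        # fresh per-instance containers instead of shared class attributes
        self.fileList = {'files': [], 'dirs': []}
        self.ignoreList = ignoreList if ignoreList is not None else []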
A:
Things declared in a class block are class attributes, and class attributes are also accessible through the instance. (This principle, in fact, is how methods are bound.) Not only that, but default arguments for a function are only evaluated when the function is defined. So, to give an example illustrating these two points:
class C(object):
list_a = []
def __init__(self, list_b=[]):
self.list_b = list_b
def __str__(self):
return '%r %r' % (self.list_a, self.list_b)
c1 = C()
c2 = C()
c2.list_a = []
c3 = C([])
c1.list_a.append(1)
c1.list_b.append(2)
print c1
print c2
print c3
The output for this is:
[1] [2]
[] [2]
[1] []
c1 and c3 share the same list_a because it's a class attribute; it's not shadowed by an instance attribute like it is on c2. c1 and c2 share the same list_b because there is only one list_b default in __init__; a new list isn't created every time the function is called, but passing in your own new list works.
A:
If you declare your variables outside a class method, inside the body of the class, they will become 'class variables' and be common to all class instances. To get instance variables, declare them inside the __init__ function and bind them to 'self', the reference to the current instance.
A:
As others have pointed out, your problem is that fileList is a class variable which you are mutating.
However its worth noting another potential pitfall in your code that could lead to a similar problem (though it doesn't in your specific example):
def __init__(self, rootDir, ignoreList=[]):
Beware passing mutable parameters (such as this list) as default arguments. The list is only created once (when you're defining the __init__ function). This means that all instances of the class which have been constructed using the default will use the same list.
In your example, the list is never modified, so this will not have any repercussions, but if (as you do for fileList) you append to self.ignoreList, then this would affect all such instances, leading to a similar problem to the one you're seeing.
This is a very common beginner gotcha - to avoid it, it's a good idea to write such code as something like:
def __init__(self, rootDir, ignoreList=None):
if ignoreList is None:
ignoreList = [] # This will create a new empty list for every instance.
A:
It might be useful if you could post a full working (or failing!) example.
If I do what I think is necessary (i.e., wrap this in class HashedDir(object): and set self.fileList = {'files': [], 'dirs': []} inside __init__) then it does seem to work.
Which items are you referring to as self.value? As per the previous post by sykora, you need to distinguish between code that is run for every instance (in __init__) and code that is common to all instances.
| How come my class is behaving like a static class? | I have a module (a single .py file, actually), with a class called HashedDir.
When I import the file and instantiate 2 instances of that class, and then check the objects' fields, they're always the same, even though the two objects should be different.
Eg:
h1 = HashedDir('/path/to/dir')
print h1.getList()['files'] # /path/to/dir
h2 = HashedDir('some/other/path')
print h1.getList()['files'] # some/other/path
print h2.getList()['files'] # some/other/path
Any idea?
This is the class:
from os import walk
from os import path
from hashlib import md5
import re
class HashedDir:
"""
    A list of files with associated md5 hashes generated through a
    recursive walk of the directory tree starting from a provided root
directory. Also stores the dirs in each dir
"""
# {'files': [
# ('/path/to/file1', '52bc309e11259af15e4623c7a0abc28c'),
# ('/path/to/file2', '52bc309e11259af15e4623c7a0abc28c'),
# ('/path/to/dir/file3', '52bc309e11259af15e4623c7a0abc28c')
# ],
# 'dirs': ['/path/to/dir1', '/path/to/dir2']
# }
fileList = {'files': [], 'dirs': []}
ignoreList = []
def __init__(self, rootDir, ignoreList=[]):
"""
ignoreList is a list of regular expressions. If a file or a dir matches
that regular expression, don't count it
"""
self.ignoreList = ignoreList
for dirpath, dirnames, filenames in walk(rootDir):
for fileName in filenames:
completeName = path.join(dirpath,fileName)
hash = md5(open(completeName).read()).hexdigest()
relativePath = self._relativePath(completeName, rootDir)
if not self._toBeIgnored(relativePath):
self.fileList['files'].append((relativePath, hash))
for dirName in dirnames:
completeName = path.join(dirpath, dirName)
relativePath = self._relativePath(completeName, rootDir)
if not self._toBeIgnored(relativePath):
self.fileList['dirs'].append(relativePath)
def _relativePath(self, path, base):
return path.replace(base, '')
def _toBeIgnored(self, path):
for regex in self.ignoreList:
if re.compile(regex).search(path) != None:
return True
return False
def getList(self):
return self.fileList
Thanks in advance
| [
"There are two kinds of variables in a class:\n\nclass variables, defined at the class level, and common to all instances\ninstance variables, defined within a class method (usually __init__) and qualified by the instance (usually self.).\n\nExample\nclass SomeClass( object ):\n classVariable = 0\n def __init__( self ):\n self.instanceVariable= 0\n\nThe variable named classVariable is part of the class, common to all instances. Because of the way Python does search, it's available as a member of self.classVariable, as well as SomeClass.classVariable.\nThe variable named instanceVariable is part of the instance (self.) and is unique to each instance.\nNote. There's a third kind, global, but that's not what you're asking about.\n",
"Is it fileList you're talking about? You have it as a class variable, to make it an instance variable you need to do:\nself.fileList = {'files': [], 'dirs': []}\n\nin you __ init __ function.\n",
"Things declared in a class block are class attributes, and class attributes are also accessible through the instance. (This principle, in fact, is how methods are bound.) Not only that, but default arguments for a function are only evaluated when the function is defined. So, to give an example illustrating these two points:\nclass C(object):\n list_a = []\n def __init__(self, list_b=[]):\n self.list_b = list_b\n\n def __str__(self):\n return '%r %r' % (self.list_a, self.list_b)\n\nc1 = C()\nc2 = C()\nc2.list_a = []\nc3 = C([])\n\nc1.list_a.append(1)\nc1.list_b.append(2)\nprint c1\nprint c2\nprint c3\n\nThe output for this is:\n[1] [2]\n[] [2]\n[1] []\n\nc1 and c3 share the same list_a because it's a class attribute; it's not shadowed by an instance attribute like it is on c2. c1 and c2 share the same list_b because there is only one list_b default in __init__; a new list isn't created every time the function is called, but passing in your own new list works.\n",
"If you declare your variables outside a class method, inside the body of the class, they will become 'class variables' and be common to all class instances. To get instance variables, declare them inside the init function and bind them to 'self', the handler for the current instance.\n",
"As others have pointed out, your problem is that fileList is a class variable which you are mutating.\nHowever its worth noting another potential pitfall in your code that could lead to a similar problem (though it doesn't in your specific example):\ndef __init__(self, rootDir, ignoreList=[]):\n\nBeware passing mutable parameters (such as this list) as default arguments. The list is only created once (when you're defining the __init__ function. This means that all instances of the class which have been constructed using the default will use the same list.\nIn your example, the list is never modified, so this will not have any repercussions, but if (as you do for fileList) you append to self.ignoreList, then this would affect all such instances, leading to a similar problem to the one you're seeing.\nThis is a very common beginner gotcha - to avoid it, it's a good idea to write such code as something like:\ndef __init__(self, rootDir, ignoreList=None):\n if ignoreList is None:\n ignoreList = [] # This will create a new empty list for every instance.\n\n",
"It might be useful if you could post a full working (or failing!) example.\nIf I do what I think is necessary (i.e., wrap this in Class HashedDir(object): and set self.fileList = {'files': [], 'dirs': []} inside init then it does seem to work.\nWhich items are you referring to as self.value? As per the previous post by sykora, you need to distinguish between code that is run for every instance (in init) and code that is common to all instances.\n"
] | [
10,
6,
2,
1,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0000483072_python.txt |
Q:
Performance: Python 3.x vs Python 2.x
On a question of just performance, how does Python 3 compare to Python 2.x?
A:
3.0 is slower than 2.5 on official benchmarks. From "What’s New in Python 3.0":
The net result of the 3.0
generalizations is that Python 3.0
runs the pystone benchmark around 10%
slower than Python 2.5. Most likely
the biggest cause is the removal of
special-casing for small integers.
There’s room for improvement, but it
will happen after 3.0 is released!
A:
I'd say any difference will be below trivial. For example, looping over a list will be the exact same.
The idea behind Python 3 is to clean up the language syntax itself - remove ambiguous stuff like except Exception1, Exception2, and clean up the standard modules (no urllib, urllib2, httplib etc).
There really isn't much you can do to improve its performance, although I imagine stuff like the garbage collection and memory management code will have had some tweaks, but it's not going to be a "wow, my database statistic generation code completes in half the time!" improvement - that's something you get by improving the code, rather than the language!
Really, performance of the language is irrelevant - all interpreted languages basically function at the same speed.
Why I find Python "faster" is all the built-in modules, and the nice-to-write syntax - something that has been improved in Python 3, so I guess in those terms, yes, python3's performance is better than python2.x..
A:
The IO library has been completely redesigned, and the new implementation is in pure Python. Whilst this is a functional improvement, it is at present much slower. Work is afoot to rewrite the bulk of the new system in C. For details see these bug reports.
A:
I think ultimately it is too early to make that kind of comparison just yet. Wait until it is out of beta before benchmarking it. The interpreter will probably be polished enormously before the release, but overall I think for most uses the performance would be comparable - and if you are running a really speed-conscious app, is Python really the right language to be using?
A:
Unless there are plans for a new VM of some kind (and I haven't heard of any such plans), there is every reason to believe that in the long run the performance of Py3k will, at least asymptotically, equal that of 2.5.
It may take a few months, but it will eventually happen, as nothing in the new features of Py3k is inherently less performant.
To conclude, I don't think there's cause to worry about it. Nor to hope for a major improvement of some kind.
A:
I don't know if it is faster now, but I have to expect that it eventually will be, because that is where new performance work will happen and not all of that will be backported.
| Performance: Python 3.x vs Python 2.x | On a question of just performance, how does Python 3 compare to Python 2.x?
| [
"3.0 is slower than 2.5 on official benchmarks. From \"What’s New in Python 3.0\":\n\nThe net result of the 3.0\n generalizations is that Python 3.0\n runs the pystone benchmark around 10%\n slower than Python 2.5. Most likely\n the biggest cause is the removal of\n special-casing for small integers.\n There’s room for improvement, but it\n will happen after 3.0 is released!\n\n",
"I'd say any difference will be below trivial. For example, looping over a list will be the exact same.\nThe idea behind Python 3 is to clean up the language syntax itself - remove ambigious stuff like except Exception1, Exception2, cleanup the standard modules (no urllib, urllib2, httplib etc).\nThere really isn't much you can do to improve it's performance, although I imagine stuff like the garbage collection and memory management code will have had some tweaks, but it's not going to be a \"wow, my database statistic generation code completes in half the time!\" improvement - that's something you get by improving the code, rather than the language!\nReally, performance of the language is irrelevant - all interpreted languages basically function at the same speed.\nWhy I find Python \"faster\" is all the built-in moudles, and the nice-to-write syntax - something that has been improved in Python3, so I guess in those terms, yes, python3's performance is better then python2.x..\n",
"The IO library has been completely redesigned, and the new implementation is in pure Python. Whilst this is a functional improvement, it is at present much slower. Work is afoot to rewrite the bulk of the new system in C. For details see these bug reports.\n",
"I think ultimately it is too early to make that kind of comparison just yet. Wait until it is out of beta before benchmarking it. The interpreter will probably be polished enormously before the release but overall i think for most uses the performance would be comparable and if you are running a really speed conscious app is python really the right language to be using?\n",
"Unless there are plans for a new VM of some kind (and I haven't heard of any such plans), there is all the reason to believe that in the long run the performance of Py3K will, at least asymptotically, equal that of 2.5\nIt may take a few months, but will eventually happen, as nothing in the new features of Py3k is inherently less performant. \nTo conclude, I don't think there's place to worry about it. Neither to hope for a major improvement of some kind.\n",
"I don't if it faster now, but I have to expect that it eventually will be because that is where new performance work will happen and not all of that will be backported.\n"
] | [
29,
7,
5,
4,
3,
0
] | [] | [] | [
"performance",
"python",
"python_2.x",
"python_3.x"
] | stackoverflow_0000170426_performance_python_python_2.x_python_3.x.txt |
Q:
calling methods on an instance with getattr [ python ]
I was trying to write some code that would check if an item has some attributes, and to call them. I tried to do that with getattr, but the modifications wouldn't be permanent. I made a "dummy" class to check upon this.
Here is the code I used for the class:
class X:
def __init__(self):
self.value = 90
def __get(self):
return self.value
def __set(self,value):
self.value = value
value = property(__get,__set)
x = X()
print x.value # this would output 90
getattr(x,"value=",99) # when called from an interactive python interpreter this would output 99
print x.value # this is still 90 ( how could I make this be 99 ? )
Thanks !
A:
You need to do something like
class X:
def __init__(self):
self._value = 90
def _get(self):
return self._value
def _set(self, value):
self._value = value
value = property(_get, _set)
Note that the "internal" variable has to have a different name than the property (I used _value).
Then,
setattr(x, 'value', 99)
should work.
A:
getattr(x,"value=",99)
returns 99 because x has no attribute "value=" (note the equals sign), so getattr returns the supplied default (99).
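To actually assign through a name held in a string, the counterpart function is setattr (continuing the question's example):

setattr(x, "value", 99)
print x.value  # now prints 99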
| calling methods on an instance with getattr [ python ] | I was trying to write some code that would check if an item has some attributes, and to call them. I tried to do that with getattr, but the modifications wouldn't be permanent. I made a "dummy" class to check upon this.
Here is the code I used for the class:
class X:
def __init__(self):
self.value = 90
def __get(self):
return self.value
def __set(self,value):
self.value = value
value = property(__get,__set)
x = X()
print x.value # this would output 90
getattr(x,"value=",99) # when called from an interactive python interpreter this would output 99
print x.value # this is still 90 ( how could I make this be 99 ? )
Thanks !
| [
"You need to do something like\nclass X: \n def __init__(self):\n self._value = 90 \n\n def _get(self): \n return self._value\n\n def _set(self, value):\n self._value = value \n\n value = property(_get, _set)\n\nNote that the \"internal\" variable has to have a different name than the property (I used _value).\nThen,\nsetattr(x, 'value', 99)\n\nshould work.\n",
"getattr(x,\"value=\",99)\n\nreturns 99 because x has no attribute \"value=\" (note the equals sign), so getattr returns the supplied default (99).\n"
] | [
8,
2
] | [] | [] | [
"attributes",
"dynamic",
"properties",
"python"
] | stackoverflow_0000484220_attributes_dynamic_properties_python.txt |
Q:
Problem with encoding in Django templates
I'm having problems using {% ifequal s1 "some text" %} to compare strings with extended characters in Django templates. When string s1 contains non-ASCII characters (code points >127), I get exceptions in the template rendering. What am I doing wrong? I'm using UTF-8 coding throughout the rest of the application in both the data, templates and Python code without any problems.
views.py
def test(request):
return render_to_response("test.html", {
"s1": "dados",
"s2": "aprovação",
}
)
test.html
s1={{s1}}<br>
s2={{s2}}<br>
{% ifequal s1 "dados" %}
s1="dados" is true
{% endifequal %}
{% ifequal s1 "aprovação" %}
s1="aprovação" is true
{% endifequal %}
{% comment %}
The following two comparions cause the following exception:
Caught an exception while rendering: 'ascii' codec can't decode byte 0xc3 in position 6: ordinal not in range(128)
{% ifequal s2 "dados" %}
s2="dados" is true
{% endifequal %}
{% ifequal s2 "aprovação" %}
s2="aprovação" is true
{% endifequal %}
{% endcomment %}
{% ifequal s2 u"dados" %}
s2="dados" is true
{% endifequal %}
{% comment %}
The following comparison causes the following exception:
Caught an exception while rendering: 'ascii' codec can't encode characters in position 8-9: ordinal not in range(128)
{% ifequal s2 u"aprovação" %}
s2="aprovação" is true
{% endifequal %}
{% endcomment %}
Output
s1=dados
s2=aprovação
s1="dados" is true
A:
Sometimes there's nothing like describing a problem to someone else to help you solve it. :) I should have marked the Python strings as Unicode like this and everything works now:
def test(request):
return render_to_response("test.html", {
"s1": u"dados",
"s2": u"aprovação",
}
)
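One related detail, as an assumption since the full source file isn't shown: for Python 2 to even parse u"..." literals containing non-ASCII characters, the module needs an encoding declaration at the top:

# -*- coding: utf-8 -*-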
| Problem with encoding in Django templates | I'm having problems using {% ifequal s1 "some text" %} to compare strings with extended characters in Django templates. When string s1 contains non-ASCII characters (code points >127), I get exceptions in the template rendering. What am I doing wrong? I'm using UTF-8 coding throughout the rest of the application in both the data, templates and Python code without any problems.
views.py
def test(request):
return render_to_response("test.html", {
"s1": "dados",
"s2": "aprovação",
}
)
test.html
s1={{s1}}<br>
s2={{s2}}<br>
{% ifequal s1 "dados" %}
s1="dados" is true
{% endifequal %}
{% ifequal s1 "aprovação" %}
s1="aprovação" is true
{% endifequal %}
{% comment %}
The following two comparions cause the following exception:
Caught an exception while rendering: 'ascii' codec can't decode byte 0xc3 in position 6: ordinal not in range(128)
{% ifequal s2 "dados" %}
s2="dados" is true
{% endifequal %}
{% ifequal s2 "aprovação" %}
s2="aprovação" is true
{% endifequal %}
{% endcomment %}
{% ifequal s2 u"dados" %}
s2="dados" is true
{% endifequal %}
{% comment %}
The following comparison causes the following exception:
Caught an exception while rendering: 'ascii' codec can't encode characters in position 8-9: ordinal not in range(128)
{% ifequal s2 u"aprovação" %}
s2="aprovação" is true
{% endifequal %}
{% endcomment %}
Output
s1=dados
s2=aprovação
s1="dados" is true
| [
"Sometimes there's nothing like describing a problem to someone else to help you solve it. :) I should have marked the Python strings as Unicode like this and everything works now:\ndef test(request):\n return render_to_response(\"test.html\", {\n \"s1\": u\"dados\",\n \"s2\": u\"aprovação\",\n }\n )\n\n"
] | [
8
] | [] | [] | [
"django",
"django_templates",
"internationalization",
"python",
"unicode"
] | stackoverflow_0000484338_django_django_templates_internationalization_python_unicode.txt |
Q:
Running unexported .dll functions with python
This may seem like a weird question, but I would like to know how I can run a function in a .dll from a memory 'signature'. I don't understand much about how it actually works, but I needed it badly. It's a way of running unexported functions from within a .dll, if you know the memory signature and address of it.
For example, I have these:
respawn_f "_ZN9CCSPlayer12RoundRespawnEv"
respawn_sig "568BF18B06FF90B80400008B86E80D00"
respawn_mask "xxxxx?xxx??xxxx?"
And using some pretty nifty C++ code you can use this to run functions from within a .dll.
Here is a well explained article on it:
http://wiki.alliedmods.net/Signature_Scanning
So, is it possible using Ctypes or any other way to do this inside python?
A:
If you can already run them using C++ then you can try using SWIG to generate python wrappers for the C++ code you've written making it callable from python.
http://www.swig.org/
Some caveats that I've found using SWIG:
Swig looks up types based on a string value. For example
an integer type in Python (int) will look to make sure
that the cpp type is "int" otherwise swig will complain
about type mismatches. There is no automatic conversion.
Swig copies source code verbatim therefore even objects in the same namespace
will need to be fully qualified so that the cxx file will compile properly.
Hope that helps.
A:
You said you were trying to call a function that was not exported; as far as I know, that's not possible from Python. However, your problem seems to be merely that the name is mangled.
You can invoke an arbitrary export using ctypes. Since the name is mangled, and isn't a valid Python identifier, you can use getattr().
Another approach if you have the right information is to find the export by ordinal, which you'd have to do if there was no name exported at all. One way to get the ordinal would be using dumpbin.exe, included in many Windows compiled languages. It's actually a front-end to the linker, so if you have the MS LinK.exe, you can also use that with appropriate commandline switches.
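For the by-ordinal case, ctypes lets you index the loaded library object directly. A minimal sketch, with a made-up DLL name and ordinal purely for illustration:
import ctypes

dll = ctypes.WinDLL('somelib.dll')  # hypothetical DLL name
func_by_ordinal = dll[5]            # looks up export number 5 by ordinal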
To get the function reference (which is a "function-pointer" object bound to the address of it), you can use something like:
import ctypes
func = getattr(ctypes.windll.msvcrt, "@@myfunc")
retval = func(None)
Naturally, you'd replace the 'msvcrt' with the dll you specifically want to call.
What I don't show here is how to unmangle the name to derive the calling signature, and thus the arguments necessary. Doing that would require a demangler, and those are very specific to the brand AND VERSION of C++ compiler used to create the DLL.
There is a certain amount of error checking if the function is stdcall, so you can sometimes fiddle with things till you get them right. But if the function is cdecl, then there's no way to automatically check. Likewise you have to remember to include the extra this parameter if appropriate.
| Running unexported .dll functions with python | This may seem like a weird question, but I would like to know how I can run a function in a .dll from a memory 'signature'. I don't understand much about how it actually works, but I needed it badly. It's a way of running unexported functions from within a .dll, if you know the memory signature and address of it.
For example, I have these:
respawn_f "_ZN9CCSPlayer12RoundRespawnEv"
respawn_sig "568BF18B06FF90B80400008B86E80D00"
respawn_mask "xxxxx?xxx??xxxx?"
And using some pretty nifty C++ code you can use this to run functions from within a .dll.
Here is a well explained article on it:
http://wiki.alliedmods.net/Signature_Scanning
So, is it possible using Ctypes or any other way to do this inside python?
| [
"If you can already run them using C++ then you can try using SWIG to generate python wrappers for the C++ code you've written making it callable from python.\nhttp://www.swig.org/\nSome caveats that I've found using SWIG:\nSwig looks up types based on a string value. For example\nan integer type in Python (int) will look to make sure\nthat the cpp type is \"int\" otherwise swig will complain\nabout type mismatches. There is no automatic conversion.\nSwig copies source code verbatim therefore even objects in the same namespace\nwill need to be fully qualified so that the cxx file will compile properly.\nHope that helps.\n",
"You said you were trying to call a function that was not exported; as far as I know, that's not possible from Python. However, your problem seems to be merely that the name is mangled.\nYou can invoke an arbitrary export using ctypes. Since the name is mangled, and isn't a valid Python identifier, you can use getattr().\nAnother approach if you have the right information is to find the export by ordinal, which you'd have to do if there was no name exported at all. One way to get the ordinal would be using dumpbin.exe, included in many Windows compiled languages. It's actually a front-end to the linker, so if you have the MS LinK.exe, you can also use that with appropriate commandline switches.\nTo get the function reference (which is a \"function-pointer\" object bound to the address of it), you can use something like:\nimport ctypes\nfunc = getattr(ctypes.windll.msvcrt, \"@@myfunc\")\nretval = func(None)\nNaturally, you'd replace the 'msvcrt' with the dll you specifically want to call.\nWhat I don't show here is how to unmangle the name to derive the calling signature, and thus the arguments necessary. Doing that would require a demangler, and those are very specific to the brand AND VERSION of C++ compiler used to create the DLL.\nThere is a certain amount of error checking if the function is stdcall, so you can sometimes fiddle with things till you get them right. But if the function is cdecl, then there's no way to automatically check. Likewise you have to remember to include the extra this parameter if appropriate.\n"
] | [
2,
0
] | [] | [] | [
"ctypes",
"memory",
"python"
] | stackoverflow_0000421223_ctypes_memory_python.txt |
Q:
Extracting info from large structured text files
I need to read some large files (from 50k to 100k lines), structured in groups separated by empty lines. Each group starts with the same pattern "No.999999999 dd/mm/yyyy ZZZ". Here's some sample data.
No.813829461 16/09/1987 270
Tit.SUZANO PAPEL E CELULOSE S.A. (BR/BA)
C.N.P.J./C.I.C./N INPI : 16404287000155
Procurador: MARCELLO DO NASCIMENTO
No.815326777 28/12/1989 351
Tit.SIGLA SISTEMA GLOBO DE GRAVACOES AUDIO VISUAIS LTDA (BR/RJ)
C.N.P.J./C.I.C./NºINPI : 34162651000108
Apres.: Nominativa ; Nat.: De Produto
Marca: TRIO TROPICAL
Clas.Prod/Serv: 09.40
*DEFERIDO CONFORME RESOLUÇÃO 123 DE 06/01/2006, PUBLICADA NA RPI 1829, DE 24/01/2006.
Procurador: WALDEMAR RODRIGUES PEDRA
No.900148764 11/01/2007 LD3
Tit.TIARA BOLSAS E CALÇADOS LTDA
Procurador: Marcia Ferreira Gomes
*Escritório: Marcas Marcantes e Patentes Ltda
*Exigência Formal não respondida Satisfatoriamente, Pedido de Registro de Marca considerado inexistente, de acordo com Art. 157 da LPI
*Protocolo da Petição de cumprimento de Exigência Formal: 810080140197
I wrote some code that's parsing it accordingly. Is there anything I can improve, for readability or performance? Here's what I've come up with so far:
import re, pprint
class Despacho(object):
"""
Class to parse each line, applying the regexp and storing the results
for future use
"""
regexp = {
re.compile(r'No.([\d]{9}) ([\d]{2}/[\d]{2}/[\d]{4}) (.*)'): lambda self: self._processo,
re.compile(r'Tit.(.*)'): lambda self: self._titular,
re.compile(r'Procurador: (.*)'): lambda self: self._procurador,
re.compile(r'C.N.P.J./C.I.C./N INPI :(.*)'): lambda self: self._documento,
re.compile(r'Apres.: (.*) ; Nat.: (.*)'): lambda self: self._apresentacao,
re.compile(r'Marca: (.*)'): lambda self: self._marca,
re.compile(r'Clas.Prod/Serv: (.*)'): lambda self: self._classe,
re.compile(r'\*(.*)'): lambda self: self._complemento,
}
def __init__(self):
"""
'complemento' is the only field that can be multiple in a single registry
"""
self.complemento = []
def _processo(self, matches):
self.processo, self.data, self.despacho = matches.groups()
def _titular(self, matches):
self.titular = matches.group(1)
def _procurador(self, matches):
self.procurador = matches.group(1)
def _documento(self, matches):
self.documento = matches.group(1)
def _apresentacao(self, matches):
self.apresentacao, self.natureza = matches.groups()
def _marca(self, matches):
self.marca = matches.group(1)
def _classe(self, matches):
self.classe = matches.group(1)
def _complemento(self, matches):
self.complemento.append(matches.group(1))
def read(self, line):
for pattern in Despacho.regexp:
m = pattern.match(line)
if m:
Despacho.regexp[pattern](self)(m)
def process(rpi):
"""
read data and process each group
"""
rpi = (line for line in rpi)
group = False
for line in rpi:
if line.startswith('No.'):
group = True
d = Despacho()
if not line.strip() and group: # empty line - end of block
yield d
group = False
d.read(line)
arquivo = open('rm1972.txt') # file to process
for desp in process(arquivo):
pprint.pprint(desp.__dict__)
print('--------------')
A:
That is pretty good. Below some suggestions, let me know if you like'em:
import re
import pprint
import sys
class Despacho(object):
"""
Class to parse each line, applying the regexp and storing the results
for future use
"""
#used a dict with the keys instead of functions.
regexp = {
('processo',
'data',
'despacho'): re.compile(r'No.([\d]{9}) ([\d]{2}/[\d]{2}/[\d]{4}) (.*)'),
('titular',): re.compile(r'Tit.(.*)'),
('procurador',): re.compile(r'Procurador: (.*)'),
('documento',): re.compile(r'C.N.P.J./C.I.C./N INPI :(.*)'),
('apresentacao',
'natureza'): re.compile(r'Apres.: (.*) ; Nat.: (.*)'),
('marca',): re.compile(r'Marca: (.*)'),
('classe',): re.compile(r'Clas.Prod/Serv: (.*)'),
('complemento',): re.compile(r'\*(.*)'),
}
def __init__(self):
"""
'complemento' is the only field that can be multiple in a single registry
"""
self.complemento = []
def read(self, line):
for attrs, pattern in Despacho.regexp.iteritems():
m = pattern.match(line)
if m:
for groupn, attr in enumerate(attrs):
# special case complemento:
if attr == 'complemento':
self.complemento.append(m.group(groupn + 1))
else:
# set the attribute on the object
setattr(self, attr, m.group(groupn + 1))
def __repr__(self):
# defines object printed representation
d = {}
for attrs in self.regexp:
for attr in attrs:
d[attr] = getattr(self, attr, None)
return pprint.pformat(d)
def process(rpi):
"""
read data and process each group
"""
#Useless line, since you're doing a for anyway
#rpi = (line for line in rpi)
group = False
for line in rpi:
if line.startswith('No.'):
group = True
d = Despacho()
if not line.strip() and group: # empty line - end of block
yield d
group = False
d.read(line)
def main():
arquivo = open('rm1972.txt') # file to process
for desp in process(arquivo):
print desp # can print directly here.
print('-' * 20)
return 0
if __name__ == '__main__':
main()
A:
It would be easier to help if you had a specific concern. Performance will depend greatly on the efficiency of the particular regex engine you are using. 100K lines in a single file doesn't sound that big, but again it all depends on your environment.
I use Expresso in my .NET development to test expressions for accuracy and performance.
A Google search turned up Kodos, a GUI Python regex authoring tool.
A:
It looks good overall, but why do you have the line:
rpi = (line for line in rpi)
You can already iterate over the file object without this intermediate step.
A:
I wouldn't use regex here. If you know that your lines will start with fixed strings, why not check those strings and write logic around it?
record = {}
for line in open(filename):
    if line.startswith('No.'):
        curr_key = 'No'
        record['No'] = line[4:]
        # ... handle the other known prefixes the same way ...
    elif line.strip() == '':
        # end of block: store the finished record and start a new one
        record = {}
    else:
        # continuation: the previous field overflows onto this line
        record[curr_key] = record[curr_key] + "\n" + line
Consider the above as just pseudocode.
A:
Another version with only one combined regular expression:
#!/usr/bin/python
import re
import pprint
import sys
class Despacho(object):
"""
Class to parse each line, applying the regexp and storing the results
for future use
"""
#used a dict with the keys instead of functions.
regexp = re.compile(
r'No.(?P<processo>[\d]{9}) (?P<data>[\d]{2}/[\d]{2}/[\d]{4}) (?P<despacho>.*)'
r'|Tit.(?P<titular>.*)'
r'|Procurador: (?P<procurador>.*)'
r'|C.N.P.J./C.I.C./N INPI :(?P<documento>.*)'
r'|Apres.: (?P<apresentacao>.*) ; Nat.: (?P<natureza>.*)'
r'|Marca: (?P<marca>.*)'
r'|Clas.Prod/Serv: (?P<classe>.*)'
r'|\*(?P<complemento>.*)')
simplefields = ('processo', 'data', 'despacho', 'titular', 'procurador',
'documento', 'apresentacao', 'natureza', 'marca', 'classe')
def __init__(self):
"""
'complemento' is the only field that can be multiple in a single
registry
"""
self.__dict__ = dict.fromkeys(self.simplefields)
self.complemento = []
def parse(self, line):
m = self.regexp.match(line)
if m:
gd = dict((k, v) for k, v in m.groupdict().items() if v)
if 'complemento' in gd:
self.complemento.append(gd['complemento'])
else:
self.__dict__.update(gd)
def __repr__(self):
# defines object printed representation
return pprint.pformat(self.__dict__)
def process(rpi):
"""
read data and process each group
"""
d = None
for line in rpi:
if line.startswith('No.'):
if d:
yield d
d = Despacho()
d.parse(line)
yield d
def main():
arquivo = file('rm1972.txt') # file to process
for desp in process(arquivo):
print desp # can print directly here.
print '-' * 20
if __name__ == '__main__':
main()
| Extracting info from large structured text files | I need to read some large files (from 50k to 100k lines), structured in groups separated by empty lines. Each group starts with the same pattern "No.999999999 dd/mm/yyyy ZZZ". Here's some sample data.
No.813829461 16/09/1987 270
Tit.SUZANO PAPEL E CELULOSE S.A. (BR/BA)
C.N.P.J./C.I.C./N INPI : 16404287000155
Procurador: MARCELLO DO NASCIMENTO
No.815326777 28/12/1989 351
Tit.SIGLA SISTEMA GLOBO DE GRAVACOES AUDIO VISUAIS LTDA (BR/RJ)
C.N.P.J./C.I.C./NºINPI : 34162651000108
Apres.: Nominativa ; Nat.: De Produto
Marca: TRIO TROPICAL
Clas.Prod/Serv: 09.40
*DEFERIDO CONFORME RESOLUÇÃO 123 DE 06/01/2006, PUBLICADA NA RPI 1829, DE 24/01/2006.
Procurador: WALDEMAR RODRIGUES PEDRA
No.900148764 11/01/2007 LD3
Tit.TIARA BOLSAS E CALÇADOS LTDA
Procurador: Marcia Ferreira Gomes
*Escritório: Marcas Marcantes e Patentes Ltda
*Exigência Formal não respondida Satisfatoriamente, Pedido de Registro de Marca considerado inexistente, de acordo com Art. 157 da LPI
*Protocolo da Petição de cumprimento de Exigência Formal: 810080140197
I wrote some code that's parsing it accordingly. Is there anything I can improve, for readability or performance? Here's what I've come up with so far:
import re, pprint
class Despacho(object):
"""
Class to parse each line, applying the regexp and storing the results
for future use
"""
regexp = {
re.compile(r'No.([\d]{9}) ([\d]{2}/[\d]{2}/[\d]{4}) (.*)'): lambda self: self._processo,
re.compile(r'Tit.(.*)'): lambda self: self._titular,
re.compile(r'Procurador: (.*)'): lambda self: self._procurador,
re.compile(r'C.N.P.J./C.I.C./N INPI :(.*)'): lambda self: self._documento,
re.compile(r'Apres.: (.*) ; Nat.: (.*)'): lambda self: self._apresentacao,
re.compile(r'Marca: (.*)'): lambda self: self._marca,
re.compile(r'Clas.Prod/Serv: (.*)'): lambda self: self._classe,
re.compile(r'\*(.*)'): lambda self: self._complemento,
}
def __init__(self):
"""
'complemento' is the only field that can be multiple in a single registry
"""
self.complemento = []
def _processo(self, matches):
self.processo, self.data, self.despacho = matches.groups()
def _titular(self, matches):
self.titular = matches.group(1)
def _procurador(self, matches):
self.procurador = matches.group(1)
def _documento(self, matches):
self.documento = matches.group(1)
def _apresentacao(self, matches):
self.apresentacao, self.natureza = matches.groups()
def _marca(self, matches):
self.marca = matches.group(1)
def _classe(self, matches):
self.classe = matches.group(1)
def _complemento(self, matches):
self.complemento.append(matches.group(1))
def read(self, line):
for pattern in Despacho.regexp:
m = pattern.match(line)
if m:
Despacho.regexp[pattern](self)(m)
def process(rpi):
"""
read data and process each group
"""
rpi = (line for line in rpi)
group = False
for line in rpi:
if line.startswith('No.'):
group = True
d = Despacho()
if not line.strip() and group: # empty line - end of block
yield d
group = False
d.read(line)
arquivo = open('rm1972.txt') # file to process
for desp in process(arquivo):
pprint.pprint(desp.__dict__)
print('--------------')
| [
"That is pretty good. Below some suggestions, let me know if you like'em:\nimport re\nimport pprint\nimport sys\n\nclass Despacho(object):\n \"\"\"\n Class to parse each line, applying the regexp and storing the results\n for future use\n \"\"\"\n #used a dict with the keys instead of functions.\n regexp = {\n ('processo', \n 'data', \n 'despacho'): re.compile(r'No.([\\d]{9}) ([\\d]{2}/[\\d]{2}/[\\d]{4}) (.*)'),\n ('titular',): re.compile(r'Tit.(.*)'),\n ('procurador',): re.compile(r'Procurador: (.*)'),\n ('documento',): re.compile(r'C.N.P.J./C.I.C./N INPI :(.*)'),\n ('apresentacao',\n 'natureza'): re.compile(r'Apres.: (.*) ; Nat.: (.*)'),\n ('marca',): re.compile(r'Marca: (.*)'),\n ('classe',): re.compile(r'Clas.Prod/Serv: (.*)'),\n ('complemento',): re.compile(r'\\*(.*)'),\n }\n\n def __init__(self):\n \"\"\"\n 'complemento' is the only field that can be multiple in a single registry\n \"\"\"\n self.complemento = []\n\n\n def read(self, line):\n for attrs, pattern in Despacho.regexp.iteritems():\n m = pattern.match(line)\n if m:\n for groupn, attr in enumerate(attrs):\n # special case complemento:\n if attr == 'complemento':\n self.complemento.append(m.group(groupn + 1))\n else:\n # set the attribute on the object\n setattr(self, attr, m.group(groupn + 1))\n\n def __repr__(self):\n # defines object printed representation\n d = {}\n for attrs in self.regexp:\n for attr in attrs:\n d[attr] = getattr(self, attr, None)\n return pprint.pformat(d)\n\ndef process(rpi):\n \"\"\"\n read data and process each group\n \"\"\"\n #Useless line, since you're doing a for anyway\n #rpi = (line for line in rpi)\n group = False\n\n for line in rpi:\n if line.startswith('No.'):\n group = True\n d = Despacho() \n\n if not line.strip() and group: # empty line - end of block\n yield d\n group = False\n\n d.read(line)\n\ndef main():\n arquivo = open('rm1972.txt') # file to process\n for desp in process(arquivo):\n print desp # can print directly here.\n print('-' * 20)\n return 0\n\nif __name__ == '__main__':\n main()\n\n",
"It would be easier to help if you had a specific concern. Performance will depend greatly on the efficiency of the particular regex engine you are using. 100K lines in a single file doesn't sound that big, but again it all depends on your environment.\nI use Expresso in my .NET development to test expressions for accuracy and performance.\n A Google search turned up Kodos, a GUI Python regex authoring tool.\n",
"It looks good overall, but why do you have the line:\nrpi = (line for line in rpi)\n\nYou can already iterate over the file object without this intermediate step.\n",
"I wouldn't use regex here. If you know that your lines will be starting with fixed strings, why not check those strings and write a logic around it?\nfor line in open(file):\n if line[0:3]=='No.':\n currIndex='No'\n map['No']=line[4:]\n ....\n ...\n else if line.strip()=='':\n //store the record in the map and clear the map\n else:\n //append line to the last index in map.. this is when the record overflows to the next line.\n Map[currIndex]=Map[currIndex]+\"\\n\"+line \n\nConsider the above code as just the pseudocode.\n",
"Another version with only one combined regular expression:\n#!/usr/bin/python\n\nimport re\nimport pprint\nimport sys\n\nclass Despacho(object):\n \"\"\"\n Class to parse each line, applying the regexp and storing the results\n for future use\n \"\"\"\n #used a dict with the keys instead of functions.\n regexp = re.compile(\n r'No.(?P<processo>[\\d]{9}) (?P<data>[\\d]{2}/[\\d]{2}/[\\d]{4}) (?P<despacho>.*)'\n r'|Tit.(?P<titular>.*)'\n r'|Procurador: (?P<procurador>.*)'\n r'|C.N.P.J./C.I.C./N INPI :(?P<documento>.*)'\n r'|Apres.: (?P<apresentacao>.*) ; Nat.: (?P<natureza>.*)'\n r'|Marca: (?P<marca>.*)'\n r'|Clas.Prod/Serv: (?P<classe>.*)'\n r'|\\*(?P<complemento>.*)')\n\n simplefields = ('processo', 'data', 'despacho', 'titular', 'procurador',\n 'documento', 'apresentacao', 'natureza', 'marca', 'classe')\n\n def __init__(self):\n \"\"\"\n 'complemento' is the only field that can be multiple in a single\n registry\n \"\"\"\n self.__dict__ = dict.fromkeys(self.simplefields)\n self.complemento = []\n\n def parse(self, line):\n m = self.regexp.match(line)\n if m:\n gd = dict((k, v) for k, v in m.groupdict().items() if v)\n if 'complemento' in gd:\n self.complemento.append(gd['complemento'])\n else:\n self.__dict__.update(gd)\n\n def __repr__(self):\n # defines object printed representation\n return pprint.pformat(self.__dict__)\n\ndef process(rpi):\n \"\"\"\n read data and process each group\n \"\"\"\n d = None\n\n for line in rpi:\n if line.startswith('No.'):\n if d:\n yield d\n d = Despacho()\n d.parse(line)\n yield d\n\ndef main():\n arquivo = file('rm1972.txt') # file to process\n for desp in process(arquivo):\n print desp # can print directly here.\n print '-' * 20\n\nif __name__ == '__main__':\n main()\n\n"
] | [
2,
1,
0,
0,
0
] | [] | [] | [
"python",
"text_processing"
] | stackoverflow_0000481862_python_text_processing.txt |
Q:
Parsing fixed-format data embedded in HTML in python
I am using google's appengine api
from google.appengine.api import urlfetch
to fetch a webpage. The result of
result = urlfetch.fetch("http://www.example.com/index.html")
is a string of the html content (in result.content). The problem is the data that I want to parse is not really in HTML form, so I don't think using a python HTML parser will work for me. I need to parse all of the plain text in the body of the html document. The only problem is that urlfetch returns a single string of the entire HTML document, removing all newlines and extra spaces.
EDIT:
Okay, I tried fetching a different URL and apparently urlfetch does not strip the newlines, it was the original webpage I was trying to parse that served the HTML file that way...
END EDIT
If the document is something like this:
<html><head></head><body>
AAA 123 888 2008-10-30 ABC
BBB 987 332 2009-01-02 JSE
...
A4A 288 AAA
</body></html>
result.content will be this, after urlfetch fetches it:
'<html><head></head><body>AAA 123 888 2008-10-30 ABCBBB 987 2009-01-02 JSE...A4A 288 AAA</body></html>'
Using an HTML parser will not help me with the data between the body tags, so I was going to use regular expressions to parse my data, but as you can see the last part of one line gets combined with the first part of the next line, and I don't know how to split it. I tried
result.content.split('\n')
and
result.content.split('\r')
but the resulting list was all just 1 element. I don't see any options in google's urlfetch function to not remove newlines.
Any ideas how I can parse this data? Maybe I need to fetch it differently?
Thanks in advance!
A:
Only suggestion I can think of is to parse it as if it has fixed width columns. Newlines are not taken into consideration for HTML.
If you have control of the source data, put it into a text file rather than HTML.
A:
I understand that the format of the document is the one you have posted. In that case, I agree that a parser like Beautiful Soup may not be a good solution.
I assume that you are already getting the interesting data (between the BODY tags) with a regular expression like
import re
data = re.findall('<body>([^\<]*)</body>', result)[0]
then, it should be as easy as:
start = 0
end = 5
while (end<len(data)):
print data[start:end]
start = end+1
end = end+5
print data[start:]
(note: I did not check this code against boundary cases, and I do expect it to fail. It is only here to show the generic idea)
A:
Once you have the body text as a single, long string, you can break it up as follows.
This presumes that each record is 26 characters.
body= "AAA 123 888 2008-10-30 ABCBBB 987 2009-01-02 JSE...A4A 288 AAA"
for i in range(0,len(body),26):
line= body[i:i+26]
# parse the line
A:
EDIT: Reading comprehension is a desirable thing. I missed the bit about the lines being run together with no separator between them, which would kinda be the whole point of this, wouldn't it? So, nevermind my answer, it's not actually relevant.
If you know that each line is 5 space-separated columns, then (once you've stripped out the html) you could do something like (untested):
def generate_lines(datastring):
while datastring:
splitresult = datastring.split(' ', 5)
        if len(splitresult) > 5:  # a sixth piece means more data remains
datastring = splitresult[5]
else:
datastring = None
yield splitresult[:5]
for line in generate_lines(data):
process_data_line(line)
Of course, you can change the split character and number of columns as needed (possibly even passing them into the generator function as additional parameters), and add error handling as appropriate.
A:
Further suggestions for splitting the string s into 26-character blocks:
As a list:
>>> [s[x:x+26] for x in range(0, len(s), 26)]
['AAA 123 888 2008-10-30 ABC',
'BBB 987 2009-01-02 JSE',
'A4A 288 AAA']
As a generator:
>>> for line in (s[x:x+26] for x in range(0, len(s), 26)): print line
AAA 123 888 2008-10-30 ABC
BBB 987 2009-01-02 JSE
A4A 288 AAA
Replace range() with xrange() in Python 2.x if s is very long.
| Parsing fixed-format data embedded in HTML in python | I am using google's appengine api
from google.appengine.api import urlfetch
to fetch a webpage. The result of
result = urlfetch.fetch("http://www.example.com/index.html")
is a string of the html content (in result.content). The problem is the data that I want to parse is not really in HTML form, so I don't think using a python HTML parser will work for me. I need to parse all of the plain text in the body of the html document. The only problem is that urlfetch returns a single string of the entire HTML document, removing all newlines and extra spaces.
EDIT:
Okay, I tried fetching a different URL and apparently urlfetch does not strip the newlines, it was the original webpage I was trying to parse that served the HTML file that way...
END EDIT
If the document is something like this:
<html><head></head><body>
AAA 123 888 2008-10-30 ABC
BBB 987 332 2009-01-02 JSE
...
A4A 288 AAA
</body></html>
result.content will be this, after urlfetch fetches it:
'<html><head></head><body>AAA 123 888 2008-10-30 ABCBBB 987 2009-01-02 JSE...A4A 288 AAA</body></html>'
Using an HTML parser will not help me with the data between the body tags, so I was going to use regular expressions to parse my data, but as you can see the last part of one line gets combined with the first part of the next line, and I don't know how to split it. I tried
result.content.split('\n')
and
result.content.split('\r')
but the resulting list was all just 1 element. I don't see any options in google's urlfetch function to not remove newlines.
Any ideas how I can parse this data? Maybe I need to fetch it differently?
Thanks in advance!
| [
"Only suggestion I can think of is to parse it as if it has fixed width columns. Newlines are not taken into consideration for HTML. \nIf you have control of the source data, put it into a text file rather than HTML.\n",
"I understand that the format of the document is the one you have posted. In that case, I agree that a parser like Beautiful Soup may not be a good solution.\nI assume that you are already getting the interesting data (between the BODY tags) with a regular expression like\nimport re\ndata = re.findall('<body>([^\\<]*)</body>', result)[0]\n\nthen, it should be as easy as:\nstart = 0\nend = 5\nwhile (end<len(data)):\n print data[start:end]\n start = end+1\n end = end+5\nprint data[start:]\n\n(note: I did not check this code against boundary cases, and I do expect it to fail. It is only here to show the generic idea)\n",
"Once you have the body text as a single, long string, you can break it up as follows.\nThis presumes that each record is 26 characters.\nbody= \"AAA 123 888 2008-10-30 ABCBBB 987 2009-01-02 JSE...A4A 288 AAA\"\nfor i in range(0,len(body),26):\n line= body[i:i+26]\n # parse the line\n\n",
"EDIT: Reading comprehension is a desirable thing. I missed the bit about the lines being run together with no separator between them, which would kinda be the whole point of this, wouldn't it? So, nevermind my answer, it's not actually relevant.\n\nIf you know that each line is 5 space-separated columns, then (once you've stripped out the html) you could do something like (untested):\ndef generate_lines(datastring):\n while datastring:\n splitresult = datastring.split(' ', 5)\n if len(splitresult) >= 5:\n datastring = splitresult[5]\n else:\n datastring = None\n yield splitresult[:5]\n\nfor line in generate_lines(data):\n process_data_line(line)\n\nOf course, you can change the split character and number of columns as needed (possibly even passing them into the generator function as additional parameters), and add error handling as appropriate. \n",
"Further suggestions for splitting the string s into 26-character blocks:\nAs a list:\n>>> [s[x:x+26] for x in range(0, len(s), 26)]\n['AAA 123 888 2008-10-30 ABC',\n 'BBB 987 2009-01-02 JSE',\n 'A4A 288 AAA']\n\nAs a generator:\n>>> for line in (s[x:x+26] for x in range(0, len(s), 26)): print line\nAAA 123 888 2008-10-30 ABC\nBBB 987 2009-01-02 JSE\nA4A 288 AAA\n\nReplace range() with xrange() in Python 2.x if s is very long.\n"
] | [
2,
2,
1,
0,
0
] | [] | [] | [
"google_app_engine",
"html",
"html_content_extraction",
"parsing",
"python"
] | stackoverflow_0000409769_google_app_engine_html_html_content_extraction_parsing_python.txt |
Q:
How would you determine where each property and method of a Python class is defined?
Given an instance of some class in Python, it would be useful to be able to determine which line of source code defined each method and property (e.g. to implement 1). For example, given a module ab.py
class A(object):
z = 1
q = 2
def y(self): pass
def x(self): pass
class B(A):
q = 4
def x(self): pass
def w(self): pass
define a function whither(class_, attribute) returning a tuple containing the filename, class, and line in the source code that defined or subclassed attribute. This means the definition in the class body, not the latest assignment due to overeager dynamism. It's fine if it returns 'unknown' for some attributes.
>>> a = A()
>>> b = B()
>>> b.spigot = 'brass'
>>> whither(a, 'z')
("ab.py", <class 'a.A'>, [line] 2)
>>> whither(b, 'q')
("ab.py", <class 'a.B'>, 8)
>>> whither(b, 'x')
("ab.py", <class 'a.B'>, 9)
>>> whither(b, 'spigot')
("Attribute 'spigot' is a data attribute")
I want to use this while introspecting Plone, where every object has hundreds of methods and it would be really useful to sort through them organized by class and not just alphabetically.
Of course, in Python you can't always reasonably know, but it would be nice to get good answers in the common case of mostly-static code.
A:
You are looking for the undocumented function inspect.classify_class_attrs(cls). Pass it a class and it will return a list of tuples (name, kind (e.g. 'method' or 'data'), defining class, attribute object). If you need information on absolutely everything in a specific instance you'll have to do additional work.
Example:
>>> import inspect
>>> import pprint
>>> import calendar
>>>
>>> hc = calendar.HTMLCalendar()
>>> hc.__class__.pathos = None
>>> calendar.Calendar.phobos = None
>>> pprint.pprint(inspect.classify_class_attrs(hc.__class__))
[...
('__doc__',
'data',
<class 'calendar.HTMLCalendar'>,
'\n This calendar returns complete HTML pages.\n '),
...
('__new__',
'data',
<type 'object'>,
<built-in method __new__ of type object at 0x814fac0>),
...
('cssclasses',
'data',
<class 'calendar.HTMLCalendar'>,
['mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun']),
('firstweekday',
'property',
<class 'calendar.Calendar'>,
<property object at 0x98b8c34>),
('formatday',
'method',
<class 'calendar.HTMLCalendar'>,
<function formatday at 0x98b7bc4>),
...
('pathos', 'data', <class 'calendar.HTMLCalendar'>, None),
('phobos', 'data', <class 'calendar.Calendar'>, None),
...
]
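Building on that, a rough sketch of the whither() helper asked for in the question. Caveats: it reports the first source line of the defining class rather than the attribute's own line, and it falls back to 'unknown' when the defining class has no retrievable source (e.g. builtins):
import inspect

def whither(obj, attribute):
    # walk the classified attributes of the instance's class
    for name, kind, defining, value in inspect.classify_class_attrs(type(obj)):
        if name == attribute:
            try:
                filename = inspect.getsourcefile(defining)
                lines, lineno = inspect.getsourcelines(defining)
            except TypeError:
                return ('unknown', defining, 'unknown')
            return (filename, defining, lineno)
    # not found on the class: it must be a plain instance attribute
    return "Attribute %r is a data attribute" % attribute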
A:
This is more-or-less impossible without static analysis, and even then, it won't always work. You can get the line where a function was defined and in which file by examining its code object, but beyond that, there's not much you can do. The inspect module can help with this. So:
import ab
a = ab.A()
meth = a.x
# So, now we have the method.
func = meth.im_func
# And the function from the method.
code = func.func_code
# And the code from the function!
print code.co_firstlineno, code.co_filename
# Or:
import inspect
print inspect.getsource(meth), inspect.getfile(meth)
But consider:
def some_method(self):
pass
ab.A.some_method = some_method
ab.A.some_class_attribute = None
Or worse:
some_cls = ab.A
some_string_var = 'another_instance_attribute'
setattr(some_cls, some_string_var, None)
Especially in the latter case, what do you want or expect to get?
A:
You are looking for the inspect module, specifically inspect.getsourcefile() and inspect.getsourcelines(). For example
a.py:
class Hello(object):
def say(self):
print 1
>>> from a import Hello
>>> hi = Hello()
>>> inspect.getsourcefile(hi.say)
a.py
>>> inspect.getsourcelines(hi.say)
([' def say(self):\n print 1\n'], 2)
Given the dynamic nature of Python, doing this for more complicated situations may simply not be possible...
| How would you determine where each property and method of a Python class is defined? | Given an instance of some class in Python, it would be useful to be able to determine which line of source code defined each method and property (e.g. to implement 1). For example, given a module ab.py
class A(object):
z = 1
q = 2
def y(self): pass
def x(self): pass
class B(A):
q = 4
def x(self): pass
def w(self): pass
define a function whither(class_, attribute) returning a tuple containing the filename, class, and line in the source code that defined or subclassed attribute. This means the definition in the class body, not the latest assignment due to overeager dynamism. It's fine if it returns 'unknown' for some attributes.
>>> a = A()
>>> b = B()
>>> b.spigot = 'brass'
>>> whither(a, 'z')
("ab.py", <class 'a.A'>, [line] 2)
>>> whither(b, 'q')
("ab.py", <class 'a.B'>, 8)
>>> whither(b, 'x')
("ab.py", <class 'a.B'>, 9)
>>> whither(b, 'spigot')
("Attribute 'spigot' is a data attribute")
I want to use this while introspecting Plone, where every object has hundreds of methods and it would be really useful to sort through them organized by class and not just alphabetically.
Of course, in Python you can't always reasonably know, but it would be nice to get good answers in the common case of mostly-static code.
| [
"You are looking for the undocumented function inspect.classify_class_attrs(cls). Pass it a class and it will return a list of tuples ('name', 'kind' e.g. 'method' or 'data', defining class, property). If you need information on absolutely everything in a specific instance you'll have to do additional work.\nExample:\n>>> import inspect\n>>> import pprint\n>>> import calendar\n>>> \n>>> hc = calendar.HTMLCalendar()\n>>> hc.__class__.pathos = None\n>>> calendar.Calendar.phobos = None\n>>> pprint.pprint(inspect.classify_class_attrs(hc.__class__))\n[...\n ('__doc__',\n 'data',\n <class 'calendar.HTMLCalendar'>,\n '\\n This calendar returns complete HTML pages.\\n '),\n ...\n ('__new__',\n 'data',\n <type 'object'>,\n <built-in method __new__ of type object at 0x814fac0>),\n ...\n ('cssclasses',\n 'data',\n <class 'calendar.HTMLCalendar'>,\n ['mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun']),\n ('firstweekday',\n 'property',\n <class 'calendar.Calendar'>,\n <property object at 0x98b8c34>),\n ('formatday',\n 'method',\n <class 'calendar.HTMLCalendar'>,\n <function formatday at 0x98b7bc4>),\n ...\n ('pathos', 'data', <class 'calendar.HTMLCalendar'>, None),\n ('phobos', 'data', <class 'calendar.Calendar'>, None),\n ...\n ]\n\n",
"This is more-or-less impossible without static analysis, and even then, it won't always work. You can get the line where a function was defined and in which file by examining its code object, but beyond that, there's not much you can do. The inspect module can help with this. So:\nimport ab\na = ab.A()\nmeth = a.x\n# So, now we have the method.\nfunc = meth.im_func\n# And the function from the method.\ncode = func.func_code\n# And the code from the function!\nprint code.co_firstlineno, code.co_filename\n\n# Or:\nimport inspect\nprint inspect.getsource(meth), inspect.getfile(meth)\n\nBut consider:\ndef some_method(self):\n pass\nab.A.some_method = some_method\nab.A.some_class_attribute = None\n\nOr worse:\nsome_cls = ab.A\nsome_string_var = 'another_instance_attribute'\nsetattr(some_cls, some_string_var, None)\n\nEspecially in the latter case, what do you want or expect to get?\n",
"You are looking for the inspect module, specifically inspect.getsourcefile() and inspect.getsourcelines(). For example\na.py:\nclass Hello(object):\n def say(self):\n print 1\n\n>>> from a import Hello\n>>> hi = Hello()\n>>> inspect.getsourcefile(hi.say)\na.py\n>>> inspect.getsourcelines(A, foo)\n([' def say(self):\\n print 1\\n'], 2)\n\nGiven the dynamic nature of Python, doing this for more complicated situations may simply not be possible...\n"
] | [
3,
2,
1
] | [] | [] | [
"introspection",
"plone",
"python",
"python_datamodel"
] | stackoverflow_0000484890_introspection_plone_python_python_datamodel.txt |
Q:
Why aren't all the names in dir(x) valid for attribute access?
Why would a coder stuff things into __dict__ that can't be used for attribute access? For example, in my Plone instance, dir(portal) includes index_html, but portal.index_html raises AttributeError. This is also true for the __class__ attribute of Products.ZCatalog.Catalog.mybrains. Is there a good reason why dir() can't be trusted?
Poking around the inspect module, I see they use object.__dict__['x'] instead of attribute access for this reason and because they do not want to trigger getattr magic.
A:
I don't know about Plone, so the following is general.
From the docs of dir:
If the object has a method named
__dir__(), this method will be called and must return the list of
attributes. This allows objects that
implement a custom __getattr__() or
__getattribute__() function to customize the way dir() reports their
attributes.
Just guessing here, but I can think of two things that may be happening--
The object has a __dir__() method that returns attributes that it doesn't have
(less likely) The object has the attribute you're asking for (i.e. it's in obj.__dict__ or type(obj).__dict__), but overrides attribute access to raise AttributeError
EDIT: __dir__ is only supported in Python 2.6+, however the (deprecated) special attributes __methods__ and __members__ can be used instead for earlier versions.
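A minimal sketch of the first case (runnable on Python 2.6+, where dir() consults __dir__):
class Phantom(object):
    def __dir__(self):
        # advertise a name the instance does not actually have
        return ['index_html']

p = Phantom()
print 'index_html' in dir(p)  # True
try:
    getattr(p, 'index_html')
except AttributeError:
    print "dir() listed it, but attribute access still fails"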
| Why aren't all the names in dir(x) valid for attribute access? | Why would a coder stuff things into __dict__ that can't be used for attribute access? For example, in my Plone instance, dir(portal) includes index_html, but portal.index_html raises AttributeError. This is also true for the __class__ attribute of Products.ZCatalog.Catalog.mybrains. Is there a good reason why dir() can't be trusted?
Poking around the inspect module, I see they use object.__dict__['x'] instead of attribute access for this reason and because they do not want to trigger getattr magic.
| [
"I don't know about Plone, so the following is general.\nFrom the docs of dir:\n\nIf the object has a method named\n __dir__(), this method will be called and must return the list of\n attributes. This allows objects that\n implement a custom __getattr__() or\n __getattribute__() function to customize the way dir() reports their\n attributes.\n\nJust guessing here, but I can think of two things that may be happening--\n\nThe object has a __dir__() method that returns attributes that it doesn't have\n(less likely) The object has the attribute you're asking for (i.e. it's in obj.__dict__ or type(obj).__dict__, but overrides __getattr__ to return AttributeError\n\nEDIT: __dir__ is only supported in Python 2.6+, however the (deprecated) special attributes __methods__ and __members__ can be used instead for earlier versions. \n"
] | [
2
] | [] | [] | [
"introspection",
"python",
"python_datamodel"
] | stackoverflow_0000485095_introspection_python_python_datamodel.txt |
Q:
Get foreign key without requesting the whole object
I have a model Foo which have a ForeignKey to the User model.
Later, I need to grab all the User's id and put then on a list
foos = Foo.objects.filter(...)
l = [ f.user.id for f in foos ]
But when I do that, django grabs the whole User instance from the DB instead of giving me just the numeric user's id, which exist in each Foo row.
How can I get all the ids without querying each user or using a select_related?
Thanks
A:
Use queryset's values() function, which will return a list of dictionaries containing name/value pairs for each attribute passed as parameters:
>>> Foo.objects.all().values('user__id')
[{'user__id': 1}, {'user__id': 2}, {'user__id': 3}]
The ORM will then be able to optimize the SQL query to only return the required fields, instead of doing a "SELECT *".
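If only the raw ids are needed, values_list() (assuming Django 1.0 or later, where it was added) flattens this one step further; a sketch against the same queryset:
>>> Foo.objects.all().values_list('user__id', flat=True)
[1, 2, 3]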
A:
Whenever you define a ForeignKey in Django, it automatically adds a FIELD_id field to your model.
For instance, if Foo has a FK to User with an attribute named "user", then you also have an attribute named user_id which contains the id of the related user.
l = [ f.user_id for f in foos ]
Calling .values() also adds performance if you select your attributes wisely
A:
Never mind... I'm not sure why this didn't work before when I tried, but this is how to do it:
l = [ f.user_id for f in foos ]
| Get foreign key without requesting the whole object | I have a model Foo which have a ForeignKey to the User model.
Later, I need to grab all the User's id and put then on a list
foos = Foo.objects.filter(...)
l = [ f.user.id for f in foos ]
But when I do that, django grabs the whole User instance from the DB instead of giving me just the numeric user's id, which exist in each Foo row.
How can I get all the ids without querying each user or using a select_related?
Thanks
| [
"Use queryset's values() function, which will return a list of dictionaries containing name/value pairs for each attribute passed as parameters:\n>>> Foo.objects.all().values('user__id')\n[{'user__id': 1}, {'user__id' 2}, {'user__id': 3}]\n\nThe ORM will then be able to optimize the SQL query to only return the required fields, instead of doing a \"SELECT *\".\n",
"Whenever you define a ForeignKey in Django, it automatically adds a FIELD_id field to your model.\nFor instance, if Foo has a FK to User with an attribute named \"user\", then you also have an attribute named user_id which contains the id of the related user.\nl = [ f.user_id for f in foos ]\n\nCalling .values() also adds performance if you select your attributes wisely\n",
"Nevermind... I´m not sure why this didn´t work before when I tried, but this is how to do:\nl = [ f.user_id for f in foos ]\n\n"
] | [
5,
4,
0
] | [] | [] | [
"django",
"django_models",
"python"
] | stackoverflow_0000476731_django_django_models_python.txt |
Q:
Client Server programming in python?
Here is source code for a multithreaded server and client in python.
In the code, the client and server close the connection after the job is finished.
I want to keep the connections alive and send more data over the same connections to avoid the overhead of closing and opening sockets every time.
Following code is from : http://www.devshed.com/c/a/Python/Basic-Threading-in-Python/1/
import pickle
import socket
import threading
# We'll pickle a list of numbers:
someList = [ 1, 2, 7, 9, 0 ]
pickledList = pickle.dumps ( someList )
# Our thread class:
class ClientThread ( threading.Thread ):
# Override Thread's __init__ method to accept the parameters needed:
def __init__ ( self, channel, details ):
self.channel = channel
self.details = details
threading.Thread.__init__ ( self )
def run ( self ):
print 'Received connection:', self.details [ 0 ]
self.channel.send ( pickledList )
for x in xrange ( 10 ):
print self.channel.recv ( 1024 )
self.channel.close()
print 'Closed connection:', self.details [ 0 ]
# Set up the server:
server = socket.socket ( socket.AF_INET, socket.SOCK_STREAM )
server.bind ( ( '', 2727 ) )
server.listen ( 5 )
# Have the server serve "forever":
while True:
channel, details = server.accept()
ClientThread ( channel, details ).start()
import pickle
import socket
import threading
# Here's our thread:
class ConnectionThread ( threading.Thread ):
def run ( self ):
# Connect to the server:
client = socket.socket ( socket.AF_INET, socket.SOCK_STREAM )
client.connect ( ( 'localhost', 2727 ) )
# Retrieve and unpickle the list object:
print pickle.loads ( client.recv ( 1024 ) )
# Send some messages:
for x in xrange ( 10 ):
client.send ( 'Hey. ' + str ( x ) + '\n' )
# Close the connection
client.close()
# Let's spawn a few threads:
for x in xrange ( 5 ):
ConnectionThread().start()
A:
Spawning a new thread for every connection is a really bad design choice.
What happens if you get hit by a lot of connections?
In fact, using threads to wait for network IO is not worth it. Your program gets really complex and you get absolutely no benefit since waiting for network in threads won't make you wait faster. You only lose by using threads in this case.
The following text is from python documentation:
There are only two ways to have a
program on a single processor do “more
than one thing at a time.”
Multi-threaded programming is the
simplest and most popular way to do
it, but there is another very
different technique, that lets you
have nearly all the advantages of
multi-threading, without actually
using multiple threads. It’s really
only practical if your program is
largely I/O bound. If your program is
processor bound, then pre-emptive
scheduled threads are probably what
you really need. Network servers are
rarely processor bound, however.
And if it is a processor bound server case. you could always leave another process/thread to do the processor part. Continuing:
If your operating system supports the
select system call in its I/O library
(and nearly all do), then you can use
it to juggle multiple communication
channels at once; doing other work
while your I/O is taking place in the
“background.” Although this strategy
can seem strange and complex,
especially at first, it is in many
ways easier to understand and control
than multi-threaded programming.
So instead of using threads, use non-blocking input/output: collect the sockets in a list and use an event loop with select.select to know which socket has data to read. Do that in a single thread.
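A minimal sketch of that single-threaded select loop (the port matches the question's example; the echo behaviour is purely illustrative):
import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('', 2727))
server.listen(5)
sockets = [server]  # every socket we are watching

while True:
    readable, _, _ = select.select(sockets, [], [])
    for s in readable:
        if s is server:
            channel, details = s.accept()  # new incoming connection
            sockets.append(channel)
        else:
            data = s.recv(1024)
            if data:
                s.send(data)       # echo the data back
            else:
                sockets.remove(s)  # client closed the connection
                s.close()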
You could choose a python asynchronous networking framework like twisted to do that for you. That will save you a lot of headaches. Twisted's code has been improved for years, and covers some corner cases you'll take time to master.
EDIT: Any existing async IO libraries (like Twisted) are Python code. You could have written it yourself, but it has already been written for you. I don't see why you wouldn't use one of those libraries instead of writing your own, likely worse, code, since you are a beginner. Networking IO is hard to get right.
A:
I'm not sure I understand the question, but don't call close() if you don't want to close the connection...
A:
For an example of a client the keeps a TCP connection open and uses a familiar protocol,
look at the source of the telnetlib module. (sorry, someone else will have to answer your threading questions.)
An example of a server that keeps a TCP connection open is in the source for the SocketServer module (any standard Python installation includes the source).
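On the server side, a minimal SocketServer sketch whose handler holds a single connection open until the client hangs up (the port matches the question's example; the echo logic is purely illustrative):
import SocketServer

class EchoHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        # keep reading on the same connection until the client closes it
        while True:
            data = self.request.recv(1024)
            if not data:
                break
            self.request.send(data)

server = SocketServer.TCPServer(('', 2727), EchoHandler)
server.serve_forever()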
| Client Server programming in python? | Here is source code for a multithreaded server and client in python.
In the code, the client and server close the connection after the job is finished.
I want to keep the connections alive and send more data over the same connections to avoid the overhead of closing and opening sockets every time.
Following code is from : http://www.devshed.com/c/a/Python/Basic-Threading-in-Python/1/
import pickle
import socket
import threading
# We'll pickle a list of numbers:
someList = [ 1, 2, 7, 9, 0 ]
pickledList = pickle.dumps ( someList )
# Our thread class:
class ClientThread ( threading.Thread ):
# Override Thread's __init__ method to accept the parameters needed:
def __init__ ( self, channel, details ):
self.channel = channel
self.details = details
threading.Thread.__init__ ( self )
def run ( self ):
print 'Received connection:', self.details [ 0 ]
self.channel.send ( pickledList )
for x in xrange ( 10 ):
print self.channel.recv ( 1024 )
self.channel.close()
print 'Closed connection:', self.details [ 0 ]
# Set up the server:
server = socket.socket ( socket.AF_INET, socket.SOCK_STREAM )
server.bind ( ( '', 2727 ) )
server.listen ( 5 )
# Have the server serve "forever":
while True:
channel, details = server.accept()
ClientThread ( channel, details ).start()
import pickle
import socket
import threading
# Here's our thread:
class ConnectionThread ( threading.Thread ):
def run ( self ):
# Connect to the server:
client = socket.socket ( socket.AF_INET, socket.SOCK_STREAM )
client.connect ( ( 'localhost', 2727 ) )
# Retrieve and unpickle the list object:
print pickle.loads ( client.recv ( 1024 ) )
# Send some messages:
for x in xrange ( 10 ):
client.send ( 'Hey. ' + str ( x ) + '\n' )
# Close the connection
client.close()
# Let's spawn a few threads:
for x in xrange ( 5 ):
ConnectionThread().start()
| [
"Spawning a new thread for every connection is a really bad design choice.\nWhat happens if you get hit by a lot of connections?\nIn fact, using threads to wait for network IO is not worth it. Your program gets really complex and you get absolutely no benefit since waiting for network in threads won't make you wait faster. You only lose by using threads in this case.\nThe following text is from python documentation:\n\nThere are only two ways to have a\n program on a single processor do “more\n than one thing at a time.”\n Multi-threaded programming is the\n simplest and most popular way to do\n it, but there is another very\n different technique, that lets you\n have nearly all the advantages of\n multi-threading, without actually\n using multiple threads. It’s really\n only practical if your program is\n largely I/O bound. If your program is\n processor bound, then pre-emptive\n scheduled threads are probably what\n you really need. Network servers are\n rarely processor bound, however.\n\nAnd if it is a processor bound server case. you could always leave another process/thread to do the processor part. Continuing:\n\nIf your operating system supports the\n select system call in its I/O library\n (and nearly all do), then you can use\n it to juggle multiple communication\n channels at once; doing other work\n while your I/O is taking place in the\n “background.” Although this strategy\n can seem strange and complex,\n especially at first, it is in many\n ways easier to understand and control\n than multi-threaded programming.\n\nSo instead of using threads, use non-blocking input/output: collect the sockets in a list and use an event loop with select.select to know which socket has data to read. Do that in a single thread.\nYou could choose a python asynchronous networking framework like twisted to do that for you. That will save you a lot of headaches. Twisted's code has been improved for years, and covers some corner cases you'll take time to master.\nEDIT: Any existing async IO libraries (like Twisted) are python code. You could have written it yourself, but it has already been written for you. I don't see why you wouldn't use one of those libraries and write your own worst code instead, since you are a beginner. Networing IO is hard to get right.\n",
"I'm not sure I understand the question, but don't call close() if you don't want to close the connection...\n",
"For an example of a client the keeps a TCP connection open and uses a familiar protocol,\nlook at the source of the telnetlib module. (sorry, someone else will have to answer your threading questions.)\nAn example of a server that keeps a TCP connection open is in the source for the SocketServer module (any standard Python installation includes the source).\n"
] | [
20,
3,
0
] | [] | [] | [
"client",
"multithreading",
"python",
"sockets"
] | stackoverflow_0000487229_client_multithreading_python_sockets.txt |
Q:
pyqt import problem
I am having some trouble doing this in Python:
from PyQt4 import QtCore, QtGui
from dcopext import DCOPClient, DCOPApp
The traceback I get is
from dcopext import DCOPClient, DCOPApp
File "/usr/lib/python2.5/site-packages/dcopext.py", line 35, in <module>
from dcop import DCOPClient
RuntimeError: the qt and PyQt4.QtCore modules both wrap the QObject class
I tried switching the imports and importing dcopext later in the file, but neither worked.
Thanks for any suggestions.
Edit: I have narrowed it down to one problem: I am using dcopext which internally uses qt3, but I want it to use PyQt4.
A:
The dcopext module is part of PyKDE3, the Python bindings for KDE3 which uses Qt 3.x, while you're using PyQt/Qt 4.x.
You need to upgrade to PyKDE4, now released as part of KDE itself, unless you want to target KDE 3 in which case you need a corresponding old version of Qt and PyQt (3.x).
| pyqt import problem | I am having some trouble doing this in Python:
from PyQt4 import QtCore, QtGui
from dcopext import DCOPClient, DCOPApp
The traceback I get is
from dcopext import DCOPClient, DCOPApp
File "/usr/lib/python2.5/site-packages/dcopext.py", line 35, in <module>
from dcop import DCOPClient
RuntimeError: the qt and PyQt4.QtCore modules both wrap the QObject class
I tried switching the imports and importing dcopext later in the file, but neither worked.
Thanks for any suggestions.
Edit: I have narrowed it down to one problem: I am using dcopext which internally uses qt3, but I want it to use PyQt4.
| [
"The dcopext module is part of PyKDE3, the Python bindings for KDE3 which uses Qt 3.x, while you're using PyQt/Qt 4.x. \nYou need to upgrade to PyKDE4, now released as part of KDE itself, unless you want to target KDE 3 in which case you need a corresponding old version of Qt and PyQt (3.x).\n"
] | [
1
] | [] | [] | [
"dcop",
"pyqt",
"python"
] | stackoverflow_0000487484_dcop_pyqt_python.txt |
Q:
Transferring object through Pyro
I'm using Pyro in a project, and can't seem to understand how to transfer a complete object over the wire. The object is not distributed (my distributed objects works perfectly fine), but should function as an argument to an already available distributed object.
My object is a derived from a custom class containing some methods and some variables - an integer and a list. The class is available for both the server and the client. When using my object as an argument to a method of a distributed object the integer variable is "received" correctly, but the list is empty, even though I can see that it contains values just before it's "sent".
Why is this?
Short version of the class:
class Collection(Pyro.core.ObjBase):
num = 0
operations = [("Operation:", "Value:", "Description", "Timestamp")]
def __init__(self):
Pyro.core.ObjBase.__init__(self)
def add(self, val, desc):
entry = ("Add", val, desc, strftime("%Y-%m-%d %H:%M:%S"))
self.operations.append(entry)
self.num = self.num + 1
def printop(self):
print "This collection will execute the following operations:"
for item in self.operations:
print item
The receiving method in the distributed object:
def apply(self, collection):
print "Todo: Apply collection"
#op = collection.getop()
print "Number of collected operations:", collection.a
collection.printop()
A:
operations is a class attribute, not an instance attribute. self.num = self.num + 1 rebinds num as an instance attribute (so it is serialized and transferred), while self.operations.append(...) only mutates the shared class-level list, which never enters the instance's __dict__ and so never goes over the wire. Try setting it in __init__ via self.operations = <whatever>.
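A sketch of that fix applied to the question's class (Pyro 3.x style, as in the question):
class Collection(Pyro.core.ObjBase):
    def __init__(self):
        Pyro.core.ObjBase.__init__(self)
        # Instance attributes live in the object's __dict__, so they are
        # pickled and sent over the wire along with the object.
        self.num = 0
        self.operations = [("Operation:", "Value:", "Description", "Timestamp")]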
A:
Your receiving method, apply, has the same name as the built-in Python function.
| Transferring object through Pyro | I'm using Pyro in a project, and can't seem to understand how to transfer a complete object over the wire. The object is not distributed (my distributed objects works perfectly fine), but should function as an argument to an already available distributed object.
My object is a derived from a custom class containing some methods and some variables - an integer and a list. The class is available for both the server and the client. When using my object as an argument to a method of a distributed object the integer variable is "received" correctly, but the list is empty, even though I can see that it contains values just before it's "sent".
Why is this?
Short version of the class:
class Collection(Pyro.core.ObjBase):
num = 0
operations = [("Operation:", "Value:", "Description", "Timestamp")]
def __init__(self):
Pyro.core.ObjBase.__init__(self)
def add(self, val, desc):
entry = ("Add", val, desc, strftime("%Y-%m-%d %H:%M:%S"))
self.operations.append(entry)
self.num = self.num + 1
def printop(self):
print "This collection will execute the following operations:"
for item in self.operations:
print item
The receving method in the distributed object:
def apply(self, collection):
print "Todo: Apply collection"
#op = collection.getop()
print "Number of collected operations:", collection.a
collection.printop()
| [
"Operations is a class attribute, not the object attribute. That is why it's not transferred. Try setting it in __init__ via self.operations = <whatever>.\n",
"Your receiving method, apply, has the same name as the built-in Python function.\n"
] | [
4,
1
] | [] | [] | [
"distributed",
"python"
] | stackoverflow_0000487553_distributed_python.txt |
Q:
How can I get interactive Python to avoid using readline while allowing utf-8 input?
I use a terminal (9term) that does command-line editing itself - programs that use readline just get in its way. It's fully utf-8 aware. How can I make an interactive python session disable readline while retaining utf-8 input and output?
Currently I use:
LANG=en_GB.UTF-8 export LANG
cat | python -i
however this causes sys.stdin.encoding to be None, which implies ASCII
(the system default encoding, which doesn't seem to be changeable)
TERM=dumb python
doesn't disable readline (and it mangles utf-8 input also).
I'm new to python, so apologies if this is an obvious question.
A:
In the past, I've disabled Python readline by rebuilding it from source: configure --disable-readline
This might be overkill, though, for your situation.
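If rebuilding is too heavy, a workaround sketch for Python 2: wrap the piped streams in UTF-8 codecs yourself, since sys.stdin.encoding is None when stdin is a pipe (this could go in a PYTHONSTARTUP file, for example):
import sys
import codecs

# Force UTF-8 decoding/encoding on the piped streams instead of the
# ASCII fallback that applies when sys.stdin.encoding is None.
sys.stdin = codecs.getreader('utf-8')(sys.stdin)
sys.stdout = codecs.getwriter('utf-8')(sys.stdout)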
| How can I get interactive Python to avoid using readline while allowing utf-8 input? | I use a terminal (9term) that does command-line editing itself - programs that use readline just get in its way. It's fully utf-8 aware. How can I make an interactive python session disable readline while retaining utf-8 input and output?
Currently I use:
LANG=en_GB.UTF-8 export LANG
cat | python -i
however this causes sys.stdin.encoding to be None, which implies Ascii
(the system default encoding, which doesn't seem to be changeable)
TERM=dumb python
doesn't disable readline (and it mangles utf-8 input also).
I'm new to python, so apologies if this is an obvious question.
| [
"In the past, I've disabled Python readline by rebuilding it from source: configure --disable-readline\nThis might be overkill, though, for your situation.\n"
] | [
2
] | [] | [] | [
"interactive",
"python",
"utf_8"
] | stackoverflow_0000487800_interactive_python_utf_8.txt |
Q:
ZipFile complains, is there a way around using the zipfile module?
I am trying to decompress some MMS messages sent to me zipped. The problem is that sometimes it works and sometimes it doesn't. When it doesn't work, the Python zipfile module complains and says that it is a bad zip file. But the zip file decompresses fine using the Unix unzip command.
This is what I've got:
zippedfile = open('%stemp/tempfile.zip' % settings.MEDIA_ROOT, 'w+')
zippedfile.write(string)
z = zipfile.ZipFile(zippedfile)
I am using 'w+' and writing a string to it, the string contains a base64 decoded string representation of a zip file.
Then I do like this:
filelist = z.infolist()
images = []
for f in filelist:
raw_mimetype = mimetypes.guess_type(f.filename)[0]
if raw_mimetype:
mimetype = raw_mimetype.split('/')[0]
else:
mimetype = 'unknown'
if mimetype == 'image':
images.append(f.filename)
This way I've got a list of all the images in the zip file. But this doesn't always work, since the zipfile module complains about some of the files.
Is there a way to do this without using the zipfile module?
Could I somehow use the Unix command unzip instead of zipfile and then do the same thing to retrieve all the images from the archive?
A:
You should very probably open the file in binary mode, when writing zipped data into it. That is, you should use
zippedfile = open('%stemp/tempfile.zip' % settings.MEDIA_ROOT, 'wb+')
A:
You might have to close and reopen the file, or maybe seek to the start of the file after writing it.
filename = '%stemp/tempfile.zip' % settings.MEDIA_ROOT
zippedfile = open(filename , 'wb+')
zippedfile.write(string)
zippedfile.close()
z = zipfile.ZipFile(filename,"r")
You say the string is base64 decoded, but you haven't shown any code that decodes it - are you sure it's not still encoded?
data = string.decode('base64')
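Putting both answers together, a sketch that skips the temp file entirely (assumes string still holds the raw base64 text from the MMS):
import zipfile
from cStringIO import StringIO

data = string.decode('base64')        # decode first, in case it wasn't
z = zipfile.ZipFile(StringIO(data))   # StringIO is seekable, so ZipFile accepts it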
| ZipFile complains, is there a way around using the zipfile module? | I am trying to decompress some MMS messages sent to me zipped. The problem is that sometimes it works, and others not. And when it doesnt work, the python zipfile module complains and says that it is a bad zip file. But the zipfile decompresses fine using the unix unzip command.
This is what ive got
zippedfile = open('%stemp/tempfile.zip' % settings.MEDIA_ROOT, 'w+')
zippedfile.write(string)
z = zipfile.ZipFile(zippedfile)
I am using 'w+' and writing a string to it, the string contains a base64 decoded string representation of a zip file.
Then I do like this:
filelist = z.infolist()
images = []
for f in filelist:
raw_mimetype = mimetypes.guess_type(f.filename)[0]
if raw_mimetype:
mimetype = raw_mimetype.split('/')[0]
else:
mimetype = 'unknown'
if mimetype == 'image':
images.append(f.filename)
This way I've got a list of all the images in the zip file. But this doesnt always work, since the zipfile module complains about some of the files.
Is there a way to do this, without using the zipfile module?
Could I somehow use the unix command unzip instead of zipfile and then to the same thing to retrive all the images from the archive?
| [
"You should very probably open the file in binary mode, when writing zipped data into it. That is, you should use\nzippedfile = open('%stemp/tempfile.zip' % settings.MEDIA_ROOT, 'wb+')\n\n",
"You might have to close and reopen the file, or maybe seek to the start of the file after writing it.\nfilename = '%stemp/tempfile.zip' % settings.MEDIA_ROOT\nzippedfile = open(filename , 'wb+')\nzippedfile.write(string)\nzippedfile.close()\nz = zipfile.ZipFile(filename,\"r\")\n\nYou say the string is base64 decoded, but you haven't shown any code that decodes it - are you sure it's not still encoded?\ndata = string.decode('base64')\n\n"
] | [
5,
1
] | [] | [] | [
"popen",
"python",
"python_zipfile",
"zip"
] | stackoverflow_0000488054_popen_python_python_zipfile_zip.txt |
Q:
How to copy a picture from canvas to clipboard?
I have some Tkinter canvas and some picture of lines and text on it. Is there an easy way to copy it to a clipboard?
A:
You could use the .postscript method of the canvas to get an Encapsulated PostScript (EPS) representation of the contents. Then, use ImageMagick's Python bindings (PythonMagick or PythonMagickWand) to convert the EPS to a Windows Enhanced Metafile (EMF). Finally, copy it to the clipboard (e.g. using nosklo's solution) with the CF_ENHMETAFILE clipboard format.
A:
To use the Windows clipboard you must convert the image data to a format accepted by the Win32 API. Then, just use this function:
import win32clipboard

def send_to_clipboard(clip_type, data):
    win32clipboard.OpenClipboard()
    win32clipboard.EmptyClipboard()
    win32clipboard.SetClipboardData(clip_type, data)
    win32clipboard.CloseClipboard()
Where clip_type can be win32clipboard.CF_BITMAP, win32clipboard.CF_TIFF or many others.
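Hypothetical glue for the EPS route from the first answer, using the helper above (the PythonMagick conversion step is only sketched, and emf_bytes is a placeholder name):
eps = canvas.postscript()   # Tkinter canvas -> EPS data as a string
# ... convert eps to EMF bytes with PythonMagick/PythonMagickWand here ...
send_to_clipboard(win32clipboard.CF_ENHMETAFILE, emf_bytes)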
| How to copy a picture from canvas to clipboard? | I have some Tkinter canvas and some picture of lines and text on it. Is there an easy way to copy it to a clipboard?
| [
"You could use .postscript method of the canvas to get an Encapsulated PostScript (EPS) representation of the contents. Then, use `ImageMagick's Python bindings (PythonMagick or PythonMagickWand) to convert the EPS to a Windows Enhanced Metafile (EMF). Finally, copy it to the clipboard (e.g. using nosklo's solution) with the CF_ENHMETAFILE clipboard format.\n",
"To use windows clipboard you must convert the image data to a format accepted by win api. Then, just use this function:\nimport win32clipboard\n\ndef send_to_clibboard(clip_type, data): \n win32clipboard.OpenClipboard()\n win32clipboard.EmptyClipboard()\n win32clipboard.SetClipboardData(clip_type, data) \n win32clipboard.CloseClipboard()\n\nWhere clip_type can be win32clipboard.CF_BITMAP, win32clipboard.CF_TIFF or many others.\n"
] | [
5,
4
] | [] | [] | [
"clipboard",
"python",
"tkinter"
] | stackoverflow_0000457514_clipboard_python_tkinter.txt |
Q:
Validating Python Arguments in Subclasses
I'm trying to validate a few Python arguments. Until we get the new optional function annotations in Python 3.0, what is the best way of going about this?
Here is an example of what I am attempting:
class A(object):
    @accepts(int, int, int)
    def __init__(self, a, b, c):
        pass

class B(A):
    @accepts(int, int, int, int)
    def __init__(self, a, b, c, d):
        A.__init__(self, a, b, c)
As you can see the decorator is nicely performing type checking of the inputs to my class, but I have to define all the arguments to the second class, which gets very nasty when I have multiple levels of inheritance. I can use kwargs with some success, but it's not quite as nice as the above approach for type checking.
Essentially I want to pop one argument off the kwargs list and check it's type, then pass the remainder to it's parent, but do this in a very flexible and clean way as this scales.
Any suggestions?
A:
Why not just define an any value, and decorate the subclass constructor with @accepts(any, any, any, int)? Your decorator won't check parameters marked with any, and the @accepts on the superclass constructor will check all the arguments passed up to it by subclasses.
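The question doesn't show the accepts implementation, so here is one plausible sketch of it with the sentinel this answer suggests (spelled ANY here to avoid shadowing the any builtin):
ANY = object()   # sentinel meaning "don't check this argument"

def accepts(*types):
    def decorator(func):
        def wrapper(self, *args):
            for arg, t in zip(args, types):
                if t is not ANY and not isinstance(arg, t):
                    raise TypeError("expected %s, got %r" % (t, arg))
            return func(self, *args)
        return wrapper
    return decorator

With this in place, B.__init__ can be decorated with @accepts(ANY, ANY, ANY, int), and the superclass's own @accepts re-checks the first three arguments.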
A:
You might want to play around with the inspect module. It will let you enumerate superclasses, argument lists, and other fun stuff. It seems that you might want to inspect the argument list of the superclass __init__ method and compare it against what you have in the subclass. I'm not sure if this is going to work for you or not.
I would be careful about what assumptions you make though. It might not be safe to assume that just because class A has N arguments to __init__, subclasses will contain at least N arguments and pass the first N through to the super class. If the subclass is more specific, then it might fill in all but 2 of the arguments to its superclass's __init__ method.
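An illustrative use of inspect for that comparison (Python 2; assumes the A and B classes from the question):
import inspect

sub_args = inspect.getargspec(B.__init__)[0]    # ['self', 'a', 'b', 'c', 'd']
super_args = inspect.getargspec(A.__init__)[0]  # ['self', 'a', 'b', 'c']
extra = sub_args[len(super_args):]              # ['d'] -- arguments B adds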
| Validating Python Arguments in Subclasses | I'm trying to validate a few python arguments. Until we get the new static typing in Python 3.0, what is the best way of going about this.
Here is an example of what I am attempting:
class A(object):
@accepts(int, int, int)
def __init__(a, b, c):
pass
class B(A):
@accepts(int, int, int, int)
def __init__(a, b, c, d):
A.__init__(a, b, c)
As you can see the decorator is nicely performing type checking of the inputs to my class, but I have to define all the arguments to the second class, which gets very nasty when I have multiple levels of inheritance. I can use kwargs with some success, but it's not quite as nice as the above approach for type checking.
Essentially I want to pop one argument off the kwargs list and check it's type, then pass the remainder to it's parent, but do this in a very flexible and clean way as this scales.
Any suggestions?
| [
"Why not just define an any value, and decorate the subclass constructor with @accepts(any, any, any, int)? Your decorator won't check parameters marked with any, and the @accepts on the superclass constructor will check all the arguments passed up to it by subclasses.\n",
"You might want to play around with the inspect module. It will let you enumerate superclasses, argument lists, and other fun stuff. It seems that you might want to inspect the argument list of the superclass __init__ method and compare it against what you have in the subclass. I'm not sure if this is going to work for you or not.\nI would be careful about what assumptions you make though. It might not be safe to assume that just because class A has N arguments to __init__, subclasses will contain at least N arguments and pass the first N through to the super class. If the subclass is more specific, then it might fill in all by 2 of the arguments to its superclasses __init__ method.\n"
] | [
1,
0
] | [] | [] | [
"decorator",
"inheritance",
"python",
"static_typing"
] | stackoverflow_0000488772_decorator_inheritance_python_static_typing.txt |
Q:
Why am I getting the following error in Python "ImportError: No module named py"?
I'm a Python newbie, so bear with me :)
I created a file called test.py with the contents as follows:
test.py
import sys
print sys.platform
print 2 ** 100
I then ran import test.py in the interpreter to follow an example in my book.
When I do this, I get the output with the import error on the end.
win32
1267650600228229401496703205376
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named py
Why do I get this error and how do I fix it? Thanks!
A:
Instead of:
import test.py
simply write:
import test
This assumes test.py is in the same directory as the file that imports it.
A:
This strange-looking error is a result of how Python imports modules.
Python sees:
import test.py
Python thinks (simplified a bit):

import module test:
- search for a test.py in the module search paths
- execute test.py (where you get your output)
- import 'test' as a name into the current namespace

import test.py (i.e. submodule py of test):
- search for file test/py.py
- throw ImportError (no module named 'py').
Because python allows dotted module names, it just thinks you have a submodule named py within the test module, and tried to find that. It has no idea you're attempting to import a file.
A:
You don't specify the extension when importing. Just do:
import test
A:
As others have mentioned, you don't need to put the file extension in your import statement. Recommended reading is the Modules section of the Python Tutorial.
For a little more background into the error, the interpreter thinks you're trying to import a module named py from inside the test package, since the dot indicates encapsulation. Because no such module exists (and test isn't even a package!), it raises that error.
As indicated in the more in-depth documentation on the import statement it still executes all the statements in the test module before attempting to import the py module, which is why you get the values printed out.
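As an aside, if the intent was really to run the file (as the .py suffix suggests) rather than import it as a module, Python 2 provides execfile:
execfile('test.py')   # executes the file's statements; no import machinery involved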
| Why am I getting the following error in Python "ImportError: No module named py"? | I'm a Python newbie, so bear with me :)
I created a file called test.py with the contents as follows:
test.py
import sys
print sys.platform
print 2 ** 100
I then ran import test.py file in the interpreter to follow an example in my book.
When I do this, I get the output with the import error on the end.
win32
1267650600228229401496703205376
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named py
Why do I get this error and how do I fix it? Thanks!
| [
"Instead of:\nimport test.py\n\nsimply write:\nimport test\n\nThis assumes test.py is in the same directory as the file that imports it.\n",
"This strange-looking error is a result of how Python imports modules. \nPython sees: \nimport test.py\n\nPython thinks (simplified a bit):\n\nimport module test.\n\nsearch for a test.py in the module search paths\nexecute test.py (where you get your output)\nimport 'test' as name into current namespace\n\nimport test.py\n\nsearch for file test/py.py\nthrow ImportError (no module named 'py') found.\n\n\nBecause python allows dotted module names, it just thinks you have a submodule named py within the test module, and tried to find that. It has no idea you're attempting to import a file.\n",
"You don't specify the extension when importing. Just do:\nimport test\n\n",
"As others have mentioned, you don't need to put the file extension in your import statement. Recommended reading is the Modules section of the Python Tutorial.\nFor a little more background into the error, the interpreter thinks you're trying to import a module named py from inside the test package, since the dot indicates encapsulation. Because no such module exists (and test isn't even a package!), it raises that error.\nAs indicated in the more in-depth documentation on the import statement it still executes all the statements in the test module before attempting to import the py module, which is why you get the values printed out.\n"
] | [
44,
7,
5,
2
] | [] | [] | [
"python"
] | stackoverflow_0000489497_python.txt |
Q:
Integrating command-line generated python .coverage files with PyDev
My build environment is configured to compile, run and create coverage file at the command line (using Ned Batchelder coverage.py tool).
I'm using Eclipse with PyDev as my editor, but for practical reasons, it's not possible/convenient for me to convert my whole build environment to Eclipse (and thus generate the coverage data directly from the IDE, as it's designed to do)
PyDev seems to be using the same coverage tool (or something very similar to it) to generate its coverage information, so I'm guessing there should be some way of integrating my external coverage files into Eclipse/PyDev.
Any idea on how to do this?
A:
I don't know anything about PyDev's integration of coverage.py (or if it even uses coverage.py), but the .coverage files are pretty simple. They are marshal'ed dictionaries.
I haven't tested this code, but you can try this to combine two .coverage files into one:
import marshal
c1_dict = marshal.load(open(file_name_1, 'rb'))
c2_dict = marshal.load(open(file_name_2, 'rb'))
c1_dict.update(c2_dict)
marshal.dump(c1_dict, open(file_name_out, 'wb'))
A:
I needed exactly something like this some time ago, when PyDev still used an older version of coverage.py than the one accessible from the script creator's page.
What I did was detect where PyDev was saving its .coverage file. For me it was:
C:\Users\Admin\workspace\.metadata\.plugins\org.python.pydev.debug\.coverage
Then I manually ran a new version of coverage.py from a separate script and told it to save its .coverage file in the place where PyDev saves its. I cannot remember if there is a command-line argument to coverage.py or if I simply copied the .coverage file with a script, but after that, if you simply open the Code Coverage Results View and click Refresh coverage information!, PyDev will nicely process the data as if it generated the file itself.
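A sketch of that copy step as a script (the path is the one from this answer; adjust it for your own workspace):
import shutil

PYDEV_COVERAGE = (r"C:\Users\Admin\workspace\.metadata\.plugins"
                  r"\org.python.pydev.debug\.coverage")
shutil.copyfile(".coverage", PYDEV_COVERAGE)   # then hit Refresh in PyDev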
| Integrating command-line generated python .coverage files with PyDev | My build environment is configured to compile, run and create coverage file at the command line (using Ned Batchelder coverage.py tool).
I'm using Eclipse with PyDev as my editor, but for practical reasons, it's not possible/convenient for me to convert my whole build environment to Eclipse (and thus generate the coverage data directly from the IDE, as it's designed to do)
PyDev seems to be using the same coverage tool (or something very similar to it) to generate its coverage information, so I'm guessing there should be some way of integrating my external coverage files into Eclipse/PyDev.
Any idea on how to do this?
| [
"I don't know anything about PyDev's integration of coverage.py (or if it even uses coverage.py), but the .coverage files are pretty simple. They are marhsal'ed dictionaries.\nI haven't tested this code, but you can try this to combine two .coverage files into one:\nimport marshal\nc1_dict = marshal.load(open(file_name_1, 'rb'))\nc2_dict = marshal.load(open(file_name_2, 'rb'))\nc1_dict.update(c2_dict)\nmarshal.dump(c1_dict, open(file_name_out, 'wb'))\n\n",
"I needed exactly something like this some time ago, when PyDev still used an older version of coverage.py than the one accessible from the script creator's page.\nWhat I did was detecting where PyDev was saving his .coverage file. For me it was:\n C:\\Users\\Admin\\workspace\\.metadata\\.plugins\\org.python.pydev.debug\\.coverage\n\nThen I manually ran a new version of coverage.py from a separate script and told it to save its .coverage file in the place where PyDev saves its. I cannot remember if there is a command-line argument to coverage.py or if I simply copied the .coverage file with a script, but after that, if you simply open the Code Coverage Results View and click Refresh coverage information!, PyDev will nicely process the data as if it generated the file itself.\n"
] | [
3,
3
] | [] | [] | [
"code_coverage",
"eclipse",
"pydev",
"python",
"python_coverage"
] | stackoverflow_0000297294_code_coverage_eclipse_pydev_python_python_coverage.txt |
Q:
Connecting to MySQL with Python 2.6...how?
All my searches, including this question on Stack, point me to MySQLdb. Unfortunately MySQLdb doesn't have a version for Python 2.6.
What am I to do?
A:
Have you tried compiling it for Python 2.6? The APIs change very little in minor releases, so it's likely to Just Work (TM).
Edit: According to this post, it does work and the poster mentions that Windows binaries have been posted.
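Once a 2.6-compatible build is installed, usage is unchanged; a minimal sketch (illustrative credentials):
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="me", passwd="secret", db="test")
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print cur.fetchone()
conn.close()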
| Connecting to MySQL with Python 2.6...how? | All my searches, including this question on Stack, point me to MySQLdb. Unfortunately MySQLdb doesn't have a version for Python 2.6.
What am I to do?
| [
"Have you tried compiling it for Python 2.6? The APIs change very little in minor releases, so it's likely to Just Work (TM).\nEdit: According to this post, it does work and the poster mentions that Windows binaries have been posted.\n"
] | [
2
] | [] | [] | [
"mysql",
"python"
] | stackoverflow_0000489807_mysql_python.txt |
Q:
QScintilla scrollbar
When I add a QsciScintilla object to my main window the horizontal scrollbar is active and super wide (tons of apparent white space). Easy fix?
A:
Easy fix:
sc.SendScintilla(sc.SCI_SETHSCROLLBAR, 0)
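In context, a minimal PyQt4 sketch (assumes PyQt4 and QScintilla2 are installed):
from PyQt4.QtGui import QApplication
from PyQt4.Qsci import QsciScintilla

app = QApplication([])
sc = QsciScintilla()
sc.SendScintilla(QsciScintilla.SCI_SETHSCROLLBAR, 0)  # hide the horizontal scrollbar
sc.show()
app.exec_()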
| QScintilla scrollbar | When I add a QsciScintilla object to my main window the horizontal scrollbar is active and super wide (tons of apparent white space). Easy fix?
| [
"Easy fix:\nsc.SendScintilla(sc.SCI_SETHSCROLLBAR, 0)\n\n"
] | [
1
] | [] | [] | [
"python",
"qt",
"scintilla"
] | stackoverflow_0000490130_python_qt_scintilla.txt |