| content (string, 85-101k chars) | title (string, 0-150 chars) | question (string, 15-48k chars) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (string, 35-137 chars) |
---|---|---|---|---|---|---|---|---|
Q:
How do I make a menu that does not require the user to press [enter] to make a selection?
I've got a menu in Python. That part was easy. I'm using raw_input() to get the selection from the user.
The problem is that raw_input (and input) require the user to press Enter after they make a selection. Is there any way to make the program act immediately upon a keystroke? Here's what I've got so far:
import sys
print """Menu
1) Say Foo
2) Say Bar"""
answer = raw_input("Make a selection> ")
if "1" in answer: print "foo"
elif "2" in answer: print "bar"
It would be great to have something like
print menu
while lastKey == "":
    lastKey = check_for_recent_keystrokes()
if "1" in lastKey: #do stuff...
A:
On Windows:
import msvcrt
answer=msvcrt.getch()
A:
On Linux:
set raw mode
select and read the keystroke
restore normal settings
import sys
import select
import termios
import tty
def getkey():
    old_settings = termios.tcgetattr(sys.stdin)
    tty.setraw(sys.stdin.fileno())
    select.select([sys.stdin], [], [], 0)
    answer = sys.stdin.read(1)
    termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_settings)
    return answer
print """Menu
1) Say Foo
2) Say Bar"""
answer=getkey()
if "1" in answer: print "foo"
elif "2" in answer: print "bar"
A:
Wow, that took forever. Ok, here's what I've ended up with
#!C:\python25\python.exe
import msvcrt
print """Menu
1) Say Foo
2) Say Bar"""
while 1:
    char = msvcrt.getch()
    if char == chr(27): #escape
        break
    if char == "1":
        print "foo"
        break
    if char == "2":
        print "Bar"
        break
It fails hard using IDLE, the python...thing...that comes with python. But once I tried it in DOS (er, CMD.exe), as a real program, then it ran fine.
No one try it in IDLE, unless you have Task Manager handy.
I've already forgotten how I lived with menus that aren't super-instant responsive.
A:
The reason msvcrt fails in IDLE is that IDLE is not attached to the console that msvcrt reads from, whereas when you run the program natively in cmd.exe it works nicely. For the same reason, your program blows up on Mac and Linux terminals.
But I guess if you're going to be using this specifically for windows, more power to ya.
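For completeness, here's a minimal cross-platform sketch combining the two approaches above (an illustration, not code from either answerer; it needs a real console, so it will hit the same IDLE problem described above):
import sys

def getch():
    # Read one keystroke without waiting for Enter.
    try:
        import msvcrt # Windows console
        return msvcrt.getch()
    except ImportError:
        import termios
        import tty # POSIX terminals
        fd = sys.stdin.fileno()
        old_settings = termios.tcgetattr(fd)
        try:
            tty.setraw(fd) # raw mode: the keystroke arrives immediately
            return sys.stdin.read(1)
        finally:
            # always restore the terminal settings, even if the read fails
            termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)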
| How do I make a menu that does not require the user to press [enter] to make a selection? | I've got a menu in Python. That part was easy. I'm using raw_input() to get the selection from the user.
The problem is that raw_input (and input) require the user to press Enter after they make a selection. Is there any way to make the program act immediately upon a keystroke? Here's what I've got so far:
import sys
print """Menu
1) Say Foo
2) Say Bar"""
answer = raw_input("Make a selection> ")
if "1" in answer: print "foo"
elif "2" in answer: print "bar"
It would be great to have something like
print menu
while lastKey == "":
    lastKey = check_for_recent_keystrokes()
if "1" in lastKey: #do stuff...
| [
"On Windows:\nimport msvcrt\nanswer=msvcrt.getch()\n\n",
"On Linux:\n\nset raw mode\nselect and read the keystroke\nrestore normal settings\n\n\nimport sys\nimport select\nimport termios\nimport tty\n\ndef getkey():\n old_settings = termios.tcgetattr(sys.stdin)\n tty.setraw(sys.stdin.fileno())\n select.select([sys.stdin], [], [], 0)\n answer = sys.stdin.read(1)\n termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_settings)\n return answer\n\nprint \"\"\"Menu\n1) Say Foo\n2) Say Bar\"\"\"\n\nanswer=getkey()\n\nif \"1\" in answer: print \"foo\"\nelif \"2\" in answer: print \"bar\"\n\n\n",
"Wow, that took forever. Ok, here's what I've ended up with \n#!C:\\python25\\python.exe\nimport msvcrt\nprint \"\"\"Menu\n1) Say Foo \n2) Say Bar\"\"\"\nwhile 1:\n char = msvcrt.getch()\n if char == chr(27): #escape\n break\n if char == \"1\":\n print \"foo\"\n break\n if char == \"2\":\n print \"Bar\"\n break\n\nIt fails hard using IDLE, the python...thing...that comes with python. But once I tried it in DOS (er, CMD.exe), as a real program, then it ran fine.\nNo one try it in IDLE, unless you have Task Manager handy.\nI've already forgotten how I lived with menus that arn't super-instant responsive.\n",
"The reason msvcrt fails in IDLE is because IDLE is not accessing the library that runs msvcrt. Whereas when you run the program natively in cmd.exe it works nicely. For the same reason that your program blows up on Mac and Linux terminals.\nBut I guess if you're going to be using this specifically for windows, more power to ya.\n"
] | [
10,
9,
4,
0
] | [] | [] | [
"python"
] | stackoverflow_0000001829_python.txt |
Q:
File size differences after copying a file to a server via FTP
I have created a PHP script to update a live web server from a local directory.
I'm migrating the script into Python. It works fine for the most part, but after a PUT command, the size of the file appears to change. Thus, the size of the file is different from that of the file on the server.
Once I download the file again from the FTP server, the only difference is the CR/LF marks. This annoys me because the same script compares the sizes of the files to decide what to update. Also, in case it means anything, the script works perfectly in PHP via ftp_put.
from ftplib import FTP
ftpserver = "myserver"
ftpuser = "myuser"
ftppass = "mypwd"
locfile = "g:/test/style.css"
ftpfile = "/temp/style.css"
try:
    ftp = FTP(ftpserver, ftpuser, ftppass)
except:
    exit ("Cannot connect")
f = open (locfile, "r")
try:
    ftp.delete (ftpfile)
except:
    pass
# ftp.sendcmd ("TYPE I")
# ftp.storlines("STOR %s" % ftpfile, f)
ftp.storbinary("STOR %s" % ftpfile, f)
f.close()
ftp.dir (ftpfile)
ftp.quit()
Any suggestions?
A:
Do you need to open the locfile in binary using rb?
f = open (locfile, "rb")
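Put together with the question's script, the upload section would then read as follows (a sketch; binary mode stops Windows from translating line endings before the bytes reach ftplib, which is what makes the sizes differ):
from ftplib import FTP

ftp = FTP(ftpserver, ftpuser, ftppass)
f = open(locfile, "rb") # binary mode: no CR/LF translation
ftp.storbinary("STOR %s" % ftpfile, f)
f.close()
ftp.quit()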
A:
Well, if you go under the properties of your file in Windows or a *nix environment, you will notice two sizes. One is the size on disk, and one is the actual size. The size on disk is the number of bytes taken up in whole sectors on your hard disk. That is because two files cannot share a sector on most modern file systems, so if your file fills up half of a sector, the whole sector is marked as filled.
So you might be comparing the on-disk file size to the actual file size on the FTP server, or vice versa.
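To see the two numbers this answer is talking about, a quick sketch (st_blocks is POSIX-only and counts 512-byte units):
import os

st = os.stat("style.css")
print "logical size: %d bytes" % st.st_size
if hasattr(st, "st_blocks"): # not available on Windows
    print "allocated on disk: %d bytes" % (st.st_blocks * 512)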
A:
Small files take up a whole node on the file system whatever the size is.
My host tends to report all small files as 4KB in ftp but gives an accurate size in a shell so it might be a 'feature' common to ftp clients.
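If the suspect is the FTP listing itself, ftplib can ask the server for a byte-exact size instead (a sketch; the SIZE command is widely but not universally supported, and many servers only answer it reliably in binary mode):
from ftplib import FTP

ftp = FTP(ftpserver, ftpuser, ftppass)
ftp.sendcmd("TYPE I") # switch to binary mode first
print ftp.size(ftpfile) # exact byte count, unlike some DIR listings
ftp.quit()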
| File size differences after copying a file to a server via FTP | I have created a PHP script to update a live web server from a local directory.
I'm migrating the script into Python. It works fine for the most part, but after a PUT command, the size of the file appears to change. Thus, the size of the file is different from that of the file on the server.
Once I download the file again from the FTP server, the only difference is the CR/LF marks. This annoys me because the same script compares the sizes of the files to decide what to update. Also, in case it means anything, the script works perfectly in PHP via ftp_put.
from ftplib import FTP
ftpserver = "myserver"
ftpuser = "myuser"
ftppass = "mypwd"
locfile = "g:/test/style.css"
ftpfile = "/temp/style.css"
try:
    ftp = FTP(ftpserver, ftpuser, ftppass)
except:
    exit ("Cannot connect")
f = open (locfile, "r")
try:
    ftp.delete (ftpfile)
except:
    pass
# ftp.sendcmd ("TYPE I")
# ftp.storlines("STOR %s" % ftpfile, f)
ftp.storbinary("STOR %s" % ftpfile, f)
f.close()
ftp.dir (ftpfile)
ftp.quit()
Any suggestions?
| [
"Do you need to open the locfile in binary using rb?\nf = open (locfile, \"rb\")\n\n",
"Well if you go under the properties of your file in Windows or a *nix environment, you will notice two sizes. One is the sector size, and one is the actual size. The sector size is the number of sectors in bytes that are used up on your hard disk. That is because two files cannot be in the same sector with most modern file systems, so if your file fills up half of the sector the whole sector is marked as filled.\nSo you might be comparing the sector file size to the actual file size on the FTP server or vice versa.\n",
"Small files take up a whole node on the file system whatever the size is.\nMy host tends to report all small files as 4KB in ftp but gives an accurate size in a shell so it might be a 'feature' common to ftp clients.\n"
] | [
17,
3,
0
] | [] | [] | [
"ftp",
"ftplib",
"php",
"python",
"webserver"
] | stackoverflow_0000002311_ftp_ftplib_php_python_webserver.txt |
Q:
Programmatically talking to a Serial Port in OS X or Linux
I have a Prolite LED sign that I'd like to set up to show scrolling search queries from Apache logs and other fun statistics. The problem is, my G5 does not have a serial port, so I have to use a USB-to-serial dongle. It shows up as /dev/cu.usbserial and /dev/tty.usbserial.
When I do this, everything seems to be hunky-dory:
stty -f /dev/cu.usbserial
speed 9600 baud;
lflags: -icanon -isig -iexten -echo
iflags: -icrnl -ixon -ixany -imaxbel -brkint
oflags: -opost -onlcr -oxtabs
cflags: cs8 -parenb
Everything also works when I use the serial port tool to talk to it.
If I run this piece of code while the above-mentioned serial port tool is open, everything also works. But as soon as I disconnect the tool, the connection gets lost.
#!/usr/bin/python
import serial
ser = serial.Serial('/dev/cu.usbserial', 9600, timeout=10)
ser.write("<ID01><PA> \r\n")
read_chars = ser.read(20)
print read_chars
ser.close()
So the question is, what magicks do I need to perform to start talking to the serial port without the serial port tool? Is that a permissions problem? Also, what's the difference between /dev/cu.usbserial and /dev/tty.usbserial?
Nope, no serial numbers. The thing is, the problem persists even with sudo-running the Python script, and the only thing that makes it go through is opening the connection in the GUI tool that I mentioned.
A:
/dev/cu.xxxxx is the "callout" device, it's what you use when you establish a connection to the serial device and start talking to it. /dev/tty.xxxxx is the "dialin" device, used for monitoring a port for incoming calls for e.g. a fax listener.
A:
Have you tried watching the traffic between the GUI and the serial port to see if there is some kind of special command being sent across? Also, just curious: Python is sending ASCII and not UTF-8 or something else, right? The reason I ask is because I noticed your quote changes for the strings, and in some languages that actually is the difference between ASCII and UTF-8.
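One hedged guess along those lines: the GUI tool may be asserting the modem-control lines (DTR/RTS) when it opens the port, and the adapter may stop passing data once they drop. pyserial lets you drive them explicitly (a sketch to test, not a confirmed fix):
import serial

ser = serial.Serial('/dev/cu.usbserial', 9600, timeout=10)
ser.setDTR(True) # drive the handshake lines the way the GUI tool might
ser.setRTS(True)
ser.write("<ID01><PA> \r\n")
print ser.read(20)
ser.close()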
| Programmatically talking to a Serial Port in OS X or Linux | I have a Prolite LED sign that I'd like to set up to show scrolling search queries from Apache logs and other fun statistics. The problem is, my G5 does not have a serial port, so I have to use a USB-to-serial dongle. It shows up as /dev/cu.usbserial and /dev/tty.usbserial.
When I do this, everything seems to be hunky-dory:
stty -f /dev/cu.usbserial
speed 9600 baud;
lflags: -icanon -isig -iexten -echo
iflags: -icrnl -ixon -ixany -imaxbel -brkint
oflags: -opost -onlcr -oxtabs
cflags: cs8 -parenb
Everything also works when I use the serial port tool to talk to it.
If I run this piece of code while the above-mentioned serial port tool is open, everything also works. But as soon as I disconnect the tool, the connection gets lost.
#!/usr/bin/python
import serial
ser = serial.Serial('/dev/cu.usbserial', 9600, timeout=10)
ser.write("<ID01><PA> \r\n")
read_chars = ser.read(20)
print read_chars
ser.close()
So the question is, what magicks do I need to perform to start talking to the serial port without the serial port tool? Is that a permissions problem? Also, what's the difference between /dev/cu.usbserial and /dev/tty.usbserial?
Nope, no serial numbers. The thing is, the problem persists even with sudo-running the Python script, and the only thing that makes it go through is opening the connection in the GUI tool that I mentioned.
| [
"/dev/cu.xxxxx is the \"callout\" device, it's what you use when you establish a connection to the serial device and start talking to it. /dev/tty.xxxxx is the \"dialin\" device, used for monitoring a port for incoming calls for e.g. a fax listener.\n",
"have you tried watching the traffic between the GUI and the serial port to see if there is some kind of special command being sent across? Also just curious, Python is sending ASCII and not UTF-8 or something else right? The reason I ask is because I noticed your quote changes for the strings and in some languages that actually is the difference between ASCII and UTF-8.\n"
] | [
5,
0
] | [] | [] | [
"linux",
"macos",
"python",
"serial_port"
] | stackoverflow_0000003976_linux_macos_python_serial_port.txt |
Q:
Get a preview JPEG of a PDF on Windows?
I have a cross-platform (Python) application which needs to generate a JPEG preview of the first page of a PDF.
On the Mac I am spawning sips. Is there something similarly simple I can do on Windows?
A:
ImageMagick delegates the PDF->bitmap conversion to GhostScript anyway, so here's a command you can use (it's based on the actual command listed by the ps:alpha delegate in ImageMagick, just adjusted to use JPEG as output):
gs -q -dQUIET -dPARANOIDSAFER -dBATCH -dNOPAUSE -dNOPROMPT \
-dMaxBitmap=500000000 -dLastPage=1 -dAlignToPixels=0 -dGridFitTT=0 \
-sDEVICE=jpeg -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -r72x72 \
-sOutputFile=$OUTPUT -f$INPUT
where $OUTPUT and $INPUT are the output and input filenames. Adjust the 72x72 to whatever resolution you need. (Obviously, strip out the backslashes if you're writing out the whole command as one line.)
This is good for two reasons:
You don't need to have ImageMagick installed anymore. Not that I have anything against ImageMagick (I love it to bits), but I believe in simple solutions.
ImageMagick does a two-step conversion. First PDF->PPM, then PPM->JPEG. This way, the conversion is one-step.
Other things to consider: with the files I've tested, PNG compresses better than JPEG. If you want to use PNG, change the -sDEVICE=jpeg to -sDEVICE=png16m.
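Since the application is in Python, here's a minimal sketch of driving that command with subprocess (the Ghostscript binary name is an assumption; it's usually gs on Mac/Linux and gswin32c on Windows):
import subprocess

def pdf_to_jpeg_preview(input_pdf, output_jpg, dpi=72, gs_binary="gs"):
    # raises CalledProcessError if Ghostscript exits non-zero
    subprocess.check_call([
        gs_binary, "-q", "-dQUIET", "-dPARANOIDSAFER", "-dBATCH",
        "-dNOPAUSE", "-dNOPROMPT", "-dMaxBitmap=500000000", "-dLastPage=1",
        "-dAlignToPixels=0", "-dGridFitTT=0", "-sDEVICE=jpeg",
        "-dTextAlphaBits=4", "-dGraphicsAlphaBits=4",
        "-r%dx%d" % (dpi, dpi),
        "-sOutputFile=%s" % output_jpg,
        "-f%s" % input_pdf,
    ])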
A:
You can use ImageMagick's convert utility for this, see some examples in http://studio.imagemagick.org/pipermail/magick-users/2002-May/002636.html
:
convert taxes.pdf taxes.jpg
Will convert a two page PDF file into [2] jpeg files: taxes.jpg.0,
taxes.jpg.1
I can also convert these JPEGS to a thumbnail as follows:
convert -size 120x120 taxes.jpg.0 -geometry 120x120 +profile '*' thumbnail.jpg
I can even convert the PDF directly to a jpeg thumbnail as follows:
convert -size 120x120 taxes.pdf -geometry 120x120 +profile '*' thumbnail.jpg
This will result in a thumbnail.jpg.0 and thumbnail.jpg.1 for the two
pages.
A:
Is the PC likely to have Acrobat installed? I think Acrobat installs a shell extension so previews of the first page of a PDF document appear in Windows Explorer's thumbnail view. You can get thumbnails yourself via the IExtractImage COM API, which you'll need to wrap. VBAccelerator has an example in C# that you could port to Python.
| Get a preview JPEG of a PDF on Windows? | I have a cross-platform (Python) application which needs to generate a JPEG preview of the first page of a PDF.
On the Mac I am spawning sips. Is there something similarly simple I can do on Windows?
| [
"ImageMagick delegates the PDF->bitmap conversion to GhostScript anyway, so here's a command you can use (it's based on the actual command listed by the ps:alpha delegate in ImageMagick, just adjusted to use JPEG as output):\ngs -q -dQUIET -dPARANOIDSAFER -dBATCH -dNOPAUSE -dNOPROMPT \\\n-dMaxBitmap=500000000 -dLastPage=1 -dAlignToPixels=0 -dGridFitTT=0 \\\n-sDEVICE=jpeg -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -r72x72 \\\n-sOutputFile=$OUTPUT -f$INPUT\n\nwhere $OUTPUT and $INPUT are the output and input filenames. Adjust the 72x72 to whatever resolution you need. (Obviously, strip out the backslashes if you're writing out the whole command as one line.)\nThis is good for two reasons:\n\nYou don't need to have ImageMagick installed anymore. Not that I have anything against ImageMagick (I love it to bits), but I believe in simple solutions.\nImageMagick does a two-step conversion. First PDF->PPM, then PPM->JPEG. This way, the conversion is one-step.\n\nOther things to consider: with the files I've tested, PNG compresses better than JPEG. If you want to use PNG, change the -sDEVICE=jpeg to -sDEVICE=png16m.\n",
"You can use ImageMagick's convert utility for this, see some examples in http://studio.imagemagick.org/pipermail/magick-users/2002-May/002636.html\n:\n\nConvert taxes.pdf taxes.jpg \n\nWill convert a two page PDF file into [2] jpeg files: taxes.jpg.0,\n taxes.jpg.1\nI can also convert these JPEGS to a thumbnail as follows:\nconvert -size 120x120 taxes.jpg.0 -geometry 120x120 +profile '*' thumbnail.jpg\n\nI can even convert the PDF directly to a jpeg thumbnail as follows:\nconvert -size 120x120 taxes.pdf -geometry 120x120 +profile '*' thumbnail.jpg\n\nThis will result in a thumbnail.jpg.0 and thumbnail.jpg.1 for the two\n pages.\n\n",
"Is the PC likely to have Acrobat installed? I think Acrobat installs a shell extension so previews of the first page of a PDF document appear in Windows Explorer's thumbnail view. You can get thumbnails yourself via the IExtractImage COM API, which you'll need to wrap. VBAccelerator has an example in C# that you could port to Python.\n"
] | [
44,
16,
5
] | [] | [] | [
"image",
"pdf",
"python",
"windows"
] | stackoverflow_0000000502_image_pdf_python_windows.txt |
Q:
Best way to abstract season/show/episode data
Basically, I've written an API to www.thetvdb.com in Python. The current code can be found here.
It grabs data from the API as requested, and has to store the data somehow, and make it available by doing:
print tvdbinstance[1][23]['episodename'] # get the name of episode 23 of season 1
What is the "best" way to abstract this data within the Tvdb() class?
I originally used an extended Dict() that automatically created sub-dicts (so you could do x[1][2][3][4] = "something" without having to do if x[1].has_key(2): x[1][2] = [] and so on)
Then I just stored the data by doing self.data[show_id][season_number][episode_number][attribute_name] = "something"
This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not (so I couldn't raise the season_not_found exception).
Currently it's using four classes: ShowContainer, Show, Season and Episode. Each one is a very basic dict, to which I can easily add extra functionality (the search() function on Show(), for example). Each has a __setitem__, __getitem__ and has_key.
This works mostly fine. I can check in Show if it has that season in its self.data dict and, if not, raise season_not_found. I can also check in Season() if it has that episode, and so on.
The problem now is it's presenting itself as a dict, but doesn't have all the functionality, and because I'm overriding the __getitem__ and __setitem__ functions, it's easy to accidentally recursively call __getitem__ (so I'm not sure if extending the Dict class will cause problems).
The other slight problem is adding data into the dict is a lot more work than the old Dict method (which was self.data[seas_no][ep_no]['attribute'] = 'something'). See _setItem and _setData. It's not too bad, since it's currently only a read-only API interface (so the users of the API should only ever retrieve data, not add more), but it's hardly... Elegant.
I think the series-of-classes system is probably the best way, but does anyone have a better idea for storing the data? And would extending the ShowContainer/etc classes with Dict cause problems?
A:
OK, what you need is classobj from the new module. That would allow you to construct exception classes dynamically (classobj takes a string as an argument for the class name).
import new
myexc=new.classobj("ExcName",(Exception,),{})
i=myexc("This is the exc msg!")
raise i
this gives you:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
__main__.ExcName: This is the exc msg!
remember that you can always get the class name through:
self.__class__.__name__
So, after some string mangling and concatenation, you should be able to obtain appropriate exception class name and construct a class object using that name and then raise that exception.
P.S. - you can also raise strings, but this is deprecated.
raise(self.__class__.__name__+"Exception")
A:
Why not use SQLite? There is good support in Python and you can write SQL queries to get the data out. Here is the Python docs for sqlite3
If you don't want to use SQLite you could do an array of dicts.
episodes = []
episodes.append({'season':1, 'episode': 2, 'name':'Something'})
episodes.append({'season':1, 'episode': 2, 'name':'Something', 'actors':['Billy Bob', 'Sean Penn']})
That way you add metadata to any record and search it very easily
season_1 = [e for e in episodes if e['season'] == 1]
billy_bob = [e for e in episodes if 'actors' in e and 'Billy Bob' in e['actors']]
for episode in billy_bob:
print "Billy bob was in Season %s Episode %s" % (episode['season'], episode['episode'])
A:
I have done something similar in the past and used an in-memory XML document as a quick and dirty hierarchical database for storage. You can store each show/season/episode as an element (nested appropriately) and attributes of these things as xml attributes on the elements. Then you can use XQuery to get info back out.
NOTE: I'm not a Python guy so I don't know what your xml support is like.
NOTE 2: You'll want to profile this because it'll be bigger and slower than the solution you've already got. Likely enough if you are doing some high-volume processing then XML is probably not going to be your friend.
A:
I don't get this part here:
This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not (so I couldn't raise the season_not_found exception)
There is a way to do it - called in:
>>>x={}
>>>x[1]={}
>>>x[1][2]={}
>>>x
{1: {2: {}}}
>>> 2 in x[1]
True
>>> 3 in x[1]
False
what seems to be the problem with that?
A:
Bartosz/To clarify "This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not"
x['some show'][3][24] would return season 3, episode 24 of "some show". If there was no season 3, I want the pseudo-dict to raise tvdb_seasonnotfound, if "some show" doesn't exist, then raise tvdb_shownotfound
The current system of a series of classes, each with a __getitem__ - Show checks if self.seasons.has_key(requested_season_number), the Season class checks if self.episodes.has_key(requested_episode_number) and so on.
It works, but there seems to be a lot of repeated code (each class is basically the same, but raises a different error).
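One sketch of cutting that repetition (illustrative, not the API's actual code): keep a single dict subclass and parameterise only the exception it raises:
class tvdb_shownotfound(KeyError): pass
class tvdb_seasonnotfound(KeyError): pass
class tvdb_episodenotfound(KeyError): pass

class ContainerBase(dict):
    missing_error = KeyError # overridden by each container

    def __getitem__(self, key):
        try:
            # call dict's implementation directly to avoid recursing
            return dict.__getitem__(self, key)
        except KeyError:
            raise self.missing_error(key)

class ShowContainer(ContainerBase):
    missing_error = tvdb_shownotfound

class Show(ContainerBase):
    missing_error = tvdb_seasonnotfound

class Season(ContainerBase):
    missing_error = tvdb_episodenotfound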
| Best way to abstract season/show/episode data | Basically, I've written an API to www.thetvdb.com in Python. The current code can be found here.
It grabs data from the API as requested, and has to store the data somehow, and make it available by doing:
print tvdbinstance[1][23]['episodename'] # get the name of episode 23 of season 1
What is the "best" way to abstract this data within the Tvdb() class?
I originally used an extended Dict() that automatically created sub-dicts (so you could do x[1][2][3][4] = "something" without having to do if x[1].has_key(2): x[1][2] = [] and so on)
Then I just stored the data by doing self.data[show_id][season_number][episode_number][attribute_name] = "something"
This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not (so I couldn't raise the season_not_found exception).
Currently it's using four classes: ShowContainer, Show, Season and Episode. Each one is a very basic dict, to which I can easily add extra functionality (the search() function on Show(), for example). Each has a __setitem__, __getitem__ and has_key.
This works mostly fine. I can check in Show if it has that season in its self.data dict and, if not, raise season_not_found. I can also check in Season() if it has that episode, and so on.
The problem now is it's presenting itself as a dict, but doesn't have all the functionality, and because I'm overriding the __getitem__ and __setitem__ functions, it's easy to accidentally recursively call __getitem__ (so I'm not sure if extending the Dict class will cause problems).
The other slight problem is adding data into the dict is a lot more work than the old Dict method (which was self.data[seas_no][ep_no]['attribute'] = 'something'). See _setItem and _setData. It's not too bad, since it's currently only a read-only API interface (so the users of the API should only ever retrieve data, not add more), but it's hardly... Elegant.
I think the series-of-classes system is probably the best way, but does anyone have a better idea for storing the data? And would extending the ShowContainer/etc classes with Dict cause problems?
| [
"OK, what you need is classobj from new module. That would allow you to construct exception classes dynamically (classobj takes a string as an argument for the class name). \nimport new\nmyexc=new.classobj(\"ExcName\",(Exception,),{})\ni=myexc(\"This is the exc msg!\")\nraise i\n\nthis gives you:\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\n__main__.ExcName: This is the exc msg!\n\nremember that you can always get the class name through:\nself.__class__.__name__\n\nSo, after some string mangling and concatenation, you should be able to obtain appropriate exception class name and construct a class object using that name and then raise that exception.\nP.S. - you can also raise strings, but this is deprecated.\nraise(self.__class__.__name__+\"Exception\")\n\n",
"Why not use SQLite? There is good support in Python and you can write SQL queries to get the data out. Here is the Python docs for sqlite3\n\nIf you don't want to use SQLite you could do an array of dicts.\nepisodes = []\nepisodes.append({'season':1, 'episode': 2, 'name':'Something'})\nepisodes.append({'season':1, 'episode': 2, 'name':'Something', 'actors':['Billy Bob', 'Sean Penn']})\n\nThat way you add metadata to any record and search it very easily\nseason_1 = [e for e in episodes if e['season'] == 1]\nbilly_bob = [e for e in episodes if 'actors' in e and 'Billy Bob' in e['actors']]\n\nfor episode in billy_bob:\n print \"Billy bob was in Season %s Episode %s\" % (episode['season'], episode['episode'])\n\n",
"I have done something similar in the past and used an in-memory XML document as a quick and dirty hierarchical database for storage. You can store each show/season/episode as an element (nested appropriately) and attributes of these things as xml attributes on the elements. Then you can use XQuery to get info back out.\nNOTE: I'm not a Python guy so I don't know what your xml support is like.\nNOTE 2: You'll want to profile this because it'll be bigger and slower than the solution you've already got. Likely enough if you are doing some high-volume processing then XML is probably not going to be your friend.\n",
"I don't get this part here:\n\nThis worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not (so I couldn't raise the season_not_found exception)\n\nThere is a way to do it - called in:\n>>>x={}\n>>>x[1]={}\n>>>x[1][2]={}\n>>>x\n{1: {2: {}}}\n>>> 2 in x[1]\nTrue\n>>> 3 in x[1]\nFalse\n\nwhat seems to be the problem with that?\n",
"Bartosz/To clarify \"This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not\"\nx['some show'][3][24] would return season 3, episode 24 of \"some show\". If there was no season 3, I want the pseudo-dict to raise tvdb_seasonnotfound, if \"some show\" doesn't exist, then raise tvdb_shownotfound\nThe current system of a series of classes, each with a __getitem__ - Show checks if self.seasons.has_key(requested_season_number), the Season class checks if self.episodes.has_key(requested_episode_number) and so on.\nIt works, but it there seems to be a lot of repeated code (each class is basically the same, but raises a different error)\n"
] | [
7,
4,
0,
0,
0
] | [] | [] | [
"data_structures",
"python"
] | stackoverflow_0000005966_data_structures_python.txt |
Q:
Python Regular Expressions to implement string unescaping
I am trying to implement string unescaping with Python regex and backreferences, and it doesn't seem to want to work very well. I'm sure it's something I'm doing wrong but I can't figure out what...
>>> import re
>>> mystring = r"This is \n a test \r"
>>> p = re.compile( "\\\\(\\S)" )
>>> p.sub( "\\1", mystring )
'This is n a test r'
>>> p.sub( "\\\\\\1", mystring )
'This is \\n a test \\r'
>>> p.sub( "\\\\1", mystring )
'This is \\1 a test \\1'
I'd like to replace \\[char] with \[char], but backreferences in Python don't appear to follow the same rules they do in every other implementation I've ever used. Could someone shed some light?
A:
Isn't that what Anders' second example does?
In 2.5 there's also a string-escape encoding you can apply:
>>> mystring = r"This is \n a test \r"
>>> mystring.decode('string-escape')
'This is \n a test \r'
>>> print mystring.decode('string-escape')
This is
a test
>>>
A:
Well, I think you might have missed the r or miscounted the backslashes...
"\\n" == r"\n"
>>> import re
>>> mystring = r"This is \\n a test \\r"
>>> p = re.compile( r"[\\][\\](.)" )
>>> print p.sub( r"\\\1", mystring )
This is \n a test \r
>>>
Which, if I understood correctly, is what was requested.
I suspect the more common request is this:
>>> d = {'n':'\n', 'r':'\r', 'f':'\f'}
>>> p = re.compile(r"[\\]([nrfv])")
>>> print p.sub(lambda mo: d[mo.group(1)], mystring)
This is \
a test \
>>>
The interested student should also read Ken Thompson's "Reflections on Trusting Trust", wherein our hero uses a similar example to explain the perils of trusting compilers you haven't bootstrapped from machine code yourself.
A:
The idea is that I'll read in an escaped string, and unescape it (a feature notably lacking from Python, which you shouldn't need to resort to regular expressions for in the first place). Unfortunately I'm not being tricked by the backslashes...
Another illustrative example:
>>> mystring = r"This is \n ridiculous"
>>> print mystring
This is \n ridiculous
>>> p = re.compile( r"\\(\S)" )
>>> print p.sub( 'bloody', mystring )
This is bloody ridiculous
>>> print p.sub( r'\1', mystring )
This is n ridiculous
>>> print p.sub( r'\\1', mystring )
This is \1 ridiculous
>>> print p.sub( r'\\\1', mystring )
This is \n ridiculous
What I'd like it to print is
This is
ridiculous
A:
You are being tricked by Python's representation of the result string. The Python expression:
'This is \\n a test \\r'
represents the string
This is \n a test \r
which is I think what you wanted. Try adding 'print' in front of each of your p.sub() calls to print the actual string returned instead of a Python representation of the string.
>>> mystring = r"This is \n a test \r"
>>> mystring
'This is \\n a test \\r'
>>> print mystring
This is \n a test \r
A:
Mark; his second example requires every escaped character thrown into an array initially, which generates a KeyError if the escape sequence happens not to be in the array. It will die on anything but the three characters provided (give \v a try), and enumerating every possible escape sequence every time you want to unescape a string (or keeping a global array) is a really bad solution. Analogous to PHP, that's using preg_replace_callback() with a lambda instead of preg_replace(), which is utterly unnecessary in this situation.
I'm sorry if I'm coming off as a dick about it, I'm just utterly frustrated with Python. This is supported by every other regular expression engine I've ever used, and I can't understand why this wouldn't work.
Thank you for responding; the string.decode('string-escape') function is precisely what I was looking for initially. If someone has a general solution to the regex backreference problem, feel free to post it and I'll accept that as an answer as well.
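For reference, one general regex-based unescape of the kind asked about (a sketch; extend the mapping for whichever escapes matter, and any unrecognised escape collapses to the bare character, as in the question):
import re

escapes = {'n': '\n', 'r': '\r', 't': '\t', 'f': '\f', 'v': '\v', '\\': '\\'}

def unescape(s):
    return re.sub(r'\\(.)', lambda m: escapes.get(m.group(1), m.group(1)), s)

print unescape(r"This is \n a test \r")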
| Python Regular Expressions to implement string unescaping | I am trying to implement string unescaping with Python regex and backreferences, and it doesn't seem to want to work very well. I'm sure it's something I'm doing wrong but I can't figure out what...
>>> import re
>>> mystring = r"This is \n a test \r"
>>> p = re.compile( "\\\\(\\S)" )
>>> p.sub( "\\1", mystring )
'This is n a test r'
>>> p.sub( "\\\\\\1", mystring )
'This is \\n a test \\r'
>>> p.sub( "\\\\1", mystring )
'This is \\1 a test \\1'
I'd like to replace \\[char] with \[char], but backreferences in Python don't appear to follow the same rules they do in every other implementation I've ever used. Could someone shed some light?
| [
"Isn't that what Anders' second example does?\nIn 2.5 there's also a string-escape encoding you can apply:\n>>> mystring = r\"This is \\n a test \\r\"\n>>> mystring.decode('string-escape')\n'This is \\n a test \\r'\n>>> print mystring.decode('string-escape')\nThis is \n a test \n>>> \n\n",
"Well, I think you might have missed the r or miscounted the backslashes...\n\"\\\\n\" == r\"\\n\"\n\n>>> import re\n>>> mystring = r\"This is \\\\n a test \\\\r\"\n>>> p = re.compile( r\"[\\\\][\\\\](.)\" )\n>>> print p.sub( r\"\\\\\\1\", mystring )\nThis is \\n a test \\r\n>>>\n\nWhich, if I understood is what was requested.\nI suspect the more common request is this:\n>>> d = {'n':'\\n', 'r':'\\r', 'f':'\\f'}\n>>> p = re.compile(r\"[\\\\]([nrfv])\")\n>>> print p.sub(lambda mo: d[mo.group(1)], mystring)\nThis is \\\n a test \\\n>>>\n\nThe interested student should also read Ken Thompson's Reflections on Trusting Trust\", wherein our hero uses a similar example to explain the perils of trusting compilers you haven't bootstrapped from machine code yourself.\n",
"The idea is that I'll read in an escaped string, and unescape it (a feature notably lacking from Python, which you shouldn't need to resort to regular expressions for in the first place). Unfortunately I'm not being tricked by the backslashes...\nAnother illustrative example:\n>>> mystring = r\"This is \\n ridiculous\"\n>>> print mystring\nThis is \\n ridiculous\n>>> p = re.compile( r\"\\\\(\\S)\" )\n>>> print p.sub( 'bloody', mystring )\nThis is bloody ridiculous\n>>> print p.sub( r'\\1', mystring )\nThis is n ridiculous\n>>> print p.sub( r'\\\\1', mystring )\nThis is \\1 ridiculous\n>>> print p.sub( r'\\\\\\1', mystring )\nThis is \\n ridiculous\n\nWhat I'd like it to print is\nThis is \nridiculous\n\n",
"You are being tricked by Python's representation of the result string. The Python expression:\n'This is \\\\n a test \\\\r'\n\nrepresents the string\nThis is \\n a test \\r\n\nwhich is I think what you wanted. Try adding 'print' in front of each of your p.sub() calls to print the actual string returned instead of a Python representation of the string.\n>>> mystring = r\"This is \\n a test \\r\"\n>>> mystring\n'This is \\\\n a test \\\\r'\n>>> print mystring\nThis is \\n a test \\r\n\n",
"Mark; his second example requires every escaped character thrown into an array initially, which generates a KeyError if the escape sequence happens not to be in the array. It will die on anything but the three characters provided (give \\v a try), and enumerating every possible escape sequence every time you want to unescape a string (or keeping a global array) is a really bad solution. Analogous to PHP, that's using preg_replace_callback() with a lambda instead of preg_replace(), which is utterly unnecessary in this situation.\nI'm sorry if I'm coming off as a dick about it, I'm just utterly frustrated with Python. This is supported by every other regular expression engine I've ever used, and I can't understand why this wouldn't work.\nThank you for responding; the string.decode('string-escape') function is precisely what i was looking for initially. If someone has a general solution to the regex backreference problem, feel free to post it and I'll accept that as an answer as well.\n"
] | [
10,
3,
1,
0,
0
] | [] | [] | [
"backreference",
"python",
"regex"
] | stackoverflow_0000013791_backreference_python_regex.txt |
Q:
What's the best way to distribute python command-line tools?
My current setup.py script works okay, but it installs tvnamer.py (the tool) as tvnamer.py into site-packages or somewhere similar..
Can I make setup.py install tvnamer.py as tvnamer, and/or is there a better way of installing command-line applications?
A:
Try the entry_points.console_scripts parameter in the setup() call. As described in the setuptools docs, this should do what I think you want.
To reproduce here:
from setuptools import setup
setup(
    # other arguments here...
    entry_points = {
        'console_scripts': [
            'foo = package.module:func',
            'bar = othermodule:somefunc',
        ],
    }
)
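Applied to the question, the entry would presumably look something like this (the module and function names are assumptions; point it at wherever tvnamer's main logic lives):
entry_points = {
    'console_scripts': [
        'tvnamer = tvnamer:main', # installs a bare `tvnamer` command
    ],
}

setuptools then generates a tvnamer wrapper script on the PATH (an .exe shim on Windows), so the tool runs as tvnamer rather than tvnamer.py.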
| What's the best way to distribute python command-line tools? | My current setup.py script works okay, but it installs tvnamer.py (the tool) as tvnamer.py into site-packages or somewhere similar..
Can I make setup.py install tvnamer.py as tvnamer, and/or is there a better way of installing command-line applications?
| [
"Try the entry_points.console_scripts parameter in the setup() call. As described in the setuptools docs, this should do what I think you want.\nTo reproduce here:\nfrom setuptools import setup\n\nsetup(\n # other arguments here...\n entry_points = {\n 'console_scripts': [\n 'foo = package.module:func',\n 'bar = othermodule:somefunc',\n ],\n }\n)\n\n"
] | [
38
] | [] | [] | [
"command_line",
"packaging",
"python"
] | stackoverflow_0000017893_command_line_packaging_python.txt |
Q:
Introducing Python
The company I used to work with has two developers working fulltime, and a handful of freelancers. They're in the process of hiring a new lead developer to try to bring order and management to the development.
But, currently, one of the developers has seen the light of Django (the company has only developed in PHP to date) while the other developer is concerned that introducing a new language (Python) is a bad idea right now.
How should they approach introducing this new technology? Obviously with only one of the developers actually knowing Python, there will be no redundancy when that dev is away or leaves the company.
Should they bother to introduce Python, or should they look for PHP-only solutions until such a time when the team actually have more than one Pythonion? Without a team leader, the decisions are having to fall to them.
A:
I recently introduced Python to my company, which does consulting work for the Post Office. I did this by waiting until there was a project for which I would be the only programmer, then getting permission to do this new project in Python. I then did another small project in Python with similarly impressive results. In addition, I used Python for all of my small throwaway assignments ("can you parse the stats in these files into a CSV file organized by date and site?", etc) and had a quick turnaround time on all of them.
I also evangelized Python a bit; I went out of my way to NOT be obnoxious about it, but I'd occasionally describe why I liked it so much, talked about the personal projects I use it for in my free time and why it's awesome for me, etc.
Eventually we started another project and I convinced everyone to use Python for it. I took care to point everyone to a lot of documentation, including the specific webpages relating to what they were working on, and every time they had a question, I'd explain how to do things properly by explaining the Pythonic approach to things, etc.
This has worked really well. However, this might be somewhat different than what you're describing. In my case I started with moderately small projects and Python is only being used for new projects. Also, none of my co-workers were really Perl or PHP gurus; they all knew those languages and had been using them for awhile, but it didn't take much effort for them to become more productive in Python than they'd been before.
So if you're talking about new projects with people who currently use PHP but aren't super-experts and don't love that language, then I think switching to Python is a no-brainer. However, if you're talking about working with a large existing PHP code base with a lot of very experienced PHP programmers who are happy with their current setup, then switching languages is probably not a good idea. You're probably somewhere in between, so you'll have to weigh the tradeoffs; hopefully my answer will help you do that.
A:
If the mandate of the new lead is to put the house in order, the current situation should likely be simplified as much as possible beforehand. If I had to bring things to order, I wouldn't want to have to manage an ongoing language-conversion project on top of everything else, or at least I'd like some choice when initiating the project. When making your recommendation, did you think about the additional managerial complexity that coming into the middle of a conversion would entail?
A:
@darkdog:
Using a new language in production code is about more than easy syntax and high-level capability. You want to be familiar with core APIs and feel like you can fix something through logic instead of having to comb through the documentation.
I'm not saying transitioning to Python would be a bad idea for this company, but I'm with John--keep things simple during the transition. The new lead will appreciate having a say in such decisions.
If you'd really, really, really like to introduce Python, consider writing some extensions or utilities in straight-up Python or in the framework. You won't be upsetting your core initiatives, so it will be a low/no-risk opportunity to prove the merits of a switch.
A:
I think the language itself is not an issue here, as Python is a really nice high-level language with good, easy-to-find, thorough documentation.
From what I've seen, the Django framework is also a great toolkit for web development, giving much the same developer performance boost Rails is touted to give.
The real issue is at the maintenance and management level.
How will this move fragment the maintenance between PHP and Python code? Is there a need to migrate existing code from one platform to another? What problems will adopting Python and Django solve that you have in your current development workflow and frameworks, etc.?
A:
It's really all about schedules. To me the break should be with a specific project. If you decide your direction is Django then start new projects with that. Before you start a new project with a new language/framework, either make sure that you have scheduled time to get up to speed in this new direction, or get up to speed before using on new projects.
I would avoid going with a tool of the month. Make sure you want it to be your direction and commit some time/resources to learning enough to make a good decision.
A:
Well, Python is a high-level language; it's not hard to learn, and if the team already has programming knowledge it should be much easier to pick up. I like Django, and I think it would be well worth trying.
A:
I don't think it's a matter of a programming language as such.
What is the proficiency level of PHP in the team you're talking about? Are they doing spaghetti code or using some structured framework like Zend? If it's the former, I absolutely understand the guy's interest in Python and Django. If it's the latter, it's just hype.
A:
I love Python and Django, and use both to develop our core webapps.
That said, it's hard to make a business case for switching at this point. Specifically:
Any new platform is risky compared to staying with the tried and true
You'll have the developer fragmentation you mentioned
It's far easier to find PHP programmers than python programmers
Moreover, as other posters have mentioned, if the issue is more with spaghetti code than PHP itself, there are plenty of nice PHP frameworks that could be used to refactor the code.
That said, if this developer is excited about python, stopping them outright is probably demoralizing. My suggestion would be to encourage them to develop in python, but not the mission critical parts of the app. Instead they could write some utility scripts, some small internal application that needs doing, etc.
In conclusion: I don't recommend switching from PHP, but I do recommend accommodating the developer's interest in some way at work.
| Introducing Python | The company I used to work with has two developers working fulltime, and a handful of freelancers. They're in the process of hiring a new lead developer to try to bring order and management to the development.
But, currently, one of the developers has seen the light of Django (the company has only developed in PHP to date) while the other developer is concerned that introducing a new language (Python) is a bad idea right now.
How should they approach introducing this new technology? Obviously with only one of the developers actually knowing Python, there will be no redundancy when that dev is away or leaves the company.
Should they bother to introduce Python, or should they look for PHP-only solutions until such a time when the team actually have more than one Pythonion? Without a team leader, the decisions are having to fall to them.
| [
"I recently introduced Python to my company, which does consulting work for the Post Office. I did this by waiting until there was a project for which I would be the only programmer, then getting permission to do this new project in Python. I then did another small project in Python with similarly impressive results. In addition, I used Python for all of my small throwaway assignments (\"can you parse the stats in these files into a CSV file organized by date and site?\", etc) and had a quick turnaround time on all of them.\nI also evangelized Python a bit; I went out of my way to NOT be obnoxious about it, but I'd occasionally describe why I liked it so much, talked about the personal projects I use it for in my free time and why it's awesome for me, etc.\nEventually we started another project and I convinced everyone to use Python for it. I took care to point everyone to a lot of documentation, including the specific webpages relating to what they were working on, and every time they had a question, I'd explain how to do things properly by explaining the Pythonic approach to things, etc.\nThis has worked really well. However, this might be somewhat different than what you're describing. In my case I started with moderately small projects and Python is only being used for new projects. Also, none of my co-workers were really Perl or PHP gurus; they all knew those languages and had been using them for awhile, but it didn't take much effort for them to become more productive in Python than they'd been before.\nSo if you're talking about new projects with people who currently use PHP but aren't super-experts and don't love that language, then I think switching to Python is a no-brainer. However, if you're talking about working with a large existing PHP code base with a lot of very experienced PHP programmers who are happy with their current setup, then switching languages is probably not a good idea. You're probably somewhere in between, so you'll have to weigh the tradeoffs; hopefully my answer will help you do that.\n",
"If the mandate of the new lead is to put the house in order, the current situation should likely be simplified as much as possible prior. If I had to bring things to order, I wouldn't want to have to manage an ongoing language conversion project on top of everything else, or at least I'd like some choice when initiating the project. When making your recommendation, did you think about the additional managerial complexity that coming into the middle of a conversion would entail?\n",
"@darkdog:\nUsing a new language in production code is about more than easy syntax and high-level capability. You want to be familiar with core APIs and feel like you can fix something through logic instead of having to comb through the documentation.\nI'm not saying transitioning to Python would be a bad idea for this company, but I'm with John--keep things simple during the transition. The new lead will appreciate having a say in such decisions.\nIf you'd really, really, really like to introduce Python, consider writing some extensions or utilities in straight-up Python or in the framework. You won't be upsetting your core initiatives, so it will be a low/no-risk opportunity to prove the merits of a switch.\n",
"I think the language itself is not an issue here, as python is really nice high level language with good and easy to find, thorough documentation.\nFrom what I've seen, the Django framework is also a great tooklit for web development, giving much the same developer performance boost Rails is touted to give.\nThe real issue is at the maintenance and management level.\nHow will this move fragment the maintenance between PHP and Python code. Is there a need to migrate existing code from one platform to another? What problems will adopting Python and Django solve that you have in your current development workflow and frameworks, etc.\n",
"It's really all about schedules. To me the break should be with a specific project. If you decide your direction is Django then start new projects with that. Before you start a new project with a new language/framework, either make sure that you have scheduled time to get up to speed in this new direction, or get up to speed before using on new projects.\nI would avoid going with a tool of the month. Make sure you want it to be your direction and commit some time/resources to learning enough to make a good decision.\n",
"Well, python is a high level language.. its not hard to learn and if the guys already have programming knowledge it should be much easier to learn.. i like django.. i think it should be a nice try to use django .. \n",
"I don't think it's a matter of a programming language as such. \nWhat is the proficiency level of PHP in the team you're talking about? Are they doing spaghetti code or using some structured framework like Zend? If this is the first case then I absolutely understand the guy's interest in Python and Django. It this is the latter, it's just a hype.\n",
"I love Python and Django, and use both to develop the our core webapps.\nThat said, it's hard to make a business case for switching at this point. Specifically:\n\nAny new platform is risky compared to staying with the tried and true\nYou'll have the developer fragmentation you mentioned\nIt's far easier to find PHP programmers than python programmers\n\nMoreover, as other posters have mention, if the issue is more with spaghetti code than PHP itself, there are plenty of nice PHP frameworks that could be used to refactor the code.\nThat said, if this developer is excited about python, stopping them outright is probably demoralizing. My suggestion would be to encourage them to develop in python, but not the mission critical parts of the app. Instead they could write some utility scripts, some small internal application that needs doing, etc.\nIn conclusion: I don't recommend switching from PHP, but I do recommend accommodating the developer's interest in some way at work.\n"
] | [
15,
4,
2,
1,
1,
0,
0,
0
] | [] | [] | [
"php",
"python"
] | stackoverflow_0000019654_php_python.txt |
Q:
How to check set of files conform to a naming scheme
I have a bunch of files (TV episodes, although that is fairly arbitrary) that I want to check match a specific naming/organisation scheme..
Currently: I have three arrays of regex, one for valid filenames, one for files missing an episode name, and one for valid paths.
Then, I loop through each valid-filename regex; if it matches, I append the file to a "valid" dict; if not, I do the same with the missing-ep-name regexes; if one of those matches, I append it to an "invalid" dict with an error code (2: 'missing episode name'); if it matches neither, it gets added to invalid with the 'malformed name' error code.
The current code can be found here
I want to add a rule that checks for the presence of a folder.jpg file in each directory, but adding this would make the code substantially more messy in its current state..
How could I write this system in a more expandable way?
The rules it needs to check would be..
File is in the format Show Name - [01x23] - Episode Name.avi or Show Name - [01xSpecial02] - Special Name.avi or Show Name - [01xExtra01] - Extra Name.avi
If the filename is in the format Show Name - [01x23].avi, display it in a 'missing episode name' section of the output
The path should be in the format Show Name/season 2/the_file.avi (where season 2 should be the correct season number in the filename)
each Show Name/season 1/ folder should contain "folder.jpg"
Any ideas? While I'm trying to check TV episodes, this concept/code should be able to apply to many things..
The only thought I had was a list of dicts in the format:
checker = [
    {
        'name': 'valid files',
        'type': 'file',
        'function': check_valid, # runs check_valid() on all files
        'status': 0 # if it returns True, this is the status the file gets
    },
]
A:
I want to add a rule that checks for the presence of a folder.jpg file in each directory, but adding this would make the code substantially more messy in its current state..
This doesn't look bad. In fact your current code does it very nicely, and Sven mentioned a good way to do it as well:
Get a list of all the files
Check for "required" files
You would just have to add to your dictionary a list of required files:
checker = {
    ...
    'required': ['file', 'list', 'for_required']
}
As far as there being a better/extensible way to do this? I am not exactly sure. I could only really think of a way to possibly drop the "multiple" regular expressions and build off of Sven's idea for using a delimiter. So my strategy would be defining a dictionary as follows (and I'm sorry, I don't know Python syntax and I'm a tad too lazy to look it up, but it should make sense; the /regex/ is shorthand for a regex):
check_dict = {
    'delim' : /\-/,
    'parts' : [ 'Show Name', 'Episode Name', 'Episode Number' ],
    'patterns' : [/valid name/, /valid episode name/, /valid number/ ],
    'required' : ['list', 'of', 'files'],
    'ignored' : ['.*', 'hidden.txt'],
    'start_dir': '/path/to/dir/to/test/'
}
Split the filename based on the delimiter.
Check each of the parts.
Because it's an ordered list, you can determine which parts are missing, and if a section doesn't match any pattern it is malformed. Here the parts and patterns have a 1-to-1 ratio. Two arrays instead of a dictionary enforces the order.
Ignored and required files can be listed. The . and .. files should probably be ignored automatically. The user should be allowed to input "globs" which can be shell expanded. I'm thinking here of svn:ignore properties, but globbing is natural for listing files.
Here start_dir would be default to the current directory but if you wanted a single file to run automated testing of a bunch of directories this would be useful.
The real loose end here is the path template and along the same lines what path is required for "valid files". I really couldn't come up with a solid idea without writing one large regular expression and taking groups from it... to build a template. It felt a lot like writing a TextMate language grammar. But that starts to stray on the ease of use. The real problem was that the path template was not composed of parts, which makes sense but adds complexity.
Is this strategy in tune with what you were thinking of?
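A minimal sketch of how such a dictionary could drive the checks (assuming the patterns are ordinary Python regex strings rather than the /regex/ shorthand; helper and key names are illustrative):
import fnmatch
import os
import re

def check_names(cfg):
    valid, invalid = [], []
    for fname in os.listdir(cfg['start_dir']):
        if any(fnmatch.fnmatch(fname, pat) for pat in cfg['ignored']):
            continue # skip globs the user asked to ignore
        parts = re.split(cfg['delim'], fname)
        if len(parts) != len(cfg['patterns']):
            invalid.append((fname, 'malformed name'))
            continue
        for label, pattern, part in zip(cfg['parts'], cfg['patterns'], parts):
            if not re.match(pattern, part.strip()):
                invalid.append((fname, 'bad %s' % label))
                break
        else: # no break: every part matched its pattern
            valid.append(fname)
    missing = [f for f in cfg['required']
               if not os.path.exists(os.path.join(cfg['start_dir'], f))]
    return valid, invalid, missing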
A:
Maybe you should take the approach of defaulting to "the filename is correct" and working from there to disprove that statement:
Given that you only allow filenames with 'show name', 'season number x episode number' and 'episode name', you know for certain that these items should be separated by a "-" (dash), so you have to have two of those for a filename to be correct.
If that checks out, you can use your code to check that the show name matches the show name as seen in the parent's parent folder (case-insensitive, I assume), and that the season number matches the parent folder's numeric value (with or without an extra 0 prepended).
If, however, you don't see the correct number of dashes, you instantly know that something is wrong and can stop before running the rest of the tests.
And separately, you can check whether the file folder.jpg exists and take the necessary actions, or do that first and filter that file from the rest of the files in that folder.
| How to check set of files conform to a naming scheme | I have a bunch of files (TV episodes, although that is fairly arbitrary) that I want to check match a specific naming/organisation scheme..
Currently: I have three arrays of regex, one for valid filenames, one for files missing an episode name, and one for valid paths.
Then I loop through each valid-filename regex; if one matches, I append the file to a "valid" dict. If not, I do the same with the missing-ep-name regexes; if one of those matches, I append it to an "invalid" dict with an error code (2: 'missing episode name'). If it matches neither, it gets added to invalid with the 'malformed name' error code.
The current code can be found here
I want to add a rule that checks for the presence of a folder.jpg file in each directory, but to add this would make the code substantially more messy in its current state...
How could I write this system in a more expandable way?
The rules it needs to check would be..
File is in the format Show Name - [01x23] - Episode Name.avi or Show Name - [01xSpecial02] - Special Name.avi or Show Name - [01xExtra01] - Extra Name.avi
If the filename is in the format Show Name - [01x23].avi, display it in a 'missing episode name' section of the output
The path should be in the format Show Name/season 2/the_file.avi (where season 2 should be the correct season number in the filename)
each Show Name/season 1/ folder should contain "folder.jpg"
Any ideas? While I'm trying to check TV episodes, this concept/code should be able to apply to many things...
The only thought I had was a list of dicts in the format:
checker = [
{
'name':'valid files',
'type':'file',
'function':check_valid(), # runs check_valid() on all files
'status':0 # if it returns True, this is the status the file gets
}
| [
"\nI want to add a rule that checks for\n the presence of a folder.jpg file in\n each directory, but to add this would\n make the code substantially more messy\n in it's current state..\n\nThis doesn't look bad. In fact your current code does it very nicely, and Sven mentioned a good way to do it as well:\n\nGet a list of all the files\nCheck for \"required\" files\n\nYou would just have have add to your dictionary a list of required files:\nchecker = {\n ...\n 'required': ['file', 'list', 'for_required']\n}\n\nAs far as there being a better/extensible way to do this? I am not exactly sure. I could only really think of a way to possibly drop the \"multiple\" regular expressions and build off of Sven's idea for using a delimiter. So my strategy would be defining a dictionary as follows (and I'm sorry I don't know Python syntax and I'm a tad to lazy to look it up but it should make sense. The /regex/ is shorthand for a regex):\ncheck_dict = {\n 'delim' : /\\-/,\n 'parts' : [ 'Show Name', 'Episode Name', 'Episode Number' ],\n 'patterns' : [/valid name/, /valid episode name/, /valid number/ ],\n 'required' : ['list', 'of', 'files'],\n 'ignored' : ['.*', 'hidden.txt'],\n 'start_dir': '/path/to/dir/to/test/'\n}\n\n\nSplit the filename based on the delimiter.\nCheck each of the parts.\n\nBecause its an ordered list you can determine what parts are missing and if a section doesn't match any pattern it is malformed. Here the parts and patterns have a 1 to 1 ratio. Two arrays instead of a dictionary enforces the order.\nIgnored and required files can be listed. The . and .. files should probably be ignored automatically. The user should be allowed to input \"globs\" which can be shell expanded. I'm thinking here of svn:ignore properties, but globbing is natural for listing files.\nHere start_dir would be default to the current directory but if you wanted a single file to run automated testing of a bunch of directories this would be useful.\nThe real loose end here is the path template and along the same lines what path is required for \"valid files\". I really couldn't come up with a solid idea without writing one large regular expression and taking groups from it... to build a template. It felt a lot like writing a TextMate language grammar. But that starts to stray on the ease of use. The real problem was that the path template was not composed of parts, which makes sense but adds complexity.\nIs this strategy in tune with what you were thinking of?\n",
"maybe you should take the approach of defaulting to: \"the filename is correct\" and work from there to disprove that statement:\nwith the fact that you only allow filenames with: 'show name', 'season number x episode number' and 'episode name', you know for certain that these items should be separated by a \"-\" (dash) so you have to have 2 of those for a filename to be correct.\nif that checks out, you can use your code to check that the show name matches the show name as seen in the parent's parent folder (case insensitive i assume), the season number matches the parents folder numeric value (with or without an extra 0 prepended).\nif however you don't see the correct amount of dashes you instantly know that there is something wrong and stop before the rest of the tests etc.\nand separately you can check if the file folder.jpg exists and take the necessary actions. or do that first and filter that file from the rest of the files in that folder.\n"
] | [
2,
0
] | [] | [] | [
"naming",
"python",
"validation"
] | stackoverflow_0000019030_naming_python_validation.txt |
Q:
Date/time conversion using time.mktime seems wrong
>>> import time
>>> time.strptime("01-31-2009", "%m-%d-%Y")
(2009, 1, 31, 0, 0, 0, 5, 31, -1)
>>> time.mktime((2009, 1, 31, 0, 0, 0, 5, 31, -1))
1233378000.0
>>> 60*60*24 # seconds in a day
86400
>>> 1233378000.0 / 86400
14275.208333333334
time.mktime should return the number of seconds since the epoch. Since I'm giving it a time at midnight and the epoch is at midnight, shouldn't the result be evenly divisible by the number of seconds in a day?
A:
Short answer: Because of timezones.
The Epoch is in UTC.
For example, I'm on IST (Irish Standard Time) or UTC+1. time.mktime() is relative to my timezone, so on my system this refers to
>>> time.mktime((2009, 1, 31, 0, 0, 0, 5, 31, -1))
1233360000.0
Because you got the result 1233378000, that would suggest that you're 5 hours behind me
>>> (1233378000 - 1233360000) / (60*60)
5
Have a look at the time.gmtime() function which works off UTC.
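To see the difference concretely, here is a small sketch; calendar.timegm is the UTC counterpart of time.mktime:
import time, calendar

t = time.strptime("01-31-2009", "%m-%d-%Y")
print time.mktime(t)      # tuple interpreted as local time, so this varies by timezone
print calendar.timegm(t)  # tuple interpreted as UTC: 1233360000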
A:
mktime(...)
mktime(tuple) -> floating point number
Convert a time tuple in local time to seconds since the Epoch.
local time... fancy that.
The time tuple:
The other representation is a tuple of 9 integers giving local time.
The tuple items are:
year (four digits, e.g. 1998)
month (1-12)
day (1-31)
hours (0-23)
minutes (0-59)
seconds (0-59)
weekday (0-6, Monday is 0)
Julian day (day in the year, 1-366)
DST (Daylight Savings Time) flag (-1, 0 or 1)
If the DST flag is 0, the time is given in the regular time zone;
if it is 1, the time is given in the DST time zone;
if it is -1, mktime() should guess based on the date and time.
Incidentally, we seem to be 6 hours apart:
>>> time.mktime((2009, 1, 31, 0, 0, 0, 5, 31, -1))
1233356400.0
>>> (1233378000.0 - 1233356400)/(60*60)
6.0
A:
Phil's answer really solved it, but I'll elaborate a little more. Since the epoch is in UTC, if I want to compare other times to the epoch, I need to interpret them as UTC as well.
>>> calendar.timegm((2009, 1, 31, 0, 0, 0, 5, 31, -1))
1233360000
>>> 1233360000 / (60*60*24)
14275
By converting the time tuple to a timestamp, treating it as UTC time, I get a number which is evenly divisible by the number of seconds in a day.
I can use this to convert a date to a days-from-the-epoch representation which is what I'm ultimately after.
A:
Interesting. I don't know, but I did try this:
>>> now = time.mktime((2008, 8, 22, 11 ,17, -1, -1, -1, -1))
>>> tomorrow = time.mktime((2008, 8, 23, 11 ,17, -1, -1, -1, -1))
>>> tomorrow - now
86400.0
which is what you expected. My guess? Maybe some time correction was done since the epoch. This could be only a few seconds, something like leap seconds. I think I heard something like this before, but can't remember exactly how and when it is done...
| Date/time conversion using time.mktime seems wrong | >>> import time
>>> time.strptime("01-31-2009", "%m-%d-%Y")
(2009, 1, 31, 0, 0, 0, 5, 31, -1)
>>> time.mktime((2009, 1, 31, 0, 0, 0, 5, 31, -1))
1233378000.0
>>> 60*60*24 # seconds in a day
86400
>>> 1233378000.0 / 86400
14275.208333333334
time.mktime should return the number of seconds since the epoch. Since I'm giving it a time at midnight and the epoch is at midnight, shouldn't the result be evenly divisible by the number of seconds in a day?
| [
"Short answer: Because of timezones.\nThe Epoch is in UTC.\nFor example, I'm on IST (Irish Standard Time) or UTC+1. time.mktime() is relative to my timezone, so on my system this refers to\n>>> time.mktime((2009, 1, 31, 0, 0, 0, 5, 31, -1))\n1233360000.0\n\nBecause you got the result 1233378000, that would suggest that you're 5 hours behind me\n>>> (1233378000 - 1233360000) / (60*60) \n5\n\nHave a look at the time.gmtime() function which works off UTC.\n",
"mktime(...)\n mktime(tuple) -> floating point number\n\n Convert a time tuple in local time to seconds since the Epoch.\n\nlocal time... fancy that.\nThe time tuple:\nThe other representation is a tuple of 9 integers giving local time.\nThe tuple items are:\n year (four digits, e.g. 1998)\n month (1-12)\n day (1-31)\n hours (0-23)\n minutes (0-59)\n seconds (0-59)\n weekday (0-6, Monday is 0)\n Julian day (day in the year, 1-366)\n DST (Daylight Savings Time) flag (-1, 0 or 1)\nIf the DST flag is 0, the time is given in the regular time zone;\nif it is 1, the time is given in the DST time zone;\nif it is -1, mktime() should guess based on the date and time.\n\nIncidentally, we seem to be 6 hours apart:\n>>> time.mktime((2009, 1, 31, 0, 0, 0, 5, 31, -1))\n1233356400.0\n>>> (1233378000.0 - 1233356400)/(60*60)\n6.0\n\n",
"Phil's answer really solved it, but I'll elaborate a little more. Since the epoch is in UTC, if I want to compare other times to the epoch, I need to interpret them as UTC as well.\n>>> calendar.timegm((2009, 1, 31, 0, 0, 0, 5, 31, -1))\n1233360000\n>>> 1233360000 / (60*60*24)\n14275\n\nBy converting the time tuple to a timestamp treating is as UTC time, I get a number which is evenly divisible by the number of seconds in a day.\nI can use this to convert a date to a days-from-the-epoch representation which is what I'm ultimately after.\n",
"Interesting. I don't know, but I did try this:\n>>> now = time.mktime((2008, 8, 22, 11 ,17, -1, -1, -1, -1))\n>>> tomorrow = time.mktime((2008, 8, 23, 11 ,17, -1, -1, -1, -1))\n>>> tomorrow - now\n86400.0\n\nwhich is what you expected. My guess? Maybe some time correction was done since the epoch. This could be only a few seconds, something like a leap year. I think I heard something like this before, but can't remember exactly how and when it is done...\n"
] | [
7,
3,
2,
0
] | [] | [] | [
"datetime",
"python"
] | stackoverflow_0000021961_datetime_python.txt |
Q:
Does PHP have an equivalent to this type of Python string substitution?
Python has this wonderful way of handling string substitutions using dictionaries:
>>> 'The %(site)s site %(adj)s because it %(adj)s' % {'site':'Stackoverflow', 'adj':'rocks'}
'The Stackoverflow site rocks because it rocks'
I love this because you can specify a value once in the dictionary and then replace it all over the place in the string.
I've tried to achieve something similar in PHP using various string replace functions but everything I've come up with feels awkward.
Does anybody have a nice clean way to do this kind of string substitution in PHP?
Edit
Here's the code from the sprintf page that I liked best.
<?php
function sprintf3($str, $vars, $char = '%')
{
$tmp = array();
foreach($vars as $k => $v)
{
$tmp[$char . $k . $char] = $v;
}
return str_replace(array_keys($tmp), array_values($tmp), $str);
}
echo sprintf3( 'The %site% site %adj% because it %adj%', array('site'=>'Stackoverflow', 'adj'=>'rocks'));
?>
A:
function subst($str, $dict){
return preg_replace(array_map(create_function('$a', 'return "/%\\($a\\)s/";'), array_keys($dict)), array_values($dict), $str);
}
You call it like so:
echo subst('The %(site)s site %(adj)s because it %(adj)s', array('site'=>'Stackoverflow', 'adj'=>'rocks'));
A:
@Marius
I don't know if it's faster, but you can do it without regexes:
function subst($str, $dict)
{
foreach ($dict AS $key, $value)
{
$str = str_replace($key, $value, $str);
}
return $str;
}
A:
Some of the user-contributed notes and functions in PHP's documentation for sprintf come quite close.
Note: search the page for "sprintf2".
| Does PHP have an equivalent to this type of Python string substitution? | Python has this wonderful way of handling string substitutions using dictionaries:
>>> 'The %(site)s site %(adj)s because it %(adj)s' % {'site':'Stackoverflow', 'adj':'rocks'}
'The Stackoverflow site rocks because it rocks'
I love this because you can specify a value once in the dictionary and then replace it all over the place in the string.
I've tried to achieve something similar in PHP using various string replace functions but everything I've come up with feels awkward.
Does anybody have a nice clean way to do this kind of string substitution in PHP?
Edit
Here's the code from the sprintf page that I liked best.
<?php
function sprintf3($str, $vars, $char = '%')
{
$tmp = array();
foreach($vars as $k => $v)
{
$tmp[$char . $k . $char] = $v;
}
return str_replace(array_keys($tmp), array_values($tmp), $str);
}
echo sprintf3( 'The %site% site %adj% because it %adj%', array('site'=>'Stackoverflow', 'adj'=>'rocks'));
?>
| [
"function subst($str, $dict){\n return preg_replace(array_map(create_function('$a', 'return \"/%\\\\($a\\\\)s/\";'), array_keys($dict)), array_values($dict), $str);\n }\n\nYou call it like so:\necho subst('The %(site)s site %(adj)s because it %(adj)s', array('site'=>'Stackoverflow', 'adj'=>'rocks'));\n\n",
"@Marius\nI don't know if it's faster, but you can do it without regexes:\nfunction subst($str, $dict)\n{\n foreach ($dict AS $key, $value)\n {\n $str = str_replace($key, $value, $str);\n }\n\n return $str;\n}\n\n",
"Some of the user-contributed notes and functions in PHP's documentation for sprintf come quite close.\nNote: search the page for \"sprintf2\".\n"
] | [
5,
4,
1
] | [] | [] | [
"php",
"python",
"string"
] | stackoverflow_0000028165_php_python_string.txt |
Q:
How do I create an xml document in python
Here is my sample code:
from xml.dom.minidom import *
def make_xml():
doc = Document()
node = doc.createElement('foo')
node.innerText = 'bar'
doc.appendChild(node)
return doc
if __name__ == '__main__':
make_xml().writexml(sys.stdout)
when I run the above code I get this:
<?xml version="1.0" ?>
<foo/>
I would like to get:
<?xml version="1.0" ?>
<foo>bar</foo>
I just guessed that there was an innerText property; it gives no compiler error, but does not seem to work. How do I go about creating a text node?
A:
@Daniel
Thanks for the reply, I also figured out how to do it with the minidom (I'm not sure of the difference between ElementTree and minidom)
from xml.dom.minidom import *
def make_xml():
doc = Document();
node = doc.createElement('foo')
node.appendChild(doc.createTextNode('bar'))
doc.appendChild(node)
return doc
if __name__ == '__main__':
make_xml().writexml(sys.stdout)
I swear I tried this before posting my question...
A:
Setting an attribute on an object won't give a compile-time or a run-time error, it will just do nothing useful if the object doesn't access it (i.e. "node.noSuchAttr = 'bar'" would also not give an error).
Unless you need a specific feature of minidom, I would look at ElementTree:
import sys
from xml.etree.cElementTree import Element, ElementTree
def make_xml():
node = Element('foo')
node.text = 'bar'
doc = ElementTree(node)
return doc
if __name__ == '__main__':
make_xml().write(sys.stdout)
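If you just want the serialized string rather than writing to a stream, a quick sketch using ElementTree's tostring:
from xml.etree.cElementTree import Element, tostring

node = Element('foo')
node.text = 'bar'
print tostring(node)  # prints <foo>bar</foo>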
| How do I create an xml document in python | Here is my sample code:
from xml.dom.minidom import *
def make_xml():
doc = Document()
node = doc.createElement('foo')
node.innerText = 'bar'
doc.appendChild(node)
return doc
if __name__ == '__main__':
make_xml().writexml(sys.stdout)
when I run the above code I get this:
<?xml version="1.0" ?>
<foo/>
I would like to get:
<?xml version="1.0" ?>
<foo>bar</foo>
I just guessed that there was an innerText property; it gives no compiler error, but does not seem to work. How do I go about creating a text node?
| [
"@Daniel\nThanks for the reply, I also figured out how to do it with the minidom (I'm not sure of the difference between the ElementTree vs the minidom)\n\n\nfrom xml.dom.minidom import *\ndef make_xml():\n doc = Document();\n node = doc.createElement('foo')\n node.appendChild(doc.createTextNode('bar'))\n doc.appendChild(node)\n return doc\nif __name__ == '__main__':\n make_xml().writexml(sys.stdout)\n\n\nI swear I tried this before posting my question...\n",
"Setting an attribute on an object won't give a compile-time or a run-time error, it will just do nothing useful if the object doesn't access it (i.e. \"node.noSuchAttr = 'bar'\" would also not give an error).\nUnless you need a specific feature of minidom, I would look at ElementTree:\nimport sys\nfrom xml.etree.cElementTree import Element, ElementTree\n\ndef make_xml():\n node = Element('foo')\n node.text = 'bar'\n doc = ElementTree(node)\n return doc\n\nif __name__ == '__main__':\n make_xml().write(sys.stdout)\n\n"
] | [
13,
9
] | [] | [] | [
"python",
"xml"
] | stackoverflow_0000029243_python_xml.txt |
Q:
Proprietary plug-ins for GPL programs: what about interpreted languages?
I am developing a GPL-licensed application in Python and need to know if the GPL allows my program to use proprietary plug-ins. This is what the FSF has to say on the issue:
If a program released under the GPL uses plug-ins, what are the requirements for the licenses of a plug-in?
It depends on how the program invokes its plug-ins. If the program uses fork and exec to invoke plug-ins, then the plug-ins are separate programs, so the license for the main program makes no requirements for them.
If the program dynamically links plug-ins, and they make function calls to each other and share data structures, we believe they form a single program, which must be treated as an extension of both the main program and the plug-ins. This means the plug-ins must be released under the GPL or a GPL-compatible free software license, and that the terms of the GPL must be followed when those plug-ins are distributed.
If the program dynamically links plug-ins, but the communication between them is limited to invoking the ‘main’ function of the plug-in with some options and waiting for it to return, that is a borderline case.
The distinction between fork/exec and dynamic linking, besides being kind of artificial, doesn't carry over to interpreted languages: what about a Python/Perl/Ruby plugin, which gets loaded via import or execfile?
(edit: I understand why the distinction between fork/exec and dynamic linking, but it seems like someone who wanted to comply with the GPL but go against the "spirit" --I don't-- could just use fork/exec and interprocess communication to do pretty much anything).
The best solution would be to add an exception to my license to explicitly allow the use of proprietary plugins, but I am unable to do so since I'm using Qt/PyQt which is GPL.
A:
The distinction between fork/exec and dynamic linking, besides being kind of artificial,
I don't think it's artificial at all. Basically they are just making the division based upon the level of integration. If the program has "plugins" which are essentially fire-and-forget with no API-level integration, then the resulting work is unlikely to be considered a derived work. Generally speaking, a plugin which is merely forked/exec'ed would fit these criteria, though there may be cases where it does not. This especially applies if the "plugin" code would work independently of your code as well.
If, on the other hand, the code is deeply dependent upon the GPL'ed work, such as extensively calling its APIs or tight data structure integration, then things are more likely to be considered a derived work. That is, the "plugin" cannot exist on its own without the GPL product, and a product with this plugin installed is essentially a derived work of the GPLed product.
So to make it a little more clear, the same principles could apply to your interpreted code. If the interpreted code relies heavily upon your APIs (or vice-versa) then it would be considered a derived work. If it is just a script that executes on its own with extremely little integration, then it may not.
Does that make more sense?
A:
@Daniel The distinction between fork/exec and dynamic linking, besides being kind of artificial, doesn't carry over to interpreted languages: what about a Python/Perl/Ruby plugin, which gets loaded via import or execfile?
I'm not sure that the distinction is artificial. After a dynamic load the plugin code shares an execution context with the GPLed code. After a fork/exec it does not.
In any case, I would guess that importing causes the new code to run in the same execution context as the GPLed bit, and you should treat it like the dynamic link case. No?
A:
How much info are you sharing between the Plugins and the main program? If you are doing anything more than just executing them and waiting for the results (sharing no data between the program and the plugin in the process) then you could most likely get away with them being proprietary, otherwise they would probably need to be GPL'd.
| Proprietary plug-ins for GPL programs: what about interpreted languages? | I am developing a GPL-licensed application in Python and need to know if the GPL allows my program to use proprietary plug-ins. This is what the FSF has to say on the issue:
If a program released under the GPL uses plug-ins, what are the requirements for the licenses of a plug-in?
It depends on how the program invokes its plug-ins. If the program uses fork and exec to invoke plug-ins, then the plug-ins are separate programs, so the license for the main program makes no requirements for them.
If the program dynamically links plug-ins, and they make function calls to each other and share data structures, we believe they form a single program, which must be treated as an extension of both the main program and the plug-ins. This means the plug-ins must be released under the GPL or a GPL-compatible free software license, and that the terms of the GPL must be followed when those plug-ins are distributed.
If the program dynamically links plug-ins, but the communication between them is limited to invoking the ‘main’ function of the plug-in with some options and waiting for it to return, that is a borderline case.
The distinction between fork/exec and dynamic linking, besides being kind of artificial, doesn't carry over to interpreted languages: what about a Python/Perl/Ruby plugin, which gets loaded via import or execfile?
(edit: I understand why the distinction between fork/exec and dynamic linking, but it seems like someone who wanted to comply with the GPL but go against the "spirit" --I don't-- could just use fork/exec and interprocess communication to do pretty much anything).
The best solution would be to add an exception to my license to explicitly allow the use of proprietary plugins, but I am unable to do so since I'm using Qt/PyQt which is GPL.
| [
"\nhe distinction between fork/exec and dynamic linking, besides being kind of artificial,\n\nI don't think its artificial at all. Basically they are just making the division based upon the level of integration. If the program has \"plugins\" which are essentially fire and forget with no API level integration, then the resulting work is unlikely to be considered a derived work. Generally speaking a plugin which is merely forked/exec'ed would fit this criteria, though there may be cases where it does not. This case especially applies if the \"plugin\" code would work independently of your code as well.\nIf, on the other hand, the code is deeply dependent upon the GPL'ed work, such as extensively calling APIs, or tight data structure integration, then things are more likely to be considered a derived work. Ie, the \"plugin\" cannot exist on its own without the GPL product, and a product with this plugin installed is essentially a derived work of the GPLed product.\nSo to make it a little more clear, the same principles could apply to your interpreted code. If the interpreted code relies heavily upon your APIs (or vice-versa) then it would be considered a derived work. If it is just a script that executes on its own with extremely little integration, then it may not.\nDoes that make more sense?\n",
"@Daniel The distinction between fork/exec and dynamic linking, besides being kind of artificial, doesn't carry over to interpreted languages: what about a Python/Perl/Ruby plugin, which gets loaded via import or execfile?\nI'm not sure that the distinction is artificial. After a dynamic load the plugin code shares an execution context with the GPLed code. After a fork/exec it does not.\nIn anycase I would guess that importing causes the new code to run in the same execution context as the GPLed bit, and you should treat it like the dynamic link case. No?\n",
"How much info are you sharing between the Plugins and the main program? If you are doing anything more than just executing them and waiting for the results (sharing no data between the program and the plugin in the process) then you could most likely get away with them being proprietary, otherwise they would probably need to be GPL'd.\n"
] | [
7,
1,
0
] | [] | [] | [
"interpreted_language",
"licensing",
"open_source",
"plugins",
"python"
] | stackoverflow_0000031412_interpreted_language_licensing_open_source_plugins_python.txt |
Q:
Install Python to match directory layout in OS X 10.5
The default Python install on OS X 10.5 is 2.5.1 with a FAT 32 bit (Intel and PPC) client. I want to setup apache and mysql to run django. In the past, I have run Apache and MySQL to match this install in 32 bit mode (even stripping out the 64 bit stuff from Apache to make it work).
I want to upgrade Python to 64 bit. I am completely comfortable with compiling it from source, with one caveat: how do I match the way that the default install is laid out? Especially with regard to site-packages being in /Library/Python/2.5/ and not the one buried at the top of the framework once I compile it.
A:
Not sure I entirely understand your question, but can't you simply build and install a 64 bit version and then create symbolic links so that /Library/Python/2.5 and below point to your freshly built version of python?
A:
Personally, I wouldn't worry about it until you see a problem. Messing with the default Python install on a *nix system can cause more trouble than it's worth. I can say from personal experience that you never truly understand what Python has done for the *nix world until you have a problem with it.
You can also add a second python installation, but that also causes more problems than it's worth IMO.
So I suppose the best question to start out with would be why exactly do you want to use the 64 bit version of python?
A:
Hyposaurus,
It is possible to have multiple versions of Python installed simultaneously. Installing two versions in parallel solves your problem and helps avoid the problems laid out by Jason Baker above.
The easiest way, and the way I recommend, is to use MacPorts, which will install all its software separately. By default, for example, everything is installed in /opt/local
Another method is to simply download the source and compile with a specified prefix. Note that this method doesn't modify your PATH environment variable, so you'll need to do that yourself if you want to avoid typing the fully qualified path to the python executable each time
./configure --prefix=/usr/local/python64
make
sudo make install
Then you can simply point your Apache install at the new version using mod_python's PythonInterpreter directive
A:
Essentially, yes. I was not sure you could do it like that (the current version does not do it like that). When using the Python install script, however, there is no option (that I can find) to specify where to put directories and files (e.g. --prefix). I was hoping to match the current layout of Python-related files so as to avoid 'polluting' my machine with redundant files.
A:
The short answer is: because I can. The long answer, expanding on what the OP said, is to be more compatible with Apache and MySQL/PostgreSQL. They are all 64-bit (Apache is a fat binary with ppc, ppc64, x86 and x86_64; the others are just straight 64-bit). MySQLdb and mod_python won't compile unless they are all running the same architecture. Yes, I could run them all in 32-bit (and have in the past), but this is much more work than compiling one program.
EDIT: You've pretty much convinced me, though, to just let the installer do its thing and update the PATH to reflect this.
| Install Python to match directory layout in OS X 10.5 | The default Python install on OS X 10.5 is 2.5.1 with a FAT 32 bit (Intel and PPC) client. I want to setup apache and mysql to run django. In the past, I have run Apache and MySQL to match this install in 32 bit mode (even stripping out the 64 bit stuff from Apache to make it work).
I want to upgrade Python to 64 bit. I am completely comfortable with compiling it from source, with one caveat: how do I match the way that the default install is laid out? Especially with regard to site-packages being in /Library/Python/2.5/ and not the one buried at the top of the framework once I compile it.
| [
"Not sure I entirely understand your question, but can't you simply build and install a 64 bit version and then create symbolic links so that /Library/Python/2.5 and below point to your freshly built version of python?\n",
"Personally, I wouldn't worry about it until you see a problem. Messing with the default python install on a *Nix system can cause more trouble than it's worth. I can say from personal experience that you never truly understand what python has done for the nix world until you have a problem with it.\nYou can also add a second python installation, but that also causes more problems than it's worth IMO.\nSo I suppose the best question to start out with would be why exactly do you want to use the 64 bit version of python?\n",
"Hyposaurus,\nIt is possible to have multiple versions of Python installed simultaneously. Installing two versions in parallel solves your problem and helps avoid the problems laid out by Jason Baker above. \nThe easiest way, and the way I recommend, is to use MacPorts, which will install all its software separately. By default, for example, everything is installed in /opt/local\nAnother method is to simply download the source and compile with a specified prefix. Note that this method doesn't modify your PATH environment variable, so you'll need to do that yourself if you want to avoid typing the fully qualified path to the python executable each time\n./configure --prefix=/usr/local/python64\nmake\nsudo make install\n\nThen you can simply point your Apache install at the new version using mod_python's PythonInterpreter directive\n",
"Essentially, yes. I was not sure you could do it like that (current version does not do it like that). When using the python install script, however, there is no option (that I can find) to specify where to put directories and files (eg --prefix). I was hoping to match the current layout of python related files so as to avoid 'polluting' my machine with redundant files.\n",
"The short answer is because I can. The long answer, expanding on what the OP said, is to be more compatible with apache and mysql/postgresql. They are all 64bit (apache is a fat binary with ppc, ppc64 x86 and x86 and x86_64, the others just straight 64bit). Mysqldb and mod_python wont compile unless they are all running the same architecture. Yes I could run them all in 32bit (and have in the past) but this is much more work then compiling one program.\nEDIT: You pretty much convinced though to just let the installer do its thing and update the PATH to reflect this.\n"
] | [
1,
1,
1,
0,
0
] | [] | [] | [
"64_bit",
"macos",
"python"
] | stackoverflow_0000029856_64_bit_macos_python.txt |
Q:
ssh hangs when command invoked directly, but exits cleanly when run interactive
I need to launch a server on the remote machine and retrieve the port number that the server process is listening on. When invoked, the server will listen on a random port and output the port number on stderr.
I want to automate the process of logging on to the remote machine, launching the process, and retrieving the port number. I wrote a Python script called "invokejob.py" that lives on the remote machine to act as a wrapper that invokes the job and then returns the port number; it looks like this:
import re, subprocess
executable = ... # Name of executable
regex = ... # Regex to extract the port number from the output
p = subprocess.Popen(executable,
bufsize=1, # line buffered
stderr=subprocess.PIPE
)
s = p.stderr.readline()
port = re.match(regex, s).groups()[0]
print port
If I log in interactively, this script works:
$ ssh remotehost.example.com
Last login: Thu Aug 28 17:31:18 2008 from localhost
$ ./invokejob.py
63409
$ exit
logout
Connection to remotehost.example.com closed.
(Note: successful logout, it did not hang).
However, if I try to invoke it from the command-line, it just hangs:
$ ssh remotehost.example.com invokejob.py
Does anybody know why it hangs in the second case, and what I can do to avoid this?
Note that I need to retrieve the output of the program, so I can't just use the ssh "-f" flag or redirect standard output.
A:
s = p.stderr.readline()
I suspect it's the above line. When you invoke a command directly through ssh, you don't get your full pty (assuming Linux), and thus no stderr to read from.
When you log in interactively, stdin, stdout, and stderr are set up for you, and so your script works.
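One way to sidestep depending on a separate stderr stream at all is to fold the child's stderr into stdout in the wrapper. A hypothetical rework of the relevant lines (the executable name here is a placeholder):
import subprocess

executable = './myserver'  # placeholder for the real server command

p = subprocess.Popen(executable,
                     bufsize=1,                  # line buffered
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)   # merge stderr into stdout
line = p.stdout.readline()                       # the port line now arrives on stdout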
A:
what if you do the following:
ssh <remote host> '<your command> ;<your regexp using awk or something>'
For example
ssh <remote host> '<your program>; ps aux | awk \'/root/ {print $2}\''
This will connect to <remote host>, execute <your program>, and then print the PID of each process owned by root or any process with root in its description.
I have used this method for running all kinds of commands on remote machines. The catch is to wrap the command(s) you wish to execute in single quotation marks (') and to separate each command with a semi-colon (;).
| ssh hangs when command invoked directly, but exits cleanly when run interactive | I need to launch a server on the remote machine and retrieve the port number that the server process is listening on. When invoked, the server will listen on a random port and output the port number on stderr.
I want to automate the process of logging on to the remote machine, launching the process, and retrieving the port number. I wrote a Python script called "invokejob.py" that lives on the remote machine to act as a wrapper that invokes the job and then returns the port number; it looks like this:
import re, subprocess
executable = ... # Name of executable
regex = ... # Regex to extract the port number from the output
p = subprocess.Popen(executable,
bufsize=1, # line buffered
stderr=subprocess.PIPE
)
s = p.stderr.readline()
port = re.match(regex, s).groups()[0]
print port
If I log in interactively, this script works:
$ ssh remotehost.example.com
Last login: Thu Aug 28 17:31:18 2008 from localhost
$ ./invokejob.py
63409
$ exit
logout
Connection to remotehost.example.com closed.
(Note: successful logout, it did not hang).
However, if I try to invoke it from the command-line, it just hangs:
$ ssh remotehost.example.com invokejob.py
Does anybody know why it hangs in the second case, and what I can do to avoid this?
Note that I need to retrieve the output of the program, so I can't just use the ssh "-f" flag or redirect standard output.
| [
"\ns = p.stderr.readline()\n\n\nI suspect it's the above line. When you invoke a command directly through ssh, you don't get your full pty (assuming Linux), and thus no stderr to read from.\nWhen you log in interactively, stdin, stdout, and stderr are set up for you, and so your script works.\n",
"what if you do the following:\nssh <remote host> '<your command> ;<your regexp using awk or something>'\nFor example\nssh <remote host> '<your program>; ps aux | awk \\'/root/ {print $2}\\''\nThis will connect to , execute and then print each PSID for any user root or any process with root in its description.\nI have used this method for running all kinds of commands on remote machines. The catch is to wrap the command(s) you wish to execute in single quotation marks (') and to separate each command with a semi-colon (;).\n"
] | [
3,
0
] | [] | [] | [
"python",
"ssh"
] | stackoverflow_0000033475_python_ssh.txt |
Q:
Why does Python's iter() on a mapping return iterkeys() instead of iteritems()?
It seems like if you want to get the keys of a mapping, you ask for them; otherwise, give me the whole mapping (constituted by a set of key-value pairs). Is there a historical reason for this?
A:
Check out this thread for a discussion on the reasons behind this behavior (including that Guido likes it, and it's not likely to change).
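A quick illustration of the behavior in question:
d = {'a': 1, 'b': 2}

for key in d:                     # iter(d) yields keys, same as d.iterkeys()
    print key

for key, value in d.iteritems():  # ask for the key-value pairs explicitly
    print key, value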
| Why does Python's iter() on a mapping return iterkeys() instead of iteritems()? | It seems like if you want to get the keys of a mapping, you ask for them; otherwise, give me the whole mapping (constituted by a set of key-value pairs). Is there a historical reason for this?
| [
"Check out this thread for a discussion on the reasons behind this behavior (including that Guido likes it, and it's not likely to change).\n"
] | [
11
] | [] | [] | [
"iteration",
"mapping",
"python"
] | stackoverflow_0000035569_iteration_mapping_python.txt |
Q:
Django ImageField core=False in newforms admin
In the transition to newforms admin I'm having difficulty figuring out how to specify core=False for ImageFields.
I get the following error:
TypeError: __init__() got an unexpected keyword argument 'core'
[Edit] However, by just removing the core argument I get a "This field is required." error in the admin interface on attempted submission. How does one accomplish what core=False is meant to do using newforms admin?
A:
To get rid of "This field is required," you need to make it not required, by using blank=True (and possibly null=True as well, if it's not a CharField).
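For example, a minimal sketch (the model and field names are made up):
from django.db import models

class Article(models.Model):
    # blank=True lets the admin form accept an empty value; since
    # ImageField stores a path string, null=True is not needed here
    image = models.ImageField(upload_to='images', blank=True)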
A:
The core attribute isn't used anymore.
From Brian Rosner's Blog:
You can safely just remove any and all core arguments. They are no longer used. newforms-admin now provides a nice delete checkbox for exisiting instances in inlines.
A:
This is simple. I started getting these problems a few revisions ago. Basically, just remove the "core=True" parameter in the ImageField in the models, and then follow the instructions here to convert to what the newforms admin uses.
| Django ImageField core=False in newforms admin | In the transition to newforms admin I'm having difficulty figuring out how to specify core=False for ImageFields.
I get the following error:
TypeError: __init__() got an unexpected keyword argument 'core'
[Edit] However, by just removing the core argument I get a "This field is required." error in the admin interface on attempted submission. How does one accomplish what core=False is meant to do using newforms admin?
| [
"To get rid of \"This field is required,\" you need to make it not required, by using blank=True (and possibly null=True as well, if it's not a CharField).\n",
"The core attribute isn't used anymore.\nFrom Brian Rosner's Blog:\n\nYou can safely just remove any and all core arguments. They are no longer used. newforms-admin now provides a nice delete checkbox for exisiting instances in inlines.\n\n",
"This is simple. I started getting this problems a few revisions ago. Basically, just remove the \"core=True\" parameter in the ImageField in the models, and then follow the instructions here to convert to what the newforms admin uses.\n"
] | [
5,
4,
2
] | [] | [] | [
"django",
"django_models",
"python"
] | stackoverflow_0000034209_django_django_models_python.txt |
Q:
Programmatically editing Python source
This is something that I think would be very useful. Basically, I'd like there to be a way to edit Python source programmatically without requiring human intervention. There are a couple of things I would like to do with this:
Edit the configuration of Python apps that use source modules for configuration.
Set up a "template" so that I can customize a Python source file on the fly. This way, I can set up a "project" system on an open source app I'm working on and allow certain files to be customized.
I could probably write something that can do this myself, but I can see that leading to a lot of "devil's in the details" type issues. Are there any ways to do this currently, or am I just going to have to bite the bullet and implement it myself?
A:
Python's standard library provides pretty good facilities for working with Python source; note the tokenize and parser modules.
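For instance, a tiny sketch that walks a source string with tokenize (Python 2 style, matching the era of this question; the source string is made up):
import tokenize
from StringIO import StringIO

source = "DEBUG = True\nPORT = 8080\n"
for tok_type, tok_str, start, end, line in tokenize.generate_tokens(StringIO(source).readline):
    print tokenize.tok_name[tok_type], repr(tok_str)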
A:
I had the same issue, and I simply opened the file, did some replacements, and then reloaded the file in the Python interpreter. This works fine and is easy to do.
Otherwise AFAIK you have to use some conf objects.
A:
Most of these kinds of things can be determined programmatically in Python, using modules like sys, os, and the special __file__ identifier, which tells you where you are in the filesystem path.
It's important to keep in mind that when a module is first imported it will execute everything in the file-scope, which is important for developing system-dependent behaviors. For example, the os module basically determines what operating system you're using on import and then adjusts its implementation accordingly (by importing another module corresponding to Linux, OSX, Windows, etc.).
There's a lot of power in this feature and something along these lines is probably what you're looking for. :)
[Edit] I've also used socket.gethostname() in some rare, hackish instances. ;)
| Programmatically editing Python source | This is something that I think would be very useful. Basically, I'd like there to be a way to edit Python source programmatically without requiring human intervention. There are a couple of things I would like to do with this:
Edit the configuration of Python apps that use source modules for configuration.
Set up a "template" so that I can customize a Python source file on the fly. This way, I can set up a "project" system on an open source app I'm working on and allow certain files to be customized.
I could probably write something that can do this myself, but I can see that leading to a lot of "devil's in the details" type issues. Are there any ways to do this currently, or am I just going to have to bite the bullet and implement it myself?
| [
"Python's standard library provides pretty good facilities for working with Python source; note the tokenize and parser modules.\n",
"I had the same issue and I simply opened the file and did some replace: then reload the file in the Python interpreter. This works fine and is easy to do. \nOtherwise AFAIK you have to use some conf objects.\n",
"Most of these kinds of things can be determined programatically in Python, using modules like sys, os, and the special _file_ identifier which tells you where you are in the filesystem path.\nIt's important to keep in mind that when a module is first imported it will execute everything in the file-scope, which is important for developing system-dependent behaviors. For example, the os module basically determines what operating system you're using on import and then adjusts its implementation accordingly (by importing another module corresponding to Linux, OSX, Windows, etc.).\nThere's a lot of power in this feature and something along these lines is probably what you're looking for. :)\n[Edit] I've also used socket.gethostname() in some rare, hackish instances. ;)\n"
] | [
6,
0,
0
] | [] | [] | [
"file_io",
"python"
] | stackoverflow_0000032385_file_io_python.txt |
Q:
"The system cannot find the file specified" when invoking subprocess.Popen in python
I'm trying to use svnmerge.py to merge some files. Under the hood it uses python, and when I use it I get an error - "The system cannot find the file specified". Colleagues at work are running the same version of svnmerge.py, and of python (2.5.2, specifically r252:60911) without an issue.
I found this link, which describes my problem. Trying what was outlined there, I confirmed Python could find SVN (it's in my path):
P:\>python
Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> i,k = os.popen4("svn --version")
>>> i.close()
>>> k.readline()
'svn, version 1.4.2 (r22196)\n'
Looking at the svnmerge.py code, though, I noticed for python versions 2.4 and higher it was following a different execution path. Rather than invoking
os.popen4() it uses subprocess.Popen(). Trying that reproduces the error:
C:\>python
Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import subprocess
>>> p = subprocess.Popen("svn --version", stdout=subprocess.PIPE,
... close_fds=False, stderr=subprocess.PIPE)
Traceback (most recent call last):
File "", line 1, in
File "C:\Python25\lib\subprocess.py", line 594, in __init__
errread, errwrite)
File "C:\Python25\lib\subprocess.py", line 816, in _execute_child
startupinfo)
WindowsError: [Error 2] The system cannot find the file specified
>>>
For now, I've commented out the 2.4-and-higher specific code, but I'd like to find a proper solution.
If it's not obvious, I'm a complete python newbie, but Google hasn't helped.
A:
It's a bug, see the documentation of subprocess.Popen. There either needs to be a "shell=True" option, or the first argument needs to be a sequence ['svn', '--version']. As it is now, Popen is looking for an executable named, literally, "svn --version" which it doesn't find.
I don't know why it would work for your colleagues though, if they are running the same OS and version of Python... FWIW it gives me the same error message on a mac, and either of the two ways I gave fixes it.
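In other words, either of these sketches should work:
import subprocess

# 1) pass the command as a sequence, so no shell parsing is needed
p = subprocess.Popen(['svn', '--version'],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# 2) or keep the single string, but let the shell split it
p = subprocess.Popen('svn --version', shell=True,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)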
| "The system cannot find the file specified" when invoking subprocess.Popen in python | I'm trying to use svnmerge.py to merge some files. Under the hood it uses python, and when I use it I get an error - "The system cannot find the file specified". Colleagues at work are running the same version of svnmerge.py, and of python (2.5.2, specifically r252:60911) without an issue.
I found this link, which describes my problem. Trying what was outlined there, I confirmed Python could find SVN (it's in my path):
P:\>python
Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> i,k = os.popen4("svn --version")
>>> i.close()
>>> k.readline()
'svn, version 1.4.2 (r22196)\n'
Looking at the svnmerge.py code, though, I noticed for python versions 2.4 and higher it was following a different execution path. Rather than invoking
os.popen4() it uses subprocess.Popen(). Trying that reproduces the error:
C:\>python
Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import subprocess
>>> p = subprocess.Popen("svn --version", stdout=subprocess.PIPE,
... close_fds=False, stderr=subprocess.PIPE)
Traceback (most recent call last):
File "", line 1, in
File "C:\Python25\lib\subprocess.py", line 594, in __init__
errread, errwrite)
File "C:\Python25\lib\subprocess.py", line 816, in _execute_child
startupinfo)
WindowsError: [Error 2] The system cannot find the file specified
>>>
For now, I've commented out the 2.4-and-higher specific code, but I'd like to find a proper solution.
If it's not obvious, I'm a complete python newbie, but Google hasn't helped.
| [
"It's a bug, see the documentation of subprocess.Popen. There either needs to be a \"shell=True\" option, or the first argument needs to be a sequence ['svn', '--version']. As it is now, Popen is looking for an executable named, literally, \"svn --version\" which it doesn't find.\nI don't know why it would work for your colleagues though, if they are running the same OS and version of Python... FWIW it gives me the same error message on a mac, and either of the two ways I gave fixes it.\n"
] | [
21
] | [] | [] | [
"python",
"svn_merge"
] | stackoverflow_0000036324_python_svn_merge.txt |
Q:
How do I add data to an existing model in Django?
Currently, I am writing up a bit of a product-based CMS as my first project.
Here is my question. How can I add additional data (products) to my Product model?
I have added '/admin/products/add' to my urls.py, but I don't really know where to go from there. How would i build both my view and my template? Please keep in mind that I don't really know all that much Python, and i am very new to Django
How can I do this all without using this existing django admin interface.
A:
You will want to wire your URL to the Django create_object generic view, and pass it either "model" (the model you want to create) or "form_class" (a customized ModelForm class). There are a number of other arguments you can also pass to override default behaviors.
Sample URLconf for the simplest case:
from django.conf.urls.defaults import *
from django.views.generic.create_update import create_object
from my_products_app.models import Product
urlpatterns = patterns('',
url(r'^admin/products/add/$', create_object, {'model': Product}))
Your template will get the context variable "form", which you just need to wrap in a <form> tag and add a submit button. The simplest working template (by default should go in "my_products_app/product_form.html"):
<form action="." method="POST">
{{ form }}
<input type="submit" name="submit" value="add">
</form>
Note that your Product model must have a get_absolute_url method, or else you must pass in the post_save_redirect parameter to the view. Otherwise it won't know where to redirect to after save.
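For completeness, a sketch of such a get_absolute_url (the URL scheme here is made up):
from django.db import models

class Product(models.Model):
    name = models.CharField(max_length=100)

    def get_absolute_url(self):
        # where create_object redirects after a successful save
        return '/products/%d/' % self.id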
| How do I add data to an existing model in Django? | Currently, I am writing up a bit of a product-based CMS as my first project.
Here is my question. How can I add additional data (products) to my Product model?
I have added '/admin/products/add' to my urls.py, but I don't really know where to go from there. How would i build both my view and my template? Please keep in mind that I don't really know all that much Python, and i am very new to Django
How can I do this all without using this existing django admin interface.
| [
"You will want to wire your URL to the Django create_object generic view, and pass it either \"model\" (the model you want to create) or \"form_class\" (a customized ModelForm class). There are a number of other arguments you can also pass to override default behaviors.\nSample URLconf for the simplest case:\nfrom django.conf.urls.defaults import *\nfrom django.views.generic.create_update import create_object\n\nfrom my_products_app.models import Product\n\nurlpatterns = patterns('',\n url(r'^admin/products/add/$', create_object, {'model': Product}))\n\nYour template will get the context variable \"form\", which you just need to wrap in a <form> tag and add a submit button. The simplest working template (by default should go in \"my_products_app/product_form.html\"):\n<form action=\".\" method=\"POST\">\n {{ form }}\n <input type=\"submit\" name=\"submit\" value=\"add\">\n</form>\n\nNote that your Product model must have a get_absolute_url method, or else you must pass in the post_save_redirect parameter to the view. Otherwise it won't know where to redirect to after save.\n"
] | [
7
] | [
"This topic is covered in Django tutorials.\n",
"Follow the Django tutorial for setting up the \"admin\" part of an application. This will allow you to modify your database.\nDjango Admin Setup\nAlternatively, you can just connect directly to the database using the standard tools for whatever database type you are using.\n"
] | [
-1,
-2
] | [
"django",
"python"
] | stackoverflow_0000036812_django_python.txt |
Q:
How can I simply inherit methods from an existing instance?
Below I have a very simple example of what I'm trying to do. I want to be able to use HTMLDecorator with any other class. Ignore the fact it's called decorator, it's just a name.
import cgi
class ClassX(object):
pass # ... with own __repr__
class ClassY(object):
pass # ... with own __repr__
inst_x=ClassX()
inst_y=ClassY()
inst_z=[ i*i for i in range(25) ]
inst_b=True
class HTMLDecorator(object):
def html(self): # an "enhanced" version of __repr__
return cgi.escape(self.__repr__()).join(("<H1>","</H1>"))
print HTMLDecorator(inst_x).html()
print HTMLDecorator(inst_y).html()
wrapped_z = HTMLDecorator(inst_z)
inst_z[0] += 70
wrapped_z[0] += 71
print wrapped_z.html()
print HTMLDecorator(inst_b).html()
Output:
Traceback (most recent call last):
File "html.py", line 21, in
print HTMLDecorator(inst_x).html()
TypeError: default __new__ takes no parameters
Is what I'm trying to do possible? If so, what am I doing wrong?
A:
Very close, but then I lose everything from ClassX. Below is something a colleague gave me that does do the trick, but it's hideous. There has to be a better way.
Looks like you're trying to set up some sort of proxy object scheme. That's doable, and there are better solutions than your colleague's, but first consider whether it would be easier to just patch in some extra methods. This won't work for built-in classes like bool, but it will for your user-defined classes:
def HTMLDecorator (obj):
def html ():
sep = cgi.escape (repr (obj))
return sep.join (("<H1>", "</H1>"))
obj.html = html
return obj
And here is the proxy version:
class HTMLDecorator(object):
def __init__ (self, wrapped):
self.__wrapped = wrapped
def html (self):
sep = cgi.escape (repr (self.__wrapped))
return sep.join (("<H1>", "</H1>"))
def __getattr__ (self, name):
return getattr (self.__wrapped, name)
def __setattr__ (self, name, value):
if not name.startswith ('_HTMLDecorator__'):
setattr (self.__wrapped, name, value)
return
super (HTMLDecorator, self).__setattr__ (name, value)
def __delattr__ (self, name):
        delattr (self.__wrapped, name)
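Continuing from the definitions above, usage of the proxy version looks like this (the note attribute is just an example):
inst = ClassX()
inst.note = 'hello'
wrapped = HTMLDecorator(inst)
print wrapped.html()  # <H1>...escaped repr of inst...</H1>
print wrapped.note    # plain attribute access falls through to the wrapped object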
A:
Both of John's solutions would work. Another option that allows HTMLDecorator to remain very simple and clean is to monkey-patch it in as a base class. This also works only for user-defined classes, not builtin types:
import cgi
class ClassX(object):
pass # ... with own __repr__
class ClassY(object):
pass # ... with own __repr__
inst_x=ClassX()
inst_y=ClassY()
class HTMLDecorator:
def html(self): # an "enhanced" version of __repr__
return cgi.escape(self.__repr__()).join(("<H1>","</H1>"))
ClassX.__bases__ += (HTMLDecorator,)
ClassY.__bases__ += (HTMLDecorator,)
print inst_x.html()
print inst_y.html()
Be warned, though -- monkey-patching like this comes with a high price in readability and maintainability of your code. When you go back to this code a year later, it can become very difficult to figure out how your ClassX got that html() method, especially if ClassX is defined in some other library.
A:
Is what I'm trying to do possible? If so, what am I doing wrong?
It's certainly possible. What's wrong is that HTMLDecorator.__init__() doesn't accept parameters.
Here's a simple example:
def decorator (func):
def new_func ():
return "new_func %s" % func ()
return new_func
@decorator
def a ():
return "a"
def b ():
return "b"
print a() # new_func a
print decorator (b)() # new_func b
A:
@John (37448):
Sorry, I might have misled you with the name (bad choice). I'm not really looking for a decorator function, or anything to do with decorators at all. What I'm after is for the html(self) def to use ClassX or ClassY's __repr__. I want this to work without modifying ClassX or ClassY.
A:
Ah, in that case, perhaps code like this will be useful? It doesn't really have anything to do with decorators, but demonstrates how to pass arguments to a class's initialization function and to retrieve those arguments for later.
import cgi
class ClassX(object):
def __repr__ (self):
return "<class X>"
class HTMLDecorator(object):
def __init__ (self, wrapped):
self.__wrapped = wrapped
def html (self):
sep = cgi.escape (repr (self.__wrapped))
return sep.join (("<H1>", "</H1>"))
inst_x=ClassX()
inst_b=True
print HTMLDecorator(inst_x).html()
print HTMLDecorator(inst_b).html()
A:
@John (37479):
Very close, but then I lose everything from ClassX. Below is something a colleague gave me that does do the trick, but it's hideous. There has to be a better way.
import cgi
from math import sqrt

class ClassX(object):
    def __repr__(self):
        return "Best Guess"

class ClassY(object):
    pass # ... with own __repr__

inst_x=ClassX()
inst_y=ClassY()
inst_z=[ i*i for i in range(25) ]
inst_b=True

avoid="__class__ __init__ __dict__ __weakref__"

class HTMLDecorator(object):
    def __init__(self,master):
        self.master = master
        for attr in dir(self.master):
            if ( not attr.startswith("__") or
                attr not in avoid.split() and "attr" not in attr):
                self.__setattr__(attr, self.master.__getattribute__(attr))

    def html(self): # an "enhanced" version of __repr__
        return cgi.escape(self.__repr__()).join(("<H1>","</H1>"))

    def length(self):
        return sqrt(sum(self.__iter__()))

print HTMLDecorator(inst_x).html()
print HTMLDecorator(inst_y).html()
wrapped_z = HTMLDecorator(inst_z)
print wrapped_z.length()
inst_z[0] += 70
#wrapped_z[0] += 71
wrapped_z.__setitem__(0,wrapped_z.__getitem__(0)+ 71)
print wrapped_z.html()
print HTMLDecorator(inst_b).html()
Output:
<H1>Best Guess</H1>
<H1><__main__.ClassY object at 0x891df0c></H1>
70.0
<H1>[141, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361, 400, 441, 484, 529, 576]</H1>
<H1>True</H1>
| How can I simply inherit methods from an existing instance? | Below I have a very simple example of what I'm trying to do. I want to be able to use HTMLDecorator with any other class. Ignore the fact it's called decorator, it's just a name.
import cgi

class ClassX(object):
    pass # ... with own __repr__

class ClassY(object):
    pass # ... with own __repr__

inst_x=ClassX()
inst_y=ClassY()
inst_z=[ i*i for i in range(25) ]
inst_b=True

class HTMLDecorator(object):
    def html(self): # an "enhanced" version of __repr__
        return cgi.escape(self.__repr__()).join(("<H1>","</H1>"))
print HTMLDecorator(inst_x).html()
print HTMLDecorator(inst_y).html()
wrapped_z = HTMLDecorator(inst_z)
inst_z[0] += 70
wrapped_z[0] += 71
print wrapped_z.html()
print HTMLDecorator(inst_b).html()
Output:
Traceback (most recent call last):
File "html.py", line 21, in
print HTMLDecorator(inst_x).html()
TypeError: default __new__ takes no parameters
Is what I'm trying to do possible? If so, what am I doing wrong?
| [
"\nVery close, but then I lose everything from ClassX. Below is something a collegue gave me that does do the trick, but it's hideous. There has to be a better way.\n\nLooks like you're trying to set up some sort of proxy object scheme. That's doable, and there are better solutions than your colleague's, but first consider whether it would be easier to just patch in some extra methods. This won't work for built-in classes like bool, but it will for your user-defined classes:\ndef HTMLDecorator (obj):\n def html ():\n sep = cgi.escape (repr (obj))\n return sep.join ((\"<H1>\", \"</H1>\"))\n obj.html = html\n return obj\n\nAnd here is the proxy version:\nclass HTMLDecorator(object):\n def __init__ (self, wrapped):\n self.__wrapped = wrapped\n\n def html (self):\n sep = cgi.escape (repr (self.__wrapped))\n return sep.join ((\"<H1>\", \"</H1>\"))\n\n def __getattr__ (self, name):\n return getattr (self.__wrapped, name)\n\n def __setattr__ (self, name, value):\n if not name.startswith ('_HTMLDecorator__'):\n setattr (self.__wrapped, name, value)\n return\n super (HTMLDecorator, self).__setattr__ (name, value)\n\n def __delattr__ (self, name):\n delattr (self.__wraped, name)\n\n",
"Both of John's solutions would work. Another option that allows HTMLDecorator to remain very simple and clean is to monkey-patch it in as a base class. This also works only for user-defined classes, not builtin types:\nimport cgi\n\nclass ClassX(object):\n pass # ... with own __repr__\n\nclass ClassY(object):\n pass # ... with own __repr__\n\ninst_x=ClassX()\ninst_y=ClassY()\n\nclass HTMLDecorator:\n def html(self): # an \"enhanced\" version of __repr__\n return cgi.escape(self.__repr__()).join((\"<H1>\",\"</H1>\"))\n\nClassX.__bases__ += (HTMLDecorator,)\nClassY.__bases__ += (HTMLDecorator,)\n\nprint inst_x.html()\nprint inst_y.html()\n\nBe warned, though -- monkey-patching like this comes with a high price in readability and maintainability of your code. When you go back to this code a year later, it can become very difficult to figure out how your ClassX got that html() method, especially if ClassX is defined in some other library.\n",
"\nIs what I'm trying to do possible? If so, what am I doing wrong?\n\nIt's certainly possible. What's wrong is that HTMLDecorator.__init__() doesn't accept parameters.\nHere's a simple example:\ndef decorator (func):\n def new_func ():\n return \"new_func %s\" % func ()\n return new_func\n\n@decorator\ndef a ():\n return \"a\"\n\ndef b ():\n return \"b\"\n\nprint a() # new_func a\nprint decorator (b)() # new_func b\n\n",
"@John (37448):\nSorry, I might have misled you with the name (bad choice). I'm not really looking for a decorator function, or anything to do with decorators at all. What I'm after is for the html(self) def to use ClassX or ClassY's __repr__. I want this to work without modifying ClassX or ClassY.\n",
"Ah, in that case, perhaps code like this will be useful? It doesn't really have anything to do with decorators, but demonstrates how to pass arguments to a class's initialization function and to retrieve those arguments for later.\nimport cgi\n\nclass ClassX(object):\n def __repr__ (self):\n return \"<class X>\"\n\nclass HTMLDecorator(object):\n def __init__ (self, wrapped):\n self.__wrapped = wrapped\n\n def html (self):\n sep = cgi.escape (repr (self.__wrapped))\n return sep.join ((\"<H1>\", \"</H1>\"))\n\ninst_x=ClassX()\ninst_b=True\n\nprint HTMLDecorator(inst_x).html()\nprint HTMLDecorator(inst_b).html()\n\n",
"@John (37479):\nVery close, but then I lose everything from ClassX. Below is something a collegue gave me that does do the trick, but it's hideous. There has to be a better way.\nimport cgi\nfrom math import sqrt\n\nclass ClassX(object): \n def __repr__(self): \n return \"Best Guess\"\n\nclass ClassY(object):\n pass # ... with own __repr__\n\ninst_x=ClassX()\n\ninst_y=ClassY()\n\ninst_z=[ i*i for i in range(25) ]\n\ninst_b=True\n\navoid=\"__class__ __init__ __dict__ __weakref__\"\n\nclass HTMLDecorator(object):\n def __init__(self,master):\n self.master = master\n for attr in dir(self.master):\n if ( not attr.startswith(\"__\") or \n attr not in avoid.split() and \"attr\" not in attr):\n self.__setattr__(attr, self.master.__getattribute__(attr))\n\n def html(self): # an \"enhanced\" version of __repr__\n return cgi.escape(self.__repr__()).join((\"<H1>\",\"</H1>\"))\n\n def length(self):\n return sqrt(sum(self.__iter__()))\n\nprint HTMLDecorator(inst_x).html()\nprint HTMLDecorator(inst_y).html()\nwrapped_z = HTMLDecorator(inst_z)\nprint wrapped_z.length()\ninst_z[0] += 70\n#wrapped_z[0] += 71\nwrapped_z.__setitem__(0,wrapped_z.__getitem__(0)+ 71)\nprint wrapped_z.html()\nprint HTMLDecorator(inst_b).html()\n\nOutput:\n<H1>Best Guess</H1>\n<H1><__main__.ClassY object at 0x891df0c></H1>\n70.0\n<H1>[141, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361, 400, 441, 484, 529, 576]</H1>\n<H1>True</H1>\n"
] | [
2,
2,
0,
0,
0,
0
] | [] | [] | [
"inheritance",
"object",
"oop",
"python"
] | stackoverflow_0000037479_inheritance_object_oop_python.txt |
Q:
Why is the subprocess.Popen class not named Subprocess?
The primary class in the subprocess module is named Popen, and represents a subprocess. Popen sounds like someone was trying to force the name to follow some function naming format, rather than choosing a name that actually represents what the object is. Does anyone know why it was chosen over something simple like, say, Subprocess?
A:
Now, I'm not saying that this is the greatest name in the world, but here was the idea as I understand it.
Originally, the popen family was in the os module and was an implementation of the venerable posix popen. The movement to the subprocess module would have been an opportune time to rename them, but I guess that keeping Popen makes it easier to find in the docs for those who have a long history with python or even to the venerable posix functions.
From its earliest posix incarnation, Popen has always been meant to open a Process and allow you to read and write from its stdio like a file. Thus the mnemonic for Popen is that it is short for ProcessOpen in an attempt to kind of, sorta, look like open.
A:
subprocess.Popen replaces the group of os.popenX POSIX functions (which have a long history). I suppose that the name Popen makes it more likely for people used to the old functions to find and use the new ones.
The PEP for subprocess (PEP 324) has a little bit of discussion on the name of the module but not of class Popen. The list of PEPs (Python enhancement proposals) is in general an excellent place to start if you're looking for the rationale for features of Python.
| Why is the subprocess.Popen class not named Subprocess? | The primary class in the subprocess module is named Popen, and represents a subprocess. Popen sounds like someone was trying to force the name to follow some function naming format, rather than choosing a name that actually represents what the object is. Does anyone know why it was chosen over something simple like, say, Subprocess?
| [
"Now, I'm not saying that this is the greatest name in the world, but here was the idea as I understand it.\nOriginally, the popen family was in the os module and was an implementation of the venerable posix popen. The movement to the subprocess module would have been an opportune time to rename them, but I guess that keeping Popen makes it easier to find in the docs for those who have a long history with python or even to the venerable posix functions.\nFrom its earliest posix incarnation, Popen has always been meant to open a Process and allow you to read and write from its stdio like a file. Thus the mnemonic for Popen is that it is short for ProcessOpen in an attempt to kind of, sorta, look like open.\n",
"subprocess.Popen replaces the group of os.popenX POSIX functions (which have a long history). I suppose that the name Popen makes it more likely for people used to the old functions to find and use the new ones.\nThe PEP for subprocess (PEP 324) has a little bit of discussion on the name of the module but not of class Popen. The list of PEPs (Python enhancement proposals) is in general an excellent place to start if you're looking for the rationale for features of Python.\n"
] | [
8,
5
] | [
"I suppose the name was chosen because the functionality subprocess is replacing was formerly in the os module as the os.popen function. There could be even ways to automate migration between the two.\n"
] | [
-1
] | [
"python",
"subprocess"
] | stackoverflow_0000038197_python_subprocess.txt |
Q:
Retrieving an Oracle timestamp using Python's Win32 ODBC module
Given an Oracle table created using the following:
CREATE TABLE Log(WhenAdded TIMESTAMP(6) WITH TIME ZONE);
Using the Python ODBC module from its Win32 extensions (from the win32all package), I tried the following:
import dbi, odbc
connection = odbc.odbc("Driver=Oracle in OraHome92;Dbq=SERVER;Uid=USER;Pwd=PASSWD")
cursor = connection.cursor()
cursor.execute("SELECT WhenAdded FROM Log")
results = cursor.fetchall()
When I run this, I get the following:
Traceback (most recent call last):
...
results = cursor.fetchall()
dbi.operation-error: [Oracle][ODBC][Ora]ORA-00932: inconsistent datatypes: expected %s got %s
in FETCH
The other data types I've tried (VARCHAR2, BLOB) do not cause this problem. Is there a way of retrieving timestamps?
A:
I believe this is a bug in the Oracle ODBC driver. Basically, the Oracle ODBC driver does not support the TIMESTAMP WITH (LOCAL) TIME ZONE data types, only the TIMESTAMP data type. As you have discovered, one workaround is in fact to use the TO_CHAR method.
In your example you are not actually reading the time zone information. If you have control of the table you could convert it to a straight TIMESTAMP column. If you don't have control over the table, another solution may be to create a view that converts from TIMESTAMP WITH TIME ZONE to TIMESTAMP via a string - sorry, I don't know if there is a way to convert directly from TIMESTAMP WITH TIME ZONE to TIMESTAMP.
A:
My solution to this, that I hope can be bettered, is to use Oracle to explicitly convert the TIMESTAMP into a string:
cursor.execute("SELECT TO_CHAR(WhenAdded, 'YYYY-MM-DD HH:MI:SSAM') FROM Log")
This works, but isn't portable. I'd like to use the same Python script against a SQL Server database, so an Oracle-specific solution (such as TO_CHAR) won't work.
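One way to keep the rest of the script portable is to turn the string back into a native datetime on the Python side. A minimal sketch, assuming the TO_CHAR format above and the cursor from the question:
from datetime import datetime

def parse_when_added(s):
    # Oracle's 'YYYY-MM-DD HH:MI:SSAM' corresponds to this strptime format
    return datetime.strptime(s, "%Y-%m-%d %I:%M:%S%p")

cursor.execute("SELECT TO_CHAR(WhenAdded, 'YYYY-MM-DD HH:MI:SSAM') FROM Log")
for (raw,) in cursor.fetchall():
    print parse_when_added(raw)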
| Retrieving an Oracle timestamp using Python's Win32 ODBC module | Given an Oracle table created using the following:
CREATE TABLE Log(WhenAdded TIMESTAMP(6) WITH TIME ZONE);
Using the Python ODBC module from its Win32 extensions (from the win32all package), I tried the following:
import dbi, odbc
connection = odbc.odbc("Driver=Oracle in OraHome92;Dbq=SERVER;Uid=USER;Pwd=PASSWD")
cursor = connection.cursor()
cursor.execute("SELECT WhenAdded FROM Log")
results = cursor.fetchall()
When I run this, I get the following:
Traceback (most recent call last):
...
results = cursor.fetchall()
dbi.operation-error: [Oracle][ODBC][Ora]ORA-00932: inconsistent datatypes: expected %s got %s
in FETCH
The other data types I've tried (VARCHAR2, BLOB) do not cause this problem. Is there a way of retrieving timestamps?
| [
"I believe this is a bug in the Oracle ODBC driver. Basically, the Oracle ODBC driver does not support the TIMESTAMP WITH (LOCAL) TIME ZONE data types, only the TIMESTAMP data type. As you have discovered, one workaround is in fact to use the TO_CHAR method.\nIn your example you are not actually reading the time zone information. If you have control of the table you could convert it to a straight TIMESTAMP column. If you don't have control over the table, another solution may be to create a view that converts from TIMESTAMP WITH TIME ZONE to TIMESTAMP via a string - sorry, I don't know if there is a way to convert directly from TIMESTAMP WITH TIME ZONE to TIMESTAMP.\n",
"My solution to this, that I hope can be bettered, is to use Oracle to explicitly convert the TIMESTAMP into a string:\ncursor.execute(\"SELECT TO_CHAR(WhenAdded, 'YYYY-MM-DD HH:MI:SSAM') FROM Log\")\n\nThis works, but isn't portable. I'd like to use the same Python script against a SQL Server database, so an Oracle-specific solution (such as TO_CHAR) won't work.\n"
] | [
2,
1
] | [] | [] | [
"ora_00932",
"oracle",
"python"
] | stackoverflow_0000038435_ora_00932_oracle_python.txt |
Q:
Ruby "is" equivalent
Is there a Ruby equivalent for Python's "is"? It tests whether two objects are identical (i.e. have the same memory location).
A:
Use a.equal? b
http://www.ruby-doc.org/core/classes/Object.html
Unlike ==, the equal? method should never be overridden by subclasses: it is used to determine object identity (that is, a.equal?(b) iff a is the same object as b).
A:
You could also use __id__. This gives you the object's internal ID number, which is always unique. To check if two objects are the same, try
a.__id__ == b.__id__
This is how Ruby's standard library does it as far as I can tell (see group_by and others).
| Ruby "is" equivalent | Is there a Ruby equivalent for Python's "is"? It tests whether two objects are identical (i.e. have the same memory location).
| [
"Use a.equal? b\nhttp://www.ruby-doc.org/core/classes/Object.html\n\nUnlike ==, the equal? method should never be overridden by subclasses: it is used to determine object identity (that is, a.equal?(b) iff a is the same object as b). \n\n",
"You could also use __id__. This gives you the objects internal ID number, which is always unique. To check if to objects are the same, try\n\na.__id__ = b.__id__\n\nThis is how Ruby's standard library does it as far as I can tell (see group_by and others).\n"
] | [
13,
2
] | [] | [] | [
"python",
"ruby"
] | stackoverflow_0000035634_python_ruby.txt |
Q:
Why is my instance variable not in __dict__?
If I create a class A as follows:
class A:
    def __init__(self):
        self.name = 'A'
Inspecting the __dict__ member looks like {'name': 'A'}
If however I create a class B:
class B:
    name = 'B'
__dict__ is empty.
What is the difference between the two, and why doesn't name show up in B's __dict__?
A:
B.name is a class attribute, not an instance attribute. It shows up in B.__dict__, but not in b = B(); b.__dict__.
The distinction is obscured somewhat because when you access an attribute on an instance, the class dict is a fallback. So in the above example, b.name will give you the value of B.name.
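A quick sketch of that fallback in action:
class B:
    name = 'B'

b = B()
print b.__dict__   # {} - nothing on the instance yet
print b.name       # 'B' - found via the class dict fallback

b.name = 'mine'    # assignment creates an instance attribute
print b.__dict__   # {'name': 'mine'}
print B.name       # 'B' - the class attribute is untouched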
A:
class A:
    def __init__(self):
        self.name = 'A'

a = A()
Creates an attribute on the object instance a of type A and it can therefore be found in: a.__dict__
class B:
    name = 'B'

b = B()
Creates an attribute on the class B; the attribute can be found in B.__dict__. Alternatively, if you have an instance b of type B, you can see the class-level attributes in b.__class__.__dict__.
| Why is my instance variable not in __dict__? | If I create a class A as follows:
class A:
    def __init__(self):
        self.name = 'A'
Inspecting the __dict__ member looks like {'name': 'A'}
If however I create a class B:
class B:
    name = 'B'
__dict__ is empty.
What is the difference between the two, and why doesn't name show up in B's __dict__?
| [
"B.name is a class attribute, not an instance attribute. It shows up in B.__dict__, but not in b = B(); b.__dict__.\nThe distinction is obscured somewhat because when you access an attribute on an instance, the class dict is a fallback. So in the above example, b.name will give you the value of B.name.\n",
"class A:\n def _ _init_ _(self):\n self.name = 'A'\na = A()\n\nCreates an attribute on the object instance a of type A and it can therefore be found in: a.__dict__\nclass B:\n name = 'B'\nb = B()\n\nCreates an attribute on the class B and the attribute can be found in B.__dict__ alternatively if you have an instance b of type B you can see the class level attributes in b.__class__.__dict__\n"
] | [
46,
12
] | [] | [] | [
"python"
] | stackoverflow_0000035805_python.txt |
Q:
Javascript equivalent of Python's locals()?
In Python one can get a dictionary of all local and global variables in the current scope with the built-in functions locals() and globals(). Is there some equivalent way of doing this in Javascript? For instance, I would like to do something like the following:
var foo = function(){ alert('foo'); };
var bar = function(){ alert('bar'); };
var s = 'foo';
locals()[s](); // alerts 'foo'
Is this at all possible, or should I just be using a local object for the lookup?
A:
locals() - No.
globals() - Yes.
window is a reference to the global scope, like globals() in python.
globals()["foo"]
is the same as:
window["foo"]
A:
Well, I don't think that there is something like that in js. You can always use eval instead of locals(). Like this:
eval(s+"()");
You just have to know that actually function foo exists.
Edit:
Don't use eval:) Use:
var functionName="myFunctionName";
window[functionName]();
A:
I seem to remember Brendan Eich commented on this in a recent podcast; if I recall correctly, it's not being considered, as it adds unreasonable restrictions to optimization. He compared it to the arguments local in that, while useful for varargs, its very existence removes the ability to guess at what a function will touch just by looking at its definition.
BTW: I believe JS did have support for accessing locals through the arguments local at one time - a quick search shows this has been deprecated though.
A:
@e-bartek, I think that window[functionName] won't work if you in some closure, and the function name is local to that closure. For example:
function foo() {
    var bar = function () {
        alert('hello world');
    };
    var s = 'bar';
    window[s](); // this won't work
}
In this case, s is 'bar', but the function 'bar' only exists inside the scope of the function 'foo'. It is not defined in the window scope.
Of course, this doesn't really answer the original question, I just wanted to chime in on this response. I don't believe there is a way to do what the original question asked.
A:
@pkaeding
Yes, you're right. window[functionName]() doesn't work in this case, but eval does. If I needed something like this, I'd create my own object to keep those functions together.
var func = {};
func.bar = ...;
var s = "bar";
func[s]();
| Javascript equivalent of Python's locals()? | In Python one can get a dictionary of all local and global variables in the current scope with the built-in functions locals() and globals(). Is there some equivalent way of doing this in Javascript? For instance, I would like to do something like the following:
var foo = function(){ alert('foo'); };
var bar = function(){ alert('bar'); };
var s = 'foo';
locals()[s](); // alerts 'foo'
Is this at all possible, or should I just be using a local object for the lookup?
| [
"\nlocals() - No. \nglobals() - Yes.\n\nwindow is a reference to the global scope, like globals() in python.\nglobals()[\"foo\"]\n\nis the same as:\nwindow[\"foo\"]\n\n",
"Well, I don't think that there is something like that in js. You can always use eval instead of locals(). Like this: \neval(s+\"()\");\n\nYou just have to know that actually function foo exists.\nEdit:\nDon't use eval:) Use:\nvar functionName=\"myFunctionName\";\nwindow[functionName]();\n\n",
"I seem to remember Brendan Eich commented on this in a recent podcast; if i recall correctly, it's not being considered, as it adds unreasonable restrictions to optimization. He compared it to the arguments local in that, while useful for varargs, its very existence removes the ability to guess at what a function will touch just by looking at its definition. \nBTW: i believe JS did have support for accessing locals through the arguments local at one time - a quick search shows this has been deprecated though.\n",
"@e-bartek, I think that window[functionName] won't work if you in some closure, and the function name is local to that closure. For example:\nfunction foo() {\n var bar = function () {\n alert('hello world');\n };\n var s = 'bar';\n window[s](); // this won't work\n}\n\nIn this case, s is 'bar', but the function 'bar' only exists inside the scope of the function 'foo'. It is not defined in the window scope.\nOf course, this doesn't really answer the original question, I just wanted to chime in on this response. I don't believe there is a way to do what the original question asked.\n",
"@pkaeding\nYes, you're right. window[functionName]() doesn't work in this case, but eval does. If I needed something like this, I'd create my own object to keep those functions together.\nvar func = {};\nfunc.bar = ...;\nvar s = \"bar\";\nfunc[s]();\n\n"
] | [
18,
4,
3,
0,
0
] | [
"AFAIK, no. If you just want to check the existence of a given variable, you can do it by testing for it, something like this:\nif (foo) foo();\n\n"
] | [
-1
] | [
"javascript",
"python"
] | stackoverflow_0000039960_javascript_python.txt |
Q:
Using C in a shared multi-platform POSIX environment
I write tools that are used in a shared workspace. Since there are multiple OS's working in this space, we generally use Python and standardize the version that is installed across machines. However, if I wanted to write some things in C, I was wondering if maybe I could have the application wrapped in a Python script, that detected the operating system and fired off the correct version of the C application. Each platform has GCC available and uses the same shell.
One idea was to have the C compiled to the user's local ~/bin, with timestamp comparison with the C code so it is not compiled each run, but only when the code is updated. Another was to just compile it for each platform, and have the wrapper script select the proper executable.
Is there an accepted/stable process for this? Are there any catches? Are there alternatives (assuming the absolute need to use native C code)?
Clarification: Multiple OS's are involved that do not share an ABI, e.g. OS X, various Linuxes, BSD, etc. I need to be able to update the code in place in shared folders and have the new code working more or less instantaneously. Distributing binary or source packages is less than ideal.
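For concreteness, here is a rough sketch of the wrapper idea (the tool name, directory layout and platform mapping are all made up for illustration):
import os
import sys

BINDIR = '/shared/bin'   # hypothetical shared location

def platform_suffix():
    # sys.platform is 'darwin' on OS X, 'linux2' on Linux under
    # Python 2.5, and 'freebsd6', 'netbsd3', etc. on the BSDs
    if sys.platform == 'darwin':
        return 'mac'
    if sys.platform.startswith('linux'):
        return 'linux'
    return sys.platform

binary = os.path.join(BINDIR, 'mytool-' + platform_suffix())
os.execv(binary, [binary] + sys.argv[1:])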
A:
Launching a Python interpreter instance just to select the right binary to run would be much heavier than you need. I'd distribute a shell .rc file which provides aliases.
In /shared/bin, you put the various binaries: /shared/bin/toolname-mac, /shared/bin/toolname-debian-x86, /shared/bin/toolname-netbsd-dreamcast, etc. Then, in the common shared shell .rc file, you put the logic to set the aliases according to platform, so that on OSX, it gets alias toolname=/shared/bin/toolname-mac, and so forth.
This won't work as well if you're adding new tools all the time, because the users will need to reload the aliases.
I wouldn't recommend distributing tools this way, though. Testing and qualifying new builds of the tools should be taking up enough time and effort that the extra time required to distribute the tools to the users is trivial. You seem to be optimizing to reduce the distribution time. Replacing tools that quickly in a live environment is all too likely to result in lengthy and confusing downtime if anything goes wrong in writing and building the tools--especially when subtle cross-platform issues creep in.
A:
Also, you could use autoconf and distribute your application in source form only. :)
A:
You know, you should look at static linking.
These days, we all have HUGE hard drives, and a few extra megabytes (for carrying around libc and what not) is really not that big a deal anymore.
You could also try running your applications in chroot() jails and distributing those.
A:
Depending on your mix of OSes, you might be better off creating packages for each class of system.
Alternatively, if they all share the same ABI and hardware architecture, you could also compile static binaries.
| Using C in a shared multi-platform POSIX environment | I write tools that are used in a shared workspace. Since there are multiple OS's working in this space, we generally use Python and standardize the version that is installed across machines. However, if I wanted to write some things in C, I was wondering if maybe I could have the application wrapped in a Python script, that detected the operating system and fired off the correct version of the C application. Each platform has GCC available and uses the same shell.
One idea was to have the C compiled to the user's local ~/bin, with timestamp comparison with the C code so it is not compiled each run, but only when the code is updated. Another was to just compile it for each platform, and have the wrapper script select the proper executable.
Is there an accepted/stable process for this? Are there any catches? Are there alternatives (assuming the absolute need to use native C code)?
Clarification: Multiple OS's are involved that do not share an ABI, e.g. OS X, various Linuxes, BSD, etc. I need to be able to update the code in place in shared folders and have the new code working more or less instantaneously. Distributing binary or source packages is less than ideal.
| [
"Launching a Python interpreter instance just to select the right binary to run would be much heavier than you need. I'd distribute a shell .rc file which provides aliases.\nIn /shared/bin, you put the various binaries: /shared/bin/toolname-mac, /shared/bin/toolname-debian-x86, /shared/bin/toolname-netbsd-dreamcast, etc. Then, in the common shared shell .rc file, you put the logic to set the aliases according to platform, so that on OSX, it gets alias toolname=/shared/bin/toolname-mac, and so forth.\nThis won't work as well if you're adding new tools all the time, because the users will need to reload the aliases.\nI wouldn't recommend distributing tools this way, though. Testing and qualifying new builds of the tools should be taking up enough time and effort that the extra time required to distribute the tools to the users is trivial. You seem to be optimizing to reduce the distribution time. Replacing tools that quickly in a live environment is all too likely to result in lengthy and confusing downtime if anything goes wrong in writing and building the tools--especially when subtle cross-platform issues creep in.\n",
"Also, you could use autoconf and distribute your application in source form only. :)\n",
"You know, you should look at static linking.\nThese days, we all have HUGE hard drives, and a few extra megabytes (for carrying around libc and what not) is really not that big a deal anymore. \nYou could also try running your applications in chroot() jails and distributing those.\n",
"Depending on your mix os OSes, you might be better off creating packages for each class of system.\nAlternatively, if they all share the same ABI and hardware architecture, you could also compile static binaries.\n"
] | [
2,
1,
0,
0
] | [] | [] | [
"c",
"cross_platform",
"posix",
"python",
"scripting"
] | stackoverflow_0000039847_c_cross_platform_posix_python_scripting.txt |
Q:
cx_Oracle: how do I get the ORA-xxxxx error number?
In a try/except block, how do I extract the Oracle error number?
A:
try:
cursor.execute("select 1 / 0 from dual")
except cx_Oracle.DatabaseError, e:
error, = e
print "Code:", error.code
print "Message:", error.message
This results in the following output:
Code: 1476
Message: ORA-01476: divisor is equal to zero
| cx_Oracle: how do I get the ORA-xxxxx error number? | In a try/except block, how do I extract the Oracle error number?
| [
"try:\n cursor.execute(\"select 1 / 0 from dual\")\nexcept cx_Oracle.DatabaseError, e:\n error, = e\n print \"Code:\", error.code\n print \"Message:\", error.message\n\nThis results in the following output:\nCode: 1476\nMessage: ORA-01476: divisor is equal to zero\n\n"
] | [
13
] | [] | [] | [
"cx_oracle",
"oracle",
"python"
] | stackoverflow_0000040586_cx_oracle_oracle_python.txt |
Q:
Is there a python module for regex matching in zip files
I have over a million text files compressed into 40 zip files. I also have a list of about 500 model names of phones. I want to find out the number of times a particular model was mentioned in the text files.
Is there any python module which can do a regex match on the files without unzipping them? Is there a simple way to solve this problem without unzipping?
A:
There's nothing that will automatically do what you want.
However, there is a python zipfile module that will make this easy to do. Here's how to iterate over the lines in the file.
#!/usr/bin/python

import zipfile
f = zipfile.ZipFile('myfile.zip')

for subfile in f.namelist():
    print subfile
    data = f.read(subfile)
    for line in data.split('\n'):
        print line
A:
You could loop through the zip files, reading individual files using the zipfile module and running your regex on those, eliminating the need to unzip all the files at once.
I'm fairly certain that you can't run a regex over the zipped data, at least not meaningfully.
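For what it's worth, a sketch of the whole counting job along those lines (the archive names and model list are placeholders):
import re
import zipfile

models = ['N95', 'iPhone', 'E71']   # your ~500 model names
pattern = re.compile('|'.join(re.escape(m) for m in models))

counts = {}
for zipname in ['archive01.zip', 'archive02.zip']:   # your 40 zip files
    archive = zipfile.ZipFile(zipname)
    for member in archive.namelist():
        # count every occurrence of every model in this text file
        for match in pattern.findall(archive.read(member)):
            counts[match] = counts.get(match, 0) + 1

for model, n in counts.items():
    print model, n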
A:
To access the contents of a zip file you have to unzip it, although the zipfile package makes this fairly easy, as you can unzip each file within an archive individually.
Python zipfile module
A:
Isn't it (at least theoretically) possible to read in the ZIP's Huffman coding and then translate the regexp into the Huffman code? Might this be more efficient than first de-compressing the data, then running the regexp?
(Note: I know it wouldn't be quite that simple: you'd also have to deal with other aspects of the ZIP coding—file layout, block structures, back-references—but one imagines this could be fairly lightweight.)
EDIT: Also note that it's probably much more sensible to just use the zipfile solution.
| Is there a python module for regex matching in zip files | I have over a million text files compressed into 40 zip files. I also have a list of about 500 model names of phones. I want to find out the number of times a particular model was mentioned in the text files.
Is there any python module which can do a regex match on the files without unzipping them? Is there a simple way to solve this problem without unzipping?
| [
"There's nothing that will automatically do what you want.\nHowever, there is a python zipfile module that will make this easy to do. Here's how to iterate over the lines in the file.\n#!/usr/bin/python\n\nimport zipfile\nf = zipfile.ZipFile('myfile.zip')\n\nfor subfile in f.namelist():\n print subfile\n data = f.read(subfile)\n for line in data.split('\\n'):\n print line\n\n",
"You could loop through the zip files, reading individual files using the zipfile module and running your regex on those, eliminating to unzip all the files at once. \nI'm fairly certain that you can't run a regex over the zipped data, at least not meaningfully.\n",
"To access the contents of a zip file you have to unzip it, although the zipfile package makes this fairly easy, as you can unzip each file within an archive individually.\nPython zipfile module\n",
"Isn't it (at least theoretically) possible, to read in the ZIP's Huffman coding and then translate the regexp into the Huffman code? Might this be more efficient than first de-compressing the data, then running the regexp?\n(Note: I know it wouldn't be quite that simple: you'd also have to deal with other aspects of the ZIP coding—file layout, block structures, back-references—but one imagines this could be fairly lightweight.)\nEDIT: Also note that it's probably much more sensible to just use the zipfile solution.\n"
] | [
10,
0,
0,
0
] | [] | [] | [
"python",
"regex",
"text_processing",
"zip"
] | stackoverflow_0000014281_python_regex_text_processing_zip.txt |
Q:
How do I do monkeypatching in python?
I've had to do some introspection in python and it wasn't pretty:
name = sys._getframe(1).f_code
name = "%s:%d %s()" %(os.path.split(name.co_filename)[1],name.co_firstlineno,name.co_name)
To get something like
foo.py:22 bar() blah blah
In our debugging output.
I'd ideally like to prepend anything to stderr with this sort of information --
Is it possible to change the behaviour of print globally within python?
A:
A print statement does its IO through "sys.stdout.write" so you can override sys.stdout if you want to manipulate the print stream.
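A sketch of such an override, reusing the frame-inspection code from the question (flush() and writelines() handling omitted; the same trick works for sys.stderr):
import os
import sys

class AnnotatedStream(object):
    def __init__(self, stream):
        self.stream = stream

    def write(self, text):
        if text.strip():   # skip the bare-newline writes that print emits
            code = sys._getframe(1).f_code
            where = "%s:%d %s()" % (os.path.split(code.co_filename)[1],
                                    code.co_firstlineno, code.co_name)
            self.stream.write("%s " % where)
        self.stream.write(text)

sys.stdout = AnnotatedStream(sys.stdout)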
A:
The python inspect module makes this a lot easier and cleaner.
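For example, a tidier version of the question's snippet (a sketch):
import inspect
import os

def where_called():
    # inspect.stack()[1] describes the frame that called us
    frame, filename, lineno, func, _, _ = inspect.stack()[1]
    return "%s:%d %s()" % (os.path.basename(filename), lineno, func)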
| How do I do monkeypatching in python? | I've had to do some introspection in python and it wasn't pretty:
name = sys._getframe(1).f_code
name = "%s:%d %s()" %(os.path.split(name.co_filename)[1],name.co_firstlineno,name.co_name)
To get something like
foo.py:22 bar() blah blah
In our debugging output.
I'd ideally like to prepend anything to stderr with this sort of information --
Is it possible to change the behaviour of print globally within python?
| [
"A print statement does its IO through \"sys.stdout.write\" so you can override sys.stdout if you want to manipulate the print stream.\n",
"The python inspect module makes this a lot easier and cleaner. \n"
] | [
3,
1
] | [] | [] | [
"monkeypatching",
"python"
] | stackoverflow_0000041562_monkeypatching_python.txt |
Q:
Standard way to open a folder window in linux?
I want to open a folder window, in the appropriate file manager, from within a cross-platform (windows/mac/linux) Python application.
On OSX, I can open a window in the finder with
os.system('open "%s"' % foldername)
and on Windows with
os.startfile(foldername)
What about unix/linux? Is there a standard way to do this or do I have to special case gnome/kde/etc and manually run the appropriate application (nautilus/konqueror/etc)?
This looks like something that could be specified by the freedesktop.org folks (a python module, similar to webbrowser, would also be nice!).
A:
os.system('xdg-open "%s"' % foldername)
xdg-open can be used for files/urls also
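Putting the three platforms together into one helper (a sketch; error handling left out):
import os
import sys
import subprocess

def open_folder(foldername):
    if sys.platform == 'darwin':
        subprocess.call(['open', foldername])
    elif sys.platform.startswith('win'):
        os.startfile(foldername)
    else:
        subprocess.call(['xdg-open', foldername])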
A:
This would probably have to be done manually, or be a config item, since there are many file managers that users may want to use, providing a way for command options as well.
There might be a function that launches the defaults for KDE or GNOME in their respective toolkits, but I haven't had reason to look for them.
A:
You're going to have to do this based on the running window manager. OSX and Windows have a (defacto) standard way because there is only one choice.
You shouldn't need to specify the exact filemanager application, though, this should be possible to do through the wm. I know Gnome does, and it's important to do this in KDE since there are two possible file managers (Konqueror/Dolphin) that may be in use.
I agree that this would be a good thing for freedesktop.org to standardize, although I doubt it will happen unless someone steps up and volunteers to do it.
EDIT: I wasn't aware of xdg-open. Good to know!
| Standard way to open a folder window in linux? | I want to open a folder window, in the appropriate file manager, from within a cross-platform (windows/mac/linux) Python application.
On OSX, I can open a window in the finder with
os.system('open "%s"' % foldername)
and on Windows with
os.startfile(foldername)
What about unix/linux? Is there a standard way to do this or do I have to special case gnome/kde/etc and manually run the appropriate application (nautilus/konqueror/etc)?
This looks like something that could be specified by the freedesktop.org folks (a python module, similar to webbrowser, would also be nice!).
| [
"os.system('xdg-open \"%s\"' % foldername)\n\nxdg-open can be used for files/urls also\n",
"this would probably have to be done manually, or have as a config item since there are many file managers that users may want to use. Providing a way for command options as well.\nThere might be an function that launches the defaults for kde or gnome in their respective toolkits but I haven't had reason to look for them.\n",
"You're going to have to do this based on the running window manager. OSX and Windows have a (defacto) standard way because there is only one choice.\nYou shouldn't need to specify the exact filemanager application, though, this should be possible to do through the wm. I know Gnome does, and it's important to do this in KDE since there are two possible file managers (Konqueror/Dolphin) that may be in use.\nI agree that this would be a good thing for freedesktop.org to standardize, although I doubt it will happen unless someone steps up and volunteers to do it.\n\nEDIT: I wasn't aware of xdg-open. Good to know!\n"
] | [
15,
0,
0
] | [] | [] | [
"cross_platform",
"desktop",
"linux",
"python"
] | stackoverflow_0000041969_cross_platform_desktop_linux_python.txt |
Q:
Pure Python library to generate Identicons?
Does anyone know of a FOSS Python lib for generating Identicons? I've looked, but so far I haven't had much luck.
A:
I've found two implementations:
http://coderepos.org/share/browser/lang/python/misc/identicon.py
http://code.google.com/p/visicon/
| Pure Python library to generate Identicons? | Does anyone know of a FOSS Python lib for generating Identicons? I've looked, but so far I haven't had much luck.
| [
"I've found two implementations:\nhttp://coderepos.org/share/browser/lang/python/misc/identicon.py\nhttp://code.google.com/p/visicon/\n"
] | [
12
] | [] | [] | [
"identicon",
"python"
] | stackoverflow_0000042093_identicon_python.txt |
Q:
How can I get a commit message from a bzr post-commit hook?
I'm trying to write a bzr post-commit hook for my private bugtracker, but I'm stuck at the function signature of
post_commit(local, master, old_revno, old_revid, new_revno, new_revid)
How can I extract the commit message for the branch from this with bzrlib in Python?
A:
And the answer is like so:
def check_commit_msg(local, master, old_revno, old_revid, new_revno, new_revid):
    branch = local or master
    revision = branch.repository.get_revision(new_revid)
    print revision.message
local and master are Branch objects, so once you have a revision, it's easy to extract the message.
| How can I get a commit message from a bzr post-commit hook? | I'm trying to write a bzr post-commit hook for my private bugtracker, but I'm stuck at the function signature of
post_commit(local, master, old_revno, old_revid, new_revno, new_revid)
How can I extract the commit message for the branch from this with bzrlib in Python?
| [
"And the answer is like so:\ndef check_commit_msg(local, master, old_revno, old_revid, new_revno, new_revid):\n branch = local or master\n revision = branch.repository.get_revision(new_revid)\n print revision.message\n\nlocal and master are Branch objects, so once you have a revision, it's easy to extract the message.\n"
] | [
5
] | [] | [] | [
"bazaar",
"dvcs",
"python"
] | stackoverflow_0000043099_bazaar_dvcs_python.txt |
Q:
Can the HTTP version or headers affect the visual appearance of a web page?
I know, I would have thought the answer was obviously "no" as well, but I am experiencing a strange situation where when I view my site from our staging server it appears slightly larger than when I view it from my local dev server. I have used Charles to confirm that all of the content -- the HTML, the images, the CSS, the javascript, everything is the same.
The ONLY difference in the traffic is that (because the local site is served from the Django development mode server) the response headers look like this:
HTTP/1.0 200 OK
Server WSGIServer/0.1 Python/2.5.2
Date Thu, 04 Sep 2008 23:56:10 GMT
Vary Cookie
Content-Length 2301
Content-Type text/html; charset=utf-8
Whereas on the staging server (where Django is running inside Apache) the headers look like this:
HTTP/1.1 200 OK
Date Thu, 04 Sep 2008 23:56:06 GMT
Server Apache/2.2.8 (Ubuntu) mod_python/3.3.1 Python/2.5.2 PHP/5.2.4-2ubuntu5 with Suhosin-Patch
Vary Cookie
Content-Length 2301
Content-Type text/html; charset=utf-8
So, as far as I can tell the only differences are HTTP/1.1 vs HTTP/1.0, the server identifier (Apache vs WSGIServer) and the order of the Date/Server headers.
To elaborate a bit further on the differences in appearance, basically it appears as if the version of the site on the staging server is "zoomed in" by about 10%. For example, the primary logo which dominates our home page is 220 pixels wide, but when served from our staging server it shows up as 245 pixels wide. Everything else on the page (other images, text, spacing, etc.) is also proportionately larger.
This is all in Firefox 3. I don't have any other browsers available to test with at the moment.
Has anyone else encountered any bizarre behavior anything like this before? I am at a loss.
A:
Have you tried View -> Zoom -> Reset on both sites?
| Can the HTTP version or headers affect the visual appearance of a web page? | I know, I would have thought the answer was obviously "no" as well, but I am experiencing a strange situation where when I view my site from our staging server it appears slightly larger than when I view it from my local dev server. I have used Charles to confirm that all of the content -- the HTML, the images, the CSS, the javascript, everything is the same.
The ONLY difference in the traffic is that (because the local site is served from the Django development mode server) the response headers look like this:
HTTP/1.0 200 OK
Server WSGIServer/0.1 Python/2.5.2
Date Thu, 04 Sep 2008 23:56:10 GMT
Vary Cookie
Content-Length 2301
Content-Type text/html; charset=utf-8
Whereas on the staging server (where Django is running inside Apache) the headers look like this:
HTTP/1.1 200 OK
Date Thu, 04 Sep 2008 23:56:06 GMT
Server Apache/2.2.8 (Ubuntu) mod_python/3.3.1 Python/2.5.2 PHP/5.2.4-2ubuntu5 with Suhosin-Patch
Vary Cookie
Content-Length 2301
Content-Type text/html; charset=utf-8
So, as far as I can tell the only differences are HTTP/1.1 vs HTTP/1.0, the server identifier (Apache vs WSGIServer) and the order of the Date/Server headers.
To elaborate a bit further on the differences in appearance, basically it appears as if the version of the site on the staging server is "zoomed in" by about 10%. For example, the primary logo which dominates our home page is 220 pixels wide, but when served from our staging server it shows up as 245 pixels wide. Everything else on the page (other images, text, spacing, etc.) is also proportionately larger.
This is all in Firefox 3. I don't have any other browsers available to test with at the moment.
Has anyone else encountered any bizarre behavior anything like this before? I am at a loss.
| [
"Have you tried View -> Zoom -> Reset on both sites?\n"
] | [
9
] | [] | [] | [
"django",
"firefox",
"python"
] | stackoverflow_0000045013_django_firefox_python.txt |
Q:
Python packages - import by class, not file
Say I have the following file structure:
app/
    app.py
    controllers/
        __init__.py
        project.py
        plugin.py
If app/controllers/project.py defines a class Project, app.py would import it like this:
from app.controllers.project import Project
I'd like to just be able to do:
from app.controllers import Project
How would this be done?
A:
You need to put
from project import Project
in controllers/__init__.py.
Note that when Absolute imports become the default (Python 2.7?), you will want to add a dot before the module name (to avoid collisions with a top-level module named project), i.e.,
from .project import Project
| Python packages - import by class, not file | Say I have the following file structure:
app/
    app.py
    controllers/
        __init__.py
        project.py
        plugin.py
If app/controllers/project.py defines a class Project, app.py would import it like this:
from app.controllers.project import Project
I'd like to just be able to do:
from app.controllers import Project
How would this be done?
| [
"You need to put\nfrom project import Project\n\nin controllers/__init__.py.\nNote that when Absolute imports become the default (Python 2.7?), you will want to add a dot before the module name (to avoid collisions with a top-level model named project), i.e.,\nfrom .project import Project\n\n"
] | [
103
] | [] | [] | [
"package",
"python"
] | stackoverflow_0000045122_package_python.txt |
Q:
Where can I find the time and space complexity of the built-in sequence types in Python
I've been unable to find a source for this information, short of looking through the Python source code myself to determine how the objects work. Does anyone know where I could find this online?
A:
Check out the TimeComplexity page on the py dot org wiki. It covers sets/dicts/lists/etc. at least as far as time complexity goes.
A:
Raymond D. Hettinger does an excellent talk (slides) about Python's built-in collections called 'Core Python Containers - Under the Hood'. The version I saw focussed mainly on set and dict, but list was covered too.
There are also some photos of the pertinent slides from EuroPython in a blog.
Here is a summary of my notes on list:
Stores items as an array of pointers. Subscript costs O(1) time. Append costs amortized O(1) time. Insert costs O(n) time.
Tries to avoid memcpy when growing by over-allocating. Many small lists will waste a lot of space, but large lists never waste more than about 12.5% to overallocation.
Some operations pre-size. Examples given were range(n), map(), list(), [None] * n, and slicing.
When shrinking, the array is realloced only when it is wasting 50% of space. pop is cheap.
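You can see the append vs. insert difference directly with timeit (exact numbers will vary by machine):
import timeit

# append is amortized O(1); insert(0, ...) shifts every element, so the
# second timing grows roughly quadratically with the number of calls
print timeit.Timer('L.append(0)', 'L = []').timeit(100000)
print timeit.Timer('L.insert(0, 0)', 'L = []').timeit(100000)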
A:
If you're asking what I think you're asking, you can find them Here... page 476 and on.
It's written around optimization techniques for Python; it's mostly Big-O notation of time efficiencies, not much on memory.
| Where can I find the time and space complexity of the built-in sequence types in Python | I've been unable to find a source for this information, short of looking through the Python source code myself to determine how the objects work. Does anyone know where I could find this online?
| [
"Checkout the TimeComplexity page on the py dot org wiki. It covers set/dicts/lists/etc at least as far as time complexity goes.\n",
"Raymond D. Hettinger does an excellent talk (slides) about Python's built-in collections called 'Core Python Containers - Under the Hood'. The version I saw focussed mainly on set and dict, but list was covered too.\nThere are also some photos of the pertinent slides from EuroPython in a blog.\nHere is a summary of my notes on list:\n\nStores items as an array of pointers. Subscript costs O(1) time. Append costs amortized O(1) time. Insert costs O(n) time.\nTries to avoid memcpy when growing by over-allocating. Many small lists will waste a lot of space, but large lists never waste more than about 12.5% to overallocation.\nSome operations pre-size. Examples given were range(n), map(), list(), [None] * n, and slicing.\nWhen shrinking, the array is realloced only when it is wasting 50% of space. pop is cheap.\n\n",
"If your asking what I think your asking, you can find them Here... page 476 and on.\nIt's written around optimization techniques for Python; It's mostly Big-O notation of time efficiencies not much memory.\n"
] | [
19,
15,
2
] | [] | [] | [
"big_o",
"complexity_theory",
"performance",
"python",
"sequences"
] | stackoverflow_0000045228_big_o_complexity_theory_performance_python_sequences.txt |
Q:
Pylons error - 'MySQL server has gone away'
I'm using Pylons (a python framework) to serve a simple web application, but it seems to die from time to time, with this in the error log: (2006, 'MySQL server has gone away')
I did a bit of checking, and saw that this was because the connections to MySQL were not being renewed. This shouldn't be a problem though, because the sqlalchemy.pool_recycle in the config file should automatically keep it alive. The default was 3600, but I dialed it back to 1800 because of this problem. It helped a bit, but 3600 should be fine according to the docs. The errors still happen semi-regularly. I don't want to lower it too much though and DOS my own database :).
Maybe something in my MySQL config is goofy? Not sure where to look exactly.
Other relevant details:
Python 2.5
Pylons: 0.9.6.2 (w/ sql_alchemy)
MySQL: 5.0.51
A:
I think I fixed it. It turns out I had a simple config error. My ini file read:
sqlalchemy.default.url = [connection string here]
sqlalchemy.pool_recycle = 1800
The problem is that my environment.py file declared that the engine would only map keys with the prefix: sqlalchemy.default so pool_recycle was ignored.
The solution is to simply change the second line in the ini to:
sqlalchemy.default.pool_recycle = 1800
A:
You might want to check MySQL's timeout variables:
show variables like '%timeout%';
You're probably interested in wait_timeout (less likely but possible: interactive_timeout). On Debian and Ubuntu, the defaults are 28800 (MySQL kills connections after 8 hours), but maybe the default for your platform is different or whoever administrates the server has configured things differently.
AFAICT, pool_recycle doesn't actually keep the connections alive, it expires them on its own before MySQL kills them. I'm not familiar with pylons, but if causing the connections to intermittently do a SELECT 1; is an option, that will keep them alive at the cost of basically no server load and minimal network traffic. One final thought: are you somehow managing to use a connection that pylons thinks it has expired?
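For reference, the same setting can be passed straight to SQLAlchemy if you build the engine yourself, and the ping really is just a trivial query (a sketch; the connection string is a placeholder):
import sqlalchemy

engine = sqlalchemy.create_engine('mysql://user:pwd@host/dbname',
                                  pool_recycle=1800)

def ping(engine):
    # keeps an otherwise idle connection from hitting wait_timeout
    engine.execute('SELECT 1')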
| Pylons error - 'MySQL server has gone away' | I'm using Pylons (a python framework) to serve a simple web application, but it seems to die from time to time, with this in the error log: (2006, 'MySQL server has gone away')
I did a bit of checking, and saw that this was because the connections to MySQL were not being renewed. This shouldn't be a problem though, because the sqlalchemy.pool_recycle in the config file should automatically keep it alive. The default was 3600, but I dialed it back to 1800 because of this problem. It helped a bit, but 3600 should be fine according to the docs. The errors still happen semi-regularly. I don't want to lower it too much though and DOS my own database :).
Maybe something in my MySQL config is goofy? Not sure where to look exactly.
Other relevant details:
Python 2.5
Pylons: 0.9.6.2 (w/ sql_alchemy)
MySQL: 5.0.51
| [
"I think I fixed it. It's turns out I had a simple config error. My ini file read:\nsqlalchemy.default.url = [connection string here]\nsqlalchemy.pool_recycle = 1800\n\nThe problem is that my environment.py file declared that the engine would only map keys with the prefix: sqlalchemy.default so pool_recycle was ignored.\nThe solution is to simply change the second line in the ini to:\nsqlalchemy.default.pool_recycle = 1800\n\n",
"You might want to check MySQL's timeout variables:\nshow variables like '%timeout%';\n\nYou're probably interested in wait_timeout (less likely but possible: interactive_timeout). On Debian and Ubuntu, the defaults are 28800 (MySQL kills connections after 8 hours), but maybe the default for your platform is different or whoever administrates the server has configured things differently.\nAFAICT, pool_recycle doesn't actually keep the connections alive, it expires them on its own before MySQL kills them. I'm not familiar with pylons, but if causing the connections to intermittently do a SELECT 1; is an option, that will keep them alive at the cost of basically no server load and minimal network traffic. One final thought: are you somehow managing to use a connection that pylons thinks it has expired?\n"
] | [
8,
2
] | [] | [] | [
"mysql",
"pylons",
"python"
] | stackoverflow_0000008154_mysql_pylons_python.txt |
Q:
Django: Print url of view without hardcoding the url
Can I print out a url /admin/manage/products/add of a certain view in a template?
Here is the rule i want to create a link for
(r'^manage/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}),
I would like to have /manage/products/add in a template without hardcoding it. How can I do this?
Edit: I am not using the default admin (well, I am, but it is at another url); this is my own.
A:
You can use get_absolute_url, but that will only work for a particular object. Since your object hasn't been created yet, it won't work in this case.
You want to use named URL patterns. Here's a quick intro:
Change the line in your urls.py to:
(r'^manage/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}, "create-product"),
Then, in your template you use this to display the URL:
{% url create-product %}
If you're using Django 1.5 or higher you need this:
{% url 'create-product' %}
You can do some more powerful things with named URL patterns, they're very handy. Note that they are only in the development version (and also 1.0).
A:
If you use named url patterns you can do the following in your template
{% url create_object %}
A:
The preferred way of creating the URL is by adding a get_absolute_url method to your model classes. You can hardcode the path there so you at least get closer to following the KISS philosophy.
You can go further by utilizing the permalink decorator that figures the path based on the urls configuration.
You can read more in the django documentation here.
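A sketch of what that looks like (the field, the 'product-detail' pattern name and its kwargs are assumptions for illustration):
from django.db import models

class Product(models.Model):
    name = models.CharField(max_length=100)

    @models.permalink
    def get_absolute_url(self):
        # resolved against a named URL pattern, so no path is hardcoded
        return ('product-detail', (), {'id': self.id})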
| Django: Print url of view without hardcoding the url | Can I print out a url /admin/manage/products/add of a certain view in a template?
Here is the rule i want to create a link for
(r'^manage/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}),
I would like to have /manage/products/add in a template without hardcoding it. How can I do this?
Edit: I am not using the default admin (well, I am, but it is at another url); this is my own.
| [
"You can use get_absolute_url, but that will only work for a particular object. Since your object hasn't been created yet, it won't work in this case.\nYou want to use named URL patterns. Here's a quick intro:\nChange the line in your urls.py to:\n(r'^manage/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}, \"create-product\"),\n\nThen, in your template you use this to display the URL:\n{% url create-product %}\n\nIf you're using Django 1.5 or higher you need this:\n{% url 'create-product' %}\n\nYou can do some more powerful things with named URL patterns, they're very handy. Note that they are only in the development version (and also 1.0).\n",
"If you use named url patterns you can do the follwing in your template\n{% url create_object %}\n\n",
"The preferred way of creating the URL is by adding a get_absolute_url method to your model classes. You can hardcode the path there so you at least get closer to following the KISS philosophy.\nYou can go further by utilizing the permalink decorator that figures the path based on the urls configuration.\nYou can read more in the django documentation here.\n"
] | [
17,
2,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000047207_django_python.txt |
Q:
Python: No module named core.exceptions
I'm trying to get Google AppEngine to work on my Debian box and am getting the following error when I try to access my page:
<type 'exceptions.ImportError'>: No module named core.exceptions
The same app works fine for me when I run it on my other Ubuntu box, so I know it's not a problem with the app itself. However, I need to get it working on this Debian box. It originally had python 2.4 but after AppEngine complained about it I installed the python2.5 and python2.5-dev packages (to no avail).
I saw on this Google Group post that it may be due to the version of AppEngine and just to reinstall it, but that didn't work. Any ideas?
Edit 1: Also tried uninstalling python2.4 and 2.5 then reinstalling 2.5, which also didn't work.
Edit 2: Turns out when I made AppEngine into a CVS project it didn't add the core directory into my project, so when I checked it out there literally was no module named core.exceptions. Re-downloading that folder resolved the problem.
A:
core.exceptions is part of django; what version of django do you have installed? The AppEngine comes with the appropriate version for whatever release you've downloaded (in the lib/django directory). It can be installed by going to that directory and running python setup.py install
| Python: No module named core.exceptions | I'm trying to get Google AppEngine to work on my Debian box and am getting the following error when I try to access my page:
<type 'exceptions.ImportError'>: No module named core.exceptions
The same app works fine for me when I run it on my other Ubuntu box, so I know it's not a problem with the app itself. However, I need to get it working on this Debian box. It originally had python 2.4 but after AppEngine complained about it I installed the python2.5 and python2.5-dev packages (to no avail).
I saw on this Google Group post that it may be due to the version of AppEngine and just to reinstall it, but that didn't work. Any ideas?
Edit 1: Also tried uninstalling python2.4 and 2.5 then reinstalling 2.5, which also didn't work.
Edit 2: Turns out when I made AppEngine into a CVS project it didn't add the core directory into my project, so when I checked it out there literally was no module named core.exceptions. Re-downloading that folder resolved the problem.
| [
"core.exceptions is part of django; what version of django do you have installed? The AppEngine comes with the appropriate version for whatever release you've downloaded (in the lib/django directory). It can be installed by going to that directory and running python setup.py install\n"
] | [
6
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0000048777_google_app_engine_python.txt |
Q:
Python descriptor protocol analog in other languages?
Is there something like the Python descriptor protocol implemented in other languages? It seems like a nice way to increase modularity/encapsulation without bloating your containing class' implementation, but I've never heard of a similar thing in any other languages. Is it likely absent from other languages because of the lookup overhead?
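For readers unfamiliar with it, a minimal sketch of the protocol in question - an object implementing __get__/__set__ that a containing class exposes as an attribute (the example is purely illustrative):
class NonNegative(object):
    # a data descriptor that rejects negative assignments
    def __init__(self, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__[self.name]

    def __set__(self, obj, value):
        if value < 0:
            raise ValueError("%s must be >= 0" % self.name)
        obj.__dict__[self.name] = value

class Account(object):
    balance = NonNegative('balance')

acct = Account()
acct.balance = 10    # routed through NonNegative.__set__
print(acct.balance)  # 10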
A:
I've not heard of a direct equivalent either. You could probably achieve the same effect with macros, especially in a language like Lisp which has extremely powerful macros.
I wouldn't be at all surprised if other languages start to incorporate something similar because it is so powerful.
A:
Ruby and C# both easily let you create accessors by specifying getter/setter methods for an attribute, much like in Python. However, this isn't designed to naturally let you write the code for these methods in another class the way that Python allows. In practice, I'm not sure how much this matters, since every time I've seen an attribute defined through the descriptor protocol its been implemented in the same class.
EDIT: Darn my dyslexia (by which I mean careless reading). For some reason I've always read "descriptor" as "decorator" and vice versa, even when I'm the one typing both of them. I'll leave my post intact since it has valid information, albeit information which has absolutely nothing to do with the question.
The term "decorator" itself is actually the name of a design pattern described in the famous "Design Patterns" book. The Wikipedia article contains many examples in different programming languages of decorator usage: http://en.wikipedia.org/wiki/Decorator_pattern
However, the decorators in that article are object-oriented; they have classes implementing a predefined interface which lets another existing class behave differently somehow, etc. Python decorators act in a functional way by replacing a function at runtime with another function, allowing you to effectively modify/replace that function, insert code, etc.
This is known in the Java world as Aspect-Oriented programming, and the AspectJ Java compiler lets you do these kinds of things and compile your AspectJ code (which is a superset of Java) into Java bytecode.
I'm not familiar enough with C# or Ruby to know what their version of decorators would be.
| Python descriptor protocol analog in other languages? | Is there something like the Python descriptor protocol implemented in other languages? It seems like a nice way to increase modularity/encapsulation without bloating your containing class' implementation, but I've never heard of a similar thing in any other languages. Is it likely absent from other languages because of the lookup overhead?
| [
"I've not heard of a direct equivalent either. You could probably achieve the same effect with macros, especially in a language like Lisp which has extremely powerful macros.\nI wouldn't be at all surprised if other languages start to incorporate something similar because it is so powerful.\n",
"Ruby and C# both easily let you create accessors by specifying getter/setter methods for an attribute, much like in Python. However, this isn't designed to naturally let you write the code for these methods in another class the way that Python allows. In practice, I'm not sure how much this matters, since every time I've seen an attribute defined through the descriptor protocol its been implemented in the same class.\nEDIT: Darn my dyslexia (by which I mean careless reading). For some reason I've always read \"descriptor\" as \"decorator\" and vice versa, even when I'm the one typing both of them. I'll leave my post intact since it has valid information, albeit information which has absolutely nothing to do with the question.\nThe term \"decorator\" itself is actually the name of a design pattern described in the famous \"Design Patterns\" book. The Wikipedia article contains many examples in different programming languages of decorator usage: http://en.wikipedia.org/wiki/Decorator_pattern\nHowever, the decorators in that article object-oriented; they have classes implementing a predefined interface which lets another existing class behave differently somehow, etc. Python decorators act in a functional way by replacing a function at runtime with another function, allowing you to effectively modify/replace that function, insert code, etc.\nThis is known in the Java world as Aspect-Oriented programming, and the AspectJ Java compiler lets you do these kinds of things and compile your AspectJ code (which is a superset of Java) into Java bytecode.\nI'm not familiar enough with C# or Ruby to know what their version of decorators would be.\n"
] | [
4,
0
] | [] | [] | [
"encapsulation",
"language_features",
"python"
] | stackoverflow_0000034243_encapsulation_language_features_python.txt |
Q:
How do I implement a pre-commit hook script in SVN that calls dos2unix to validate checked-in file
I was wondering if anyone here had some experience writing this type of script and if they could give me some pointers.
I would like to modify this script to validate that the checked-in file does not have a Carriage Return in the EOL formatting. The EOL format is CR LF in Windows and LF in Unix. When a user checks in code with the Windows format, it does not compile in Unix anymore. I know this can be done on the client side, but I need to have this validation done on the server side. To achieve this, I need to do the following:
1) Make sure the file I check is not binary. I don't know how to do this with svnlook; should I check the svn:mime-type property of the file? The Red Book does not indicate this clearly, or I must have not seen it.
2) I would like to run the dos2unix command to validate that the file has the correct EOL format. I would compare the output of the dos2unix command against the original file. If there is a diff between both, I give an error message to the client and cancel the check-in.
I would like your comments/feedback on this approach.
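For what it's worth, here is a rough (untested) outline of such a hook in Python: it uses svnlook to list the changed paths, skips anything whose svn:mime-type marks it as non-text, and rejects any remaining file containing a CR byte. The svnlook output parsing is an assumption to verify against your SVN version:
#!/usr/bin/env python
import sys
from subprocess import Popen, PIPE

def svnlook(*args):
    # stderr is swallowed so a missing property doesn't leak noise to the client
    return Popen(('svnlook',) + args, stdout=PIPE, stderr=PIPE).communicate()[0]

def main(repo, txn):
    for line in svnlook('changed', '-t', txn, repo).splitlines():
        action, path = line[0], line[4:]
        if action == 'D' or path.endswith('/'):
            continue  # ignore deletions and directories
        mime = svnlook('propget', '-t', txn, repo, 'svn:mime-type', path).strip()
        if mime and not mime.startswith('text/'):
            continue  # treat a non-text MIME type as binary
        if '\r' in svnlook('cat', '-t', txn, repo, path):
            sys.stderr.write("CRLF line endings found in %s\n" % path)
            return 1  # a non-zero exit aborts the commit
    return 0

if __name__ == '__main__':
    sys.exit(main(sys.argv[1], sys.argv[2]))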
A:
I think you can avoid a commit hook script in this case by using the svn:eol-style property as described in the SVNBook:
End-of-Line Character Sequences
Subversion Properties
This way SVN can worry about your line endings for you.
Good luck!
A:
What exactly are you trying to do?
Of course, there are numerous places to learn about svn pre-commit hooks (e.g. here, here, and in the Red Book) but it depends what you're trying to do and what is available on your system. 
Can you be more specific?
| How do I implement a pre-commit hook script in SVN that calls dos2unix to validate checked-in file | I was wondering if anyone here had some experience writing this type of script and if they could give me some pointers.
I would like to modify this script to validate that the checked-in file does not have a Carriage Return in the EOL formatting. The EOL format is CR LF in Windows and LF in Unix. When a user checks in code with the Windows format, it does not compile in Unix anymore. I know this can be done on the client side, but I need to have this validation done on the server side. To achieve this, I need to do the following:
1) Make sure the file I check is not binary. I don't know how to do this with svnlook; should I check the svn:mime-type property of the file? The Red Book does not indicate this clearly, or I must have not seen it.
2) I would like to run the dos2unix command to validate that the file has the correct EOL format. I would compare the output of the dos2unix command against the original file. If there is a diff between both, I give an error message to the client and cancel the check-in.
I would like your comments/feedback on this approach.
| [
"I think you can avoid a commit hook script in this case by using the svn:eol-style property as described in the SVNBook:\n\nEnd-of-Line Character Sequences\nSubversion Properties\n\nThis way SVN can worry about your line endings for you.\nGood luck!\n",
"What exactly are you trying to do?\nOf course, there are numerous places to learn about svn pre-commit hooks (e.g. here , here, and in the Red Book) but it depends what you're trying to do and what is available on your system. \nCan you be more specific? \n"
] | [
4,
1
] | [] | [] | [
"dos2unix",
"python",
"svn"
] | stackoverflow_0000048562_dos2unix_python_svn.txt |
Q:
What language should I learn as a bridge to C (and derivatives)
The first language I learnt was PHP, but I have more recently picked up Python. As these are all 'high-level' languages, I have found them a bit difficult to pick up. I also tried to learn Objective-C but I gave up.
So, what language should I learn to bridge between Python and C
A:
It's not clear why you need a bridge language. Why don't you start working with C directly? C is a very simple language itself. I think that the hardest part for a C learner is pointers and everything else related to memory management. Also, C is oriented towards structured programming, so you will need to learn how to implement data structures and algorithms without OOP goodness. Actually, your question is pretty hard; usually people go from low-level languages to high-level ones, and I can understand the frustration of those who go in the other direction.
A:
The best place to start learning C is the book "The C Programming Language" by Kernighan and Ritchie.
You will recognise a lot of things from PHP, and you will be surprised how much PHP (and Perl, Python etc) do for you.
Oh, and you also will need a C compiler, but I guess you knew that.
A:
I generally agree with most of the others - There's not really a good stepping stone language.
It is, however, useful to understand what is difficult about learning C, which might help you understand what's making it difficult for you.
I'd say the things that would prove difficult in C for someone coming from PHP would be :
Pointers and memory management This is pretty much the reason you're learning C I imagine, so there's not really any getting around it. Learning lower level assembly type languages might make this easier, but C is probably a bridge to do that, not the other way around.
Lack of built in data structures PHP and co all have native String types, and useful things like hash tables built in, which is not the case in C. In C, a String is just an array of characters, which means you'll need to do a lot more work, or look seriously at libraries which add the features you're used to.
Lack of built in libraries Languages like PHP nowadays almost always come with stacks of libraries for things like database connections, image manipulation and stacks of other things. In C, this is not the case other than a very thin standard library which revolves mostly around file reading, writing and basic string manipulation. There are almost always good choices available to fill these needs, but you need to include them yourself.
Suitability for high level tasks If you try to implement the same type of application in C as you might in PHP, you'll find it very slow going. Generating a web page, for example, isn't really something plain C is suited for, so if you're trying to do that, you'll find it very slow going.
Preprocessor and compilation Most languages these days don't have a preprocessor, and if you're coming from PHP, the compilation cycle will seem painful. Both of these are performance trade-offs in a way - scripting languages make the trade-off in terms of developer efficiency, whereas C prefers performance.
I'm sure there are more that aren't springing to mind for me right now. The moral of the story is that trying to understand what you're finding difficult in C may help you proceed. If you're trying to generate web pages with it, try doing something lower level. If you're missing hash tables, try writing your own, or find a library. If you're struggling with pointers, stick with it :)
A:
Learning any language takes time; I always ensure I have a measurable goal. I set myself an objective, then start learning the language to achieve this objective, as opposed to trying to learn every nook and cranny of the language and syntax.
C is not easy; pointers can be hard to comprehend if you're not coming from assembler roots. I first learned C++, then retrofitted C into my repertoire, but I started with x86 and 68000 assembler.
A:
Python is about as close to C as you're going to get. It is in fact a very thin wrapper around C in a lot of places. However, C does require that you know a little more about how the computer works on a low level. Thus, you may benefit from trying an assembly language.
LC-3 is a simple assembly language with a simulated machine.
Alternatively, you could try playing with an interactive C interpreter like CINT.
Finally, toughing it out and reading K&R's book is usually the best approach.
A:
Forget Java - it is not going to bring you anywhere closer to C (you have already proved that you don't have a problem learning new syntax).
Either read K&R or go one lower: Learn about the machine itself. The only tricky part in C is pointers and memory management (which is closely related to pointers, but also has a bit to do with how functions are called). Learning a (simple, maybe even "fake" assembly) language should help you out here.
Then, start reading up on the standard library provided by C. It will be your daily bread and butter.
Oh: another tip! If you really do want to bridge, try FORTH. It helped me get into pointers. Also, using the win32 api from Visual Basic 6.0 can teach you some stuff about pointers ;)
A:
C is a bridge onto itself.
K&R is the only programming language book you can read in one sitting and almost never pick it up again ...
A:
My suggestion is to get a good C-book that is relevant to what you want to do. I agree that K & R is considered to be "The book" on C, but I found "UNIX Systems Programming" by Kay A. Robbins and Steven Robbins to be more practical and hands on. The book is full of clean and short code snippets you can type in, compile and try in just a few minutes each.
There is a preview at http://books.google.com/books?id=tdsZHyH9bQEC&printsec=frontcover (Hyperlinking it didn't work.)
A:
I'm feeling your pain. I also learned PHP first and I'm trying to learn C++; it's not easy, and I am really struggling. It's been 2 years since I started on C++ and still the extent of what I can do is cout, cin, and math.
If anyone reads this and wonders where to start, START LOWER.
A:
Java might actually be a good option here, believe it or not. It is strongly based on C/C++, so if you can get the syntax and the strong typing, picking up C might be easier. The benefit is you can learn the lower level syntax without having to learn pointers (since memory is managed for you just like in Python and PHP). You will, however, learn a similar concept... references (or objects in general).
Also, it is strongly Object Oriented, so it may be difficult to pick up on that if you haven't dealt with OOP yet.... you might be better off just digging in with C like others suggested, but it is an option.
A:
I think C++ is a good "bridge" to C. I learned C++ first at University, and since it's based on C you'll learn a lot of the same concepts - perhaps most notably pointers - but also Object Oriented Design. OO can be applied to all kinds of modern languages, so it's worth learning.
After learning C++, I found it wasn't too hard to pick up the differences between C++ and C as required (for example, when working on devices that didn't support C++).
A:
Try to learn a language which you are comfortable with; try a different approach and start with the basics.
A:
Languages are easy to learn (especially one like C)... the hard part is learning the libraries and/or coding style of the language. For instance, I know C++ fairly well, but most C/C++ code I see confuses me because the naming conventions are so different from what I work with on a daily basis.
Anyway, I guess what I'm trying to say is don't worry too much about the syntax, focus on said language's library. This isn't specific to C, you can say the same about c#, vb.net, java and just about every other language out there.
A:
Pascal! Close enough syntax, still requires you to do some memory management, but not as rough for beginners.
| What language should I learn as a bridge to C (and derivatives) | The first language I learnt was PHP, but I have more recently picked up Python. As these are all 'high-level' languages, I have found them a bit difficult to pick up. I also tried to learn Objective-C but I gave up.
So, what language should I learn to bridge between Python and C
| [
"It's not clear why you need a bridge language. Why don't you start working with C directly? C is a very simple language itself. I think that hardest part for C learner is pointers and everything else related to memory management. Also C lang is oriented on structured programming, so you will need to learn how to implement data structures and algorithms without OOP goodness. Actually, your question is pretty hard, usually people go from low level langs to high level and I can understand frustration of those who goes in other direction.\n",
"The best place to start learning C is the book \"The C Programming Language\" by Kernighan and Ritchie.\nYou will recognise a lot of things from PHP, and you will be surprised how much PHP (and Perl, Python etc) do for you.\nOh and you also will need a C compiler, but i guess you knew that.\n",
"I generally agree with most of the others - There's not really a good stepping stone language.\nIt is, however, useful to understand what is difficult about learning C, which might help you understand what's making it difficult for you.\nI'd say the things that would prove difficult in C for someone coming from PHP would be :\n\nPointers and memory management This is pretty much the reason you're learning C I imagine, so there's not really any getting around it. Learning lower level assembly type languages might make this easier, but C is probably a bridge to do that, not the other way around.\nLack of built in data structures PHP and co all have native String types, and useful things like hash tables built in, which is not the case in C. In C, a String is just an array of characters, which means you'll need to do a lot more work, or look seriously at libraries which add the features you're used to.\nLack of built in libraries Languages like PHP nowadays almost always come with stacks of libraries for things like database connections, image manipulation and stacks of other things. In C, this is not the case other than a very thin standard library which revolves mostly around file reading, writing and basic string manipulation. There are almost always good choices available to fill these needs, but you need to include them yourself.\nSuitability for high level tasks If you try to implement the same type of application in C as you might in PHP, you'll find it very slow going. Generating a web page, for example, isn't really something plain C is suited for, so if you're trying to do that, you'll find it very slow going.\nPreprocessor and compilation Most languages these days don't have a preprocessor, and if you're coming from PHP, the compilation cycle will seem painful. Both of these are performance trade offs in a way - Scripting languages make the trade off in terms of developer efficiency, where as C prefers performance.\n\nI'm sure there are more that aren't springing to mind for me right now. The moral of the story is that trying to understand what you're finding difficult in C may help you proceed. If you're trying to generate web pages with it, try doing something lower level. If you're missing hash tables, try writing your own, or find a library. If you're struggling with pointers, stick with it :)\n",
"Learning any language takes time, I always ensure I have a measurable goal; I set myself an objective, then start learning the language to achieve this objective, as opposed to trying to learn every nook and cranny of the language and syntax. \nC is not easy, pointers can be hard to comprehend if you’re not coming assembler roots. I first learned C++, then retro fit C to my repertoire but I started with x86 and 68000 assembler.\n",
"Python is about as close to C as you're going to get. It is in fact a very thin wrapper around C in a lot of places. However, C does require that you know a little more about how the computer works on a low level. Thus, you may benefit from trying an assembly language.\nLC-3 is a simple assembly language with a simulated machine.\nAlternatively, you could try playing with an interactive C interpreter like CINT.\nFinally, toughing it out and reading K&R's book is usually the best approach.\n",
"Forget Java - it is not going to bring you anywhere closer to C (you have allready proved that you don't have a problem learning new syntax).\nEither read K&R or go one lower: Learn about the machine itself. The only tricky part in C is pointers and memory management (which is closely related to pointers, but also has a bit to do with how functions are called). Learning a (simple, maybe even \"fake\" assembly) language should help you out here.\nThen, start reading up on the standard library provided by C. It will be your daily bread and butter.\nOh: another tip! If you really do want to bridge, try FORTH. It helped me get into pointers. Also, using the win32 api from Visual Basic 6.0 can teach you some stuff about pointers ;)\n",
"C is a bridge onto itself.\nK&R is the only programming language book you can read in one sitting and almost never pick it up again ... \n",
"My suggestion is to get a good C-book that is relevant to what you want to do. I agree that K & R is considered to be \"The book\" on C, but I found \"UNIX Systems Programming\" by Kay A. Robbins and Steven Robbins to be more practical and hands on. The book is full of clean and short code snippets you can type in, compile and try in just a few minutes each.\nThere is a preview at http://books.google.com/books?id=tdsZHyH9bQEC&printsec=frontcover (Hyperlinking it didn't work.)\n",
"I'm feeling your pain, I also learned PHP first and I'm trying to learn C++, it's not easy, and I am really struggling, It's been 2 years since I started on c++ and Still the extent of what I can do is cout, cin, and math.\nIf anyone reads this and wonders where to start, START LOWER.\n",
"Java might actually be a good option here, believe it or not. It is strongly based on C/C++, so if you can get the syntax and the strong typing, picking up C might be easier. The benefit is you can learn the lower level syntax without having to learn pointers (since memory is managed for you just like in Python and PHP). You will, however, learn a similar concept... references (or objects in general).\nAlso, it is strongly Object Oriented, so it may be difficult to pick up on that if you haven't dealt with OOP yet.... you might be better off just digging in with C like others suggested, but it is an option.\n",
"I think C++ is a good \"bridge\" to C. I learned C++ first at University, and since it's based on C you'll learn a lot of the same concepts - perhaps most notably pointers - but also Object Oriented Design. OO can be applied to all kinds of modern languages, so it's worth learning. \nAfter learning C++, I found it wasn't too hard to pick up the differences between C++ and C as required (for example, when working on devices that didn't support C++).\n",
"try to learn a language which you are comfortable with, try different approach and the basics.\n",
"Languages are easy to learn (especially one like C)... the hard part is learning the libraries and/or coding style of the language. For instance, I know C++ fairly well, but most C/C++ code I see confuses me because the naming conventions are so different from what I work with on a daily basis.\nAnyway, I guess what I'm trying to say is don't worry too much about the syntax, focus on said language's library. This isn't specific to C, you can say the same about c#, vb.net, java and just about every other language out there.\n",
"Pascal! Close enough syntax, still requires you to do some memory management, but not as rough for beginners.\n"
] | [
15,
7,
5,
1,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"c",
"python"
] | stackoverflow_0000049195_c_python.txt |
Q:
How do you create a weak reference to an object in Python?
How do you create a weak reference to an object in Python?
A:
>>> import weakref
>>> class Object:
... pass
...
>>> o = Object()
>>> r = weakref.ref(o)
>>> # if the reference is still active, r() will be o, otherwise None
>>> do_something_with_o(r())
See the weakref module docs for more details.
You can also use weakref.proxy to create an object that proxies o; it will raise ReferenceError if used once the referent is no longer referenced.
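For example, continuing the session above:
>>> p = weakref.proxy(o)
>>> print(p)   # usable like o itself while o is alive
>>> del o
>>> print(p)   # raises ReferenceError once o has been collected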
| How do you create a weak reference to an object in Python? | How do you create a weak reference to an object in Python?
| [
">>> import weakref\n>>> class Object:\n... pass\n...\n>>> o = Object()\n>>> r = weakref.ref(o)\n>>> # if the reference is still active, r() will be o, otherwise None\n>>> do_something_with_o(r()) \n\nSee the wearkref module docs for more details.\nYou can also use weakref.proxy to create an object that proxies o. Will throw ReferenceError if used when the referent is no longer referenced.\n"
] | [
13
] | [] | [] | [
"python",
"weak_references"
] | stackoverflow_0000050923_python_weak_references.txt |
Q:
Best way to extract data from a FileMaker Pro database in a script?
My job would be easier, or at least less tedious if I could come up with an automated way (preferably in a Python script) to extract useful information from a FileMaker Pro database. I am working on a Linux machine and the FileMaker database is on the same LAN running on an OS X machine. I can log into the webby interface from my machine.
I'm quite handy with SQL, and if somebody could point me to some FileMaker plug-in that could give me SQL access to the data within FileMaker, I would be pleased as punch. Everything I've found only goes the other way: Having FileMaker get data from SQL sources. Not useful.
It's not my first choice, but I'd use Perl instead of Python if there was a Perl-y solution at hand.
Note: XML/XSLT services (as suggested by some folks) are only available on FM Server, not FM Pro. Otherwise, that would probably be the best solution. ODBC is turning out to be extremely difficult to even get working. There is absolutely zero feedback from FM when you set it up so you have to dig through /var/log/system.log and parse obscure error messages.
Conclusion: I got it working by running a python script locally on the machine that queries the FM database through the ODBC connections. The script is actually a TCPServer that accepts socket connections from other systems on the LAN, runs the queries, and returns the data through the socket connection. I had to do this to bypass the fact that FM Pro only accepts ODBC connections locally (FM server is required for external connections).
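A rough sketch of that workaround, assuming the pyodbc module and a local DSN named "filemaker" (both illustrative); note it executes whatever SQL it receives, so it is only suitable for a trusted LAN:
import SocketServer  # the Python 2 name of the module
import pyodbc

class QueryHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        sql = self.rfile.readline().strip()
        # FM Pro only accepts ODBC connections from the local machine,
        # which is why this proxy runs on the same box as FileMaker
        conn = pyodbc.connect('DSN=filemaker')
        try:
            rows = conn.cursor().execute(sql).fetchall()
        finally:
            conn.close()
        for row in rows:
            self.wfile.write('\t'.join(str(v) for v in row) + '\n')

if __name__ == '__main__':
    SocketServer.TCPServer(('', 9000), QueryHandler).serve_forever()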
A:
It has been a really long time since I did anything with FileMaker Pro, but I know that it does have capabilities for an ODBC (and JDBC) connection to be made to it (however, I don't know how, or if, that translates to the linux/perl/python world though).
This article shows how to share/expose your FileMaker data via ODBC & JDBC:
Sharing FileMaker Pro data via ODBC or JDBC
From there, if you're able to create an ODBC/JDBC connection you could query out data as needed.
A:
You'll need the FileMaker Pro installation CD to get the drivers. This document details the process for FMP 9 - it is similar for versions 7.x and 8.x as well. Versions 6.x and earlier are completely different and I wouldn't bother trying (xDBC support in those previous versions is "minimal" at best).
FMP 9 supports SQL-92 standard syntax (mostly). Note that rather than querying tables directly you query using the "table occurrence" name, which serves as a table alias of sorts. If the data tables are stored in multiple files it is possible to create a single FMP file with table occurrences/aliases pointing to those data tables. There's an "undocumented feature" where such a file must have a table defined in it as well, and that table must be "related" to any other table on the relationships graph (it doesn't matter which one) for ODBC access to work. Otherwise your queries will always return no results.
The PDF document details all of the limitations of using the xDBC interface FMP provides. Performance of simple queries is reasonably fast, ymmv. I have found the performance of queries specifying the "LIKE" operator to be less than stellar.
FMP also has an XML/XSLT interface that you can use to query FMP data over an HTTP connection. It also provides a PHP class for accessing and using FMP data in web applications.
A:
If your leaning is to Python, you may be interested in checking out the Python Wrapper for Filemaker. It provides two way access to the Filemaker data via Filemaker's built-in XML services. You can find some quite thorough information on this at:
http://code.google.com/p/pyfilemaker/
| Best way to extract data from a FileMaker Pro database in a script? | My job would be easier, or at least less tedious if I could come up with an automated way (preferably in a Python script) to extract useful information from a FileMaker Pro database. I am working on a Linux machine and the FileMaker database is on the same LAN running on an OS X machine. I can log into the webby interface from my machine.
I'm quite handy with SQL, and if somebody could point me to some FileMaker plug-in that could give me SQL access to the data within FileMaker, I would be pleased as punch. Everything I've found only goes the other way: Having FileMaker get data from SQL sources. Not useful.
It's not my first choice, but I'd use Perl instead of Python if there was a Perl-y solution at hand.
Note: XML/XSLT services (as suggested by some folks) are only available on FM Server, not FM Pro. Otherwise, that would probably be the best solution. ODBC is turning out to be extremely difficult to even get working. There is absolutely zero feedback from FM when you set it up so you have to dig through /var/log/system.log and parse obscure error messages.
Conclusion: I got it working by running a python script locally on the machine that queries the FM database through the ODBC connections. The script is actually a TCPServer that accepts socket connections from other systems on the LAN, runs the queries, and returns the data through the socket connection. I had to do this to bypass the fact that FM Pro only accepts ODBC connections locally (FM server is required for external connections).
| [
"It has been a really long time since I did anything with FileMaker Pro, but I know that it does have capabilities for an ODBC (and JDBC) connection to be made to it (however, I don't know how, or if, that translates to the linux/perl/python world though). \nThis article shows how to share/expose your FileMaker data via ODBC & JDBC:\nSharing FileMaker Pro data via ODBC or JDBC \nFrom there, if you're able to create an ODBC/JDBC connection you could query out data as needed.\n",
"You'll need the FileMaker Pro installation CD to get the drivers. This document details the process for FMP 9 - it is similar for versions 7.x and 8.x as well. Versions 6.x and earlier are completely different and I wouldn't bother trying (xDBC support in those previous versions is \"minimal\" at best).\nFMP 9 supports SQL-92 standard syntax (mostly). Note that rather than querying tables directly you query using the \"table occurrence\" name which serves as a table alias of sorts. If the data tables are stored in multiple files it is possible to create a single FMP file with table occurrences/aliases pointing to those data tables. There's an \"undocumented feature\" where such a file must have a table defined in it as well and that table \"related\" to any other table on the relationships graph (doesn't matter which one) for ODBC access to work. Otherwise your queries will always return no results.\nThe PDF document details all of the limitations of using the xDBC interface FMP provides. Performance of simple queries is reasonably fast, ymmv. I have found the performance of queries specifying the \"LIKE\" operator to be less than stellar.\nFMP also has an XML/XSLT interface that you can use to query FMP data over an HTTP connection. It also provides a PHP class for accessing and using FMP data in web applications.\n",
"If your leaning is to Python, you may be interested in checking out the Python Wrapper for Filemaker. It provides two way access to the Filemaker data via Filemaker's built-in XML services. You can find some quite thorough information on this at:\nhttp://code.google.com/p/pyfilemaker/\n"
] | [
6,
4,
2
] | [] | [] | [
"filemaker",
"linux",
"perl",
"python",
"scripting"
] | stackoverflow_0000028668_filemaker_linux_perl_python_scripting.txt |
Q:
Java -> Python?
Besides the dynamic nature of Python (and the syntax), what are some of the major features of the Python language that Java doesn't have, and vice versa?
A:
List comprehensions. I often find myself filtering/mapping lists, and being able to say [line.replace("spam","eggs") for line in open("somefile.txt") if line.startswith("nee")] is really nice.
Functions are first class objects. They can be passed as parameters to other functions, defined inside other function, and have lexical scope. This makes it really easy to say things like people.sort(key=lambda p: p.age) and thus sort a bunch of people on their age without having to define a custom comparator class or something equally verbose.
Everything is an object. Java has basic types which aren't objects, which is why many classes in the standard library define 9 different versions of functions (for boolean, byte, char, double, float, int, long, Object, short). Array.sort is a good example. Autoboxing helps, although it makes things awkward when something turns out to be null.
Properties. Python lets you create classes with read-only fields, lazily-generated fields, as well as fields which are checked upon assignment to make sure they're never 0 or null or whatever you want to guard against, etc.
Default and keyword arguments. In Java if you want a constructor that can take up to 5 optional arguments, you must define 6 different versions of that constructor. And there's no way at all to say Student(name="Eli", age=25)
Functions can only return 1 thing. In Python you have tuple assignment, so you can say spam, eggs = nee() but in Java you'd need to either resort to mutable out parameters or have a custom class with 2 fields and then have two additional lines of code to extract those fields.
Built-in syntax for lists and dictionaries.
Operator Overloading.
Generally better designed libraries. For example, to parse an XML document in Java, you say
Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse("test.xml");
and in Python you say
doc = parse("test.xml")
Anyway, I could go on and on with further examples, but Python is just overall a much more flexible and expressive language. It's also dynamically typed, which I really like, but which comes with some disadvantages.
Java has much better performance than Python and has way better tool support. Sometimes those things matter a lot and Java is the better language than Python for a task; I continue to use Java for some new projects despite liking Python a lot more. But as a language I think Python is superior for most things I find myself needing to accomplish.
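To make the properties point above concrete, a small sketch (the names are made up):
class Person(object):
    def __init__(self, age):
        self.age = age  # goes through the property below, so it is checked too

    def _get_age(self):
        return self._age

    def _set_age(self, value):
        if value < 0:
            raise ValueError("age must be non-negative")
        self._age = value

    age = property(_get_age, _set_age)  # validated on every assignment

p = Person(25)
p.age = 30    # fine
# p.age = -1  # would raise ValueError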
A:
I think this pair of articles by Philip J. Eby does a great job discussing the differences between the two languages (mostly about philosophy/mentality rather than specific language features).
Python is Not Java
Java is Not Python, either
A:
One key difference in Python is significant whitespace. This puts a lot of people off - me too for a long time - but once you get going it seems natural and makes much more sense than ;s everywhere.
From a personal perspective, Python has the following benefits over Java:
No Checked Exceptions
Optional Arguments
Much less boilerplate and less verbose generally
Other than those, this page on the Python Wiki is a good place to look with lots of links to interesting articles.
A:
With Jython you can have both. It's only at Python 2.2, but still very useful if you need an embedded interpreter that has access to the Java runtime.
A:
Apart from what Eli Courtwright said:
I find iterators in Python more concise. You can use for i in something, and it works with pretty much everything. Yeah, Java has gotten better since 1.5, but for example you can iterate through a string in python with this same construct.
Introspection: In Python you can get runtime information about an object or a module: its symbols, methods, or even its docstrings. You can also instantiate them dynamically. Java has some of this, but usually in Java it takes half a page of code to get an instance of a class, whereas in Python it is about 3 lines. And as far as I know the docstrings thing is not available in Java
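A tiny illustration of the introspection point, using only the standard library:
import os

print(os.getcwd.__doc__)                            # docstring at runtime
print([n for n in dir(os) if n.startswith('get')])  # a module's symbols
getcwd = getattr(os, 'getcwd')                      # dynamic lookup by name
print(getcwd())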
| Java -> Python? | Besides the dynamic nature of Python (and the syntax), what are some of the major features of the Python language that Java doesn't have, and vice versa?
| [
"\nList comprehensions. I often find myself filtering/mapping lists, and being able to say [line.replace(\"spam\",\"eggs\") for line in open(\"somefile.txt\") if line.startswith(\"nee\")] is really nice.\nFunctions are first class objects. They can be passed as parameters to other functions, defined inside other function, and have lexical scope. This makes it really easy to say things like people.sort(key=lambda p: p.age) and thus sort a bunch of people on their age without having to define a custom comparator class or something equally verbose.\nEverything is an object. Java has basic types which aren't objects, which is why many classes in the standard library define 9 different versions of functions (for boolean, byte, char, double, float, int, long, Object, short). Array.sort is a good example. Autoboxing helps, although it makes things awkward when something turns out to be null.\nProperties. Python lets you create classes with read-only fields, lazily-generated fields, as well as fields which are checked upon assignment to make sure they're never 0 or null or whatever you want to guard against, etc.'\nDefault and keyword arguments. In Java if you want a constructor that can take up to 5 optional arguments, you must define 6 different versions of that constructor. And there's no way at all to say Student(name=\"Eli\", age=25)\nFunctions can only return 1 thing. In Python you have tuple assignment, so you can say spam, eggs = nee() but in Java you'd need to either resort to mutable out parameters or have a custom class with 2 fields and then have two additional lines of code to extract those fields.\nBuilt-in syntax for lists and dictionaries.\nOperator Overloading.\nGenerally better designed libraries. For example, to parse an XML document in Java, you say\nDocument doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(\"test.xml\");\nand in Python you say\ndoc = parse(\"test.xml\")\n\nAnyway, I could go on and on with further examples, but Python is just overall a much more flexible and expressive language. It's also dynamically typed, which I really like, but which comes with some disadvantages.\nJava has much better performance than Python and has way better tool support. Sometimes those things matter a lot and Java is the better language than Python for a task; I continue to use Java for some new projects despite liking Python a lot more. But as a language I think Python is superior for most things I find myself needing to accomplish.\n",
"I think this pair of articles by Philip J. Eby does a great job discussing the differences between the two languages (mostly about philosophy/mentality rather than specific language features). \n\nPython is Not Java\nJava is Not Python, either\n\n",
"One key difference in Python is significant whitespace. This puts a lot of people off - me too for a long time - but once you get going it seems natural and makes much more sense than ;s everywhere.\nFrom a personal perspective, Python has the following benefits over Java:\n\nNo Checked Exceptions\nOptional Arguments\nMuch less boilerplate and less verbose generally\n\nOther than those, this page on the Python Wiki is a good place to look with lots of links to interesting articles.\n",
"With Jython you can have both. It's only at Python 2.2, but still very useful if you need an embedded interpreter that has access to the Java runtime.\n",
"Apart from what Eli Courtwright said:\n\nI find iterators in Python more concise. You can use for i in something, and it works with pretty much everything. Yeah, Java has gotten better since 1.5, but for example you can iterate through a string in python with this same construct.\nIntrospection: In python you can get at runtime information about an object or a module about its symbols, methods, or even its docstrings. You can also instantiate them dynamically. Java has some of this, but usually in Java it takes half a page of code to get an instance of a class, whereas in Python it is about 3 lines. And as far as I know the docstrings thing is not available in Java\n\n"
] | [
47,
16,
5,
3,
2
] | [] | [] | [
"java",
"python"
] | stackoverflow_0000049824_java_python.txt |
Q:
Large Python Includes
I have a file that I want to include in Python but the included file is fairly long and it'd be much neater to be able to split it into several files, but then I have to use several include statements.
Is there some way to group together several files and include them all at once?
A:
Put files in one folder.
Add __init__.py file to the folder. Do necessary imports in __init__.py
Replace multiple imports by one:
import folder_name
See Python Package Management
A:
Yes, take a look at the "6.4 Packages" section in http://docs.python.org/tut/node8.html:
Basically, you can place a bunch of files into a directory and add an __init__.py file to the directory. If the directory is in your PYTHONPATH or sys.path, you can do "import directoryname" to import everything in the directory or "import directoryname.some_file_in_directory" to import a specific file that is in the directory.
The __init__.py files are required to make Python treat the directories as containing packages; this is done to prevent directories with a common name, such as "string", from unintentionally hiding valid modules that occur later on the module search path. In the simplest case, __init__.py can just be an empty file, but it can also execute initialization code for the package or set the __all__ variable, described later.
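As a sketch (the module and function names here are made up), the layout might look like:
sounds/
    __init__.py
    effects.py
    filters.py

# sounds/__init__.py -- pull the pieces into one namespace
from effects import echo      # Python 2 style implicit relative import
from filters import equalize

# elsewhere, a single import now covers everything:
import sounds
sounds.echo()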
| Large Python Includes | I have a file that I want to include in Python but the included file is fairly long and it'd be much neater to be able to split it into several files, but then I have to use several include statements.
Is there some way to group together several files and include them all at once?
| [
"\nPut files in one folder. \nAdd __init__.py file to the folder. Do necessary imports in __init__.py\nReplace multiple imports by one:\nimport folder_name \n\nSee Python Package Management\n",
"Yes, take a look at the \"6.4 Packages\" section in http://docs.python.org/tut/node8.html:\nBasically, you can place a bunch of files into a directory and add an __init__.py file to the directory. If the directory is in your PYTHONPATH or sys.path, you can do \"import directoryname\" to import everything in the directory or \"import directoryname.some_file_in_directory\" to import a specific file that is in the directory.\n\nThe __init__.py files are required to make Python treat the directories as containing packages; this is done to prevent directories with a common name, such as \"string\", from unintentionally hiding valid modules that occur later on the module search path. In the simplest case, __init__.py can just be an empty file, but it can also execute initialization code for the package or set the __all__ variable, described later. \n\n"
] | [
8,
6
] | [] | [] | [
"python"
] | stackoverflow_0000053027_python.txt |
Q:
Getting international characters from a web page?
I want to scrape some information off a football (soccer) web page using simple python regexp's. The problem is that players such as the first chap, ÄÄRITALO, come out as &Auml;&Auml;RITALO!
That is, html uses escaped markup for the special characters, such as &Auml;
Is there a simple way of reading the html into the correct python string? If it was XML/XHTML it would be easy, the parser would do it.
A:
I would recommend BeautifulSoup for HTML scraping. You also need to tell it to convert HTML entities to the corresponding Unicode characters, like so:
>>> from BeautifulSoup import BeautifulSoup
>>> html = "<html>&Auml;&Auml;RITALO!</html>"
>>> soup = BeautifulSoup(html, convertEntities=BeautifulSoup.HTML_ENTITIES)
>>> print soup.contents[0].string
ÄÄRITALO!
(It would be nice if the standard codecs module included a codec for this, such that you could do "some_string".decode('html_entities') but unfortunately it doesn't!)
EDIT:
Another solution:
Python developer Fredrik Lundh (author of elementtree, among other things) has a function to unescape HTML entities on his website, which works with decimal, hex and named entities (BeautifulSoup will not work with the hex ones).
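If you would rather avoid a dependency, something along these lines can be built from the Python 2 standard library alone (a sketch in the same spirit as Lundh's function, not a copy of it):
import re
import htmlentitydefs

def unescape_entities(text):
    def fix(match):
        ent = match.group(1)
        if ent.startswith('#x') or ent.startswith('#X'):
            return unichr(int(ent[2:], 16))   # hex character reference
        if ent.startswith('#'):
            return unichr(int(ent[1:]))       # decimal character reference
        return unichr(htmlentitydefs.name2codepoint.get(ent, 0x3F))  # '?' if unknown
    return re.sub(r'&([#\w]+);', fix, text)

print(unescape_entities(u'&Auml;&Auml;RITALO'))  # prints ÄÄRITALO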
A:
Try using BeautifulSoup. It should do the trick and give you a nicely formatted DOM to work with as well.
This blog entry seems to have had some success with it.
A:
I haven't tried it myself, but have you tried
http://zesty.ca/python/scrape.html ?
It seems to have a method htmldecode(text) which would do what you want.
| Getting international characters from a web page? | I want to scrape some information off a football (soccer) web page using simple python regexp's. The problem is that players such as the first chap, ÄÄRITALO, come out as &Auml;&Auml;RITALO!
That is, html uses escaped markup for the special characters, such as &Auml;
Is there a simple way of reading the html into the correct python string? If it was XML/XHTML it would be easy, the parser would do it.
| [
"I would recommend BeautifulSoup for HTML scraping. You also need to tell it to convert HTML entities to the corresponding Unicode characters, like so:\n>>> from BeautifulSoup import BeautifulSoup \n>>> html = \"<html>ÄÄRITALO!</html>\"\n>>> soup = BeautifulSoup(html, convertEntities=BeautifulSoup.HTML_ENTITIES)\n>>> print soup.contents[0].string\nÄÄRITALO!\n\n(It would be nice if the standard codecs module included a codec for this, such that you could do \"some_string\".decode('html_entities') but unfortunately it doesn't!)\nEDIT:\nAnother solution:\nPython developer Fredrik Lundh (author of elementtree, among other things) has a function to unsecape HTML entities on his website, which works with decimal, hex and named entities (BeautifulSoup will not work with the hex ones).\n",
"Try using BeautifulSoup. It should do the trick and give you a nicely formatted DOM to work with as well.\nThis blog entry seems to have had some success with it.\n",
"I haven't tried it myself, but have you tried\nhttp://zesty.ca/python/scrape.html ?\nIt seems to have a method htmldecode(text) which would do what you want.\n"
] | [
7,
2,
0
] | [] | [] | [
"html",
"parsing",
"python",
"unicode"
] | stackoverflow_0000053224_html_parsing_python_unicode.txt |
Q:
Why are SQL aggregate functions so much slower than Python and Java (or Poor Man's OLAP)
I need a real DBA's opinion. Postgres 8.3 takes 200 ms to execute this query on my Macbook Pro while Java and Python perform the same calculation in under 20 ms (350,000 rows):
SELECT count(id), avg(a), avg(b), avg(c), avg(d) FROM tuples;
Is this normal behaviour when using a SQL database?
The schema (the table holds responses to a survey):
CREATE TABLE tuples (id integer primary key, a integer, b integer, c integer, d integer);
\copy tuples from '350,000 responses.csv' delimiter as ','
I wrote some tests in Java and Python for context and they crush SQL (except for pure python):
java 1.5 threads ~ 7 ms
java 1.5 ~ 10 ms
python 2.5 numpy ~ 18 ms
python 2.5 ~ 370 ms
Even sqlite3 is competitive with Postgres despite it assuming all columns are strings (for contrast: even just switching to numeric columns instead of integers in Postgres results in a 10x slowdown)
Tunings I've tried without success include (blindly following some web advice):
increased the shared memory available to Postgres to 256MB
increased the working memory to 2MB
disabled connection and statement logging
used a stored procedure via CREATE FUNCTION ... LANGUAGE SQL
So my question is, is my experience here normal, and this is what I can expect when using a SQL database? I can understand that ACID must come with costs, but this is kind of crazy in my opinion. I'm not asking for realtime game speed, but since Java can process millions of doubles in under 20 ms, I feel a bit jealous.
Is there a better way to do simple OLAP on the cheap (both in terms of money and server complexity)? I've looked into Mondrian and Pig + Hadoop, but I'm not super excited about maintaining yet another server application, and I'm not sure if they would even help.
No, the Python code and Java code do all the work in-house, so to speak. I just generate 4 arrays with 350,000 random values each, then take the average. I don't include the generation in the timings, only the averaging step. The java threads timing uses 4 threads (one per array average), overkill but it's definitely the fastest.
The sqlite3 timing is driven by the Python program and is running from disk (not :memory:)
I realize Postgres is doing much more behind the scenes, but most of that work doesn't matter to me since this is read only data.
The Postgres query doesn't change timing on subsequent runs.
I've rerun the Python tests to include spooling it off the disk. The timing slows down considerably to nearly 4 secs. But I'm guessing that Python's file handling code is pretty much in C (though maybe not the csv lib?) so this indicates to me that Postgres isn't streaming from the disk either (or that you are correct and I should bow down before whoever wrote their storage layer!)
A:
I would say your test scheme is not really useful. To fulfill the db query, the db server goes through several steps:
parse the SQL
work up a query plan, i. e. decide on which indices to use (if any), optimize etc.
if an index is used, search it for the pointers to the actual data, then go to the appropriate location in the data or
if no index is used, scan the whole table to determine which rows are needed
load the data from disk into a temporary location (hopefully, but not necessarily, memory)
perform the count() and avg() calculations
So, creating an array in Python and getting the average basically skips all these steps save the last one. As disk I/O is among the most expensive operations a program has to perform, this is a major flaw in the test (see also the answers to this question I asked here before). Even if you read the data from disk in your other test, the process is completely different and it's hard to tell how relevant the results are.
To obtain more information about where Postgres spends its time, I would suggest the following tests (plus the EXPLAIN ANALYZE one-liner just after them):
Compare the execution time of your query to a SELECT without the aggregating functions (i. e. cut step 5)
If you find that the aggregation leads to a significant slowdown, try if Python does it faster, obtaining the raw data through the plain SELECT from the comparison.
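In the same spirit, EXPLAIN ANALYZE shows directly where the time goes inside Postgres (a one-liner sketch; the plan output format varies by version):
EXPLAIN ANALYZE SELECT count(id), avg(a), avg(b), avg(c), avg(d) FROM tuples;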
To speed up your query, reduce disk access first. I doubt very much that it's the aggregation that takes the time.
There are several ways to do that:
Cache data (in memory!) for subsequent access, either via the db engine's own capabilities or with tools like memcached
Reduce the size of your stored data
Optimize the use of indices. Sometimes this can mean to skip index use altogether (after all, it's disk access, too). For MySQL, I seem to remember that it's recommended to skip indices if you assume that the query fetches more than 10% of all the data in the table.
If your query makes good use of indices, I know that for MySQL databases it helps to put indices and data on separate physical disks. However, I don't know whether that's applicable for Postgres.
There also might be more sophisticated problems such as swapping rows to disk if for some reason the result set can't be completely processed in memory. But I would leave that kind of research until I run into serious performance problems that I can't find another way to fix, as it requires knowledge about a lot of little under-the-hood details in your process.
Update:
I just realized that you seem to have no use for indices for the above query and most likely aren't using any, too, so my advice on indices probably wasn't helpful. Sorry. Still, I'd say that the aggregation is not the problem but disk access is. I'll leave the index stuff in, anyway, it might still have some use.
A:
Postgres is doing a lot more than it looks like (maintaining data consistency for a start!)
If the values don't have to be 100% spot on, or if the table is updated rarely, but you are running this calculation often, you might want to look into Materialized Views to speed it up.
(Note, I have not used materialized views in Postgres; they look a little hacky, but might suit your situation).
Materialized Views
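In Postgres 8.3, which lacks built-in materialized views, the usual stand-in is a summary table refreshed on a schedule, e.g. from cron - a sketch using the table from the question:
CREATE TABLE tuples_summary AS
  SELECT count(id) AS n, avg(a) AS avg_a, avg(b) AS avg_b,
         avg(c) AS avg_c, avg(d) AS avg_d
  FROM tuples;

-- refresh step, run periodically:
BEGIN;
DELETE FROM tuples_summary;
INSERT INTO tuples_summary
  SELECT count(id), avg(a), avg(b), avg(c), avg(d) FROM tuples;
COMMIT;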
Also consider the overhead of actually connecting to the server and the round trip required to send the request to the server and back.
I'd consider 200ms for something like this to be pretty good. A quick test on my Oracle server, with the same table structure, about 500k rows and no indexes, takes about 1 - 1.5 seconds, which is almost all just Oracle sucking the data off disk.
The real question is, is 200ms fast enough?
-------------- More --------------------
I was interested in solving this using materialized views, since I've never really played with them. This is in oracle.
First I created a MV which refreshes every minute.
create materialized view mv_so_x
build immediate
refresh complete
START WITH SYSDATE NEXT SYSDATE + 1/24/60
as select count(*),avg(a),avg(b),avg(c),avg(d) from so_x;
While it's refreshing, there are no rows returned
SQL> select * from mv_so_x;
no rows selected
Elapsed: 00:00:00.00
Once it refreshes, it's MUCH faster than doing the raw query
SQL> select count(*),avg(a),avg(b),avg(c),avg(d) from so_x;
COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D)
---------- ---------- ---------- ---------- ----------
1899459 7495.38839 22.2905454 5.00276131 2.13432836
Elapsed: 00:00:05.74
SQL> select * from mv_so_x;
COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D)
---------- ---------- ---------- ---------- ----------
1899459 7495.38839 22.2905454 5.00276131 2.13432836
Elapsed: 00:00:00.00
SQL>
If we insert into the base table, the result is not immediately visible via the MV.
SQL> insert into so_x values (1,2,3,4,5);
1 row created.
Elapsed: 00:00:00.00
SQL> commit;
Commit complete.
Elapsed: 00:00:00.00
SQL> select * from mv_so_x;
COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D)
---------- ---------- ---------- ---------- ----------
1899459 7495.38839 22.2905454 5.00276131 2.13432836
Elapsed: 00:00:00.00
SQL>
But wait a minute or so, and the MV will update behind the scenes, and the result is returned as fast as you could want.
SQL> /
COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D)
---------- ---------- ---------- ---------- ----------
1899460 7495.35823 22.2905352 5.00276078 2.17647059
Elapsed: 00:00:00.00
SQL>
This isn't ideal. For a start, it's not realtime: inserts/updates will not be immediately visible. Also, you've got a query running to update the MV whether you need it or not (this can be tuned to whatever time frame, or run on demand). But this does show how much faster an MV can make things seem to the end user, if you can live with values which aren't quite up-to-the-second accurate.
A:
I retested with MySQL specifying ENGINE = MEMORY and it doesn't change a thing (still 200 ms). Sqlite3 using an in-memory db gives similar timings as well (250 ms).
The math here looks correct (at least the size, as that's how big the sqlite db is :-)
I'm just not buying the disk-causes-slowness argument as there is every indication the tables are in memory (the postgres guys all warn against trying too hard to pin tables to memory as they swear the OS will do it better than the programmer)
To clarify the timings, the Java code is not reading from disk, making it a totally unfair comparison if Postgres is reading from the disk and calculating a complicated query; but that's really beside the point, the DB should be smart enough to bring a small table into memory and precompile a stored procedure, IMHO.
UPDATE (in response to the first comment below):
I'm not sure how I'd test the query without using an aggregation function in a way that would be fair, since if I select all of the rows it'll spend tons of time serializing and formatting everything. I'm not saying that the slowness is due to the aggregation function, it could still be just overhead from concurrency, integrity, and friends. I just don't know how to isolate the aggregation as the sole independent variable.
A:
Those are very detailed answers, but they mostly beg the question: how do I get these benefits without leaving Postgres, given that the data easily fits into memory, requires concurrent reads but no writes, and is queried with the same query over and over again?
Is it possible to precompile the query and optimization plan? I would have thought the stored procedure would do this, but it doesn't really help.
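For what it's worth, Postgres does have PREPARE for exactly this; a sketch (whether it helps much is an open question, since planning a single-table aggregate is cheap):
PREPARE stats AS
  SELECT count(id), avg(a), avg(b), avg(c), avg(d) FROM tuples;

EXECUTE stats;  -- parse/plan cost is paid once per session, at PREPARE time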
To avoid disk access, it's necessary to cache the whole table in memory; can I force Postgres to do that? I think it's already doing this though, since the query executes in just 200 ms after repeated runs.
Can I tell Postgres that the table is read only, so it can optimize any locking code?
I think it's possible to estimate the query construction costs with an empty table (timings range from 20-60 ms)
I still can't see why the Java/Python tests are invalid. Postgres just isn't doing that much more work (though I still haven't addressed the concurrency aspect, just the caching and query construction)
UPDATE:
I don't think it's fair to compare the SELECTS as suggested by pulling 350,000 rows through the driver and serialization steps into Python to run the aggregation, nor even to omit the aggregation, as the overhead in formatting and displaying is hard to separate from the timing. If both engines are operating on in-memory data, it should be an apples to apples comparison; I'm not sure how to guarantee that's already happening though.
I can't figure out how to add comments; maybe I don't have enough reputation?
A:
I'm a MS-SQL guy myself, and we'd use DBCC PINTABLE to keep a table cached, and SET STATISTICS IO to see that it's reading from cache, and not disk.
I can't find anything on Postgres to mimic PINTABLE, but pg_buffercache seems to give details on what is in the cache - you may want to check that, and see if your table is actually being cached.
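A hedged sketch of such a check with pg_buffercache installed (column names vary between versions, so verify against your install):
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;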
A quick back of the envelope calculation makes me suspect that you're paging from disk. Assuming Postgres uses 4-byte integers, you have (6 * 4) bytes per row, so your table is a minimum of (24 * 350,000) bytes ~ 8.4MB. Assuming 40 MB/s sustained throughput on your HDD, you're looking at right around 200ms to read the data (which, as pointed out, should be where almost all of the time is being spent).
Unless I screwed up my math somewhere, I don't see how it's possible that you are able to read 8MB into your Java app and process it in the times you're showing - unless that file is already cached by either the drive or your OS.
A:
I don't think that your results are all that surprising -- if anything, the surprise is that Postgres is so fast.
Does the Postgres query run faster a second time once it has had a chance to cache the data? To be a little fairer, your test for Java and Python should cover the cost of acquiring the data in the first place (ideally loading it off disk).
If this performance level is a problem for your application in practice but you need a RDBMS for other reasons then you could look at memcached. You would then have faster cached access to raw data and could do the calculations in code.
A:
One other thing that an RDBMS generally does for you is to provide concurrency by protecting you from simultaneous access by another process. This is done by placing locks, and there's some overhead from that.
If you're dealing with entirely static data that never changes, and especially if you're in a basically "single user" scenario, then using a relational database doesn't necessarily gain you much benefit.
A:
Are you using TCP to access the Postgres? In that case Nagle is messing with your timing.
A:
You need to increase postgres' caches to the point where the whole working set fits into memory before you can expect to see performance comparable to doing it in-memory with a program.
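As a rough illustration only (the right values depend entirely on your RAM and workload), the relevant postgresql.conf knobs look like:
shared_buffers = 256MB        # postgres' own buffer cache
effective_cache_size = 1GB    # hint about how much the OS will cache
work_mem = 8MB                # per-sort / per-hash memory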
A:
Thanks for the Oracle timings, that's the kind of stuff I'm looking for (disappointing though :-)
Materialized views are probably worth considering as I think I can precompute the most interesting forms of this query for most users.
I don't think query round-trip time should be very high, as I'm running the queries on the same machine that runs Postgres, so it can't add much latency?
I've also done some checking into the cache sizes, and it seems Postgres relies on the OS to handle caching; they specifically mention BSD as the ideal OS for this, so I think Mac OS ought to be pretty smart about bringing the table into memory. Unless someone has more specific params in mind, I think more specific caching is out of my control.
In the end I can probably put up with 200 ms response times, but knowing that 7 ms is a possible target makes me feel unsatisfied, as even 20-50 ms times would enable more users to have more up-to-date queries and get rid of a lot of caching and precomputation hacks.
I just checked the timings using MySQL 5 and they are slightly worse than Postgres. So barring some major caching breakthroughs, I guess this is what I can expect going the relational db route.
I wish I could up vote some of your answers, but I don't have enough points yet.
| Why are SQL aggregate functions so much slower than Python and Java (or Poor Man's OLAP) | I need a real DBA's opinion. Postgres 8.3 takes 200 ms to execute this query on my Macbook Pro while Java and Python perform the same calculation in under 20 ms (350,000 rows):
SELECT count(id), avg(a), avg(b), avg(c), avg(d) FROM tuples;
Is this normal behaviour when using a SQL database?
The schema (the table holds responses to a survey):
CREATE TABLE tuples (id integer primary key, a integer, b integer, c integer, d integer);
\copy tuples from '350,000 responses.csv' delimiter as ','
I wrote some tests in Java and Python for context and they crush SQL (except for pure python):
java 1.5 threads ~ 7 ms
java 1.5 ~ 10 ms
python 2.5 numpy ~ 18 ms
python 2.5 ~ 370 ms
Even sqlite3 is competitive with Postgres despite it assuming all columns are strings (for contrast: even just switching to numeric columns instead of integers in Postgres results in a 10x slowdown)
Tunings I've tried without success include (blindly following some web advice):
increased the shared memory available to Postgres to 256MB
increased the working memory to 2MB
disabled connection and statement logging
used a stored procedure via CREATE FUNCTION ... LANGUAGE SQL
So my question is, is my experience here normal, and this is what I can expect when using a SQL database? I can understand that ACID must come with costs, but this is kind of crazy in my opinion. I'm not asking for realtime game speed, but since Java can process millions of doubles in under 20 ms, I feel a bit jealous.
Is there a better way to do simple OLAP on the cheap (both in terms of money and server complexity)? I've looked into Mondrian and Pig + Hadoop but not super excited about maintaining yet another server application and not sure if they would even help.
No the Python code and Java code do all the work in house so to speak. I just generate 4 arrays with 350,000 random values each, then take the average. I don't include the generation in the timings, only the averaging step. The java threads timing uses 4 threads (one per array average), overkill but it's definitely the fastest.
The sqlite3 timing is driven by the Python program and is running from disk (not :memory:)
I realize Postgres is doing much more behind the scenes, but most of that work doesn't matter to me since this is read only data.
The Postgres query doesn't change timing on subsequent runs.
I've rerun the Python tests to include spooling it off the disk. The timing slows down considerably to nearly 4 secs. But I'm guessing that Python's file handling code is pretty much in C (though maybe not the csv lib?) so this indicates to me that Postgres isn't streaming from the disk either (or that you are correct and I should bow down before whoever wrote their storage layer!)
| [
"I would say your test scheme is not really useful. To fulfill the db query, the db server goes through several steps:\n\nparse the SQL\nwork up a query plan, i. e. decide on which indices to use (if any), optimize etc.\nif an index is used, search it for the pointers to the actual data, then go to the appropriate location in the data or\nif no index is used, scan the whole table to determine which rows are needed\nload the data from disk into a temporary location (hopefully, but not necessarily, memory)\nperform the count() and avg() calculations\n\nSo, creating an array in Python and getting the average basically skips all these steps save the last one. As disk I/O is among the most expensive operations a program has to perform, this is a major flaw in the test (see also the answers to this question I asked here before). Even if you read the data from disk in your other test, the process is completely different and it's hard to tell how relevant the results are.\nTo obtain more information about where Postgres spends its time, I would suggest the following tests:\n\nCompare the execution time of your query to a SELECT without the aggregating functions (i. e. cut step 5)\nIf you find that the aggregation leads to a significant slowdown, try if Python does it faster, obtaining the raw data through the plain SELECT from the comparison.\n\nTo speed up your query, reduce disk access first. I doubt very much that it's the aggregation that takes the time.\nThere's several ways to do that:\n\nCache data (in memory!) for subsequent access, either via the db engine's own capabilities or with tools like memcached\nReduce the size of your stored data\nOptimize the use of indices. Sometimes this can mean to skip index use altogether (after all, it's disk access, too). For MySQL, I seem to remember that it's recommended to skip indices if you assume that the query fetches more than 10% of all the data in the table.\nIf your query makes good use of indices, I know that for MySQL databases it helps to put indices and data on separate physical disks. However, I don't know whether that's applicable for Postgres.\nThere also might be more sophisticated problems such as swapping rows to disk if for some reason the result set can't be completely processed in memory. But I would leave that kind of research until I run into serious performance problems that I can't find another way to fix, as it requires knowledge about a lot of little under-the-hood details in your process.\n\nUpdate:\nI just realized that you seem to have no use for indices for the above query and most likely aren't using any, too, so my advice on indices probably wasn't helpful. Sorry. Still, I'd say that the aggregation is not the problem but disk access is. I'll leave the index stuff in, anyway, it might still have some use.\n",
"Postgres is doing a lot more than it looks like (maintaining data consistency for a start!)\nIf the values don't have to be 100% spot on, or if the table is updated rarely, but you are running this calculation often, you might want to look into Materialized Views to speed it up.\n(Note, I have not used materialized views in Postgres, they look at little hacky, but might suite your situation).\nMaterialized Views\nAlso consider the overhead of actually connecting to the server and the round trip required to send the request to the server and back.\nI'd consider 200ms for something like this to be pretty good, A quick test on my oracle server, the same table structure with about 500k rows and no indexes, takes about 1 - 1.5 seconds, which is almost all just oracle sucking the data off disk.\nThe real question is, is 200ms fast enough?\n-------------- More --------------------\nI was interested in solving this using materialized views, since I've never really played with them. This is in oracle.\nFirst I created a MV which refreshes every minute.\ncreate materialized view mv_so_x \nbuild immediate \nrefresh complete \nSTART WITH SYSDATE NEXT SYSDATE + 1/24/60\n as select count(*),avg(a),avg(b),avg(c),avg(d) from so_x;\n\nWhile its refreshing, there is no rows returned\nSQL> select * from mv_so_x;\n\nno rows selected\n\nElapsed: 00:00:00.00\n\nOnce it refreshes, its MUCH faster than doing the raw query\nSQL> select count(*),avg(a),avg(b),avg(c),avg(d) from so_x;\n\n COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D)\n---------- ---------- ---------- ---------- ----------\n 1899459 7495.38839 22.2905454 5.00276131 2.13432836\n\nElapsed: 00:00:05.74\nSQL> select * from mv_so_x;\n\n COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D)\n---------- ---------- ---------- ---------- ----------\n 1899459 7495.38839 22.2905454 5.00276131 2.13432836\n\nElapsed: 00:00:00.00\nSQL> \n\nIf we insert into the base table, the result is not immediately viewable view the MV.\nSQL> insert into so_x values (1,2,3,4,5);\n\n1 row created.\n\nElapsed: 00:00:00.00\nSQL> commit;\n\nCommit complete.\n\nElapsed: 00:00:00.00\nSQL> select * from mv_so_x;\n\n COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D)\n---------- ---------- ---------- ---------- ----------\n 1899459 7495.38839 22.2905454 5.00276131 2.13432836\n\nElapsed: 00:00:00.00\nSQL> \n\nBut wait a minute or so, and the MV will update behind the scenes, and the result is returned fast as you could want.\nSQL> /\n\n COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D)\n---------- ---------- ---------- ---------- ----------\n 1899460 7495.35823 22.2905352 5.00276078 2.17647059\n\nElapsed: 00:00:00.00\nSQL> \n\nThis isn't ideal. for a start, its not realtime, inserts/updates will not be immediately visible. Also, you've got a query running to update the MV whether you need it or not (this can be tune to whatever time frame, or on demand). But, this does show how much faster an MV can make it seem to the end user, if you can live with values which aren't quite upto the second accurate.\n",
"I retested with MySQL specifying ENGINE = MEMORY and it doesn't change a thing (still 200 ms). Sqlite3 using an in-memory db gives similar timings as well (250 ms).\nThe math here looks correct (at least the size, as that's how big the sqlite db is :-)\nI'm just not buying the disk-causes-slowness argument as there is every indication the tables are in memory (the postgres guys all warn against trying too hard to pin tables to memory as they swear the OS will do it better than the programmer)\nTo clarify the timings, the Java code is not reading from disk, making it a totally unfair comparison if Postgres is reading from the disk and calculating a complicated query, but that's really besides the point, the DB should be smart enough to bring a small table into memory and precompile a stored procedure IMHO.\nUPDATE (in response to the first comment below):\nI'm not sure how I'd test the query without using an aggregation function in a way that would be fair, since if i select all of the rows it'll spend tons of time serializing and formatting everything. I'm not saying that the slowness is due to the aggregation function, it could still be just overhead from concurrency, integrity, and friends. I just don't know how to isolate the aggregation as the sole independent variable.\n",
"Those are very detailed answers, but they mostly beg the question, how do I get these benefits without leaving Postgres given that the data easily fits into memory, requires concurrent reads but no writes and is queried with the same query over and over again.\nIs it possible to precompile the query and optimization plan? I would have thought the stored procedure would do this, but it doesn't really help.\nTo avoid disk access it's necessary to cache the whole table in memory, can I force Postgres to do that? I think it's already doing this though, since the query executes in just 200 ms after repeated runs.\nCan I tell Postgres that the table is read only, so it can optimize any locking code?\nI think it's possible to estimate the query construction costs with an empty table (timings range from 20-60 ms) \nI still can't see why the Java/Python tests are invalid. Postgres just isn't doing that much more work (though I still haven't addressed the concurrency aspect, just the caching and query construction)\nUPDATE: \nI don't think it's fair to compare the SELECTS as suggested by pulling 350,000 through the driver and serialization steps into Python to run the aggregation, nor even to omit the aggregation as the overhead in formatting and displaying is hard to separate from the timing. If both engines are operating on in memory data, it should be an apples to apples comparison, I'm not sure how to guarantee that's already happening though.\nI can't figure out how to add comments, maybe i don't have enough reputation?\n",
"I'm a MS-SQL guy myself, and we'd use DBCC PINTABLE to keep a table cached, and SET STATISTICS IO to see that it's reading from cache, and not disk. \nI can't find anything on Postgres to mimic PINTABLE, but pg_buffercache seems to give details on what is in the cache - you may want to check that, and see if your table is actually being cached.\nA quick back of the envelope calculation makes me suspect that you're paging from disk. Assuming Postgres uses 4-byte integers, you have (6 * 4) bytes per row, so your table is a minimum of (24 * 350,000) bytes ~ 8.4MB. Assuming 40 MB/s sustained throughput on your HDD, you're looking at right around 200ms to read the data (which, as pointed out, should be where almost all of the time is being spent). \nUnless I screwed up my math somewhere, I don't see how it's possible that you are able to read 8MB into your Java app and process it in the times you're showing - unless that file is already cached by either the drive or your OS.\n",
"I don't think that your results are all that surprising -- if anything it is that Postgres is so fast.\nDoes the Postgres query run faster a second time once it has had a chance to cache the data? To be a little fairer your test for Java and Python should cover the cost of acquiring the data in the first place (ideally loading it off disk).\nIf this performance level is a problem for your application in practice but you need a RDBMS for other reasons then you could look at memcached. You would then have faster cached access to raw data and could do the calculations in code.\n",
"One other thing that an RDBMS generally does for you is to provide concurrency by protecting you from simultaneous access by another process. This is done by placing locks, and there's some overhead from that.\nIf you're dealing with entirely static data that never changes, and especially if you're in a basically \"single user\" scenario, then using a relational database doesn't necessarily gain you much benefit.\n",
"Are you using TCP to access the Postgres? In that case Nagle is messing with your timing.\n",
"You need to increase postgres' caches to the point where the whole working set fits into memory before you can expect to see perfomance comparable to doing it in-memory with a program.\n",
"Thanks for the Oracle timings, that's the kind of stuff I'm looking for (disappointing though :-)\nMaterialized views are probably worth considering as I think I can precompute the most interesting forms of this query for most users.\nI don't think query round trip time should be very high as i'm running the the queries on the same machine that runs Postgres, so it can't add much latency?\nI've also done some checking into the cache sizes, and it seems Postgres relies on the OS to handle caching, they specifically mention BSD as the ideal OS for this, so I thinking Mac OS ought to be pretty smart about bringing the table into memory. Unless someone has more specific params in mind I think more specific caching is out of my control.\nIn the end I can probably put up with 200 ms response times, but knowing that 7 ms is a possible target makes me feel unsatisfied, as even 20-50 ms times would enable more users to have more up to date queries and get rid of a lots of caching and precomputed hacks.\nI just checked the timings using MySQL 5 and they are slightly worse than Postgres. So barring some major caching breakthroughs, I guess this is what I can expect going the relational db route.\nI wish I could up vote some of your answers, but I don't have enough points yet.\n"
] | [
15,
12,
6,
3,
3,
1,
1,
1,
0,
0
] | [] | [] | [
"aggregate",
"olap",
"optimization",
"python",
"sql"
] | stackoverflow_0000051553_aggregate_olap_optimization_python_sql.txt |
Q:
Does re.compile() or any given Python library call throw an exception?
I can't tell from the Python documentation whether the re.compile(x) function may throw an exception (assuming you pass in a string). I imagine there is something that could be considered an invalid regular expression. The larger question is, where do I go to find if a given Python library call may throw exception(s) and what those are?
A:
Well, re.compile certainly may:
>>> import re
>>> re.compile('he(lo')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python25\lib\re.py", line 180, in compile
return _compile(pattern, flags)
File "C:\Python25\lib\re.py", line 233, in _compile
raise error, v # invalid expression
sre_constants.error: unbalanced parenthesis
The documentation does support this, in a roundabout way - check the bottom of the "Module Contents" page for a (brief) description of the error exception.
Unfortunately, I don't have any answer to the general question. I suppose the documentation for the various modules varies in quality and thoroughness. If there were particular modules you were interested in, you might be able to decompile them (if written in Python) or even look at the source, if they're in the standard library.
A:
Unlike Java, where there are exceptions that must be declared to be raised (and some that don't have to be, but that's another story), any Python code may raise any exception at any time.
There is a list of built-in exceptions, which generally has some description of when these exceptions might be raised. It's up to you as to how much exception handling you will do, and whether you will handle stuff gracefully or just fail with a traceback.
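So in practice, if you compile user-supplied patterns, a minimal defensive sketch looks like this (Python 2 syntax, to match the traceback above):
import re

def compile_or_none(pattern):
    try:
        return re.compile(pattern)
    except re.error, e:  # the error exception documented on the re module page
        print "invalid pattern %r: %s" % (pattern, e)
        return None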
| Does re.compile() or any given Python library call throw an exception? | I can't tell from the Python documentation whether the re.compile(x) function may throw an exception (assuming you pass in a string). I imagine there is something that could be considered an invalid regular expression. The larger question is, where do I go to find if a given Python library call may throw exception(s) and what those are?
| [
"Well, re.compile certainly may:\n>>> import re\n>>> re.compile('he(lo')\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"C:\\Python25\\lib\\re.py\", line 180, in compile\n return _compile(pattern, flags)\n File \"C:\\Python25\\lib\\re.py\", line 233, in _compile\n raise error, v # invalid expression\nsre_constants.error: unbalanced parenthesis\n\nThe documentation does support this, in a roundabout way - check the bottom of the \"Module Contents\" page for (brief) description of the error exception.\nUnfortunately, I don't have any answer to the general question. I suppose the documentation for the various modules varies in quality and thoroughness. If there were particular modules you were interested in, you might be able to decompile them (if written in Python) or even look at the source, if they're in the standard library.\n",
"Unlike Java, where there are exceptions that must be declared to be raised (and some that don't have to be, but that's another story), any Python code may raise any exception at any time.\nThere are a list of built-in exceptions, which generally has some description of when these exceptions might be raised. Its up to you as to how much exception handling you will do, and if you will handle stuff gracefully, or just fail with a traceback.\n"
] | [
13,
7
] | [] | [] | [
"exception",
"python",
"regex"
] | stackoverflow_0000058119_exception_python_regex.txt |
Q:
Can distutils create empty __init__.py files?
If all of my __init__.py files are empty, do I have to store them into version control, or is there a way to make distutils create empty __init__.py files during installation?
A:
In Python, __init__.py files actually have a meaning! They mean that the folder they are in is a Python package. As such, they have a real role in your code and should most probably be stored in Version Control.
You could well imagine a folder in your source tree that is NOT a Python package, for example a folder containing only resources (e.g. images) and no code. That folder would not need to have a __init__.py file in it. Now how do you tell the difference between folders where distutils should create those files and folders where it should not?
A:
Is there a reason you want to avoid putting empty __init__.py files in version control? If you do this you won't be able to import your packages from the source directory without first running distutils.
If you really want to, I suppose you can create __init__.py in setup.py. It has to be before running distutils.setup, so setup itself is able to find your packages:
from distutils.core import setup
import os
for path in [my_package_directories]:
    filename = os.path.join(path, '__init__.py')
if not os.path.exists(filename):
init = open(filename, 'w')
init.close()
setup(
...
)
but... what would you gain from this, compared to having the empty __init__.py files there in the first place?
| Can distutils create empty __init__.py files? | If all of my __init__.py files are empty, do I have to store them into version control, or is there a way to make distutils create empty __init__.py files during installation?
| [
"In Python, __init__.py files actually have a meaning! They mean that the folder they are in is a Python module. As such, they have a real role in your code and should most probably be stored in Version Control.\nYou could well imagine a folder in your source tree that is NOT a Python module, for example a folder containing only resources (e.g. images) and no code. That folder would not need to have a __init__.py file in it. Now how do you make the difference between folders where distutils should create those files and folders where it should not ?\n",
"Is there a reason you want to avoid putting empty __init__.py files in version control? If you do this you won't be able to import your packages from the source directory wihout first running distutils.\nIf you really want to, I suppose you can create __init__.py in setup.py. It has to be before running distutils.setup, so setup itself is able to find your packages:\nfrom distutils import setup\nimport os\n\nfor path in [my_package_directories]:\n filename = os.path.join(pagh, '__init__.py')\n if not os.path.exists(filename):\n init = open(filename, 'w')\n init.close()\n\nsetup(\n...\n)\n\nbut... what would you gain from this, compared to having the empty __init__.py files there in the first place? \n"
] | [
7,
4
] | [] | [] | [
"distutils",
"python",
"version_control"
] | stackoverflow_0000060352_distutils_python_version_control.txt |
Q:
python regex to match multi-line preprocessor macro
What follows is a regular expression I have written to match multi-line pre-processor macros in C / C++ code. I'm by no means a regular expressions guru, so I'd welcome any advice on how I can make this better.
Here's the regex:
\s*#define(.*\\\n)+[\S]+(?!\\)
It should match all of this:
#define foo(x) if(x) \
doSomething(x)
But only some of this (it shouldn't match the next line of code):
#define foo(x) if(x) \
doSomething(x)
normalCode();
And also shouldn't match single-line preprocessor macros.
I'm pretty sure that the regex above works - but as I said, there probably a better way of doing it, and I imagine that there are ways of breaking it. Can anyone suggest any?
A:
This is a simple test program I knocked up:
#!/usr/bin/env python
TEST1="""
#include "Foo.h"
#define bar foo\\
x
#include "Bar.h"
"""
TEST2="""
#define bar foo
#define x 1 \\
12 \\
2 \\\\ 3
Foobar
"""
TEST3="""
#define foo(x) if(x) \\
doSomething(x)
"""
TEST4="""
#define foo(x) if(x) \\
doSomething(x)
normalCode();
"""
import re
matcher = re.compile(r"^[ \t]*#define(.*\\\n)+.*$",re.MULTILINE)
def extractDefines(s):
mo = matcher.search(s)
if not mo:
print mo
return
print mo.group(0)
extractDefines(TEST1)
extractDefines(TEST2)
extractDefines(TEST3)
extractDefines(TEST4)
The re I used:
r"^[ \t]*#define(.*\\\n)+.*$"
Is very similar to the one use used, the changes:
[ \t] To avoid newlines at the start of the define.
I rely on + being greedy, so I can use a simple .*$ at the end to get the first line of the define that doesn't end with \
A:
start = r"^\s*#define\s+"
continuation = r"(?:.*\\\n)+"
lastline = r".*$"
re_multiline_macros = re.compile(start + continuation + lastline,
re.MULTILINE)
| python regex to match multi-line preprocessor macro | What follows is a regular expression I have written to match multi-line pre-processor macros in C / C++ code. I'm by no means a regular expressions guru, so I'd welcome any advice on how I can make this better.
Here's the regex:
\s*#define(.*\\\n)+[\S]+(?!\\)
It should match all of this:
#define foo(x) if(x) \
doSomething(x)
But only some of this (it shouldn't match the next line of code):
#define foo(x) if(x) \
doSomething(x)
normalCode();
And also shouldn't match single-line preprocessor macros.
I'm pretty sure that the regex above works - but as I said, there probably a better way of doing it, and I imagine that there are ways of breaking it. Can anyone suggest any?
| [
"This is a simple test program I knocked up:\n#!/usr/bin/env python\n\nTEST1=\"\"\"\n#include \"Foo.h\"\n#define bar foo\\\\\n x\n#include \"Bar.h\"\n\"\"\"\n\nTEST2=\"\"\"\n#define bar foo\n#define x 1 \\\\\n 12 \\\\\n 2 \\\\\\\\ 3\nFoobar\n\"\"\"\n\nTEST3=\"\"\"\n#define foo(x) if(x) \\\\\ndoSomething(x)\n\"\"\"\n\nTEST4=\"\"\"\n#define foo(x) if(x) \\\\\ndoSomething(x)\nnormalCode();\n\"\"\"\n\nimport re\nmatcher = re.compile(r\"^[ \\t]*#define(.*\\\\\\n)+.*$\",re.MULTILINE)\n\ndef extractDefines(s):\n mo = matcher.search(s)\n if not mo:\n print mo\n return\n print mo.group(0)\n\nextractDefines(TEST1)\nextractDefines(TEST2)\nextractDefines(TEST3)\nextractDefines(TEST4)\n\nThe re I used:\nr\"^[ \\t]*#define(.*\\\\\\n)+.*$\"\n\nIs very similar to the one use used, the changes:\n\n[ \\t] To avoid newlines at the start\nof the define.\nI rely on + being\ngreedy, so I can use a simple .*$ at\nthe end to get the first line of the\ndefine that doesn't end with \\\n\n",
"start = r\"^\\s*#define\\s+\"\ncontinuation = r\"(?:.*\\\\\\n)+\"\nlastline = r\".*$\"\n\nre_multiline_macros = re.compile(start + continuation + lastline, \n re.MULTILINE)\n\n"
] | [
6,
4
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0000060685_python_regex.txt |
Q:
How do I write a python HTTP server to listen on multiple ports?
I'm writing a small web server in Python, using BaseHTTPServer and a custom subclass of BaseHTTPServer.BaseHTTPRequestHandler. Is it possible to make this listen on more than one port?
What I'm doing now:
class MyRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
[...]
class ThreadingHTTPServer(ThreadingMixIn, HTTPServer):
pass
server = ThreadingHTTPServer(('localhost', 80), MyRequestHandler)
server.serve_forever()
A:
Sure; just start two different servers on two different ports in two different threads that each use the same handler. Here's a complete, working example that I just wrote and tested. If you run this code then you'll be able to get a Hello World webpage at both http://localhost:1111/ and http://localhost:2222/
from threading import Thread
from SocketServer import ThreadingMixIn
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
class Handler(BaseHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.send_header("Content-type", "text/plain")
self.end_headers()
self.wfile.write("Hello World!")
class ThreadingHTTPServer(ThreadingMixIn, HTTPServer):
daemon_threads = True
def serve_on_port(port):
server = ThreadingHTTPServer(("localhost",port), Handler)
server.serve_forever()
Thread(target=serve_on_port, args=[1111]).start()
serve_on_port(2222)
update:
This also works with Python 3 but three lines need to be slightly changed:
from socketserver import ThreadingMixIn
from http.server import HTTPServer, BaseHTTPRequestHandler
and
self.wfile.write(bytes("Hello World!", "utf-8"))
A:
Not easily. You could have two ThreadingHTTPServer instances and write your own serve_forever() function (don't worry, it's not a complicated function).
The existing function:
def serve_forever(self, poll_interval=0.5):
"""Handle one request at a time until shutdown.
Polls for shutdown every poll_interval seconds. Ignores
self.timeout. If you need to do periodic tasks, do them in
another thread.
"""
self.__serving = True
self.__is_shut_down.clear()
while self.__serving:
# XXX: Consider using another file descriptor or
# connecting to the socket to wake this up instead of
# polling. Polling reduces our responsiveness to a
# shutdown request and wastes cpu at all other times.
r, w, e = select.select([self], [], [], poll_interval)
if r:
self._handle_request_noblock()
self.__is_shut_down.set()
So our replacement would be something like:
def serve_forever(server1,server2):
while True:
r,w,e = select.select([server1,server2],[],[],0)
if server1 in r:
server1.handle_request()
if server2 in r:
server2.handle_request()
A:
I would say that threading for something this simple is overkill. You're better off using some form of asynchronous programming.
Here is an example using Twisted:
from twisted.internet import reactor
from twisted.web import resource, server
class MyResource(resource.Resource):
isLeaf = True
def render_GET(self, request):
return 'gotten'
site = server.Site(MyResource())
reactor.listenTCP(8000, site)
reactor.listenTCP(8001, site)
reactor.run()
I also think it looks a lot cleaner to have each port be handled in the same way, instead of having the main thread handle one port and an additional thread handle the other. Arguably that can be fixed in the thread example, but then you're using three threads.
| How do I write a python HTTP server to listen on multiple ports? | I'm writing a small web server in Python, using BaseHTTPServer and a custom subclass of BaseHTTPServer.BaseHTTPRequestHandler. Is it possible to make this listen on more than one port?
What I'm doing now:
class MyRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
[...]
class ThreadingHTTPServer(ThreadingMixIn, HTTPServer):
pass
server = ThreadingHTTPServer(('localhost', 80), MyRequestHandler)
server.serve_forever()
| [
"Sure; just start two different servers on two different ports in two different threads that each use the same handler. Here's a complete, working example that I just wrote and tested. If you run this code then you'll be able to get a Hello World webpage at both http://localhost:1111/ and http://localhost:2222/\nfrom threading import Thread\nfrom SocketServer import ThreadingMixIn\nfrom BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler\n\nclass Handler(BaseHTTPRequestHandler):\n def do_GET(self):\n self.send_response(200)\n self.send_header(\"Content-type\", \"text/plain\")\n self.end_headers()\n self.wfile.write(\"Hello World!\")\n\nclass ThreadingHTTPServer(ThreadingMixIn, HTTPServer):\n daemon_threads = True\n\ndef serve_on_port(port):\n server = ThreadingHTTPServer((\"localhost\",port), Handler)\n server.serve_forever()\n\nThread(target=serve_on_port, args=[1111]).start()\nserve_on_port(2222)\n\nupdate:\nThis also works with Python 3 but three lines need to be slightly changed:\nfrom socketserver import ThreadingMixIn\nfrom http.server import HTTPServer, BaseHTTPRequestHandler\n\nand\nself.wfile.write(bytes(\"Hello World!\", \"utf-8\"))\n\n",
"Not easily. You could have two ThreadingHTTPServer instances, write your own serve_forever() function (don't worry it's not a complicated function).\nThe existing function:\ndef serve_forever(self, poll_interval=0.5):\n \"\"\"Handle one request at a time until shutdown.\n\n Polls for shutdown every poll_interval seconds. Ignores\n self.timeout. If you need to do periodic tasks, do them in\n another thread.\n \"\"\"\n self.__serving = True\n self.__is_shut_down.clear()\n while self.__serving:\n # XXX: Consider using another file descriptor or\n # connecting to the socket to wake this up instead of\n # polling. Polling reduces our responsiveness to a\n # shutdown request and wastes cpu at all other times.\n r, w, e = select.select([self], [], [], poll_interval)\n if r:\n self._handle_request_noblock()\n self.__is_shut_down.set()\n\nSo our replacement would be something like:\ndef serve_forever(server1,server2):\n while True:\n r,w,e = select.select([server1,server2],[],[],0)\n if server1 in r:\n server1.handle_request()\n if server2 in r:\n server2.handle_request()\n\n",
"I would say that threading for something this simple is overkill. You're better off using some form of asynchronous programming.\nHere is an example using Twisted:\nfrom twisted.internet import reactor\nfrom twisted.web import resource, server\n\nclass MyResource(resource.Resource):\n isLeaf = True\n def render_GET(self, request):\n return 'gotten'\n\nsite = server.Site(MyResource())\n\nreactor.listenTCP(8000, site)\nreactor.listenTCP(8001, site)\nreactor.run()\n\nI also thinks it looks a lot cleaner to have each port be handled in the same way, instead of having the main thread handle one port and an additional thread handle the other. Arguably that can be fixed in the thread example, but then you're using three threads.\n"
] | [
40,
6,
6
] | [] | [] | [
"python",
"webserver"
] | stackoverflow_0000060680_python_webserver.txt |
Q:
Comparing runtimes
I am trying to get some accurate runtime comparisons of PHP vs Python (and potentially any other language that I have to include). Timing within a script is not my problem, but timing within a script does not account for everything from the moment the request is made to run the script to the moment output is produced.
1) Is it actually worth taking such things into account?
2) Assuming it is worth taking it into account, how do I do this?
I'm using a Mac so I've got access to Linux commands and I'm not afraid to compile/create a command to help me, I just don't know how to write such a command.
A:
If your idea is to compare the languages, I'd say anything outside them is not relevant for comparison purposes.
Nonetheless you can use the time command to measure everything and can compare it with the timing within a script.
Like this:
$ time script.php
HI!
real 0m3.218s
user 0m0.080s
sys 0m0.064s
It will give you clock time, user time (php interpreter) and sys time (OS time)
If you are thinking web, then it gets a lot harder because you would be mixing webserver overhead and that is not always easy to compare if, say, you are using WSGI v/s mod_php. Then you'd have to hook probes into the webserving parts of the chain as well
A:
It's worth taking speed into account if you're optimizing code. You should generally know why you're optimizing code (as in: a specific task in your existing codebase is taking too long, not "I heard PHP is slower than Python"). It's not worth taking speed into account if you don't actually plan on switching languages. Just because one tiny module does something slightly faster doesn't mean rewriting your app in another language is a good idea. There are many other factors to choosing a language besides speed.
You benchmark, of course. Run the two codebases multiple times and compare the timing. You can use the time command if both scripts are executable from the shell, or use respective benchmarking functionality from each language; the latter case depends heavily on the actual language, naturally. A timeit sketch for the Python side follows this list.
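A minimal timeit sketch for the Python side (the statement being timed is just a placeholder):
import timeit

t = timeit.Timer("sum(xrange(1000))")
# repeat a few times and take the best, to discount warm-up noise
print min(t.repeat(repeat=5, number=1000))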
A:
Well, you can use the "time" command to help:
you@yourmachine:~$ time echo "hello world"
hello world
real 0m0.000s
user 0m0.000s
sys 0m0.000s
you@yourmachine:~$
And this will get around timing outside of the environment.
As for whether you need to actually time that extra work... that entirely depends on what you are doing. I assume this is for some kind of web application, so it depends on how the framework you use actually works... does it cache some kind of compiled (or parsed) version of the script? If so, then startup time will be totally irrelevant (since the first hit will be the only one that startup time exists in).
Also, make sure to run your tests in a loop so you can discount the first run (or include the cost of the first run in your report if you want). I have done some tests in Java, and the first run is always slowest due to the JIT doing its job (and the same sort of hit may exist in PHP, Python and any other languages you try).
| Comparing runtimes | I am trying to get some accurate runtime comparisons of PHP vs Python (and potentially any other language that I have to include). Timing within a script is not my problem but timing within a script does not account for everything from the moment the request is made to run the script to output.
1) Is it actually worth taking such things into account?
2) Assuming it is worth taking it into account, how do I do this?
I'm using a Mac so I've got access to Linux commands and I'm not afraid to compile/create a command to help me, I just don't know how to write such a command.
| [
"If your idea is to compare the languages, I'd say anything outside them is not relevant for comparison purposes. \nNonetheless you can use the time command to measure everything and can compare it with the timing within a script.\nLike this:\n$ time script.php\nHI!\n\nreal 0m3.218s\nuser 0m0.080s\nsys 0m0.064s\n\nIt will give you clock time, user time (php interpreter) and sys time (OS time)\nIf you are thinking web, then it gets a lot harder because you would be mixing webserver overhead and that is not always easy to compare if, say, you are using WSGI v/s mod_php. Then you'd have to hook probes into the webserving parts of the chain as well\n",
"\nIt's worth taking speed into account if you're optimizing code. You should generally know why you're optimizing code (as in: a specific task in your existing codebase is taking too long, not \"I heard PHP is slower than Python\"). It's not worth taking speed into account if you don't actually plan on switching languages. Just because one tiny module does something slightly faster doesn't mean rewriting your app in another language is a good idea. There are many other factors to choosing a language besides speed.\nYou benchmark, of course. Run the two codebases multiple times and compare the timing. You can use the time command if both scripts are executable from the shell, or use respective benchmarking functionality from each language; the latter case depends heavily on the actual language, naturally.\n\n",
"Well, you can use the \"time\" command to help:\nyou@yourmachine:~$ time echo \"hello world\"\nhello world\n\nreal 0m0.000s\nuser 0m0.000s\nsys 0m0.000s\nyou@yourmachine:~$ \n\nAnd this will get around timing outside of the environment.\nAs for whether you need to actually time that extra work... that entirely depends on what you are doing. I assume this is for some kind of web application of some sort, so it depends on how the framework you use actually works... does it cache some kind of compiled (or parsed) version of the script? If so, then startup time will be totally irrelevant (since the first hit will be the only one that startup time exists in).\nAlso, make sure to run your tests in a loop so you can discount the first run (and include the cost on the first run in your report if you want). I have done some tests in Java, and the first run is always slowest due to the JIT doing its job (and the same sort of hit may exist in PHP, Python and any other languages you try).\n"
] | [
4,
1,
1
] | [] | [] | [
"benchmarking",
"php",
"python"
] | stackoverflow_0000062079_benchmarking_php_python.txt |
Q:
Passing on named variable arguments in python
Say I have the following methods:
def methodA(arg, **kwargs):
pass
def methodB(arg, *args, **kwargs):
pass
In methodA I wish to call methodB, passing on the kwargs. However, it seems that if I define methodA as follows, the second argument will be passed on as positional rather than named variable arguments.
def methodA(arg, **kwargs):
methodB("argvalue", kwargs)
How do I make sure that the **kwargs in methodA gets passed as **kwargs to methodB?
A:
Put the asterisks before the kwargs variable. This makes Python pass the variable (which is assumed to be a dictionary) as keyword arguments.
methodB("argvalue", **kwargs)
A:
As an aside: When using functions instead of methods, you could also use functools.partial:
import functools
def foo(arg, **kwargs):
...
bar = functools.partial(foo, "argvalue")
The last line will define a function "bar" that, when called, will call foo with the first argument set to "argvalue" and all other arguments just passed on:
bar(5, myarg="value")
will call
foo("argvalue", 5, myarg="value")
Unfortunately that will not work with methods.
A:
Some experimentation and I figured this one out:
def methodA(arg, **kwargs):
methodB("argvalue", **kwargs)
Seems obvious now...
| Passing on named variable arguments in python | Say I have the following methods:
def methodA(arg, **kwargs):
pass
def methodB(arg, *args, **kwargs):
pass
In methodA I wish to call methodB, passing on the kwargs. However, it seems that if I define methodA as follows, the second argument will be passed on as positional rather than named variable arguments.
def methodA(arg, **kwargs):
methodB("argvalue", kwargs)
How do I make sure that the **kwargs in methodA gets passed as **kwargs to methodB?
| [
"Put the asterisks before the kwargs variable. This makes Python pass the variable (which is assumed to be a dictionary) as keyword arguments.\nmethodB(\"argvalue\", **kwargs)\n\n",
"As an aside: When using functions instead of methods, you could also use functools.partial:\nimport functools\n\ndef foo(arg, **kwargs):\n ...\n\nbar = functools.partial(foo, \"argvalue\")\n\nThe last line will define a function \"bar\" that, when called, will call foo with the first argument set to \"argvalue\" and all other functions just passed on:\nbar(5, myarg=\"value\")\n\nwill call\nfoo(\"argvalue\", 5, myarg=\"value\")\n\nUnfortunately that will not work with methods.\n",
"Some experimentation and I figured this one out:\ndef methodA(arg, **kwargs):\n methodB(\"argvalue\", **kwargs)\nSeems obvious now...\n"
] | [
34,
2,
1
] | [] | [] | [
"python",
"variadic_functions"
] | stackoverflow_0000051412_python_variadic_functions.txt |
Q:
How to add method using metaclass
How do I add an instance method to a class using a metaclass (yes I do need to use a metaclass)? The following kind of works, but the func_name will still be "foo":
def bar(self):
print "bar"
class MetaFoo(type):
def __new__(cls, name, bases, dict):
dict["foobar"] = bar
return type(name, bases, dict)
class Foo(object):
__metaclass__ = MetaFoo
>>> f = Foo()
>>> f.foobar()
bar
>>> f.foobar.func_name
'bar'
My problem is that some library code actually uses the func_name and later fails to find the 'bar' method of the Foo instance. I could do:
dict["foobar"] = types.FunctionType(bar.func_code, {}, "foobar")
There is also types.MethodType, but I need an instance that doesn't exist yet to use that. Am I missing something here?
A:
Try dynamically extending the bases that way you can take advantage of the mro and the methods are actual methods:
class Parent(object):
def bar(self):
print "bar"
class MetaFoo(type):
def __new__(cls, name, bases, dict):
return type(name, (Parent,) + bases, dict)
class Foo(object):
__metaclass__ = MetaFoo
if __name__ == "__main__":
f = Foo()
f.bar()
print f.bar.func_name
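If changing the bases isn't an option, another sketch (Python 2 attribute names) is to keep the metaclass from the question but rebuild the function under the new name; passing the original func_globals, rather than {}, keeps any global lookups in the body working:
import types

def bar(self):
    print "bar"

class MetaFoo(type):
    def __new__(cls, name, bases, dict):
        # same function body, but func_name will now be "foobar"
        dict["foobar"] = types.FunctionType(
            bar.func_code, bar.func_globals, "foobar", bar.func_defaults)
        return type.__new__(cls, name, bases, dict)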
A:
I think what you want to do is this:
>>> class Foo():
... def __init__(self, x):
... self.x = x
...
>>> def bar(self):
... print 'bar:', self.x
...
>>> bar.func_name = 'foobar'
>>> Foo.foobar = bar
>>> f = Foo(12)
>>> f.foobar()
bar: 12
>>> f.foobar.func_name
'foobar'
Now you are free to pass Foos to a library that expects Foo instances to have a method named foobar.
Unfortunately, (1) I don't know how to use metaclasses and (2) I'm not sure I read your question correctly, but I hope this helps.
Note that func_name is only assignable in Python 2.4 and higher.
| How to add method using metaclass | How do I add an instance method to a class using a metaclass (yes I do need to use a metaclass)? The following kind of works, but the func_name will still be "foo":
def bar(self):
print "bar"
class MetaFoo(type):
def __new__(cls, name, bases, dict):
dict["foobar"] = bar
return type(name, bases, dict)
class Foo(object):
__metaclass__ = MetaFoo
>>> f = Foo()
>>> f.foobar()
bar
>>> f.foobar.func_name
'bar'
My problem is that some library code actually uses the func_name and later fails to find the 'bar' method of the Foo instance. I could do:
dict["foobar"] = types.FunctionType(bar.func_code, {}, "foobar")
There is also types.MethodType, but I need an instance that doesn't exist yet to use that. Am I missing something here?
| [
"Try dynamically extending the bases that way you can take advantage of the mro and the methods are actual methods:\nclass Parent(object):\n def bar(self):\n print \"bar\"\n\nclass MetaFoo(type):\n def __new__(cls, name, bases, dict):\n return type(name, (Parent,) + bases, dict)\n\nclass Foo(object):\n __metaclass__ = MetaFoo\n\nif __name__ == \"__main__\":\n f = Foo()\n f.bar()\n print f.bar.func_name\n\n",
"I think what you want to do is this:\n>>> class Foo():\n... def __init__(self, x):\n... self.x = x\n... \n>>> def bar(self):\n... print 'bar:', self.x\n... \n>>> bar.func_name = 'foobar'\n>>> Foo.foobar = bar\n>>> f = Foo(12)\n>>> f.foobar()\nbar: 12\n>>> f.foobar.func_name\n'foobar'\n\nNow you are free to pass Foos to a library that expects Foo instances to have a method named foobar.\nUnfortunately, (1) I don't know how to use metaclasses and (2) I'm not sure I read your question correctly, but I hope this helps. \nNote that func_name is only assignable in Python 2.4 and higher.\n"
] | [
15,
2
] | [] | [] | [
"metaclass",
"python"
] | stackoverflow_0000065400_metaclass_python.txt |
Q:
How create threads under Python for Delphi
I'm hosting Python script with Python for Delphi components inside my Delphi application. I'd like to create background tasks which keep running by script.
Is it possible to create threads which keep running even if the script execution ends (but not the host process, which keeps going on). I've noticed that the program gets stuck if the executing script ends and there is thread running. However if I'll wait until the thread is finished everything goes fine.
I'm trying to use "threading" standard module for threads.
A:
Python has its own threading module that comes standard, if it helps. You can create thread objects using the threading module.
threading Documentation
thread Documentation
The thread module offers low level threading and synchronization using simple Lock objects.
Again, not sure if this helps since you're using Python under a Delphi environment.
A:
Threads by definition are part of the same process. If you want them to keep running, they need to be forked off into a new process; see os.fork() and friends.
You'll probably want the new process to end (via exit() or the like) immediately after spawning the script.
A:
If a process dies, all its threads die with it, so a solution might be a separate process.
See if creating a xmlrpc server might help you, that is a simple solution for interprocess communication.
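A minimal sketch of that separate process using the stdlib's SimpleXMLRPCServer (the port number and function are made up for illustration):
from SimpleXMLRPCServer import SimpleXMLRPCServer

def do_background_work(arg):
    # long-running work lives in this process, so it survives the end
    # of any script run inside the Delphi host
    return "done: %r" % (arg,)

server = SimpleXMLRPCServer(("localhost", 8765))
server.register_function(do_background_work)
server.serve_forever()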
| How to create threads under Python for Delphi | I'm hosting Python scripts with the Python for Delphi components inside my Delphi application. I'd like to create background tasks from the script that keep running.
Is it possible to create threads which keep running even if the script execution ends (but not the host process, which keeps going on). I've noticed that the program gets stuck if the executing script ends and there is thread running. However if I'll wait until the thread is finished everything goes fine.
I'm trying to use "threading" standard module for threads.
| [
"Python has its own threading module that comes standard, if it helps. You can create thread objects using the threading module.\nthreading Documentation\nthread Documentation\nThe thread module offers low level threading and synchronization using simple Lock objects.\nAgain, not sure if this helps since you're using Python under a Delphi environment.\n",
"Threads by definition are part of the same process. If you want them to keep running, they need to be forked off into a new process; see os.fork() and friends.\nYou'll probably want the new process to end (via exit() or the like) immediately after spawning the script.\n",
"If a process dies all it's threads die with it, so a solution might be a separate process.\nSee if creating a xmlrpc server might help you, that is a simple solution for interprocess communication.\n"
] | [
2,
0,
0
] | [] | [] | [
"delphi",
"python"
] | stackoverflow_0000063681_delphi_python.txt |
Q:
python cgi on IIS
How do you set up IIS so that you can call python scripts from asp pages?
Ok, so I found the answer to that question here: http://support.microsoft.com/kb/276494
So on to my next question: How do you call a cgi script from within classic asp (vb) code? Particularly one which is not in the web root directory.
A:
You could also do it this way.
A:
I don't believe that VBScript as hosted by IIS has any way of executing an external process. If you are using python as an AXscripting engine then you could just use the sys module. If the script you're calling is actually meant to be a cgi script you'll have to mimic all the environment variables that the cgi uses. The alternative is to put the script on the python path, import it and hope that it is modular enough that you can call the pieces you need and bypass the cgi handling code.
| python cgi on IIS | How do you set up IIS so that you can call python scripts from asp pages?
Ok, so I found the answer to that question here: http://support.microsoft.com/kb/276494
So on to my next question: How do you call a cgi script from within classic asp (vb) code? Particularly one which is not in the web root directory.
| [
"You could also do it this way.\n",
"I don't believe that VBScript as hosted by IIS has any way of executing an external process. If you are using python as an AXscripting engine then you could just use the sys module. If the script you're calling is actually meant to be a cgi script you'll have to mimic all the environment variables that the cgi uses. The alternative is to put the script on the python path, import it and hope that it is modular enough that you can call the pieces you need and bypass the cgi handling code.\n"
] | [
2,
1
] | [] | [] | [
"asp_classic",
"cgi",
"iis",
"python",
"vbscript"
] | stackoverflow_0000061781_asp_classic_cgi_iis_python_vbscript.txt |
Q:
Nice Python wrapper for Yahoo's Geoplanet web service?
Has anybody created a nice wrapper around Yahoo's geo webservice "GeoPlanet" yet?
A:
After a brief amount of Googling, I found nothing that looks like a wrapper for this API, but I'm not quite sure if a wrapper is what is necessary for GeoPlanet.
According to Yahoo's documentation for GeoPlanet, requests are made in the form of HTTP GET messages which can very easily be made using Python's httplib module, and responses can take one of several forms including XML and JSON. Python can very easily parse these formats. In fact, Yahoo! itself even offers libraries for parsing both XML and JSON with Python.
I know it sounds like a lot of libraries, but all the hard work has already been done for the programmer. It would just take a little "gluing together" and you would have yourself a nice interface to Yahoo! GeoPlanet using the power of Python.
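To illustrate the "gluing together", here is a hedged sketch; the appid is a placeholder and the JSON field names are assumptions that should be checked against the actual response:
import urllib2
import simplejson  # third-party at the time; the stdlib "json" from 2.6 on

url = ("http://where.yahooapis.com/v1/places.q('SFO')"
       "?format=json&appid=YOUR_APP_ID")
data = simplejson.loads(urllib2.urlopen(url).read())
for place in data["places"]["place"]:  # field names assumed, verify them
    print place["name"], place["woeid"]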
| Nice Python wrapper for Yahoo's Geoplanet web service? | Has anybody created a nice wrapper around Yahoo's geo webservice "GeoPlanet" yet?
| [
"After a brief amount of Googling, I found nothing that looks like a wrapper for this API, but I'm not quite sure if a wrapper is what is necessary for GeoPlanet. \nAccording to Yahoo's documentation for GeoPlanet, requests are made in the form of an HTTP GET messages which can very easily be made using Python's httplib module, and responses can take one of several forms including XML and JSON. Python can very easily parse these formats. In fact, Yahoo! itself even offers libraries for parsing both XML and JSON with Python. \nI know it sounds like a lot of libraries, but all the hard work has already been done for the programmer. It would just take a little \"gluing together\" and you would have yourself a nice interface to Yahoo! GeoPlanet using the power of Python.\n"
] | [
2
] | [] | [] | [
"gis",
"python",
"yahoo"
] | stackoverflow_0000064185_gis_python_yahoo.txt |
Q:
Decorating a parent class method
I would like to make a child class that has a method of the parent class where the method is a 'classmethod' in the child class but not in the parent class.
Essentially, I am trying to accomplish the following:
class foo(object):
def meth1(self, val):
self.value = val
class bar(foo):
meth1 = classmethod(foo.meth1)
A:
I'm also not entirely sure what the exact behaviour you want is, but assuming it's that you want bar.meth1(42) to be equivalent to foo.meth1 being a classmethod of bar (with "self" being the class), then you can achieve this with:
def convert_to_classmethod(method):
return classmethod(method.im_func)
class bar(foo):
meth1 = convert_to_classmethod(foo.meth1)
The problem with classmethod(foo.meth1) is that foo.meth1 has already been converted to a method, with a special meaning for the first parameter. You need to undo this and look at the underlying function object, reinterpreting what "self" means.
I'd also caution that this is a pretty odd thing to do, and thus liable to cause confusion to anyone reading your code. You are probably better off thinking through a different solution to your problem.
A:
What are you trying to accomplish? If I saw such a construct in live Python code, I would consider beating the original programmer.
A:
The question, as posed, seems quite odd to me: I can't see why anyone would want to do that. It is possible that you are misunderstanding just what a "classmethod" is in Python (it's a bit different from, say, a static method in Java).
A normal method is more-or-less just a function which takes as its first argument (usually called "self"), an instance of the class, and which is invoked as "instance.method(args)".
A classmethod is more-or-less just a function which takes as its first argument (often called "cls"), a class, and which can be invoked as "Class.method(args)" OR as "instance.method(args)".
With this in mind, and your code shown above, what would you expect to have happen if someone creates an instance of bar and calls meth1 on it?
bar1 = bar()
bar1.meth1("xyz")
When the code to meth1 is called, it is passed two arguments 'self' and 'val'. I guess that you expect "xyz" to be passed for 'val', but what are you thinking gets passed for 'self'? Should it be the bar1 instance (in this case, no override was needed)? Or should it be the class bar (what then would this code DO)?
| Decorating a parent class method | I would like to make a child class that has a method of the parent class where the method is a 'classmethod' in the child class but not in the parent class.
Essentially, I am trying to accomplish the following:
class foo(object):
def meth1(self, val):
self.value = val
class bar(foo):
meth1 = classmethod(foo.meth1)
| [
"I'm also not entirely sure what the exact behaviour you want is, but assuming its that you want bar.meth1(42) to be equivalent to foo.meth1 being a classmethod of bar (with \"self\" being the class), then you can acheive this with:\ndef convert_to_classmethod(method):\n return classmethod(method.im_func)\n\nclass bar(foo):\n meth1 = convert_to_classmethod(foo.meth1)\n\nThe problem with classmethod(foo.meth1) is that foo.meth1 has already been converted to a method, with a special meaning for the first parameter. You need to undo this and look at the underlying function object, reinterpreting what \"self\" means.\nI'd also caution that this is a pretty odd thing to do, and thus liable to cause confusion to anyone reading your code. You are probably better off thinking through a different solution to your problem.\n",
"What are you trying to accomplish? If I saw such a construct in live Python code, I would consider beating the original programmer.\n",
"The question, as posed, seems quite odd to me: I can't see why anyone would want to do that. It is possible that you are misunderstanding just what a \"classmethod\" is in Python (it's a bit different from, say, a static method in Java).\nA normal method is more-or-less just a function which takes as its first argument (usually called \"self\"), an instance of the class, and which is invoked as \".\".\nA classmethod is more-or-less just a function which takes as its first argument (often called \"cls\"), a class, and which can be invoked as \".\" OR as \".\".\nWith this in mind, and your code shown above, what would you expect to have happen if someone creates an instance of bar and calls meth1 on it?\nbar1 = bar()\nbar1.meth1(\"xyz\")\n\nWhen the code to meth1 is called, it is passed two arguments 'self' and 'val'. I guess that you expect \"xyz\" to be passed for 'val', but what are you thinking gets passed for 'self'? Should it be the bar1 instance (in this case, no override was needed)? Or should it be the class bar (what then would this code DO)?\n"
] | [
4,
3,
0
] | [] | [] | [
"inheritance",
"oop",
"python"
] | stackoverflow_0000066636_inheritance_oop_python.txt |
Q:
Change command Method for Tkinter Button in Python
I create a new Button object but did not specify the command option upon creation. Is there a way in Tkinter to change the command (onclick) function after the object has been created?
A:
Though Eli Courtwright's program will work fine¹, what you really seem to want is just a way to reconfigure after instantiation any attribute which you could have set when you instantiated². How you do so is by way of the configure() method.
from Tkinter import Tk, Button
def goodbye_world():
print "Goodbye World!\nWait, I changed my mind!"
button.configure(text = "Hello World!", command=hello_world)
def hello_world():
print "Hello World!\nWait, I changed my mind!"
button.configure(text = "Goodbye World!", command=goodbye_world)
root = Tk()
button = Button(root, text="Hello World!", command=hello_world)
button.pack()
root.mainloop()
¹ "fine" if you use only the mouse; if you care about tabbing and using [Space] or [Enter] on buttons, then you will have to implement (duplicating existing code) keypress events too. Setting the command option through .configure is much easier.
² the only attribute that can't change after instantiation is name.
A:
Sure; just use the bind method to specify the callback after the button has been created. I've just written and tested the example below. You can find a nice tutorial on doing this at http://www.pythonware.com/library/tkinter/introduction/events-and-bindings.htm
from Tkinter import Tk, Button
root = Tk()
button = Button(root, text="Click Me!")
button.pack()
def callback(event):
print "Hello World!"
button.bind("<Button-1>", callback)
root.mainloop()
| Change command Method for Tkinter Button in Python | I create a new Button object but did not specify the command option upon creation. Is there a way in Tkinter to change the command (onclick) function after the object has been created?
| [
"Though Eli Courtwright's program will work fine¹, what you really seem to want though is just a way to reconfigure after instantiation any attribute which you could have set when you instantiated². How you do so is by way of the configure() method.\nfrom Tkinter import Tk, Button\n\ndef goodbye_world():\n print \"Goodbye World!\\nWait, I changed my mind!\"\n button.configure(text = \"Hello World!\", command=hello_world)\n\ndef hello_world():\n print \"Hello World!\\nWait, I changed my mind!\"\n button.configure(text = \"Goodbye World!\", command=goodbye_world)\n\nroot = Tk()\nbutton = Button(root, text=\"Hello World!\", command=hello_world)\nbutton.pack()\n\nroot.mainloop()\n\n¹ \"fine\" if you use only the mouse; if you care about tabbing and using [Space] or [Enter] on buttons, then you will have to implement (duplicating existing code) keypress events too. Setting the command option through .configure is much easier.\n² the only attribute that can't change after instantiation is name.\n",
"Sure; just use the bind method to specify the callback after the button has been created. I've just written and tested the example below. You can find a nice tutorial on doing this at http://www.pythonware.com/library/tkinter/introduction/events-and-bindings.htm\nfrom Tkinter import Tk, Button\n\nroot = Tk()\nbutton = Button(root, text=\"Click Me!\")\nbutton.pack()\n\ndef callback(event):\n print \"Hello World!\"\n\nbutton.bind(\"<Button-1>\", callback)\nroot.mainloop()\n\n"
] | [
37,
2
] | [] | [] | [
"python",
"tkinter",
"user_interface"
] | stackoverflow_0000068327_python_tkinter_user_interface.txt |
Q:
Best way to open a socket in Python
I want to open a TCP client socket in Python. Do I have to go through all the low-level BSD create-socket-handle / connect-socket stuff or is there a simpler one-line way?
A:
Opening sockets in python is pretty simple. You really just need something like this:
import socket
sock = socket.socket()
sock.connect((address, port))
and then you can send() and recv() like any other socket
A:
OK, this code worked
s = socket.socket()
s.connect((ip,port))
s.send("my request\r")
print s.recv(256)
s.close()
It was quite difficult to work that out from the Python socket module documentation. So I'll accept The.Anti.9's answer.
A:
For developing portable network programs of any sort in Python, Twisted is quite useful. One of its benefits is providing a convenient layer above low-level socket APIs.
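For comparison, a minimal Twisted client sketch doing roughly what the accepted snippet does (host and port are placeholders):
from twisted.internet import reactor, protocol

class Requester(protocol.Protocol):
    def connectionMade(self):
        self.transport.write("my request\r")

    def dataReceived(self, data):
        print data
        self.transport.loseConnection()

class RequesterFactory(protocol.ClientFactory):
    protocol = Requester

    def clientConnectionLost(self, connector, reason):
        reactor.stop()

    clientConnectionFailed = clientConnectionLost

reactor.connectTCP("example.com", 1234, RequesterFactory())
reactor.run()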
| Best way to open a socket in Python | I want to open a TCP client socket in Python. Do I have to go through all the low-level BSD create-socket-handle / connect-socket stuff or is there a simpler one-line way?
| [
"Opening sockets in python is pretty simple. You really just need something like this:\nimport socket\nsock = socket.socket()\nsock.connect((address, port))\n\nand then you can send() and recv() like any other socket\n",
"OK, this code worked\ns = socket.socket()\ns.connect((ip,port))\ns.send(\"my request\\r\")\nprint s.recv(256)\ns.close()\n\nIt was quite difficult to work that out from the Python socket module documentation. So I'll accept The.Anti.9's answer.\n",
"For developing portable network programs of any sort in Python, Twisted is quite useful. One of its benefits is providing a convenient layer above low-level socket APIs.\n"
] | [
81,
21,
10
] | [] | [] | [
"networking",
"python",
"tcp"
] | stackoverflow_0000068774_networking_python_tcp.txt |
Q:
mod_python/MySQL error on INSERT with a lot of data: "OperationalError: (2006, 'MySQL server has gone away')"
When doing an INSERT with a lot of data, ie:
INSERT INTO table (mediumtext_field) VALUES ('...lots of text here: about 2MB worth...')
MySQL returns
"OperationalError: (2006, 'MySQL server has gone away')"
This is happening within a minute of starting the script, so it is not a timeout issue. Also, mediumtext_field should be able to hold ~16MB of data, so that shouldn't be a problem.
Any ideas what is causing the error or how to work around it?
Some relevant libraries being used: mod_python 3.3.1, MySQL 5.0.51 (on Windows XP SP3, via xampp, details below)
ApacheFriends XAMPP (basic package) version 1.6.5
Apache 2.2.6
MySQL 5.0.51
phpMyAdmin 2.11.3
A:
Check the max_allowed_packet setting in your my.cnf file. This determines the largest amount of data you can send to your MySQL server in a single statement; exceeding this value results in that error.
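A quick way to confirm the current limit from Python (connection parameters are placeholders); raising it permanently means editing my.cnf and restarting the server:
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="root", passwd="", db="test")
cur = conn.cursor()
cur.execute("SHOW VARIABLES LIKE 'max_allowed_packet'")
print cur.fetchone()  # e.g. ('max_allowed_packet', '1048576') -> 1MB default
# With SUPER privilege this raises it for new connections until restart:
# cur.execute("SET GLOBAL max_allowed_packet = 16 * 1024 * 1024")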
| mod_python/MySQL error on INSERT with a lot of data: "OperationalError: (2006, 'MySQL server has gone away')" | When doing an INSERT with a lot of data, ie:
INSERT INTO table (mediumtext_field) VALUES ('...lots of text here: about 2MB worth...')
MySQL returns
"OperationalError: (2006, 'MySQL server has gone away')"
This is happening within a minute of starting the script, so it is not a timeout issue. Also, mediumtext_field should be able to hold ~16MB of data, so that shouldn't be a problem.
Any ideas what is causing the error or how to work around it?
Some relevant libraries being used: mod_python 3.3.1, MySQL 5.0.51 (on Windows XP SP3, via xampp, details below)
ApacheFriends XAMPP (basic package) version 1.6.5
Apache 2.2.6
MySQL 5.0.51
phpMyAdmin 2.11.3
| [
"check the max_packet setting in your my.cnf file. this determines the largest amount of data you can send to your mysql server in a single statement. exceeding this values results in that error.\n"
] | [
1
] | [] | [] | [
"mysql",
"mysql_error_2006",
"python",
"xampp"
] | stackoverflow_0000067180_mysql_mysql_error_2006_python_xampp.txt |
Q:
Using the docstring from one method to automatically overwrite that of another method
The problem: I have a class which contains a template method execute which calls another method _execute. Subclasses are supposed to overwrite _execute to implement some specific functionality. This functionality should be documented in the docstring of _execute.
Advanced users can create their own subclasses to extend the library. However, another user dealing with such a subclass should only use execute, so he won't see the correct docstring if he uses help(execute).
Therefore it would be nice to modify the base class in such a way that in a subclass the docstring of execute is automatically replaced with that of _execute. Any ideas how this might be done?
I was thinking of metaclasses to do this, to make this completely transparent to the user.
A:
Well, if you don't mind copying the original method in the subclass, you can use the following technique.
import new
def copyfunc(func):
return new.function(func.func_code, func.func_globals, func.func_name,
func.func_defaults, func.func_closure)
class Metaclass(type):
def __new__(meta, name, bases, attrs):
for key in attrs.keys():
if key[0] == '_':
skey = key[1:]
for base in bases:
original = getattr(base, skey, None)
if original is not None:
copy = copyfunc(original)
copy.__doc__ = attrs[key].__doc__
attrs[skey] = copy
break
return type.__new__(meta, name, bases, attrs)
class Class(object):
__metaclass__ = Metaclass
def execute(self):
'''original doc-string'''
return self._execute()
class Subclass(Class):
def _execute(self):
'''sub-class doc-string'''
pass
A:
Is there a reason you can't override the base class's execute function directly?
class Base(object):
def execute(self):
...
class Derived(Base):
def execute(self):
"""Docstring for derived class"""
Base.execute(self)
...stuff specific to Derived...
If you don't want to do the above:
Method objects don't support writing to the __doc__ attribute, so you have to change __doc__ in the actual function object. Since you don't want to override the one in the base class, you'd have to give each subclass its own copy of execute:
class Derived(Base):
def execute(self):
return Base.execute(self)
def _execute(self):
"""Docstring for subclass"""
...
execute.__doc__= _execute.__doc__
but this is similar to a roundabout way of redefining execute...
A:
Look at the functools.wraps() decorator; it does all of this, but I don't know offhand if you can get it to run in the right context
A:
Well the doc-string is stored in __doc__ so it wouldn't be too hard to re-assign it based on the doc-string of _execute after the fact.
Basically:
class MyClass(object):
def execute(self):
'''original doc-string'''
self._execute()
class SubClass(MyClass):
def _execute(self):
'''sub-class doc-string'''
pass
# re-assign doc-string of execute
def execute(self,*args,**kw):
return MyClass.execute(self, *args, **kw)
execute.__doc__=_execute.__doc__
Execute has to be re-declared so that the doc string gets attached to the version of execute for the SubClass and not for MyClass (which would otherwise interfere with other sub-classes).
That's not a very tidy way of doing it, but from the POV of the user of a library it should give the desired result. You could then wrap this up in a meta-class to make it easier for people who are sub-classing.
A:
I agree that the simplest, most Pythonic way of approaching this is to simply redefine execute in your subclasses and have it call the execute method of the base class:
class Sub(Base):
def execute(self):
"""New docstring goes here"""
return Base.execute(self)
This is very little code to accomplish what you want; the only downside is that you must repeat this code in every subclass that extends Base. However, this is a small price to pay for the behavior you want.
If you want a sloppy and verbose way of making sure that the docstring for execute is dynamically generated, you can use the descriptor protocol, which would be significantly less code than the other proposals here. This is annoying because you can't just set a descriptor on an existing function, which means that execute must be written as a separate class with a __call__ method.
Here's the code to do this, but keep in mind that my above example is much simpler and more Pythonic:
class Executor(object):
__doc__ = property(lambda self: self.inst._execute.__doc__)
def __call__(self):
return self.inst._execute()
class Base(object):
execute = Executor()
class Sub(Base):
def __init__(self):
self.execute.inst = self
def _execute(self):
"""Actually does something!"""
return "Hello World!"
spam = Sub()
print spam.execute.__doc__ # prints "Actually does something!"
help(spam) # the execute method says "Actually does something!"
| Using the docstring from one method to automatically overwrite that of another method | The problem: I have a class which contains a template method execute which calls another method _execute. Subclasses are supposed to overwrite _execute to implement some specific functionality. This functionality should be documented in the docstring of _execute.
Advanced users can create their own subclasses to extend the library. However, another user dealing with such a subclass should only use execute, so he won't see the correct docstring if he uses help(execute).
Therefore it would be nice to modify the base class in such a way that in a subclass the docstring of execute is automatically replaced with that of _execute. Any ideas how this might be done?
I was thinking of metaclasses to do this, to make this completely transparent to the user.
| [
"Well, if you don't mind copying the original method in the subclass, you can use the following technique.\nimport new\n\ndef copyfunc(func):\n return new.function(func.func_code, func.func_globals, func.func_name,\n func.func_defaults, func.func_closure)\n\nclass Metaclass(type):\n def __new__(meta, name, bases, attrs):\n for key in attrs.keys():\n if key[0] == '_':\n skey = key[1:]\n for base in bases:\n original = getattr(base, skey, None)\n if original is not None:\n copy = copyfunc(original)\n copy.__doc__ = attrs[key].__doc__\n attrs[skey] = copy\n break\n return type.__new__(meta, name, bases, attrs)\n\nclass Class(object):\n __metaclass__ = Metaclass\n def execute(self):\n '''original doc-string'''\n return self._execute()\n\nclass Subclass(Class):\n def _execute(self):\n '''sub-class doc-string'''\n pass\n\n",
"Is there a reason you can't override the base class's execute function directly?\nclass Base(object):\n def execute(self):\n ...\n\nclass Derived(Base):\n def execute(self):\n \"\"\"Docstring for derived class\"\"\"\n Base.execute(self)\n ...stuff specific to Derived...\n\nIf you don't want to do the above:\nMethod objects don't support writing to the __doc__ attribute, so you have to change __doc__ in the actual function object. Since you don't want to override the one in the base class, you'd have to give each subclass its own copy of execute:\nclass Derived(Base):\n def execute(self):\n return Base.execute(self)\n\n class _execute(self):\n \"\"\"Docstring for subclass\"\"\"\n ...\n\n execute.__doc__= _execute.__doc__\n\nbut this is similar to a roundabout way of redefining execute...\n",
"Look at the functools.wraps() decorator; it does all of this, but I don't know offhand if you can get it to run in the right context\n",
"Well the doc-string is stored in __doc__ so it wouldn't be too hard to re-assign it based on the doc-string of _execute after the fact.\nBasically:\n\n\nclass MyClass(object):\n def execute(self):\n '''original doc-string'''\n self._execute()\n\nclass SubClass(MyClass):\n def _execute(self):\n '''sub-class doc-string'''\n pass\n\n # re-assign doc-string of execute\n def execute(self,*args,**kw):\n return MyClass.execute(*args,**kw)\n execute.__doc__=_execute.__doc__\n\n\n\nExecute has to be re-declared to that the doc string gets attached to the version of execute for the SubClass and not for MyClass (which would otherwise interfere with other sub-classes).\nThat's not a very tidy way of doing it, but from the POV of the user of a library it should give the desired result. You could then wrap this up in a meta-class to make it easier for people who are sub-classing.\n",
"I agree that the simplest, most Pythonic way of approaching this is to simply redefine execute in your subclasses and have it call the execute method of the base class:\nclass Sub(Base):\n def execute(self):\n \"\"\"New docstring goes here\"\"\"\n return Base.execute(self)\n\nThis is very little code to accomplish what you want; the only downside is that you must repeat this code in every subclass that extends Base. However, this is a small price to pay for the behavior you want.\nIf you want a sloppy and verbose way of making sure that the docstring for execute is dynamically generated, you can use the descriptor protocol, which would be significantly less code than the other proposals here. This is annoying because you can't just set a descriptor on an existing function, which means that execute must be written as a separate class with a __call__ method.\nHere's the code to do this, but keep in mind that my above example is much simpler and more Pythonic:\nclass Executor(object):\n __doc__ = property(lambda self: self.inst._execute.__doc__)\n\n def __call__(self):\n return self.inst._execute()\n\nclass Base(object):\n execute = Executor()\n\nclass Sub(Base):\n def __init__(self):\n self.execute.inst = self\n\n def _execute(self):\n \"\"\"Actually does something!\"\"\"\n return \"Hello World!\"\n\nspam = Sub()\nprint spam.execute.__doc__ # prints \"Actually does something!\"\nhelp(spam) # the execute method says \"Actually does something!\"\n\n"
] | [
4,
2,
1,
0,
0
] | [] | [] | [
"metaclass",
"python"
] | stackoverflow_0000071817_metaclass_python.txt |
Q:
Is there a common way to check in Python if an object is any function type?
I have a function in Python which is iterating over the attributes returned from dir(obj), and I want to check to see if any of the objects contained within is a function, method, built-in function, etc. Normally you could use callable() for this, but I don't want to include classes. The best I've come up with so far is:
isinstance(obj, (types.BuiltinFunctionType, types.FunctionType, types.MethodType))
Is there a more future-proof way to do this check?
Edit: I misspoke before when I said: "Normally you could use callable() for this, but I don't want to disqualify classes." I actually do want to disqualify classes. I want to match only functions, not classes.
A:
The inspect module has exactly what you want:
inspect.isroutine( obj )
FYI, the code is:
def isroutine(object):
"""Return true if the object is any kind of function or method."""
return (isbuiltin(object)
or isfunction(object)
or ismethod(object)
or ismethoddescriptor(object))
A:
If you want to exclude classes and other random objects that may have a __call__ method, and only check for functions and methods, these three functions in the inspect module
inspect.isfunction(obj)
inspect.isbuiltin(obj)
inspect.ismethod(obj)
should do what you want in a future-proof way.
A:
if hasattr(obj, '__call__'): pass
This also fits in better with Python's "duck typing" philosophy, because you don't really care what it is, so long as you can call it.
It's worth noting that callable() is being removed from Python and is not present in 3.0.
A:
Depending on what you mean by 'class':
callable( obj ) and not inspect.isclass( obj )
or:
callable( obj ) and not isinstance( obj, types.ClassType )
For example, results are different for 'dict':
>>> callable( dict ) and not inspect.isclass( dict )
False
>>> callable( dict ) and not isinstance( dict, types.ClassType )
True
| Is there a common way to check in Python if an object is any function type? | I have a function in Python which is iterating over the attributes returned from dir(obj), and I want to check to see if any of the objects contained within is a function, method, built-in function, etc. Normally you could use callable() for this, but I don't want to include classes. The best I've come up with so far is:
isinstance(obj, (types.BuiltinFunctionType, types.FunctionType, types.MethodType))
Is there a more future-proof way to do this check?
Edit: I misspoke before when I said: "Normally you could use callable() for this, but I don't want to disqualify classes." I actually do want to disqualify classes. I want to match only functions, not classes.
| [
"The inspect module has exactly what you want:\ninspect.isroutine( obj )\n\nFYI, the code is:\ndef isroutine(object):\n \"\"\"Return true if the object is any kind of function or method.\"\"\"\n return (isbuiltin(object)\n or isfunction(object)\n or ismethod(object)\n or ismethoddescriptor(object))\n\n",
"If you want to exclude classes and other random objects that may have a __call__ method, and only check for functions and methods, these three functions in the inspect module\ninspect.isfunction(obj)\ninspect.isbuiltin(obj)\ninspect.ismethod(obj)\n\nshould do what you want in a future-proof way.\n",
"if hasattr(obj, '__call__'): pass\n\nThis also fits in better with Python's \"duck typing\" philosophy, because you don't really care what it is, so long as you can call it.\nIt's worth noting that callable() is being removed from Python and is not present in 3.0.\n",
"Depending on what you mean by 'class':\ncallable( obj ) and not inspect.isclass( obj )\n\nor:\ncallable( obj ) and not isinstance( obj, types.ClassType )\n\nFor example, results are different for 'dict':\n>>> callable( dict ) and not inspect.isclass( dict )\nFalse\n>>> callable( dict ) and not isinstance( dict, types.ClassType )\nTrue\n\n"
] | [
17,
5,
3,
1
] | [] | [] | [
"python",
"types"
] | stackoverflow_0000074092_python_types.txt |
Q:
libxml2-p25 on OS X 10.5 needs sudo?
When trying to use libxml2 as myself I get an error saying the package cannot be found. If I run as super user I am able to import fine.
I have installed python25 and all libxml2 and libxml2-py25 related libraries via fink and own the entire path including the library. Any ideas why I'd still need to sudo?
A:
Check your path by running:
'echo $PATH'
A:
I would suspect the permissions on the library. Can you do a strace or similar to find out the filenames it's looking for, and then check the permissions on them?
A:
The PATH environment variable was the mistake.
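Given that, a quick diagnostic is to run this both as yourself and under sudo and compare which interpreter and search path each account actually gets:
import sys
print sys.executable  # which python binary is really running
print sys.path        # where it will look for the libxml2 bindings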
| libxml2-p25 on OS X 10.5 needs sudo? | When trying to use libxml2 as myself I get an error saying the package cannot be found. If I run as super user I am able to import fine.
I have installed python25 and all libxml2 and libxml2-py25 related libraries via fink and own the entire path including the library. Any ideas why I'd still need to sudo?
| [
"Check your path by running:\n'echo $PATH'\n\n",
"I would suspect the permissions on the library. Can you do a strace or similar to find out the filenames it's looking for, and then check the permissions on them?\n",
"The PATH environment variable was the mistake.\n"
] | [
3,
0,
0
] | [] | [] | [
"libxml2",
"macos",
"python"
] | stackoverflow_0000068541_libxml2_macos_python.txt |
Q:
Python and "re"
A tutorial I have on regex in Python explains how to use the re module. I wanted to grab the URL out of an A tag, so, knowing regex, I wrote the correct expression, tested it in my regex testing app of choice, and ensured it worked. When placed into Python it failed:
result = re.match("a_regex_of_pure_awesomeness", "a string containing the awesomeness")
# result is None
After much head scratching I found the issue: it automatically expects your pattern to be at the start of the string. I have found a fix, but I would like to know how to change:
regex = ".*(a_regex_of_pure_awesomeness)"
into
regex = "a_regex_of_pure_awesomeness"
Okay, it's a standard URL regex but I wanted to avoid any potential confusion about what I wanted to get rid of and possibly pretend to be funny.
A:
In Python, there's a distinction between "match" and "search"; match only looks for the pattern at the start of the string, and search looks for the pattern starting at any location within the string.
Python regex docs
Matching vs searching
A:
from BeautifulSoup import BeautifulSoup
soup = BeautifulSoup(your_html)
for a in soup.findAll('a', href=True):
# do something with `a` w/ href attribute
print a['href']
A:
>>> import re
>>> pattern = re.compile("url")
>>> string = " url"
>>> pattern.match(string)
>>> pattern.search(string)
<_sre.SRE_Match object at 0xb7f7a6e8>
A:
Are you using the re.match() or re.search() method? My understanding is that re.match() assumes a "^" at the beginning of your expression and will only search at the beginning of the text, while re.search() acts more like the Perl regular expressions and will only match the beginning of the text if you include a "^" at the beginning of your expression. Hope that helps.
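For the original goal of grabbing the URL out of an A tag, a deliberately simple sketch using re.search (it assumes a double-quoted href and will miss messier markup):
import re

html = '<a href="http://example.com/">a link</a>'
match = re.search(r'<a[^>]+href="([^"]+)"', html)
if match:
    print match.group(1)  # http://example.com/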
| Python and "re" | A tutorial I have on Regex in python explains how to use the re module in python, I wanted to grab the URL out of an A tag so knowing Regex I wrote the correct expression and tested it in my regex testing app of choice and ensured it worked. When placed into python it failed:
result = re.match("a_regex_of_pure_awesomeness", "a string containing the awesomeness")
# result is None
After much head scratching I found the issue: it automatically expects your pattern to be at the start of the string. I have found a fix, but I would like to know how to change:
regex = ".*(a_regex_of_pure_awesomeness)"
into
regex = "a_regex_of_pure_awesomeness"
Okay, it's a standard URL regex but I wanted to avoid any potential confusion about what I wanted to get rid of and possibly pretend to be funny.
| [
"In Python, there's a distinction between \"match\" and \"search\"; match only looks for the pattern at the start of the string, and search looks for the pattern starting at any location within the string.\nPython regex docs\nMatching vs searching\n",
"from BeautifulSoup import BeautifulSoup \n\nsoup = BeautifulSoup(your_html)\nfor a in soup.findAll('a', href=True):\n # do something with `a` w/ href attribute\n print a['href']\n\n",
">>> import re\n>>> pattern = re.compile(\"url\")\n>>> string = \" url\"\n>>> pattern.match(string)\n>>> pattern.search(string)\n<_sre.SRE_Match object at 0xb7f7a6e8>\n\n",
"Are you using the re.match() or re.search() method? My understanding is that re.match() assumes a \"^\" at the beginning of your expression and will only search at the beginning of the text, while re.search() acts more like the Perl regular expressions and will only match the beginning of the text if you include a \"^\" at the beginning of your expression. Hope that helps.\n"
] | [
20,
4,
3,
1
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0000072393_python_regex.txt |
Q:
iBATIS for Python?
At my current gig, we use iBATIS through Java to CRUD our databases. I like the abstract qualities of the tool, especially when working with legacy databases, as it doesn't impose its own syntax on you.
I'm looking for a Python analogue to this library, since the website only has Java/.NET/Ruby versions available. I don't want to have to switch to Jython if I don't need to.
Are there any other projects similar to iBATIS functionality out there for Python?
A:
iBatis sequesters the SQL DML (or the definitions of the SQL) in an XML file. It specifically focuses on the mapping between the SQL and some object model defined elsewhere.
SQL Alchemy can do this -- but it isn't really a very complete solution. Like iBatis, you can merely have SQL table definitions and a mapping between the tables and Python class definitions.
What's more complete is to have a class definition that is also the SQL database definition. If the class definition generates the SQL Table DDL as well as the query and processing DML, that's much more complete.
I flip-flop between SQLAlchemy and the Django ORM. SQLAlchemy can be used in an iBatis like manner. But I prefer to make the object design central and leave the SQL implementation be derived from the objects by the toolset.
I use SQLAlchemy for large, batch, stand-alone projects. DB Loads, schema conversions, DW reporting and the like work out well. In these projects, the focus is on the relational view of the data, not the object model. The SQL that's generated may be moved into PL/SQL stored procedures, for example.
I use Django for web applications, exploiting its built-in ORM capabilities. You can, with a little work, segregate the Django ORM from the rest of the Django environment. You can provide global settings to bind your app to a specific database without using a separate settings module.
Django includes a number of common relationships (Foreign Key, Many-to-Many, One-to-One) for which it can manage the SQL implementation. It generates key and index definitions for the attached database.
If your problem is largely object-oriented, with the database being used for persistence, then the nearly transparent ORM layer of Django has advantages.
If your problem is largely relational, with the SQL processing central, then the capability of seeing the generated SQL in SQLAlchemy has advantages.
A:
Perhaps SQLAlchemy SQL Expression support is suitable. See the documentation.
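As a small sketch of that expression layer against a legacy table (the database URL and table name are placeholders, and the spelling shown is the old 0.x-era API):
from sqlalchemy import create_engine, MetaData, Table, select

engine = create_engine("sqlite:///legacy.db")
meta = MetaData(bind=engine)
users = Table("users", meta, autoload=True)  # reflect the existing table

for row in engine.execute(select([users], users.c.id == 1)):
    print row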
| iBATIS for Python? | At my current gig, we use iBATIS through Java to CRUD our databases. I like the abstract qualities of the tool, especially when working with legacy databases, as it doesn't impose its own syntax on you.
I'm looking for a Python analogue to this library, since the website only has Java/.NET/Ruby versions available. I don't want to have to switch to Jython if I don't need to.
Are there any other projects similar to iBATIS functionality out there for Python?
| [
"iBatis sequesters the SQL DML (or the definitions of the SQL) in an XML file. It specifically focuses on the mapping between the SQL and some object model defined elsewhere.\nSQL Alchemy can do this -- but it isn't really a very complete solution. Like iBatis, you can merely have SQL table definitions and a mapping between the tables and Python class definitions. \nWhat's more complete is to have a class definition that is also the SQL database definition. If the class definition generates the SQL Table DDL as well as the query and processing DML, that's much more complete. \nI flip-flop between SQLAlchemy and the Django ORM. SQLAlchemy can be used in an iBatis like manner. But I prefer to make the object design central and leave the SQL implementation be derived from the objects by the toolset.\nI use SQLAlchemy for large, batch, stand-alone projects. DB Loads, schema conversions, DW reporting and the like work out well. In these projects, the focus is on the relational view of the data, not the object model. The SQL that's generated may be moved into PL/SQL stored procedures, for example.\nI use Django for web applications, exploiting its built-in ORM capabilities. You can, with a little work, segregate the Django ORM from the rest of the Django environment. You can provide global settings to bind your app to a specific database without using a separate settings module.\nDjango includes a number of common relationships (Foreign Key, Many-to-Many, One-to-One) for which it can manage the SQL implementation. It generates key and index definitions for the attached database.\nIf your problem is largely object-oriented, with the database being used for persistence, then the nearly transparent ORM layer of Django has advantages.\nIf your problem is largely relational, with the SQL processing central, then the capability of seeing the generated SQL in SQLAlchemy has advantages.\n",
"Perhaps SQLAlchemy SQL Expression support is suitable. See the documentation. \n"
] | [
10,
1
] | [] | [] | [
"ibatis",
"orm",
"python"
] | stackoverflow_0000077731_ibatis_orm_python.txt |
Q:
Random in python 2.5 not working?
I am trying to use the import random statement in python, but it doesn't appear to have any methods in it to use.
Am I missing something?
A:
You probably have a file named random.py or random.pyc in your working directory. That's shadowing the built-in random module. You need to rename random.py to something like my_random.py and/or remove the random.pyc file.
To tell for sure what's going on, do this:
>>> import random
>>> print random.__file__
That will show you exactly which file is being imported.
A:
This is happening because you have a random.py file in the python search path, most likely the current directory.
Python is searching for modules using sys.path, which normally includes the current directory before the standard library directory that contains the expected random.py.
This is expected to be fixed in Python 3.0, so that you can't import modules from the current directory without using a special import syntax.
Just remove the random.py + random.pyc in the directory you're running python from and it'll work fine.
A:
I think you need to give some more information. It's not really possible to answer why it's not working based on the information in the question. The basic documentation for random is at:
https://docs.python.org/library/random.html
You might check there.
A:
Python 2.5.2 (r252:60911, Jun 16 2008, 18:27:58)
[GCC 3.3.4 (pre 3.3.5 20040809)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import random
>>> random.seed()
>>> dir(random)
['BPF', 'LOG4', 'NV_MAGICCONST', 'RECIP_BPF', 'Random', 'SG_MAGICCONST', 'SystemRandom', 'TWOPI', 'WichmannHill', '_BuiltinMethodType', '_MethodType', '__all__', '__builtins__', '__doc__', '__file__', '__name__', '_acos', '_ceil', '_cos', '_e', '_exp', '_hexlify', '_inst', '_log', '_pi', '_random', '_sin', '_sqrt', '_test', '_test_generator', '_urandom', '_warn', 'betavariate', 'choice', 'expovariate', 'gammavariate', 'gauss', 'getrandbits', 'getstate', 'jumpahead', 'lognormvariate', 'normalvariate', 'paretovariate', 'randint', 'random', 'randrange', 'sample', 'seed', 'setstate', 'shuffle', 'uniform', 'vonmisesvariate', 'weibullvariate']
>>> random.randint(0,3)
3
>>> random.randint(0,3)
1
>>>
A:
If the script you are trying to run is itself called random.py, then you would have a naming conflict. Choose a different name for your script.
A:
Can you post an example of what you're trying to do? It's not clear from your question what the actual problem is.
Here's an example of how to use the random module:
import random
print random.randint(0,10)
A:
Seems to work fine for me. Check out the methods in the official python documentation for random:
>>> import random
>>> random.random()
0.69130806168332215
>>> random.uniform(1, 10)
8.8384170917436293
>>> random.randint(1, 10)
4
A:
Works for me:
Python 2.5.1 (r251:54863, Jun 15 2008, 18:24:51)
[GCC 4.3.0 20080428 (Red Hat 4.3.0-8)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import random
>>> brothers = ['larry', 'curly', 'moe']
>>> random.choice(brothers)
'moe'
>>> random.choice(brothers)
'curly'
| Random in python 2.5 not working? | I am trying to use the import random statement in python, but it doesn't appear to have any methods in it to use.
Am I missing something?
| [
"You probably have a file named random.py or random.pyc in your working directory. That's shadowing the built-in random module. You need to rename random.py to something like my_random.py and/or remove the random.pyc file.\nTo tell for sure what's going on, do this:\n>>> import random\n>>> print random.__file__\n\nThat will show you exactly which file is being imported.\n",
"This is happening because you have a random.py file in the python search path, most likely the current directory.\nPython is searching for modules using sys.path, which normally includes the current directory before the standard site-packages, which contains the expected random.py.\nThis is expected to be fixed in Python 3.0, so that you can't import modules from the current directory without using a special import syntax.\nJust remove the random.py + random.pyc in the directory you're running python from and it'll work fine.\n",
"I think you need to give some more information. It's not really possible to answer why it's not working based on the information in the question. The basic documentation for random is at: \nhttps://docs.python.org/library/random.html\nYou might check there. \n",
"Python 2.5.2 (r252:60911, Jun 16 2008, 18:27:58)\n[GCC 3.3.4 (pre 3.3.5 20040809)] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import random\n>>> random.seed()\n>>> dir(random)\n['BPF', 'LOG4', 'NV_MAGICCONST', 'RECIP_BPF', 'Random', 'SG_MAGICCONST', 'SystemRandom', 'TWOPI', 'WichmannHill', '_BuiltinMethodType', '_MethodType', '__all__', '__builtins__', '__doc__', '__file__', '__name__', '_acos', '_ceil', '_cos', '_e', '_exp', '_hexlify', '_inst', '_log', '_pi', '_random', '_sin', '_sqrt', '_test', '_test_generator', '_urandom', '_warn', 'betavariate', 'choice', 'expovariate', 'gammavariate', 'gauss', 'getrandbits', 'getstate', 'jumpahead', 'lognormvariate', 'normalvariate', 'paretovariate', 'randint', 'random', 'randrange', 'sample', 'seed', 'setstate', 'shuffle', 'uniform', 'vonmisesvariate', 'weibullvariate']\n>>> random.randint(0,3)\n3\n>>> random.randint(0,3)\n1\n>>> \n\n",
"If the script you are trying to run is itself called random.py, then you would have a naming conflict. Choose a different name for your script.\n",
"Can you post an example of what you're trying to do? It's not clear from your question what the actual problem is.\nHere's an example of how to use the random module:\nimport random\nprint random.randint(0,10)\n\n",
"Seems to work fine for me. Check out the methods in the official python documentation for random:\n>>> import random\n>>> random.random()\n0.69130806168332215\n>>> random.uniform(1, 10)\n8.8384170917436293\n>>> random.randint(1, 10)\n4\n\n",
"Works for me:\nPython 2.5.1 (r251:54863, Jun 15 2008, 18:24:51) \n[GCC 4.3.0 20080428 (Red Hat 4.3.0-8)] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import random\n>>> brothers = ['larry', 'curly', 'moe']\n>>> random.choice(brothers)\n'moe'\n>>> random.choice(brothers)\n'curly'\n\n"
] | [
36,
3,
2,
1,
1,
0,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0000074430_python.txt |
Q:
Redirect command to input of another in Python
I would like to replicate this in python:
gvimdiff <(hg cat file.txt) file.txt
(hg cat file.txt outputs the most recently committed version of file.txt)
I know how to pipe the file to gvimdiff, but it won't accept another file:
$ hg cat file.txt | gvimdiff file.txt -
Too many edit arguments: "-"
Getting to the python part...
# hgdiff.py
import subprocess
import sys
file = sys.argv[1]
subprocess.call(["gvimdiff", "<(hg cat %s)" % file, file])
When subprocess is called it merely passes <(hg cat file) onto gvimdiff as a filename.
So, is there any way to redirect a command as bash does?
For simplicity's sake just cat a file and redirect it to diff:
diff <(cat file.txt) file.txt
A:
It can be done. As of Python 2.5, however, this mechanism is Linux-specific and not portable:
import subprocess
import sys
file = sys.argv[1]
p1 = subprocess.Popen(['hg', 'cat', file], stdout=subprocess.PIPE)
p2 = subprocess.Popen([
'gvimdiff',
'/proc/self/fd/%s' % p1.stdout.fileno(),
file])
p2.wait()
That said, in the specific case of diff, you can simply take one of the files from stdin, and remove the need to use the bash-alike functionality in question:
file = sys.argv[1]
p1 = subprocess.Popen(['hg', 'cat', file], stdout=subprocess.PIPE)
p2 = subprocess.Popen(['diff', '-', file], stdin=p1.stdout)
diff_text = p2.communicate()[0]
A:
There is also the commands module:
import commands
status, output = commands.getstatusoutput("gvimdiff <(hg cat file.txt) file.txt")
There is also the popen set of functions, if you want to actually grok the data from a command as it is running.
A:
This is actually an example in the docs:
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
output = p2.communicate()[0]
which means for you:
from subprocess import Popen, PIPE
import sys
file = sys.argv[1]
p1 = Popen(["hg", "cat", file], stdout=PIPE)
p2 = Popen(["gvimdiff", "file.txt"], stdin=p1.stdout, stdout=PIPE)
output = p2.communicate()[0]
This removes the use of the linux-specific /proc/self/fd bits, making it probably work on other unices like Solaris and the BSDs (including MacOS) and maybe even work on Windows.
| Redirect command to input of another in Python | I would like to replicate this in python:
gvimdiff <(hg cat file.txt) file.txt
(hg cat file.txt outputs the most recently committed version of file.txt)
I know how to pipe the file to gvimdiff, but it won't accept another file:
$ hg cat file.txt | gvimdiff file.txt -
Too many edit arguments: "-"
Getting to the python part...
# hgdiff.py
import subprocess
import sys
file = sys.argv[1]
subprocess.call(["gvimdiff", "<(hg cat %s)" % file, file])
When subprocess is called it merely passes <(hg cat file) onto gvimdiff as a filename.
So, is there any way to redirect a command as bash does?
For simplicity's sake just cat a file and redirect it to diff:
diff <(cat file.txt) file.txt
| [
"It can be done. As of Python 2.5, however, this mechanism is Linux-specific and not portable:\nimport subprocess\nimport sys\n\nfile = sys.argv[1]\np1 = subprocess.Popen(['hg', 'cat', file], stdout=subprocess.PIPE)\np2 = subprocess.Popen([\n 'gvimdiff',\n '/proc/self/fd/%s' % p1.stdout.fileno(),\n file])\np2.wait()\n\nThat said, in the specific case of diff, you can simply take one of the files from stdin, and remove the need to use the bash-alike functionality in question:\nfile = sys.argv[1]\np1 = subprocess.Popen(['hg', 'cat', file], stdout=subprocess.PIPE)\np2 = subprocess.Popen(['diff', '-', file], stdin=p1.stdout)\ndiff_text = p2.communicate()[0]\n\n",
"There is also the commands module:\nimport commands\n\nstatus, output = commands.getstatusoutput(\"gvimdiff <(hg cat file.txt) file.txt\")\n\nThere is also the popen set of functions, if you want to actually grok the data from a command as it is running.\n",
"This is actually an example in the docs:\np1 = Popen([\"dmesg\"], stdout=PIPE)\np2 = Popen([\"grep\", \"hda\"], stdin=p1.stdout, stdout=PIPE)\noutput = p2.communicate()[0]\n\nwhich means for you:\nimport subprocess\nimport sys\n\nfile = sys.argv[1]\np1 = Popen([\"hg\", \"cat\", file], stdout=PIPE)\np2 = Popen([\"gvimdiff\", \"file.txt\"], stdin=p1.stdout, stdout=PIPE)\noutput = p2.communicate()[0]\n\nThis removes the use of the linux-specific /proc/self/fd bits, making it probably work on other unices like Solaris and the BSDs (including MacOS) and maybe even work on Windows.\n"
] | [
10,
2,
2
] | [
"It just dawned on me that you are probably looking for one of the popen functions.\nfrom: http://docs.python.org/lib/module-popen2.html\npopen3(cmd[, bufsize[, mode]])\n Executes cmd as a sub-process. Returns the file objects (child_stdout, child_stdin, child_stderr). \nnamaste,\nMark\n"
] | [
-1
] | [
"bash",
"diff",
"python",
"redirect",
"vimdiff"
] | stackoverflow_0000078431_bash_diff_python_redirect_vimdiff.txt |
Q:
How to check for memory leaks in Guile extension modules?
I develop an extension module for Guile, written in C. This extension module embeds a Python interpreter.
Since this extension module invokes the Python interpreter, I need to verify that it properly manages the memory occupied by Python objects.
I found that the Python interpreter is well-behaved in its own memory handling, so that by running valgrind I can find memory leaks due to bugs in my own Python interpreter embedding code, if there are no other interfering factors.
However, when I run Guile under valgrind, valgrind reports memory leaks. Such memory leaks obscure any memory leaks due to my own code.
The question is what can I do to separate memory leaks due to bugs in my code from memory leaks reported by valgrind as due to Guile. Another tool instead of valgrind? Special valgrind options? Give up and rely upon manual code walkthrough?
A:
You've got a couple options. One is to write a suppressions file for valgrind that turns off reporting of stuff that you're not working on. Python has such a file, for example:
http://svn.python.org/projects/python/trunk/Misc/valgrind-python.supp
If valgrind doesn't like your setup, another possibility is using libmudflap; you compile your program with gcc -fmudflap -lmudflap, and the resulting code is instrumented for pointer debugging. Described in the gcc docs, and here: http://gcc.gnu.org/wiki/Mudflap_Pointer_Debugging
| How to check for memory leaks in Guile extension modules? | I develop an extension module for Guile, written in C. This extension module embeds a Python interpreter.
Since this extension module invokes the Python interpreter, I need to verify that it properly manages the memory occupied by Python objects.
I found that the Python interpreter is well-behaved in its own memory handling, so that by running valgrind I can find memory leaks due to bugs in my own Python interpreter embedding code, if there are no other interfering factors.
However, when I run Guile under valgrind, valgrind reports memory leaks. Such memory leaks obscure any memory leaks due to my own code.
The question is what can I do to separate memory leaks due to bugs in my code from memory leaks reported by valgrind as due to Guile. Another tool instead of valgrind? Special valgrind options? Give up and rely upon manual code walkthrough?
| [
"You've got a couple options. One is to write a supressions file for valgrind that turns off reporting of stuff that you're not working on. Python has such a file, for example: \nhttp://svn.python.org/projects/python/trunk/Misc/valgrind-python.supp\nIf valgrind doesn't like your setup, another possibility is using libmudflap; you compile your program with gcc -fmudflap -lmudflap, and the resulting code is instrumented for pointer debugging. Described in the gcc docs, and here: http://gcc.gnu.org/wiki/Mudflap_Pointer_Debugging\n"
] | [
7
] | [] | [] | [
"guile",
"memory_leaks",
"python",
"valgrind"
] | stackoverflow_0000078900_guile_memory_leaks_python_valgrind.txt |
Q:
How do I get started processing email related to website activity?
I am writing a web application that requires user interaction via email. I'm curious if there is a best practice or recommended source for learning about processing email. I am writing my application in Python, but I'm not sure what mail server to use or how to format the message or subject line to account for automated processing. I'm also looking for guidance on processing bouncebacks.
A:
There are some pretty serious concerns here for how to send email automatically, and here are a few:
Use an email library. Python includes one called 'email'. This is your friend, it will stop you from doing anything tragically wrong. Read an example from the Python Manual.
Some points that will stop you from getting blocked by spam filters:
Always send from a valid email address. You must be able to send email to this address and have it received (it can go into /dev/null after it's received, but it must be possible to /deliver/ there). This will stop spam filters that do Sender Address Verification from blocking your mail.
The email address you send from on the server.sendmail(fromaddr, [toaddr]) line will be where bounces go. The From: line in the email is a totally different address, and that's where mail will go when the user hits 'Reply:'. Use this to your advantage, bounces can go to one place, while reply goes to another.
Send email to a local mail server, I recommend postfix. This local server will receive your mail and be responsible for sending it to your upstream server. Once it has been delivered to the local server, treat it as 'sent' from a programmatic point of view.
If you have a site that is on a static ip in a datacenter of good reputation, don't be afraid to simply relay the mail directly to the internet. If you're in a datacenter full of script kiddies and spammers, you will need to relay this mail via a public MTA of good reputation, hopefully you will be able to work this out without a hassle.
Don't send an email in only HTML. Always send it in Plain and HTML, or just Plain. Be nice, I use a text only email client, and you don't want to annoy me.
Verify that you're not running SPF on your email domain, or get it configured to allow your server to send the mail. Do this by doing a TXT lookup on your domain.
$ dig google.com txt
...snip...
;; ANSWER SECTION:
google.com. 300 IN TXT "v=spf1 include:_netblocks.google.com ~all"
As you can see from that result, there's an SPF record there. If you don't have SPF, there won't be a TXT record. Read more about SPF on wikipedia.
Hope that helps.
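Pulling those points together, a hedged sketch (all addresses are placeholders) that sends a Plain+HTML message through a local MTA, with the bounce address deliberately different from the From: header:
import smtplib
from email.MIMEMultipart import MIMEMultipart
from email.MIMEText import MIMEText

msg = MIMEMultipart("alternative")
msg["Subject"] = "Account activity"
msg["From"] = "Web App <replies@example.com>"  # where user replies go
msg["To"] = "user@example.com"
msg.attach(MIMEText("plain-text body", "plain"))
msg.attach(MIMEText("<p>HTML body</p>", "html"))

server = smtplib.SMTP("localhost")  # hand off to the local MTA
# The envelope sender (first argument) is where bounces will go.
server.sendmail("bounces@example.com", ["user@example.com"], msg.as_string())
server.quit()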
A:
Some general information with regards to automated mail processing...
First, the mail server "brand" itself isn't that important for broadcasting or receiving emails. All of them support the standard smtp / pop3 communications protocol. Most even have IMAP support and have some level of spam filtering. That said, try to use a current generation email server.
Second, be aware that in an effort to reduce spam a lot of the receiving mail servers out there will simply throw a message away instead of responding back that a mail account doesn't exist, which means you may not receive those bounces.
Bear in mind that getting past spam filters is an art. A number of ISPs watch for duplicate messages, messages that look like spam based on keywords or other content, etc. This is sometimes independent of the quantity of messages sent; I've seen messages with as few as 50 copies get blocked by AOL even though they were legitimate emails. So, testing is your friend and look into this article on wikipedia on anti-spam techniques. Then make sure you're not doing that crap.
As far as processing the messages, just remember it's a queued system. Connect to the server via POP3 to retrieve messages, open each one, do some action, delete the message or archive it, and move on.
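As an illustration, here is a rough poplib sketch of that loop (the host and credentials are placeholders):
import poplib
import email

conn = poplib.POP3('mail.example.com')
conn.user('app-inbox')
conn.pass_('secret')

count = len(conn.list()[1])        # list() returns (response, listings, octets)
for i in range(1, count + 1):
    lines = conn.retr(i)[1]        # retr() returns (response, lines, octets)
    msg = email.message_from_string('\n'.join(lines))
    # ... dispatch on msg['Subject'] / msg['From'], do some action ...
    conn.dele(i)                   # marked now, actually removed on quit()

conn.quit()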
With regards to bouncebacks, let the mail server do most of the work. You should be able to configure it to notify a certain email account on the server in the event that it is unable to deliver a message. You can check that account periodically and process the Non Delivery Reports as necessary.
| How do I get started processing email related to website activity? | I am writing a web application that requires user interaction via email. I'm curious if there is a best practice or recommended source for learning about processing email. I am writing my application in Python, but I'm not sure what mail server to use or how to format the message or subject line to account for automated processing. I'm also looking for guidance on processing bouncebacks.
| [
"There are some pretty serious concerns here for how to send email automatically, and here are a few:\nUse an email library. Python includes one called 'email'. This is your friend, it will stop you from doing anything tragically wrong. Read an example from the Python Manual.\nSome points that will stop you from getting blocked by spam filters:\nAlways send from a valid email address. You must be able to send email to this address and have it received (it can go into /dev/null after it's received, but it must be possible to /deliver/ there). This will stop spam filters that do Sender Address Verification from blocking your mail.\nThe email address you send from on the server.sendmail(fromaddr, [toaddr]) line will be where bounces go. The From: line in the email is a totally different address, and that's where mail will go when the user hits 'Reply:'. Use this to your advantage, bounces can go to one place, while reply goes to another.\nSend email to a local mail server, I recommend postfix. This local server will receive your mail and be responsible for sending it to your upstream server. Once it has been delivered to the local server, treat it as 'sent' from a programmatic point of view.\nIf you have a site that is on a static ip in a datacenter of good reputation, don't be afraid to simply relay the mail directly to the internet. If you're in a datacenter full of script kiddies and spammers, you will need to relay this mail via a public MTA of good reputation, hopefully you will be able to work this out without a hassle.\nDon't send an email in only HTML. Always send it in Plain and HTML, or just Plain. Be nice, I use a text only email client, and you don't want to annoy me.\nVerify that you're not running SPF on your email domain, or get it configured to allow your server to send the mail. Do this by doing a TXT lookup on your domain.\n$ dig google.com txt\n...snip...\n;; ANSWER SECTION:\ngoogle.com. 300 IN TXT \"v=spf1 include:_netblocks.google.com ~all\"\n\nAs you can see from that result, there's an SPF record there. If you don't have SPF, there won't be a TXT record. Read more about SPF on wikipedia.\nHope that helps.\n",
"Some general information with regards to automated mail processing...\nFirst, the mail server \"brand\" itself isn't that important for broadcasting or receiving emails. All of them support the standard smtp / pop3 communications protocol. Most even have IMAP support and have some level of spam filtering. That said, try to use a current generation email server.\nSecond, be aware that in an effort to reduce spam a lot of the receiving mail servers out there will simply throw a message away instead of responding back that a mail account doesn't exist. Which means you may not receive those.\nBear in mind that getting past spam filters is an art. A number of isp's watch for duplicate messages, messages that look like spam based on keywords or other content, etc. This is sometimes independent of the quantity of messages sent; I've seen messages with as few as 50 copies get blocked by AOL even though they were legitimate emails. So, testing is your friend and look into this article on wikipedia on anti-spam techniques. Then make sure your not doing that crap.\n**\nAs far as processing the messages, just remember it's a queued system. Connect to the server via POP3 to retrieve messages, open it, do some action, delete the message or archive it, and move on.\nWith regards to bouncebacks, let the mail server do most of the work. You should be able to configure it to notify a certain email account on the server in the event that it is unable to deliver a message. You can check that account periodically and process the Non Delivery Reports as necessary.\n"
] | [
4,
2
] | [] | [] | [
"email",
"python"
] | stackoverflow_0000079602_email_python.txt |
Q:
How to skip sys.exitfunc when unhandled exceptions occur
As you can see, even after the program should have died it speaks from the grave. Is there a way to "deregister" the exitfunction in case of exceptions?
import atexit
def helloworld():
print("Hello World!")
atexit.register(helloworld)
raise Exception("Good bye cruel world!")
outputs
Traceback (most recent call last):
File "test.py", line 8, in <module>
raise Exception("Good bye cruel world!")
Exception: Good bye cruel world!
Hello World!
A:
I don't really know why you want to do that, but you can install an excepthook that will be called by Python whenever an uncaught exception is raised, and in it clear the list of registered functions in the atexit module.
Something like that :
import sys
import atexit
def clear_atexit_excepthook(exctype, value, traceback):
atexit._exithandlers[:] = []
sys.__excepthook__(exctype, value, traceback)
def helloworld():
print "Hello world!"
sys.excepthook = clear_atexit_excepthook
atexit.register(helloworld)
raise Exception("Good bye cruel world!")
Beware that it may behave incorrectly if the exception is raised from an atexit registered function (but then the behaviour would have been strange even if this hook was not used).
A:
In addition to calling os._exit() to avoid the registered exit handler, you also need to catch the unhandled exception:
import atexit
import os
def helloworld():
print "Hello World!"
atexit.register(helloworld)
try:
raise Exception("Good bye cruel world!")
except Exception, e:
print 'caught unhandled exception', str(e)
os._exit(1)
| How to skip sys.exitfunc when unhandled exceptions occur | As you can see, even after the program should have died it speaks from the grave. Is there a way to "deregister" the exitfunction in case of exceptions?
import atexit
def helloworld():
print("Hello World!")
atexit.register(helloworld)
raise Exception("Good bye cruel world!")
outputs
Traceback (most recent call last):
File "test.py", line 8, in <module>
raise Exception("Good bye cruel world!")
Exception: Good bye cruel world!
Hello World!
| [
"I don't really know why you want to do that, but you can install an excepthook that will be called by Python whenever an uncatched exception is raised, and in it clear the array of registered function in the atexit module.\nSomething like that :\nimport sys\nimport atexit\n\ndef clear_atexit_excepthook(exctype, value, traceback):\n atexit._exithandlers[:] = []\n sys.__excepthook__(exctype, value, traceback)\n\ndef helloworld():\n print \"Hello world!\"\n\nsys.excepthook = clear_atexit_excepthook\natexit.register(helloworld)\n\nraise Exception(\"Good bye cruel world!\")\n\nBeware that it may behave incorrectly if the exception is raised from an atexit registered function (but then the behaviour would have been strange even if this hook was not used).\n",
"In addition to calling os._exit() to avoid the registered exit handler you also need to catch the unhandled exception:\nimport atexit\nimport os\n\ndef helloworld():\n print \"Hello World!\"\n\natexit.register(helloworld) \n\ntry:\n raise Exception(\"Good bye cruel world!\")\n\nexcept Exception, e:\n print 'caught unhandled exception', str(e)\n\n os._exit(1)\n\n"
] | [
7,
0
] | [
"If you call\nimport os\nos._exit(0)\n\nthe exit handlers will not be called, yours or those registered by other modules in the application.\n"
] | [
-1
] | [
"atexit",
"exception",
"python"
] | stackoverflow_0000080993_atexit_exception_python.txt |
Q:
How to use form values from an unbound form
I have a web report that uses a Django form (new forms) for fields that control the query used to generate the report (start date, end date, ...). The issue I'm having is that the page should work using the form's initial values (unbound), but I can't access the cleaned_data field unless I call is_valid(). But is_valid() always fails on unbound forms.
It seems like Django's forms were designed with the use case of editing data such that an unbound form isn't really useful for anything other than displaying HTML.
For example, if I have:
if request.method == 'GET':
form = MyForm()
else:
    form = MyForm(request.POST)
if form.is_valid():
do_query(form.cleaned_data['start_date'], form.cleaned_data['end_date'])
is_valid() will fail if this is a GET (since it's unbound), and if I do:
if request.method == 'GET':
form = MyForm()
do_query(form.cleaned_data['start_date'], form.cleaned_data['end_date'])
else:
    form = MyForm(request.POST)
if form.is_valid():
do_query(form.cleaned_data['start_date'], form.cleaned_data['end_date'])
the first call to do_query triggers exceptions on form.cleaned_data, which is not a valid field because is_valid() has not been called. It seems like I have to do something like:
if request.method == 'GET':
form = MyForm()
do_query(form['start_date'].field.initial, form['end_date'].field.initial)
else:
    form = MyForm(request.POST)
if form.is_valid():
do_query(form.cleaned_data['start_date'], form.cleaned_data['end_date'])
that is, there isn't a common interface for retrieving the form's values between a bound form and an unbound one.
Does anyone see a cleaner way to do this?
A:
If you add this method to your form class:
def get_cleaned_or_initial(self, fieldname):
if hasattr(self, 'cleaned_data'):
return self.cleaned_data.get(fieldname)
else:
return self[fieldname].field.initial
you could then re-write your code as:
if request.method == 'GET':
form = MyForm()
else:
    form = MyForm(request.POST)
form.is_valid()
do_query(form.get_cleaned_or_initial('start_date'), form.get_cleaned_or_initial('end_date'))
A:
Unbound means there is no data associated with the form (either initial or provided later), so the validation may fail. As mentioned in other answers (and in your own conclusion), you have to provide initial values and check for both bound data and initial values.
The use case for forms is form processing and validation, so you must have some data to validate before accessing cleaned_data.
A:
You can pass a dictionary of initial values to your form:
if request.method == "GET":
# calculate my_start_date and my_end_date here...
form = MyForm( { 'start_date': my_start_date, 'end_date': my_end_date} )
...
See the official forms API documentation, where they demonstrate this.
edit: Based on answers from other users, maybe this is the cleanest solution:
if request.method == "GET":
form = MyForm()
form['start_date'] = form['start_date'].field.initial
form['end_date'] = form['end_date'].field.initial
else:
    form = MyForm(request.POST)
if form.is_valid():
do_query(form.cleaned_data['start_date'], form.cleaned_data['end_date'])
I haven't tried this though; can someone confirm that this works? I think this is better than creating a new method, because this approach doesn't require other code (possibly not written by you) to know about your new 'magic' accessor.
| How to use form values from an unbound form | I have a web report that uses a Django form (new forms) for fields that control the query used to generate the report (start date, end date, ...). The issue I'm having is that the page should work using the form's initial values (unbound), but I can't access the cleaned_data field unless I call is_valid(). But is_valid() always fails on unbound forms.
It seems like Django's forms were designed with the use case of editing data such that an unbound form isn't really useful for anything other than displaying HTML.
For example, if I have:
if request.method == 'GET':
form = MyForm()
else:
form = MyForm(request.method.POST)
if form.is_valid():
do_query(form.cleaned_data['start_date'], form.cleaned_data['end_date'])
is_valid() will fail if this is a GET (since it's unbound), and if I do:
if request.method == 'GET':
form = MyForm()
do_query(form.cleaned_data['start_date'], form.cleaned_data['end_date'])
else:
form = MyForm(request.method.POST)
if form.is_valid():
do_query(form.cleaned_data['start_date'], form.cleaned_data['end_date'])
the first call to do_query triggers exceptions on form.cleaned_data, which is not a valid field because is_valid() has not been called. It seems like I have to do something like:
if request.method == 'GET':
form = MyForm()
do_query(form['start_date'].field.initial, form['end_date'].field.initial)
else:
form = MyForm(request.method.POST)
if form.is_valid():
do_query(form.cleaned_data['start_date'], form.cleaned_data['end_date'])
that is, there isn't a common interface for retrieving the form's values between a bound form and an unbound one.
Does anyone see a cleaner way to do this?
| [
"If you add this method to your form class:\ndef get_cleaned_or_initial(self, fieldname):\n if hasattr(self, 'cleaned_data'):\n return self.cleaned_data.get(fieldname)\n else:\n return self[fieldname].field.initial\n\nyou could then re-write your code as:\nif request.method == 'GET':\n form = MyForm()\nelse:\n form = MyForm(request.method.POST)\n form.is_valid()\n\ndo_query(form.get_cleaned_or_initial('start_date'), form.get_cleaned_or_initial('end_date'))\n\n",
"Unbound means there is no data associated with form (either initial or provided later), so the validation may fail. As mentioned in other answers (and in your own conclusion), you have to provide initial values and check for both bound data and initial values.\nThe use case for forms is form processing and validation, so you must have some data to validate before you accessing cleaned_data.\n",
"You can pass a dictionary of initial values to your form:\nif request.method == \"GET\":\n # calculate my_start_date and my_end_date here...\n form = MyForm( { 'start_date': my_start_date, 'end_date': my_end_date} )\n...\n\nSee the official forms API documentation, where they demonstrate this.\nedit: Based on answers from other users, maybe this is the cleanest solution:\nif request.method == \"GET\":\n form = MyForm()\n form['start_date'] = form['start_date'].field.initial\n form['end_date'] = form['end_date'].field.initial\nelse:\n form = MyForm(request.method.POST)\nif form.is_valid():\n do_query(form.cleaned_data['start_date'], form.cleaned_data['end_date'])\n\nI haven't tried this though; can someone confirm that this works? I think this is better than creating a new method, because this approach doesn't require other code (possibly not written by you) to know about your new 'magic' accessor.\n"
] | [
7,
2,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000075621_django_python.txt |
Q:
Asynchronous Programming in Python Twisted
I'm having trouble developing a reverse proxy in Twisted. It works, but it seems overly complex and convoluted. So much of it feels like voodoo.
Are there any simple, solid examples of asynchronous program structure on the web or in books? A sort of best practices guide? When I complete my program I'd like to be able to still see the structure in some way, not be looking at a bowl of spaghetti.
A:
Twisted contains a large number of examples. One in particular, the "evolution of Finger" tutorial, contains a thorough explanation of how an asynchronous program grows from a very small kernel up to a complex system with lots of moving parts. Another one that might be of interest to you is the tutorial about simply writing servers.
The key thing to keep in mind about Twisted, or even other asynchronous networking libraries (such as asyncore, MINA, or ACE), is that your code only gets invoked when something happens. The part that I've most often heard described as "voodoo" is the management of callbacks: for example, Deferred. If you're used to writing code that runs in a straight line, and only calls functions which return immediately with results, the idea of waiting for something to call you back might be confusing. But there's nothing magical, no "voodoo" about callbacks. At the lowest level, the reactor is just sitting around and waiting for one of a small number of things to happen:
Data arrives on a connection (it will call dataReceived on a Protocol)
Time has passed (it will call a function registered with callLater).
A connection has been accepted (it will call buildProtocol on a factory registered with a listenXXX or connectXXX function).
A connection has been dropped (it will call connectionLost on the appropriate Protocol)
Every asynchronous program starts by hooking up a few of these events and then kicking off the reactor to wait for them to happen. Of course, events that happen lead to more events that get hooked up or disconnected, and so your program goes on its merry way. Beyond that, there's nothing about asynchronous program structure that is particularly interesting or special; event handlers and callbacks are just objects, and your code is run in the usual way.
Here's a simple "event-driven engine" that shows you just how simple this process is.
# Engine
import time
class SimplestReactor(object):
def __init__(self):
self.events = []
self.stopped = False
def do(self, something):
self.events.append(something)
def run(self):
while not self.stopped:
time.sleep(0.1)
if self.events:
thisTurn = self.events.pop(0)
thisTurn()
def stop(self):
self.stopped = True
reactor = SimplestReactor()
# Application
def thing1():
print 'Doing thing 1'
reactor.do(thing2)
reactor.do(thing3)
def thing2():
print 'Doing thing 2'
def thing3():
print 'Doing thing 3: and stopping'
reactor.stop()
reactor.do(thing1)
print 'Running'
reactor.run()
print 'Done!'
At the core of libraries like Twisted, the function in the main loop is not sleep, but an operating system call like select() or poll(), as exposed by a module like the Python select module. I say "like" select, because this is an API that varies a lot between platforms, and almost every GUI toolkit has its own version. Twisted currently provides an abstract interface to 14 different variations on this theme. The common thing that such an API provides is a way to say "Here is a list of events that I'm waiting for. Go to sleep until one of them happens, then wake up and tell me which one of them it was."
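For a feel of what that lowest layer looks like, here is a bare-bones sketch of the select() pattern (plain sockets code illustrating the idea, not Twisted's actual implementation):
import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 8000))
server.listen(5)
watched = [server]

while True:
    # Sleep until at least one of the watched sockets has an event
    readable, _, _ = select.select(watched, [], [])
    for sock in readable:
        if sock is server:
            conn, addr = server.accept()   # "connection accepted" event
            watched.append(conn)
        else:
            data = sock.recv(4096)         # "data arrived" event
            if not data:                   # "connection lost" event
                watched.remove(sock)
                sock.close()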
| Asynchronous Programming in Python Twisted | I'm having trouble developing a reverse proxy in Twisted. It works, but it seems overly complex and convoluted. So much of it feels like voodoo.
Are there any simple, solid examples of asynchronous program structure on the web or in books? A sort of best practices guide? When I complete my program I'd like to be able to still see the structure in some way, not be looking at a bowl of spaghetti.
| [
"Twisted contains a large number of examples. One in particular, the \"evolution of Finger\" tutorial, contains a thorough explanation of how an asynchronous program grows from a very small kernel up to a complex system with lots of moving parts. Another one that might be of interest to you is the tutorial about simply writing servers.\nThe key thing to keep in mind about Twisted, or even other asynchronous networking libraries (such as asyncore, MINA, or ACE), is that your code only gets invoked when something happens. The part that I've heard most often sound like \"voodoo\" is the management of callbacks: for example, Deferred. If you're used to writing code that runs in a straight line, and only calls functions which return immediately with results, the idea of waiting for something to call you back might be confusing. But there's nothing magical, no \"voodoo\" about callbacks. At the lowest level, the reactor is just sitting around and waiting for one of a small number of things to happen:\n\nData arrives on a connection (it will call dataReceived on a Protocol)\nTime has passed (it will call a function registered with callLater).\nA connection has been accepted (it will call buildProtocol on a factory registered with a listenXXX or connectXXX function).\nA connection has been dropped (it will call connectionLost on the appropriate Protocol)\n\nEvery asynchronous program starts by hooking up a few of these events and then kicking off the reactor to wait for them to happen. Of course, events that happen lead to more events that get hooked up or disconnected, and so your program goes on its merry way. Beyond that, there's nothing special about asynchronous program structure that are interesting or special; event handlers and callbacks are just objects, and your code is run in the usual way.\nHere's a simple \"event-driven engine\" that shows you just how simple this process is.\n# Engine\nimport time\nclass SimplestReactor(object):\n def __init__(self):\n self.events = []\n self.stopped = False\n\n def do(self, something):\n self.events.append(something)\n\n def run(self):\n while not self.stopped:\n time.sleep(0.1)\n if self.events:\n thisTurn = self.events.pop(0)\n thisTurn()\n\n def stop(self):\n self.stopped = True\n\nreactor = SimplestReactor()\n\n# Application \ndef thing1():\n print 'Doing thing 1'\n reactor.do(thing2)\n reactor.do(thing3)\n\ndef thing2():\n print 'Doing thing 2'\n\ndef thing3():\n print 'Doing thing 3: and stopping'\n reactor.stop()\n\nreactor.do(thing1)\nprint 'Running'\nreactor.run()\nprint 'Done!'\n\nAt the core of libraries like Twisted, the function in the main loop is not sleep, but an operating system call like select() or poll(), as exposed by a module like the Python select module. I say \"like\" select, because this is an API that varies a lot between platforms, and almost every GUI toolkit has its own version. Twisted currently provides an abstract interface to 14 different variations on this theme. The common thing that such an API provides is provide a way to say \"Here are a list of events that I'm waiting for. Go to sleep until one of them happens, then wake up and tell me which one of them it was.\"\n"
] | [
65
] | [] | [] | [
"asynchronous",
"python",
"twisted"
] | stackoverflow_0000080617_asynchronous_python_twisted.txt |
Q:
What IDE to use for Python?
What IDEs ("GUIs/editors") do others use for Python coding?
A:
Results
Spreadsheet version
Alternatively, in plain text: (also available as a screenshot)
Bracket Matching -. .- Line Numbering
Smart Indent -. | | .- UML Editing / Viewing
Source Control Integration -. | | | | .- Code Folding
Error Markup -. | | | | | | .- Code Templates
Integrated Python Debugging -. | | | | | | | | .- Unit Testing
Multi-Language Support -. | | | | | | | | | | .- GUI Designer (Qt, Eric, etc)
Auto Code Completion -. | | | | | | | | | | | | .- Integrated DB Support
Commercial/Free -. | | | | | | | | | | | | | | .- Refactoring
Cross Platform -. | | | | | | | | | | | | | | | |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
Atom |Y |F |Y |Y*|Y |Y |Y |Y |Y |Y | |Y |Y | | | | |*many plugins
Editra |Y |F |Y |Y | | |Y |Y |Y |Y | |Y | | | | | |
Emacs |Y |F |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y | | | |
Eric Ide |Y |F |Y | |Y |Y | |Y | |Y | |Y | |Y | | | |
Geany |Y |F |Y*|Y | | | |Y |Y |Y | |Y | | | | | |*very limited
Gedit |Y |F |Y¹|Y | | | |Y |Y |Y | | |Y²| | | | |¹with plugin; ²sort of
Idle |Y |F |Y | |Y | | |Y |Y | | | | | | | | |
IntelliJ |Y |CF|Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |
JEdit |Y |F | |Y | | | | |Y |Y | |Y | | | | | |
KDevelop |Y |F |Y*|Y | | |Y |Y |Y |Y | |Y | | | | | |*no type inference
Komodo |Y |CF|Y |Y |Y |Y |Y |Y |Y |Y | |Y |Y |Y | |Y | |
NetBeans* |Y |F |Y |Y |Y | |Y |Y |Y |Y |Y |Y |Y |Y | | |Y |*pre-v7.0
Notepad++ |W |F |Y |Y | |Y*|Y*|Y*|Y |Y | |Y |Y*| | | | |*with plugin
Pfaide |W |C |Y |Y | | | |Y |Y |Y | |Y |Y | | | | |
PIDA |LW|F |Y |Y | | | |Y |Y |Y | |Y | | | | | |VIM based
PTVS              |W |F |Y |Y |Y |Y |Y |Y |Y |Y |  |Y |  |  |Y*|  |Y |*WPF based
PyCharm |Y |CF|Y |Y*|Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |*JavaScript
PyDev (Eclipse) |Y |F |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y | | | |
PyScripter |W |F |Y | |Y |Y | |Y |Y |Y | |Y |Y |Y | | | |
PythonWin |W |F |Y | |Y | | |Y |Y | | |Y | | | | | |
SciTE |Y |F¹| |Y | |Y | |Y |Y |Y | |Y |Y | | | | |¹Mac version is
ScriptDev |W |C |Y |Y |Y |Y | |Y |Y |Y | |Y |Y | | | | | commercial
Spyder |Y |F |Y | |Y |Y | |Y |Y |Y | | | | | | | |
Sublime Text |Y |CF|Y |Y | |Y |Y |Y |Y |Y | |Y |Y |Y*| | | |extensible w/Python,
TextMate |M |F | |Y | | |Y |Y |Y |Y | |Y |Y | | | | | *PythonTestRunner
UliPad |Y |F |Y |Y |Y | | |Y |Y | | | |Y |Y | | | |
Vim |Y |F |Y |Y |Y |Y |Y |Y |Y |Y | |Y |Y |Y | | | |
Visual Studio |W |CF|Y |Y |Y |Y |Y |Y |Y |Y |? |Y |? |? |Y |? |Y |
Visual Studio Code|Y |F |Y |Y |Y |Y |Y |Y |Y |Y |? |Y |? |? |? |? |Y |uses plugins
WingIde |Y |C |Y |Y*|Y |Y |Y |Y |Y |Y | |Y |Y |Y | | | |*support for C
Zeus |W |C | | | | |Y |Y |Y |Y | |Y |Y | | | | |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
Cross Platform -' | | | | | | | | | | | | | | | |
Commercial/Free -' | | | | | | | | | | | | | | '- Refactoring
Auto Code Completion -' | | | | | | | | | | | | '- Integrated DB Support
Multi-Language Support -' | | | | | | | | | | '- GUI Designer (Qt, Eric, etc)
Integrated Python Debugging -' | | | | | | | | '- Unit Testing
Error Markup -' | | | | | | '- Code Templates
Source Control Integration -' | | | | '- Code Folding
Smart Indent -' | | '- UML Editing / Viewing
Bracket Matching -' '- Line Numbering
Acronyms used:
L - Linux
W - Windows
M - Mac
C - Commercial
F - Free
CF - Commercial with Free limited edition
? - To be confirmed
I don't mention basics like syntax highlighting as I expect these by default.
This is just a dry list reflecting your feedback and comments; I am not advocating any of these tools. I will keep updating this list as you keep posting your answers.
PS. Can you help me to add features of the above editors to the list (like auto-complete, debugging, etc.)?
We have a comprehensive wiki page for this question https://wiki.python.org/moin/IntegratedDevelopmentEnvironments
Submit edits to the spreadsheet
| What IDE to use for Python? | What IDEs ("GUIs/editors") do others use for Python coding?
| [
"\nResults\nSpreadsheet version\n\nAlternatively, in plain text: (also available as a a screenshot)\n Bracket Matching -. .- Line Numbering\n Smart Indent -. | | .- UML Editing / Viewing\n Source Control Integration -. | | | | .- Code Folding\n Error Markup -. | | | | | | .- Code Templates\n Integrated Python Debugging -. | | | | | | | | .- Unit Testing\n Multi-Language Support -. | | | | | | | | | | .- GUI Designer (Qt, Eric, etc)\n Auto Code Completion -. | | | | | | | | | | | | .- Integrated DB Support\n Commercial/Free -. | | | | | | | | | | | | | | .- Refactoring\n Cross Platform -. | | | | | | | | | | | | | | | | \n +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+\nAtom |Y |F |Y |Y*|Y |Y |Y |Y |Y |Y | |Y |Y | | | | |*many plugins\nEditra |Y |F |Y |Y | | |Y |Y |Y |Y | |Y | | | | | |\nEmacs |Y |F |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y | | | |\nEric Ide |Y |F |Y | |Y |Y | |Y | |Y | |Y | |Y | | | |\nGeany |Y |F |Y*|Y | | | |Y |Y |Y | |Y | | | | | |*very limited\nGedit |Y |F |Y¹|Y | | | |Y |Y |Y | | |Y²| | | | |¹with plugin; ²sort of\nIdle |Y |F |Y | |Y | | |Y |Y | | | | | | | | |\nIntelliJ |Y |CF|Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |\nJEdit |Y |F | |Y | | | | |Y |Y | |Y | | | | | |\nKDevelop |Y |F |Y*|Y | | |Y |Y |Y |Y | |Y | | | | | |*no type inference\nKomodo |Y |CF|Y |Y |Y |Y |Y |Y |Y |Y | |Y |Y |Y | |Y | |\nNetBeans* |Y |F |Y |Y |Y | |Y |Y |Y |Y |Y |Y |Y |Y | | |Y |*pre-v7.0\nNotepad++ |W |F |Y |Y | |Y*|Y*|Y*|Y |Y | |Y |Y*| | | | |*with plugin\nPfaide |W |C |Y |Y | | | |Y |Y |Y | |Y |Y | | | | |\nPIDA |LW|F |Y |Y | | | |Y |Y |Y | |Y | | | | | |VIM based\nPTVS |W |F |Y |Y |Y |Y |Y |Y |Y |Y | |Y | | |Y*| |Y |*WPF bsed\nPyCharm |Y |CF|Y |Y*|Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |*JavaScript\nPyDev (Eclipse) |Y |F |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y |Y | | | |\nPyScripter |W |F |Y | |Y |Y | |Y |Y |Y | |Y |Y |Y | | | |\nPythonWin |W |F |Y | |Y | | |Y |Y | | |Y | | | | | |\nSciTE |Y |F¹| |Y | |Y | |Y |Y |Y | |Y |Y | | | | |¹Mac version is\nScriptDev |W |C |Y |Y |Y |Y | |Y |Y |Y | |Y |Y | | | | | commercial\nSpyder |Y |F |Y | |Y |Y | |Y |Y |Y | | | | | | | |\nSublime Text |Y |CF|Y |Y | |Y |Y |Y |Y |Y | |Y |Y |Y*| | | |extensible w/Python,\nTextMate |M |F | |Y | | |Y |Y |Y |Y | |Y |Y | | | | | *PythonTestRunner\nUliPad |Y |F |Y |Y |Y | | |Y |Y | | | |Y |Y | | | |\nVim |Y |F |Y |Y |Y |Y |Y |Y |Y |Y | |Y |Y |Y | | | |\nVisual Studio |W |CF|Y |Y |Y |Y |Y |Y |Y |Y |? |Y |? |? |Y |? |Y |\nVisual Studio Code|Y |F |Y |Y |Y |Y |Y |Y |Y |Y |? |Y |? |? |? |? |Y |uses plugins\nWingIde |Y |C |Y |Y*|Y |Y |Y |Y |Y |Y | |Y |Y |Y | | | |*support for C\nZeus |W |C | | | | |Y |Y |Y |Y | |Y |Y | | | | |\n +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+\n Cross Platform -' | | | | | | | | | | | | | | | | \n Commercial/Free -' | | | | | | | | | | | | | | '- Refactoring\n Auto Code Completion -' | | | | | | | | | | | | '- Integrated DB Support\n Multi-Language Support -' | | | | | | | | | | '- GUI Designer (Qt, Eric, etc)\n Integrated Python Debugging -' | | | | | | | | '- Unit Testing\n Error Markup -' | | | | | | '- Code Templates\n Source Control Integration -' | | | | '- Code Folding\n Smart Indent -' | | '- UML Editing / Viewing\n Bracket Matching -' '- Line Numbering\n\n\nAcronyms used:\n L - Linux\n W - Windows\n M - Mac\n C - Commercial\n F - Free\n CF - Commercial with Free limited edition\n ? 
- To be confirmed\n\nI don't mention basics like syntax highlighting as I expect these by default.\n\nThis is a just dry list reflecting your feedback and comments, I am not advocating any of these tools. I will keep updating this list as you keep posting your answers.\nPS. Can you help me to add features of the above editors to the list (like auto-complete, debugging, etc.)?\nWe have a comprehensive wiki page for this question https://wiki.python.org/moin/IntegratedDevelopmentEnvironments\nSubmit edits to the spreadsheet\n"
] | [
1293
] | [] | [] | [
"editor",
"ide",
"python"
] | stackoverflow_0000081584_editor_ide_python.txt |
Q:
Testing GUI code: should I use a mocking library?
Recently I've been experimenting with TDD while developing a GUI application in Python. I find it very reassuring to have tests that verify the functionality of my code, but it's been tricky to follow some of the recommended practices of TDD. Namely, writing tests first has been hard. And I'm finding it difficult to make my tests readable (due to extensive use of a mocking library).
I chose a mocking library called mocker. I use it a lot since much of the code I'm testing makes calls to (a) other methods in my application that depend on system state or (b) ObjC/Cocoa objects that cannot exist without an event loop, etc.
Anyway, I've got a lot of tests that look like this:
def test_current_window_controller():
def test(config):
ac = AppController()
m = Mocker()
ac.iter_window_controllers = iwc = m.replace(ac.iter_window_controllers)
expect(iwc()).result(iter(config))
with m:
result = ac.current_window_controller()
assert result == (config[0] if config else None)
yield test, []
yield test, [0]
yield test, [1, 0]
Notice that this is actually three tests; all use the same parameterized test function. Here's the code that is being tested:
def current_window_controller(self):
try:
# iter_window_controllers() iterates in z-order starting
# with the controller of the top-most window
# assumption: the top-most window is the "current" one
wc = self.iter_window_controllers().next()
except StopIteration:
return None
return wc
One of the things I've noticed with using mocker is that it's easier to write the application code first and then go back and write the tests second, since most of the time I'm mocking many method calls and the syntax to write the mocked calls is much more verbose (thus harder to write) than the application code. It's easier to write the app code and then model the test code off of that.
I find that with this testing method (and a bit of discipline) I can easily write code with 100% test coverage.
I'm wondering if these tests are good tests? Will I regret doing it this way down the road when I finally discover the secret to writing good tests?
Am I violating the core principles of TDD so much that my testing is in vain?
A:
If you are writing your tests after you've written your code and making them pass, you are not doing TDD (nor are you getting any benefits of Test-First or Test-Driven development.. check out SO questions for definitive books on TDD)
One of the things I've noticed with
using mocker is that it's easier to
write the application code first and
then go back and write the tests
second, since most of the time I'm
mocking many method calls and the
syntax to write the mocked calls is
much more verbose (thus harder to
write) than the application code. It's
easier to write the app code and then
model the test code off of that.
Of course, it's easier because you are just testing that the sky is orange after you made it orange by painting it with a specific kind of brush.
This is retrofitting tests (for self-assurance). Mocks are good but you should know how and when to use them; like the saying goes, 'When you have a hammer, everything looks like a nail.' It's also easy to write a whole load of unreadable and not-as-helpful-as-can-be tests. The time spent understanding what the test is about is time lost that can be used to fix broken ones.
And the point is:
Read Mocks Aren't Stubs by Martin Fowler if you haven't already. Google out some documented instances of good Model-View-Presenter patterned GUIs (fake/mock out the UIs if necessary).
Study your options and choose wisely. I'll play the guy with the halo on your left shoulder in white saying 'Don't do it.' Read this question as to my reasons - St. Justin is on your right shoulder. I believe he has also something to say:)
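As a tiny illustration of that Model-View-Presenter idea (the names here are invented for the example): the presenter talks to the view through a narrow interface, so a test can hand it a trivial fake instead of a real Cocoa object, and no mocking library is needed at all.
class FakeView(object):
    def __init__(self):
        self.shown = None
    def show_total(self, value):
        self.shown = value

class TotalPresenter(object):
    def __init__(self, view, numbers):
        self.view = view
        self.numbers = numbers
    def refresh(self):
        # push derived state to whatever view we were given
        self.view.show_total(sum(self.numbers))

def test_refresh_shows_total():
    view = FakeView()
    TotalPresenter(view, [1, 2, 3]).refresh()
    assert view.shown == 6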
| Testing GUI code: should I use a mocking library? | Recently I've been experimenting with TDD while developing a GUI application in Python. I find it very reassuring to have tests that verify the functionality of my code, but it's been tricky to follow some of the recommened practices of TDD. Namely, writing tests first has been hard. And I'm finding it difficult to make my tests readable (due to extensive use of a mocking library).
I chose a mocking library called mocker. I use it a lot since much of the code I'm testing makes calls to (a) other methods in my application that depend on system state or (b) ObjC/Cocoa objects that cannot exist without an event loop, etc.
Anyway, I've got a lot of tests that look like this:
def test_current_window_controller():
def test(config):
ac = AppController()
m = Mocker()
ac.iter_window_controllers = iwc = m.replace(ac.iter_window_controllers)
expect(iwc()).result(iter(config))
with m:
result = ac.current_window_controller()
assert result == (config[0] if config else None)
yield test, []
yield test, [0]
yield test, [1, 0]
Notice that this is actually three tests; all use the same parameterized test function. Here's the code that is being tested:
def current_window_controller(self):
try:
# iter_window_controllers() iterates in z-order starting
# with the controller of the top-most window
# assumption: the top-most window is the "current" one
wc = self.iter_window_controllers().next()
except StopIteration:
return None
return wc
One of the things I've noticed with using mocker is that it's easier to write the application code first and then go back and write the tests second, since most of the time I'm mocking many method calls and the syntax to write the mocked calls is much more verbose (thus harder to write) than the application code. It's easier to write the app code and then model the test code off of that.
I find that with this testing method (and a bit of discipline) I can easily write code with 100% test coverage.
I'm wondering if these tests are good tests? Will I regret doing it this way down the road when I finally discover the secret to writing good tests?
Am I violating the core principles of TDD so much that my testing is in vain?
| [
"If you are writing your tests after you've written your code and making them pass, you are not doing TDD (nor are you getting any benefits of Test-First or Test-Driven development.. check out SO questions for definitive books on TDD)\n\nOne of the things I've noticed with\n using mocker is that it's easier to\n write the application code first and\n then go back and write the tests\n second, since most of the time I'm\n mocking many method calls and the\n syntax to write the mocked calls is\n much more verbose (thus harder to\n write) than the application code. It's\n easier to write the app code and then\n model the test code off of that.\n\nOf course, its easier because you are just testing that the sky is orange after you made it orange by painting it with a specific kind of brush. \nThis is retrofitting tests (for self-assurance). Mocks are good but you should know how and when to use them - Like the saying goes 'When you have a hammer everything looks like a nail' It's also easy to write a whole load of unreadable and not-as-helpful-as-can-be tests. The time spent understanding what the test is about is time lost that can be used to fix broken ones. \nAnd the point is: \n\nRead Mocks aren't stubs - Martin Fowler if you haven't already. Google out some documented instances of good ModelViewPresenter patterned GUIs (Fake/Mock out the UIs if necessary). \nStudy your options and choose wisely. I'll play the guy with the halo on your left shoulder in white saying 'Don't do it.' Read this question as to my reasons - St. Justin is on your right shoulder. I believe he has also something to say:) \n\n"
] | [
8
] | [
"Unit tests are really useful when you refactor your code (ie. completely rewrite or move a module). As long as you have unit tests before you do the big changes, you'll have confidence that you havent forgotten to move or include something when you finish.\n",
"Please remember that TDD is not a panaceum. It's hard, it's supposed to be hard, and it's especially hard to write mocking tests \"in advance\".\nSo I would say - do what works for you. Even it's not \"certified TDD\". I do basically the same thing.\nYou may want to provide your own API for GUI that would sit between controller code and GUI library code. That could be easier to mock, or you can even add some testing hooks to it.\nLast but not least, your code doesn't look too unreadable to me. Code using mocks is generally harder to understand. Fortunately in Python mocking is much easier and cleaner than i n other languages.\n"
] | [
-2,
-3
] | [
"python",
"tdd",
"unit_testing",
"user_interface"
] | stackoverflow_0000079454_python_tdd_unit_testing_user_interface.txt |
Q:
HTML parser in Python
Using the Python Documentation I found the HTML parser, but I have no idea which library to import to use it. How do I find this out (bearing in mind it doesn't say on the page)?
A:
You probably really want BeautifulSoup, check the link for an example.
But in any case
>>> import HTMLParser
>>> h = HTMLParser.HTMLParser()
>>> h.feed('<html></html>')
>>> h.get_starttag_text()
'<html>'
>>> h.close()
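The more typical pattern (just a sketch) is to subclass HTMLParser and override the handler methods you care about:
from HTMLParser import HTMLParser

class LinkExtractor(HTMLParser):
    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs
        if tag == 'a':
            print dict(attrs).get('href')

p = LinkExtractor()
p.feed('<p><a href="http://example.com">a link</a></p>')
p.close()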
A:
Try:
import HTMLParser
In Python 3.0, the HTMLParser module has been renamed to html.parser
you can check about this here
Python 3.0
import html.parser
Python 2.2 and above
import HTMLParser
A:
I would recommend using Beautiful Soup module instead and it has good documentation.
A:
You should also look at html5lib for Python as it tries to parse HTML in a way that very much resembles what web browsers do, especially when dealing with invalid HTML (which is more than 90% of today's web).
A:
You may be interested in lxml. It is a separate package and has C components, but is the fastest. It has also very nice API, allowing you to easily list links in HTML documents, or list forms, sanitize HTML, and more. It also has capabilities to parse not well-formed HTML (it's configurable).
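For example, a short sketch of listing links with lxml's HTML support:
from lxml import html

doc = html.fromstring('<p><a href="/a">one</a> <a href="/b">two</a></p>')
for href in doc.xpath('//a/@href'):
    print href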
A:
I don't recommend BeautifulSoup if you want speed. lxml is much, much faster, and you can fall back on lxml's BS soupparser if the default parser doesn't work.
A:
There's a link to an example at the bottom of http://docs.python.org/2/library/htmlparser.html; it just doesn't work with Python 3. It has to be Python 2, as it says at the top.
A:
For real world HTML processing I'd recommend BeautifulSoup. It is great and takes away much of the pain. Installation is easy.
| HTML parser in Python | Using the Python Documentation I found the HTML parser but I have no idea which library to import to use it, how do I find this out (bearing in mind it doesn't say on the page).
| [
"You probably really want BeautifulSoup, check the link for an example. \nBut in any case\n>>> import HTMLParser\n>>> h = HTMLParser.HTMLParser()\n>>> h.feed('<html></html>')\n>>> h.get_starttag_text()\n'<html>'\n>>> h.close()\n\n",
"Try:\nimport HTMLParser\n\nIn Python 3.0, the HTMLParser module has been renamed to html.parser\nyou can check about this here\nPython 3.0\nimport html.parser\n\nPython 2.2 and above\nimport HTMLParser\n\n",
"I would recommend using Beautiful Soup module instead and it has good documentation.\n",
"You should also look at html5lib for Python as it tries to parse HTML in a way that very much resembles what web browsers do, especially when dealing with invalid HTML (which is more than 90% of today's web).\n",
"You may be interested in lxml. It is a separate package and has C components, but is the fastest. It has also very nice API, allowing you to easily list links in HTML documents, or list forms, sanitize HTML, and more. It also has capabilities to parse not well-formed HTML (it's configurable).\n",
"I don't recommend BeautifulSoup if you want speed. lxml is much, much faster, and you can fall back in lxml's BS soupparser if the default parser doesn't work.\n",
"There's a link to an example on the bottom of (http://docs.python.org/2/library/htmlparser.html) , it just doesn't work with the original python or python3. It has to be python2 as it says on the top.\n",
"For real world HTML processing I'd recommend BeautifulSoup. It is great and takes away much of the pain. Installation is easy.\n"
] | [
24,
20,
4,
4,
4,
3,
1,
1
] | [] | [] | [
"import",
"python"
] | stackoverflow_0000071151_import_python.txt |
Q:
Best server-side framework for heavy RIA based application?
What does the collective believe to be the best platform to use as a backend to AJAX / Flex / Silverlight applications, and why?
We are undergoing a technology review and I would like to know some other opinions.
Is It Java, Grails, Python, Rails, ColdFusion, something else?
A:
There is no definitive answer. However, I would choose a light solution, like Python or Rails, over Java or ColdFusion.
You may want to investigate C# ASP.NET + Silverlight combo. Microsoft made it highly integrated, which is double-edged sword. But in many cases this helps.
You may also want to review existing solutions / applications / startups. Don't ditch PHP up front, there are many existing components for it. And don't overestimate the impact of server-side technology choice on success.
| Best server-side framework for heavy RIA based application? | What do the collective beleive to be the best platform to use as a backend to AJAX / Flex / Silverlight applications and why?
We are undergoing a technology review and I would like to know some other opinions.
Is It Java, Grails, Python, Rails, ColdFusion, something else?
| [
"There is no definitive answer. However, I would choose a light solution, like Python or Rails, over Java or ColdFusion.\nYou may want to investigate C# ASP.NET + Silverlight combo. Microsoft made it highly integrated, which is double-edged sword. But in many cases this helps.\nYou may also want to review existing solutions / applications / startups. Don't ditch PHP up front, there are many existing components for it. And don't overestimate the impact of server-side technology choice on success.\n"
] | [
2
] | [] | [] | [
"java",
"python",
"ria"
] | stackoverflow_0000082599_java_python_ria.txt |
Q:
Classes in Python
In Python is there any way to make a class, then make a second version of that class with identical data, but which can be changed, then reverted to be the same as the data in the original class?
So I would make a class with the numbers 1 to 5 as the data in it, then make a second class with the same names for sections (or very similar). Mess around with the numbers in the second class, then with one function reset them to be the same as in the first class.
The only alternative I've found is to make one aggravatingly long class with too many separate pieces of data in it to be readily usable.
A:
A class is a template, it allows you to create a blueprint, you can then have multiple instances of a class each with different numbers, like so.
class dog(object):
    def __init__(self, height, width, length):
self.height = height
self.width = width
self.length = length
def revert(self):
self.height = 1
self.width = 2
self.length = 3
dog1 = dog(5, 6, 7)
dog2 = dog(2, 3, 4)
dog1.revert()
A:
Here's another answer kind of like pobk's; it uses the instance's dict to do the work of saving/resetting variables, but doesn't require you to specify the names of them in your code. You can call save() at any time to save the state of the instance and reset() to reset to that state.
class MyReset:
def __init__(self, x, y):
self.x = x
self.y = y
self.save()
def save(self):
self.saved = self.__dict__.copy()
def reset(self):
self.__dict__ = self.saved.copy()
a = MyReset(20, 30)
a.x = 50
print a.x
a.reset()
print a.x
Why do you want to do this? It might not be the best/only way.
A:
Classes don't have values. Objects do. Is what you want basically a class that can reset an instance (object) to a set of default values?
How about just providing a reset method that resets the properties of your object to whatever the defaults are?
I think you should simplify your question, or tell us what you really want to do. It's not at all clear.
A:
I think you are confused. You should re-check the meaning of "class" and "instance".
I think you are trying to first declare an instance of a certain class, and then declare an instance of another class, use the data from the first one, and then find a way to convert the data in the second instance and use it on the first instance...
I recommend that you use operator overloading to assign the data.
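A guess at what that could look like: keep a pristine instance around and copy its data back when needed (the class and names here are invented for the example).
class Numbers(object):
    def __init__(self, values):
        self.values = list(values)

    def copy_from(self, other):
        # take a fresh copy so the two instances stay independent
        self.values = list(other.values)

original = Numbers([1, 2, 3, 4, 5])
working = Numbers([1, 2, 3, 4, 5])
working.values[0] = 99
working.copy_from(original)   # working's data now matches original again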
A:
class ABC(object):
    numbers = [0, 1, 2, 3]

class DEF(ABC):
    def __init__(self):
        # copy the class-level defaults so they can be changed safely
        self.new_numbers = list(ABC.numbers)

    def setnums(self, numbers):
        self.new_numbers = numbers

    def getnums(self):
        return self.new_numbers

    def reset(self):
        self.__init__()
A:
Just FYI, here's an alternate implementation... Probably violates about 15 million pythonic rules, but I publish it per information/observation:
class Resettable(object):
base_dict = {}
def reset(self):
        self.__dict__ = self.__class__.base_dict.copy()
def __init__(self):
self.__dict__ = self.__class__.base_dict.copy()
class SomeClass(Resettable):
base_dict = {
'number_one': 1,
'number_two': 2,
'number_three': 3,
'number_four': 4,
'number_five': 5,
}
def __init__(self):
Resettable.__init__(self)
p = SomeClass()
p.number_one = 100
print p.number_one
p.reset()
print p.number_one
| Classes in Python | In Python is there any way to make a class, then make a second version of that class with identical dat,a but which can be changed, then reverted to be the same as the data in the original class?
So I would make a class with the numbers 1 to 5 as the data in it, then make a second class with the same names for sections (or very similar). Mess around with the numbers in the second class then with one function then reset them to be the same as in the first class.
The only alternative I've found is to make one aggravatingly long class with too many separate pieces of data in it to be readily usable.
| [
"A class is a template, it allows you to create a blueprint, you can then have multiple instances of a class each with different numbers, like so.\nclass dog(object):\n def __init__(self, height, width, lenght):\n self.height = height\n self.width = width\n self.length = length\n\n def revert(self):\n self.height = 1\n self.width = 2\n self.length = 3\n\ndog1 = dog(5, 6, 7)\ndog2 = dog(2, 3, 4)\n\ndog1.revert()\n\n",
"Here's another answer kind of like pobk's; it uses the instance's dict to do the work of saving/resetting variables, but doesn't require you to specify the names of them in your code. You can call save() at any time to save the state of the instance and reset() to reset to that state.\nclass MyReset:\n def __init__(self, x, y):\n self.x = x\n self.y = y\n self.save()\n\n def save(self):\n self.saved = self.__dict__.copy()\n\n def reset(self):\n self.__dict__ = self.saved.copy()\n\na = MyReset(20, 30)\na.x = 50\nprint a.x\na.reset()\nprint a.x\n\nWhy do you want to do this? It might not be the best/only way.\n",
"Classes don't have values. Objects do. Is what you want basically a class that can reset an instance (object) to a set of default values? \nHow about just providing a reset method, that resets the properties of your object to whatever is the default?\nI think you should simplify your question, or tell us what you really want to do. It's not at all clear.\n",
"I think you are confused. You should re-check the meaning of \"class\" and \"instance\".\nI think you are trying to first declare a Instance of a certain Class, and then declare a instance of other Class, use the data from the first one, and then find a way to convert the data in the second instance and use it on the first instance...\nI recommend that you use operator overloading to assign the data.\n",
"class ABC(self):\n numbers = [0,1,2,3]\n\nclass DEF(ABC):\n def __init__(self):\n self.new_numbers = super(ABC,self).numbers\n\n def setnums(self, numbers):\n self.new_numbers = numbers\n\n def getnums(self):\n return self.new_numbers\n\n def reset(self):\n __init__()\n\n",
"Just FYI, here's an alternate implementation... Probably violates about 15 million pythonic rules, but I publish it per information/observation:\nclass Resettable(object):\n base_dict = {}\n def reset(self):\n self.__dict__ = self.__class__.base_dict\n\n def __init__(self):\n self.__dict__ = self.__class__.base_dict.copy()\n\nclass SomeClass(Resettable):\n base_dict = {\n 'number_one': 1,\n 'number_two': 2,\n 'number_three': 3,\n 'number_four': 4,\n 'number_five': 5,\n }\n def __init__(self):\n Resettable.__init__(self)\n\n\np = SomeClass()\np.number_one = 100\nprint p.number_one\np.reset()\nprint p.number_one\n\n"
] | [
5,
2,
1,
1,
1,
1
] | [] | [] | [
"class",
"python"
] | stackoverflow_0000064141_class_python.txt |
Q:
PythonWin's python interactive shell calling constructors twice?
While answering Static class variables in Python
I noticed that the PythonWin PyWin32 build 209.2 interpreter seems to evaluate the constructor twice?
PythonWin 2.5 (r25:51908, Mar 9 2007, 17:40:28) [MSC v.1310 32 bit (Intel)] on win32.
Portions Copyright 1994-2006 Mark Hammond - see 'Help/About PythonWin' for further copyright information.
>>> class X:
... l = []
... def __init__(self):
... self.__class__.l.append(1)
...
>>> X().l
[1, 1]
>>>
while the python interpreter does the right thing
C:\>python
ActivePython 2.5.0.0 (ActiveState Software Inc.) based on
Python 2.5 (r25:51908, Mar 9 2007, 17:40:28) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> class X:
... l = []
... def __init__(self):
... self.__class__.l.append(1)
...
>>> X().l
[1]
>>>
A:
My guess is as follows. The PythonWin editor offers autocomplete for an object, i.e. when you type myobject. it offers a little popup of all the available method names. So I think when you type X(). it's creating an instance of X in the background and doing a dir or similar to find out the attributes of the object.
So the constructor is only being run once for each object but to give you the interactivity it's creating objects silently in the background without telling you about it.
A:
Dave Webb is correct, and you can see this by adding a print statement:
>>> class X:
... l = []
... def __init__(self):
... print 'inited'
... self.__class__.l.append(1)
...
Then as soon as you type the period in X(). it prints inited prior to offering you the completion popup.
A:
Two small additional points.
First, self.__class__.l.append(1) isn't really sensible.
Just say self.l.append(1). Python searches the instance before it searches the class for the reference.
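A quick demonstration of that lookup order:
class C(object):
    l = []

c = C()
c.l.append(1)      # no instance attribute, so the class attribute is found
print C.l          # [1] -- the shared class-level list was mutated
c.l = ['mine']     # assignment creates an instance attribute that shadows it
print C.l          # still [1]
print c.l          # ['mine']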
More importantly, class-level variables are rarely useful. Class-level constants are sometimes sensible, but even then, they're hard to justify.
In C++ and Java, class-level ('static') variables seem handy, but don't do much of value. They're hard to teach to n00bz -- often wasting lots of classroom time on minutia -- and they aren't very practical. If you want to know all instances of an X that was created, it's probably better to create an XFactory class that doesn't rely on class variables.
class XFactory( object ):
def __init__( self ):
self.listOfX= []
def makeX( self, *args, **kw ):
newX= X(*args,**kw)
self.listOfX.append(newX)
return newX
No class-level variable anomalies. And, it doesn't conflate the X's with the collection of X's. In the long run, I find it confusing when a class is both some thing and also some collection of things.
Simpler is better than Complex.
| PythonWin's python interactive shell calling constructors twice? | While answering Static class variables in Python
I noticed that PythonWin PyWin32 build 209.2 interpreter seems to evaluate twice?
PythonWin 2.5 (r25:51908, Mar 9 2007, 17:40:28) [MSC v.1310 32 bit (Intel)] on win32.
Portions Copyright 1994-2006 Mark Hammond - see 'Help/About PythonWin' for further copyright information.
>>> class X:
... l = []
... def __init__(self):
... self.__class__.l.append(1)
...
>>> X().l
[1, 1]
>>>
while the python interpreter does the right thing
C:\>python
ActivePython 2.5.0.0 (ActiveState Software Inc.) based on
Python 2.5 (r25:51908, Mar 9 2007, 17:40:28) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> class X:
... l = []
... def __init__(self):
... self.__class__.l.append(1)
...
>>> X().l
[1]
>>>
| [
"My guess is as follows. The PythonWin editor offers autocomplete for an object, i.e. when you type myobject. it offers a little popup of all the availble method names. So I think when you type X(). it's creating an instance of X in the background and doing a dir or similar to find out the attributes of the object.\nSo the constructor is only being run once for each object but to give you the interactivity it's creating objects silently in the background without telling you about it.\n",
"Dave Webb is correct, and you can see this by adding a print statement:\n>>> class X:\n... l = []\n... def __init__(self):\n... print 'inited'\n... self.__class__.l.append(1)\n... \n\nThen as soon as you type the period in X(). it prints inited prior to offering you the completion popup.\n",
"Two small additional points.\nFirst, self.__class__.l.append(1) isn't really sensible.\nJust say self.l.append(1). Python searches the instance before it searches the class for the reference.\nMore importantly, class-level variables are rarely useful. Class-level constants are sometimes sensible, but even then, they're hard to justify. \nIn C++ and Java, class-level ('static') variables seem handy, but don't do much of value. They're hard to teach to n00bz -- often wasting lots of classroom time on minutia -- and they aren't very practical. If you want to know all instances of an X that was created, it's probably better to create an XFactory class that doesn't rely on class variables.\nclass XFactory( object ):\n def __init__( self ):\n self.listOfX= []\n def makeX( self, *args, **kw ):\n newX= X(*args,**kw)\n self.listOfX.append(newX)\n return newX\n\nNo class-level variable anomalies. And, it doesn't conflate the X's with the collection of X's. In the long run, I find it confusing when a class is both some thing and also some collection of things.\nSimpler is better than Complex.\n"
] | [
3,
2,
1
] | [] | [] | [
"activestate",
"python",
"python_2.x"
] | stackoverflow_0000081191_activestate_python_python_2.x.txt |
Q:
Why isn't the 'len' function inherited by dictionaries and lists in Python
example:
a_list = [1, 2, 3]
a_list.len() # doesn't work
len(a_list) # works
Python being (very) object-oriented, I don't understand why the 'len' function isn't inherited by the object.
Plus, I keep trying the wrong solution, since it seems like the logical one to me.
A:
Guido's explanation is here:
First of all, I chose len(x) over x.len() for HCI reasons (def __len__() came much later). There are two intertwined reasons actually, both HCI:
(a) For some operations, prefix notation just reads better than postfix — prefix (and infix!) operations have a long tradition in mathematics which likes notations where the visuals help the mathematician thinking about a problem. Compare the ease with which we rewrite a formula like x*(a+b) into x*a + x*b to the clumsiness of doing the same thing using a raw OO notation.
(b) When I read code that says len(x) I know that it is asking for the length of something. This tells me two things: the result is an integer, and the argument is some kind of container. To the contrary, when I read x.len(), I have to already know that x is some kind of container implementing an interface or inheriting from a class that has a standard len(). Witness the confusion we occasionally have when a class that is not implementing a mapping has a get() or keys() method, or something that isn’t a file has a write() method.
Saying the same thing in another way, I see ‘len‘ as a built-in operation. I’d hate to lose that. /…/
A:
The short answer: 1) backwards compatibility and 2) there's not enough of a difference for it to really matter. For a more detailed explanation, read on.
The idiomatic Python approach to such operations is special methods which aren't intended to be called directly. For example, to make x + y work for your own class, you write a __add__ method. To make sure that int(spam) properly converts your custom class, write a __int__ method. To make sure that len(foo) does something sensible, write a __len__ method.
This is how things have always been with Python, and I think it makes a lot of sense for some things. In particular, this seems like a sensible way to implement operator overloading. As for the rest, different languages disagree; in Ruby you'd convert something to an integer by calling spam.to_i directly instead of saying int(spam).
You're right that Python is an extremely object-oriented language and that having to call an external function on an object to get its length seems odd. On the other hand, len(silly_walks) isn't any more onerous than silly_walks.len(), and Guido has said that he actually prefers it (http://mail.python.org/pipermail/python-3000/2006-November/004643.html).
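To make that concrete, here is a small illustrative sketch (Spam is a made-up class, not from the question):
class Spam(object):
    def __init__(self, amount):
        self.amount = amount
    def __add__(self, other):    # makes spam1 + spam2 work
        return Spam(self.amount + other.amount)
    def __int__(self):           # makes int(spam) work
        return self.amount

print int(Spam(3) + Spam(4))     # prints 7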
A:
It just isn't.
You can, however, do:
>>> [1,2,3].__len__()
3
Adding a __len__() method to a class is what makes the len() magic work.
A:
This way fits in better with the rest of the language. The convention in python is that you add __foo__ special methods to objects to make them have certain capabilities (rather than e.g. deriving from a specific base class). For example, an object is
callable if it has a __call__ method
iterable if it has an __iter__ method,
supports access with [] if it has __getitem__ and __setitem__.
...
One of these special methods is __len__ which makes it have a length accessible with len().
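For instance, a toy container sketch (hypothetical, just to show the convention in one place):
class Bag(object):
    def __init__(self, items):
        self.items = list(items)
    def __len__(self):            # len(bag)
        return len(self.items)
    def __getitem__(self, index): # bag[index]
        return self.items[index]
    def __iter__(self):           # for x in bag
        return iter(self.items)

bag = Bag([1, 2, 3])
print len(bag), bag[0], list(bag)   # 3 1 [1, 2, 3]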
A:
Maybe you're looking for __len__. If that method exists, then len(a) calls it:
>>> class Spam:
... def __len__(self): return 3
...
>>> s = Spam()
>>> len(s)
3
A:
Well, there actually is a length method, it is just hidden:
>>> a_list = [1, 2, 3]
>>> a_list.__len__()
3
The len() built-in function appears to be simply a wrapper for a call to the object's hidden __len__() method.
Not sure why they made the decision to implement things this way though.
A:
There is some good info at the link below on why certain things are functions and others are methods. It does indeed cause some inconsistencies in the language.
http://mail.python.org/pipermail/python-dev/2008-January/076612.html
| Why isn't the 'len' function inherited by dictionaries and lists in Python | example:
a_list = [1, 2, 3]
a_list.len() # doesn't work
len(a_list) # works
Python being (very) object-oriented, I don't understand why the 'len' function isn't inherited by the object.
Plus, I keep trying the wrong solution, since it seems like the logical one to me.
| [
"Guido's explanation is here:\n\nFirst of all, I chose len(x) over x.len() for HCI reasons (def __len__() came much later). There are two intertwined reasons actually, both HCI:\n(a) For some operations, prefix notation just reads better than postfix — prefix (and infix!) operations have a long tradition in mathematics which likes notations where the visuals help the mathematician thinking about a problem. Compare the easy with which we rewrite a formula like x*(a+b) into x*a + x*b to the clumsiness of doing the same thing using a raw OO notation.\n(b) When I read code that says len(x) I know that it is asking for the length of something. This tells me two things: the result is an integer, and the argument is some kind of container. To the contrary, when I read x.len(), I have to already know that x is some kind of container implementing an interface or inheriting from a class that has a standard len(). Witness the confusion we occasionally have when a class that is not implementing a mapping has a get() or keys() method, or something that isn’t a file has a write() method.\nSaying the same thing in another way, I see ‘len‘ as a built-in operation. I’d hate to lose that. /…/\n\n",
"The short answer: 1) backwards compatibility and 2) there's not enough of a difference for it to really matter. For a more detailed explanation, read on.\nThe idiomatic Python approach to such operations is special methods which aren't intended to be called directly. For example, to make x + y work for your own class, you write a __add__ method. To make sure that int(spam) properly converts your custom class, write a __int__ method. To make sure that len(foo) does something sensible, write a __len__ method.\nThis is how things have always been with Python, and I think it makes a lot of sense for some things. In particular, this seems like a sensible way to implement operator overloading. As for the rest, different languages disagree; in Ruby you'd convert something to an integer by calling spam.to_i directly instead of saying int(spam).\nYou're right that Python is an extremely object-oriented language and that having to call an external function on an object to get its length seems odd. On the other hand, len(silly_walks) isn't any more onerous than silly_walks.len(), and Guido has said that he actually prefers it (http://mail.python.org/pipermail/python-3000/2006-November/004643.html).\n",
"It just isn't.\nYou can, however, do:\n>>> [1,2,3].__len__()\n\n3\n\nAdding a __len__() method to a class is what makes the len() magic work.\n",
"This way fits in better with the rest of the language. The convention in python is that you add __foo__ special methods to objects to make them have certain capabilities (rather than e.g. deriving from a specific base class). For example, an object is \n\ncallable if it has a __call__ method \niterable if it has an __iter__ method, \nsupports access with [] if it has __getitem__ and __setitem__. \n...\n\nOne of these special methods is __len__ which makes it have a length accessible with len().\n",
"Maybe you're looking for __len__. If that method exists, then len(a) calls it:\n>>> class Spam:\n... def __len__(self): return 3\n... \n>>> s = Spam()\n>>> len(s)\n3\n\n",
"Well, there actually is a length method, it is just hidden:\n>>> a_list = [1, 2, 3]\n>>> a_list.__len__()\n3\n\nThe len() built-in function appears to be simply a wrapper for a call to the hidden len() method of the object.\nNot sure why they made the decision to implement things this way though.\n",
"there is some good info below on why certain things are functions and other are methods. It does indeed cause some inconsistencies in the language.\nhttp://mail.python.org/pipermail/python-dev/2008-January/076612.html\n"
] | [
45,
13,
11,
6,
2,
2,
2
] | [] | [] | [
"python"
] | stackoverflow_0000083983_python.txt |
Q:
How can I access App Engine through a Corporate proxy?
I have a corporate proxy that supports HTTPS but not HTTP CONNECT (even after authentication). It just gives 403 Forbidden in response to anything but HTTP or HTTPS URLs. It uses HTTP authentication, not NTLM. It is well documented that urllib2 does not work with HTTPS through a proxy. App Engine tries to connect to an HTTPS URL using urllib2 to update the app.
On *nix, urllib2 expects proxies to be set using environment variables.
export http_proxy="http://mycorporateproxy:8080"
export https_proxy="https://mycorporateproxy:8080"
This is cited as a workaround: http://code.activestate.com/recipes/456195/. Also see http://code.google.com/p/googleappengine/issues/detail?id=126.
None of these fixes have worked for me. They seem to rely on the proxy server supporting HTTP CONNECT. Does anyone have any other workarounds? I'm sure I am not the only one behind a restrictive corporate proxy.
A:
Do you mean it uses HTTP basic-auth before allowing proxying, and does it then allow 'connect'?
Then you should be able to tunnel over it using http-tunnel or proxytunnel
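For the plain basic-auth case it may also be worth trying urllib2's own proxy support with the credentials embedded in the proxy URL -- a sketch, where user, pass and the host are placeholders, and note the question's caveat that urllib2's HTTPS-over-proxy handling is unreliable:
import urllib2

proxy = urllib2.ProxyHandler({
    'http': 'http://user:pass@mycorporateproxy:8080',
    'https': 'http://user:pass@mycorporateproxy:8080',
})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)   # all urllib2.urlopen() calls now use the proxy
print urllib2.urlopen('http://www.python.org/').read(100)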
| How can I access App Engine through a Corporate proxy? | I have a corporate proxy that supports HTTPS but not HTTP CONNECT (even after authentication). It just gives 403 Forbidden in response to anything but HTTP or HTTPS URLs. It uses HTTP authentication, not NTLM. It is well documented that urllib2 does not work with HTTPS through a proxy. App Engine tries to connect to an HTTPS URL using urllib2 to update the app.
On *nix, urllib2 expects proxies to be set using environment variables.
export http_proxy="http://mycorporateproxy:8080"
export https_proxy="https://mycorporateproxy:8080"
This is cited as a workaround: http://code.activestate.com/recipes/456195/. Also see http://code.google.com/p/googleappengine/issues/detail?id=126.
None of these fixes have worked for me. They seem to rely on the proxy server supporting HTTP CONNECT. Does anyone have any other workarounds? I'm sure I am not the only one behind a restrictive corporate proxy.
| [
"Do you mean it uses http basic-auth before allowing proxying, and does it then allow 'connect'.\nThen you should be able to tunnel over it using http-tunnel or proxytunnel\n"
] | [
1
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0000064362_google_app_engine_python.txt |
Q:
How can I get Emacs' key bindings in Python's IDLE?
I use Emacs primarily for coding Python but sometimes I use IDLE. Is there a way to change the key bindings easily in IDLE to match Emacs?
A:
IDLE provides Emacs keybindings without having to install other software.
Open up the menu item Options -> Configure IDLE...
Go to Keys tab
In the drop-down menu on the right side of the dialog, change the selection to "IDLE Classic Unix"
It's not the true emacs key bindings but you get the basics like movement, saving/opening, ...
A:
There's a program for Windows called XKeymacs that allows you to specify emacs keybindings for different programs. It should work with IDLE.
http://www.cam.hi-ho.ne.jp/oishi/indexen.html
-Mark
A:
The 'readline' module supposedly provides Emacs-like key bindings and even functionality. However, it is available on Unix but not on Windows. Therefore, this might be a viable solution if you are not using Windows.
import readline
Since I am running IDLE on Windows, it is unfortunately not an option for me.
| How can I get Emacs' key bindings in Python's IDLE? | I use Emacs primarily for coding Python but sometimes I use IDLE. Is there a way to change the key bindings easily in IDLE to match Emacs?
| [
"IDLE provides Emacs keybindings without having to install other software. \n\nOpen up the menu item Options -> Configure IDLE...\nGo to Keys tab\nIn the drop down menu on the right\nside of the dialog change the select\nto \"IDLE Classic Unix\"\n\nIt's not the true emacs key bindings but you get the basics like movement, saving/opening, ...\n",
"There's a program for Windows called XKeymacs that allows you to specify emacs keybindings for different programs. It should work with IDLE.\nhttp://www.cam.hi-ho.ne.jp/oishi/indexen.html\n-Mark\n",
"'readline' module supposedly provides Emacs like key bindings and even functionality. However, it is not available on Windows but on Unix. Therefore, this might be a viable solution if you are not using Windows.\nimport readline\n\nSince I am running IDLE on Windows it is unfortunately not an option for me.\n"
] | [
6,
2,
0
] | [] | [] | [
"emacs",
"ide",
"keyboard",
"python"
] | stackoverflow_0000055365_emacs_ide_keyboard_python.txt |
Q:
How to best implement simple crash / error reporting?
What would be the best way to implement a simple crash / error reporting mechanism?
Details: my app is cross-platform (mac/windows/linux) and written in Python, so I just need something that will send me a small amount of text, e.g. just a timestamp and a traceback (which I already generate and show in my error dialog).
It would be fine if it could simply email it, but I can't think of a way to do this without including a username and password for the smtp server in the application...
Should I implement a simple web service on the server side and have my app send it an HTTP request with the info? Any better ideas?
A:
The web service is the best way, but there are some caveats:
You should always ask the user if it is ok to send error feedback information.
You should be prepared to fail gracefully if there are network errors. Don't let a failure to report a crash impede recovery!
You should avoid including user identifying or sensitive information unless the user knows (see #1) and you should either use SSL or otherwise protect it. Some jurisdictions impose burdens on you that you might not want to deal with, so it's best to simply not save such information.
Like any web service, make sure your service is not exploitable by miscreants.
A:
I can't think of a way to do this without including a username and password for the smtp server in the application...
You only need a username and password for authenticating yourself to a smarthost. You don't need it to send mail directly, you need it to send mail through a relay, e.g. your ISP's mail server. It's perfectly possible to send email without authentication - that's why spam is so hard to stop.
Having said that, some ISPs block outbound traffic on port 25, so the most robust alternative is an HTTP POST, which is unlikely to be blocked by anything. Be sure to pick a URL that you won't feel restricted by later on, or better yet, have the application periodically check for updates, so if you decide to change domains or something, you can push an update in advance.
Security isn't really an issue. You can fairly easily discard junk data, so all that really concerns you is whether or not somebody would go to the trouble of constructing fake tracebacks to mess with you, and that's a very unlikely situation.
As for the payload, PyCrash can help you with that.
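A minimal sketch of that HTTP POST approach (the URL is a placeholder; the whole call is wrapped so a network failure can never crash the app a second time):
import time
import traceback
import urllib
import urllib2

def report_crash(report_url='http://example.com/crashreport'):
    # Call this from your top-level except handler, after asking permission.
    data = urllib.urlencode({
        'timestamp': str(time.time()),
        'traceback': traceback.format_exc(),
    })
    try:
        urllib2.urlopen(report_url, data)   # passing data makes this a POST
    except Exception:
        pass   # never let error reporting raise a second error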
A:
The web hit is the way to go, but make sure you pick a good URL - your app will be hitting it for years to come.
A:
PyCrash?
A:
Whether you use SMTP or HTTP to send the data, you need to have a username/password in the application to prevent just anyone from sending random data to you.
With that in mind, I suspect it would be easier to use SMTP rather than HTTP to send the data.
A:
Some kind of simple web service would suffice. You would have to consider security so that not just anyone could make requests to your service.
On a larger scale we considered a JMS messaging system. Put a serialized object of data containing the traceback/error message into a queue and consume it every x minutes generating reports/alerts from that data.
| How to best implement simple crash / error reporting? | What would be the best way to implement a simple crash / error reporting mechanism?
Details: my app is cross-platform (mac/windows/linux) and written in Python, so I just need something that will send me a small amount of text, e.g. just a timestamp and a traceback (which I already generate and show in my error dialog).
It would be fine if it could simply email it, but I can't think of a way to do this without including a username and password for the smtp server in the application...
Should I implement a simple web service on the server side and have my app send it an HTTP request with the info? Any better ideas?
| [
"The web service is the best way, but there are some caveats:\n\nYou should always ask the user if it is ok to send error feedback information.\nYou should be prepared to fail gracefully if there are network errors. Don't let a failure to report a crash impede recovery!\nYou should avoid including user identifying or sensitive information unless the user knows (see #1) and you should either use SSL or otherwise protect it. Some jurisdictions impose burdens on you that you might not want to deal with, so it's best to simply not save such information.\nLike any web service, make sure your service is not exploitable by miscreants.\n\n",
"\nI can't think of a way to do this without including a username and password for the smtp server in the application...\n\nYou only need a username and password for authenticating yourself to a smarthost. You don't need it to send mail directly, you need it to send mail through a relay, e.g. your ISP's mail server. It's perfectly possible to send email without authentication - that's why spam is so hard to stop.\nHaving said that, some ISPs block outbound traffic on port 25, so the most robust alternative is an HTTP POST, which is unlikely to be blocked by anything. Be sure to pick a URL that you won't feel restricted by later on, or better yet, have the application periodically check for updates, so if you decide to change domains or something, you can push an update in advance.\nSecurity isn't really an issue. You can fairly easily discard junk data, so all that really concerns you is whether or not somebody would go to the trouble of constructing fake tracebacks to mess with you, and that's a very unlikely situation.\nAs for the payload, PyCrash can help you with that.\n",
"The web hit is the way to go, but make sure you pick a good URL - your app will be hitting it for years to come. \n",
"PyCrash?\n",
"Whether you use SMTP or HTTP to send the data, you need to have a username/password in the application to prevent just anyone from sending random data to you.\nWith that in mind, I suspect it would be easier to use SMTP rather than HTTP to send the data.\n",
"Some kind of simple web service would suffice. You would have to consider security so not just anyone could make requests to your service..\nOn a larger scale we considered a JMS messaging system. Put a serialized object of data containing the traceback/error message into a queue and consume it every x minutes generating reports/alerts from that data.\n"
] | [
6,
3,
1,
1,
0,
0
] | [] | [] | [
"cross_platform",
"error_reporting",
"python"
] | stackoverflow_0000085985_cross_platform_error_reporting_python.txt |
Q:
Search for host with MAC-address using Python
I'd like to search for a given MAC address on my network, all from within a Python script. I already have a map of all the active IP addresses in the network but I cannot figure out how to glean the MAC address. Any ideas?
A:
You need ARP. Python's standard library doesn't include any code for that, so you either need to call an external program (your OS may have an 'arp' utility) or you need to build the packets yourself (possibly with a tool like Scapy.
A:
I don't think there is a built in way to get it from Python itself.
My question is, how are you getting the IP information from your network?
To get it from your local machine you could parse ifconfig (unix) or ipconfig (windows) with little difficulty.
A:
If you want a pure Python solution, you can take a look at Scapy to craft packets (you need to send ARP request, and inspect replies). Or if you don't mind invoking external program, you can use arping (on Un*x systems, I don't know of a Windows equivalent).
A:
It seems that there is not a native way of doing this with Python. Your best bet would be to parse the output of "ipconfig /all" on Windows, or "ifconfig" on Linux. Consider using os.popen() with some regexps.
A:
Depends on your platform. If you're using *nix, you can use the 'arp' command to look up the mac address for a given IP (assuming IPv4) address. If that doesn't work, you could ping the address and then look, or if you have access to the raw network (using BPF or some other mechanism), you could send your own ARP packets (but that is probably overkill).
A:
You would want to parse the output of 'arp', but the kernel ARP cache will only contain those IP address(es) if those hosts have communicated with the host where the Python script is running.
ifconfig can be used to display the MAC addresses of local interfaces, but not those on the LAN.
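A sketch of that approach -- ping the host first so it lands in the ARP cache, then parse the output of 'arp'. This assumes Unix-style ping/arp commands, and the exact output format varies by platform, so treat the regex as a starting point:
import os
import re

def mac_for_ip(ip):
    os.system('ping -c 1 %s > /dev/null' % ip)   # populate the ARP cache
    output = os.popen('arp -n %s' % ip).read()
    match = re.search(r'(([0-9a-fA-F]{2}[:-]){5}[0-9a-fA-F]{2})', output)
    return match.group(1) if match else None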
A:
Mark Pilgrim describes how to do this on Windows for the current machine with the Netbios module here. You can get the Netbios module as part of the Win32 package available at python.org. Unfortunately at the moment I cannot find the docs on the module.
| Search for host with MAC-address using Python | I'd like to search for a given MAC address on my network, all from within a Python script. I already have a map of all the active IP addresses in the network but I cannot figure out how to glean the MAC address. Any ideas?
| [
"You need ARP. Python's standard library doesn't include any code for that, so you either need to call an external program (your OS may have an 'arp' utility) or you need to build the packets yourself (possibly with a tool like Scapy.\n",
"I don't think there is a built in way to get it from Python itself. \nMy question is, how are you getting the IP information from your network?\nTo get it from your local machine you could parse ifconfig (unix) or ipconfig (windows) with little difficulty.\n",
"If you want a pure Python solution, you can take a look at Scapy to craft packets (you need to send ARP request, and inspect replies). Or if you don't mind invoking external program, you can use arping (on Un*x systems, I don't know of a Windows equivalent).\n",
"It seems that there is not a native way of doing this with Python. Your best bet would be to parse the output of \"ipconfig /all\" on Windows, or \"ifconfig\" on Linux. Consider using os.popen() with some regexps.\n",
"Depends on your platform. If you're using *nix, you can use the 'arp' command to look up the mac address for a given IP (assuming IPv4) address. If that doesn't work, you could ping the address and then look, or if you have access to the raw network (using BPF or some other mechanism), you could send your own ARP packets (but that is probably overkill).\n",
"You would want to parse the output of 'arp', but the kernel ARP cache will only contain those IP address(es) if those hosts have communicated with the host where the Python script is running.\nifconfig can be used to display the MAC addresses of local interfaces, but not those on the LAN.\n",
"Mark Pilgrim describes how to do this on Windows for the current machine with the Netbios module here. You can get the Netbios module as part of the Win32 package available at python.org. Unfortunately at the moment I cannot find the docs on the module.\n"
] | [
13,
1,
1,
1,
0,
0,
0
] | [
"as python was not meant to deal with OS-specific issues (it's supposed to be interpreted and cross platform), i would execute an external command to do so:\nin unix the command is ifconfig\nif you execute it as a pipe you get the desired result:\nimport os\nmyPipe = os.popen2(\"/sbin/ifconfig\",\"a\")\nprint(myPipe[1].read())\n\n"
] | [
-1
] | [
"network_programming",
"python"
] | stackoverflow_0000085577_network_programming_python.txt |
Q:
Running multiple sites from a single Python web framework
I know you can do redirection based on the domain or path to rewrite the URI to point at a site-specific location, and I've also seen some brutish if and elif statements for every site, as shown in the following code, which I would like to avoid.
if site == 'site1':
...
elif site == 'site2':
...
What are some good and clever ways of running multiple sites from a single, common Python web framework (e.g., Pylons, TurboGears, etc.)?
A:
Django has this built in. See the sites framework.
As a general technique, include a 'host' column in your database schema attached to the data you want to be host-specific, then include the Host HTTP header in the query when you are retrieving data.
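A sketch of that per-host filtering in a view (Page is a hypothetical model with a 'host' CharField; it is not part of the sites framework itself):
from django.http import HttpResponse
from myapp.models import Page   # hypothetical model with a 'host' field

def front_page(request):
    host = request.META.get('HTTP_HOST', '')
    pages = Page.objects.filter(host=host)   # only this site's content
    return HttpResponse(', '.join(p.title for p in pages))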
A:
Using Django on apache with mod_python, I host multiple (unrelated) django sites simply with the following apache config:
<VirtualHost 1.2.3.4>
DocumentRoot /www/site1
ServerName site1.com
<Location />
SetHandler python-program
SetEnv DJANGO_SETTINGS_MODULE site1.settings
PythonPath "['/www'] + sys.path"
PythonDebug On
PythonInterpreter site1
</Location>
</VirtualHost>
<VirtualHost 1.2.3.4>
DocumentRoot /www/site2
ServerName site2.com
<Location />
SetHandler python-program
SetEnv DJANGO_SETTINGS_MODULE site2.settings
PythonPath "['/www'] + sys.path"
PythonDebug On
PythonInterpreter site2
</Location>
</VirtualHost>
No need for multiple apache instances or proxy servers. Using a different PythonInterpreter directive for each site (the name you enter is arbitrary) keeps the namespaces separate.
A:
I use CherryPy as my web server (which comes bundled with Turbogears), and I simply run multiple instances of the CherryPy web server on different ports bound to localhost. Then I configure Apache with mod_proxy and mod_rewrite to transparently forward requests to the proper port based on the HTTP request.
A:
Using multiple server instances on local ports is a good idea, but you don't need a full-featured web server to redirect HTTP requests.
I would use pound as a reverse proxy to do the job. It is small, fast, simple and does exactly what we need here.
WHAT POUND IS:
a reverse-proxy: it passes requests from client browsers to one or more back-end servers.
a load balancer: it will distribute the requests from the client browsers among several back-end servers, while keeping session information.
an SSL wrapper: Pound will decrypt HTTPS requests from client browsers and pass them as plain HTTP to the back-end servers.
an HTTP/HTTPS sanitizer: Pound will verify requests for correctness and accept only well-formed ones.
a fail over-server: should a back-end server fail, Pound will take note of the fact and stop passing requests to it until it recovers.
a request redirector: requests may be distributed among servers according to the requested URL.
| Running multiple sites from a single Python web framework | I know you can do redirection based on the domain or path to rewrite the URI to point at a site-specific location and I've also seen some brutish if and elif statements for every site as shown in the following code, which I would like to avoid.
if site == 'site1':
...
elif site == 'site2':
...
What are some good and clever ways of running multiple sites from a single, common Python web framework (e.g., Pylons, TurboGears, etc.)?
| [
"Django has this built in. See the sites framework.\nAs a general technique, include a 'host' column in your database schema attached to the data you want to be host-specific, then include the Host HTTP header in the query when you are retrieving data.\n",
"Using Django on apache with mod_python, I host multiple (unrelated) django sites simply with the following apache config:\n<VirtualHost 1.2.3.4>\n DocumentRoot /www/site1\n ServerName site1.com\n <Location />\n SetHandler python-program\n SetEnv DJANGO_SETTINGS_MODULE site1.settings\n PythonPath \"['/www'] + sys.path\"\n PythonDebug On\n PythonInterpreter site1\n </Location>\n</VirtualHost>\n\n<VirtualHost 1.2.3.4>\n DocumentRoot /www/site2\n ServerName site2.com\n <Location />\n SetHandler python-program\n SetEnv DJANGO_SETTINGS_MODULE site2.settings\n PythonPath \"['/www'] + sys.path\"\n PythonDebug On\n PythonInterpreter site2\n </Location>\n</VirtualHost>\n\nNo need for multiple apache instances or proxy servers. Using a different PythonInterpreter directive for each site (the name you enter is arbitrary) keeps the namespaces separate.\n",
"I use CherryPy as my web server (which comes bundled with Turbogears), and I simply run multiple instances of the CherryPy web server on different ports bound to localhost. Then I configure Apache with mod_proxy and mod_rewrite to transparently forward requests to the proper port based on the HTTP request.\n",
"Using multiple server instances on local ports is a good idea, but you don't need a full featured web server to redirect HTTP requests. \nI would use pound as a reverse proxy to do the job. It is small, fast, simple and does exactly what we need here.\n\nWHAT POUND IS:\n\na reverse-proxy: it passes requests from client browsers to one or more back-end servers.\na load balancer: it will distribute the requests from the client browsers among several back-end servers, while keeping session information.\nan SSL wrapper: Pound will decrypt HTTPS requests from client browsers and pass them as plain HTTP to the back-end servers.\nan HTTP/HTTPS sanitizer: Pound will verify requests for correctness and accept only well-formed ones.\na fail over-server: should a back-end server fail, Pound will take note of the fact and stop passing requests to it until it recovers.\na request redirector: requests may be distributed among servers according to the requested URL.\n\n\n"
] | [
11,
7,
3,
3
] | [] | [] | [
"frameworks",
"python"
] | stackoverflow_0000085119_frameworks_python.txt |
Q:
Setting Environment Variables for Mercurial Hook
I am trying to call a shell script that sets a bunch of environment variables on our server from a mercurial hook. The shell script gets called fine when a new changegroup comes in, but the environment variables aren't carrying over past the call to the shell script.
My hgrc file on the repository looks like this:
[hooks]
changegroup = shell_script
changegroup.env = env
I can see the output of the shell script, and then the output of the env command, but the env command doesn't include the new environment variables set by the shell script.
I have verified that the shell script works fine when run by itself but when run in the context of the mercurial hook it does not properly set the environment.
A:
Shell scripts can't modify their environment.
http://tldp.org/LDP/abs/html/gotchas.html
A script may not export variables back to its parent process, the shell, or to the environment. Just as we learned in biology, a child process can inherit from a parent, but not vice versa
$ cat > eg.sh
export FOO="bar";
^D
$ bash eg.sh
$ echo $FOO;
$
Also, the problem is greater, as you have multiple calls of bash:
bash 1 -> hg -> bash 2 ( shell script )
-> bash 3 ( env call )
It would be like thinking I could set a variable in one PHP script and then magically get it with another simply by running one after the other.
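One common workaround is to stop expecting the child to export anything: have the hook run the script in a subshell, print the resulting environment, and copy it back into the hook's own process. A sketch (shell_script is the script from the question; everything else is illustrative):
import os
import subprocess

def load_env_from(script):
    # Run 'source script; env' in a single shell and parse the output.
    output = subprocess.Popen(['bash', '-c', '. %s; env' % script],
                              stdout=subprocess.PIPE).communicate()[0]
    for line in output.splitlines():
        if '=' in line:
            key, _, value = line.partition('=')
            os.environ[key] = value

load_env_from('shell_script')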
| Setting Environment Variables for Mercurial Hook | I am trying to call a shell script that sets a bunch of environment variables on our server from a mercurial hook. The shell script gets called fine when a new changegroup comes in, but the environment variables aren't carrying over past the call to the shell script.
My hgrc file on the repository looks like this:
[hooks]
changegroup = shell_script
changegroup.env = env
I can see the output of the shell script, and then the output of the env command, but the env command doesn't include the new environment variables set by the shell script.
I have verified that the shell script works fine when run by itself but when run in the context of the mercurial hook it does not properly set the environment.
| [
"Shell scripts can't modify their enviroment. \nhttp://tldp.org/LDP/abs/html/gotchas.html\n\nA script may not export variables back to its parent process, the shell, or to the environment. Just as we learned in biology, a child process can inherit from a parent, but not vice versa\n\n$ cat > eg.sh \nexport FOO=\"bar\";\n^D\n$ bash eg.sh \n$ echo $FOO; \n\n$\n\nalso, the problem is greater, as you have multiple calls of bash \nbash 1 -> hg -> bash 2 ( shell script ) \n -> bash 3 ( env call )\n\nit would be like thinking I could set a variable in one php script and then magically get it with another simply by running one after the other. \n"
] | [
2
] | [] | [] | [
"mercurial",
"mercurial_hook",
"python",
"shell"
] | stackoverflow_0000088194_mercurial_mercurial_hook_python_shell.txt |
Q:
How do I unit test an __init__() method of a python class with assertRaises()?
I have a class:
class MyClass:
def __init__(self, foo):
if foo != 1:
raise Error("foo is not equal to 1!")
and a unit test that is supposed to make sure the incorrect arg passed to the constructor properly raises an error:
def testInsufficientArgs(self):
foo = 0
self.assertRaises((Error), myClass = MyClass(Error, foo))
But I get...
NameError: global name 'Error' is not defined
Why? Where should I be defining this Error object? I thought it was built-in as a default exception type, no?
A:
'Error' in this example could be any exception object. I think perhaps you have read a code example that used it as a metasyntactic placeholder to mean, "The Appropriate Exception Class".
The baseclass of all exceptions is called 'Exception', and most of its subclasses are descriptive names of the type of error involved, such as 'OSError', 'ValueError', 'NameError', 'TypeError'.
In this case, the appropriate error is 'ValueError' (the value of foo was wrong, therefore a ValueError). I would recommend replacing 'Error' with 'ValueError' in your script.
Here is a complete version of the code you are trying to write, I'm duplicating everything because you have a weird keyword argument in your original example that you seem to be conflating with an assignment, and I'm using the 'failUnless' function name because that's the non-aliased name of the function:
class MyClass:
def __init__(self, foo):
if foo != 1:
raise ValueError("foo is not equal to 1!")
import unittest
class TestFoo(unittest.TestCase):
def testInsufficientArgs(self):
foo = 0
self.failUnlessRaises(ValueError, MyClass, foo)
if __name__ == '__main__':
unittest.main()
The output is:
.
----------------------------------------------------------------------
Ran 1 test in 0.007s
OK
There is a flaw in the unit testing library 'unittest' that other unit testing frameworks fix. You'll note that it is impossible to gain access to the exception object from the calling context. If you want to fix this, you'll have to redefine that method in a subclass of UnitTest:
This is an example of it in use:
class TestFoo(unittest.TestCase):
def failUnlessRaises(self, excClass, callableObj, *args, **kwargs):
try:
callableObj(*args, **kwargs)
except excClass, excObj:
return excObj # Actually return the exception object
else:
if hasattr(excClass,'__name__'): excName = excClass.__name__
else: excName = str(excClass)
raise self.failureException, "%s not raised" % excName
def testInsufficientArgs(self):
foo = 0
excObj = self.failUnlessRaises(ValueError, MyClass, foo)
self.failUnlessEqual(excObj[0], 'foo is not equal to 1!')
I have copied the failUnlessRaises function from unittest.py from python2.5 and modified it slightly.
A:
How about this:
class MyClass:
def __init__(self, foo):
if foo != 1:
raise Exception("foo is not equal to 1!")
import unittest
class Tests(unittest.TestCase):
def testSufficientArgs(self):
foo = 1
MyClass(foo)
def testInsufficientArgs(self):
foo = 2
self.assertRaises(Exception, MyClass, foo)
if __name__ == '__main__':
unittest.main()
A:
I think you're thinking of Exceptions. Replace the word Error in your description with Exception and you should be good to go :-)
| How do I unit test an __init__() method of a python class with assertRaises()? | I have a class:
class MyClass:
def __init__(self, foo):
if foo != 1:
raise Error("foo is not equal to 1!")
and a unit test that is supposed to make sure the incorrect arg passed to the constructor properly raises an error:
def testInsufficientArgs(self):
foo = 0
self.assertRaises((Error), myClass = MyClass(Error, foo))
But I get...
NameError: global name 'Error' is not defined
Why? Where should I be defining this Error object? I thought it was built-in as a default exception type, no?
| [
"'Error' in this example could be any exception object. I think perhaps you have read a code example that used it as a metasyntatic placeholder to mean, \"The Appropriate Exception Class\".\nThe baseclass of all exceptions is called 'Exception', and most of its subclasses are descriptive names of the type of error involved, such as 'OSError', 'ValueError', 'NameError', 'TypeError'.\nIn this case, the appropriate error is 'ValueError' (the value of foo was wrong, therefore a ValueError). I would recommend replacing 'Error' with 'ValueError' in your script.\nHere is a complete version of the code you are trying to write, I'm duplicating everything because you have a weird keyword argument in your original example that you seem to be conflating with an assignment, and I'm using the 'failUnless' function name because that's the non-aliased name of the function:\nclass MyClass:\n def __init__(self, foo):\n if foo != 1:\n raise ValueError(\"foo is not equal to 1!\")\n\nimport unittest\nclass TestFoo(unittest.TestCase):\n def testInsufficientArgs(self):\n foo = 0\n self.failUnlessRaises(ValueError, MyClass, foo)\n\nif __name__ == '__main__':\n unittest.main()\n\nThe output is:\n.\n----------------------------------------------------------------------\nRan 1 test in 0.007s\n\nOK\n\nThere is a flaw in the unit testing library 'unittest' that other unit testing frameworks fix. You'll note that it is impossible to gain access to the exception object from the calling context. If you want to fix this, you'll have to redefine that method in a subclass of UnitTest:\nThis is an example of it in use:\nclass TestFoo(unittest.TestCase):\n def failUnlessRaises(self, excClass, callableObj, *args, **kwargs):\n try:\n callableObj(*args, **kwargs)\n except excClass, excObj:\n return excObj # Actually return the exception object\n else:\n if hasattr(excClass,'__name__'): excName = excClass.__name__\n else: excName = str(excClass)\n raise self.failureException, \"%s not raised\" % excName\n\n def testInsufficientArgs(self):\n foo = 0\n excObj = self.failUnlessRaises(ValueError, MyClass, foo)\n self.failUnlessEqual(excObj[0], 'foo is not equal to 1!')\n\nI have copied the failUnlessRaises function from unittest.py from python2.5 and modified it slightly.\n",
"How about this:\nclass MyClass:\n def __init__(self, foo):\n if foo != 1:\n raise Exception(\"foo is not equal to 1!\")\n\nimport unittest\n\nclass Tests(unittest.TestCase):\n def testSufficientArgs(self):\n foo = 1\n MyClass(foo)\n\n def testInsufficientArgs(self):\n foo = 2\n self.assertRaises(Exception, MyClass, foo)\n\nif __name__ == '__main__':\n unittest.main()\n\n",
"I think you're thinking of Exceptions. Replace the word Error in your description with Exception and you should be good to go :-)\n"
] | [
33,
7,
1
] | [] | [] | [
"exception",
"python",
"unit_testing"
] | stackoverflow_0000088325_exception_python_unit_testing.txt |
Q:
In Python, how do you take tokenized input such as with C++?
In C++, I can take input like this:
cin >> a >> b >> c;
And a can be int, b can be float, and c can be whatever... How do I do the same in python?
input() and raw_input(), the way I'm using them, don't seem to be giving me the desired results.
A:
You generally shouldn't use input() in production code. If you want an int and then a float, try this:
>>> line = raw_input().split()
>>> a = int(line[0])
>>> b = float(line[1])
>>> c = " ".join(line[2:])
It all depends on what exactly you're trying to accomplish, but remember that readability counts. Obscure one-liners may seem cool but in the face of maintainability, try to choose something sensible :)
(P.S.: Don't forget to check for errors with try: ... except (ValueError, IndexError):)
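Spelled out with that error handling (same variables as above):
line = raw_input().split()
try:
    a = int(line[0])
    b = float(line[1])
    c = " ".join(line[2:])
except (ValueError, IndexError):
    print "expected at least an int and a float"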
A:
Since the C++ cin reads from sys.stdin, you'll often do something more like the following.
import sys
tokens= sys.stdin.read().split()
try:
    a= int(tokens[0])
    b= float(tokens[1])
except ValueError, e:
print e # handle the invalid input
A:
Depending upon what you are doing, something like the getopt module could be useful, but only in certain situations and I'm not sure if it would apply in yours.
| In Python, how do you take tokenized input such as with C++? | In C++, I can take input like this:
cin >> a >> b >> c;
And a can be int, b can be float, and c can be whatever... How do I do the same in python?
input() and raw_input(), the way I'm using them, don't seem to be giving me the desired results.
| [
"You generally shouldn't use input() in production code. If you want an int and then a float, try this:\n>>> line = raw_input().split()\n>>> a = int(line[0])\n>>> b = float(line[1])\n>>> c = \" \".join(line[2:])\n\nIt all depends on what exactly you're trying to accomplish, but remember that readability counts. Obscure one-liners may seem cool but in the face of maintainability, try to choose something sensible :)\n(P.S.: Don't forget to check for errors with try: ... except (ValueError, IndexError):)\n",
"Since the C++ cin reads from sys.stdin, you'll often do something more like the following.\nimport sys\ntokens= sys.stdin.read().split()\ntry:\n a= int(token[0])\n b= float(token[1])\nexcept ValueError, e:\n print e # handle the invalid input\n\n",
"Depending upon what you are doing, something like the getopt module could be useful, but only in certain situations and I'm not sure if it would apply in yours.\n"
] | [
4,
3,
0
] | [] | [] | [
"c++",
"input",
"python"
] | stackoverflow_0000088554_c++_input_python.txt |
Q:
How do i generate a histogram for a given probability distribution (for functional testing a server)?
I am trying to automate functional testing of a server using a realistic frequency distribution of requests. (sort of load testing, sort of simulation)
I've chosen the Weibull distribution as it "sort of" matches the distribution I've observed (ramps up quickly, drops off quickly but not instantly)
I use this distribution to generate the number of requests that should be sent each day between a given start and end date
I've hacked together an algorithm in Python that sort of works but it feels kludgy:
how_many_days = (end_date - start_date).days
freqs = defaultdict(int)
for x in xrange(how_many_responses):
freqs[int(how_many_days * weibullvariate(0.5, 2))] += 1
timeline = []
day = start_date
for i,freq in sorted(freqs.iteritems()):
timeline.append((day, freq))
day += timedelta(days=1)
return timeline
What better ways are there to do this?
A:
Why don't you try The Grinder 3 to load test your server, it comes with all this and more prebuilt, and it supports python as a scripting language
A:
Slightly longer but probably more readable rework of your last four lines:
samples = [0 for i in xrange(how_many_days + 1)]
for s in xrange(how_many_responses):
samples[min(int(how_many_days * weibullvariate(0.5, 2)), how_many_days)] += 1
histogram = zip(timeline, samples)
print '\n'.join((d.strftime('%Y-%m-%d ') + "*" * c) for d,c in histogram)
This always drops the samples within the date range, but you get a corresponding bump at the end of the timeline from all of the samples that are above the [0, 1] range.
A:
This is quick and probably not that accurate, but if you calculate the PDF yourself, then at least you make it easier to lay several smaller/larger ones on a single timeline. dev is the std deviation in the Gaussian noise, which controls the roughness. Note that this is not the 'right' way to generate what you want, but it's easy.
import math
from datetime import datetime, timedelta, date
from random import gauss
how_many_responses = 1000
start_date = date(2008, 5, 1)
end_date = date(2008, 6, 1)
num_days = (end_date - start_date).days + 1
timeline = [start_date + timedelta(i) for i in xrange(num_days)]
def weibull(x, k, l):
return (k / l) * (x / l)**(k-1) * math.e**(-(x/l)**k)
dev = 0.1
samples = [i * 1.25/(num_days-1) for i in range(num_days)]
probs = [weibull(i, 2, 0.5) for i in samples]
noise = [gauss(0, dev) for i in samples]
simdata = [max(0., e + n) for (e, n) in zip(probs, noise)]
events = [int(p * (how_many_responses / sum(probs))) for p in simdata]
histogram = zip(timeline, events)
print '\n'.join((d.strftime('%Y-%m-%d ') + "*" * c) for d,c in histogram)
A:
Instead of giving the number of requests as a fixed value, why not use a scaling factor instead? At the moment, you're treating requests as a limited quantity, and randomising the days on which those requests fall. It would seem more reasonable to treat your requests-per-day as independent.
from datetime import *
from random import *
timeline = []
scaling = 10
start_date = date(2008, 5, 1)
end_date = date(2008, 6, 1)
num_days = (end_date - start_date).days + 1
days = [start_date + timedelta(i) for i in range(num_days)]
requests = [int(scaling * weibullvariate(0.5, 2)) for i in range(num_days)]
timeline = zip(days, requests)
timeline
A:
I rewrote the code above to be shorter (but maybe it's too obfuscated now?)
from itertools import count, groupby, imap
timeline = (start_date + timedelta(days=days) for days in count(0))
how_many_days = (end_date - start_date).days
pick_a_day = lambda _:int(how_many_days * weibullvariate(0.5, 2))
days = sorted(imap(pick_a_day, xrange(how_many_responses)))
histogram = zip(timeline, (len(list(responses)) for day, responses in groupby(days)))
print '\n'.join((d.strftime('%Y-%m-%d ') + "*" * c) for d,c in histogram)
A:
Another solution is to use Rpy, which puts all of the power of R (including lots of tools for distributions), easily into Python.
| How do i generate a histogram for a given probability distribution (for functional testing a server)? | I am trying to automate functional testing of a server using a realistic frequency distribution of requests. (sort of load testing, sort of simulation)
I've chosen the Weibull distribution as it "sort of" matches the distribution I've observed (ramps up quickly, drops off quickly but not instantly)
I use this distribution to generate the number of requests that should be sent each day between a given start and end date
I've hacked together an algorithm in Python that sort of works but it feels kludgy:
how_many_days = (end_date - start_date).days
freqs = defaultdict(int)
for x in xrange(how_many_responses):
freqs[int(how_many_days * weibullvariate(0.5, 2))] += 1
timeline = []
day = start_date
for i,freq in sorted(freqs.iteritems()):
timeline.append((day, freq))
day += timedelta(days=1)
return timeline
What better ways are there to do this?
| [
"Why don't you try The Grinder 3 to load test your server, it comes with all this and more prebuilt, and it supports python as a scripting language\n",
"Slightly longer but probably more readable rework of your last four lines:\nsamples = [0 for i in xrange(how_many_days + 1)]\nfor s in xrange(how_many_responses):\n samples[min(int(how_many_days * weibullvariate(0.5, 2)), how_many_days)] += 1\nhistogram = zip(timeline, samples)\nprint '\\n'.join((d.strftime('%Y-%m-%d ') + \"*\" * c) for d,c in histogram)\n\nThis always drops the samples within the date range, but you get a corresponding bump at the end of the timeline from all of the samples that are above the [0, 1] range.\n",
"This is quick and probably not that accurate, but if you calculate the PDF yourself, then at least you make it easier to lay several smaller/larger ones on a single timeline. dev is the std deviation in the Guassian noise, which controls the roughness. Note that this is not the 'right' way to generate what you want, but it's easy.\nimport math\nfrom datetime import datetime, timedelta, date\nfrom random import gauss\n\nhow_many_responses = 1000\nstart_date = date(2008, 5, 1)\nend_date = date(2008, 6, 1)\nnum_days = (end_date - start_date).days + 1\ntimeline = [start_date + timedelta(i) for i in xrange(num_days)]\n\ndef weibull(x, k, l):\n return (k / l) * (x / l)**(k-1) * math.e**(-(x/l)**k)\n\ndev = 0.1\nsamples = [i * 1.25/(num_days-1) for i in range(num_days)]\nprobs = [weibull(i, 2, 0.5) for i in samples]\nnoise = [gauss(0, dev) for i in samples]\nsimdata = [max(0., e + n) for (e, n) in zip(probs, noise)]\nevents = [int(p * (how_many_responses / sum(probs))) for p in simdata]\n\nhistogram = zip(timeline, events)\n\nprint '\\n'.join((d.strftime('%Y-%m-%d ') + \"*\" * c) for d,c in histogram)\n\n",
"Instead of giving the number of requests as a fixed value, why not use a scaling factor instead? At the moment, you're treating requests as a limited quantity, and randomising the days on which those requests fall. It would seem more reasonable to treat your requests-per-day as independent.\nfrom datetime import *\nfrom random import *\n\ntimeline = []\nscaling = 10\nstart_date = date(2008, 5, 1)\nend_date = date(2008, 6, 1)\n\nnum_days = (end_date - start_date).days + 1\ndays = [start_date + timedelta(i) for i in range(num_days)]\nrequests = [int(scaling * weibullvariate(0.5, 2)) for i in range(num_days)]\ntimeline = zip(days, requests)\ntimeline\n\n",
"I rewrote the code above to be shorter (but maybe it's too obfuscated now?)\ntimeline = (start_date + timedelta(days=days) for days in count(0))\nhow_many_days = (end_date - start_date).days\npick_a_day = lambda _:int(how_many_days * weibullvariate(0.5, 2))\ndays = sorted(imap(pick_a_day, xrange(how_many_responses)))\nhistogram = zip(timeline, (len(list(responses)) for day, responses in groupby(days)))\nprint '\\n'.join((d.strftime('%Y-%m-%d ') + \"*\" * c) for d,c in histogram)\n\n",
"Another solution is to use Rpy, which puts all of the power of R (including lots of tools for distributions), easily into Python. \n"
] | [
1,
1,
1,
0,
0,
0
] | [] | [] | [
"python",
"simulation",
"statistics",
"stress_testing"
] | stackoverflow_0000053786_python_simulation_statistics_stress_testing.txt |
Q:
What are the pros and cons of the various Python implementations?
I am relatively new to Python, and I have always used the standard cpython (v2.5) implementation.
I've been wondering about the other implementations though, particularly Jython and IronPython. What makes them better? What makes them worse? What other implementations are there?
I guess what I'm looking for is a summary and list of pros and cons for each implementation.
A:
Jython and IronPython are useful if you have an overriding need to interface with existing libraries written in a different platform, like if you have 100,000 lines of Java and you just want to write a 20-line Python script. Not particularly useful for anything else, in my opinion, because they are perpetually a few versions behind CPython due to community inertia.
Stackless is interesting because it has support for green threads, continuations, etc. Sort of an Erlang-lite.
PyPy is an experimental interpreter/compiler that may one day supplant CPython, but for now is more of a testbed for new ideas.
A:
An additional benefit for Jython, at least for some, is it lacks the GIL (the Global Interpreter Lock) and uses Java's native threads. This means that you can run pure Python code in parallel, something not possible with the GIL.
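A tiny sketch of what that buys you -- the very same threading code runs everywhere, but only without a GIL can the two pure-Python loops occupy two cores at once:
import threading

def burn():
    total = 0
    for i in xrange(10 ** 7):   # pure-Python, CPU-bound work
        total += i

threads = [threading.Thread(target=burn) for i in range(2)]
for t in threads: t.start()
for t in threads: t.join()
# CPython's GIL serializes the two loops; Jython can run them in parallel.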
A:
All of the implementations are listed here:
https://wiki.python.org/moin/PythonImplementations
CPython is the "reference implementation" and developed by Guido and the core developers.
A:
Pros: Access to the libraries available for JVM or CLR.
Cons: Both naturally lag behind CPython in terms of features.
A:
IronPython and Jython use the runtime environment for .NET or Java and with that comes Just In Time compilation and a garbage collector different from the original CPython. They might be also faster than CPython thanks to the JIT, but I don't know that for sure.
A downside in using Jython or IronPython is that you cannot use native C modules, they can be only used in CPython.
A:
PyPy is a Python implementation written in RPython which is a Python subset.
RPython can be translated to run on a VM or, unlike standard Python, RPython can be statically compiled.
| What are the pros and cons of the various Python implementations? | I am relatively new to Python, and I have always used the standard cpython (v2.5) implementation.
I've been wondering about the other implementations though, particularly Jython and IronPython. What makes them better? What makes them worse? What other implementations are there?
I guess what I'm looking for is a summary and list of pros and cons for each implementation.
| [
"Jython and IronPython are useful if you have an overriding need to interface with existing libraries written in a different platform, like if you have 100,000 lines of Java and you just want to write a 20-line Python script. Not particularly useful for anything else, in my opinion, because they are perpetually a few versions behind CPython due to community inertia.\nStackless is interesting because it has support for green threads, continuations, etc. Sort of an Erlang-lite.\nPyPy is an experimental interpreter/compiler that may one day supplant CPython, but for now is more of a testbed for new ideas.\n",
"An additional benefit for Jython, at least for some, is it lacks the GIL (the Global Interpreter Lock) and uses Java's native threads. This means that you can run pure Python code in parallel, something not possible with the GIL.\n",
"All of the implementations are listed here:\nhttps://wiki.python.org/moin/PythonImplementations\nCPython is the \"reference implementation\" and developed by Guido and the core developers.\n",
"Pros: Access to the libraries available for JVM or CLR.\nCons: Both naturally lag behind CPython in terms of features.\n",
"IronPython and Jython use the runtime environment for .NET or Java and with that comes Just In Time compilation and a garbage collector different from the original CPython. They might be also faster than CPython thanks to the JIT, but I don't know that for sure.\nA downside in using Jython or IronPython is that you cannot use native C modules, they can be only used in CPython.\n",
"PyPy is a Python implementation written in RPython wich is a Python subset. \nRPython can be translated to run on a VM or, unlike standard Python, RPython can be statically compiled.\n"
] | [
15,
6,
3,
1,
1,
1
] | [] | [] | [
"python"
] | stackoverflow_0000086134_python.txt |
Q:
I need to write code in python for comparing text of two documents using fingerprint techniques
I need to write code in Python for comparing the text of documents using fingerprinting techniques. I do not know how to take or generate the fingerprint of a document. I'm asking if anyone knows the method or has source code for generating fingerprints of documents stored in binary form.
A:
If you want message digests (cryptographic hashes), use the hashlib library. Here's an example (IPython session):
In [1]: import hashlib
In [2]: md = hashlib.sha256(open('/tmp/Calendar.xls', 'rb').read())
In [3]: md.hexdigest()
Out[3]: '8517f1eae176f1a20de78d879f81f23de503cfd6b8e4be1d798fb2342934b187'
A:
You might try the following papers to get started with the concept of fingerprinting:
Winnowing: Local Algorithms for Document Fingerprinting
Scalable Document Fingerprinting
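A rough sketch of the winnowing idea from the first paper -- hash every k-gram, then keep the minimum hash from each sliding window of w consecutive hashes; k and w are tuning parameters, and the built-in hash() stands in for a proper rolling hash:
def winnow(text, k=5, w=4):
    hashes = [hash(text[i:i + k]) for i in xrange(len(text) - k + 1)]
    fingerprint = set()
    for i in xrange(len(hashes) - w + 1):
        window = hashes[i:i + w]
        j = window.index(min(window))
        fingerprint.add((window[j], i + j))   # (hash, position) pair
    return fingerprint

# Compare two documents by the overlap of the selected hashes:
# hashes_a = set(h for h, pos in winnow(text_a))
# hashes_b = set(h for h, pos in winnow(text_b))
# similarity is then len(hashes_a & hashes_b)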
| I need to write code in python for comparing text of two documents using fingerprint techniques | I need to write code in Python for comparing the text of documents using fingerprinting techniques. I do not know how to take or generate the fingerprint of a document. I'm asking if anyone knows the method or has source code for generating fingerprints of documents stored in binary form.
| [
"If you want message digests (cryptographic hashes), use the hashlib library. Here's an example (IPython session):\n\n In [1]: import hashlib\n\n In [2]: md = hashlib.sha256(open('/tmp/Calendar.xls', 'rb').read())\n\n In [3]: md.hexdigest()\n Out[3]: '8517f1eae176f1a20de78d879f81f23de503cfd6b8e4be1d798fb2342934b187'\n\n",
"You might try the following papers to get started with the concept of fingerprinting:\n\nWinnowing: Local Algorithms for Document Fingerprinting\nScalable Document Fingerprinting\n\n"
] | [
4,
4
] | [] | [] | [
"diff",
"python"
] | stackoverflow_0000091183_diff_python.txt |
Q:
Will everything in the standard library treat strings as unicode in Python 3.0?
I'm a little confused about how the standard library will behave now that Python (from 3.0) is unicode-based. Will modules such as CGI and urllib use unicode strings or will they use the new 'bytes' type and just provide encoded data?
A:
Logically a lot of things like MIME-encoded mail messages, URLs, XML documents, and so on should be returned as bytes not strings. This could cause some consternation as the libraries start to be nailed down for Python 3 and people discover that they have to be more aware of the bytes/string conversions than they were for str/unicode ...
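To make that conversion concrete, here is a minimal Python 3 sketch of crossing the bytes/str boundary; the UTF-8 encoding is an assumption about the data:
raw = b'caf\xc3\xa9'            # bytes, e.g. as read from a socket or file
text = raw.decode('utf-8')      # bytes -> str (unicode) for text processing
wire = text.encode('utf-8')     # str -> bytes again for output
print(type(raw), type(text), type(wire))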
A:
One of the great things about this question (and Python in general) is that you can just mess around in the interpreter! Python 3.0 rc1 is currently available for download.
>>> import urllib.request
>>> fh = urllib.request.urlopen('http://www.python.org/')
>>> print(type(fh.read(100)))
<class 'bytes'>
A:
There will be a two-step dance here. See Python 3000 and You.
Step 1 is to get running under 3.0.
Step 2 is to rethink your APIs to, perhaps, do something more sensible.
The most likely course is that the libraries will switch to unicode strings to remain as compatible as possible with how they used to work.
Then, perhaps, some will switch to bytes to more properly implement the RFC standards for the various protocols.
| Will everything in the standard library treat strings as unicode in Python 3.0? | I'm a little confused about how the standard library will behave now that Python (from 3.0) is unicode-based. Will modules such as CGI and urllib use unicode strings or will they use the new 'bytes' type and just provide encoded data?
| [
"Logically a lot of things like MIME-encoded mail messages, URLs, XML documents, and so on should be returned as bytes not strings. This could cause some consternation as the libraries start to be nailed down for Python 3 and people discover that they have to be more aware of the bytes/string conversions than they were for str/unicode ...\n",
"One of the great things about this question (and Python in general) is that you can just mess around in the interpreter! Python 3.0 rc1 is currently available for download.\n>>> import urllib.request\n>>> fh = urllib.request.urlopen('http://www.python.org/')\n>>> print(type(fh.read(100)))\n<class 'bytes'>\n\n",
"There will be a two-step dance here. See Python 3000 and You.\nStep 1 is to get running under 3.0.\nStep 2 is to rethink your API's to, perhaps, do something more sensible.\nThe most likely course is that the libraries will switch to unicode strings to remain as compatible as possible with how they used to work.\nThen, perhaps, some will switch to bytes to more properly implement the RFC standards for the various protocols.\n"
] | [
12,
7,
1
] | [] | [] | [
"cgi",
"python",
"python_3.x",
"string",
"unicode"
] | stackoverflow_0000091205_cgi_python_python_3.x_string_unicode.txt |
Q:
NI CVI with Python
I'd like to integrate a Python IDLE-esque command prompt interface into an existing NI-CVI (LabWindows) application. I've tried to follow the Python.org discussions but seem to get lost in the details. Is there a resource out there for dummies like me?
A:
Here is some Python sample code that calls CVI.
There are DAQmx Python bindings too.
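If all you need is the IDLE-esque prompt itself, the standard library's code module can provide it from inside any Python interpreter you embed; here is a minimal sketch (the namespace contents are placeholders for whatever your CVI application actually exposes):
import code

# Placeholder namespace -- in a real integration these would be
# handles exported by the host LabWindows/CVI application.
namespace = {'app_version': '1.0'}

code.interact(banner="Embedded Python console (Ctrl-D to exit)",
              local=namespace)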
| NI CVI with Python | I'd like to integrate a Python IDLE-esque command prompt interface into an existing NI-CVI (LabWindows) application. I've tried to follow the Python.org discussions but seem to get lost in the details. Is there a resource out there for dummies like me?
| [
"Here is a python sample code calling a CVI.\nThere are DaqMx python bindings too.\n"
] | [
1
] | [] | [] | [
"cvi",
"labview",
"labwindows",
"python"
] | stackoverflow_0000091666_cvi_labview_labwindows_python.txt |
Q:
Django + FCGID on Fedora Core 9 -- what am I missing?
Fedora Core 9 seems to have FCGID instead of FastCGI as a pre-built, YUM-managed module. [I'd rather not have to maintain a module outside of YUM; so no manual builds for me or my sysadmins.]
I'm trying to launch Django through the runfastcgi interface (per the FastCGI deployment docs).
What I'm seeing is the resulting page written to error_log. It does not come back through Apache to my browser. Further, there are a bunch of messages -- apparently from flup and WSGIServer -- that indicate that the WSGI environment isn't defined properly.
Is FastCGI available for FC9, and I just overlooked it?
Do FCGID and flup actually create the necessary WSGI environment for Django? If so, can you share the .fcgi interface script you're using? Mine is copied from mysite.fcgi in the Django docs. The FCGID Documentation page drops hints that PHP and Ruby are supported -- PHP directly, and Ruby through dispatch.fcgi -- and Python is not supported.
Update. The error messages are...
WSGIServer: missing FastCGI param REQUEST_METHOD required by WSGI!
WSGIServer: missing FastCGI param SERVER_NAME required by WSGI!
WSGIServer: missing FastCGI param SERVER_PORT required by WSGI!
WSGIServer: missing FastCGI param SERVER_PROTOCOL required by WSGI!
Should I abandon ship and switch to mod_python and give up on this approach?
A:
Why don't you try mod_wsgi? It sounds like the preferred way these days for WSGI applications such as Django.
If you don't want to compile stuff for Fedora Core, that might be trickier.
Regarding your first question, this seems to solve the fcgid configuration problem.
Note that you don't want to run the Django application manually like this: python manage.py runfcgi. The fcgi process is run by Apache automatically if the setup is correct, and restarted by touching your.fcgi.
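For reference, the .fcgi script the Django docs of that era describe looks roughly like this; the paths and settings module below are placeholders for your own project:
#!/usr/bin/env python
import sys, os

# Placeholder paths -- point these at your actual project.
sys.path.insert(0, "/home/user/projects")
os.environ['DJANGO_SETTINGS_MODULE'] = "mysite.settings"

from django.core.servers.fastcgi import runfastcgi
runfastcgi(method="threaded", daemonize="false")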
| Django + FCGID on Fedora Core 9 -- what am I missing? | Fedora Core 9 seems to have FCGID instead of FastCGI as a pre-built, YUM-managed module. [I'd rather not have to maintain a module outside of YUM; so no manual builds for me or my sysadmins.]
I'm trying to launch Django through the runfastcgi interface (per the FastCGI deployment docs).
What I'm seeing is the resulting page written to error_log. It does not come back through Apache to my browser. Further, there are a bunch of messages -- apparently from flup and WSGIServer -- that indicate that the WSGI environment isn't defined properly.
Is FastCGI available for FC9, and I just overlooked it?
Do FCGID and flup actually create the necessary WSGI environment for Django? If so, can you share the .fcgi interface script you're using? Mine is copied from mysite.fcgi in the Django docs. The FCGID Documentation page drops hints that PHP and Ruby are supported -- PHP directly, and Ruby through dispatch.fcgi -- and Python is not supported.
Update. The error messages are...
WSGIServer: missing FastCGI param REQUEST_METHOD required by WSGI!
WSGIServer: missing FastCGI param SERVER_NAME required by WSGI!
WSGIServer: missing FastCGI param SERVER_PORT required by WSGI!
WSGIServer: missing FastCGI param SERVER_PROTOCOL required by WSGI!
Should I abandon ship and switch to mod_python and give up on this approach?
| [
"Why don't you try modwsgi? It sounds as the preffered way these days for WSGI applications such as Django.\nIf you don't wan't to compile stuff for Fedora Core, that might be trickier.\nRegarding to your first question, this seems to solve the fcgid configuration problem. \nNote that you don't want to run the django application manually like this: python manage.py runfcgi, the fcgi is run by apache automatically if the setup is correct and restarted by touch your.fcgi.\n"
] | [
3
] | [] | [] | [
"apache2",
"django",
"fastcgi",
"fcgid",
"python"
] | stackoverflow_0000092373_apache2_django_fastcgi_fcgid_python.txt |
Q:
Is there a pretty printer for python data?
Working with python interactively, it's sometimes necessary to display a result which is some arbitrarily complex data structure (like lists with embedded lists, etc.)
The default way to display them is just one massive linear dump which just wraps over and over and you have to parse carefully to read it.
Is there something that will take any Python object and display it in a more rational manner? E.g.
[0, 1,
[a, b, c],
2, 3, 4]
instead of:
[0, 1, [a, b, c], 2, 3, 4]
I know that's not a very good example, but I think you get the idea.
A:
from pprint import pprint
a = [0, 1, ['a', 'b', 'c'], 2, 3, 4]
pprint(a)
Note that for a short list like my example, pprint will in fact print it all on one line. However, for more complex structures it does a pretty good job of pretty printing data.
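If you want to see the wrapping even on a short structure, pprint takes a width argument; narrowing it forces the nested list onto its own lines:
from pprint import pprint
a = [0, 1, ['a', 'b', 'c'], 2, 3, 4]
pprint(a, width=20)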
A:
Sometimes YAML can be good for this.
import yaml
a = [0, 1, ['a', 'b', 'c'], 2, 3, 4]
print yaml.dump(a)
Produces:
- 0
- 1
- [a, b, c]
- 2
- 3
- 4
A:
In addition to pprint.pprint, pprint.pformat is really useful for making readable __repr__s. My complex __repr__s usually look like so:
def __repr__(self):
from pprint import pformat
return "<ClassName %s>" % pformat({"attrs":self.attrs,
"that_i":self.that_i,
"care_about":self.care_about})
A:
Another good option is to use IPython, which is an interactive environment with a lot of extra features, including automatic pretty printing, tab-completion of methods, easy shell access, and a lot more. It's also very easy to install.
IPython tutorial
| Is there a pretty printer for python data? | Working with python interactively, it's sometimes necessary to display a result which is some arbitrarily complex data structure (like lists with embedded lists, etc.)
The default way to display them is just one massive linear dump which just wraps over and over and you have to parse carefully to read it.
Is there something that will take any Python object and display it in a more rational manner? E.g.
[0, 1,
[a, b, c],
2, 3, 4]
instead of:
[0, 1, [a, b, c], 2, 3, 4]
I know that's not a very good example, but I think you get the idea.
| [
"from pprint import pprint\na = [0, 1, ['a', 'b', 'c'], 2, 3, 4]\npprint(a)\n\nNote that for a short list like my example, pprint will in fact print it all on one line. However, for more complex structures it does a pretty good job of pretty printing data.\n",
"Somtimes YAML can be good for this.\nimport yaml\na = [0, 1, ['a', 'b', 'c'], 2, 3, 4]\nprint yaml.dump(a)\n\nProduces:\n- 0\n- 1\n- [a, b, c]\n- 2\n- 3\n- 4\n\n",
"In addition to pprint.pprint, pprint.pformat is really useful for making readable __repr__s. My complex __repr__s usually look like so:\ndef __repr__(self):\n from pprint import pformat\n\n return \"<ClassName %s>\" % pformat({\"attrs\":self.attrs,\n \"that_i\":self.that_i,\n \"care_about\":self.care_about})\n\n",
"Another good option is to use IPython, which is an interactive environment with a lot of extra features, including automatic pretty printing, tab-completion of methods, easy shell access, and a lot more. It's also very easy to install. \nIPython tutorial\n"
] | [
29,
11,
8,
3
] | [] | [] | [
"prettify",
"python"
] | stackoverflow_0000091810_prettify_python.txt |
Q:
How to associate the cn in an SSL cert of pyOpenSSL verify_cb to a generated socket
I am a little new to pyOpenSSL. I am trying to figure out how to associate the generated socket to an SSL cert. verify_cb gets called, which gives me access to the cert and a conn, but how do I associate those things when this happens:
cli,addr = self.server.accept()
A:
After the handshake is complete, you can get the client certificate. While the client certificate is also available in the verify callback (verify_cb), there's not really any reason to try to do anything aside from verify the certificate in that callback. Setting up an application-specific mapping is better done after the handshake has completed successfully. So, consider using the OpenSSL.SSL.Connection instance returned by the accept method to get the certificate (and from there, the commonName) and associate it with the connection object at that point. For example,
client, clientAddress = self.server.accept()
client.do_handshake()
commonNamesToConnections[client.get_peer_certificate().commonName] = client
You might want to check the mapping to make sure you're not overwriting any existing connection (perhaps using a list of connections instead of just mapping each common name to one). And of course you need to remove entries when connections are lost.
The `do_handshake` call forces the handshake to actually happen. Without this, the handshake will happen when application data is first transferred over the connection. That's fine, but it would make setting up this mapping slightly more complicated.
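For completeness, a minimal verify callback of the kind the question mentions might look like this; note it only judges the certificate and leaves any per-connection bookkeeping for after the handshake:
from OpenSSL import SSL

def verify_cb(conn, cert, errnum, depth, ok):
    # Pass through OpenSSL's own verification verdict; don't try to
    # map certificates to sockets here.
    print("verify %s: ok=%s" % (cert.get_subject().commonName, ok))
    return ok

ctx = SSL.Context(SSL.SSLv23_METHOD)
ctx.set_verify(SSL.VERIFY_PEER, verify_cb)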
 | How to associate the cn in an SSL cert of pyOpenSSL verify_cb to a generated socket | I am a little new to pyOpenSSL. I am trying to figure out how to associate the generated socket to an SSL cert. verify_cb gets called, which gives me access to the cert and a conn, but how do I associate those things when this happens:
cli,addr = self.server.accept()
| [
"After the handshake is complete, you can get the client certificate. While the client certificate is also available in the verify callback (verify_cb), there's not really any reason to try to do anything aside from verify the certificate in that callback. Setting up an application-specific mapping is better done after the handshake has completely successfully. So, consider using the OpenSSL.SSL.Connection instance returned by the accept method to get the certificate (and from there, the commonName) and associate it with the connection object at that point. For example,\nclient, clientAddress = self.server.accept()\nclient.do_handshake()\ncommonNamesToConnections[client.get_peer_certificate().commonName] = client\n\nYou might want to check the mapping to make sure you're not overwriting any existing connection (perhaps using a list of connections instead of just mapping each common name to one). And of course you need to remove entries when connections are lost.\nThe `do_handshake´ call forces the handshake to actually happen. Without this, the handshake will happen when application data is first transferred over the connection. That's fine, but it would make setting up this mapping slightly more complicated.\n"
] | [
5
] | [] | [] | [
"pyopenssl",
"python"
] | stackoverflow_0000096508_pyopenssl_python.txt |
Q:
What's the easiest non-memory intensive way to output XML from Python?
Basically, something similar to System.Xml.XmlWriter - A streaming XML Writer that doesn't incur much of a memory overhead. So that rules out xml.dom and xml.dom.minidom. Suggestions?
A:
I think you'll find XMLGenerator from xml.sax.saxutils is the closest thing to what you want.
import time
from xml.sax.saxutils import XMLGenerator
from xml.sax.xmlreader import AttributesNSImpl
LOG_LEVELS = ['DEBUG', 'WARNING', 'ERROR']
class xml_logger:
def __init__(self, output, encoding):
"""
Set up a logger object, which takes SAX events and outputs
an XML log file
"""
logger = XMLGenerator(output, encoding)
logger.startDocument()
attrs = AttributesNSImpl({}, {})
logger.startElementNS((None, u'log'), u'log', attrs)
self._logger = logger
self._output = output
self._encoding = encoding
return
def write_entry(self, level, msg):
"""
Write a log entry to the logger
level - the level of the entry
msg - the text of the entry. Must be a Unicode object
"""
#Note: in a real application, I would use ISO 8601 for the date
#asctime used here for simplicity
now = time.asctime(time.localtime())
attr_vals = {
(None, u'date'): now,
(None, u'level'): LOG_LEVELS[level],
}
attr_qnames = {
(None, u'date'): u'date',
(None, u'level'): u'level',
}
attrs = AttributesNSImpl(attr_vals, attr_qnames)
self._logger.startElementNS((None, u'entry'), u'entry', attrs)
self._logger.characters(msg)
self._logger.endElementNS((None, u'entry'), u'entry')
return
def close(self):
"""
Clean up the logger object
"""
self._logger.endElementNS((None, u'log'), u'log')
self._logger.endDocument()
return
if __name__ == "__main__":
#Test it out
import sys
xl = xml_logger(sys.stdout, 'utf-8')
xl.write_entry(2, u"Vanilla log entry")
xl.close()
You'll probably want to look at the rest of the article I got that from at http://www.xml.com/pub/a/2003/03/12/py-xml.html.
A:
I think I have your poison:
http://sourceforge.net/projects/xmlite
Cheers
A:
Some years ago I used MarkupWriter from 4suite
General-purpose utility class for generating XML (may eventually be
expanded to produce more output types)
Sample usage:
from Ft.Xml import MarkupWriter
writer = MarkupWriter(indent=u"yes")
writer.startDocument()
writer.startElement(u'xsa')
writer.startElement(u'vendor')
#Element with simple text (#PCDATA) content
writer.simpleElement(u'name', content=u'Centigrade systems')
#Note writer.text(content) still works
writer.simpleElement(u'email', content=u"info@centigrade.bogus")
writer.endElement(u'vendor')
#Element with an attribute
writer.startElement(u'product', attributes={u'id': u"100\u00B0"})
#Note writer.attribute(name, value, namespace=None) still works
writer.simpleElement(u'name', content=u"100\u00B0 Server")
#XML fragment
writer.xmlFragment('<version>1.0</version><last-release>20030401</last-release>')
#Empty element
writer.simpleElement(u'changes')
writer.endElement(u'product')
writer.endElement(u'xsa')
writer.endDocument()
Note on the difference between 4Suite writers and printers
Writer - module that exposes a broad public API for building output
bit by bit
Printer - module that simply takes a DOM and creates output from it
as a whole, within one API invocation
Recently I have heard a lot about how lxml is great, but I don't have first-hand experience with it; I had some fun working with gnosis.
A:
xml.etree.cElementTree, included in the default distribution of CPython since 2.5. Lightning fast for both reading and writing XML.
| What's the easiest non-memory intensive way to output XML from Python? | Basically, something similar to System.Xml.XmlWriter - A streaming XML Writer that doesn't incur much of a memory overhead. So that rules out xml.dom and xml.dom.minidom. Suggestions?
| [
"I think you'll find XMLGenerator from xml.sax.saxutils is the closest thing to what you want.\n\nimport time\nfrom xml.sax.saxutils import XMLGenerator\nfrom xml.sax.xmlreader import AttributesNSImpl\n\nLOG_LEVELS = ['DEBUG', 'WARNING', 'ERROR']\n\n\nclass xml_logger:\n def __init__(self, output, encoding):\n \"\"\"\n Set up a logger object, which takes SAX events and outputs\n an XML log file\n \"\"\"\n logger = XMLGenerator(output, encoding)\n logger.startDocument()\n attrs = AttributesNSImpl({}, {})\n logger.startElementNS((None, u'log'), u'log', attrs)\n self._logger = logger\n self._output = output\n self._encoding = encoding\n return\n\n def write_entry(self, level, msg):\n \"\"\"\n Write a log entry to the logger\n level - the level of the entry\n msg - the text of the entry. Must be a Unicode object\n \"\"\"\n #Note: in a real application, I would use ISO 8601 for the date\n #asctime used here for simplicity\n now = time.asctime(time.localtime())\n attr_vals = {\n (None, u'date'): now,\n (None, u'level'): LOG_LEVELS[level],\n }\n attr_qnames = {\n (None, u'date'): u'date',\n (None, u'level'): u'level',\n }\n attrs = AttributesNSImpl(attr_vals, attr_qnames)\n self._logger.startElementNS((None, u'entry'), u'entry', attrs)\n self._logger.characters(msg)\n self._logger.endElementNS((None, u'entry'), u'entry')\n return\n\n def close(self):\n \"\"\"\n Clean up the logger object\n \"\"\"\n self._logger.endElementNS((None, u'log'), u'log')\n self._logger.endDocument()\n return\n\nif __name__ == \"__main__\":\n #Test it out\n import sys\n xl = xml_logger(sys.stdout, 'utf-8')\n xl.write_entry(2, u\"Vanilla log entry\")\n xl.close() \n\n\nYou'll probably want to look at the rest of the article I got that from at http://www.xml.com/pub/a/2003/03/12/py-xml.html.\n",
"I think I have your poison :\nhttp://sourceforge.net/projects/xmlite\nCheers\n",
"Some years ago I used MarkupWriter from 4suite\n\nGeneral-purpose utility class for generating XML (may eventually be\nexpanded to produce more output types)\n\nSample usage:\n\nfrom Ft.Xml import MarkupWriter\nwriter = MarkupWriter(indent=u\"yes\")\nwriter.startDocument()\nwriter.startElement(u'xsa')\nwriter.startElement(u'vendor')\n#Element with simple text (#PCDATA) content\nwriter.simpleElement(u'name', content=u'Centigrade systems')\n#Note writer.text(content) still works\nwriter.simpleElement(u'email', content=u\"info@centigrade.bogus\")\nwriter.endElement(u'vendor')\n#Element with an attribute\nwriter.startElement(u'product', attributes={u'id': u\"100\\u00B0\"})\n#Note writer.attribute(name, value, namespace=None) still works\nwriter.simpleElement(u'name', content=u\"100\\u00B0 Server\")\n#XML fragment\nwriter.xmlFragment('<version>1.0</version><last-release>20030401</last-release>')\n#Empty element\nwriter.simpleElement(u'changes')\nwriter.endElement(u'product')\nwriter.endElement(u'xsa')\nwriter.endDocument()\n\nNote on the difference between 4Suite writers and printers\nWriter - module that exposes a broad public API for building output\n bit by bit\nPrinter - module that simply takes a DOM and creates output from it\n as a whole, within one API invokation\n\n\nRecently i hear a lot about how lxml is great, but I don't have first-hand experience, and I had some fun working with gnosis.\n",
"xml.etree.cElementTree, included in the default distribution of CPython since 2.5. Lightning fast for both reading and writing XML.\n"
] | [
15,
2,
0,
-4
] | [
"I've always had good results with lxml. It's a pain to install, as it's mostly a wrapper around libxml2, but lxml.etree tree objects have a .write() method that takes a file-like object to stream to.\nfrom lxml.etree import XML\n\ntree = XML('<root><a><b/></a></root>')\ntree.write(your_file_object)\n\n",
"Second vote for ElementTree (cElementTree is a C implementation that is a little faster, like cPickle vs pickle). There's some short example code here that you can look at to give you an idea of how it works: http://effbot.org/zone/element-index.htm\n(this is Fredrik Lundh, who wrote the module in the first place. It's so good it got drafted into the standard library with 2.5 :-) )\n"
] | [
-1,
-2
] | [
"python",
"streaming",
"xml"
] | stackoverflow_0000093710_python_streaming_xml.txt |
Q:
How to load a python module into a fresh interactive shell in Komodo?
When using PyWin I can easily load a python file into a fresh interactive shell and I find this quite handy for prototyping and other exploratory tasks.
I would like to use Komodo as my python editor, but I haven't found a replacement for PyWin's ability to restart the shell and reload the current module. How can I do this in Komodo?
It is also very important to me that when I reload I get a fresh shell. I would prefer it if my previous interactions are in the shell history, but it is more important to me that the memory be isolated from the previous versions and attempts.
A:
I use Komodo Edit, which might be a little less sophisticated than full Komodo.
I create a "New Command" with %(python) -i %f as the text of the command. I have this run in a "New Console". I usually have the starting directory as %p, the top of the project directory.
The -i option runs the file and drops into interactive Python.
| How to load a python module into a fresh interactive shell in Komodo? | When using PyWin I can easily load a python file into a fresh interactive shell and I find this quite handy for prototyping and other exploratory tasks.
I would like to use Komodo as my python editor, but I haven't found a replacement for PyWin's ability to restart the shell and reload the current module. How can I do this in Komodo?
It is also very important to me that when I reload I get a fresh shell. I would prefer it if my previous interactions are in the shell history, but it is more important to me that the memory be isolated from the previous versions and attempts.
| [
"I use Komodo Edit, which might be a little less sophisticated than full Komodo.\nI create a \"New Command\" with %(python) -i %f as the text of the command. I have this run in a \"New Console\". I usually have the starting directory as %p, the top of the project directory.\nThe -i option runs the file and drops into interactive Python.\n"
] | [
5
] | [] | [] | [
"interpreter",
"komodo",
"python",
"shell"
] | stackoverflow_0000097513_interpreter_komodo_python_shell.txt |
Q:
What does BlazeDS Livecycle Data Services do, that something like PyAMF or RubyAMF not do?
I'm doing a tech review and looking at AMF integration with various backends (Rails, Python, Grails etc).
Lots of options are out there, question is, what do the Adobe products do (BlazeDS etc) that something like RubyAMF / pyAMF don't?
A:
Other than NIO (RTMP) channels, LCDS also includes the "data management" features.
Using this feature, you basically implement, in an ActionScript class, a CRUD-like interface defined by LCDS, and you get:
automatic progressive list loading (large lists/datagrids load while scrolling)
automatic CRUD management (you get the object locally in Flash, modify it, send it back, and the DB gets updated automatically)
features for conflict resolution (if multiple users try to update the same record at the same time)
if I remember correctly, also some improved integration with the LiveCycle ES workflow engine
IMO, it can be very fast to develop this way, but only if you have basic requirements and a simple architecture (forget SOA, which otherwise works so well with Flex). I'm fine with BlazeDS.
A:
The data management features for LCDS described here are certainly valid; however, I believe they do not let you actually develop a solution faster. A developer still has to write ALL the data access code: query execution, extracting data from datareaders into value objects. ALL of this has been solved a dozen times with code generators. For instance, the data management approach in WebORB for Java (much like in WebORB for .NET and PHP) is based on code generation which creates code for both the client side AND the server side. You get all the ActionScript APIs out of the code generator to do full CRUD.
Additionally, WebORB provides video streaming and real-time messaging features and goes WAY beyond what both BlazeDS and LCDS offer combined, especially considering that the product is free. Just google it.
A:
Adobe has two products: Livecycle Data Services ES (LCDS) and BlazeDS. BlazeDS contains a subset of LCDS features and was made open source. Unfortunately NIO channels (RTMP NIO/HTTP) and the DataManagement features are implemented only in LCDS, not BlazeDS.
BlazeDS can be used only to integrate Flex with Java backend. It offers not only remoting services using AMF serialization (as RubyAMF) but also messaging and collaboration features - take a look at this link (http://livedocs.adobe.com/blazeds/1/blazeds_devguide/help.html?content=lcoverview_3.html). Also I suppose that the support is better compared with RubyAMF/pyAMF.
If your backend is Java and you want to use only a free product, you can also use GraniteDS or WebORB (BlazeDS competitors).
A:
Good question. I'm not a Ruby guy (I use Java with Flex), but what I believe differentiates BlazeDS from the commercial LiveCycle DS is:
Streaming protocol support (RTMP) - competition for Comet and such, delivering video
Some advanced stuff for Hibernate detached objects and large resultset caching that I don't fully understand or need
Support?
There might be others, but those are the ones I know off the top of my head.
| What does BlazeDS Livecycle Data Services do, that something like PyAMF or RubyAMF not do? | I'm doing a tech review and looking at AMF integration with various backends (Rails, Python, Grails etc).
Lots of options are out there, question is, what do the Adobe products do (BlazeDS etc) that something like RubyAMF / pyAMF don't?
| [
"Other than NIO (RTMP) channels, LCDS include also the \"data management\" features. \nUsing this feature, you basically implement, in an ActionScript class, a CRUD-like interface defined by LCDS, and you get:\n\nautomatic progressive list loading (large lists/datagrids loads while scrolling)\nautomatic crud management (you get object locally in flash, modify it, send it back and DB will get updated automatically)\nfeature for conflict resolution (if multiple user try to updated the same record at the same time)\nif I remember well, also some improved integration with the LiveCycle ES workflow engine\n\nIMO, it can be very fast to develop this way, but only if you have only basic requirements and a simple architecture (forget SOA, that otherwise works so well with Flex). I'm fine with BlazeDS.\n",
"The data management features for LCDS described here are certainly valid, however I believe they do not let you actually develop a solution faster. A developer still has to write ALL the data access code, query execution, extracting data from datareaders into value objects. ALL of this has been solved a dozen of times with code generators. For instance the data management approach in WebORB for Java (much like in WebORB for .NET and PHP) is based on code generation which creates code for both client side AND server-side. You get all the ActionScript APIs out of the code generator to do full CRUD. \nAdditionally, WebORB provides video streaming and real-time messaging features and goes WAY beyond what both BlazeDS and LCDS offer combined, especially considering that the product is free. Just google it.\n",
"Adobe has two products: Livecycle Data Services ES (LCDS) and BlazeDS. BlazeDS contains a subset of LCDS features and was made open source. Unfortunately NIO channels (RTMP NIO/HTTP) and the DataManagement features are implemented only in LCDS, not BlazeDS.\nBlazeDS can be used only to integrate Flex with Java backend. It offers not only remoting services using AMF serialization (as RubyAMF) but also messaging and collaboration features - take a look at this link (http://livedocs.adobe.com/blazeds/1/blazeds_devguide/help.html?content=lcoverview_3.html). Also I suppose that the support is better compared with RubyAMF/pyAMF.\nIf your backend is JAVA and you want to use only a free product you can also use GraniteDS or WebORB (BlazeDS competitors)\n",
"Good question. I'm not a ruby guy (i use java with flex), but what I believe differentiates blazeds vs commercial livecycle ds is\n\nStreaming protocol support (rtmp) - competition for comet and such, delivering video\nSome advanced stuff for hibernate detached objects and large resultset caching that I don't fully understand or need\n\n\nsupport?\nMight be others but those are the ones I know off the top of my head.\n\n\n"
] | [
3,
3,
2,
1
] | [] | [] | [
"apache_flex",
"blazeds",
"python",
"ruby",
"ruby_on_rails"
] | stackoverflow_0000077198_apache_flex_blazeds_python_ruby_ruby_on_rails.txt |