| content | title | question | answers | answers_scores | non_answers | non_answers_scores | tags | name |
|---|---|---|---|---|---|---|---|---|
| string (85 to 101k chars) | string (0 to 150 chars) | string (15 to 48k chars) | sequence | sequence | sequence | sequence | sequence | string (35 to 137 chars) |
Q:
How do I make a menu that does not require the user to press [enter] to make a selection?
I've got a menu in Python. That part was easy. I'm using raw_input() to get the selection from the user.
The problem is that raw_input (and input) require the user to press Enter after they make a selection. Is there any way to make the program act immediately upon a keystroke? Here's what I've got so far:
import sys
print """Menu
1) Say Foo
2) Say Bar"""
answer = raw_input("Make a selection> ")
if "1" in answer: print "foo"
elif "2" in answer: print "bar"
It would be great to have something like
print menu
while lastKey == "":
lastKey = check_for_recent_keystrokes()
if "1" in lastKey: #do stuff...
A:
On Windows:
import msvcrt
answer=msvcrt.getch()
A:
On Linux:
set raw mode
select and read the keystroke
restore normal settings
import sys
import select
import termios
import tty
def getkey():
old_settings = termios.tcgetattr(sys.stdin)
tty.setraw(sys.stdin.fileno())
select.select([sys.stdin], [], [], 0)
answer = sys.stdin.read(1)
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_settings)
return answer
print """Menu
1) Say Foo
2) Say Bar"""
answer=getkey()
if "1" in answer: print "foo"
elif "2" in answer: print "bar"
A:
Wow, that took forever. Ok, here's what I've ended up with
#!C:\python25\python.exe
import msvcrt
print """Menu
1) Say Foo
2) Say Bar"""
while 1:
char = msvcrt.getch()
if char == chr(27): #escape
break
if char == "1":
print "foo"
break
if char == "2":
print "Bar"
break
It fails hard using IDLE, the python...thing...that comes with python. But once I tried it in DOS (er, CMD.exe), as a real program, then it ran fine.
No one try it in IDLE, unless you have Task Manager handy.
I've already forgotten how I lived with menus that aren't super-instant responsive.
A:
The reason msvcrt fails in IDLE is that IDLE is not attached to the Windows console that msvcrt reads from, whereas when you run the program natively in cmd.exe it works nicely. Your program blows up on Mac and Linux terminals for the same kind of reason: msvcrt is a Windows-only module.
But I guess if you're going to be using this specifically for windows, more power to ya.
answers_scores: [10, 9, 4, 0] | tags: [python] | name: stackoverflow_0000001829_python.txt
Q:
File size differences after copying a file to a server via FTP
I have created a PHP script to update a live web server from a local directory.
I'm migrating the script into Python. It works fine for the most part, but after a PUT command, the size of the file appears to change. Thus, the size of the file is different from that of the file on the server.
Once I download the file again from the FTP server, the only difference is the CR/LF marks. This annoys me because the same script compares the sizes of the files to decide what to update. Also, in case it means anything, the script works perfectly in PHP via ftp_put.
from ftplib import FTP
ftpserver = "myserver"
ftpuser = "myuser"
ftppass = "mypwd"
locfile = "g:/test/style.css"
ftpfile = "/temp/style.css"
try:
ftp = FTP(ftpserver, ftpuser, ftppass)
except:
exit ("Cannot connect")
f = open (locfile, "r")
try:
ftp.delete (ftpfile)
except:
pass
# ftp.sendcmd ("TYPE I")
# ftp.storlines("STOR %s" % ftpfile, f)
ftp.storbinary("STOR %s" % ftpfile, f)
f.close()
ftp.dir (ftpfile)
ftp.quit()
Any suggestions?
A:
Do you need to open the locfile in binary using rb?
f = open (locfile, "rb")
Text mode ("r") translates line endings as the file is read, so the bytes you upload aren't the bytes on disk; binary mode avoids the CR/LF differences you're seeing.
A:
Well, if you look at the properties of your file in Windows or a *nix environment, you will notice two sizes. One is the size on disk, and one is the actual size. The size on disk is the number of bytes taken up by whole sectors on your hard disk. That is because with most modern file systems two files cannot share a sector, so if your file fills up half of a sector, the whole sector is marked as used.
So you might be comparing the on-disk size to the actual file size on the FTP server, or vice versa.
A:
Small files take up a whole node on the file system whatever the size is.
My host tends to report all small files as 4KB in ftp but gives an accurate size in a shell so it might be a 'feature' common to ftp clients.
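If the underlying goal is a reliable size comparison, here is a minimal sketch that asks the server directly, reusing the question's variables (it assumes the server supports the SIZE command, and switches to binary mode first so the reported size is a real byte count):
import os
from ftplib import FTP

ftp = FTP(ftpserver, ftpuser, ftppass)
ftp.sendcmd("TYPE I")  # binary mode, so SIZE reports actual bytes
remote_size = ftp.size(ftpfile)  # ftplib's wrapper around the SIZE command
local_size = os.path.getsize(locfile)
if remote_size != local_size:
    print "Sizes differ: local %d, remote %d" % (local_size, remote_size)
ftp.quit()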
answers_scores: [17, 3, 0] | tags: [ftp, ftplib, php, python, webserver] | name: stackoverflow_0000002311_ftp_ftplib_php_python_webserver.txt
Q:
Programmatically talking to a Serial Port in OS X or Linux
I have a Prolite LED sign that I'd like to set up to show scrolling search queries from Apache logs and other fun statistics. The problem is, my G5 does not have a serial port, so I have to use a USB-to-serial dongle. It shows up as /dev/cu.usbserial and /dev/tty.usbserial.
When I do this, everything seems to be hunky-dory:
stty -f /dev/cu.usbserial
speed 9600 baud;
lflags: -icanon -isig -iexten -echo
iflags: -icrnl -ixon -ixany -imaxbel -brkint
oflags: -opost -onlcr -oxtabs
cflags: cs8 -parenb
Everything also works when I use the serial port tool to talk to it.
If I run this piece of code while the above-mentioned serial port tool is open, everything also works. But as soon as I disconnect the tool, the connection gets lost.
#!/usr/bin/python
import serial
ser = serial.Serial('/dev/cu.usbserial', 9600, timeout=10)
ser.write("<ID01><PA> \r\n")
read_chars = ser.read(20)
print read_chars
ser.close()
So the question is, what magicks do I need to perform to start talking to the serial port without the serial port tool? Is that a permissions problem? Also, what's the difference between /dev/cu.usbserial and /dev/tty.usbserial?
Nope, no serial numbers. The thing is, the problem persists even when sudo-running the Python script, and the only thing that makes it go through is opening the connection in the GUI tool that I mentioned.
A:
/dev/cu.xxxxx is the "callout" device, it's what you use when you establish a connection to the serial device and start talking to it. /dev/tty.xxxxx is the "dialin" device, used for monitoring a port for incoming calls for e.g. a fax listener.
A:
Have you tried watching the traffic between the GUI and the serial port to see if there is some kind of special command being sent across? Also, just curious: Python is sending ASCII and not UTF-8 or something else, right? The reason I ask is that I noticed your quote characters change between the strings, and in some languages that actually is the difference between ASCII and UTF-8.
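One common cause worth ruling out, though this is an assumption on my part rather than something the thread confirms: GUI serial tools typically assert the DTR/RTS handshake lines while they hold the port open, which would explain why things only work while the tool is connected. pyserial can assert them explicitly:
import serial

ser = serial.Serial('/dev/cu.usbserial', 9600, timeout=10)
ser.setDTR(True)  # assert Data Terminal Ready
ser.setRTS(True)  # assert Request To Send
ser.write("<ID01><PA> \r\n")
print ser.read(20)
ser.close()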
answers_scores: [5, 0] | tags: [linux, macos, python, serial_port] | name: stackoverflow_0000003976_linux_macos_python_serial_port.txt
Q:
Get a preview JPEG of a PDF on Windows?
I have a cross-platform (Python) application which needs to generate a JPEG preview of the first page of a PDF.
On the Mac I am spawning sips. Is there something similarly simple I can do on Windows?
A:
ImageMagick delegates the PDF->bitmap conversion to Ghostscript anyway, so here's a command you can use (it's based on the actual command listed by the ps:alpha delegate in ImageMagick, just adjusted to use JPEG as output):
gs -q -dQUIET -dPARANOIDSAFER -dBATCH -dNOPAUSE -dNOPROMPT \
-dMaxBitmap=500000000 -dLastPage=1 -dAlignToPixels=0 -dGridFitTT=0 \
-sDEVICE=jpeg -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -r72x72 \
-sOutputFile=$OUTPUT -f$INPUT
where $OUTPUT and $INPUT are the output and input filenames. Adjust the 72x72 to whatever resolution you need. (Obviously, strip out the backslashes if you're writing out the whole command as one line.)
This is good for two reasons:
You don't need to have ImageMagick installed anymore. Not that I have anything against ImageMagick (I love it to bits), but I believe in simple solutions.
ImageMagick does a two-step conversion. First PDF->PPM, then PPM->JPEG. This way, the conversion is one-step.
Other things to consider: with the files I've tested, PNG compresses better than JPEG. If you want to use PNG, change the -sDEVICE=jpeg to -sDEVICE=png16m.
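Since the question is about a cross-platform Python application, here is a minimal sketch of wrapping that command with subprocess (the executable name is an assumption: usually gs on Mac/Linux and gswin32c on Windows):
import subprocess

def pdf_to_jpeg(pdf_path, jpeg_path, resolution=72, gs="gs"):
    # Mirrors the Ghostscript command above; pass gs="gswin32c" on Windows.
    subprocess.check_call([
        gs, "-q", "-dQUIET", "-dPARANOIDSAFER", "-dBATCH", "-dNOPAUSE",
        "-dNOPROMPT", "-dMaxBitmap=500000000", "-dLastPage=1",
        "-dAlignToPixels=0", "-dGridFitTT=0",
        "-sDEVICE=jpeg", "-dTextAlphaBits=4", "-dGraphicsAlphaBits=4",
        "-r%dx%d" % (resolution, resolution),
        "-sOutputFile=%s" % jpeg_path,
        "-f%s" % pdf_path,
    ])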
A:
You can use ImageMagick's convert utility for this; see some examples in http://studio.imagemagick.org/pipermail/magick-users/2002-May/002636.html:
convert taxes.pdf taxes.jpg
Will convert a two-page PDF file into 2 JPEG files: taxes.jpg.0 and
taxes.jpg.1
I can also convert these JPEGS to a thumbnail as follows:
convert -size 120x120 taxes.jpg.0 -geometry 120x120 +profile '*' thumbnail.jpg
I can even convert the PDF directly to a jpeg thumbnail as follows:
convert -size 120x120 taxes.pdf -geometry 120x120 +profile '*' thumbnail.jpg
This will result in a thumbnail.jpg.0 and thumbnail.jpg.1 for the two
pages.
A:
Is the PC likely to have Acrobat installed? I think Acrobat installs a shell extension so previews of the first page of a PDF document appear in Windows Explorer's thumbnail view. You can get thumbnails yourself via the IExtractImage COM API, which you'll need to wrap. VBAccelerator has an example in C# that you could port to Python.
answers_scores: [44, 16, 5] | tags: [image, pdf, python, windows] | name: stackoverflow_0000000502_image_pdf_python_windows.txt
Q:
Best way to abstract season/show/episode data
Basically, I've written an API to www.thetvdb.com in Python. The current code can be found here.
It grabs data from the API as requested, and has to store the data somehow, and make it available by doing:
print tvdbinstance[1][23]['episodename'] # get the name of episode 23 of season 1
What is the "best" way to abstract this data within the Tvdb() class?
I originally used an extended Dict() that automatically created sub-dicts (so you could do x[1][2][3][4] = "something" without having to do if x[1].has_key(2): x[1][2] = {} and so on)
Then I just stored the data by doing self.data[show_id][season_number][episode_number][attribute_name] = "something"
This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not (so I couldn't raise the season_not_found exception).
Currently it's using four classes: ShowContainer, Show, Season and Episode. Each one is a very basic dict, to which I can easily add extra functionality (the search() function on Show(), for example). Each has a __setitem__, __getitem__ and has_key.
This works mostly fine. I can check in Show if it has that season in its self.data dict and, if not, raise season_not_found. I can also check in Season() if it has that episode, and so on.
The problem now is it's presenting itself as a dict, but doesn't have all the functionality, and because I'm overriding the __getitem__ and __setitem__ functions, it's easy to accidentally recursively call __getitem__ (so I'm not sure if extending the Dict class will cause problems).
The other slight problem is adding data into the dict is a lot more work than the old Dict method (which was self.data[seas_no][ep_no]['attribute'] = 'something'). See _setItem and _setData. It's not too bad, since it's currently only a read-only API interface (so the users of the API should only ever retrieve data, not add more), but it's hardly... Elegant.
I think the series-of-classes system is probably the best way, but does anyone have a better idea for storing the data? And would extending the ShowContainer/etc classes with Dict cause problems?
A:
OK, what you need is classobj from the new module. That would allow you to construct exception classes dynamically (classobj takes a string as an argument for the class name).
import new
myexc=new.classobj("ExcName",(Exception,),{})
i=myexc("This is the exc msg!")
raise i
this gives you:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
__main__.ExcName: This is the exc msg!
remember that you can always get the class name through:
self.__class__.__name__
So, after some string mangling and concatenation, you should be able to obtain appropriate exception class name and construct a class object using that name and then raise that exception.
P.S. - you can also raise strings, but this is deprecated.
raise(self.__class__.__name__+"Exception")
A:
Why not use SQLite? There is good support in Python and you can write SQL queries to get the data out. Here are the Python docs for sqlite3.
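A minimal sqlite3 sketch of that suggestion (the table layout here is my own assumption):
import sqlite3

conn = sqlite3.connect("tv.db")
conn.execute("""CREATE TABLE IF NOT EXISTS episodes
                (show TEXT, season INTEGER, episode INTEGER, name TEXT)""")
conn.execute("INSERT INTO episodes VALUES (?, ?, ?, ?)",
             ("some show", 1, 23, "Something"))
conn.commit()
for row in conn.execute("SELECT episode, name FROM episodes"
                        " WHERE show = ? AND season = ?", ("some show", 1)):
    print "episode %d: %s" % row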
If you don't want to use SQLite you could do an array of dicts.
episodes = []
episodes.append({'season':1, 'episode': 2, 'name':'Something'})
episodes.append({'season':1, 'episode': 2, 'name':'Something', 'actors':['Billy Bob', 'Sean Penn']})
That way you can add metadata to any record and search it very easily:
season_1 = [e for e in episodes if e['season'] == 1]
billy_bob = [e for e in episodes if 'actors' in e and 'Billy Bob' in e['actors']]
for episode in billy_bob:
print "Billy bob was in Season %s Episode %s" % (episode['season'], episode['episode'])
A:
I have done something similar in the past and used an in-memory XML document as a quick and dirty hierarchical database for storage. You can store each show/season/episode as an element (nested appropriately) and attributes of these things as xml attributes on the elements. Then you can use XQuery to get info back out.
NOTE: I'm not a Python guy so I don't know what your xml support is like.
NOTE 2: You'll want to profile this because it'll be bigger and slower than the solution you've already got. Likely enough if you are doing some high-volume processing then XML is probably not going to be your friend.
A:
I don't get this part here:
This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not (so I couldn't raise the season_not_found exception)
There is a way to do it; it's called in:
>>>x={}
>>>x[1]={}
>>>x[1][2]={}
>>>x
{1: {2: {}}}
>>> 2 in x[1]
True
>>> 3 in x[1]
False
what seems to be the problem with that?
A:
Bartosz: to clarify "This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not"
x['some show'][3][24] would return season 3, episode 24 of "some show". If there was no season 3, I want the pseudo-dict to raise tvdb_seasonnotfound, if "some show" doesn't exist, then raise tvdb_shownotfound
The current system of a series of classes, each with a __getitem__ - Show checks if self.seasons.has_key(requested_season_number), the Season class checks if self.episodes.has_key(requested_episode_number) and so on.
It works, but there seems to be a lot of repeated code (each class is basically the same, but raises a different error)
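For what it's worth, the repetition described above can be factored into one small base class; a sketch (tvdb_episodenotfound is a hypothetical name, by analogy with the two exceptions mentioned):
class tvdb_shownotfound(KeyError): pass
class tvdb_seasonnotfound(KeyError): pass
class tvdb_episodenotfound(KeyError): pass  # hypothetical name

class CheckedContainer(dict):
    # Subclasses set this to the exception raised for a missing key.
    missing_error = KeyError

    def __getitem__(self, key):
        try:
            return dict.__getitem__(self, key)  # avoids recursive __getitem__
        except KeyError:
            raise self.missing_error(key)

class ShowContainer(CheckedContainer):
    missing_error = tvdb_shownotfound

class Show(CheckedContainer):
    missing_error = tvdb_seasonnotfound

class Season(CheckedContainer):
    missing_error = tvdb_episodenotfound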
answers_scores: [7, 4, 0, 0, 0] | tags: [data_structures, python] | name: stackoverflow_0000005966_data_structures_python.txt
Q:
Python Regular Expressions to implement string unescaping
I am trying to implement string unescaping with Python regex and backreferences, and it doesn't seem to want to work very well. I'm sure it's something I'm doing wrong but I can't figure out what...
>>> import re
>>> mystring = r"This is \n a test \r"
>>> p = re.compile( "\\\\(\\S)" )
>>> p.sub( "\\1", mystring )
'This is n a test r'
>>> p.sub( "\\\\\\1", mystring )
'This is \\n a test \\r'
>>> p.sub( "\\\\1", mystring )
'This is \\1 a test \\1'
I'd like to replace \\[char] with \[char], but backreferences in Python don't appear to follow the same rules they do in every other implementation I've ever used. Could someone shed some light?
A:
Isn't that what Anders' second example does?
In 2.5 there's also a string-escape encoding you can apply:
>>> mystring = r"This is \n a test \r"
>>> mystring.decode('string-escape')
'This is \n a test \r'
>>> print mystring.decode('string-escape')
This is
a test
>>>
A:
Well, I think you might have missed the r or miscounted the backslashes...
"\\n" == r"\n"
>>> import re
>>> mystring = r"This is \\n a test \\r"
>>> p = re.compile( r"[\\][\\](.)" )
>>> print p.sub( r"\\\1", mystring )
This is \n a test \r
>>>
Which, if I understood is what was requested.
I suspect the more common request is this:
>>> d = {'n':'\n', 'r':'\r', 'f':'\f'}
>>> p = re.compile(r"[\\]([nrfv])")
>>> print p.sub(lambda mo: d[mo.group(1)], mystring)
This is \
a test \
>>>
The interested student should also read Ken Thompson's "Reflections on Trusting Trust", wherein our hero uses a similar example to explain the perils of trusting compilers you haven't bootstrapped from machine code yourself.
A:
The idea is that I'll read in an escaped string, and unescape it (a feature notably lacking from Python, which you shouldn't need to resort to regular expressions for in the first place). Unfortunately I'm not being tricked by the backslashes...
Another illustrative example:
>>> mystring = r"This is \n ridiculous"
>>> print mystring
This is \n ridiculous
>>> p = re.compile( r"\\(\S)" )
>>> print p.sub( 'bloody', mystring )
This is bloody ridiculous
>>> print p.sub( r'\1', mystring )
This is n ridiculous
>>> print p.sub( r'\\1', mystring )
This is \1 ridiculous
>>> print p.sub( r'\\\1', mystring )
This is \n ridiculous
What I'd like it to print is
This is
ridiculous
A:
You are being tricked by Python's representation of the result string. The Python expression:
'This is \\n a test \\r'
represents the string
This is \n a test \r
which is I think what you wanted. Try adding 'print' in front of each of your p.sub() calls to print the actual string returned instead of a Python representation of the string.
>>> mystring = r"This is \n a test \r"
>>> mystring
'This is \\n a test \\r'
>>> print mystring
This is \n a test \r
A:
Mark: his second example requires every escaped character to be put into a dict up front, and it generates a KeyError if the escape sequence happens not to be in it. It will die on anything but the three characters provided (give \v a try), and enumerating every possible escape sequence every time you want to unescape a string (or keeping a global table) is a really bad solution. In PHP terms, that's using preg_replace_callback() with a lambda instead of preg_replace(), which is utterly unnecessary in this situation.
I'm sorry if I'm coming off as a dick about it, I'm just utterly frustrated with Python. This is supported by every other regular expression engine I've ever used, and I can't understand why this wouldn't work.
Thank you for responding; the string.decode('string-escape') function is precisely what I was looking for initially. If someone has a general solution to the regex backreference problem, feel free to post it and I'll accept that as an answer as well.
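For the record, a general regex-based version can combine the lambda idea with the string-escape codec, so every escape Python itself understands is handled without a hand-built table (a sketch, Python 2 only; unknown escapes are left untouched):
import re

mystring = r"This is \n a test \r"
p = re.compile(r"\\(\S)")
# Each match is a two-character sequence like "\n"; let Python decode it.
print p.sub(lambda mo: mo.group(0).decode('string-escape'), mystring)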
answers_scores: [10, 3, 1, 0, 0] | tags: [backreference, python, regex] | name: stackoverflow_0000013791_backreference_python_regex.txt
Q:
What's the best way to distribute python command-line tools?
My current setup.py script works okay, but it installs tvnamer.py (the tool) as tvnamer.py into site-packages or somewhere similar..
Can I make setup.py install tvnamer.py as tvnamer, and/or is there a better way of installing command-line applications?
A:
Try the entry_points.console_scripts parameter in the setup() call. As described in the setuptools docs, this should do what I think you want.
To reproduce here:
from setuptools import setup
setup(
# other arguments here...
entry_points = {
'console_scripts': [
'foo = package.module:func',
'bar = othermodule:somefunc',
],
}
)
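Applied to the question, that would look something like this (assuming tvnamer.py exposes a main() function, which is an assumption about the script):
from setuptools import setup

setup(
    # other arguments here...
    entry_points = {
        'console_scripts': [
            'tvnamer = tvnamer:main',  # installs a plain `tvnamer` command
        ],
    }
)
On install, setuptools generates a tvnamer wrapper script (tvnamer.exe on Windows) on the PATH that imports the module and calls main().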
answers_scores: [38] | tags: [command_line, packaging, python] | name: stackoverflow_0000017893_command_line_packaging_python.txt
Q:
Introducing Python
The company I used to work with has two developers working full-time, and a handful of freelancers. They're in the process of hiring a new lead developer to try to bring order and management to the development.
But, currently, one of the developers has seen the light of Django (the company has only developed in PHP to date) while the other developer is concerned that introducing a new language (Python) is a bad idea right now.
How should they approach introducing this new technology? Obviously with only one of the developers actually knowing Python, there will be no redundancy when that dev is away or leaves the company.
Should they bother to introduce Python, or should they look for PHP-only solutions until such a time when the team actually have more than one Pythonion? Without a team leader, the decisions are having to fall to them.
A:
I recently introduced Python to my company, which does consulting work for the Post Office. I did this by waiting until there was a project for which I would be the only programmer, then getting permission to do this new project in Python. I then did another small project in Python with similarly impressive results. In addition, I used Python for all of my small throwaway assignments ("can you parse the stats in these files into a CSV file organized by date and site?", etc) and had a quick turnaround time on all of them.
I also evangelized Python a bit; I went out of my way to NOT be obnoxious about it, but I'd occasionally describe why I liked it so much, talked about the personal projects I use it for in my free time and why it's awesome for me, etc.
Eventually we started another project and I convinced everyone to use Python for it. I took care to point everyone to a lot of documentation, including the specific webpages relating to what they were working on, and every time they had a question, I'd explain how to do things properly by explaining the Pythonic approach to things, etc.
This has worked really well. However, this might be somewhat different than what you're describing. In my case I started with moderately small projects and Python is only being used for new projects. Also, none of my co-workers were really Perl or PHP gurus; they all knew those languages and had been using them for awhile, but it didn't take much effort for them to become more productive in Python than they'd been before.
So if you're talking about new projects with people who currently use PHP but aren't super-experts and don't love that language, then I think switching to Python is a no-brainer. However, if you're talking about working with a large existing PHP code base with a lot of very experienced PHP programmers who are happy with their current setup, then switching languages is probably not a good idea. You're probably somewhere in between, so you'll have to weigh the tradeoffs; hopefully my answer will help you do that.
A:
If the mandate of the new lead is to put the house in order, the current situation should likely be simplified as much as possible prior. If I had to bring things to order, I wouldn't want to have to manage an ongoing language conversion project on top of everything else, or at least I'd like some choice when initiating the project. When making your recommendation, did you think about the additional managerial complexity that coming into the middle of a conversion would entail?
A:
@darkdog:
Using a new language in production code is about more than easy syntax and high-level capability. You want to be familiar with core APIs and feel like you can fix something through logic instead of having to comb through the documentation.
I'm not saying transitioning to Python would be a bad idea for this company, but I'm with John--keep things simple during the transition. The new lead will appreciate having a say in such decisions.
If you'd really, really, really like to introduce Python, consider writing some extensions or utilities in straight-up Python or in the framework. You won't be upsetting your core initiatives, so it will be a low/no-risk opportunity to prove the merits of a switch.
A:
I think the language itself is not an issue here, as python is really nice high level language with good and easy to find, thorough documentation.
From what I've seen, the Django framework is also a great toolkit for web development, giving much the same developer performance boost Rails is touted to give.
The real issue is at the maintenance and management level.
How will this move fragment maintenance between PHP and Python code? Is there a need to migrate existing code from one platform to another? What problems in your current development workflow and frameworks will adopting Python and Django solve? And so on.
A:
It's really all about schedules. To me the break should come with a specific project. If you decide your direction is Django, then start new projects with that. Before you start a new project with a new language/framework, either make sure that you have scheduled time to get up to speed in this new direction, or get up to speed before using it on new projects.
I would avoid going with a tool of the month. Make sure you want it to be your direction and commit some time/resources to learning enough to make a good decision.
A:
Well, Python is a high-level language. It's not hard to learn, and if the developers already have programming knowledge it should be much easier to pick up. I like Django; I think it would be a worthwhile try.
A:
I don't think it's a matter of a programming language as such.
What is the proficiency level of PHP in the team you're talking about? Are they writing spaghetti code or using a structured framework like Zend? If it's the former, then I absolutely understand the guy's interest in Python and Django. If it's the latter, it's just hype.
A:
I love Python and Django, and use both to develop our core webapps.
That said, it's hard to make a business case for switching at this point. Specifically:
Any new platform is risky compared to staying with the tried and true
You'll have the developer fragmentation you mentioned
It's far easier to find PHP programmers than python programmers
Moreover, as other posters have mentioned, if the issue is more with spaghetti code than PHP itself, there are plenty of nice PHP frameworks that could be used to refactor the code.
That said, if this developer is excited about python, stopping them outright is probably demoralizing. My suggestion would be to encourage them to develop in python, but not the mission critical parts of the app. Instead they could write some utility scripts, some small internal application that needs doing, etc.
In conclusion: I don't recommend switching from PHP, but I do recommend accommodating the developer's interest in some way at work.
answers_scores: [15, 4, 2, 1, 1, 0, 0, 0] | tags: [php, python] | name: stackoverflow_0000019654_php_python.txt
Q:
How to check set of files conform to a naming scheme
I have a bunch of files (TV episodes, although that is fairly arbitrary) that I want to check match a specific naming/organisation scheme..
Currently I have three arrays of regexes: one for valid filenames, one for files missing an episode name, and one for valid paths.
Then I loop through each valid-filename regex; if one matches, the file is appended to a "valid" dict. If not, I do the same with the missing-ep-name regexes; if one of those matches, the file is appended to an "invalid" dict with an error code (2: 'missing episode name'); if neither matches, it gets added to "invalid" with the 'malformed name' error code.
The current code can be found here
I want to add a rule that checks for the presence of a folder.jpg file in each directory, but adding this would make the code substantially more messy in its current state..
How could I write this system in a more expandable way?
The rules it needs to check would be..
File is in the format Show Name - [01x23] - Episode Name.avi or Show Name - [01xSpecial02] - Special Name.avi or Show Name - [01xExtra01] - Extra Name.avi
If the filename is in the format Show Name - [01x23].avi, display it in a 'missing episode name' section of the output
The path should be in the format Show Name/season 2/the_file.avi (where season 2 should be the correct season number in the filename)
each Show Name/season 1/ folder should contain "folder.jpg"
Any ideas? While I'm trying to check TV episodes, this concept/code should be able to apply to many things..
The only thought I had was a list of dicts in the format:
checker = [
    {
        'name': 'valid files',
        'type': 'file',
        'function': check_valid,  # run check_valid() on all files
        'status': 0  # if it returns True, this is the status the file gets
    }
]
A:
I want to add a rule that checks for the presence of a folder.jpg file in each directory, but to add this would make the code substantially more messy in its current state..
This doesn't look bad. In fact your current code does it very nicely, and Sven mentioned a good way to do it as well:
Get a list of all the files
Check for "required" files
You would just have to add a list of required files to your dictionary:
checker = {
...
'required': ['file', 'list', 'for_required']
}
As for there being a better/extensible way to do this, I am not exactly sure. I could only really think of a way to possibly drop the "multiple" regular expressions and build off of Sven's idea of using a delimiter. So my strategy would be defining a dictionary as follows (I'm sorry, I don't know Python syntax and I'm a tad too lazy to look it up, but it should make sense; the /regex/ is shorthand for a regex):
check_dict = {
'delim' : /\-/,
'parts' : [ 'Show Name', 'Episode Name', 'Episode Number' ],
'patterns' : [/valid name/, /valid episode name/, /valid number/ ],
'required' : ['list', 'of', 'files'],
'ignored' : ['.*', 'hidden.txt'],
'start_dir': '/path/to/dir/to/test/'
}
Split the filename based on the delimiter.
Check each of the parts.
Because it's an ordered list, you can determine which parts are missing, and if a section doesn't match any pattern it is malformed. Here the parts and patterns have a 1-to-1 ratio; two arrays instead of a dictionary enforce the order.
Ignored and required files can be listed. The . and .. files should probably be ignored automatically. The user should be allowed to input "globs" which can be shell expanded. I'm thinking here of svn:ignore properties, but globbing is natural for listing files.
Here start_dir would default to the current directory, but if you wanted a single file to run automated testing of a bunch of directories this would be useful.
The real loose end here is the path template and along the same lines what path is required for "valid files". I really couldn't come up with a solid idea without writing one large regular expression and taking groups from it... to build a template. It felt a lot like writing a TextMate language grammar. But that starts to stray on the ease of use. The real problem was that the path template was not composed of parts, which makes sense but adds complexity.
Is this strategy in tune with what you were thinking of?
A:
Maybe you should take the approach of defaulting to "the filename is correct" and working from there to disprove that statement:
Given that you only allow filenames of the form 'show name', 'season number x episode number' and 'episode name', you know for certain that these items should be separated by a "-" (dash), so a correct filename has to contain two of those.
If that checks out, you can use your code to check that the show name matches the show name as seen in the parent's parent folder (case-insensitively, I assume) and that the season number matches the parent folder's numeric value (with or without an extra 0 prepended).
If, however, you don't see the correct number of dashes, you instantly know that something is wrong and can stop before running the rest of the tests.
And separately, you can check whether the folder.jpg file exists and take the necessary actions, or do that first and filter that file from the rest of the files in the folder.
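As a concrete illustration of the folder.jpg rule, here is a minimal sketch (it only assumes the Show Name/season N layout from the question):
import os

def missing_folder_images(start_dir):
    # Return the season directories that lack a folder.jpg.
    missing = []
    for dirpath, dirnames, filenames in os.walk(start_dir):
        is_season = os.path.basename(dirpath).lower().startswith("season")
        if is_season and "folder.jpg" not in filenames:
            missing.append(dirpath)
    return missing

for path in missing_folder_images("."):
    print "%s is missing folder.jpg" % path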
| How to check set of files conform to a naming scheme | I have a bunch of files (TV episodes, although that is fairly arbitrary) that I want to check match a specific naming/organisation scheme..
Currently: I have three arrays of regex, one for valid filenames, one for files missing an episode name, and one for valid paths.
Then, I loop through each valid-filename regex; if it matches, append it to a "valid" dict, if not, do the same with the missing-ep-name regexes, if it matches this I append it to an "invalid" dict with an error code (2: 'missing episode name'), if it matches neither, it gets added to invalid with the 'malformed name' error code.
The current code can be found here
I want to add a rule that checks for the presence of a folder.jpg file in each directory, but to add this would make the code substantially more messy in its current state.
How could I write this system in a more expandable way?
The rules it needs to check would be..
File is in the format Show Name - [01x23] - Episode Name.avi or Show Name - [01xSpecial02] - Special Name.avi or Show Name - [01xExtra01] - Extra Name.avi
If the filename is in the format Show Name - [01x23].avi, display it in a 'missing episode name' section of the output
The path should be in the format Show Name/season 2/the_file.avi (where season 2 should be the correct season number in the filename)
each Show Name/season 1/ folder should contain "folder.jpg"
Any ideas? While I'm trying to check TV episodes, this concept/code should be able to apply to many things.
The only thought I had was a list of dicts in the format:
checker = [
{
'name':'valid files',
'type':'file',
'function':check_valid, # runs check_valid() on all files
'status':0 # if it returns True, this is the status the file gets
}
| [
"\nI want to add a rule that checks for\n the presence of a folder.jpg file in\n each directory, but to add this would\n make the code substantially more messy\n in it's current state..\n\nThis doesn't look bad. In fact your current code does it very nicely, and Sven mentioned a good way to do it as well:\n\nGet a list of all the files\nCheck for \"required\" files\n\nYou would just have have add to your dictionary a list of required files:\nchecker = {\n ...\n 'required': ['file', 'list', 'for_required']\n}\n\nAs far as there being a better/extensible way to do this? I am not exactly sure. I could only really think of a way to possibly drop the \"multiple\" regular expressions and build off of Sven's idea for using a delimiter. So my strategy would be defining a dictionary as follows (and I'm sorry I don't know Python syntax and I'm a tad to lazy to look it up but it should make sense. The /regex/ is shorthand for a regex):\ncheck_dict = {\n 'delim' : /\\-/,\n 'parts' : [ 'Show Name', 'Episode Name', 'Episode Number' ],\n 'patterns' : [/valid name/, /valid episode name/, /valid number/ ],\n 'required' : ['list', 'of', 'files'],\n 'ignored' : ['.*', 'hidden.txt'],\n 'start_dir': '/path/to/dir/to/test/'\n}\n\n\nSplit the filename based on the delimiter.\nCheck each of the parts.\n\nBecause its an ordered list you can determine what parts are missing and if a section doesn't match any pattern it is malformed. Here the parts and patterns have a 1 to 1 ratio. Two arrays instead of a dictionary enforces the order.\nIgnored and required files can be listed. The . and .. files should probably be ignored automatically. The user should be allowed to input \"globs\" which can be shell expanded. I'm thinking here of svn:ignore properties, but globbing is natural for listing files.\nHere start_dir would be default to the current directory but if you wanted a single file to run automated testing of a bunch of directories this would be useful.\nThe real loose end here is the path template and along the same lines what path is required for \"valid files\". I really couldn't come up with a solid idea without writing one large regular expression and taking groups from it... to build a template. It felt a lot like writing a TextMate language grammar. But that starts to stray on the ease of use. The real problem was that the path template was not composed of parts, which makes sense but adds complexity.\nIs this strategy in tune with what you were thinking of?\n",
"maybe you should take the approach of defaulting to: \"the filename is correct\" and work from there to disprove that statement:\nwith the fact that you only allow filenames with: 'show name', 'season number x episode number' and 'episode name', you know for certain that these items should be separated by a \"-\" (dash) so you have to have 2 of those for a filename to be correct.\nif that checks out, you can use your code to check that the show name matches the show name as seen in the parent's parent folder (case insensitive i assume), the season number matches the parents folder numeric value (with or without an extra 0 prepended).\nif however you don't see the correct amount of dashes you instantly know that there is something wrong and stop before the rest of the tests etc.\nand separately you can check if the file folder.jpg exists and take the necessary actions. or do that first and filter that file from the rest of the files in that folder.\n"
] | [
2,
0
] | [] | [] | [
"naming",
"python",
"validation"
] | stackoverflow_0000019030_naming_python_validation.txt |
Q:
Date/time conversion using time.mktime seems wrong
>>> import time
>>> time.strptime("01-31-2009", "%m-%d-%Y")
(2009, 1, 31, 0, 0, 0, 5, 31, -1)
>>> time.mktime((2009, 1, 31, 0, 0, 0, 5, 31, -1))
1233378000.0
>>> 60*60*24 # seconds in a day
86400
>>> 1233378000.0 / 86400
14275.208333333334
time.mktime should return the number of seconds since the epoch. Since I'm giving it a time at midnight and the epoch is at midnight, shouldn't the result be evenly divisible by the number of seconds in a day?
A:
Short answer: Because of timezones.
The Epoch is in UTC.
For example, I'm on IST (Irish Standard Time) or UTC+1. time.mktime() is relative to my timezone, so on my system this refers to
>>> time.mktime((2009, 1, 31, 0, 0, 0, 5, 31, -1))
1233360000.0
Because you got the result 1233378000, that would suggest that you're 5 hours behind me
>>> (1233378000 - 1233360000) / (60*60)
5
Have a look at the time.gmtime() function which works off UTC.
A:
mktime(...)
mktime(tuple) -> floating point number
Convert a time tuple in local time to seconds since the Epoch.
local time... fancy that.
The time tuple:
The other representation is a tuple of 9 integers giving local time.
The tuple items are:
year (four digits, e.g. 1998)
month (1-12)
day (1-31)
hours (0-23)
minutes (0-59)
seconds (0-59)
weekday (0-6, Monday is 0)
Julian day (day in the year, 1-366)
DST (Daylight Savings Time) flag (-1, 0 or 1)
If the DST flag is 0, the time is given in the regular time zone;
if it is 1, the time is given in the DST time zone;
if it is -1, mktime() should guess based on the date and time.
Incidentally, we seem to be 6 hours apart:
>>> time.mktime((2009, 1, 31, 0, 0, 0, 5, 31, -1))
1233356400.0
>>> (1233378000.0 - 1233356400)/(60*60)
6.0
A:
Phil's answer really solved it, but I'll elaborate a little more. Since the epoch is in UTC, if I want to compare other times to the epoch, I need to interpret them as UTC as well.
>>> calendar.timegm((2009, 1, 31, 0, 0, 0, 5, 31, -1))
1233360000
>>> 1233360000 / (60*60*24)
14275
By converting the time tuple to a timestamp, treating it as UTC time, I get a number which is evenly divisible by the number of seconds in a day.
I can use this to convert a date to a days-from-the-epoch representation which is what I'm ultimately after.
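Wrapped up as a helper under the same UTC interpretation:
import calendar, time

def days_since_epoch(datestr, fmt="%m-%d-%Y"):
    # parse a date string and return whole days since the Unix epoch (UTC)
    return calendar.timegm(time.strptime(datestr, fmt)) // 86400

print days_since_epoch("01-31-2009")   # 14275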
A:
Interesting. I don't know, but I did try this:
>>> now = time.mktime((2008, 8, 22, 11 ,17, -1, -1, -1, -1))
>>> tomorrow = time.mktime((2008, 8, 23, 11 ,17, -1, -1, -1, -1))
>>> tomorrow - now
86400.0
which is what you expected. My guess? Maybe some time correction was done since the epoch. This could be only a few seconds, something like a leap second. I think I heard something like this before, but can't remember exactly how and when it is done...
| Date/time conversion using time.mktime seems wrong | >>> import time
>>> time.strptime("01-31-2009", "%m-%d-%Y")
(2009, 1, 31, 0, 0, 0, 5, 31, -1)
>>> time.mktime((2009, 1, 31, 0, 0, 0, 5, 31, -1))
1233378000.0
>>> 60*60*24 # seconds in a day
86400
>>> 1233378000.0 / 86400
14275.208333333334
time.mktime should return the number of seconds since the epoch. Since I'm giving it a time at midnight and the epoch is at midnight, shouldn't the result be evenly divisible by the number of seconds in a day?
| [
"Short answer: Because of timezones.\nThe Epoch is in UTC.\nFor example, I'm on IST (Irish Standard Time) or UTC+1. time.mktime() is relative to my timezone, so on my system this refers to\n>>> time.mktime((2009, 1, 31, 0, 0, 0, 5, 31, -1))\n1233360000.0\n\nBecause you got the result 1233378000, that would suggest that you're 5 hours behind me\n>>> (1233378000 - 1233360000) / (60*60) \n5\n\nHave a look at the time.gmtime() function which works off UTC.\n",
"mktime(...)\n mktime(tuple) -> floating point number\n\n Convert a time tuple in local time to seconds since the Epoch.\n\nlocal time... fancy that.\nThe time tuple:\nThe other representation is a tuple of 9 integers giving local time.\nThe tuple items are:\n year (four digits, e.g. 1998)\n month (1-12)\n day (1-31)\n hours (0-23)\n minutes (0-59)\n seconds (0-59)\n weekday (0-6, Monday is 0)\n Julian day (day in the year, 1-366)\n DST (Daylight Savings Time) flag (-1, 0 or 1)\nIf the DST flag is 0, the time is given in the regular time zone;\nif it is 1, the time is given in the DST time zone;\nif it is -1, mktime() should guess based on the date and time.\n\nIncidentally, we seem to be 6 hours apart:\n>>> time.mktime((2009, 1, 31, 0, 0, 0, 5, 31, -1))\n1233356400.0\n>>> (1233378000.0 - 1233356400)/(60*60)\n6.0\n\n",
"Phil's answer really solved it, but I'll elaborate a little more. Since the epoch is in UTC, if I want to compare other times to the epoch, I need to interpret them as UTC as well.\n>>> calendar.timegm((2009, 1, 31, 0, 0, 0, 5, 31, -1))\n1233360000\n>>> 1233360000 / (60*60*24)\n14275\n\nBy converting the time tuple to a timestamp treating is as UTC time, I get a number which is evenly divisible by the number of seconds in a day.\nI can use this to convert a date to a days-from-the-epoch representation which is what I'm ultimately after.\n",
"Interesting. I don't know, but I did try this:\n>>> now = time.mktime((2008, 8, 22, 11 ,17, -1, -1, -1, -1))\n>>> tomorrow = time.mktime((2008, 8, 23, 11 ,17, -1, -1, -1, -1))\n>>> tomorrow - now\n86400.0\n\nwhich is what you expected. My guess? Maybe some time correction was done since the epoch. This could be only a few seconds, something like a leap year. I think I heard something like this before, but can't remember exactly how and when it is done...\n"
] | [
7,
3,
2,
0
] | [] | [] | [
"datetime",
"python"
] | stackoverflow_0000021961_datetime_python.txt |
Q:
Does PHP have an equivalent to this type of Python string substitution?
Python has this wonderful way of handling string substitutions using dictionaries:
>>> 'The %(site)s site %(adj)s because it %(adj)s' % {'site':'Stackoverflow', 'adj':'rocks'}
'The Stackoverflow site rocks because it rocks'
I love this because you can specify a value once in the dictionary and then replace it all over the place in the string.
I've tried to achieve something similar in PHP using various string replace functions but everything I've come up with feels awkward.
Does anybody have a nice clean way to do this kind of string substitution in PHP?
Edit
Here's the code from the sprintf page that I liked best.
<?php
function sprintf3($str, $vars, $char = '%')
{
$tmp = array();
foreach($vars as $k => $v)
{
$tmp[$char . $k . $char] = $v;
}
return str_replace(array_keys($tmp), array_values($tmp), $str);
}
echo sprintf3( 'The %site% site %adj% because it %adj%', array('site'=>'Stackoverflow', 'adj'=>'rocks'));
?>
A:
function subst($str, $dict){
return preg_replace(array_map(create_function('$a', 'return "/%\\($a\\)s/";'), array_keys($dict)), array_values($dict), $str);
}
You call it like so:
echo subst('The %(site)s site %(adj)s because it %(adj)s', array('site'=>'Stackoverflow', 'adj'=>'rocks'));
A:
@Marius
I don't know if it's faster, but you can do it without regexes:
function subst($str, $dict)
{
foreach ($dict AS $key, $value)
{
$str = str_replace($key, $value, $str);
}
return $str;
}
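For what it's worth, PHP's built-in strtr() also accepts an array of search => replace pairs, which covers this case with no custom function at all (the %(key)s markers are just the convention carried over from the Python example):
<?php
$dict = array('%(site)s' => 'Stackoverflow', '%(adj)s' => 'rocks');
echo strtr('The %(site)s site %(adj)s because it %(adj)s', $dict);
// The Stackoverflow site rocks because it rocks
?>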
A:
Some of the user-contributed notes and functions in PHP's documentation for sprintf come quite close.
Note: search the page for "sprintf2".
| Does PHP have an equivalent to this type of Python string substitution? | Python has this wonderful way of handling string substitutions using dictionaries:
>>> 'The %(site)s site %(adj)s because it %(adj)s' % {'site':'Stackoverflow', 'adj':'rocks'}
'The Stackoverflow site rocks because it rocks'
I love this because you can specify a value once in the dictionary and then replace it all over the place in the string.
I've tried to achieve something similar in PHP using various string replace functions but everything I've come up with feels awkward.
Does anybody have a nice clean way to do this kind of string substitution in PHP?
Edit
Here's the code from the sprintf page that I liked best.
<?php
function sprintf3($str, $vars, $char = '%')
{
$tmp = array();
foreach($vars as $k => $v)
{
$tmp[$char . $k . $char] = $v;
}
return str_replace(array_keys($tmp), array_values($tmp), $str);
}
echo sprintf3( 'The %site% site %adj% because it %adj%', array('site'=>'Stackoverflow', 'adj'=>'rocks'));
?>
| [
"function subst($str, $dict){\n return preg_replace(array_map(create_function('$a', 'return \"/%\\\\($a\\\\)s/\";'), array_keys($dict)), array_values($dict), $str);\n }\n\nYou call it like so:\necho subst('The %(site)s site %(adj)s because it %(adj)s', array('site'=>'Stackoverflow', 'adj'=>'rocks'));\n\n",
"@Marius\nI don't know if it's faster, but you can do it without regexes:\nfunction subst($str, $dict)\n{\n foreach ($dict AS $key, $value)\n {\n $str = str_replace($key, $value, $str);\n }\n\n return $str;\n}\n\n",
"Some of the user-contributed notes and functions in PHP's documentation for sprintf come quite close.\nNote: search the page for \"sprintf2\".\n"
] | [
5,
4,
1
] | [] | [] | [
"php",
"python",
"string"
] | stackoverflow_0000028165_php_python_string.txt |
Q:
How do I create an xml document in python
Here is my sample code:
import sys
from xml.dom.minidom import *
def make_xml():
doc = Document()
node = doc.createElement('foo')
node.innerText = 'bar'
doc.appendChild(node)
return doc
if __name__ == '__main__':
make_xml().writexml(sys.stdout)
when I run the above code I get this:
<?xml version="1.0" ?>
<foo/>
I would like to get:
<?xml version="1.0" ?>
<foo>bar</foo>
I just guessed that there was an innerText property, it gives no compiler error, but does not seem to work... how do I go about creating a text node?
A:
@Daniel
Thanks for the reply, I also figured out how to do it with minidom (I'm not sure of the difference between ElementTree and minidom)
import sys
from xml.dom.minidom import *
def make_xml():
doc = Document();
node = doc.createElement('foo')
node.appendChild(doc.createTextNode('bar'))
doc.appendChild(node)
return doc
if __name__ == '__main__':
make_xml().writexml(sys.stdout)
I swear I tried this before posting my question...
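One small addition: if you want readable, indented output rather than one long line, minidom documents also have a toprettyxml() method:
print make_xml().toprettyxml(indent="  ")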
A:
Setting an attribute on an object won't give a compile-time or a run-time error, it will just do nothing useful if the object doesn't access it (i.e. "node.noSuchAttr = 'bar'" would also not give an error).
Unless you need a specific feature of minidom, I would look at ElementTree:
import sys
from xml.etree.cElementTree import Element, ElementTree
def make_xml():
node = Element('foo')
node.text = 'bar'
doc = ElementTree(node)
return doc
if __name__ == '__main__':
make_xml().write(sys.stdout)
| How do I create an xml document in python | Here is my sample code:
import sys
from xml.dom.minidom import *
def make_xml():
doc = Document()
node = doc.createElement('foo')
node.innerText = 'bar'
doc.appendChild(node)
return doc
if __name__ == '__main__':
make_xml().writexml(sys.stdout)
when I run the above code I get this:
<?xml version="1.0" ?>
<foo/>
I would like to get:
<?xml version="1.0" ?>
<foo>bar</foo>
I just guessed that there was an innerText property, it gives no compiler error, but does not seem to work... how do I go about creating a text node?
| [
"@Daniel\nThanks for the reply, I also figured out how to do it with the minidom (I'm not sure of the difference between the ElementTree vs the minidom)\n\n\nfrom xml.dom.minidom import *\ndef make_xml():\n doc = Document();\n node = doc.createElement('foo')\n node.appendChild(doc.createTextNode('bar'))\n doc.appendChild(node)\n return doc\nif __name__ == '__main__':\n make_xml().writexml(sys.stdout)\n\n\nI swear I tried this before posting my question...\n",
"Setting an attribute on an object won't give a compile-time or a run-time error, it will just do nothing useful if the object doesn't access it (i.e. \"node.noSuchAttr = 'bar'\" would also not give an error).\nUnless you need a specific feature of minidom, I would look at ElementTree:\nimport sys\nfrom xml.etree.cElementTree import Element, ElementTree\n\ndef make_xml():\n node = Element('foo')\n node.text = 'bar'\n doc = ElementTree(node)\n return doc\n\nif __name__ == '__main__':\n make_xml().write(sys.stdout)\n\n"
] | [
13,
9
] | [] | [] | [
"python",
"xml"
] | stackoverflow_0000029243_python_xml.txt |
Q:
Proprietary plug-ins for GPL programs: what about interpreted languages?
I am developing a GPL-licensed application in Python and need to know if the GPL allows my program to use proprietary plug-ins. This is what the FSF has to say on the issue:
If a program released under the GPL uses plug-ins, what are the requirements for the licenses of a plug-in?
It depends on how the program invokes its plug-ins. If the program uses fork and exec to invoke plug-ins, then the plug-ins are separate programs, so the license for the main program makes no requirements for them.
If the program dynamically links plug-ins, and they make function calls to each other and share data structures, we believe they form a single program, which must be treated as an extension of both the main program and the plug-ins. This means the plug-ins must be released under the GPL or a GPL-compatible free software license, and that the terms of the GPL must be followed when those plug-ins are distributed.
If the program dynamically links plug-ins, but the communication between them is limited to invoking the ‘main’ function of the plug-in with some options and waiting for it to return, that is a borderline case.
The distinction between fork/exec and dynamic linking, besides being kind of artificial, doesn't carry over to interpreted languages: what about a Python/Perl/Ruby plugin, which gets loaded via import or execfile?
(edit: I understand why the distinction between fork/exec and dynamic linking, but it seems like someone who wanted to comply with the GPL but go against the "spirit" --I don't-- could just use fork/exec and interprocess communication to do pretty much anything).
The best solution would be to add an exception to my license to explicitly allow the use of proprietary plugins, but I am unable to do so since I'm using Qt/PyQt which is GPL.
A:
The distinction between fork/exec and dynamic linking, besides being kind of artificial,
I don't think its artificial at all. Basically they are just making the division based upon the level of integration. If the program has "plugins" which are essentially fire and forget with no API level integration, then the resulting work is unlikely to be considered a derived work. Generally speaking a plugin which is merely forked/exec'ed would fit this criteria, though there may be cases where it does not. This case especially applies if the "plugin" code would work independently of your code as well.
If, on the other hand, the code is deeply dependent upon the GPL'ed work, such as extensively calling APIs, or tight data structure integration, then things are more likely to be considered a derived work. Ie, the "plugin" cannot exist on its own without the GPL product, and a product with this plugin installed is essentially a derived work of the GPLed product.
So to make it a little more clear, the same principles could apply to your interpreted code. If the interpreted code relies heavily upon your APIs (or vice-versa) then it would be considered a derived work. If it is just a script that executes on its own with extremely little integration, then it may not.
Does that make more sense?
A:
@Daniel The distinction between fork/exec and dynamic linking, besides being kind of artificial, doesn't carry over to interpreted languages: what about a Python/Perl/Ruby plugin, which gets loaded via import or execfile?
I'm not sure that the distinction is artificial. After a dynamic load the plugin code shares an execution context with the GPLed code. After a fork/exec it does not.
In any case, I would guess that importing causes the new code to run in the same execution context as the GPLed bit, and you should treat it like the dynamic link case. No?
A:
How much info are you sharing between the Plugins and the main program? If you are doing anything more than just executing them and waiting for the results (sharing no data between the program and the plugin in the process) then you could most likely get away with them being proprietary, otherwise they would probably need to be GPL'd.
| Proprietary plug-ins for GPL programs: what about interpreted languages? | I am developing a GPL-licensed application in Python and need to know if the GPL allows my program to use proprietary plug-ins. This is what the FSF has to say on the issue:
If a program released under the GPL uses plug-ins, what are the requirements for the licenses of a plug-in?
It depends on how the program invokes its plug-ins. If the program uses fork and exec to invoke plug-ins, then the plug-ins are separate programs, so the license for the main program makes no requirements for them.
If the program dynamically links plug-ins, and they make function calls to each other and share data structures, we believe they form a single program, which must be treated as an extension of both the main program and the plug-ins. This means the plug-ins must be released under the GPL or a GPL-compatible free software license, and that the terms of the GPL must be followed when those plug-ins are distributed.
If the program dynamically links plug-ins, but the communication between them is limited to invoking the ‘main’ function of the plug-in with some options and waiting for it to return, that is a borderline case.
The distinction between fork/exec and dynamic linking, besides being kind of artificial, doesn't carry over to interpreted languages: what about a Python/Perl/Ruby plugin, which gets loaded via import or execfile?
(edit: I understand why the distinction between fork/exec and dynamic linking, but it seems like someone who wanted to comply with the GPL but go against the "spirit" --I don't-- could just use fork/exec and interprocess communication to do pretty much anything).
The best solution would be to add an exception to my license to explicitly allow the use of proprietary plugins, but I am unable to do so since I'm using Qt/PyQt which is GPL.
| [
"\nhe distinction between fork/exec and dynamic linking, besides being kind of artificial,\n\nI don't think its artificial at all. Basically they are just making the division based upon the level of integration. If the program has \"plugins\" which are essentially fire and forget with no API level integration, then the resulting work is unlikely to be considered a derived work. Generally speaking a plugin which is merely forked/exec'ed would fit this criteria, though there may be cases where it does not. This case especially applies if the \"plugin\" code would work independently of your code as well.\nIf, on the other hand, the code is deeply dependent upon the GPL'ed work, such as extensively calling APIs, or tight data structure integration, then things are more likely to be considered a derived work. Ie, the \"plugin\" cannot exist on its own without the GPL product, and a product with this plugin installed is essentially a derived work of the GPLed product.\nSo to make it a little more clear, the same principles could apply to your interpreted code. If the interpreted code relies heavily upon your APIs (or vice-versa) then it would be considered a derived work. If it is just a script that executes on its own with extremely little integration, then it may not.\nDoes that make more sense?\n",
"@Daniel The distinction between fork/exec and dynamic linking, besides being kind of artificial, doesn't carry over to interpreted languages: what about a Python/Perl/Ruby plugin, which gets loaded via import or execfile?\nI'm not sure that the distinction is artificial. After a dynamic load the plugin code shares an execution context with the GPLed code. After a fork/exec it does not.\nIn anycase I would guess that importing causes the new code to run in the same execution context as the GPLed bit, and you should treat it like the dynamic link case. No?\n",
"How much info are you sharing between the Plugins and the main program? If you are doing anything more than just executing them and waiting for the results (sharing no data between the program and the plugin in the process) then you could most likely get away with them being proprietary, otherwise they would probably need to be GPL'd.\n"
] | [
7,
1,
0
] | [] | [] | [
"interpreted_language",
"licensing",
"open_source",
"plugins",
"python"
] | stackoverflow_0000031412_interpreted_language_licensing_open_source_plugins_python.txt |
Q:
Install Python to match directory layout in OS X 10.5
The default Python install on OS X 10.5 is 2.5.1 with a FAT 32 bit (Intel and PPC) client. I want to setup apache and mysql to run django. In the past, I have run Apache and MySQL to match this install in 32 bit mode (even stripping out the 64 bit stuff from Apache to make it work).
I want to upgrade Python to 64 bit. I am completely comfortable with compiling it from source with one caveat. How do I match the way that the default install is laid out? Especially, with regards to site-packages being in /Library/Python/2.5/ and not the one in buried at the top of the framework once I compile it.
A:
Not sure I entirely understand your question, but can't you simply build and install a 64 bit version and then create symbolic links so that /Library/Python/2.5 and below point to your freshly built version of python?
A:
Personally, I wouldn't worry about it until you see a problem. Messing with the default python install on a *Nix system can cause more trouble than it's worth. I can say from personal experience that you never truly understand what python has done for the nix world until you have a problem with it.
You can also add a second python installation, but that also causes more problems than it's worth IMO.
So I suppose the best question to start out with would be why exactly do you want to use the 64 bit version of python?
A:
Hyposaurus,
It is possible to have multiple versions of Python installed simultaneously. Installing two versions in parallel solves your problem and helps avoid the problems laid out by Jason Baker above.
The easiest way, and the way I recommend, is to use MacPorts, which will install all its software separately. By default, for example, everything is installed in /opt/local
Another method is to simply download the source and compile with a specified prefix. Note that this method doesn't modify your PATH environment variable, so you'll need to do that yourself if you want to avoid typing the fully qualified path to the python executable each time
./configure --prefix=/usr/local/python64
make
sudo make install
Then you can simply point your Apache install at the new version using mod_python's PythonInterpreter directive
A:
Essentially, yes. I was not sure you could do it like that (the current version does not do it like that). When using the Python install script, however, there is no option (that I can find) to specify where to put directories and files (e.g. --prefix). I was hoping to match the current layout of Python-related files so as to avoid 'polluting' my machine with redundant files.
A:
The short answer is: because I can. The long answer, expanding on what the OP said, is to be more compatible with Apache and MySQL/PostgreSQL. They are all 64-bit (Apache is a fat binary with ppc, ppc64, x86 and x86_64; the others are just straight 64-bit). MySQLdb and mod_python won't compile unless they are all running the same architecture. Yes, I could run them all in 32-bit (and have in the past) but this is much more work than compiling one program.
EDIT: You've pretty much convinced me, though, to just let the installer do its thing and update the PATH to reflect this.
| Install Python to match directory layout in OS X 10.5 | The default Python install on OS X 10.5 is 2.5.1 with a FAT 32 bit (Intel and PPC) client. I want to setup apache and mysql to run django. In the past, I have run Apache and MySQL to match this install in 32 bit mode (even stripping out the 64 bit stuff from Apache to make it work).
I want to upgrade Python to 64 bit. I am completely comfortable with compiling it from source with one caveat. How do I match the way that the default install is laid out? Especially, with regards to site-packages being in /Library/Python/2.5/ and not the one in buried at the top of the framework once I compile it.
| [
"Not sure I entirely understand your question, but can't you simply build and install a 64 bit version and then create symbolic links so that /Library/Python/2.5 and below point to your freshly built version of python?\n",
"Personally, I wouldn't worry about it until you see a problem. Messing with the default python install on a *Nix system can cause more trouble than it's worth. I can say from personal experience that you never truly understand what python has done for the nix world until you have a problem with it.\nYou can also add a second python installation, but that also causes more problems than it's worth IMO.\nSo I suppose the best question to start out with would be why exactly do you want to use the 64 bit version of python?\n",
"Hyposaurus,\nIt is possible to have multiple versions of Python installed simultaneously. Installing two versions in parallel solves your problem and helps avoid the problems laid out by Jason Baker above. \nThe easiest way, and the way I recommend, is to use MacPorts, which will install all its software separately. By default, for example, everything is installed in /opt/local\nAnother method is to simply download the source and compile with a specified prefix. Note that this method doesn't modify your PATH environment variable, so you'll need to do that yourself if you want to avoid typing the fully qualified path to the python executable each time\n./configure --prefix=/usr/local/python64\nmake\nsudo make install\n\nThen you can simply point your Apache install at the new version using mod_python's PythonInterpreter directive\n",
"Essentially, yes. I was not sure you could do it like that (current version does not do it like that). When using the python install script, however, there is no option (that I can find) to specify where to put directories and files (eg --prefix). I was hoping to match the current layout of python related files so as to avoid 'polluting' my machine with redundant files.\n",
"The short answer is because I can. The long answer, expanding on what the OP said, is to be more compatible with apache and mysql/postgresql. They are all 64bit (apache is a fat binary with ppc, ppc64 x86 and x86 and x86_64, the others just straight 64bit). Mysqldb and mod_python wont compile unless they are all running the same architecture. Yes I could run them all in 32bit (and have in the past) but this is much more work then compiling one program.\nEDIT: You pretty much convinced though to just let the installer do its thing and update the PATH to reflect this.\n"
] | [
1,
1,
1,
0,
0
] | [] | [] | [
"64_bit",
"macos",
"python"
] | stackoverflow_0000029856_64_bit_macos_python.txt |
Q:
ssh hangs when command invoked directly, but exits cleanly when run interactive
I need to launch a server on the remote machine and retrieve the port number that the server process is listening on. When invoked, the server will listen on a random port and output the port number on stderr.
I want to automate the process of logging on to the remote machine, launching the process, and retrieving the port number. I wrote a Python script called "invokejob.py" that lives on the remote machine to act as a wrapper that invokes the job and then returns the port number; it looks like this:
import re, subprocess
executable = ... # Name of executable
regex = ... # Regex to extract the port number from the output
p = subprocess.Popen(executable,
bufsize=1, # line buffered
stderr=subprocess.PIPE
)
s = p.stderr.readline()
port = re.match(regex, s).groups()[0]
print port
If I log in interactively, this script works:
$ ssh remotehost.example.com
Last login: Thu Aug 28 17:31:18 2008 from localhost
$ ./invokejob.py
63409
$ exit
logout
Connection to remotehost.example.com closed.
(Note: successful logout, it did not hang).
However, if I try to invoke it from the command-line, it just hangs:
$ ssh remotehost.example.com invokejob.py
Does anybody know why it hangs in the second case, and what I can do to avoid this?
Note that I need to retrieve the output of the program, so I can't just use the ssh "-f" flag or redirect standard output.
A:
s = p.stderr.readline()
I suspect it's the above line. When you invoke a command directly through ssh, you don't get your full pty (assuming Linux), and thus no stderr to read from.
When you log in interactively, stdin, stdout, and stderr are set up for you, and so your script works.
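Another thing worth checking (a guess from the symptoms, not a certain diagnosis): the server that invokejob.py spawns inherits the wrapper's stdout, which is the ssh channel, so ssh keeps the connection open waiting for EOF even after the wrapper exits. A sketch of the wrapper with the child's output redirected away (executable and regex are the same placeholders as in the original script):
import os, re, subprocess, sys

devnull = open(os.devnull, 'w')
p = subprocess.Popen(executable,
                     bufsize=1,
                     stdout=devnull,   # don't let the server hold the ssh channel open
                     stderr=subprocess.PIPE)
line = p.stderr.readline()
print re.match(regex, line).groups()[0]
sys.stdout.flush()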
A:
what if you do the following:
ssh <remote host> '<your command> ;<your regexp using awk or something>'
For example
ssh <remote host> '<your program>; ps aux | awk \'/root/ {print $2}\''
This will connect to <remote host>, execute <your program>, and then print the PID of any process owned by root or with root in its description.
I have used this method for running all kinds of commands on remote machines. The catch is to wrap the command(s) you wish to execute in single quotation marks (') and to separate each command with a semi-colon (;).
| ssh hangs when command invoked directly, but exits cleanly when run interactive | I need to launch a server on the remote machine and retrieve the port number that the server process is listening on. When invoked, the server will listen on a random port and output the port number on stderr.
I want to automate the process of logging on to the remote machine, launching the process, and retrieving the port number. I wrote a Python script called "invokejob.py" that lives on the remote machine to act as a wrapper that invokes the job and then returns the port number; it looks like this:
import re, subprocess
executable = ... # Name of executable
regex = ... # Regex to extract the port number from the output
p = subprocess.Popen(executable,
bufsize=1, # line buffered
stderr=subprocess.PIPE
)
s = p.stderr.readline()
port = re.match(regex, s).groups()[0]
print port
If I log in interactively, this script works:
$ ssh remotehost.example.com
Last login: Thu Aug 28 17:31:18 2008 from localhost
$ ./invokejob.py
63409
$ exit
logout
Connection to remotehost.example.com closed.
(Note: successful logout, it did not hang).
However, if I try to invoke it from the command-line, it just hangs:
$ ssh remotehost.example.com invokejob.py
Does anybody know why it hangs in the second case, and what I can do to avoid this?
Note that I need to retrieve the output of the program, so I can't just use the ssh "-f" flag or redirect standard output.
| [
"\ns = p.stderr.readline()\n\n\nI suspect it's the above line. When you invoke a command directly through ssh, you don't get your full pty (assuming Linux), and thus no stderr to read from.\nWhen you log in interactively, stdin, stdout, and stderr are set up for you, and so your script works.\n",
"what if you do the following:\nssh <remote host> '<your command> ;<your regexp using awk or something>'\nFor example\nssh <remote host> '<your program>; ps aux | awk \\'/root/ {print $2}\\''\nThis will connect to , execute and then print each PSID for any user root or any process with root in its description.\nI have used this method for running all kinds of commands on remote machines. The catch is to wrap the command(s) you wish to execute in single quotation marks (') and to separate each command with a semi-colon (;).\n"
] | [
3,
0
] | [] | [] | [
"python",
"ssh"
] | stackoverflow_0000033475_python_ssh.txt |
Q:
Why does Python's iter() on a mapping return iterkeys() instead of iteritems()?
It seems like if you want to get the keys of a mapping, you ask for them; otherwise, give me the whole mapping (constituted by a set of key-value pairs). Is there a historical reason for this?
A:
Check out this thread for a discussion on the reasons behind this behavior (including that Guido likes it, and it's not likely to change).
| Why does Python's iter() on a mapping return iterkeys() instead of iteritems()? | It seems like if you want to get the keys of a mapping, you ask for them; otherwise, give me the whole mapping (constituted by a set of key-value pairs). Is there a historical reason for this?
| [
"Check out this thread for a discussion on the reasons behind this behavior (including that Guido likes it, and it's not likely to change).\n"
] | [
11
] | [] | [] | [
"iteration",
"mapping",
"python"
] | stackoverflow_0000035569_iteration_mapping_python.txt |
Q:
Django ImageField core=False in newforms admin
In the transition to newforms admin I'm having difficulty figuring out how specify core=False for ImageFields.
I get the following error:
TypeError: __init__() got an unexpected keyword argument 'core'
[Edit] However, by just removing the core argument I get a "This field is required." error in the admin interface on attempted submission. How does one accomplish what core=False is meant to do using newforms admin?
A:
To get rid of "This field is required," you need to make it not required, by using blank=True (and possibly null=True as well, if it's not a CharField).
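A minimal sketch (the field name and upload path are placeholders, not from your models):
from django.db import models

class Product(models.Model):
    image = models.ImageField(upload_to='images/', blank=True)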
A:
The core attribute isn't used anymore.
From Brian Rosner's Blog:
You can safely just remove any and all core arguments. They are no longer used. newforms-admin now provides a nice delete checkbox for existing instances in inlines.
A:
This is simple. I started getting these problems a few revisions ago. Basically, just remove the "core=True" parameter in the ImageField in the models, and then follow the instructions here to convert to what the newforms admin uses.
| Django ImageField core=False in newforms admin | In the transition to newforms admin I'm having difficulty figuring out how specify core=False for ImageFields.
I get the following error:
TypeError: __init__() got an unexpected keyword argument 'core'
[Edit] However, by just removing the core argument I get a "This field is required." error in the admin interface on attempted submission. How does one accomplish what core=False is meant to do using newforms admin?
| [
"To get rid of \"This field is required,\" you need to make it not required, by using blank=True (and possibly null=True as well, if it's not a CharField).\n",
"The core attribute isn't used anymore.\nFrom Brian Rosner's Blog:\n\nYou can safely just remove any and all core arguments. They are no longer used. newforms-admin now provides a nice delete checkbox for exisiting instances in inlines.\n\n",
"This is simple. I started getting this problems a few revisions ago. Basically, just remove the \"core=True\" parameter in the ImageField in the models, and then follow the instructions here to convert to what the newforms admin uses.\n"
] | [
5,
4,
2
] | [] | [] | [
"django",
"django_models",
"python"
] | stackoverflow_0000034209_django_django_models_python.txt |
Q:
Programmatically editing Python source
This is something that I think would be very useful. Basically, I'd like there to be a way to edit Python source programmatically without requiring human intervention. There are a couple of things I would like to do with this:
Edit the configuration of Python apps that use source modules for configuration.
Set up a "template" so that I can customize a Python source file on the fly. This way, I can set up a "project" system on an open source app I'm working on and allow certain files to be customized.
I could probably write something that can do this myself, but I can see that opening up a lot of "devil's in the details" type issues. Are there any ways to do this currently, or am I just going to have to bite the bullet and implement it myself?
A:
Python's standard library provides pretty good facilities for working with Python source; note the tokenize and parser modules.
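For example, tokenize can walk a source string token by token, which is a much safer starting point than regex surgery on source text (the sample source line is made up):
import tokenize
from StringIO import StringIO

src = "timeout = 30  # seconds\n"
for tok_type, tok_string, start, end, line in tokenize.generate_tokens(StringIO(src).readline):
    print tokenize.tok_name[tok_type], repr(tok_string)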
A:
I had the same issue, and I simply opened the file, did some replacements, and then reloaded the file in the Python interpreter.
Otherwise AFAIK you have to use some conf objects.
A:
Most of these kinds of things can be determined programmatically in Python, using modules like sys, os, and the special __file__ identifier which tells you where you are in the filesystem path.
It's important to keep in mind that when a module is first imported it will execute everything in the file-scope, which is important for developing system-dependent behaviors. For example, the os module basically determines what operating system you're using on import and then adjusts its implementation accordingly (by importing another module corresponding to Linux, OSX, Windows, etc.).
There's a lot of power in this feature and something along these lines is probably what you're looking for. :)
[Edit] I've also used socket.gethostname() in some rare, hackish instances. ;)
| Programmatically editing Python source | This is something that I think would be very useful. Basically, I'd like there to be a way to edit Python source programmatically without requiring human intervention. There are a couple of things I would like to do with this:
Edit the configuration of Python apps that use source modules for configuration.
Set up a "template" so that I can customize a Python source file on the fly. This way, I can set up a "project" system on an open source app I'm working on and allow certain files to be customized.
I could probably write something that can do this myself, but I can see that opening up a lot of "devil's in the details" type issues. Are there any ways to do this currently, or am I just going to have to bite the bullet and implement it myself?
| [
"Python's standard library provides pretty good facilities for working with Python source; note the tokenize and parser modules.\n",
"I had the same issue and I simply opened the file and did some replace: then reload the file in the Python interpreter. This works fine and is easy to do. \nOtherwise AFAIK you have to use some conf objects.\n",
"Most of these kinds of things can be determined programatically in Python, using modules like sys, os, and the special _file_ identifier which tells you where you are in the filesystem path.\nIt's important to keep in mind that when a module is first imported it will execute everything in the file-scope, which is important for developing system-dependent behaviors. For example, the os module basically determines what operating system you're using on import and then adjusts its implementation accordingly (by importing another module corresponding to Linux, OSX, Windows, etc.).\nThere's a lot of power in this feature and something along these lines is probably what you're looking for. :)\n[Edit] I've also used socket.gethostname() in some rare, hackish instances. ;)\n"
] | [
6,
0,
0
] | [] | [] | [
"file_io",
"python"
] | stackoverflow_0000032385_file_io_python.txt |
Q:
"The system cannot find the file specified" when invoking subprocess.Popen in python
I'm trying to use svnmerge.py to merge some files. Under the hood it uses python, and when I use it I get an error - "The system cannot find the file specified". Colleagues at work are running the same version of svnmerge.py, and of python (2.5.2, specifically r252:60911) without an issue.
I found this link, which describes my problem. Trying what was outlined there, I confirmed Python could find SVN (it's in my path):
P:\>python
Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> i,k = os.popen4("svn --version")
>>> i.close()
>>> k.readline()
'svn, version 1.4.2 (r22196)\n'
Looking at the svnmerge.py code, though, I noticed for python versions 2.4 and higher it was following a different execution path. Rather than invoking
os.popen4() it uses subprocess.Popen(). Trying that reproduces the error:
C:\>python
Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import subprocess
>>> p = subprocess.Popen("svn --version", stdout=subprocess.PIPE,
... close_fds=False, stderr=subprocess.PIPE)
Traceback (most recent call last):
File "", line 1, in
File "C:\Python25\lib\subprocess.py", line 594, in __init__
errread, errwrite)
File "C:\Python25\lib\subprocess.py", line 816, in _execute_child
startupinfo)
WindowsError: [Error 2] The system cannot find the file specified
>>>
For now, I've commented out the 2.4-and-higher specific code, but I'd like to find a proper solution.
If it's not obvious, I'm a complete python newbie, but Google hasn't helped.
A:
It's a bug, see the documentation of subprocess.Popen. There either needs to be a "shell=True" option, or the first argument needs to be a sequence ['svn', '--version']. As it is now, Popen is looking for an executable named, literally, "svn --version" which it doesn't find.
I don't know why it would work for your colleagues though, if they are running the same OS and version of Python... FWIW it gives me the same error message on a mac, and either of the two ways I gave fixes it.
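Concretely, either of these variants avoids the error:
import subprocess

# pass the command as a sequence -- no shell involved
p = subprocess.Popen(['svn', '--version'],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# or keep the single string, but hand it to the shell
p = subprocess.Popen('svn --version', shell=True,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)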
| "The system cannot find the file specified" when invoking subprocess.Popen in python | I'm trying to use svnmerge.py to merge some files. Under the hood it uses python, and when I use it I get an error - "The system cannot find the file specified". Colleagues at work are running the same version of svnmerge.py, and of python (2.5.2, specifically r252:60911) without an issue.
I found this link, which describes my problem. Trying what was outlined there, I confirmed Python could find SVN (it's in my path):
P:\>python
Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> i,k = os.popen4("svn --version")
>>> i.close()
>>> k.readline()
'svn, version 1.4.2 (r22196)\n'
Looking at the svnmerge.py code, though, I noticed for python versions 2.4 and higher it was following a different execution path. Rather than invoking
os.popen4() it uses subprocess.Popen(). Trying that reproduces the error:
C:\>python
Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import subprocess
>>> p = subprocess.Popen("svn --version", stdout=subprocess.PIPE,
... close_fds=False, stderr=subprocess.PIPE)
Traceback (most recent call last):
File "", line 1, in
File "C:\Python25\lib\subprocess.py", line 594, in __init__
errread, errwrite)
File "C:\Python25\lib\subprocess.py", line 816, in _execute_child
startupinfo)
WindowsError: [Error 2] The system cannot find the file specified
>>>
For now, I've commented out the 2.4-and-higher specific code, but I'd like to find a proper solution.
If it's not obvious, I'm a complete python newbie, but Google hasn't helped.
| [
"It's a bug, see the documentation of subprocess.Popen. There either needs to be a \"shell=True\" option, or the first argument needs to be a sequence ['svn', '--version']. As it is now, Popen is looking for an executable named, literally, \"svn --version\" which it doesn't find.\nI don't know why it would work for your colleagues though, if they are running the same OS and version of Python... FWIW it gives me the same error message on a mac, and either of the two ways I gave fixes it.\n"
] | [
21
] | [] | [] | [
"python",
"svn_merge"
] | stackoverflow_0000036324_python_svn_merge.txt |
Q:
How do I add data to an existing model in Django?
Currently, I am writing up a bit of a product-based CMS as my first project.
Here is my question. How can I add additional data (products) to my Product model?
I have added '/admin/products/add' to my urls.py, but I don't really know where to go from there. How would I build both my view and my template? Please keep in mind that I don't really know all that much Python, and I am very new to Django.
How can I do this all without using this existing django admin interface.
A:
You will want to wire your URL to the Django create_object generic view, and pass it either "model" (the model you want to create) or "form_class" (a customized ModelForm class). There are a number of other arguments you can also pass to override default behaviors.
Sample URLconf for the simplest case:
from django.conf.urls.defaults import *
from django.views.generic.create_update import create_object
from my_products_app.models import Product
urlpatterns = patterns('',
url(r'^admin/products/add/$', create_object, {'model': Product}))
Your template will get the context variable "form", which you just need to wrap in a <form> tag and add a submit button. The simplest working template (by default should go in "my_products_app/product_form.html"):
<form action="." method="POST">
{{ form }}
<input type="submit" name="submit" value="add">
</form>
Note that your Product model must have a get_absolute_url method, or else you must pass in the post_save_redirect parameter to the view. Otherwise it won't know where to redirect to after save.
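For completeness, a sketch of such a method (the URL pattern is an assumption, not something your app necessarily uses):
class Product(models.Model):
    # ... your fields ...

    def get_absolute_url(self):
        return "/products/%d/" % self.id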
| How do I add data to an existing model in Django? | Currently, I am writing up a bit of a product-based CMS as my first project.
Here is my question. How can I add additional data (products) to my Product model?
I have added '/admin/products/add' to my urls.py, but I don't really know where to go from there. How would I build both my view and my template? Please keep in mind that I don't really know all that much Python, and I am very new to Django.
How can I do this without using the existing Django admin interface?
| [
"You will want to wire your URL to the Django create_object generic view, and pass it either \"model\" (the model you want to create) or \"form_class\" (a customized ModelForm class). There are a number of other arguments you can also pass to override default behaviors.\nSample URLconf for the simplest case:\nfrom django.conf.urls.defaults import *\nfrom django.views.generic.create_update import create_object\n\nfrom my_products_app.models import Product\n\nurlpatterns = patterns('',\n url(r'^admin/products/add/$', create_object, {'model': Product}))\n\nYour template will get the context variable \"form\", which you just need to wrap in a <form> tag and add a submit button. The simplest working template (by default should go in \"my_products_app/product_form.html\"):\n<form action=\".\" method=\"POST\">\n {{ form }}\n <input type=\"submit\" name=\"submit\" value=\"add\">\n</form>\n\nNote that your Product model must have a get_absolute_url method, or else you must pass in the post_save_redirect parameter to the view. Otherwise it won't know where to redirect to after save.\n"
] | [
7
] | [
"This topic is covered in Django tutorials.\n",
"Follow the Django tutorial for setting up the \"admin\" part of an application. This will allow you to modify your database.\nDjango Admin Setup\nAlternatively, you can just connect directly to the database using the standard tools for whatever database type you are using.\n"
] | [
-1,
-2
] | [
"django",
"python"
] | stackoverflow_0000036812_django_python.txt |
Q:
How can I simply inherit methods from an existing instance?
Below I have a very simple example of what I'm trying to do. I want to be able to use HTMLDecorator with any other class. Ignore the fact it's called decorator, it's just a name.
import cgi
class ClassX(object):
pass # ... with own __repr__
class ClassY(object):
pass # ... with own __repr__
inst_x=ClassX()
inst_y=ClassY()
inst_z=[ i*i for i in range(25) ]
inst_b=True
class HTMLDecorator(object):
def html(self): # an "enhanced" version of __repr__
return cgi.escape(self.__repr__()).join(("<H1>","</H1>"))
print HTMLDecorator(inst_x).html()
print HTMLDecorator(inst_y).html()
wrapped_z = HTMLDecorator(inst_z)
inst_z[0] += 70
wrapped_z[0] += 71
print wrapped_z.html()
print HTMLDecorator(inst_b).html()
Output:
Traceback (most recent call last):
File "html.py", line 21, in
print HTMLDecorator(inst_x).html()
TypeError: default __new__ takes no parameters
Is what I'm trying to do possible? If so, what am I doing wrong?
A:
Very close, but then I lose everything from ClassX. Below is something a colleague gave me that does do the trick, but it's hideous. There has to be a better way.
Looks like you're trying to set up some sort of proxy object scheme. That's doable, and there are better solutions than your colleague's, but first consider whether it would be easier to just patch in some extra methods. This won't work for built-in classes like bool, but it will for your user-defined classes:
def HTMLDecorator (obj):
def html ():
sep = cgi.escape (repr (obj))
return sep.join (("<H1>", "</H1>"))
obj.html = html
return obj
And here is the proxy version:
class HTMLDecorator(object):
def __init__ (self, wrapped):
self.__wrapped = wrapped
def html (self):
sep = cgi.escape (repr (self.__wrapped))
return sep.join (("<H1>", "</H1>"))
def __getattr__ (self, name):
return getattr (self.__wrapped, name)
def __setattr__ (self, name, value):
if not name.startswith ('_HTMLDecorator__'):
setattr (self.__wrapped, name, value)
return
super (HTMLDecorator, self).__setattr__ (name, value)
def __delattr__ (self, name):
delattr (self.__wrapped, name)
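One caveat with this proxy, beyond what the answer covers: for new-style classes, implicit special-method lookups such as wrapped_z[0] bypass __getattr__, so container operations have to be forwarded explicitly, e.g. by adding to HTMLDecorator:
    def __getitem__ (self, key):
        return self.__wrapped[key]

    def __setitem__ (self, key, value):
        self.__wrapped[key] = value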
A:
Both of John's solutions would work. Another option that allows HTMLDecorator to remain very simple and clean is to monkey-patch it in as a base class. This also works only for user-defined classes, not builtin types:
import cgi
class ClassX(object):
pass # ... with own __repr__
class ClassY(object):
pass # ... with own __repr__
inst_x=ClassX()
inst_y=ClassY()
class HTMLDecorator:
def html(self): # an "enhanced" version of __repr__
return cgi.escape(self.__repr__()).join(("<H1>","</H1>"))
ClassX.__bases__ += (HTMLDecorator,)
ClassY.__bases__ += (HTMLDecorator,)
print inst_x.html()
print inst_y.html()
Be warned, though -- monkey-patching like this comes with a high price in readability and maintainability of your code. When you go back to this code a year later, it can become very difficult to figure out how your ClassX got that html() method, especially if ClassX is defined in some other library.
A:
Is what I'm trying to do possible? If so, what am I doing wrong?
It's certainly possible. What's wrong is that HTMLDecorator.__init__() doesn't accept parameters.
Here's a simple example:
def decorator (func):
def new_func ():
return "new_func %s" % func ()
return new_func
@decorator
def a ():
return "a"
def b ():
return "b"
print a() # new_func a
print decorator (b)() # new_func b
A:
@John (37448):
Sorry, I might have misled you with the name (bad choice). I'm not really looking for a decorator function, or anything to do with decorators at all. What I'm after is for the html(self) def to use ClassX or ClassY's __repr__. I want this to work without modifying ClassX or ClassY.
A:
Ah, in that case, perhaps code like this will be useful? It doesn't really have anything to do with decorators, but demonstrates how to pass arguments to a class's initialization function and to retrieve those arguments for later.
import cgi
class ClassX(object):
def __repr__ (self):
return "<class X>"
class HTMLDecorator(object):
def __init__ (self, wrapped):
self.__wrapped = wrapped
def html (self):
sep = cgi.escape (repr (self.__wrapped))
return sep.join (("<H1>", "</H1>"))
inst_x=ClassX()
inst_b=True
print HTMLDecorator(inst_x).html()
print HTMLDecorator(inst_b).html()
A:
@John (37479):
Very close, but then I lose everything from ClassX. Below is something a colleague gave me that does do the trick, but it's hideous. There has to be a better way.
import cgi
from math import sqrt

class ClassX(object):
    def __repr__(self):
        return "Best Guess"

class ClassY(object):
    pass # ... with own __repr__

inst_x=ClassX()
inst_y=ClassY()
inst_z=[ i*i for i in range(25) ]
inst_b=True

avoid="__class__ __init__ __dict__ __weakref__"

class HTMLDecorator(object):
    def __init__(self,master):
        self.master = master
        for attr in dir(self.master):
            if ( not attr.startswith("__") or
                 attr not in avoid.split() and "attr" not in attr):
                self.__setattr__(attr, self.master.__getattribute__(attr))

    def html(self): # an "enhanced" version of __repr__
        return cgi.escape(self.__repr__()).join(("<H1>","</H1>"))

    def length(self):
        return sqrt(sum(self.__iter__()))

print HTMLDecorator(inst_x).html()
print HTMLDecorator(inst_y).html()
wrapped_z = HTMLDecorator(inst_z)
print wrapped_z.length()
inst_z[0] += 70
#wrapped_z[0] += 71
wrapped_z.__setitem__(0,wrapped_z.__getitem__(0)+ 71)
print wrapped_z.html()
print HTMLDecorator(inst_b).html()
Output:
<H1>Best Guess</H1>
<H1><__main__.ClassY object at 0x891df0c></H1>
70.0
<H1>[141, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361, 400, 441, 484, 529, 576]</H1>
<H1>True</H1>
| How can I simply inherit methods from an existing instance? | Below I have a very simple example of what I'm trying to do. I want to be able to use HTMLDecorator with any other class. Ignore the fact it's called decorator, it's just a name.
import cgi
class ClassX(object):
pass # ... with own __repr__
class ClassY(object):
pass # ... with own __repr__
inst_x=ClassX()
inst_y=ClassY()
inst_z=[ i*i for i in range(25) ]
inst_b=True
class HTMLDecorator(object):
def html(self): # an "enhanced" version of __repr__
return cgi.escape(self.__repr__()).join(("<H1>","</H1>"))
print HTMLDecorator(inst_x).html()
print HTMLDecorator(inst_y).html()
wrapped_z = HTMLDecorator(inst_z)
inst_z[0] += 70
wrapped_z[0] += 71
print wrapped_z.html()
print HTMLDecorator(inst_b).html()
Output:
Traceback (most recent call last):
File "html.py", line 21, in
print HTMLDecorator(inst_x).html()
TypeError: default __new__ takes no parameters
Is what I'm trying to do possible? If so, what am I doing wrong?
| [
"\nVery close, but then I lose everything from ClassX. Below is something a collegue gave me that does do the trick, but it's hideous. There has to be a better way.\n\nLooks like you're trying to set up some sort of proxy object scheme. That's doable, and there are better solutions than your colleague's, but first consider whether it would be easier to just patch in some extra methods. This won't work for built-in classes like bool, but it will for your user-defined classes:\ndef HTMLDecorator (obj):\n def html ():\n sep = cgi.escape (repr (obj))\n return sep.join ((\"<H1>\", \"</H1>\"))\n obj.html = html\n return obj\n\nAnd here is the proxy version:\nclass HTMLDecorator(object):\n def __init__ (self, wrapped):\n self.__wrapped = wrapped\n\n def html (self):\n sep = cgi.escape (repr (self.__wrapped))\n return sep.join ((\"<H1>\", \"</H1>\"))\n\n def __getattr__ (self, name):\n return getattr (self.__wrapped, name)\n\n def __setattr__ (self, name, value):\n if not name.startswith ('_HTMLDecorator__'):\n setattr (self.__wrapped, name, value)\n return\n super (HTMLDecorator, self).__setattr__ (name, value)\n\n def __delattr__ (self, name):\n delattr (self.__wraped, name)\n\n",
"Both of John's solutions would work. Another option that allows HTMLDecorator to remain very simple and clean is to monkey-patch it in as a base class. This also works only for user-defined classes, not builtin types:\nimport cgi\n\nclass ClassX(object):\n pass # ... with own __repr__\n\nclass ClassY(object):\n pass # ... with own __repr__\n\ninst_x=ClassX()\ninst_y=ClassY()\n\nclass HTMLDecorator:\n def html(self): # an \"enhanced\" version of __repr__\n return cgi.escape(self.__repr__()).join((\"<H1>\",\"</H1>\"))\n\nClassX.__bases__ += (HTMLDecorator,)\nClassY.__bases__ += (HTMLDecorator,)\n\nprint inst_x.html()\nprint inst_y.html()\n\nBe warned, though -- monkey-patching like this comes with a high price in readability and maintainability of your code. When you go back to this code a year later, it can become very difficult to figure out how your ClassX got that html() method, especially if ClassX is defined in some other library.\n",
"\nIs what I'm trying to do possible? If so, what am I doing wrong?\n\nIt's certainly possible. What's wrong is that HTMLDecorator.__init__() doesn't accept parameters.\nHere's a simple example:\ndef decorator (func):\n def new_func ():\n return \"new_func %s\" % func ()\n return new_func\n\n@decorator\ndef a ():\n return \"a\"\n\ndef b ():\n return \"b\"\n\nprint a() # new_func a\nprint decorator (b)() # new_func b\n\n",
"@John (37448):\nSorry, I might have misled you with the name (bad choice). I'm not really looking for a decorator function, or anything to do with decorators at all. What I'm after is for the html(self) def to use ClassX or ClassY's __repr__. I want this to work without modifying ClassX or ClassY.\n",
"Ah, in that case, perhaps code like this will be useful? It doesn't really have anything to do with decorators, but demonstrates how to pass arguments to a class's initialization function and to retrieve those arguments for later.\nimport cgi\n\nclass ClassX(object):\n def __repr__ (self):\n return \"<class X>\"\n\nclass HTMLDecorator(object):\n def __init__ (self, wrapped):\n self.__wrapped = wrapped\n\n def html (self):\n sep = cgi.escape (repr (self.__wrapped))\n return sep.join ((\"<H1>\", \"</H1>\"))\n\ninst_x=ClassX()\ninst_b=True\n\nprint HTMLDecorator(inst_x).html()\nprint HTMLDecorator(inst_b).html()\n\n",
"@John (37479):\nVery close, but then I lose everything from ClassX. Below is something a collegue gave me that does do the trick, but it's hideous. There has to be a better way.\nimport cgi\nfrom math import sqrt\n\nclass ClassX(object): \n def __repr__(self): \n return \"Best Guess\"\n\nclass ClassY(object):\n pass # ... with own __repr__\n\ninst_x=ClassX()\n\ninst_y=ClassY()\n\ninst_z=[ i*i for i in range(25) ]\n\ninst_b=True\n\navoid=\"__class__ __init__ __dict__ __weakref__\"\n\nclass HTMLDecorator(object):\n def __init__(self,master):\n self.master = master\n for attr in dir(self.master):\n if ( not attr.startswith(\"__\") or \n attr not in avoid.split() and \"attr\" not in attr):\n self.__setattr__(attr, self.master.__getattribute__(attr))\n\n def html(self): # an \"enhanced\" version of __repr__\n return cgi.escape(self.__repr__()).join((\"<H1>\",\"</H1>\"))\n\n def length(self):\n return sqrt(sum(self.__iter__()))\n\nprint HTMLDecorator(inst_x).html()\nprint HTMLDecorator(inst_y).html()\nwrapped_z = HTMLDecorator(inst_z)\nprint wrapped_z.length()\ninst_z[0] += 70\n#wrapped_z[0] += 71\nwrapped_z.__setitem__(0,wrapped_z.__getitem__(0)+ 71)\nprint wrapped_z.html()\nprint HTMLDecorator(inst_b).html()\n\nOutput:\n<H1>Best Guess</H1>\n<H1><__main__.ClassY object at 0x891df0c></H1>\n70.0\n<H1>[141, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361, 400, 441, 484, 529, 576]</H1>\n<H1>True</H1>\n"
] | [
2,
2,
0,
0,
0,
0
] | [] | [] | [
"inheritance",
"object",
"oop",
"python"
] | stackoverflow_0000037479_inheritance_object_oop_python.txt |
Q:
Why is the subprocess.Popen class not named Subprocess?
The primary class in the subprocess module is named Popen, and represents a subprocess. Popen sounds like someone was trying to force the name to follow some function naming format, rather than choosing a name that actually represents what the object is. Does anyone know why it was chosen over something simple like, say, Subprocess?
A:
Now, I'm not saying that this is the greatest name in the world, but here was the idea as I understand it.
Originally, the popen family was in the os module and was an implementation of the venerable POSIX popen. The movement to the subprocess module would have been an opportune time to rename them, but I guess that keeping Popen makes it easier to find in the docs for those who have a long history with Python, or even with the venerable POSIX functions.
From its earliest posix incarnation, Popen has always been meant to open a Process and allow you to read and write from its stdio like a file. Thus the mnemonic for Popen is that it is short for ProcessOpen in an attempt to kind of, sorta, look like open.
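A minimal sketch of that open-a-process-like-a-file idea (Python 2 era; the command is arbitrary):

from subprocess import Popen, PIPE

p = Popen(["ls", "-l"], stdout=PIPE)    # roughly what os.popen("ls -l") gave you
for line in p.stdout:                   # read the child's stdout like a file
    print line,
p.wait()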
A:
subprocess.Popen replaces the group of os.popenX POSIX functions (which have a long history). I suppose that the name Popen makes it more likely for people used to the old functions to find and use the new ones.
The PEP for subprocess (PEP 324) has a little bit of discussion on the name of the module but not of class Popen. The list of PEPs (Python enhancement proposals) is in general an excellent place to start if you're looking for the rationale for features of Python.
| Why is the subprocess.Popen class not named Subprocess? | The primary class in the subprocess module is name Popen, and represents a subprocess. Popen sounds like someone was trying to force the name to follow some function naming format, rather than chosing a name that actually represents what the object is. Does anyone know why it was chosen over something simple like, say, Subprocess?
| [
"Now, I'm not saying that this is the greatest name in the world, but here was the idea as I understand it.\nOriginally, the popen family was in the os module and was an implementation of the venerable posix popen. The movement to the subprocess module would have been an opportune time to rename them, but I guess that keeping Popen makes it easier to find in the docs for those who have a long history with python or even to the venerable posix functions.\nFrom its earliest posix incarnation, Popen has always been meant to open a Process and allow you to read and write from its stdio like a file. Thus the mnemonic for Popen is that it is short for ProcessOpen in an attempt to kind of, sorta, look like open.\n",
"subprocess.Popen replaces the group of os.popenX POSIX functions (which have a long history). I suppose that the name Popen makes it more likely for people used to the old functions to find and use the new ones.\nThe PEP for subprocess (PEP 324) has a little bit of discussion on the name of the module but not of class Popen. The list of PEPs (Python enhancement proposals) is in general an excellent place to start if you're looking for the rationale for features of Python.\n"
] | [
8,
5
] | [
"I suppose the name was chosen because the functionality subprocess is replacing was formerly in the os module as the os.popen function. There could be even ways to automate migration between the two.\n"
] | [
-1
] | [
"python",
"subprocess"
] | stackoverflow_0000038197_python_subprocess.txt |
Q:
Retrieving an Oracle timestamp using Python's Win32 ODBC module
Given an Oracle table created using the following:
CREATE TABLE Log(WhenAdded TIMESTAMP(6) WITH TIME ZONE);
Using the Python ODBC module from its Win32 extensions (from the win32all package), I tried the following:
import dbi, odbc
connection = odbc.odbc("Driver=Oracle in OraHome92;Dbq=SERVER;Uid=USER;Pwd=PASSWD")
cursor = connection.cursor()
cursor.execute("SELECT WhenAdded FROM Log")
results = cursor.fetchall()
When I run this, I get the following:
Traceback (most recent call last):
...
results = cursor.fetchall()
dbi.operation-error: [Oracle][ODBC][Ora]ORA-00932: inconsistent datatypes: expected %s got %s
in FETCH
The other data types I've tried (VARCHAR2, BLOB) do not cause this problem. Is there a way of retrieving timestamps?
A:
I believe this is a bug in the Oracle ODBC driver. Basically, the Oracle ODBC driver does not support the TIMESTAMP WITH (LOCAL) TIME ZONE data types, only the TIMESTAMP data type. As you have discovered, one workaround is in fact to use the TO_CHAR method.
In your example you are not actually reading the time zone information. If you have control of the table you could convert it to a straight TIMESTAMP column. If you don't have control over the table, another solution may be to create a view that converts from TIMESTAMP WITH TIME ZONE to TIMESTAMP via a string - sorry, I don't know if there is a way to convert directly from TIMESTAMP WITH TIME ZONE to TIMESTAMP.
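For what it's worth, a sketch of that view idea, converting via a string as described (the view name and format mask are assumptions, not something tested against the driver):

cursor.execute("""
    CREATE VIEW LogNoTz AS
    SELECT TO_TIMESTAMP(TO_CHAR(WhenAdded, 'YYYY-MM-DD HH24:MI:SS.FF'),
                        'YYYY-MM-DD HH24:MI:SS.FF') AS WhenAdded
    FROM Log
""")
cursor.execute("SELECT WhenAdded FROM LogNoTz")
results = cursor.fetchall()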
A:
My solution to this, that I hope can be bettered, is to use Oracle to explicitly convert the TIMESTAMP into a string:
cursor.execute("SELECT TO_CHAR(WhenAdded, 'YYYY-MM-DD HH:MI:SSAM') FROM Log")
This works, but isn't portable. I'd like to use the same Python script against a SQL Server database, so an Oracle-specific solution (such as TO_CHAR) won't work.
| Retrieving an Oracle timestamp using Python's Win32 ODBC module | Given an Oracle table created using the following:
CREATE TABLE Log(WhenAdded TIMESTAMP(6) WITH TIME ZONE);
Using the Python ODBC module from its Win32 extensions (from the win32all package), I tried the following:
import dbi, odbc
connection = odbc.odbc("Driver=Oracle in OraHome92;Dbq=SERVER;Uid=USER;Pwd=PASSWD")
cursor = connection.cursor()
cursor.execute("SELECT WhenAdded FROM Log")
results = cursor.fetchall()
When I run this, I get the following:
Traceback (most recent call last):
...
results = cursor.fetchall()
dbi.operation-error: [Oracle][ODBC][Ora]ORA-00932: inconsistent datatypes: expected %s got %s
in FETCH
The other data types I've tried (VARCHAR2, BLOB) do not cause this problem. Is there a way of retrieving timestamps?
| [
"I believe this is a bug in the Oracle ODBC driver. Basically, the Oracle ODBC driver does not support the TIMESTAMP WITH (LOCAL) TIME ZONE data types, only the TIMESTAMP data type. As you have discovered, one workaround is in fact to use the TO_CHAR method.\nIn your example you are not actually reading the time zone information. If you have control of the table you could convert it to a straight TIMESTAMP column. If you don't have control over the table, another solution may be to create a view that converts from TIMESTAMP WITH TIME ZONE to TIMESTAMP via a string - sorry, I don't know if there is a way to convert directly from TIMESTAMP WITH TIME ZONE to TIMESTAMP.\n",
"My solution to this, that I hope can be bettered, is to use Oracle to explicitly convert the TIMESTAMP into a string:\ncursor.execute(\"SELECT TO_CHAR(WhenAdded, 'YYYY-MM-DD HH:MI:SSAM') FROM Log\")\n\nThis works, but isn't portable. I'd like to use the same Python script against a SQL Server database, so an Oracle-specific solution (such as TO_CHAR) won't work.\n"
] | [
2,
1
] | [] | [] | [
"ora_00932",
"oracle",
"python"
] | stackoverflow_0000038435_ora_00932_oracle_python.txt |
Q:
Ruby "is" equivalent
Is there a Ruby equivalent for Python's "is"? It tests whether two objects are identical (i.e. have the same memory location).
A:
Use a.equal? b
http://www.ruby-doc.org/core/classes/Object.html
Unlike ==, the equal? method should never be overridden by subclasses: it is used to determine object identity (that is, a.equal?(b) iff a is the same object as b).
A:
You could also use __id__. This gives you the object's internal ID number, which is always unique. To check if two objects are the same, try
a.__id__ == b.__id__
This is how Ruby's standard library does it as far as I can tell (see group_by and others).
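For reference, the Python side of the analogy (object identity versus value equality):

a = [1, 2]
b = a
c = [1, 2]
print a is b            # True  -- same object
print a is c            # False -- equal value, different object
print id(a) == id(c)    # False -- the same test spelled with id(), like __id__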
| Ruby "is" equivalent | Is there a Ruby equivalent for Python's "is"? It tests whether two objects are identical (i.e. have the same memory location).
| [
"Use a.equal? b\nhttp://www.ruby-doc.org/core/classes/Object.html\n\nUnlike ==, the equal? method should never be overridden by subclasses: it is used to determine object identity (that is, a.equal?(b) iff a is the same object as b). \n\n",
"You could also use __id__. This gives you the objects internal ID number, which is always unique. To check if to objects are the same, try\n\na.__id__ = b.__id__\n\nThis is how Ruby's standard library does it as far as I can tell (see group_by and others).\n"
] | [
13,
2
] | [] | [] | [
"python",
"ruby"
] | stackoverflow_0000035634_python_ruby.txt |
Q:
Why is my instance variable not in __dict__?
If I create a class A as follows:
class A:
    def __init__(self):
        self.name = 'A'
Inspecting the __dict__ member looks like {'name': 'A'}
If however I create a class B:
class B:
    name = 'B'
__dict__ is empty.
What is the difference between the two, and why doesn't name show up in B's __dict__?
A:
B.name is a class attribute, not an instance attribute. It shows up in B.__dict__, but not in b = B(); b.__dict__.
The distinction is obscured somewhat because when you access an attribute on an instance, the class dict is a fallback. So in the above example, b.name will give you the value of B.name.
A:
class A:
    def __init__(self):
        self.name = 'A'

a = A()
Creates an attribute on the object instance a of type A and it can therefore be found in: a.__dict__
class B:
    name = 'B'

b = B()
Creates an attribute on the class B, and the attribute can be found in B.__dict__. Alternatively, if you have an instance b of type B, you can see the class-level attributes in b.__class__.__dict__.
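A small demo of the fallback and shadowing behaviour described above (Python 2):

class B(object):
    name = 'B'

b = B()
print b.__dict__            # {} -- the instance dict is empty
print B.__dict__['name']    # 'B' -- the attribute lives on the class
print b.name                # 'B' -- lookup falls back to the class dict
b.name = 'mine'             # assignment creates a real instance attribute
print b.__dict__            # {'name': 'mine'} -- now shadows the class attribute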
| Why is my instance variable not in __dict__? | If I create a class A as follows:
class A:
def __init__(self):
self.name = 'A'
Inspecting the __dict__ member looks like {'name': 'A'}
If however I create a class B:
class B:
name = 'B'
__dict__ is empty.
What is the difference between the two, and why doesn't name show up in B's __dict__?
| [
"B.name is a class attribute, not an instance attribute. It shows up in B.__dict__, but not in b = B(); b.__dict__.\nThe distinction is obscured somewhat because when you access an attribute on an instance, the class dict is a fallback. So in the above example, b.name will give you the value of B.name.\n",
"class A:\n def _ _init_ _(self):\n self.name = 'A'\na = A()\n\nCreates an attribute on the object instance a of type A and it can therefore be found in: a.__dict__\nclass B:\n name = 'B'\nb = B()\n\nCreates an attribute on the class B and the attribute can be found in B.__dict__ alternatively if you have an instance b of type B you can see the class level attributes in b.__class__.__dict__\n"
] | [
46,
12
] | [] | [] | [
"python"
] | stackoverflow_0000035805_python.txt |
Q:
Javascript equivalent of Python's locals()?
In Python one can get a dictionary of all local and global variables in the current scope with the built-in functions locals() and globals(). Is there some equivalent way of doing this in Javascript? For instance, I would like to do something like the following:
var foo = function(){ alert('foo'); };
var bar = function(){ alert('bar'); };
var s = 'foo';
locals()[s](); // alerts 'foo'
Is this at all possible, or should I just be using a local object for the lookup?
A:
locals() - No.
globals() - Yes.
window is a reference to the global scope, like globals() in python.
globals()["foo"]
is the same as:
window["foo"]
A:
Well, I don't think that there is something like that in js. You can always use eval instead of locals(). Like this:
eval(s+"()");
You just have to know that the function foo actually exists.
Edit:
Don't use eval:) Use:
var functionName="myFunctionName";
window[functionName]();
A:
I seem to remember Brendan Eich commented on this in a recent podcast; if I recall correctly, it's not being considered, as it adds unreasonable restrictions to optimization. He compared it to the arguments local in that, while useful for varargs, its very existence removes the ability to guess at what a function will touch just by looking at its definition.
BTW: I believe JS did have support for accessing locals through the arguments local at one time - a quick search shows this has been deprecated though.
A:
@e-bartek, I think that window[functionName] won't work if you are in some closure, and the function name is local to that closure. For example:
function foo() {
    var bar = function () {
        alert('hello world');
    };
    var s = 'bar';
    window[s](); // this won't work
}
In this case, s is 'bar', but the function 'bar' only exists inside the scope of the function 'foo'. It is not defined in the window scope.
Of course, this doesn't really answer the original question, I just wanted to chime in on this response. I don't believe there is a way to do what the original question asked.
A:
@pkaeding
Yes, you're right. window[functionName]() doesn't work in this case, but eval does. If I needed something like this, I'd create my own object to keep those functions together.
var func = {};
func.bar = ...;
var s = "bar";
func[s]();
| Javascript equivalent of Python's locals()? | In Python one can get a dictionary of all local and global variables in the current scope with the built-in functions locals() and globals(). Is there some equivalent way of doing this in Javascript? For instance, I would like to do something like the following:
var foo = function(){ alert('foo'); };
var bar = function(){ alert('bar'); };
var s = 'foo';
locals()[s](); // alerts 'foo'
Is this at all possible, or should I just be using a local object for the lookup?
| [
"\nlocals() - No. \nglobals() - Yes.\n\nwindow is a reference to the global scope, like globals() in python.\nglobals()[\"foo\"]\n\nis the same as:\nwindow[\"foo\"]\n\n",
"Well, I don't think that there is something like that in js. You can always use eval instead of locals(). Like this: \neval(s+\"()\");\n\nYou just have to know that actually function foo exists.\nEdit:\nDon't use eval:) Use:\nvar functionName=\"myFunctionName\";\nwindow[functionName]();\n\n",
"I seem to remember Brendan Eich commented on this in a recent podcast; if i recall correctly, it's not being considered, as it adds unreasonable restrictions to optimization. He compared it to the arguments local in that, while useful for varargs, its very existence removes the ability to guess at what a function will touch just by looking at its definition. \nBTW: i believe JS did have support for accessing locals through the arguments local at one time - a quick search shows this has been deprecated though.\n",
"@e-bartek, I think that window[functionName] won't work if you in some closure, and the function name is local to that closure. For example:\nfunction foo() {\n var bar = function () {\n alert('hello world');\n };\n var s = 'bar';\n window[s](); // this won't work\n}\n\nIn this case, s is 'bar', but the function 'bar' only exists inside the scope of the function 'foo'. It is not defined in the window scope.\nOf course, this doesn't really answer the original question, I just wanted to chime in on this response. I don't believe there is a way to do what the original question asked.\n",
"@pkaeding\nYes, you're right. window[functionName]() doesn't work in this case, but eval does. If I needed something like this, I'd create my own object to keep those functions together.\nvar func = {};\nfunc.bar = ...;\nvar s = \"bar\";\nfunc[s]();\n\n"
] | [
18,
4,
3,
0,
0
] | [
"AFAIK, no. If you just want to check the existence of a given variable, you can do it by testing for it, something like this:\nif (foo) foo();\n\n"
] | [
-1
] | [
"javascript",
"python"
] | stackoverflow_0000039960_javascript_python.txt |
Q:
Using C in a shared multi-platform POSIX environment
I write tools that are used in a shared workspace. Since there are multiple OS's working in this space, we generally use Python and standardize the version that is installed across machines. However, if I wanted to write some things in C, I was wondering if maybe I could have the application wrapped in a Python script, that detected the operating system and fired off the correct version of the C application. Each platform has GCC available and uses the same shell.
One idea was to have the C compiled to the users local ~/bin, with timestamp comparison with C code so it is not compiled each run, but only when code is updated. Another was to just compile it for each platform, and have the wrapper script select the proper executable.
Is there an accepted/stable process for this? Are there any catches? Are there alternatives (assuming the absolute need to use native C code)?
Clarification: Multiple OS's are involved that do not share ABI. E.g. OS X, various Linuxes, BSD, etc. I need to be able to update the code in place in shared folders and have the new code working more or less instantaneously. Distributing binary or source packages is less than ideal.
A:
Launching a Python interpreter instance just to select the right binary to run would be much heavier than you need. I'd distribute a shell .rc file which provides aliases.
In /shared/bin, you put the various binaries: /shared/bin/toolname-mac, /shared/bin/toolname-debian-x86, /shared/bin/toolname-netbsd-dreamcast, etc. Then, in the common shared shell .rc file, you put the logic to set the aliases according to platform, so that on OSX, it gets alias toolname=/shared/bin/toolname-mac, and so forth.
This won't work as well if you're adding new tools all the time, because the users will need to reload the aliases.
I wouldn't recommend distributing tools this way, though. Testing and qualifying new builds of the tools should be taking up enough time and effort that the extra time required to distribute the tools to the users is trivial. You seem to be optimizing to reduce the distribution time. Replacing tools that quickly in a live environment is all too likely to result in lengthy and confusing downtime if anything goes wrong in writing and building the tools--especially when subtle cross-platform issues creep in.
A:
Also, you could use autoconf and distribute your application in source form only. :)
A:
You know, you should look at static linking.
These days, we all have HUGE hard drives, and a few extra megabytes (for carrying around libc and what not) is really not that big a deal anymore.
You could also try running your applications in chroot() jails and distributing those.
A:
Depending on your mix of OSes, you might be better off creating packages for each class of system.
Alternatively, if they all share the same ABI and hardware architecture, you could also compile static binaries.
| Using C in a shared multi-platform POSIX environment | I write tools that are used in a shared workspace. Since there are multiple OS's working in this space, we generally use Python and standardize the version that is installed across machines. However, if I wanted to write some things in C, I was wondering if maybe I could have the application wrapped in a Python script, that detected the operating system and fired off the correct version of the C application. Each platform has GCC available and uses the same shell.
One idea was to have the C compiled to the users local ~/bin, with timestamp comparison with C code so it is not compiled each run, but only when code is updated. Another was to just compile it for each platform, and have the wrapper script select the proper executable.
Is there an accepted/stable process for this? Are there any catches? Are there alternatives (assuming the absolute need to use native C code)?
Clarification: Multiple OS's are involved that do not share ABI. Eg. OS X, various Linuxes, BSD etc. I need to be able to update the code in place in shared folders and have the new code working more or less instantaneously. Distributing binary or source packages is less than ideal.
| [
"Launching a Python interpreter instance just to select the right binary to run would be much heavier than you need. I'd distribute a shell .rc file which provides aliases.\nIn /shared/bin, you put the various binaries: /shared/bin/toolname-mac, /shared/bin/toolname-debian-x86, /shared/bin/toolname-netbsd-dreamcast, etc. Then, in the common shared shell .rc file, you put the logic to set the aliases according to platform, so that on OSX, it gets alias toolname=/shared/bin/toolname-mac, and so forth.\nThis won't work as well if you're adding new tools all the time, because the users will need to reload the aliases.\nI wouldn't recommend distributing tools this way, though. Testing and qualifying new builds of the tools should be taking up enough time and effort that the extra time required to distribute the tools to the users is trivial. You seem to be optimizing to reduce the distribution time. Replacing tools that quickly in a live environment is all too likely to result in lengthy and confusing downtime if anything goes wrong in writing and building the tools--especially when subtle cross-platform issues creep in.\n",
"Also, you could use autoconf and distribute your application in source form only. :)\n",
"You know, you should look at static linking.\nThese days, we all have HUGE hard drives, and a few extra megabytes (for carrying around libc and what not) is really not that big a deal anymore. \nYou could also try running your applications in chroot() jails and distributing those.\n",
"Depending on your mix os OSes, you might be better off creating packages for each class of system.\nAlternatively, if they all share the same ABI and hardware architecture, you could also compile static binaries.\n"
] | [
2,
1,
0,
0
] | [] | [] | [
"c",
"cross_platform",
"posix",
"python",
"scripting"
] | stackoverflow_0000039847_c_cross_platform_posix_python_scripting.txt |
Q:
cx_Oracle: how do I get the ORA-xxxxx error number?
In a try/except block, how do I extract the Oracle error number?
A:
try:
    cursor.execute("select 1 / 0 from dual")
except cx_Oracle.DatabaseError, e:
    error, = e
    print "Code:", error.code
    print "Message:", error.message
This results in the following output:
Code: 1476
Message: ORA-01476: divisor is equal to zero
| cx_Oracle: how do I get the ORA-xxxxx error number? | In a try/except block, how do I extract the Oracle error number?
| [
"try:\n cursor.execute(\"select 1 / 0 from dual\")\nexcept cx_Oracle.DatabaseError, e:\n error, = e\n print \"Code:\", error.code\n print \"Message:\", error.message\n\nThis results in the following output:\nCode: 1476\nMessage: ORA-01476: divisor is equal to zero\n\n"
] | [
13
] | [] | [] | [
"cx_oracle",
"oracle",
"python"
] | stackoverflow_0000040586_cx_oracle_oracle_python.txt |
Q:
Is there a python module for regex matching in zip files
I have over a million text files compressed into 40 zip files. I also have a list of about 500 model names of phones. I want to find out the number of times a particular model was mentioned in the text files.
Is there any Python module which can do a regex match on the files without unzipping them? Is there a simple way to solve this problem without unzipping?
A:
There's nothing that will automatically do what you want.
However, there is a python zipfile module that will make this easy to do. Here's how to iterate over the lines in the file.
#!/usr/bin/python

import zipfile
f = zipfile.ZipFile('myfile.zip')

for subfile in f.namelist():
    print subfile
    data = f.read(subfile)
    for line in data.split('\n'):
        print line
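Tying that back to the original question, a sketch of the counting loop (the zip name and model names are made up):

import re
import zipfile

models = ['N95', 'E71', '3310']     # hypothetical phone model names
pattern = re.compile('|'.join(re.escape(m) for m in models))
counts = dict((m, 0) for m in models)

f = zipfile.ZipFile('myfile.zip')
for subfile in f.namelist():
    data = f.read(subfile)
    for match in pattern.finditer(data):
        counts[match.group(0)] += 1
print counts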
A:
You could loop through the zip files, reading individual files using the zipfile module and running your regex on those, eliminating the need to unzip all the files at once.
I'm fairly certain that you can't run a regex over the zipped data, at least not meaningfully.
A:
To access the contents of a zip file you have to unzip it, although the zipfile package makes this fairly easy, as you can unzip each file within an archive individually.
Python zipfile module
A:
Isn't it (at least theoretically) possible, to read in the ZIP's Huffman coding and then translate the regexp into the Huffman code? Might this be more efficient than first de-compressing the data, then running the regexp?
(Note: I know it wouldn't be quite that simple: you'd also have to deal with other aspects of the ZIP coding—file layout, block structures, back-references—but one imagines this could be fairly lightweight.)
EDIT: Also note that it's probably much more sensible to just use the zipfile solution.
| Is there a python module for regex matching in zip files | I have over a million text files compressed into 40 zip files. I also have a list of about 500 model names of phones. I want to find out the number of times a particular model was mentioned in the text files.
Is there any python module which can do a regex match on the files without unzipping it. Is there a simple way to solve this problem without unzipping?
| [
"There's nothing that will automatically do what you want.\nHowever, there is a python zipfile module that will make this easy to do. Here's how to iterate over the lines in the file.\n#!/usr/bin/python\n\nimport zipfile\nf = zipfile.ZipFile('myfile.zip')\n\nfor subfile in f.namelist():\n print subfile\n data = f.read(subfile)\n for line in data.split('\\n'):\n print line\n\n",
"You could loop through the zip files, reading individual files using the zipfile module and running your regex on those, eliminating to unzip all the files at once. \nI'm fairly certain that you can't run a regex over the zipped data, at least not meaningfully.\n",
"To access the contents of a zip file you have to unzip it, although the zipfile package makes this fairly easy, as you can unzip each file within an archive individually.\nPython zipfile module\n",
"Isn't it (at least theoretically) possible, to read in the ZIP's Huffman coding and then translate the regexp into the Huffman code? Might this be more efficient than first de-compressing the data, then running the regexp?\n(Note: I know it wouldn't be quite that simple: you'd also have to deal with other aspects of the ZIP coding—file layout, block structures, back-references—but one imagines this could be fairly lightweight.)\nEDIT: Also note that it's probably much more sensible to just use the zipfile solution.\n"
] | [
10,
0,
0,
0
] | [] | [] | [
"python",
"regex",
"text_processing",
"zip"
] | stackoverflow_0000014281_python_regex_text_processing_zip.txt |
Q:
How do I do monkeypatching in python?
I've had to do some introspection in python and it wasn't pretty:
name = sys._getframe(1).f_code
name = "%s:%d %s()" %(os.path.split(name.co_filename)[1],name.co_firstlineno,name.co_name)
To get something like
foo.py:22 bar() blah blah
In our debugging output.
I'd ideally like to prepend anything to stderr with this sort of information --
Is it possible to change the behaviour of print globally within python?
A:
A print statement does its IO through "sys.stdout.write" so you can override sys.stdout if you want to manipulate the print stream.
A:
The python inspect module makes this a lot easier and cleaner.
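Putting the two answers together, a hedged sketch of wrapping stderr with inspect-based caller info (Python 2; the class name is made up and the stack-frame indexing is approximate):

import sys
import os
import inspect

class AnnotatingStream(object):
    def __init__(self, stream):
        self.stream = stream
    def write(self, text):
        if text.strip():    # skip the bare newlines that print emits
            frame = inspect.stack()[1]  # (frame, filename, lineno, funcname, ...)
            self.stream.write("%s:%d %s() " % (
                os.path.split(frame[1])[1], frame[2], frame[3]))
        self.stream.write(text)

sys.stderr = AnnotatingStream(sys.stderr)
print >> sys.stderr, "blah blah"    # e.g. foo.py:22 bar() blah blah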
| How do I do monkeypatching in python? | I've had to do some introspection in python and it wasn't pretty:
name = sys._getframe(1).f_code
name = "%s:%d %s()" %(os.path.split(name.co_filename)[1],name.co_firstlineno,name.co_name)
To get something like
foo.py:22 bar() blah blah
In our debugging output.
I'd ideally like to prepend anything to stderr with this sort of information --
Is it possible to change the behaviour of print globally within python?
| [
"A print statement does its IO through \"sys.stdout.write\" so you can override sys.stdout if you want to manipulate the print stream.\n",
"The python inspect module makes this a lot easier and cleaner. \n"
] | [
3,
1
] | [] | [] | [
"monkeypatching",
"python"
] | stackoverflow_0000041562_monkeypatching_python.txt |
Q:
Standard way to open a folder window in linux?
I want to open a folder window, in the appropriate file manager, from within a cross-platform (windows/mac/linux) Python application.
On OSX, I can open a window in the finder with
os.system('open "%s"' % foldername)
and on Windows with
os.startfile(foldername)
What about unix/linux? Is there a standard way to do this or do I have to special case gnome/kde/etc and manually run the appropriate application (nautilus/konqueror/etc)?
This looks like something that could be specified by the freedesktop.org folks (a python module, similar to webbrowser, would also be nice!).
A:
os.system('xdg-open "%s"' % foldername)
xdg-open can be used for files/urls also
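Combining this with the question's own Windows and OS X cases, a minimal cross-platform helper might look like this (a sketch, not a standard module):

import os
import sys
import subprocess

def open_folder(path):
    if sys.platform == 'win32':
        os.startfile(path)
    elif sys.platform == 'darwin':
        subprocess.call(['open', path])
    else:
        subprocess.call(['xdg-open', path])     # freedesktop.org systems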
A:
this would probably have to be done manually, or have it as a config item, since there are many file managers that users may want to use, providing a way to pass command options as well.
There might be a function that launches the defaults for kde or gnome in their respective toolkits but I haven't had reason to look for them.
A:
You're going to have to do this based on the running window manager. OSX and Windows have a (de facto) standard way because there is only one choice.
You shouldn't need to specify the exact filemanager application, though, this should be possible to do through the wm. I know Gnome does, and it's important to do this in KDE since there are two possible file managers (Konqueror/Dolphin) that may be in use.
I agree that this would be a good thing for freedesktop.org to standardize, although I doubt it will happen unless someone steps up and volunteers to do it.
EDIT: I wasn't aware of xdg-open. Good to know!
| Standard way to open a folder window in linux? | I want to open a folder window, in the appropriate file manager, from within a cross-platform (windows/mac/linux) Python application.
On OSX, I can open a window in the finder with
os.system('open "%s"' % foldername)
and on Windows with
os.startfile(foldername)
What about unix/linux? Is there a standard way to do this or do I have to special case gnome/kde/etc and manually run the appropriate application (nautilus/konqueror/etc)?
This looks like something that could be specified by the freedesktop.org folks (a python module, similar to webbrowser, would also be nice!).
| [
"os.system('xdg-open \"%s\"' % foldername)\n\nxdg-open can be used for files/urls also\n",
"this would probably have to be done manually, or have as a config item since there are many file managers that users may want to use. Providing a way for command options as well.\nThere might be an function that launches the defaults for kde or gnome in their respective toolkits but I haven't had reason to look for them.\n",
"You're going to have to do this based on the running window manager. OSX and Windows have a (defacto) standard way because there is only one choice.\nYou shouldn't need to specify the exact filemanager application, though, this should be possible to do through the wm. I know Gnome does, and it's important to do this in KDE since there are two possible file managers (Konqueror/Dolphin) that may be in use.\nI agree that this would be a good thing for freedesktop.org to standardize, although I doubt it will happen unless someone steps up and volunteers to do it.\n\nEDIT: I wasn't aware of xdg-open. Good to know!\n"
] | [
15,
0,
0
] | [] | [] | [
"cross_platform",
"desktop",
"linux",
"python"
] | stackoverflow_0000041969_cross_platform_desktop_linux_python.txt |
Q:
Pure Python library to generate Identicons?
Does anyone know of a FOSS Python lib for generating Identicons? I've looked, but so far I haven't had much luck.
A:
I've found two implementations:
http://coderepos.org/share/browser/lang/python/misc/identicon.py
http://code.google.com/p/visicon/
| Pure Python library to generate Identicons? | Does anyone know of a FOSS Python lib for generating Identicons? I've looked, but so far I haven't had much luck.
| [
"I've found two implementations:\nhttp://coderepos.org/share/browser/lang/python/misc/identicon.py\nhttp://code.google.com/p/visicon/\n"
] | [
12
] | [] | [] | [
"identicon",
"python"
] | stackoverflow_0000042093_identicon_python.txt |
Q:
How can I get a commit message from a bzr post-commit hook?
I'm trying to write a bzr post-commit hook for my private bugtracker, but I'm stuck at the function signature of
post_commit(local, master, old_revno, old_revid, new_revno, new_revid)
How can I extract the commit message for the branch from this with bzrlib in Python?
A:
And the answer is like so:
def check_commit_msg(local, master, old_revno, old_revid, new_revno, new_revid):
    branch = local or master
    revision = branch.repository.get_revision(new_revid)
    print revision.message
local and master are Branch objects, so once you have a revision, it's easy to extract the message.
| How can I get a commit message from a bzr post-commit hook? | I'm trying to write a bzr post-commit hook for my private bugtracker, but I'm stuck at the function signature of
post_commit(local, master, old_revno, old_revid, new_revno, mew_revid)
How can I extract the commit message for the branch from this with bzrlib in Python?
| [
"And the answer is like so:\ndef check_commit_msg(local, master, old_revno, old_revid, new_revno, new_revid):\n branch = local or master\n revision = branch.repository.get_revision(new_revid)\n print revision.message\n\nlocal and master are Branch objects, so once you have a revision, it's easy to extract the message.\n"
] | [
5
] | [] | [] | [
"bazaar",
"dvcs",
"python"
] | stackoverflow_0000043099_bazaar_dvcs_python.txt |
Q:
Can the HTTP version or headers affect the visual appearance of a web page?
I know, I would have thought the answer was obviously "no" as well, but I am experiencing a strange situation where when I view my site from our staging server it appears slightly larger than when I view it from my local dev server. I have used Charles to confirm that all of the content -- the HTML, the images, the CSS, the javascript, everything is the same.
The ONLY difference in the traffic is that (because the local site is served from the Django development mode server) the response headers look like this:
HTTP/1.0 200 OK
Server WSGIServer/0.1 Python/2.5.2
Date Thu, 04 Sep 2008 23:56:10 GMT
Vary Cookie
Content-Length 2301
Content-Type text/html; charset=utf-8
Whereas on the staging server (where Django is running inside Apache) the headers look like this:
HTTP/1.1 200 OK
Date Thu, 04 Sep 2008 23:56:06 GMT
Server Apache/2.2.8 (Ubuntu) mod_python/3.3.1 Python/2.5.2 PHP/5.2.4-2ubuntu5 with Suhosin-Patch
Vary Cookie
Content-Length 2301
Content-Type text/html; charset=utf-8
So, as far as I can tell the only differences are HTTP/1.1 vs HTTP/1.0, the server identifier (Apache vs WSGIServer) and the order of the Date/Server headers.
To elaborate a bit further on the differences in appearance, basically it appears as if the version of the site on the staging server is "zoomed in" by about 10%. For example, the primary logo which dominates our home page is 220 pixels wide, but when served from our staging server it shows up as 245 pixels wide. Everything else on the page (other images, text, spacing, etc.) is also proportionately larger.
This is all in Firefox 3. I don't have any other browsers available to test with at the moment.
Has anyone else encountered any bizarre behavior anything like this before? I am at a loss.
A:
Have you tried View -> Zoom -> Reset on both sites?
| Can the HTTP version or headers affect the visual appearance of a web page? | I know, I would have thought the answer was obviously "no" as well, but I am experiencing a strange situation where when I view my site from our staging server it appears slightly larger than when I view it from my local dev server. I have used Charles to confirm that all of the content -- the HTML, the images, the CSS, the javascript, everything is the same.
The ONLY difference in the traffic is that (because the local site is served from the Django development mode server) the response headers look like this:
HTTP/1.0 200 OK
Server WSGIServer/0.1 Python/2.5.2
Date Thu, 04 Sep 2008 23:56:10 GMT
Vary Cookie
Content-Length 2301
Content-Type text/html; charset=utf-8
Whereas on the staging server (where Django is running inside Apache) the headers look like this:
HTTP/1.1 200 OK
Date Thu, 04 Sep 2008 23:56:06 GMT
Server Apache/2.2.8 (Ubuntu) mod_python/3.3.1 Python/2.5.2 PHP/5.2.4-2ubuntu5 with Suhosin-Patch
Vary Cookie
Content-Length 2301
Content-Type text/html; charset=utf-8
So, as far as I can tell the only differences are HTTP/1.1 vs HTTP/1.0, the server identifer (Apache vs WSGIServer) and the order of the Date/Server headers.
To elaborate a bit further on the differences in appearance, basically it appears as if the version of the site on the staging server is "zoomed in" by about 10%. For example, the primary logo which dominates our home page is 220 pixels wide but when server from our staging server shows up as 245 pixels wide. Everything else on the page, (other images, text, spacing, etc) is also proportionately larger.
This is all in Firefox 3. I don't have any other browsers available to test with at the moment.
Has anyone else encountered any bizarre behavior anything like this before? I am at a loss.
| [
"Have you tried View -> Zoom -> Reset on both sites?\n"
] | [
9
] | [] | [] | [
"django",
"firefox",
"python"
] | stackoverflow_0000045013_django_firefox_python.txt |
Q:
Python packages - import by class, not file
Say I have the following file structure:
app/
  app.py
  controllers/
    __init__.py
    project.py
    plugin.py
If app/controllers/project.py defines a class Project, app.py would import it like this:
from app.controllers.project import Project
I'd like to just be able to do:
from app.controllers import Project
How would this be done?
A:
You need to put
from project import Project
in controllers/__init__.py.
Note that when Absolute imports become the default (Python 2.7?), you will want to add a dot before the module name (to avoid collisions with a top-level model named project), i.e.,
from .project import Project
| Python packages - import by class, not file | Say I have the following file structure:
app/
app.py
controllers/
__init__.py
project.py
plugin.py
If app/controllers/project.py defines a class Project, app.py would import it like this:
from app.controllers.project import Project
I'd like to just be able to do:
from app.controllers import Project
How would this be done?
| [
"You need to put\nfrom project import Project\n\nin controllers/__init__.py.\nNote that when Absolute imports become the default (Python 2.7?), you will want to add a dot before the module name (to avoid collisions with a top-level model named project), i.e.,\nfrom .project import Project\n\n"
] | [
103
] | [] | [] | [
"package",
"python"
] | stackoverflow_0000045122_package_python.txt |
Q:
Where can I find the time and space complexity of the built-in sequence types in Python
I've been unable to find a source for this information, short of looking through the Python source code myself to determine how the objects work. Does anyone know where I could find this online?
A:
Check out the TimeComplexity page on the py dot org wiki. It covers set/dicts/lists/etc at least as far as time complexity goes.
A:
Raymond D. Hettinger does an excellent talk (slides) about Python's built-in collections called 'Core Python Containers - Under the Hood'. The version I saw focussed mainly on set and dict, but list was covered too.
There are also some photos of the pertinent slides from EuroPython in a blog.
Here is a summary of my notes on list:
Stores items as an array of pointers. Subscript costs O(1) time. Append costs amortized O(1) time. Insert costs O(n) time.
Tries to avoid memcpy when growing by over-allocating. Many small lists will waste a lot of space, but large lists never waste more than about 12.5% to overallocation.
Some operations pre-size. Examples given were range(n), map(), list(), [None] * n, and slicing.
When shrinking, the array is realloced only when it is wasting 50% of space. pop is cheap.
A:
If you're asking what I think you're asking, you can find them here... page 476 and on.
It's written around optimization techniques for Python; it's mostly Big-O notation of time efficiency, not much on memory.
| Where can I find the time and space complexity of the built-in sequence types in Python | I've been unable to find a source for this information, short of looking through the Python source code myself to determine how the objects work. Does anyone know where I could find this online?
| [
"Checkout the TimeComplexity page on the py dot org wiki. It covers set/dicts/lists/etc at least as far as time complexity goes.\n",
"Raymond D. Hettinger does an excellent talk (slides) about Python's built-in collections called 'Core Python Containers - Under the Hood'. The version I saw focussed mainly on set and dict, but list was covered too.\nThere are also some photos of the pertinent slides from EuroPython in a blog.\nHere is a summary of my notes on list:\n\nStores items as an array of pointers. Subscript costs O(1) time. Append costs amortized O(1) time. Insert costs O(n) time.\nTries to avoid memcpy when growing by over-allocating. Many small lists will waste a lot of space, but large lists never waste more than about 12.5% to overallocation.\nSome operations pre-size. Examples given were range(n), map(), list(), [None] * n, and slicing.\nWhen shrinking, the array is realloced only when it is wasting 50% of space. pop is cheap.\n\n",
"If your asking what I think your asking, you can find them Here... page 476 and on.\nIt's written around optimization techniques for Python; It's mostly Big-O notation of time efficiencies not much memory.\n"
] | [
19,
15,
2
] | [] | [] | [
"big_o",
"complexity_theory",
"performance",
"python",
"sequences"
] | stackoverflow_0000045228_big_o_complexity_theory_performance_python_sequences.txt |
Q:
Pylons error - 'MySQL server has gone away'
I'm using Pylons (a python framework) to serve a simple web application, but it seems to die from time to time, with this in the error log: (2006, 'MySQL server has gone away')
I did a bit of checking, and saw that this was because the connections to MySQL were not being renewed. This shouldn't be a problem though, because the sqlalchemy.pool_recycle in the config file should automatically keep it alive. The default was 3600, but I dialed it back to 1800 because of this problem. It helped a bit, but 3600 should be fine according to the docs. The errors still happen semi-regularly. I don't want to lower it too much though and DOS my own database :).
Maybe something in my MySQL config is goofy? Not sure where to look exactly.
Other relevant details:
Python 2.5
Pylons: 0.9.6.2 (w/ sql_alchemy)
MySQL: 5.0.51
A:
I think I fixed it. It turns out I had a simple config error. My ini file read:
sqlalchemy.default.url = [connection string here]
sqlalchemy.pool_recycle = 1800
The problem is that my environment.py file declared that the engine would only map keys with the prefix sqlalchemy.default, so pool_recycle was ignored.
The solution is to simply change the second line in the ini to:
sqlalchemy.default.pool_recycle = 1800
A:
You might want to check MySQL's timeout variables:
show variables like '%timeout%';
You're probably interested in wait_timeout (less likely but possible: interactive_timeout). On Debian and Ubuntu, the defaults are 28800 (MySQL kills connections after 8 hours), but maybe the default for your platform is different or whoever administrates the server has configured things differently.
AFAICT, pool_recycle doesn't actually keep the connections alive; it expires them on its own before MySQL kills them. I'm not familiar with pylons, but if causing the connections to intermittently do a SELECT 1; is an option, that will keep them alive at the cost of basically no server load and minimal network traffic. One final thought: are you somehow managing to use a connection that pylons thinks it has expired?
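If you did want the keep-alive route, a minimal sketch (assumes a plain SQLAlchemy engine; how you schedule the ping is up to you):

from sqlalchemy import create_engine

engine = create_engine("mysql://user:passwd@localhost/dbname")

def ping(engine):
    engine.execute("SELECT 1")  # trivial query stops MySQL idling the connection out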
| Pylons error - 'MySQL server has gone away' | I'm using Pylons (a python framework) to serve a simple web application, but it seems to die from time to time, with this in the error log: (2006, 'MySQL server has gone away')
I did a bit of checking, and saw that this was because the connections to MySQL were not being renewed. This shouldn't be a problem though, because the sqlalchemy.pool_recycle in the config file should automatically keep it alive. The default was 3600, but I dialed it back to 1800 because of this problem. It helped a bit, but 3600 should be fine according to the docs. The errors still happen semi-regularly. I don't want to lower it too much though and DOS my own database :).
Maybe something in my MySQL config is goofy? Not sure where to look exactly.
Other relevant details:
Python 2.5
Pylons: 0.9.6.2 (w/ sql_alchemy)
MySQL: 5.0.51
| [
"I think I fixed it. It's turns out I had a simple config error. My ini file read:\nsqlalchemy.default.url = [connection string here]\nsqlalchemy.pool_recycle = 1800\n\nThe problem is that my environment.py file declared that the engine would only map keys with the prefix: sqlalchemy.default so pool_recycle was ignored.\nThe solution is to simply change the second line in the ini to:\nsqlalchemy.default.pool_recycle = 1800\n\n",
"You might want to check MySQL's timeout variables:\nshow variables like '%timeout%';\n\nYou're probably interested in wait_timeout (less likely but possible: interactive_timeout). On Debian and Ubuntu, the defaults are 28800 (MySQL kills connections after 8 hours), but maybe the default for your platform is different or whoever administrates the server has configured things differently.\nAFAICT, pool_recycle doesn't actually keep the connections alive, it expires them on its own before MySQL kills them. I'm not familiar with pylons, but if causing the connections to intermittently do a SELECT 1; is an option, that will keep them alive at the cost of basically no server load and minimal network traffic. One final thought: are you somehow managing to use a connection that pylons thinks it has expired?\n"
] | [
8,
2
] | [] | [] | [
"mysql",
"pylons",
"python"
] | stackoverflow_0000008154_mysql_pylons_python.txt |
Q:
Django: Print url of view without hardcoding the url
Can I print out the URL /admin/manage/products/add of a certain view in a template?
Here is the rule i want to create a link for
(r'^manage/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}),
I would like to have /manage/products/add in a template without hardcoding it. How can I do this?
Edit: I am not using the default admin (well, i am but it is at another url), this is my own
A:
You can use get_absolute_url, but that will only work for a particular object. Since your object hasn't been created yet, it won't work in this case.
You want to use named URL patterns. Here's a quick intro:
Change the line in your urls.py to:
(r'^manage/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}, "create-product"),
Then, in your template you use this to display the URL:
{% url create-product %}
If you're using Django 1.5 or higher you need this:
{% url 'create-product' %}
You can do some more powerful things with named URL patterns, they're very handy. Note that they are only in the development version (and also 1.0).
A:
If you use named url patterns you can do the follwing in your template
{% url create_object %}
A:
The preferred way of creating the URL is by adding a get_absolute_url method to your model classes. You can hardcode the path there so you at least get closer to following the KISS philosophy.
You can go further by utilizing the permalink decorator that figures the path based on the urls configuration.
You can read more in the django documentation here.
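A sketch of the permalink approach, assuming a named URL pattern like the one from the first answer (the pattern name and arguments here are illustrative, not from the question):

from django.db import models

class Product(models.Model):
    # ... fields ...

    @models.permalink
    def get_absolute_url(self):
        # resolves against a named pattern instead of hardcoding the path
        return ('product-detail', (), {'object_id': self.id})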
| Django: Print url of view without hardcoding the url | Can i print out a url /admin/manage/products/add of a certain view in a template?
Here is the rule i want to create a link for
(r'^manage/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}),
I would like to have /manage/products/add in a template without hardcoding it. How can i do this?
Edit: I am not using the default admin (well, i am but it is at another url), this is my own
| [
"You can use get_absolute_url, but that will only work for a particular object. Since your object hasn't been created yet, it won't work in this case.\nYou want to use named URL patterns. Here's a quick intro:\nChange the line in your urls.py to:\n(r'^manage/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}, \"create-product\"),\n\nThen, in your template you use this to display the URL:\n{% url create-product %}\n\nIf you're using Django 1.5 or higher you need this:\n{% url 'create-product' %}\n\nYou can do some more powerful things with named URL patterns, they're very handy. Note that they are only in the development version (and also 1.0).\n",
"If you use named url patterns you can do the follwing in your template\n{% url create_object %}\n\n",
"The preferred way of creating the URL is by adding a get_absolute_url method to your model classes. You can hardcode the path there so you at least get closer to following the KISS philosophy.\nYou can go further by utilizing the permalink decorator that figures the path based on the urls configuration.\nYou can read more in the django documentation here.\n"
] | [
17,
2,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000047207_django_python.txt |
Q:
Python: No module named core.exceptions
I'm trying to get Google AppEngine to work on my Debian box and am getting the following error when I try to access my page:
<type 'exceptions.ImportError'>: No module named core.exceptions
The same app works fine for me when I run it on my other Ubuntu box, so I know it's not a problem with the app itself. However, I need to get it working on this Debian box. It originally had python 2.4 but after AppEngine complained about it I installed the python2.5 and python2.5-dev packages (to no avail).
I saw on this Google Group post that it may be due to the version of AppEngine and just to reinstall it, but that didn't work. Any ideas?
Edit 1: Also tried uninstalling python2.4 and 2.5 then reinstalling 2.5, which also didn't work.
Edit 2: Turns out when I made AppEngine into a CVS project it didn't add the core directory into my project, so when I checked it out there literally was no module named core.exceptions. Re-downloading that folder resolved the problem.
A:
core.exceptions is part of django; what version of django do you have installed? The AppEngine comes with the appropriate version for whatever release you've downloaded (in the lib/django directory). It can be installed by going to that directory and running python setup.py install
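For example (the SDK path below is an assumption; adjust it to wherever AppEngine was unpacked):
python -c "import django; print django.VERSION"   # check which django, if any, is on the path
cd /path/to/google_appengine/lib/django
python setup.py install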
| Python: No module named core.exceptions | I'm trying to get Google AppEngine to work on my Debian box and am getting the following error when I try to access my page:
<type 'exceptions.ImportError'>: No module named core.exceptions
The same app works fine for me when I run it on my other Ubuntu box, so I know it's not a problem with the app itself. However, I need to get it working on this Debian box. It originally had python 2.4 but after AppEngine complained about it I installed the python2.5 and python2.5-dev packages (to no avail).
I saw on this Google Group post that it may be due to the version of AppEngine and just to reinstall it, but that didn't work. Any ideas?
Edit 1: Also tried uninstalling python2.4 and 2.5 then reinstalling 2.5, which also didn't work.
Edit 2: Turns out when I made AppEngine into a CVS project it didn't add the core directory into my project, so when I checked it out there literally was no module named core.exceptions. Re-downloading that folder resolved the problem.
| [
"core.exceptions is part of django; what version of django do you have installed? The AppEngine comes with the appropriate version for whatever release you've downloaded (in the lib/django directory). It can be installed by going to that directory and running python setup.py install\n"
] | [
6
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0000048777_google_app_engine_python.txt |
Q:
Python descriptor protocol analog in other languages?
Is there something like the Python descriptor protocol implemented in other languages? It seems like a nice way to increase modularity/encapsulation without bloating your containing class' implementation, but I've never heard of a similar thing in any other languages. Is it likely absent from other languages because of the lookup overhead?
A:
I've not heard of a direct equivalent either. You could probably achieve the same effect with macros, especially in a language like Lisp which has extremely powerful macros.
I wouldn't be at all surprised if other languages start to incorporate something similar because it is so powerful.
A:
Ruby and C# both easily let you create accessors by specifying getter/setter methods for an attribute, much like in Python. However, this isn't designed to naturally let you write the code for these methods in another class the way that Python allows. In practice, I'm not sure how much this matters, since every time I've seen an attribute defined through the descriptor protocol it's been implemented in the same class.
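For reference, a minimal sketch of the protocol in question — a descriptor class whose logic is reused by another class (all names here are illustrative):
class Positive(object):
    """Data descriptor that validates assignments to an attribute."""
    def __init__(self, name):
        self.name = name
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__[self.name]
    def __set__(self, obj, value):
        if value <= 0:
            raise ValueError("%s must be positive" % self.name)
        obj.__dict__[self.name] = value

class Account(object):
    balance = Positive('balance')  # the validation logic lives outside Account

    def __init__(self, balance):
        self.balance = balance     # routed through Positive.__set__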
EDIT: Darn my dyslexia (by which I mean careless reading). For some reason I've always read "descriptor" as "decorator" and vice versa, even when I'm the one typing both of them. I'll leave my post intact since it has valid information, albeit information which has absolutely nothing to do with the question.
The term "decorator" itself is actually the name of a design pattern described in the famous "Design Patterns" book. The Wikipedia article contains many examples in different programming languages of decorator usage: http://en.wikipedia.org/wiki/Decorator_pattern
However, the decorators in that article object-oriented; they have classes implementing a predefined interface which lets another existing class behave differently somehow, etc. Python decorators act in a functional way by replacing a function at runtime with another function, allowing you to effectively modify/replace that function, insert code, etc.
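A tiny sketch of that replacement behaviour (names are illustrative):
def logged(func):
    """Return a wrapper that announces each call to func."""
    def wrapper(*args, **kwargs):
        print "calling %s" % func.__name__
        return func(*args, **kwargs)
    return wrapper

@logged
def greet(name):
    return "Hello, %s" % name

greet("world")  # prints "calling greet" before returning the greeting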
This is known in the Java world as Aspect-Oriented programming, and the AspectJ Java compiler lets you do these kinds of things and compile your AspectJ code (which is a superset of Java) into Java bytecode.
I'm not familiar enough with C# or Ruby to know what their version of decorators would be.
| Python descriptor protocol analog in other languages? | Is there something like the Python descriptor protocol implemented in other languages? It seems like a nice way to increase modularity/encapsulation without bloating your containing class' implementation, but I've never heard of a similar thing in any other languages. Is it likely absent from other languages because of the lookup overhead?
| [
"I've not heard of a direct equivalent either. You could probably achieve the same effect with macros, especially in a language like Lisp which has extremely powerful macros.\nI wouldn't be at all surprised if other languages start to incorporate something similar because it is so powerful.\n",
"Ruby and C# both easily let you create accessors by specifying getter/setter methods for an attribute, much like in Python. However, this isn't designed to naturally let you write the code for these methods in another class the way that Python allows. In practice, I'm not sure how much this matters, since every time I've seen an attribute defined through the descriptor protocol its been implemented in the same class.\nEDIT: Darn my dyslexia (by which I mean careless reading). For some reason I've always read \"descriptor\" as \"decorator\" and vice versa, even when I'm the one typing both of them. I'll leave my post intact since it has valid information, albeit information which has absolutely nothing to do with the question.\nThe term \"decorator\" itself is actually the name of a design pattern described in the famous \"Design Patterns\" book. The Wikipedia article contains many examples in different programming languages of decorator usage: http://en.wikipedia.org/wiki/Decorator_pattern\nHowever, the decorators in that article object-oriented; they have classes implementing a predefined interface which lets another existing class behave differently somehow, etc. Python decorators act in a functional way by replacing a function at runtime with another function, allowing you to effectively modify/replace that function, insert code, etc.\nThis is known in the Java world as Aspect-Oriented programming, and the AspectJ Java compiler lets you do these kinds of things and compile your AspectJ code (which is a superset of Java) into Java bytecode.\nI'm not familiar enough with C# or Ruby to know what their version of decorators would be.\n"
] | [
4,
0
] | [] | [] | [
"encapsulation",
"language_features",
"python"
] | stackoverflow_0000034243_encapsulation_language_features_python.txt |
Q:
How do I implement a pre-commit hook script in SVN that calls dos2unix to validate checked-in file
I was wondering if anyone here had some experience writing this type of script and if they could give me some pointers.
I would like to modify this script to validate that the check-in file does not have a Carriage Return in the EOL formatting. The EOL format is CR LF in Windows and LF in Unix. When a user checks in code with the Windows format, it no longer compiles on Unix. I know this can be done on the client side, but I need to have this validation done on the server side. To achieve this, I need to do the following:
1) Make sure the file I check is not binary. I don't know how to do this with svnlook; should I check the svn:mime-type property of the file? The Red Book does not cover this clearly, or I must have missed it.
2) I would like to run the dos2unix command to validate that the file has the correct EOL format. I would compare the output of the dos2unix command against the original file. If the two differ, I give an error message to the client and cancel the check-in.
I would like your comments/feedback on this approach.
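For what it's worth, here is a rough sketch of that approach as a Python pre-commit hook. It checks for CR LF directly instead of shelling out to dos2unix, which is equivalent to diffing dos2unix output against the original; the svnlook output parsing and repository layout are assumptions:
#!/usr/bin/env python
import subprocess
import sys

REPOS, TXN = sys.argv[1], sys.argv[2]

def svnlook(*args):
    # Run svnlook against the in-flight transaction; stderr is captured so a
    # missing svn:mime-type property does not pollute the hook output.
    p = subprocess.Popen(('svnlook',) + args,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, _ = p.communicate()
    return out

for line in svnlook('changed', '-t', TXN, REPOS).splitlines():
    action, path = line[0], line[4:]
    if action == 'D' or path.endswith('/'):
        continue  # nothing to check for deletions or directories
    mime = svnlook('propget', '-t', TXN, REPOS, 'svn:mime-type', path).strip()
    if mime and not mime.startswith('text/'):
        continue  # treat anything with a non-text MIME type as binary
    if '\r\n' in svnlook('cat', '-t', TXN, REPOS, path):
        sys.stderr.write("%s contains CR LF line endings\n" % path)
        sys.exit(1)

sys.exit(0)

A nonzero exit rejects the commit, and anything written to stderr is relayed back to the client.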
A:
I think you can avoid a commit hook script in this case by using the svn:eol-style property as described in the SVNBook:
End-of-Line Character Sequences
Subversion Properties
This way SVN can worry about your line endings for you.
Good luck!
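For example, to normalize existing files (the *.c pattern is just an illustration):
svn propset svn:eol-style native *.c
svn commit -m "Set svn:eol-style"

New files can pick the property up automatically via the [auto-props] section of the client's ~/.subversion/config.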
A:
What exactly are you trying to do?
Of course, there are numerous places to learn about svn pre-commit hooks (e.g. here , here, and in the Red Book) but it depends what you're trying to do and what is available on your system.
Can you be more specific?
| How do I implement a pre-commit hook script in SVN that calls dos2unix to validate checked-in file | I was wondering if anyone here had some experience writing this type of script and if they could give me some pointers.
I would like to modify this script to validate that the check-in file does not have a Carriage Return in the EOL formatting. The EOL format is CR LF in Windows and LF in Unix. When a user checks in code with the Windows format, it no longer compiles on Unix. I know this can be done on the client side, but I need to have this validation done on the server side. To achieve this, I need to do the following:
1) Make sure the file I check is not binary. I don't know how to do this with svnlook; should I check the svn:mime-type property of the file? The Red Book does not cover this clearly, or I must have missed it.
2) I would like to run the dos2unix command to validate that the file has the correct EOL format. I would compare the output of the dos2unix command against the original file. If the two differ, I give an error message to the client and cancel the check-in.
I would like your comments/feedback on this approach.
| [
"I think you can avoid a commit hook script in this case by using the svn:eol-style property as described in the SVNBook:\n\nEnd-of-Line Character Sequences\nSubversion Properties\n\nThis way SVN can worry about your line endings for you.\nGood luck!\n",
"What exactly are you trying to do?\nOf course, there are numerous places to learn about svn pre-commit hooks (e.g. here , here, and in the Red Book) but it depends what you're trying to do and what is available on your system. \nCan you be more specific? \n"
] | [
4,
1
] | [] | [] | [
"dos2unix",
"python",
"svn"
] | stackoverflow_0000048562_dos2unix_python_svn.txt |