Q:
How can I extract x, y and z coordinates from geographical data by Python?
I have geographical data which has 14 variables. The data is in the following format:
QUADNAME: rockport_colony_SD RESOLUTION: 10 ULLAT: 43.625
ULLON: -97.87527466 LRLAT: 43.5
LRLON: -97.75027466 HDATUM: 27
ZMIN: 361.58401489 ZMAX: 413.38400269 ZMEAN: 396.1293335 ZSIGMA: 12.36359215 PMETHOD: 5
QUADDATE: 20001001
The full data set contains many records like this in sequence.
How can I extract the coordinates ULLAT, ULLON and LRLAT from the data into three lists, so that each row corresponds to one location?
This question was raised by the problem in the post.
A:
Something like this might work if the data is all in a big flat text file:
import re
data = """
QUADNAME: rockport_colony_SD RESOLUTION: 10 ULLAT: 43.625
ULLON: -97.87527466 LRLAT: 43.5
LRLON: -97.75027466 HDATUM: 27
ZMIN: 361.58401489 ZMAX: 413.38400269 ZMEAN: 396.1293335 ZSIGMA: 12.36359215 PMETHOD: 5
QUADDATE: 20001001
"""
regex = re.compile(
    r"""ULLAT:\ (?P<ullat>-?[\d.]+).*?
    ULLON:\ (?P<ullon>-?[\d.]+).*?
    LRLAT:\ (?P<lrlat>-?[\d.]+)""", re.DOTALL|re.VERBOSE)

print regex.findall(data) # Yields: [('43.625', '-97.87527466', '43.5')]
A:
Given a StreamReader named reader, this should give you a list of (float, float, float). I suggest a list of 3-tuples because it'll probably be more convenient and more efficient to walk through, unless for some reason you only want to get all the points individually.
coords = []
line = reader.readline()
while line:
    index_ullat = line.find("ULLAT")
    if index_ullat >= 0:
        ullat = float(line[index_ullat + 7:])

        line = reader.readline()

        index_ullon = line.find("ULLON")
        index_lrlat = line.find("LRLAT")
        if index_ullon >= 0 and index_lrlat >= 0:
            ullon = float(line[index_ullon + 7 : index_lrlat - 1])
            lrlat = float(line[index_lrlat + 7:])
        else:
            raise ValueError("ULLON and LRLAT didn't follow ULLAT.")

        # store one (ullat, ullon, lrlat) 3-tuple per location
        coords.append((ullat, ullon, lrlat))
    line = reader.readline()
It may work, but it's ugly. I'm no expert at string parsing.
Q:
Given an rpm package name, query the yum database for updates
I was imagining a 3-line Python script to do this but the yum Python API is impenetrable. Is this even possible?
Is writing a wrapper for 'yum list package-name' the only way to do this?
A:
http://fpaste.org/paste/2453
and there are many examples of the yum api and some guides to getting started with it here:
http://yum.baseurl.org/#DeveloperDocumentationExamples
A:
As Seth points out, you can use the updates APIs to ask if something is available as an update. For something that's close to what the "yum list" does you probably want to use the doPackageLists(). Eg.
import os, sys
import yum

yb = yum.YumBase()
yb.conf.cache = os.geteuid() != 0  # use the yum cache unless running as root
pl = yb.doPackageLists(patterns=sys.argv[1:])
if pl.installed:
    print "Installed Packages"
    for pkg in sorted(pl.installed):
        print pkg
if pl.available:
    print "Available Packages"
    for pkg in sorted(pl.available):
        print pkg, pkg.repo
if pl.reinstall_available:
    print "Re-install Available Packages"
    for pkg in sorted(pl.reinstall_available):
        print pkg, pkg.repo
Q:
What is the most pythonic way to make a bound method act like a function?
I'm using a Python API that expects me to pass it a function. However, for various reasons, I want to pass it a method, because I want the function to behave differently depending on the instance it belongs to. If I pass it a method, the API will not call it with the correct 'self' argument, so I'm wondering how to turn a method into a function that knows what 'self' it belongs to.
There are a couple of ways that I can think of to do this, including using a lambda and a closure. I've included some examples of this below, but I'm wondering if there is a standard mechanism for achieving the same effect.
class A(object):
    def hello(self, salutation):
        print('%s, my name is %s' % (salutation, str(self)))

    def bind_hello1(self):
        return lambda x: self.hello(x)

    def bind_hello2(self):
        def hello2(*args):
            self.hello(*args)
        return hello2
>>> a1, a2 = A(), A()
>>> a1.hello('Greetings'); a2.hello('Greetings')
Greetings, my name is <__main__.A object at 0x71570>
Greetings, my name is <__main__.A object at 0x71590>
>>> f1, f2 = a1.bind_hello1(), a2.bind_hello1()
>>> f1('Salutations'); f2('Salutations')
Salutations, my name is <__main__.A object at 0x71570>
Salutations, my name is <__main__.A object at 0x71590>
>>> f1, f2 = a1.bind_hello2(), a2.bind_hello2()
>>> f1('Aloha'); f2('Aloha')
Aloha, my name is <__main__.A object at 0x71570>
Aloha, my name is <__main__.A object at 0x71590>
A:
Will passing in the method bound to an instance work? If so, you don't have to do anything special.
In [2]: class C(object):
   ...:     def method(self, a, b, c):
   ...:         print a, b, c
   ...:
   ...:

In [3]: def api_function(a_func):
   ...:     a_func("One Fish", "Two Fish", "Blue Fish")
   ...:
   ...:

In [4]: c = C()

In [5]: api_function(c.method)
One Fish Two Fish Blue Fish
A:
You may want to clarify your question. As Ryan points out,
def callback(fn):
fn('Welcome')
callback(a1.hello)
callback(a2.hello)
will result in hello being called with the correct self bound, a1 or a2. If you are not experiencing this, something is deeply wrong, because that's how Python works.
What you seem to want, judging by what you've written, is to bind arguments -- in other words, currying. You can find examples all over the place, but Recipe 52549 has the best Pythonic appearance, by my tastes.
class curry:
    def __init__(self, fun, *args, **kwargs):
        self.fun = fun
        self.pending = args[:]
        self.kwargs = kwargs.copy()

    def __call__(self, *args, **kwargs):
        if kwargs and self.kwargs:
            kw = self.kwargs.copy()
            kw.update(kwargs)
        else:
            kw = kwargs or self.kwargs

        return self.fun(*(self.pending + args), **kw)
f1 = curry(a1.hello, 'Salutations')
f1() # == a1.hello('Salutations')
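For what it's worth, Python 2.5+ ships the same idea in the standard library as functools.partial, so the hand-rolled class above is optional:
from functools import partial

f1 = partial(a1.hello, 'Salutations')
f1()  # same as a1.hello('Salutations')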
Q:
How can I pass a filename as a parameter into my module?
I have the following code in .py file:
import re
regex = re.compile(
    r"""ULLAT:\ (?P<ullat>-?[\d.]+).*?
    ULLON:\ (?P<ullon>-?[\d.]+).*?
    LRLAT:\ (?P<lrlat>-?[\d.]+)""", re.DOTALL|re.VERBOSE)
I have the data in .txt file as a sequence:
QUADNAME: rockport_colony_SD RESOLUTION: 10 ULLAT: 43.625 ULLON: -97.87527466 LRLAT: 43.5 LRLON: -97.75027466 HDATUM: 27 ZMIN: 361.58401489 ZMAX: 413.38400269 ZMEAN: 396.1293335 ZSIGMA: 12.36359215 PMETHOD: 5 QUADDATE: 20001001
How can I use the Python file to process the .txt file?
I guess that we need a parameter in the .py file, so that we can use a syntax like in terminal:
$ py-file file-to-be-processed
This question was raised by the post here.
A:
You need to read the file in and then search the contents using the regular expression. The sys module contains a list, argv, which contains all the command line parameters. We pull out the second one (the first is the file name used to run the script), open the file, and then read in the contents.
import re
import sys

file_name = sys.argv[1]
fp = open(file_name)
contents = fp.read()

regex = re.compile(
    r"""ULLAT:\ (?P<ullat>-?[\d.]+).*?
    ULLON:\ (?P<ullon>-?[\d.]+).*?
    LRLAT:\ (?P<lrlat>-?[\d.]+)""", re.DOTALL|re.VERBOSE)

match = regex.search(contents)
See the Python regular expression documentation for details on what you can do with the match object. See this part of the documentation for why we need search rather than match when scanning the file.
This code will allow you to use the syntax you specified in your question.
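From there, a small sketch of reading the named groups back out of the match object (assuming a match was found):
if match:
    print match.group('ullat'), match.group('ullon'), match.group('lrlat')
    # -> 43.625 -97.87527466 43.5 for the sample record above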
Q:
MS Outlook CDO/MAPI Blocking Python File Output?
Here is an example of the problem I am running into. I am using the Python Win32 extensions to access an Outlook mailbox and retrieve messages.
Below is a script that should write "hello world" to a text file. I need to grab some messages from an Outlook mailbox and I noticed something weird. After I attach to the mailbox once, I can no longer print anything to a file. Here is a trimmed down version showing the problem:
#!/usr/bin/env python
from win32com.client import Dispatch
fh = open('foo.txt', 'w')
fh.write('hello ')
fh.close()
session = Dispatch('MAPI.session')
session.Logon('','',0,1,0,0,'exchange.foo.com\nprodreport');
session.Logoff()
fh = open('foo.txt', 'a')
fh.write('world')
fh.close()
If I don't attach to the mailbox and comment out the following lines, it obviously works fine:
session = Dispatch('MAPI.session')
session.Logon('','',0,1,0,0,'exchange.foo.com\ncorey');
session.Logoff()
Why is opening a session to a mailbox in the middle of my script blocking further file output? Any ideas? (Other operations are not blocked, just this file I/O, as far as I know.)
A:
Answering my own question: it looks like your working directory gets changed when you read the email. If you set it back, your file I/O works fine.
The corrected script looks like this:
#!/usr/bin/env python
import os
from win32com.client import Dispatch
fh = open('foo.txt', 'w')
fh.write('hello ')
fh.close()
cwd = os.getcwd()
session = Dispatch('MAPI.session')
session.Logon('','',0,1,0,0,'exchange.foo.com\ncorey');
session.Logoff()
os.chdir(cwd)
fh = open('foo.txt', 'a')
fh.write('world')
fh.close()
A:
Yes, the directory change is a known gotcha when using CDO/MAPI. It is "documented" somewhere in MSDN (eg http://support.microsoft.com/kb/269170). You can reproduce it easily in Python like this:
import os
import win32com.client
print os.getcwd ()
win32com.client.Dispatch ("MAPI.Session")
print os.getcwd ()
Q:
regex: Matching parts of a string when the string contains part of a regex pattern
I want to reduce the number of patterns I have to write by using a regex that picks up any or all of the pattern when it appears in a string.
Is this possible with Regex?
E.g. Pattern is: "the cat sat on the mat"
I would like pattern to match on following strings:
"the"
"the cat"
"the cat sat"
...
"the cat sat on the mat"
But it should not match on the following string because, although some words match, they are split by a non-matching word:
"the dog sat"
A:
This:
the( cat( sat( on( the( mat)?)?)?)?)?
would answer your question. Remove "optional group" parens "(...)?" for parts that are not optional, add additional groups for things that must match together.
the // complete match
the cat // complete match
the cat sat // complete match
the cat sat on // complete match
the cat sat on the // complete match
the cat sat on the mat // complete match
the dog sat on the mat // two partial matches ("the")
You might want to add some pre-condition, like a start of line anchor, to prevent the expression from matching the second "the" in the last line:
^the( cat( sat( on( the( mat)?)?)?)?)?
EDIT: If you add a post-condition, like the end-of-line anchor, matching will be prevented entirely on the last example, that is, the last example won't match at all:
the( cat( sat( on( the( mat)?)?)?)?)?$
Credits for the tip go to VonC. Thanks!
The post-condition may of course be something else you expect to follow the match.
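For instance, a quick check of the anchored form in Python:
import re

pat = re.compile(r'^the( cat( sat( on( the( mat)?)?)?)?)?$')
print bool(pat.match('the cat sat'))  # True
print bool(pat.match('the dog sat'))  # False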
Alternatively, you remove the last question mark:
the( cat( sat( on( the( mat)?)?)?)?)
Be aware though: This would make a single "the" a non-match, so the first line will also not match.
A:
It could be fairly complicated:
(?ms)the(?=(\s+cat)|[\r\n]+)(?:\s+cat(?=(\s+sat)|[\r\n]+))?(?:\s+sat(?=(\s+on)|[\r\n]+))?(?:\s+on(?=(\s+the)|[\r\n]+))?(?:\s+the(?=(\s+mat)|[\r\n]+))?(?:\s+mat)?[\r\n]+
Meaning:
I want "the" only if followed by "cat" or end of line
then I want "cat" (optional) only if followed by "sat"
and so on
followed by an end of line (which ensures we don't match a partial "the cat walk...")
It does match
the cat sat on the mat
the cat
the cat sat
the cat sat aa on the mat (nothing is matched here either)
the dog sat (nothing is matched there)
On second thought, Tomalak's answer is simpler (if fixed, that is, ended with a '$').
I keep mine as a wiki post.
A:
If you know the match always begins at the first character, it would be much faster to match the characters directly in a loop. I don't think Regex will do it anyway.
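A hedged sketch of such a loop, comparing word by word (the helper name is invented):
def leading_words_match(pattern, text):
    # True when every word of text matches pattern from the start
    p_words = pattern.split()
    t_words = text.split()
    return len(t_words) <= len(p_words) and t_words == p_words[:len(t_words)]

print leading_words_match("the cat sat on the mat", "the cat sat")  # True
print leading_words_match("the cat sat on the mat", "the dog sat")  # False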
A:
Perhaps it would be easier and more logical to think about the problem a little differently..
Instead of matching the pattern against the string.... how about using the string as the pattern and looking for it in the pattern.
For example where
string = "the cat sat on"
pattern = "the cat sat on the mat"
string is always a subset of pattern and is simply a case of doing a regex match.
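A quick sketch of that reversed check (a plain prefix test here; re.match(re.escape(string), pattern) would behave the same):
pattern = "the cat sat on the mat"
string = "the cat sat on"
print pattern.startswith(string)         # True -- string is a leading subset
print pattern.startswith("the dog sat")  # False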
If that makes sense ;-)
Q:
Python: Problem with local modules shadowing global modules
I've got a package set up like so:
packagename/
    __init__.py
    numbers.py
    tools.py
    ...other stuff
Now inside tools.py, I'm trying to import the standard library module fractions. However, the fractions module itself imports the numbers module, which is supposed to be the one in the standard library.
The problem is that it tries to import the numbers modules from my package instead (ie my numbers.py is shadowing the stdlib numbers module), and then complains about it, instead of importing the stdlib module.
My question is, is there a workaround so that I can keep the current structure of my package, or is the only solution to rename my own offending module (numbers.py)?
A:
Absolute and relative imports can be used since Python 2.5 (with a __future__ import) and seem to be what you're looking for.
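For instance, a minimal sketch of the future import inside tools.py (Python 2.5+; absolute imports are the default from 3.0 on):
# packagename/tools.py
from __future__ import absolute_import

import fractions        # resolved absolutely, so the stdlib module wins
from . import numbers   # explicit relative import of packagename/numbers.py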
A:
I try to avoid shadowing the standard library. How about renaming your module to "_numbers.py" ?
And of course, you could still do:
import _numbers as numbers
Q:
Emacs 23 and iPython
Is there anyone out there using iPython with emacs 23? The documents on the emacs wiki are a bit of a muddle and I would be interested in hearing from anyone using emacs for Python development. Do you use the downloadable python-mode and ipython.el? What do you recommend?
A:
I got it working quite well with emacs 23. The only open issue is the focus not returning to the python buffer after sending the buffer to the iPython interpreter.
http://www.emacswiki.org/emacs/PythonMode#toc10
(setq load-path
      (append (list nil
                    "~/.emacs.d/python-mode-1.0/"
                    "~/.emacs.d/pymacs/"
                    "~/.emacs.d/ropemacs-0.6"
                    )
              load-path))
(setq py-shell-name "ipython")

(defadvice py-execute-buffer (around python-keep-focus activate)
  "return focus to python code buffer"
  (save-excursion ad-do-it))

(setenv "PYMACS_PYTHON" "python2.5")
(require 'pymacs)

(pymacs-load "ropemacs" "rope-")

(provide 'python-programming)
A:
never used it myself, but I do follow the ipython mailing list, and there was a thread a couple months back.
maybe this will help
http://lists.ipython.scipy.org/pipermail/ipython-user/2008-September/005791.html
It's also a very responsive mailing list if you run into trouble.
A:
I've used ipython with emacs cvs (which has been emacs 23 for some time now) in my python development. I, however, use it the other way around: I call emacs from the ipython prompt through the $EDITOR environment variable. I tried it the other way around, but got a bit tired of all the process buffers and what not.
Emacs is great, but a command line is far more versatile.
Q:
How do I make IPython organize tab completion possibilities by class?
When an object has hundreds of methods, tab completion is hard to use. More often than not the interesting methods are the ones defined or overridden by the inspected object's class and not its base classes.
How can I get IPython to group its tab completion possibilities so the methods and properties defined in the inspected object's class come first, followed by those in base classes?
It looks like the undocumented inspect.classify_class_attrs(cls) function along with inspect.getmro(cls) give me most of the information I need (these were originally written to implement python's help(object) feature).
By default readline displays completions alphabetically, but the function used to display completions can be replaced with ctypes or the readline module included with Python 2.6 and above. I've overridden readline's completions display and it works great.
Now all I need is a method to merge per-class information (from inspect.* per above) with per-instance information, sort the results by method resolution order, pretty print and paginate.
For extra credit, it would be great to store the chosen autocompletion, and display the most popular choices first next time autocomplete is attempted on the same object.
A:
Since I am not using Python 2.6 or 3.0 yet and don't have readline.set_completion_display_matches_hook(), I can use ctypes to set completion_display_func like so:
from ctypes import *

rl = cdll.LoadLibrary('libreadline.so')

def completion_display_func(matches, num_matches, max_length):
    print "Hello from Python"
    for i in range(num_matches):
        print matches[i]

COMPLETION_DISPLAY_FUNC = CFUNCTYPE(None, POINTER(c_char_p), c_int, c_int)
hook = COMPLETION_DISPLAY_FUNC(completion_display_func)
ptr = c_void_p.in_dll(rl, 'rl_completion_display_matches_hook')
ptr.value = cast(hook, c_void_p).value
Now, when I press 'tab' to complete, my own function prints the list of completions. So that answers the question 'how do I change the way readline displays completions'.
A:
I don't think this can be accomplished easily. There's no mechanism in Ipython to perform it in any case.
Initially I had thought you could modify Ipython's source to change the order (eg by changing the dir2() function in genutils.py). However it looks like readline alphabetically sorts the completions you pass to it, so this won't work (at least not without a lot more effort), though you could perhaps exclude methods on the base class completely.
A:
It looks like I can use readline.set_completion_display_matches_hook([function]) (new in Python 2.6) to display the results. The completer would return a list of possibilities as usual, but would also store the results of inspect.classify_class_attrs(cls) where applicable. The completion_display_matches_hook would have to hold a reference to the completer to retrieve the most recent list of completions plus the classification information I am looking for, because the hook only receives a list of match names in its arguments. Then the hook displays the list of completions in a pleasing way.
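A minimal sketch of wiring up that 2.6+ hook (the display logic is just a placeholder):
import readline

def display_matches(substitution, matches, longest_match_length):
    # readline calls this instead of printing its own alphabetical listing,
    # so matches can be grouped or reordered however the completer likes
    print ''
    for m in matches:
        print m
    readline.redisplay()  # repaint the prompt and the current input line

readline.set_completion_display_matches_hook(display_matches)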
Q:
Using Python split to splice a variable together
I have this list
["camilla_farnestam@hotmail.com : martin00", ""],
How do I split it so that I'm only left with:
camilla_farnestam@hotmail.com:martin00
A:
Do you want to have: aList[0] ?
EDIT:
Oh, you have a tuple with the list in it!
Now I see:
al = ["camilla_farnestam@hotmail.com : martin00", ""],
#type(al) == tuple
#len(al) == 1
aList = al[0]
#type(aList) == list
#len(aList) == 2
#Now you can type:
aList[0]
#and you get:
"camilla_farnestam@hotmail.com : martin00"
You can use aList[0].replace(' : ', ':') if you wish to remove the spaces before and after the colon -- suit your needs.
I think that the most confusing thing here is the comma ending the first line. It creates a new tuple that contains your list.
A:
al = ["camilla_farnestam@hotmail.com : martin00", ""],
print al[0][0].replace(" : ", ":")
A:
The comma at the end means that the list is the first member of a tuple, but to answer your question:
in_list = ["camilla_farnestam@hotmail.com : martin00", ""]
result = ''.join(in_list[0].split(' '))
A:
Exactly.
$ python
Python 2.6 (r26:66714, Dec 4 2008, 11:34:15)
[GCC 4.0.1 (Apple Inc. build 5488)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> al = ["camilla_farnestam@hotmail.com : martin00", ""]
>>> print al[0]
camilla_farnestam@hotmail.com : martin00
>>>
A:
Abgan is probably correct, although if you still want a list, i.e.,
["camilla_farnestam@hotmail.com : martin00"]
you'd want:
the_list[:1]
A:
Lists can be accessed by index or sliced into smaller lists.
http://diveintopython3.ep.io/native-datatypes.html
Q:
How do I use ctypes to set a library's extern function pointer to a Python callback function?
Some C libraries export function pointers such that the user of the library sets that function pointer to the address of their own function to implement a hook or callback.
In this example library liblibrary.so, how do I set library_hook to a Python function using ctypes?
library.h:
typedef int exported_function_t(char**, int);
extern exported_function_t *library_hook;
A:
This is tricky in ctypes because ctypes function pointers do not implement the .value property used to set other pointers. Instead, cast your callback function and the extern function pointer to void * with the c_void_p function. After setting the function pointer as void * as shown, C can call your Python function, and you can retrieve the function as a function pointer and call it with normal ctypes calls.
from ctypes import *

liblibrary = cdll.LoadLibrary('liblibrary.so')

def py_library_hook(strings, n):
    return 0

# First argument to CFUNCTYPE is the return type:
LIBRARY_HOOK_FUNC = CFUNCTYPE(c_int, POINTER(c_char_p), c_int)
hook = LIBRARY_HOOK_FUNC(py_library_hook)
ptr = c_void_p.in_dll(liblibrary, 'library_hook')
ptr.value = cast(hook, c_void_p).value
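To later call whatever is stored in the hook from Python, one sketch (the arguments passed here are purely illustrative):
addr = c_void_p.in_dll(liblibrary, 'library_hook').value
if addr:
    func = LIBRARY_HOOK_FUNC(addr)  # wrap the raw address as a callable
    result = func(None, 0)          # NULL char** and n=0, illustration only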
Q:
One-to-many relationship in Datastore and de-referencing in Google App Engine
I have a one-to-many relationship between two entities: the first one is a satellite and the second one is a channel. The satellite form returns a satellite name, which I want to appear on another HTML page along with the channel data, so that you can see which satellite each channel is related to.
How can I do this?
A:
This sounds like a good case for using the ReferenceProperty that is part of the Datastore API of App Engine. Here's an idea to get you started:
class Satellite(db.Model):
    name = db.StringProperty()

class Channel(db.Model):
    satellite = db.ReferenceProperty(Satellite, collection_name='channels')
    freq = db.StringProperty()
With this you can assign channels like so:
my_sat = Satellite(name='SatCOM1')
my_sat.put()
Channel(satellite=my_sat,freq='28.1200Hz').put()
... #Add other channels ...
Then loop through channels for a given Satellite object:
for chan in my_sat.channels:
    print 'Channel frequency: %s' % (chan.freq)
Anyway, this pretty much follows this article that describes how to model entity relationships in App Engine. Hope this helps.
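The reference also dereferences in the other direction; a short sketch reusing the frequency from above:
chan = Channel.all().filter('freq =', '28.1200Hz').get()
if chan:
    print 'Belongs to: %s' % chan.satellite.name  # follows the reference back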
Q:
In Django, how do you retrieve data from extra fields on many-to-many relationships without an explicit query for it?
Given a situation in Django 1.0 where you have extra data on a Many-to-Many relationship:
class Player(models.Model):
    name = models.CharField(max_length=80)

class Team(models.Model):
    name = models.CharField(max_length=40)
    players = models.ManyToManyField(Player, through='TeamPlayer', related_name='teams')

class TeamPlayer(models.Model):
    player = models.ForeignKey(Player)
    team = models.ForeignKey(Team)
    captain = models.BooleanField()
The many-to-many relationship allows you to access the related data using attributes (the "players" attribute on the Team object or using the "teams" attribute on the Player object by way of its related name). When one of the objects is placed into a context for a template (e.g. a Team placed into a Context for rendering a template that generates the Team's roster), the related objects can be accessed (i.e. the players on the teams), but how can the extra data (e.g. 'captain') be accessed along with the related objects from the object in context (e.g.the Team) without adding additional data into the context?
I know it is possible to query directly against the intermediary table to get the extra data. For example:
TeamPlayer.objects.get(player=790, team=168).captain
Or:
for x in TeamPlayer.objects.filter(team=168):
    if x.captain:
        print "%s (Captain)" % (x.player.name)
    else:
        print x.player.name
Doing this directly on the intermediary table, however, requires me to place additional data in a template's context (the result of the query on TeamPlayer), which I am trying to avoid if such a thing is possible.
A:
So, 15 minutes after asking the question, and I found my own answer.
Using dir(Team), I can see another generated attribute named teamplayer_set (it also exists on Player).
t = Team.objects.get(pk=168)
for x in t.teamplayer_set.all():
    if x.captain:
        print "%s (Captain)" % (x.player.name)
    else:
        print x.player.name
Not sure how I would customize that generated related_name, but at least I know I can get to the data from the template without adding additional query results into the context.
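For the record, a hedged sketch: Django lets you set related_name on the through model's ForeignKey fields to rename that generated accessor (the names below are invented):
class TeamPlayer(models.Model):
    player = models.ForeignKey(Player, related_name='memberships')
    team = models.ForeignKey(Team, related_name='memberships')
    captain = models.BooleanField()

# then: t.memberships.all() instead of t.teamplayer_set.all()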
Q:
Adding a SOAP header to a SOAPpy request
Does anyone know how to do this? I need to add a header of the form:
value1
value2
A:
As the question is phrased, it's hard to guess what the intention (or even the intended semantics) is. For setting headers, try the following:
import SOAPpy
headers = SOAPpy.Types.headerType()
headers.value1 = value2
or
[...]
headers.foo = value1
headers.bar = value2
Q:
Is a PHP, Python, PostgreSQL design suitable for a business application?
I'm looking for some quick thoughts about a business application I am looking to build. I'd like to separate the three layers of presentation, domain logic, and data using PHP, Python, and PostgreSQL, respectively. I would like to hear, possibly from other folks who have gone down this path before, if there are problems with this approach, if I am targeting the wrong tools, etc.
I'm looking at PHP because it is widely used, fairly mature, and I can find ample people with skills in PHP interface design.
I'm looking at Python because of the benefits of readable code, because I hear I can find more Python programmers that also have subject-matter skills (in this case, finance), and it's an open source language. Plus, it seems easier to code with.
I'm looking at PostgreSQL for the transaction-level features. MySQL is also an option here, but I don't need to debate this aspect.
This is not a web application, although I would like to utilize a browser for the user interface. This is more of an Enterprise Application, but for a small business with moderate numbers of users (maybe 5-10) and a modest number of daily transactions.
What is important is that we are able to upgrade the database or domain logic or interface separate from the other layers in the future.
I'm NOT looking for a buy vs. build debate, as that's a different discussion.
Thanks for any insight
A:
Look at Django.
Python code. A template language that permits some of the same features as PHP -- slightly different syntax.
Model is divorced from view functions ("business rules") and divorced from presentation. This is enforced throughout Django.
One of the common questions is "why can't I do -- some crazy PHP-like thing -- in the Django template?" The answer is that presentation is not processing. Do your processing in the Django view functions. Render the results as HTML in the template.
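As a rough illustration of that split (the model and template names are invented for this sketch):
# views.py -- a sketch, not from the original answer
from django.shortcuts import render_to_response
from myapp.models import Account

def account_summary(request, account_id):
    account = Account.objects.get(pk=account_id)  # data layer via the ORM
    balance = account.compute_balance()           # domain logic stays in Python
    return render_to_response('summary.html', {'balance': balance})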
Also, Django has an ORM layer to divorce you from petty SQL considerations. MySQL or PostgreSQL are more-or-less equivalent from within Django.
Edit
"Maturity" means a lot of things. You specifically mentioned skilled people as a sign of maturity.
Django is pure Python. If you can find Python people, they can learn Django in a few days. They just have to do the tutorials.
A Django-powered site is usually Apache + some glue + Django. The glue can be mod_wsgi or mod_python or mod_fastcgi. You have to manage this configuration with some care because there are several moving parts. This, however, is the same Apache config problem you have with PHP -- nothing new here.
A Django site has one or more Django server instances, each with a settings file, a URL mapping and any number of applications. Pure Python at this point.
A Django application has URL mappings, model and views. All pure Python. Unit tested with the Django extensions to Python's own internal unittest framework.
The model uses an ORM layer. This may, perhaps, be the single most confusing thing in Django. People sometimes design very odd models because they think either too high-level-uber-generic or they think too much in SQL. Django is a middle ground of mostly object-orientation with some SQL consideration. Get this, and you're unstoppable.
A Django application may have templates, which are in their own template language. This would be about the only non-Python thing that's of much interest. You may want to add custom tags -- pure Python.
You'll probably have JavaScript (also true for PHP and every other web application framework). Nothing new here.
Since Django's admin application automatically handles basic CRUD processing, you don't have to write this. You are free to write all the transactional stuff you want. But you don't have to. This leads you to a very, very powerful hybrid.
You write a few complicated, critical transactions. Pure Python, BTW.
You don't write any of the dumb table-maintenance transactions. No code at all is superior to Python or PHP.
After you get your feet wet with the template engine and CSS's, you can tailor the admin interface to look like anything you want. This is HTML/CSS stuff, no Python or PHP.
Bottom line. Most of the skill set is Python. The ORM is -- syntactically -- Python, but requires some care in doing things simply and cleanly. The template is its own language, but considerably simpler than PHP. The rest is SQL, Javascript, HTML, CSS, Apache and what-not.
Edit
Django Maturity
See http://www.djangoproject.com/community/ for the number of active projects.
Join http://groups.google.com/group/django-users for daily flood of emails from users.
The Django blog stretches back to '05, meaning they had years of solid experience before finally releasing 1.0 in September of '08. Development apparently began in '03.
A:
I'm going to assume that by "business application" you mean a web application hosted in an intranet environment as opposed to some sort of SaaS application on the internet.
While you're in the process of architecting your application, you need to consider the existing infrastructure and infrastructure support people of your employer/customer. Also, if the company is large enough to have things such as "approved software/hardware lists," you should be aware of those. Keep in mind that some elements of the list may be downright retarded. Don't let past mistakes dictate the architecture of your app, but in cases where they are reasonably sensible I would pick my battles and stick with your enterprise standard. This can be a real pain when you pick a development stack that really works best on Unix/Linux, and then someone tries to force it onto a Windows server admined by someone who's never touched anything but ASP.NET applications.
Unless there is a particular PHP module that you intend to use that has no Python equivalent, I would drop PHP and use Django. If there is a compelling reason to use PHP, then I'd drop Python. I'm having difficulty imagining a scenario where you would want to use both at the same time.
As for PG versus MySQL, either works. Look at what your customer already has deployed, and if they have a bunch of one and little of another, pick that. If they have existing Oracle infrastructure you should consider using it. If they are an SQL Server shop...reconsider your stack and remember to pick your battles.
A:
I can only repeat what other people here have already said: if you choose Python for the domain layer, you won't gain anything (quite the contrary) by using PHP for the presentation layer. Others have already advised Django, and that might be a pretty good choice, but there's no shortage of good Python web frameworks.
A:
I personally agree with the second and the third points in your post. As for PHP, in my opinion you can use Python for the presentation layer as well; there are many solutions (Zope, Plone ...) based on Python.
A:
Just skip PHP and use Python (with Django, as already noticed while I typed). Django already separates the layers as you mentioned.
I have never used PgSQL myself, but I think it's mostly a matter of taste whether you prefer it over MySQL. It used to support more enterprise features than MySQL but I'm not sure if that's still true with MySQL 5.0 and 5.1. Transactions are supported in MySQL, anyway (you have to use the InnoDB table engine, however).
A:
Just to address the MySQL vs PgSQL issues - it shouldn't matter. They're both more than capable of the task, and any reasonable framework should isolate you from the differences relatively well. I think it's down to what you use already, what people have most experience in, and if there's a feature in one or the other you think you'd benefit from.
If you have no preference, you might want to go with MySQL purely because it's more popular for web work. This translates to more examples, easier to find help, etc. I actually prefer the philosophy of PgSQL, but this isn't a good enough reason to blow against the wind.
A:
Just to throw it out there... there are PHP frameworks utilizing MVC.
CodeIgniter does simple yet powerful things. You can definitely separate the template layer from the logic layer.
| Is a PHP, Python, PostgreSQL design suitable for a business application? | I'm looking for some quick thoughts about a business application I am looking to build. I'd like to separate the three layers of presentation, domain logic, and data using PHP, Python, and PostgreSQL, respectively. I would like to hear, possibly from other folks who have gone down this path before, if there are problems with this approach, if I am targeting the wrong tools, etc.
I'm looking at PHP because it is widely used, fairly mature, and I can find ample people with skills in PHP interface design.
I'm looking at Python because of the benefits of readable code, because I hear can find more Python programmers that also have subject-matter skills (in this case, finance), and it's an open source language. Plus, it seems easier to code with.
I'm looking at PostgreSQL for the transaction-level features. MySQL is also an option here, but I don't need to debate this aspect.
This is not a web application, although I would like to utilize a browser for the user interface. This is more of an Enterprise Application, but for a small business with moderate numbers of users (maybe 5-10) and a modest number of daily transactions.
What is important is that we are able to upgrade the database or domain logic or interface separate from the other layers in the future.
I'm NOT looking for a buy vs. build debate, as that's a different discussion.
Thanks for any insight
| [
"Look at Django.\nPython code. A template language that permits some of the same features as PHP -- slightly different syntax.\nModel is divorced from view functions (\"business rules\") and divorced from presentation. This is enforced throughout Django. \nOne of the common questions is \"why can't I do -- some crazy PHP-like thing -- in the Django template?\" The answer is that presentation is not processing. Do your processing in the Django view functions. Render the results as HTML in the template.\nAlso, Django has an ORM layer to divorce you from petty SQL considerations. MySQL or PostgreSQL are more-or-less equivalent from within Django.\n\nEdit\n\"Maturity\" means a lot of things. You specifically mentioned skilled people as a sign of maturity.\nDjango is pure Python. If you can find Python people, they can learn Django in a few days. They just have to do the tutorials. \n\nA Django-powered site is usually Apache + some glue + Django. The glue can be mod_wsgi or mod_python or mod_fastcgi. You have to manage this configuration with some care because there are several moving parts. This, however, is the same Apache config problem you have with PHP -- nothing new here.\nA Django site has one or more Django server instances, each with a settings file, a URL mapping and any number of applications. Pure Python at this point.\nA Django application has URL mappings, model and views. All pure Python. Unit tested with the Django extensions to Python's own internal unittest framework.\nThe model uses an ORM layer. This may, perhaps, be the single most confusing thing in Django. People sometimes design very odd models because they think either too high-level-uber-generic or they think too much in SQL. Django is a middle ground of mostly object-orientation with some SQL consideration. Get this, and you're unstoppable.\nA Django application may have templates, which are in their own template language. This would be about the only non-Python thing that's of much interest. You may want to add custom tags -- pure Python.\nYou'll probably have JavaScript (also true for PHP and every other web application framework). Nothing new here.\nSince Django's admin application automatically handles basic CRUD processing, you don't have to write this. You are free to write all the transactional stuff you want. But you don't have to. This leads you to a very, very powerful hybrid.\n\nYou write a few complicated, critical transactions. Pure Python, BTW.\nYou don't write any of the dumb table-maintenance transactions. No code at all is superior to Python or PHP.\nAfter you get your feet wet with the template engine and CSS's, you can tailor the admin interface to look like anything you want. This is HTML/CSS stuff, no Python or PHP.\n\n\nBottom line. Most of the skill set is Python. The ORM is -- syntactically -- Python, but requires some care in doing things simply and cleanly. The template is it's own language, but considerably simpler than PHP. The rest is SQL, Javascript, HTML, CSS, Apache and what-not.\n\nEdit\nDjango Maturity\n\nSee http://www.djangoproject.com/community/ for the number of active projects. \nJoin http://groups.google.com/group/django-users for daily flood of emails from users.\n\nThe Django blog stretches back to '05, meaning they had years of solid experience before finally releasing 1.0 in September of '08. Development apparently began in '03.\n",
"I'm going to assume that by \"business application\" you mean a web application hosted in an intranet environment as opposed to some sort of SaaS application on the internet.\nWhile you're in the process of architecting your application you need to consider the existing infrastructure and infrastructure support people of your employer/customer. Also, if the company is large enough to have things such as \"approved software/hardware lists,\" you should be aware of those. Keep in mind that some elements of the list may be downright retarded. Don't let past mistakes dictate the architecture of your app, but in cases where they are reasonably sensible I would pick my battles and stick with your enterprise standard. This can be a real pain when you pick a development stack that really works best on Unix/Linux, and then someone tries to force onto a Windows server admined by someone who's never touched anything but ASP.NET applications.\nUnless there is a particular PHP module that you intend to use that has no Python equivalent, I would drop PHP and use Django. If there is a compelling reason to use PHP, then I'd drop Python. I'm having difficulty imagining a scenario where you would want to use both at the same time.\nAs for PG versus MySQL, either works. Look at what you customer already has deployed, and if they have a bunch of one and little of another, pick that. If they have existing Oracle infrastructure you should consider using it. If they are an SQL Server shop...reconsider your stack and remember to pick your battles.\n",
"I can only repeat what other peoples here already said : if you choose Python for the domain layer, you won't gain anything (quite on the contrary) using PHP for the presentation layer. Others already advised Django, and that might be a pretty good choice, but there's no shortage of good Python web frameworks.\n",
"I personally agree with the second and the third points in your post. Speaking about PHP, in my opinion you can use Python also for presentation, there are many solutions (Zope, Plone ...) based on Python.\n",
"Just skip PHP and use Python (with Django, as already noticed while I typed). Django already separates the layers as you mentioned.\nI have never used PgSQL myself, but I think it's mostly a matter of taste whether you prefer it over MySQL. It used to support more enterprise features than MySQL but I'm not sure if that's still true with MySQL 5.0 and 5.1. Transactions are supported in MySQL, anyway (you have to use the InnoDB table engine, however).\n",
"Just to address the MySQL vs PgSQL issues - it shouldn't matter. They're both more than capable of the task, and any reasonable framework should isolate you from the differences relatively well. I think it's down to what you use already, what people have most experience in, and if there's a feature in one or the other you think you'd benefit from.\nIf you have no preference, you might want to go with MySQL purely because it's more popular for web work. This translates to more examples, easier to find help, etc. I actually prefer the philosophy of PgSQL, but this isn't a good enough reason to blow against the wind.\n",
"Just to throw it out there... there are PHP frameworks utilizing MVC.\nCodeigniter does simple and yet powerful things. You can definitely separate the template layer from the logic layer.\n"
] | [
11,
1,
1,
0,
0,
0,
0
] | [] | [] | [
"php",
"postgresql",
"python"
] | stackoverflow_0000439759_php_postgresql_python.txt |
Q:
Form (or Formset?) to handle multiple table rows in Django
I'm working on my first Django application. In short, what it needs to do is to display a list of film titles, and allow users to give a rating (out of 10) to each film. I've been able to use the {{ form }} and {{ formset }} syntax in a template to produce a form which lets you rate one film at a time, which corresponds to one row in a MySQL table, but how do I produce a form that iterates over all the movie titles in the database and produces a form that lets you rate lots of them at once?
At first, I thought this was what formsets were for, but I can't see any way to automatically iterate over the contents of a database table to produce items to go in the form, if you see what I mean.
Currently, my views.py has this code:
def survey(request):
ScoreFormSet = formset_factory(ScoreForm)
if request.method == 'POST':
formset = ScoreFormSet(request.POST, request.FILES)
if formset.is_valid():
return HttpResponseRedirect('/')
else:
formset = ScoreFormSet()
return render_to_response('cf/survey.html', {
'formset':formset,
})
And my survey.html has this:
<form action="/survey/" method="POST">
<table>
{{ formset }}
</table>
<input type = "submit" value = "Submit">
</form>
Oh, and the definition of ScoreForm and Score from models.py are:
class Score(models.Model):
movie = models.ForeignKey(Movie)
score = models.IntegerField()
user = models.ForeignKey(User)
class ScoreForm(ModelForm):
class Meta:
model = Score
So, in case the above is not clear, what I'm aiming to produce is a form which has one row per movie, and each row shows a title, and has a box to allow the user to enter their score.
If anyone can point me at the right sort of approach to this, I'd be most grateful.
A:
"At first, I thought this was what formsets were for, but I can't see any way to automatically iterate over the contents of a database table to produce items to go in the form, if you see what I mean."
You need to get a queryset. And you need to provide that queryset to your form as initial data. See using initial data with a formset for the code.
initial = [ list of { dictionaries }, one per form ]
Interestingly, this is a direct feature of the model API through the values method of a queryset.
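A rough sketch of that approach applied to the question's view — built here with a list comprehension rather than values(), since ScoreForm's field is named movie (extra=0 stops Django from appending blank forms):
def survey(request):
    # one initial dict per movie, so the formset renders one row per title
    initial = [{'movie': m.pk} for m in Movie.objects.all()]
    ScoreFormSet = formset_factory(ScoreForm, extra=0)
    formset = ScoreFormSet(initial=initial)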
A:
I have found my answer: using modelformset_factory instead of formset_factory solves the problem. Thanks...
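For anyone landing here, the call that answer refers to looks roughly like this:
from django.forms.models import modelformset_factory

ScoreFormSet = modelformset_factory(Score)
formset = ScoreFormSet(queryset=Score.objects.all())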
| Form (or Formset?) to handle multiple table rows in Django | I'm working on my first Django application. In short, what it needs to do is to display a list of film titles, and allow users to give a rating (out of 10) to each film. I've been able to use the {{ form }} and {{ formset }} syntax in a template to produce a form which lets you rate one film at a time, which corresponds to one row in a MySQL table, but how do I produce a form that iterates over all the movie titles in the database and produces a form that lets you rate lots of them at once?
At first, I thought this was what formsets were for, but I can't see any way to automatically iterate over the contents of a database table to produce items to go in the form, if you see what I mean.
Currently, my views.py has this code:
def survey(request):
ScoreFormSet = formset_factory(ScoreForm)
if request.method == 'POST':
formset = ScoreFormSet(request.POST, request.FILES)
if formset.is_valid():
return HttpResponseRedirect('/')
else:
formset = ScoreFormSet()
return render_to_response('cf/survey.html', {
'formset':formset,
})
And my survey.html has this:
<form action="/survey/" method="POST">
<table>
{{ formset }}
</table>
<input type = "submit" value = "Submit">
</form>
Oh, and the definition of ScoreForm and Score from models.py are:
class Score(models.Model):
movie = models.ForeignKey(Movie)
score = models.IntegerField()
user = models.ForeignKey(User)
class ScoreForm(ModelForm):
class Meta:
model = Score
So, in case the above is not clear, what I'm aiming to produce is a form which has one row per movie, and each row shows a title, and has a box to allow the user to enter their score.
If anyone can point me at the right sort of approach to this, I'd be most grateful.
| [
"\"At first, I thought this was what formsets were for, but I can't see any way to automatically iterate over the contents of a database table to produce items to go in the form, if you see what I mean.\"\nYou need to get a queryset. And you need to provide that queryset to your form as initial data. See using initial data with a formset for the code.\ninitial = [ list of { dictionaries }, one per form ] \n\nInterestingly, this is a direct feature of the model API through the values method of a queryset.\n",
"I have found my answer, using modelformset_factory instead formset_factory solves the problem, Thanks...\n"
] | [
3,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000298779_django_python.txt |
Q:
File and space in Python
I have a file like:
<space>
<space>
line1
<space>
column 1 column 2 column 3 ...
.
.
.
<space>
<space>
How do I remove these extra spaces?
I need to extract the heading, which will be on line1. Also, I need to extract column 1, column 2, column 3, etc.
At the end of the last column's content there is a '\n'. How do I get rid of it?
Help me with this...
Thank you
A:
Start by opening the file and reading all the lines:
f = open('filename')
lines = f.readlines()
Then...
# remove empty lines
lines = [l for l in lines if len(l.strip()) > 0]
header = lines[0]
line = lines[1].split(' ')
column1 = line[0]
column2 = line[1]
...
Also:
total_lines = len(lines)
total_columns = len(line)
A:
A straightforward solution, using strip() to drop spaces and split() to separate column data:
>>> mylines
[' \n', ' \n', 'line1\n', ' \n', ' \n', 'column1 column2 column3 \n']
>>> def parser(lines):
... header=""
... data=[]
... for x in lines:
... line = x.strip()
... if line == "":
... continue
... if header == "":
... header=line
... else:
... data.append(line.split())
... return {"header":header,"data":data}
...
>>> parser(mylines)
{'header': 'line1', 'data': [['column1', 'column2', 'column3']]}
>>>
A:
Using Generator functions to handle each element of parsing
def nonEmptyLines( aFile ):
    """Discard blank lines, yield only non-empty lines."""
    for line in aFile:
        if line.strip():
            yield line

def splitFields( aFile ):
    """Split a non-empty line into fields."""
    for line in nonEmptyLines(aFile):
        yield line.split()

def dictReader( aFile ):
    """Turn a file with a header line and data lines into dictionaries,
    like the ``csv`` module."""
    rows= iter( splitFields( aFile ) )
    heading= rows.next()
    for line in rows:
        yield dict( zip( heading, line ) )

rdr= dictReader( open( "myFile", "r" ) )
for d in rdr:
    print d
| File and space in Python | I have a file like:
<space>
<space>
line1
<space>
column 1 column 2 column 3 ...
.
.
.
<space>
<space>
How do I remove these extra spaces?
I need to extract the heading, which will be on line1. Also, I need to extract column 1, column 2, column 3, etc.
At the end of the last column's content there is a '\n'. How do I get rid of it?
Help me with this...
Thank you
| [
"Start by opening the file and reading all the lines:\nf = open('filename string');\nlines = f.readlines()\n\nThen...\n# remove empty lines\nlines = [l for l in lines if len(l.strip()) > 0]\nheader = lines[0]\nline = lines[1].split(' ')\ncolumn1 = line[0]\ncolumn2 = line[1]\n...\n\nAlso:\ntotal_lines = len(lines)\ntotal_columns = len(line)\n\n",
"A straightforward solution, using strip() to drop spaces and split() to separate column data:\n>>> mylines\n[' \\n', ' \\n', 'line1\\n', ' \\n', ' \\n', 'column1 column2 column3 \\n']\n>>> def parser(lines):\n... header=\"\"\n... data=[]\n... for x in lines:\n... line = x.strip()\n... if line == \"\":\n... continue\n... if header == \"\":\n... header=line\n... else:\n... data.append(line.split())\n... return {\"header\":header,\"data\":data}\n... \n>>> parser(mylines)\n{'header': 'line1', 'data': [['column1', 'column2', 'column3']]}\n>>> \n\n",
"Using Generator functions to handle each element of parsing\ndef nonEmptyLines( aFile ):\n \"\"\"Discard empty lines, yield only non-empty lines.\"\"\"\n for line in aFile:\n if len(line) > 0:\n yield line\n\ndef splitFields( aFile ):\n \"\"\"Split a non-empty line into fields.\"\"\"\n for line in nonEmptyLines(aFile):\n yield line.split()\n\ndef dictReader( aFile ):\n \"\"\"Turn non-empty lines file with header and data into dictionaries.\n Like the ``csv`` module.\"\"\"\n iter= iter( splitFields( aFile ) )\n heading= iter.next()\n for line in iter:\n yield dict( zip( heading, line ) )\n\nrdr= dictReader( \"myFile\", \"r\" )\nfor d in rdr:\n print d\n\n"
] | [
4,
1,
0
] | [] | [] | [
"file",
"parsing",
"python",
"spaces"
] | stackoverflow_0000494919_file_parsing_python_spaces.txt |
Q:
Which Python module is suitable for data manipulation in a list?
I have a sequence of x, y and z coordinates, which I need to manipulate. They are in one list of three-tuples, like [(x1, y1, z1), (x2, y2, z2), ...].
I need addition, multiplication and logarithm to manipulate my data.
I would like to study a module that is as powerful as the Awk language.
A:
I'm not sure exactly what you're after. You can do a lot with list comprehensions. For example, if you want to turn a list:
coords = [(x1, y1, z1), (x2, y2, z2), (x3, y3, z3)] # etc
into a tuple (x1+x2+x3, y1+y2+y3, z1+z2+z3), then you can do:
sums = (sum(a[0] for a in coords), sum(a[1] for a in coords), sum(a[2] for a in coords))
In fact, an experienced python programmer might write that as:
sums = map(sum, zip(*coords))
though that can look a bit like magic to a beginner.
If you want to multiply across coordinates, then the idea is similar. The only problem is python has no builtin multiplication equivalent to sum. We can build our own:
import operator
def prod(lst):
return reduce(operator.mul, lst)
Then you can multiply your tuples coordinate-wise as:
prods = map(prod, zip(*coords))
If you want to do something more complex with multiplication (inner product?) that will require a little more work (though it won't be very difficult).
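For instance, a quick sketch of an inner (dot) product of two coordinate tuples:
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

dot((1, 2, 3), (4, 5, 6))  # 1*4 + 2*5 + 3*6 = 32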
I'm not sure what you want to take the logarithm of. But you can find the log function in the math module:
from math import log
Hope this helps.
A:
If you need many array manipulation, then numpy is the best choice in python
>>> import numpy
>>> data = numpy.array([(2, 4, 8), (3, 6, 5), (7, 5, 2)])
>>> data
array([[2, 4, 8],
[3, 6, 5],
[7, 5, 2]])
>>> data.sum() # sum of all elements
42
>>> data.sum(axis=1) # sum of elements in rows
array([14, 14, 14])
>>> data.sum(axis=0) # sum of elements in columns
array([12, 15, 15])
>>> numpy.product(data, axis=1) # product of elements in rows
array([64, 90, 70])
>>> numpy.product(data, axis=0) # product of elements in columns
array([ 42, 120, 80])
>>> numpy.product(data) # product of all elements
403200
or element wise operation with arrays
>>> x,y,z = map(numpy.array,[(2, 4, 8), (3, 6, 5), (7, 5, 2)])
>>> x
array([2, 4, 8])
>>> y
array([3, 6, 5])
>>> z
array([7, 5, 2])
>>> x*y
array([ 6, 24, 40])
>>> x*y*z
array([ 42, 120, 80])
>>> x+y+z
array([12, 15, 15])
element wise mathematical operations, e.g.
>>> numpy.log(data)
array([[ 0.69314718, 1.38629436, 2.07944154],
[ 1.09861229, 1.79175947, 1.60943791],
[ 1.94591015, 1.60943791, 0.69314718]])
>>> numpy.exp(x)
array([ 7.3890561 , 54.59815003, 2980.95798704])
A:
You don't need a separate library or module to do this. Python has list comprehensions built into the language, which let you manipulate lists and perform calculations. You could use the numpy module to do the same thing if you want to do lots of scientific calculations, or if you want to do lots of heavy number crunching.
A:
In Python 3 the built-in reduce function is gone (it was moved to functools.reduce). You can do:
def prod(lst):
return [x*y*z for x, y, z in list(zip(*lst))]
coords = [(2, 4, 8), (3, 6, 5), (7, 5, 2)]
print(prod(coords))
>>> [42, 120, 80]
| Which Python module is suitable for data manipulation in a list? | I have a sequence of x, y and z coordinates, which I need to manipulate. They are in one list of three-tuples, like [(x1, y1, z1), (x2, y2, z2), ...].
I need addition, multiplication and logarithm to manipulate my data.
I would like to study a module that is as powerful as the Awk language.
| [
"I'm not sure exactly what you're after. You can do a lot with list comprehensions. For example, if you want to turn a list:\ncoords = [(x1, y1, z1), (x2, y2, z2), (x3, y3, z3)] # etc\n\ninto a tuple (x1+x2+x3, y1+y2+y3, z1+z2+z3), then you can do:\nsums = (sum(a[0] for a in coords), sum(a[1] for a in coords), sum(a[2] for a in coords))\n\nIn fact, an experienced python programmer might write that as:\nsums = map(sum, zip(*coords))\n\nthough that can look a bit like magic to a beginner.\nIf you want to multiply across coordinates, then the idea is similar. The only problem is python has no builtin multiplication equivalent to sum. We can build our own:\nimport operator\ndef prod(lst):\n return reduce(operator.mul, lst)\n\nThen you can multiply your tuples coordinate-wise as:\nprods = map(prod, zip(*coords))\n\nIf you want to do something more complex with multiplication (inner product?) that will require a little more work (though it won't be very difficult).\nI'm not sure what you want to take the logarithm of. But you can find the log function in the math module:\nfrom math import log\n\nHope this helps.\n",
"If you need many array manipulation, then numpy is the best choice in python\n>>> import numpy\n>>> data = numpy.array([(2, 4, 8), (3, 6, 5), (7, 5, 2)])\n>>> data\narray([[2, 4, 8],\n [3, 6, 5],\n [7, 5, 2]])\n\n>>> data.sum() # product of all elements\n42\n>>> data.sum(axis=1) # sum of elements in rows\narray([14, 14, 14])\n>>> data.sum(axis=0) # sum of elements in columns\narray([12, 15, 15])\n>>> numpy.product(data, axis=1) # product of elements in rows\narray([64, 90, 70])\n>>> numpy.product(data, axis=0) # product of elements in columns\narray([ 42, 120, 80])\n>>> numpy.product(data) # product of all elements\n403200\n\nor element wise operation with arrays\n>>> x,y,z = map(numpy.array,[(2, 4, 8), (3, 6, 5), (7, 5, 2)])\n>>> x\narray([2, 4, 8])\n>>> y\narray([3, 6, 5])\n>>> z\narray([7, 5, 2])\n\n>>> x*y\narray([ 6, 24, 40])\n>>> x*y*z\narray([ 42, 120, 80])\n>>> x+y+z\narray([12, 15, 15])\n\nelement wise mathematical operations, e.g.\n>>> numpy.log(data)\narray([[ 0.69314718, 1.38629436, 2.07944154],\n [ 1.09861229, 1.79175947, 1.60943791],\n [ 1.94591015, 1.60943791, 0.69314718]])\n>>> numpy.exp(x)\narray([ 7.3890561 , 54.59815003, 2980.95798704])\n\n",
"You don't need a separate library or module to do this. Python has list comprehensions built into the language, which lets you manipulate lists and perform caculations. You could use the numpy module to do the same thing if you want to do lots of scientific calculations, or if you want to do lots of heavy number crunching.\n",
"In Python 3 the reduce function is gone. You can do:\ndef prod(lst):\n return [x*y*z for x, y, z in list(zip(*lst))]\n\ncoords = [(2, 4, 8), (3, 6, 5), (7, 5, 2)]\nprint(prod(coords))\n>>> [42, 120, 80]\n\n"
] | [
8,
7,
2,
1
] | [] | [] | [
"module",
"python"
] | stackoverflow_0000493853_module_python.txt |
Q:
refactor this dictionary-to-xml converter in python
It's a small thing, really: I have this function that converts dict objects to xml.
Here's the function:
def dictToXml(d):
from xml.sax.saxutils import escape
def unicodify(o):
if o is None:
return u'';
return unicode(o)
lines = []
def addDict(node, offset):
for name, value in node.iteritems():
if isinstance(value, dict):
lines.append(offset + u"<%s>" % name)
addDict(value, offset + u" " * 4)
lines.append(offset + u"</%s>" % name)
elif isinstance(value, list):
for item in value:
if isinstance(item, dict):
lines.append(offset + u"<%s>" % name)
addDict(item, offset + u" " * 4)
lines.append(offset + u"</%s>" % name)
else:
lines.append(offset + u"<%s>%s</%s>" % (name, escape(unicodify(item)), name))
else:
lines.append(offset + u"<%s>%s</%s>" % (name, escape(unicodify(value)), name))
addDict(d, u"")
lines.append(u"")
return u"\n".join(lines)
For example, it converts this dictionary
{ 'site': { 'name': 'stackoverflow', 'blogger': [ 'Jeff', 'Joel' ] } }
to:
<site>
<name>stackoverflow</name>
<blogger>Jeff</blogger>
<blogger>Joel</blogger>
</site>
It works, but the addDict function looks a little too repetitive. I'm sure there's a way to refactor it into 3 co-recursive functions named addDict, addList and addElse, but my brain is stuck. Any help?
Also, any way to get rid of the offset + thing in every line would be nice.
NOTE: I chose these semantics because I'm trying to match the behavior of the json-to-xml converter in org.json, which I use in a different part of my project. If you got to this page just looking for a dictionary to xml converter, there are some really good options in some of the answers. (Especially pyfo).
A:
>>> from pyfo import pyfo
>>> d = ('site', { 'name': 'stackoverflow', 'blogger': [ 'Jeff', 'Joel' ] } )
>>> result = pyfo(d, pretty=True, prolog=True, encoding='ascii')
>>> print result.encode('ascii', 'xmlcharrefreplace')
<?xml version="1.0" encoding="ascii"?>
<site>
<blogger>
Jeff
Joel
</blogger>
<name>stackoverflow</name>
</site>
To install pyfo:
$ easy_install pyfo
A:
I noticed you have commonality in adding items. Using this commonality I would refactor adding an item to a separate function.
def addItem(item, name, offset):
if isinstance(item, dict):
lines.append(offset + u"<%s>" % name)
addDict(item, offset + u" " * 4)
lines.append(offset + u"</%s>" % name)
else:
lines.append(offset + u"<%s>%s</%s>" % (name, escape(unicodify(item)), name))
def addList(value,name, offset):
for item in value:
addItem(item, name, offset)
def addDict(node, offset):
for name, value in node.iteritems():
if isinstance(value, list):
addList(value, name, offset)
else:
addItem(value, name, offset)
Advisory warning: this code is not tested or written by anybody who actually uses Python.
A:
To get rid of repeated "offset+":
offset = [0]   # one-element list so the nested functions can rebind it (Python 2 has no nonlocal)
def addLine(text):
    lines.append(u" " * (offset[0] * 4) + text)
then
...
    addLine(u"<%s>" % name)
    offset[0] += 1
    addDict(value)
    offset[0] -= 1
    addLine(u"</%s>" % name)
Don't have access to an interpreter here, so take this with a grain of salt :(
A:
Your original code produces malformed XML and can produce the same XML for two different dictionaries (it is not injective, mathematically speaking).
For example, if you have a list as a value of the only key in a dictionary:
d = { 'list': [1,2,3] }
I expect that your code would produce
<list>1</list><list>2</list><list>3</list>
and there is no root element. Any XML should have one and only one root element.
Then given the XML produced by your code, it is impossible to say if this XML
<tag>1</tag>
was produced from { 'tag': 1 } or from { 'tag': [1] }.
So, I suggest
always start from the root element
represent lists with either two special tags (e.g. <list/> and <item/>) or mark them as such in attributes
Then, after decisions about these conceptual shortcomings we can generate correct and unambiguous XML. I chose to use attributes to markup lists, and used ElementTree to construct the XML tree automatically. Also, recursion helps (add_value_to_xml is called recursively):
from xml.etree.ElementTree import Element, SubElement, tostring
def is_scalar(v):
return isinstance(v,basestring) or isinstance(v,float) \
or isinstance(v,int) or isinstance(v,bool)
def add_value_to_xml(root,v):
if type(v) == type({}):
for k,kv in v.iteritems():
vx = SubElement(root,unicode(k))
vx = add_value_to_xml(vx,kv)
elif type(v) == list:
root.set('type','list')
for e in v:
li = SubElement(root,root.tag)
li = add_value_to_xml(li,e)
li.set('type','item')
elif is_scalar(v):
root.text = unicode(v)
else:
raise Exception("add_value_to_xml: unsupported type (%s)"%type(v))
return root
def dict_to_xml(d,root='dict'):
x = Element(root)
x = add_value_to_xml(x,d)
return x
d = { 'float': 5194.177, 'str': 'eggs', 'int': 42,
'list': [1,2], 'dict': { 'recursion': True } }
x = dict_to_xml(d)
print tostring(x)
The result of the conversion of the test dict is:
<dict><int>42</int><dict><recursion>True</recursion></dict><float>5194.177</float><list type="list"><list type="item">1</list><list type="item">2</list></list><str>eggs</str></dict>
A:
Here is my short sketch for a solution:
have a general addSomething() function that dispatches based on the type of the value to addDict(), addList() or addElse(). Those functions recursively call addSomething() again.
Basically you are factoring out the parts in the if clause and add a recursive call.
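A minimal sketch of that co-recursive dispatch, written to drop inside the question's dictToXml (it assumes lines, escape and unicodify from there are in scope):
def addSomething(name, value, offset):
    # one dispatcher; every branch may recurse back into it
    if isinstance(value, dict):
        addDict(name, value, offset)
    elif isinstance(value, list):
        addList(name, value, offset)
    else:
        addElse(name, value, offset)

def addDict(name, value, offset):
    lines.append(offset + u"<%s>" % name)
    for key, item in value.iteritems():
        addSomething(key, item, offset + u" " * 4)
    lines.append(offset + u"</%s>" % name)

def addList(name, value, offset):
    for item in value:
        addSomething(name, item, offset)

def addElse(name, value, offset):
    lines.append(offset + u"<%s>%s</%s>" % (name, escape(unicodify(value)), name))

for name, value in d.iteritems():
    addSomething(name, value, u"")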
A:
Here's what I find helpful when working with XML. Actually create the XML node structure first, then render this into text second.
This separates two unrelated concerns.
How do I transform my Python structure into an XML object model?
How do I format that XML object model?
It's hard when you put these two things together into one function. If, on the other hand, you separate them, then you have two things. First, you have a considerably simpler function to "walk" your Python structure and return an XML node. Your XML Nodes can be rendered into text with some preferred encoding and formatting rules applied.
from xml.sax.saxutils import escape
class Node( object ):
def __init__( self, name, *children ):
self.name= name
self.children= children
def toXml( self, indent ):
if len(self.children) == 0:
return u"%s<%s/>" % ( indent*4*u' ', self.name )
elif len(self.children) == 1:
child= self.children[0].toXml(0)
return u"%s<%s>%s</%s>" % ( indent*4*u' ', self.name, child, self.name )
else:
items = [ u"%s<%s>" % ( indent*4*u' ', self.name ) ]
items.extend( [ c.toXml(indent+1) for c in self.children ] )
items.append( u"%s</%s>" % ( indent*4*u' ', self.name ) )
return u"\n".join( items )
class Text( Node ):
def __init__( self, value ):
self.value= value
def toXml( self, indent ):
def unicodify(o):
if o is None:
return u'';
return unicode(o)
return "%s%s" % ( indent*4*u' ', escape( unicodify(self.value) ), )
def dictToXml(d):
def dictToNodeList(node):
nodes= []
for name, value in node.iteritems():
if isinstance(value, dict):
n= Node( name, *dictToNodeList( value ) )
nodes.append( n )
elif isinstance(value, list):
for item in value:
if isinstance(item, dict):
n= Node( name, *dictToNodeList( item ) )
nodes.append( n )
else:
n= Node( name, Text( item ) )
nodes.append( n )
else:
n= Node( name, Text( value ) )
nodes.append( n )
return nodes
return u"\n".join( [ n.toXml(0) for n in dictToNodeList(d) ] )
| refactor this dictionary-to-xml converter in python | It's a small thing, really: I have this function that converts dict objects to xml.
Here's the function:
def dictToXml(d):
from xml.sax.saxutils import escape
def unicodify(o):
if o is None:
return u'';
return unicode(o)
lines = []
def addDict(node, offset):
for name, value in node.iteritems():
if isinstance(value, dict):
lines.append(offset + u"<%s>" % name)
addDict(value, offset + u" " * 4)
lines.append(offset + u"</%s>" % name)
elif isinstance(value, list):
for item in value:
if isinstance(item, dict):
lines.append(offset + u"<%s>" % name)
addDict(item, offset + u" " * 4)
lines.append(offset + u"</%s>" % name)
else:
lines.append(offset + u"<%s>%s</%s>" % (name, escape(unicodify(item)), name))
else:
lines.append(offset + u"<%s>%s</%s>" % (name, escape(unicodify(value)), name))
addDict(d, u"")
lines.append(u"")
return u"\n".join(lines)
For example, it converts this dictionary
{ 'site': { 'name': 'stackoverflow', 'blogger': [ 'Jeff', 'Joel' ] } }
to:
<site>
<name>stackoverflow</name>
<blogger>Jeff</blogger>
<blogger>Joel</blogger>
</site>
It works, but the addDict function looks a little too repetitive. I'm sure there's a way to refactor it into 3 co-recursive functions named addDict, addList and addElse, but my brain is stuck. Any help?
Also, any way to get rid of the offset + thing in every line would be nice.
NOTE: I chose these semantics because I'm trying to match the behavior of the json-to-xml converter in org.json, which I use in a different part of my project. If you got to this page just looking for a dictionary to xml converter, there are some really good options in some of the answers. (Especially pyfo).
| [
">>> from pyfo import pyfo\n>>> d = ('site', { 'name': 'stackoverflow', 'blogger': [ 'Jeff', 'Joel' ] } )\n>>> result = pyfo(d, pretty=True, prolog=True, encoding='ascii')\n>>> print result.encode('ascii', 'xmlcharrefreplace')\n<?xml version=\"1.0\" encoding=\"ascii\"?>\n<site>\n <blogger>\n Jeff\n Joel\n </blogger>\n <name>stackoverflow</name>\n</site>\n\nTo install pyfo:\n$ easy_install pyfo\n\n",
"I noticed you have commonality in adding items. Using this commonality I would refactor adding an item to a separate function.\ndef addItem(item, name, offset):\n if isinstance(item, dict):\n lines.append(offset + u\"<%s>\" % name)\n addDict(item, offset + u\" \" * 4)\n lines.append(offset + u\"</%s>\" % name)\n else:\n lines.append(offset + u\"<%s>%s</%s>\" % (name, escape(unicodify(item)), name))\n\ndef addList(value,name, offset):\n for item in value:\n addItem(item, name, offset)\n\ndef addDict(node, offset):\n for name, value in node.iteritems():\n if isinstance(value, list):\n addList(value, name, offset)\n else:\n addItem(value, name, offset)\n\nAdvisory warning: this code is not tested or written by anybody who actually uses Python.\n",
"To get rid of repeated \"offset+\":\noffset = 0\ndef addLine(str):\n lines.append(u\" \" * (offset * 4) + str\n\nthen\n...\n addLine(u\"<%s>\" % name)\n offset = offset + 1\n addDict(value)\n offset = offset - 1\n addLine(u\"</%s>\" % name)\n\nDon't have access to an interpreter here, so take this with a grain of salt :(\n",
"Your original code produce malformed XML and can produce the same XML for two different dictionaries (is not injective, speaking mathematically).\nFor example, if you have a list as a value of the only key in a dictionary:\n d = { 'list': [1,2,3] }\n\nI expect that your code would produce\n <list>1</list><list>2</list><list>3</list>\n\nand there is no root element. Any XML should have one and only one root element.\nThen given the XML produced by your code, it is impossible to say if this XML\n <tag>1</tag>\n\nwas produced from { 'tag': 1 } or from { 'tag': [1] }.\nSo, I suggest\n\nalways start from the root element\nrepresent lists with either two special tags (e.g. <list/> and <item/>) or mark them as such in attributes\n\nThen, after decisions about these conceptual shortcomings we can generate correct and unambiguous XML. I chose to use attributes to markup lists, and used ElementTree to construct the XML tree automatically. Also, recursion helps (add_value_to_xml is called recursively):\nfrom xml.etree.ElementTree import Element, SubElement, tostring\n\ndef is_scalar(v):\n return isinstance(v,basestring) or isinstance(v,float) \\\n or isinstance(v,int) or isinstance(v,bool)\n\ndef add_value_to_xml(root,v):\n if type(v) == type({}):\n for k,kv in v.iteritems():\n vx = SubElement(root,unicode(k))\n vx = add_value_to_xml(vx,kv)\n elif type(v) == list:\n root.set('type','list')\n for e in v:\n li = SubElement(root,root.tag)\n li = add_value_to_xml(li,e)\n li.set('type','item')\n elif is_scalar(v):\n root.text = unicode(v)\n else:\n raise Exception(\"add_value_to_xml: unsuppoted type (%s)\"%type(v))\n return root\n\ndef dict_to_xml(d,root='dict'):\n x = Element(root)\n x = add_value_to_xml(x,d)\n return x\n\nd = { 'float': 5194.177, 'str': 'eggs', 'int': 42,\n 'list': [1,2], 'dict': { 'recursion': True } }\nx = dict_to_xml(d)\nprint tostring(x)\n\nThe result of the conversion of the test dict is:\n<dict><int>42</int><dict><recursion>True</recursion></dict><float>5194.177</float><list type=\"list\"><list type=\"item\">1</list><list type=\"item\">2</list></list><str>eggs</str></dict>\n\n",
"Here is my short sketch for a solution:\nhave a general addSomething() function that dispatches based on the type of the value to addDict(), addList() or addElse(). Those functions recursively call addSomething() again.\nBasically you are factoring out the parts in the if clause and add a recursive call.\n",
"Here's what I find helpful when working with XML. Actually create the XML node structure first, then render this into text second.\nThis separates two unrelated concerns.\n\nHow do I transform my Python structure into an XML object model?\nHow to I format that XML object model?\n\nIt's hard when you put these two things together into one function. If, on the other hand, you separate them, then you have two things. First, you have a considerably simpler function to \"walk\" your Python structure and return an XML node. Your XML Nodes can be rendered into text with some preferred encoding and formatting rules applied.\nfrom xml.sax.saxutils import escape\n\nclass Node( object ):\n def __init__( self, name, *children ):\n self.name= name\n self.children= children\n def toXml( self, indent ):\n if len(self.children) == 0:\n return u\"%s<%s/>\" % ( indent*4*u' ', self.name )\n elif len(self.children) == 1:\n child= self.children[0].toXml(0)\n return u\"%s<%s>%s</%s>\" % ( indent*4*u' ', self.name, child, self.name )\n else:\n items = [ u\"%s<%s>\" % ( indent*4*u' ', self.name ) ]\n items.extend( [ c.toXml(indent+1) for c in self.children ] )\n items.append( u\"%s</%s>\" % ( indent*4*u' ', self.name ) )\n return u\"\\n\".join( items )\n\nclass Text( Node ):\n def __init__( self, value ):\n self.value= value\n def toXml( self, indent ):\n def unicodify(o):\n if o is None:\n return u'';\n return unicode(o)\n return \"%s%s\" % ( indent*4*u' ', escape( unicodify(self.value) ), )\n\ndef dictToXml(d):\n\n def dictToNodeList(node):\n nodes= []\n for name, value in node.iteritems():\n if isinstance(value, dict):\n n= Node( name, *dictToNodeList( value ) )\n nodes.append( n )\n elif isinstance(value, list):\n for item in value:\n if isinstance(item, dict):\n n= Node( name, *dictToNodeList( value ) )\n nodes.append( n )\n else:\n n= Node( name, Text( item ) )\n nodes.append( n )\n else:\n n= Node( name, Text( value ) )\n nodes.append( n )\n return nodes\n\n return u\"\\n\".join( [ n.toXml(0) for n in dictToNodeList(d) ] )\n\n"
] | [
9,
4,
1,
1,
0,
0
] | [] | [] | [
"dry",
"python",
"xml"
] | stackoverflow_0000494881_dry_python_xml.txt |
Q:
Confusion about global variables in python
I'm new to python, so please excuse what is probably a pretty dumb question.
Basically, I have a single global variable, called _debug, which is used to determine whether or not the script should output debugging information. My problem is, I can't set it in a different python script than the one that uses it.
I have two scripts:
one.py:
-------
def my_function():
if _debug:
print "debugging!"
two.py:
-------
from one import *
_debug = False
my_function()
Running two.py generates an error:
NameError: global name '_debug' is not defined
Can anyone tell me what I'm doing wrong?
A:
There are more problems than just the leading underscore I'm afraid.
When you call my_function(), it still won't have your debug variable in its namespace, unless you import it from two.py.
Of course, doing that means you'll end up with cyclic dependencies (one.py -> two.py -> one.py), and you'll get NameErrors unless you refactor where various things are imported and declared.
One solution would be to create a simple third module which defines 'constants' like this, which can be safely imported from anywhere, e.g.:
constants.py
------------
debug = True
one.py
------
from constants import debug
#...
two.py
------
from constants import debug
#...
However, I would recommend just using the built in logging module for this - why not? It's easy to configure, simpler to use, reliable, flexible and extensible.
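For instance, a minimal replacement for the _debug flag with the standard library:
import logging
logging.basicConfig(level=logging.DEBUG)

log = logging.getLogger('one')

def my_function():
    log.debug("debugging!")   # silenced by raising the level, no globals needed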
A:
Names beginning with an underscore aren't imported with
from one import *
A:
You can also use the __debug__ variable for debugging. It is true if the interpreter wasn't started with the -O option. The assert statement might be helpful, too.
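For example (both of these are stripped when the interpreter runs with -O; some_condition is just a placeholder):
if __debug__:
    print "debugging!"

assert some_condition, "explain what went wrong"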
A:
A bit more explanation: my_function's global namespace is always that of the module one. This means that when the name _debug is not found inside my_function, Python looks it up in one, not in the namespace from which the function was called. Alabaster's answer provides a good solution.
| Confusion about global variables in python | I'm new to python, so please excuse what is probably a pretty dumb question.
Basically, I have a single global variable, called _debug, which is used to determine whether or not the script should output debugging information. My problem is, I can't set it in a different python script than the one that uses it.
I have two scripts:
one.py:
-------
def my_function():
if _debug:
print "debugging!"
two.py:
-------
from one import *
_debug = False
my_function()
Running two.py generates an error:
NameError: global name '_debug' is not defined
Can anyone tell me what I'm doing wrong?
| [
"There are more problems than just the leading underscore I'm afraid.\nWhen you call my_function(), it still won't have your debug variable in its namespace, unless you import it from two.py.\nOf course, doing that means you'll end up with cyclic dependencies (one.py -> two.py -> one.py), and you'll get NameErrors unless you refactor where various things are imported and declared.\nOne solution would be to create a simple third module which defines 'constants' like this, which can be safely imported from anywhere, e.g.:\nconstants.py\n------------\ndebug = True\n\none.py\n------\nfrom constants import debug\n#...\n\ntwo.py\n------\nfrom constants import debug\n#...\n\nHowever, I would recommend just using the built in logging module for this - why not? It's easy to configure, simpler to use, reliable, flexible and extensible.\n",
"Names beginning with an underscore aren't imported with \nfrom one import *\n\n",
"You can also use the __debug__ variable for debugging. It is true if the interpreter wasn't started with the -O option. The assert statement might be helpful, too.\n",
"A bit more explanation: The function my_function's namespace is always in the module one. This means that when the name _debug is not found in my_function, it looks in one, not the namespace from which the function is called. Alabaster's answer provides a good solution.\n"
] | [
16,
5,
4,
1
] | [] | [] | [
"global_variables",
"python",
"python_import"
] | stackoverflow_0000495422_global_variables_python_python_import.txt |
Q:
python setup.py develop not updating easy_install.pth
According to setuptools documentation, setup.py develop is supposed to create the egg-link file and update easy_install.pth when installing into site-packages folder. However, in my case it's only creating the egg-link file. How does setuptools decide if it needs to update easy_install.pth?
Some more info:
It works when I have setuptools 0.6c7 installed as a folder under site-packages. But when I use setuptools 0.6c9 installed as a zipped egg, it does not work.
A:
Reinstall setuptools with the command easy_install --always-unzip --upgrade setuptools. If that fixes it then the zipping was the problem.
A:
I'd try to debug it with pdb. The issue is most likely in easy_install's check_site_dir method, which looks for easy-install.pth.
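Something along these lines should drop you into that decision point (the dotted name matches setuptools 0.6; adjust if your layout differs):
$ python -m pdb setup.py develop
(Pdb) import setuptools.command.easy_install as ei
(Pdb) break ei.easy_install.check_site_dir
(Pdb) continue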
| python setup.py develop not updating easy_install.pth | According to setuptools documentation, setup.py develop is supposed to create the egg-link file and update easy_install.pth when installing into site-packages folder. However, in my case it's only creating the egg-link file. How does setuptools decide if it needs to update easy_install.pth?
Some more info:
It works when I have setuptools 0.6c7 installed as a folder under site-packages. But when I use setuptools 0.6c9 installed as a zipped egg, it does not work.
| [
"Reinstall setuptools with the command easy_install --always-unzip --upgrade setuptools. If that fixes it then the zipping was the problem.\n",
"I'd try to debug it with pdb. The issue is most likely with the easy install's method check_site_dir, which seeks for easy-install.pth. \n"
] | [
4,
0
] | [] | [] | [
"python",
"setuptools"
] | stackoverflow_0000421050_python_setuptools.txt |
Q:
How to send clip names using LiveAPI (of Ableton Live)
When an audio or midi clip is played (triggered), its name needs to be sent using OSC to another application.
LiveAPI is an interface which allows one to explore and automate Ableton Live using python scripts.
The code to do this must be written in a python script, which must be placed in a specific folder where Ableton Live can find it, selected in Live's Preferences.
More information about the LiveAPI can be found on these sites:
http://www.assembla.com/wiki/show/live-api
http://groups.google.com/group/liveapi
A:
According to the LiveAPI documentation, the Clip object has a "name" attribute which holds the clip name. Presumably that's what you want to send in your OSC packets.
Also, it's worth mentioning that the Max/MSP support in Live8 will probably be a lot more comfortable to work with than LiveAPI, which is pretty much a dead project. Max/MSP supposedly has OSC support, which was added to support the JazzMutant Lemur, but I'm not sure how much of that made it into Live. Anyways, it's worth keeping in mind for when Live8 is released.
A:
I know about Max 4 Live, but as I see it, it's kind of a different thing. Yes, it will probably be able to interface with Live to do all the stuff which people do now with LiveAPI. Some even think that M4L may not even go through LiveAPI, and use some internal interface instead (since Ableton and Cycling 74 are developing it together). From the promo videos on ableton.com site I think that M4L will mostly be about making and modifying sound, and not so much about controlling/reading other instruments, effects, clips etc.
I would not say that the LiveAPI project is dead, because a lot of hardware MIDI controllers rely on LiveAPI to do some auto-mapping magic. When you look at the MIDI Remote Scripts folder in Live, you'll see that each controller has its own folder with a python script. So I definitely think that LiveAPI is going to stay, and that this door into Live will remain open. They even created a new folder called Framework which contains some newer code, probably required for the new Akai controller to work with Live (that is what people believe, in theory).
The application I plan to use the playing clip's name is called vvvv, so I don't want to have to bring Max into this, because it is not really needed.
I had some success with someone's modification of the original LiveAPI code, but it only worked when I requested all the clips' names, not when I asked for just a single one. I didn't have time to play with it later, and the event for which I was preparing this has passed. I plan to work it out eventually, but it's not that urgent anymore.
| How to send clip names using LiveAPI (of Ableton Live) | When an audio or midi clip is played (triggered), its name needs to be sent using OSC to another application.
LiveAPI is an interface which allows one to explore and automate Ableton Live using python scripts.
The code to do this must be written in a python script, which must be placed in a specific folder where Ableton Live can find it, selected in Live's Preferences.
More information about the LiveAPI can be found on these sites:
http://www.assembla.com/wiki/show/live-api
http://groups.google.com/group/liveapi
| [
"According to the LiveAPI documentation, the Clip object has a \"name\" attribute which holds the clip name. Presumably that's what you want to send in your OSC packets.\nAlso, it's worth mentioning that the Max/MSP support in Live8 will probably be a lot more comfortable to work with than LiveAPI, which is pretty much a dead project. Max/MSP supposedly has OSC support, which was added to support the JazzMutant Lemur, but I'm not sure how much of that made it into Live. Anyways, it's worth keeping in mind for when Live8 is released.\n",
"I know about Max 4 Live, but as I see it, it's kind of a different thing. Yes, it will probably be able to interface with Live to do all the stuff which people do now with LiveAPI. Some even think that M4L may not even go through LiveAPI, and use some internal interface instead (since Ableton and Cycling 74 are developing it together). From the promo videos on ableton.com site I think that M4L will mostly be about making and modifying sound, and not so much about controlling/reading other instruments, effects, clips etc.\nI would not say that LiveAPI project is dead, because a lot of hardware MIDI controllers rely on LiveAPI to do some auto-mapping magic. When you look at the MIDI Remote Scripts folder in Live, you'll see that each controller has it's own folder with a python script. So I definitely think that LiveAPI is going to stay, and that this door into Live will remain open. They even created a new folder called Framework which contains some newer code, probably required for the new Akai controller to work with Live (that is what people believe in theory).\nThe application I plan to use the playing clip's name is called vvvv, so I don't want to have to bring Max into this, because it is not really needed.\nI had some success with someone's modification of the original LiveAPI code, but only worked when I request all the clips' names, not when I asked for just a single one. I didn't have time to play with it later, and the thing for which I was preparing this has passed. I plan to work that out eventually, but it's not that urgent anymore.\n"
] | [
2,
0
] | [] | [] | [
"ableton_live",
"api",
"osc",
"python"
] | stackoverflow_0000375052_ableton_live_api_osc_python.txt |
Q:
StaticText items disappear in wx.StaticBox
I'm creating a staticbox and a staticboxsizer in a vertical sizer. Everything works fine for me, but not on the customer's environment.
Everything in the static box is displayed except the labels. The snippet below shows how I construct the StaticBoxSizer.
sbox2 = wx.StaticBox(self, wx.ID_ANY, 'CH1 Only')
sboxsizer2 = wx.StaticBoxSizer(sbox2, wx.VERTICAL)
gsizer9 = wx.GridBagSizer(1,1)
gsizer9.Add(comp.MinMaxLabel_21, (1,0), (1,1), wx.ALL, 1)
gsizer9.Add(comp.MinMax_21, (1,1), (1,1), wx.ALL, 1)
gsizer9.Add(comp.MinMax_19, (2,1), (1,1), wx.ALL, 1)
gsizer9.Add(comp.MinMaxLabel_19, (2,0), (1,1), wx.ALL, 1)
gsizer9.Add(comp.VcOS_15, (3,1), (1,1), wx.ALL, 1)
gsizer9.Add(comp.MinMaxLabel_22, (3,0), (1,1), wx.ALL, 1)
gsizer9.Add(comp.MonLabel_18, (0,3), (1,1), wx.ALL, 1)
gsizer9.Add(comp.MonLabel_21, (0,4), (1,1), wx.ALL, 1)
gsizer9.Add(comp.MonLabel_17, (0,5), (1,1), wx.ALL, 1)
comp.MonLabel_22.Wrap(40)
gsizer9.Add(comp.MonLabel_22, (0,6), (1,1), wx.ALL, 1)
comp.MonLabel_19.Wrap(40)
gsizer9.Add(comp.MonLabel_19, (0,7), (1,1), wx.ALL, 1)
gsizer9.Add(comp.VcOS_10, (1,3), (1,1), wx.ALL, 1)
gsizer9.Add(comp.VcOS_11, (1,4), (1,1), wx.ALL, 1)
gsizer9.Add(comp.VcOS_12, (1,5), (1,1), wx.ALL, 1)
gsizer9.Add(comp.VcOS_13, (1,6), (1,1), wx.ALL, 1)
gsizer9.Add(comp.VcOS_14, (1,7), (1,1), wx.ALL, 1)
sboxsizer2.Add(gsizer9, 0,0,0)
vsizer4.Add(sboxsizer2, 0,0,0)
comp.MinMaxLabel_* returns a wx.StaticText(label='blah'), nothing fancy, just a wrapper, which works fine for the other ~400 items in other sizers. But inside a StaticBox or StaticBoxSizer, no StaticText is displayed on the customer's setup.
normally it is displayed as this in my setup:
[screenshot: http://img152.imageshack.us/img152/8758/normalnu9.jpg]
this is what i get on customer's setup:
[screenshot: http://img258.imageshack.us/img258/2351/problematiczo2.jpg]
Both setups have the same wxPython version, 2.8.9.1, and other 2.8.* versions also display correctly in my environment.
any suggestions?
A:
The source code of wxStaticBox does different things in painting code, depending on whether XP themes are enabled. In the screen shot without themes everything looks OK, in the one with themes enabled the labels are missing. Could you try on your system with themes enabled, and see whether labels display OK? Or can your customer temporarily disable themes and check whether that fixes the problem?
Also, what do you use as the parent for the labels - the frame / dialog or the static box? I can't see it from the posted code, but I would use the static box. Maybe this will make a difference too.
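A sketch of that second suggestion, i.e. making the box the parent of its labels (the label text here is made up):
sbox2 = wx.StaticBox(self, wx.ID_ANY, 'CH1 Only')
# children of the box itself, instead of siblings on the panel:
min_max_label = wx.StaticText(sbox2, wx.ID_ANY, 'Min/Max')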
A:
comp.Component uses the main panel (a ScrolledPanel) as the parent:
class MyBackground(ScrolledPanel):
def __init__(self, parent, components):
ScrolledPanel.__init__(self, parent, -1, style=wx.TAB_TRAVERSAL)
self.setFont()
comp = Components(components, self)
...
...
app = wx.PySimpleApp(0)
wx.InitAllImageHandlers()
frame = wx.Frame(None, -1, 'Set Limits', size=(800,600), style=wx.DEFAULT_FRAME_STYLE)
panel = MyBackground(frame, components)
As a temporary but successful solution, I've removed the static boxes and changed the StaticBoxSizer to a GridBagSizer, and everything works fine :) The problem is most probably related to the theme, as you've said, and I guess changing the foreground color for the labels might just work.
Thanks for the reply.
| StaticText items disappear in wx.StaticBox | I'm creating a staticbox and a staticboxsizer in a vertical sizer. Everything works fine for me, but not on the customer's environment.
Everything in the static box is displayed except the labels. The snippet below shows how I construct the StaticBoxSizer.
sbox2 = wx.StaticBox(self, wx.ID_ANY, 'CH1 Only')
sboxsizer2 = wx.StaticBoxSizer(sbox2, wx.VERTICAL)
gsizer9 = wx.GridBagSizer(1,1)
gsizer9.Add(comp.MinMaxLabel_21, (1,0), (1,1), wx.ALL, 1)
gsizer9.Add(comp.MinMax_21, (1,1), (1,1), wx.ALL, 1)
gsizer9.Add(comp.MinMax_19, (2,1), (1,1), wx.ALL, 1)
gsizer9.Add(comp.MinMaxLabel_19, (2,0), (1,1), wx.ALL, 1)
gsizer9.Add(comp.VcOS_15, (3,1), (1,1), wx.ALL, 1)
gsizer9.Add(comp.MinMaxLabel_22, (3,0), (1,1), wx.ALL, 1)
gsizer9.Add(comp.MonLabel_18, (0,3), (1,1), wx.ALL, 1)
gsizer9.Add(comp.MonLabel_21, (0,4), (1,1), wx.ALL, 1)
gsizer9.Add(comp.MonLabel_17, (0,5), (1,1), wx.ALL, 1)
comp.MonLabel_22.Wrap(40)
gsizer9.Add(comp.MonLabel_22, (0,6), (1,1), wx.ALL, 1)
comp.MonLabel_19.Wrap(40)
gsizer9.Add(comp.MonLabel_19, (0,7), (1,1), wx.ALL, 1)
gsizer9.Add(comp.VcOS_10, (1,3), (1,1), wx.ALL, 1)
gsizer9.Add(comp.VcOS_11, (1,4), (1,1), wx.ALL, 1)
gsizer9.Add(comp.VcOS_12, (1,5), (1,1), wx.ALL, 1)
gsizer9.Add(comp.VcOS_13, (1,6), (1,1), wx.ALL, 1)
gsizer9.Add(comp.VcOS_14, (1,7), (1,1), wx.ALL, 1)
sboxsizer2.Add(gsizer9, 0,0,0)
vsizer4.Add(sboxsizer2, 0,0,0)
comp.MinMaxLabel_* returns a wx.StaticText(label='blah'), nothing fancy, just a wrapper, which works fine for ~400 other items in other sizers. But inside a StaticBox or StaticBoxSizer, no StaticText is displayed on the customer's setup.
Normally it is displayed like this in my setup:
alt text http://img152.imageshack.us/img152/8758/normalnu9.jpg
This is what I get on the customer's setup:
alt text http://img258.imageshack.us/img258/2351/problematiczo2.jpg
Both setups have the same wxPython version, 2.8.9.1, but other 2.8.* versions also display the labels correctly in my environment.
Any suggestions?
| [
"The source code of wxStaticBox does different things in painting code, depending on whether XP themes are enabled. In the screen shot without themes everything looks OK, in the one with themes enabled the labels are missing. Could you try on your system with themes enabled, and see whether labels display OK? Or can your customer temporarily disable themes and check whether that fixes the problem?\nAlso, what do you use as the parent for the labels - the frame / dialog or the static box? I can't see it from the posted code, but I would use the static box. Maybe this will make a difference too.\n",
"comp.Component uses the main panel -ScrolledPanel- as the parent \nclass MyBackground(ScrolledPanel):\n def __init__(self, parent, components):\n ScrolledPanel.__init__(self, parent, -1, style=wx.TAB_TRAVERSAL)\n self.setFont()\n comp = Components(components, self)\n\n...\n...\napp = wx.PySimpleApp(0)\nwx.InitAllImageHandlers()\nframe = wx.Frame(None, -1, 'Set Limits', size=(800,600), style=wx.DEFAULT_FRAME_STYLE)\npanel = MyBackground(frame, components)\n\nas a temporary but succesful solution, i've removed staticboxes and changed staticboxsizer to gridbagsizer, everything works fine :) most probably problem is related to theme as you've said and i guess changing the foreground color for labels might just work.\nthanks for reply\n"
] | [
1,
1
] | [] | [] | [
"boxsizer",
"python",
"wxpython",
"wxwidgets"
] | stackoverflow_0000484389_boxsizer_python_wxpython_wxwidgets.txt |
Q:
How can I disable quoting in the Python 2.4 CSV reader?
I am writing a Python utility that needs to parse a large, regularly-updated CSV file I don't control. The utility must run on a server with only Python 2.4 available. The CSV file does not quote field values at all, but the Python 2.4 version of the csv library does not seem to give me any way to turn off quoting, it just allows me to set the quote character (dialect.quotechar = '"' or whatever). If I try setting the quote character to None or the empty string, I get an error.
I can sort of work around this by setting dialect.quotechar to some "rare" character, but this is brittle, as there is no ASCII character I can absolutely guarantee will not show up in field values (except the delimiter, but if I set dialect.quotechar = dialect.delimiter, things go predictably haywire).
In Python 2.5 and later, if I set dialect.quoting to csv.QUOTE_NONE, the CSV reader respects that and does not interpret any character as a quote character. Is there any way to duplicate this behavior in Python 2.4?
UPDATE: Thanks Triptych and Mark Roddy for helping to narrow the problem down. Here's a simplest-case demonstration:
>>> import csv
>>> import StringIO
>>> data = """
... 1,2,3,4,"5
... 1,2,3,4,5
... """
>>> reader = csv.reader(StringIO.StringIO(data))
>>> for i in reader: print i
...
[]
Traceback (most recent call last):
File "<stdin>", line 1, in ?
_csv.Error: newline inside string
The problem only occurs when there's a single double-quote character in the final column of a row. Unfortunately, this situation exists in my dataset. I've accepted Tanj's solution: manually assign a nonprinting character ("\x07" or BEL) as the quotechar. This is hacky, but it works, and I haven't yet seen another solution that does. Here's a demo of the solution in action:
>>> import csv
>>> import StringIO
>>> class MyDialect(csv.Dialect):
... quotechar = '\x07'
... delimiter = ','
... lineterminator = '\n'
... doublequote = False
... skipinitialspace = False
... quoting = csv.QUOTE_NONE
... escapechar = '\\'
...
>>> dialect = MyDialect()
>>> data = """
... 1,2,3,4,"5
... 1,2,3,4,5
... """
>>> reader = csv.reader(StringIO.StringIO(data), dialect=dialect)
>>> for i in reader: print i
...
[]
['1', '2', '3', '4', '"5']
['1', '2', '3', '4', '5']
In Python 2.5+ setting quoting to csv.QUOTE_NONE would be sufficient, and the value of quotechar would then be irrelevant. (I'm actually getting my initial dialect via a csv.Sniffer and then overriding the quotechar value, not by subclassing csv.Dialect, but I don't want that to be a distraction from the real issue; the above two sessions demonstrate that Sniffer isn't the problem.)
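For reference, a sketch of that Sniffer-plus-override approach (sample_text is a hypothetical chunk of the file, and this assumes Sniffer can detect the comma delimiter in it):
import csv
dialect = csv.Sniffer().sniff(sample_text)
dialect.quotechar = '\x07'        # the BEL workaround from above
dialect.quoting = csv.QUOTE_NONE  # honored on 2.5+, ignored by the 2.4 reader
reader = csv.reader(open('data.csv', 'rb'), dialect=dialect)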
A:
I don't know if Python would like/allow it, but could you use a non-printable ASCII code such as BEL or BS (backspace)? These I would think to be extremely rare.
A:
I tried a few examples using Python 2.4.3, and it seemed to be smart enough to detect that the fields were unquoted.
I know you've already accepted a (slightly hacky) answer, but have you tried just leaving the reader.dialect.quotechar value alone? What happens if you do?
Any chance we could get example input?
A:
+1 for Triptych
Confirmation that csv.reader automatically handles csv files without quotes:
>>> import StringIO
>>> import csv
>>> data="""
... 1,2,3,4,5
... 1,2,3,4,5
... 1,2,3,4,5
... """
>>> reader=csv.reader(StringIO.StringIO(data))
>>> for i in reader:
... print i
...
[]
['1', '2', '3', '4', '5']
['1', '2', '3', '4', '5']
['1', '2', '3', '4', '5']
| How can I disable quoting in the Python 2.4 CSV reader? | I am writing a Python utility that needs to parse a large, regularly-updated CSV file I don't control. The utility must run on a server with only Python 2.4 available. The CSV file does not quote field values at all, but the Python 2.4 version of the csv library does not seem to give me any way to turn off quoting, it just allows me to set the quote character (dialect.quotechar = '"' or whatever). If I try setting the quote character to None or the empty string, I get an error.
I can sort of work around this by setting dialect.quotechar to some "rare" character, but this is brittle, as there is no ASCII character I can absolutely guarantee will not show up in field values (except the delimiter, but if I set dialect.quotechar = dialect.delimiter, things go predictably haywire).
In Python 2.5 and later, if I set dialect.quoting to csv.QUOTE_NONE, the CSV reader respects that and does not interpret any character as a quote character. Is there any way to duplicate this behavior in Python 2.4?
UPDATE: Thanks Triptych and Mark Roddy for helping to narrow the problem down. Here's a simplest-case demonstration:
>>> import csv
>>> import StringIO
>>> data = """
... 1,2,3,4,"5
... 1,2,3,4,5
... """
>>> reader = csv.reader(StringIO.StringIO(data))
>>> for i in reader: print i
...
[]
Traceback (most recent call last):
File "<stdin>", line 1, in ?
_csv.Error: newline inside string
The problem only occurs when there's a single double-quote character in the final column of a row. Unfortunately, this situation exists in my dataset. I've accepted Tanj's solution: manually assign a nonprinting character ("\x07" or BEL) as the quotechar. This is hacky, but it works, and I haven't yet seen another solution that does. Here's a demo of the solution in action:
>>> import csv
>>> import StringIO
>>> class MyDialect(csv.Dialect):
... quotechar = '\x07'
... delimiter = ','
... lineterminator = '\n'
... doublequote = False
... skipinitialspace = False
... quoting = csv.QUOTE_NONE
... escapechar = '\\'
...
>>> dialect = MyDialect()
>>> data = """
... 1,2,3,4,"5
... 1,2,3,4,5
... """
>>> reader = csv.reader(StringIO.StringIO(data), dialect=dialect)
>>> for i in reader: print i
...
[]
['1', '2', '3', '4', '"5']
['1', '2', '3', '4', '5']
In Python 2.5+ setting quoting to csv.QUOTE_NONE would be sufficient, and the value of quotechar would then be irrelevant. (I'm actually getting my initial dialect via a csv.Sniffer and then overriding the quotechar value, not by subclassing csv.Dialect, but I don't want that to be a distraction from the real issue; the above two sessions demonstrate that Sniffer isn't the problem.)
| [
"I don't know if python would like/allow it but could you use a non-printable ascii code such as BEL or BS (backspace) These I would think to be extremely rare.\n",
"I tried a few examples using Python 2.4.3, and it seemed to be smart enough to detect that the fields were unquoted. \nI know you've already accepted a (slightly hacky) answer, but have you tried just leaving the reader.dialect.quotechar value alone? What happens if you do?\nAny chance we could get example input?\n",
"+1 for Triptych\nConfirmation that csv.reader automatically handles csv files with out quotes: \n>>> import StringIO\n>>> import csv\n>>> data=\"\"\"\n... 1,2,3,4,5\n... 1,2,3,4,5\n... 1,2,3,4,5\n... \"\"\"\n>>> reader=csv.reader(StringIO.StringIO(data))\n>>> for i in reader:\n... print i\n... \n[]\n['1', '2', '3', '4', '5']\n['1', '2', '3', '4', '5']\n['1', '2', '3', '4', '5']\n\n"
] | [
13,
3,
0
] | [] | [] | [
"csv",
"python"
] | stackoverflow_0000494054_csv_python.txt |
Q:
Would Python make a good substitute for the Windows command-line/batch scripts?
I've got some experience with Bash, which I don't mind, but now that I'm doing a lot of Windows development I'm needing to do basic stuff/write basic scripts using
the Windows command-line language. For some reason said language really irritates me, so I was considering learning Python and using that instead.
Is Python suitable for such things? Moving files around, creating scripts to do things like unzipping a backup and restoring a SQL database, etc.
A:
Python is well suited for these tasks, and I would guess much easier to develop in and debug than Windows batch files.
The question is, I think, how easy and painless it is to ensure that all the computers that you have to run these scripts on, have Python installed.
A:
Summary
Windows: no need to think, use Python.
Unix: quick or run-it-once scripts are for Bash, serious and/or long life time scripts are for Python.
The big talk
In a Windows environment, Python is definitely the best choice since cmd is crappy and PowerShell has not really settled yet. What's more Python can run on several platforms so it's a better investment. Finally, Python has a huge set of libraries so you will almost never hit the "god-I-can't-do-that" wall. This is not true for cmd and PowerShell.
In a Linux environment, this is a bit different. A lot of one liners are shorter, faster, more efficient and often more readable in pure Bash. But if you know your quick and dirty script is going to stay around for a while or will need to be improved, go for Python since it's far easier to maintain and extend and you will be able to do most of the tasks you can do with GNU tools with the standard library. And if you can't, you can still call the command-line from a Python script.
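As a sketch of that last point, shelling out to a GNU tool from a Python script (the grep invocation is only an example):
import subprocess
p = subprocess.Popen(['grep', 'ext3', '/etc/fstab'], stdout=subprocess.PIPE)
output = p.communicate()[0]  # the external command's stdout, as a string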
And of course you can call Python from the shell using -c option:
python -c "for line in open('/etc/fstab') : print line"
Some more literature about Python used for system administration tasks:
The IBM lab point of view.
A nice example to compare bash and python to script report.
The basics.
The must-have book.
A:
Sure, python is a pretty good choice for those tasks (I'm sure many will recommend PowerShell instead).
Here is a fine introduction from that point of view:
http://www.redhatmagazine.com/2008/02/07/python-for-bash-scripters-a-well-kept-secret/
EDIT: About gnud's concern: http://www.portablepython.com/
A:
Are you aware of PowerShell?
A:
Anything is a good replacement for the Batch file system in windows. Perl, Python, Powershell are all good choices.
A:
@BKB definitely has a valid concern. Here's a couple links you'll want to check if you run into any issues that can't be solved with the standard library:
Pywin32 is a package for working with low-level win32 APIs (advanced file system modifications, COM interfaces, etc.)
Tim Golden's Python page: he maintains a WMI wrapper package that builds off of Pywin32, but be sure to also check out his "Win32 How Do I" page for details on how to accomplish typical Windows tasks in Python.
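As a taste of the wmi package in use (a sketch; Win32_Service and its attributes are standard WMI classes):
import wmi
c = wmi.WMI()
for service in c.Win32_Service(StartMode="Auto", State="Stopped"):
    print service.Caption  # auto-start services that are not running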
A:
Python is certainly well suited to that. If you're going down that road, you might also want to investigate SCons which is a build system itself built with Python. The cool thing is the build scripts are actually full-blown Python scripts themselves, so you can do anything in the build script that you could otherwise do in Python. It makes make look pretty anemic in comparison.
Upon rereading your question, I should note that SCons is more suited to building software projects than to writing system maintenance scripts. But I wouldn't hesitate to recommend Python to you in any case.
A:
As a follow up, after some experimentation the thing I've found Python most useful for is any situation involving text manipulation (yourStringHere.replace(), regexes for more complex stuff) or testing some basic concept really quickly, which it is excellent for.
For stuff like SQL DB restore scripts I find I still usually just resort to batch files, as it's usually either something short enough that it actually takes more Python code to make the appropriate system calls or I can reuse snippets of code from other people reducing the writing time to just enough to tweak existing code to fit my needs.
As an addendum I would highly recommend IPython as a great interactive shell complete with tab completion and easy docstring access.
A:
Python, along with Pywin32, would be fine for Windows automation. However, VBScript or JScript used with the Windows Scripting Host works just as well, and requires nothing additional to install.
A:
I've done a decent amount of scripting in both Linux/Unix and Windows environments, in Python, Perl, batch files, Bash, etc. My advice is that if it's possible, install Cygwin and use Bash (it sounds from your description like installing a scripting language or env isn't a problem?). You'll be more comfortable with that since the transition is minimal.
If that's not an option, then here's my take. Batch files are very kludgy and limited, but make a lot of sense for simple tasks like 'copy some files' or 'restart this service'. Python will be cleaner, easier to maintain, and much more powerful. However, the downside is that either you end up calling external applications from Python with subprocess, popen or similar. Otherwise, you end up writing a bunch more code to do things that are comparatively simple in batch files, like copying a folder full of files. A lot of this depends on what your scripts are doing. Text/string processing is going to be much cleaner in Python, for example.
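For the two cases just mentioned - copying a folder full of files and restarting a service - the standard library keeps things short (the paths and service name here are hypothetical):
import shutil
import subprocess
shutil.copytree(r'C:\logs', r'D:\backup\logs')  # copy a folder; destination must not exist yet
subprocess.call(['net', 'stop', 'MyService'])
subprocess.call(['net', 'start', 'MyService'])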
Lastly, it's probably not an attractive alternative, but you might also consider VBScript as an alternative. I don't enjoy working with it as a language personally, but if portability is any kind of concern then it wins out by virtue of being available out of the box in any copy of Windows. Because of this I've found myself writing scripts that were unwieldy as batch files in VBScript instead, since I can't usually depend on Python or Perl or Bash being available on Windows.
A:
I've been using a lot of Windows Script Files lately. More powerful than batch scripts, and since it uses Windows scripting, there's nothing to install.
| Would Python make a good substitute for the Windows command-line/batch scripts? | I've got some experience with Bash, which I don't mind, but now that I'm doing a lot of Windows development I'm needing to do basic stuff/write basic scripts using
the Windows command-line language. For some reason said language really irritates me, so I was considering learning Python and using that instead.
Is Python suitable for such things? Moving files around, creating scripts to do things like unzipping a backup and restoring a SQL database, etc.
| [
"Python is well suited for these tasks, and I would guess much easier to develop in and debug than Windows batch files.\nThe question is, I think, how easy and painless it is to ensure that all the computers that you have to run these scripts on, have Python installed.\n",
"Summary\nWindows: no need to think, use Python.\nUnix: quick or run-it-once scripts are for Bash, serious and/or long life time scripts are for Python.\nThe big talk\nIn a Windows environment, Python is definitely the best choice since cmd is crappy and PowerShell has not really settled yet. What's more Python can run on several platform so it's a better investment. Finally, Python has a huge set of library so you will almost never hit the \"god-I-can't-do-that\" wall. This is not true for cmd and PowerShell.\nIn a Linux environment, this is a bit different. A lot of one liners are shorter, faster, more efficient and often more readable in pure Bash. But if you know your quick and dirty script is going to stay around for a while or will need to be improved, go for Python since it's far easier to maintain and extend and you will be able to do most of the task you can do with GNU tools with the standard library. And if you can't, you can still call the command-line from a Python script.\nAnd of course you can call Python from the shell using -c option:\npython -c \"for line in open('/etc/fstab') : print line\"\n\nSome more literature about Python used for system administration tasks:\n\nThe IBM lab point of view.\nA nice example to compare bash and python to script report.\nThe basics.\nThe must-have book.\n\n",
"Sure, python is a pretty good choice for those tasks (I'm sure many will recommend PowerShell instead).\nHere is a fine introduction from that point of view:\nhttp://www.redhatmagazine.com/2008/02/07/python-for-bash-scripters-a-well-kept-secret/\nEDIT: About gnud's concern: http://www.portablepython.com/\n",
"Are you aware of PowerShell?\n",
"Anything is a good replacement for the Batch file system in windows. Perl, Python, Powershell are all good choices.\n",
"@BKB definitely has a valid concern. Here's a couple links you'll want to check if you run into any issues that can't be solved with the standard library:\n\nPywin32 is a package for working with low-level win32 APIs (advanced file system modifications, COM interfaces, etc.)\nTim Golden's Python page: he maintains a WMI wrapper package that builds off of Pywin32, but be sure to also check out his \"Win32 How Do I\" page for details on how to accomplish typical Windows tasks in Python.\n\n",
"Python is certainly well suited to that. If you're going down that road, you might also want to investigate SCons which is a build system itself built with Python. The cool thing is the build scripts are actually full-blown Python scripts themselves, so you can do anything in the build script that you could otherwise do in Python. It makes make look pretty anemic in comparison.\nUpon rereading your question, I should note that SCons is more suited to building software projects than to writing system maintenance scripts. But I wouldn't hesitate to recommend Python to you in any case.\n",
"As a follow up, after some experimentation the thing I've found Python most useful for is any situation involving text manipulation (yourStringHere.replace(), regexes for more complex stuff) or testing some basic concept really quickly, which it is excellent for.\nFor stuff like SQL DB restore scripts I find I still usually just resort to batch files, as it's usually either something short enough that it actually takes more Python code to make the appropriate system calls or I can reuse snippets of code from other people reducing the writing time to just enough to tweak existing code to fit my needs.\nAs an addendum I would highly recommend IPython as a great interactive shell complete with tab completion and easy docstring access.\n",
"Python, along with Pywin32, would be fine for Windows automation. However, VBScript or JScript used with the Windows Scripting Host works just as well, and requires nothing additional to install.\n",
"I've done a decent amount of scripting in both Linux/Unix and Windows environments, in Python, Perl, batch files, Bash, etc. My advice is that if it's possible, install Cygwin and use Bash (it sounds from your description like installing a scripting language or env isn't a problem?). You'll be more comfortable with that since the transition is minimal.\nIf that's not an option, then here's my take. Batch files are very kludgy and limited, but make a lot of sense for simple tasks like 'copy some files' or 'restart this service'. Python will be cleaner, easier to maintain, and much more powerful. However, the downside is that either you end up calling external applications from Python with subprocess, popen or similar. Otherwise, you end up writing a bunch more code to do things that are comparatively simple in batch files, like copying a folder full of files. A lot of this depends on what your scripts are doing. Text/string processing is going to be much cleaner in Python, for example.\nLastly, it's probably not an attractive alternative, but you might also consider VBScript as an alternative. I don't enjoy working with it as a language personally, but if portability is any kind of concern then it wins out by virtue of being available out of the box in any copy of Windows. Because of this I've found myself writing scripts that were unwieldy as batch files in VBScript instead, since I can't usually depend on Python or Perl or Bash being available on Windows.\n",
"I've been using a lot of Windows Script Files lately. More powerful than batch scripts, and since it uses Windows scripting, there's nothing to install.\n"
] | [
25,
15,
9,
5,
5,
5,
3,
2,
1,
1,
0
] | [
"As much as I love python, I don't think it a good choice to replace basic windows batch scripts. \nI can't see see someone having to import modules like sys, os or getopt to do basic things you can do with shell like call a program, check environment variable or an argument.\nAlso, in my experience, goto is much easier to understand to most sysadmins than a function call.\n"
] | [
-2
] | [
"command_line",
"python",
"scripting"
] | stackoverflow_0000213798_command_line_python_scripting.txt |
Q:
Python's os.path choking on Hebrew filenames
I'm writing a script that has to move some files around, but unfortunately os.path doesn't seem to play well with internationalization. When I have files named in Hebrew, there are problems. Here's a screenshot of the contents of a directory:
(source: thegreenplace.net)
Now consider this code that goes over the files in this directory:
files = os.listdir('test_source')
for f in files:
pf = os.path.join('test_source', f)
print pf, os.path.exists(pf)
The output is:
test_source\ex True
test_source\joe True
test_source\mie.txt True
test_source\__()'''.txt True
test_source\????.txt False
Notice how os.path.exists thinks that the hebrew-named file doesn't even exist?
How can I fix this?
ActivePython 2.5.2 on Windows XP Home SP2
A:
Hmm, after some digging it appears that when supplying os.listdir a unicode string, this kinda works:
files = os.listdir(u'test_source')
for f in files:
pf = os.path.join(u'test_source', f)
print pf.encode('ascii', 'replace'), os.path.exists(pf)
===>
test_source\ex True
test_source\joe True
test_source\mie.txt True
test_source\__()'''.txt True
test_source\????.txt True
Some important observations here:
Windows XP (like all NT derivatives) stores all filenames in unicode
os.listdir (and similar functions, like os.walk) should be passed a unicode string in order to work correctly with unicode paths. Here's a quote from the aforementioned link:
os.listdir(), which returns filenames,
raises an issue: should it return the
Unicode version of filenames, or
should it return 8-bit strings
containing the encoded versions?
os.listdir() will do both, depending
on whether you provided the directory
path as an 8-bit string or a Unicode
string. If you pass a Unicode string
as the path, filenames will be decoded
using the filesystem's encoding and a
list of Unicode strings will be
returned, while passing an 8-bit path
will return the 8-bit versions of the
filenames.
And lastly, print wants an ascii string, not unicode, so the path has to be encoded to ascii.
A:
It looks like a Unicode vs ASCII issue - os.listdir is returning a list of ASCII strings.
Edit: I tried it on Python 3.0, also on XP SP2, and os.listdir simply omitted the Hebrew filenames instead of listing them at all.
According to the docs, this means it was unable to decode it:
Note that when os.listdir() returns a
list of strings, filenames that cannot
be decoded properly are omitted rather
than raising UnicodeError.
A:
It works like a charm using Python 2.5.1 on OS X:
subdir/bar.txt True
subdir/foo.txt True
subdir/עִבְרִית.txt True
Maybe that means that this has to do with Windows XP somehow?
EDIT: I also tried with unicode strings to try mimic the Windows behaviour better:
for f in os.listdir(u'subdir'):
pf = os.path.join(u'subdir', f)
print pf, os.path.exists(pf)
subdir/bar.txt True
subdir/foo.txt True
subdir/עִבְרִית.txt True
In the Terminal (os x stock command prompt app) that is. Using IDLE it still worked but didn't print the filename correctly. To make sure it really is unicode there I checked:
>>>os.listdir(u'listdir')[2]
u'\u05e2\u05b4\u05d1\u05b0\u05e8\u05b4\u05d9\u05ea.txt'
A:
A question mark is the more or less universal symbol displayed when a unicode character can't be represented in a specific encoding. Your terminal or interactive session under Windows is probably using ASCII or ISO-8859-1 or something. So the actual string is unicode, but it gets translated to ???? when printed to the terminal. That's why it works for PEZ, using OSX.
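A sketch of coping with that at print time - falling back to replacement characters instead of crashing or printing garbage (directory name as in the question):
import os
import sys
encoding = sys.stdout.encoding or 'ascii'
for f in os.listdir(u'test_source'):
    print f.encode(encoding, 'replace')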
| Python's os.path choking on Hebrew filenames | I'm writing a script that has to move some files around, but unfortunately os.path doesn't seem to play well with internationalization. When I have files named in Hebrew, there are problems. Here's a screenshot of the contents of a directory:
(source: thegreenplace.net)
Now consider this code that goes over the files in this directory:
files = os.listdir('test_source')
for f in files:
pf = os.path.join('test_source', f)
print pf, os.path.exists(pf)
The output is:
test_source\ex True
test_source\joe True
test_source\mie.txt True
test_source\__()'''.txt True
test_source\????.txt False
Notice how os.path.exists thinks that the hebrew-named file doesn't even exist?
How can I fix this?
ActivePython 2.5.2 on Windows XP Home SP2
| [
"Hmm, after some digging it appears that when supplying os.listdir a unicode string, this kinda works:\nfiles = os.listdir(u'test_source')\n\nfor f in files:\n\n pf = os.path.join(u'test_source', f)\n print pf.encode('ascii', 'replace'), os.path.exists(pf)\n\n===>\ntest_source\\ex True\ntest_source\\joe True\ntest_source\\mie.txt True\ntest_source\\__()'''.txt True\ntest_source\\????.txt True\n\nSome important observations here:\n\nWindows XP (like all NT derivatives) stores all filenames in unicode\nos.listdir (and similar functions, like os.walk) should be passed a unicode string in order to work correctly with unicode paths. Here's a quote from the aforementioned link:\n\n\nos.listdir(), which returns filenames,\n raises an issue: should it return the\n Unicode version of filenames, or\n should it return 8-bit strings\n containing the encoded versions?\n os.listdir() will do both, depending\n on whether you provided the directory\n path as an 8-bit string or a Unicode\n string. If you pass a Unicode string\n as the path, filenames will be decoded\n using the filesystem's encoding and a\n list of Unicode strings will be\n returned, while passing an 8-bit path\n will return the 8-bit versions of the\n filenames.\n\n\nAnd lastly, print wants an ascii string, not unicode, so the path has to be encoded to ascii.\n\n",
"It looks like a Unicode vs ASCII issue - os.listdir is returning a list of ASCII strings. \nEdit: I tried it on Python 3.0, also on XP SP2, and os.listdir simply omitted the Hebrew filenames instead of listing them at all.\nAccording to the docs, this means it was unable to decode it:\n\nNote that when os.listdir() returns a\n list of strings, filenames that cannot\n be decoded properly are omitted rather\n than raising UnicodeError.\n\n",
"It works like a charm using Python 2.5.1 on OS X:\nsubdir/bar.txt True\nsubdir/foo.txt True\nsubdir/עִבְרִית.txt True\n\nMaybe that means that this has to do with Windows XP somehow?\nEDIT: I also tried with unicode strings to try mimic the Windows behaviour better:\nfor f in os.listdir(u'subdir'):\n pf = os.path.join(u'subdir', f)\n print pf, os.path.exists(pf)\n\nsubdir/bar.txt True\nsubdir/foo.txt True\nsubdir/עִבְרִית.txt True\n\nIn the Terminal (os x stock command prompt app) that is. Using IDLE it still worked but didn't print the filename correctly. To make sure it really is unicode there I checked:\n>>>os.listdir(u'listdir')[2]\nu'\\u05e2\\u05b4\\u05d1\\u05b0\\u05e8\\u05b4\\u05d9\\u05ea.txt'\n\n",
"A question mark is the more or less universal symbol displayed when a unicode character can't be represented in a specific encoding. Your terminal or interactive session under Windows is probably using ASCII or ISO-8859-1 or something. So the actual string is unicode, but it gets translated to ???? when printed to the terminal. That's why it works for PEZ, using OSX.\n"
] | [
17,
3,
1,
0
] | [] | [] | [
"hebrew",
"internationalization",
"python"
] | stackoverflow_0000497233_hebrew_internationalization_python.txt |
Q:
Get rid of '\n' in Python
How to get rid of '\n' at the end of a line?
A:
"string \n".strip();
or
"string \n".rstrip();
A:
If, as Rolf suggests in his comment, you want to print text without having a newline automatically appended, use
print "foo",
Note the trailing comma.
A:
Get rid of just the "\n" at the end of the line:
>>> "string \n".rstrip("\n")
'string '
Get rid of all whitespace at the end of the line:
>>> "string \n".rstrip()
'string'
Split text by lines, stripping trailing newlines:
>>> "line 1\nline 2 \nline 3\n".splitlines()
['line 1', 'line 2 ', 'line 3']
A:
In Python 3, to print a string without a newline, set the end to an empty string:
print("some string", end="")
A:
If you want a slightly more complex and explicit way of writing output:
import sys
sys.stdout.write("string")
Then you would be responsible for your own newlines.
| Get rid of '\n' in Python | How to get rid of '\n' at the end of a line?
| [
"\"string \\n\".strip();\n\nor\n\"string \\n\".rstrip();\n\n",
"If, as Rolf suggests in his comment, you want to print text without having a newline automatically appended, use\nprint \"foo\",\n\nNote the trailing comma.\n",
"Get rid of just the \"\\n\" at the end of the line:\n>>> \"string \\n\".rstrip(\"\\n\")\n'string '\n\nGet rid of all whitespace at the end of the line:\n>>> \"string \\n\".rstrip()\n'string'\n\nSplit text by lines, stripping trailing newlines:\n>>> \"line 1\\nline 2 \\nline 3\\n\".splitlines()\n['line 1', 'line 2 ', 'line 3']\n\n",
"In Python 3, to print a string without a newline, set the end to an empty string:\nprint(\"some string\", end=\"\")\n\n",
"If you want a slightly more complex and explicit way of writing output:\nimport sys\nsys.stdout.write(\"string\")\n\nThen you would responsible for your own newlines.\n"
] | [
25,
18,
7,
4,
2
] | [] | [] | [
"python"
] | stackoverflow_0000495424_python.txt |
Q:
Outputting to a text file
How can I print the output of the following code to a .txt file?
y = '10.1.1.' # /24 network,
for x in range(255):
x += 1
print y + str(x) # not happy that it's in string, but how to print it into a.txt
There's copy-paste, but I would rather try something more interesting.
A:
f = open('myfile.txt', 'w')
for x in range(255):
ip = "10.1.1.%s\n" % str(x)
f.write(ip)
f.close()
A:
scriptname.py >> output.txt
A:
What is the x += 1 for? It seems to be a workaround for range(255) being 0 based - which gives the sequence 0,1,2...254.
range(1,256) will better give you what you want.
An alternative to other answers:
NETWORK = '10.1.1'
f = open('outfile.txt', 'w')
try:
for machine in range(1,256):
print >> f, "%s.%s" % (NETWORK, machine)
finally:
f.close()
A:
In Python 3, you can use the print function's keyword argument
called file. "a" means "append."
f = open("network.txt", "a")
for i in range(1, 256):
print("10.1.1." + str(i), file=f)
f.close()
| Outputting to a text file | How can I print the output of the following code to a .txt file?
y = '10.1.1.' # /24 network,
for x in range(255):
x += 1
print y + str(x) # not happy that it's in string, but how to print it into a.txt
There's copy-paste, but I would rather try something more interesting.
| [
"f = open('myfile.txt', 'w')\nfor x in range(255):\n ip = \"10.1.1.%s\\n\" % str(x)\n f.write(ip)\nf.close()\n\n",
"scriptname.py >> output.txt\n",
"What is the x += 1 for? It seems to be a workaround for range(255) being 0 based - which gives the sequence 0,1,2...254.\nrange(1,256) will better give you what you want.\nAn alternative to other answers:\nNETWORK = '10.1.1'\nf = open('outfile.txt', 'w')\ntry:\n for machine in range(1,256):\n print >> f, \"%s.%s\" % (NETWORK, machine)\nfinally:\n f.close()\n\n",
"In Python 3, you can use the print function's keyword argument\ncalled file. \"a\" means \"append.\"\nf = open(\"network.txt\", \"a\")\nfor i in range(1, 256):\n print(\"10.1.1.\" + str(i), file=f)\nf.close()\n\n"
] | [
6,
3,
1,
0
] | [] | [] | [
"python",
"text"
] | stackoverflow_0000493816_python_text.txt |
Q:
How to make python gracefully fail?
I was just wondering how you make Python fail in a user-defined way on all possible errors.
For example, I'm writing a program that processes a (large) list of items, and some of the items may not be in the format I defined. If Python detects an error, it currently just spits out an ugly error message and stops the whole process. However, I want it to just output the error somewhere together with some context and then move on to the next item.
If anyone can help me with this it would be greatly appreciated!
Thanks a lot!
Jason
A:
The following are a few basic strategies I regularly use in my more-than-trivial scripts and medium-size applications.
Tip 1: Trap the error at every level where it makes sense to continue processing. In your case it may be in the inside the loop. You don't have to protect every single line or every single function call, but only the places where it makes a difference to survive the error.
Tip 2: Use the logging module to report what happened in a way that is configurable independently from how you compose the module with other modules in a larger applications. Start importing the root logger in your module, then, using it in a few different places, you may eventually figure out a more sensible logging hierarchy.
import logging
logger = logging.getLogger()
for item in items:
try:
process(item)
except Exception, exc:
logger.warn("error while processing item: %s", exc)
Tip 3: define an "application exception", eventually you may want to define a hierarchy of such exception but this is better discovered when the need arise. Use such exception(s) to "bubble out" when the data you are dealing with are not what you expected or to signal inconsistent situations, while separating them from the normal standard exception arising from regular bugs or problems outside the modeled domain (IO errors etc).
class DomainException(Exception):
"""Life is not what I expected"""
def process(item):
# There is no way that this item can be processed, so bail out quickly.
# Here you are assuming that your caller will report this error but probably
# it will be able to process the other items.
if item.foo > item.bar:
raise DomainException("bad news")
# Everybody knows that every item has more that 10 wickets, so
# the following instruction is assumed always being successful.
# But even if luck is not on our side, our caller will be able to
# cope with this situation and keep on working
item.wickets[10] *= 2
The main function is the outmost checkpoint: finally deal here with the possible ways your task finished. If this was not a shell script (but e.g. the processing beneath a dialog in an UI application or an operation after a POST in a web application) only the way you report the error changes (and the use of the logging method completely separates the implementation of the processing from its interface).
def main():
try:
do_all_the_processing()
return 0
except DomainException, exc:
logger.error("I couldn't finish. The reason is: %s", exc)
return 1
except Exception, exc:
logger.error("Unexpected error: %s - %s", exc.__class__.__name__, exc)
# In this case you may want to forward a stacktrace to the developers via e-mail
return 1
except BaseException:
logger.info("user stop") # this deals with a ctrl-c
return 1
if __name__ == '__main__':
sys.exit(main())
A:
The ugly error message means that an exception is raised. You need to catch the exception.
A good place to start is the Python tutorial's section on exceptions.
Basically you need to wrap your code in a try...except block like this:
try:
do_something_dangerous()
except SomeException:
handle_the_error()
A:
use try... except idiom
try:
# code that possibly breaks
except RelevantError: # you need to know what kind of errors you code might produce
# show your message
A:
all possible errors
The other answers pretty much cover how to make your program gracefully fail, but I'd like to mention one thing -- You don't want to gracefully fail all errors. If you hide all your errors, you won't be shown those which signify that the logic of the program is wrong - namely errors you want to see.
So while it's important to catch your exceptions, make sure you know exactly which exceptions are actually being caught.
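A small sketch of that advice - catch only the errors you expect per item, and keep full details for anything else (process(), items, and ValueError stand in for your own function, data, and expected error type):
import traceback
for item in items:
    try:
        process(item)
    except ValueError, exc:  # the error you know this data can produce
        print "skipping %r: %s" % (item, exc)
    except Exception:        # anything else is a real bug - keep the traceback
        traceback.print_exc()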
A:
When Python runs into an error condition, it throws an exception.
The way to handle this is to catch the exception and maybe handle it.
You might check out the section on exceptions in the Python tutorial.
You expressed an interest in catching all exceptions. This could be done by catching the Exception class. According to the documentation:
All built-in, non-system-exiting
exceptions are derived from this
class. All user-defined exceptions
should also be derived from this class
| How to make python gracefully fail? | I was just wondering how you make Python fail in a user-defined way on all possible errors.
For example, I'm writing a program that processes a (large) list of items, and some of the items may not be in the format I defined. If Python detects an error, it currently just spits out an ugly error message and stops the whole process. However, I want it to just output the error somewhere together with some context and then move on to the next item.
If anyone can help me with this it would be greatly appreciated!
Thanks a lot!
Jason
| [
"The following are a few basic strategies I regularly use in my more-than-trivial scripts and medium-size applications.\nTip 1: Trap the error at every level where it makes sense to continue processing. In your case it may be in the inside the loop. You don't have to protect every single line or every single function call, but only the places where it makes a difference to survive the error.\nTip 2: Use the logging module to report what happened in a way that is configurable independently from how you compose the module with other modules in a larger applications. Start importing the root logger in your module, then, using it in a few different places, you may eventually figure out a more sensible logging hierarchy.\nimport logging\nlogger = logging.getLogger()\n\nfor item in items:\n try:\n process(item)\n except Exception, exc:\n logger.warn(\"error while processing item: %s\", exc)\n\nTip 3: define an \"application exception\", eventually you may want to define a hierarchy of such exception but this is better discovered when the need arise. Use such exception(s) to \"bubble out\" when the data you are dealing with are not what you expected or to signal inconsistent situations, while separating them from the normal standard exception arising from regular bugs or problems outside the modeled domain (IO errors etc).\nclass DomainException(Exception):\n \"\"\"Life is not what I expected\"\"\"\n\ndef process(item):\n # There is no way that this item can be processed, so bail out quickly.\n # Here you are assuming that your caller will report this error but probably\n # it will be able to process the other items.\n if item.foo > item.bar:\n raise DomainException(\"bad news\")\n\n # Everybody knows that every item has more that 10 wickets, so\n # the following instruction is assumed always being successful.\n # But even if luck is not on our side, our caller will be able to\n # cope with this situation and keep on working\n item.wickets[10] *= 2\n\nThe main function is the outmost checkpoint: finally deal here with the possible ways your task finished. If this was not a shell script (but e.g. the processing beneath a dialog in an UI application or an operation after a POST in a web application) only the way you report the error changes (and the use of the logging method completely separates the implementation of the processing from its interface).\ndef main():\n try:\n do_all_the_processing()\n return 0\n except DomainException, exc:\n logger.error(\"I couldn't finish. The reason is: %s\", exc)\n return 1\n except Exception, exc:\n logger.error(\"Unexpected error: %s - %s\", exc.__class__.__name__, exc)\n # In this case you may want to forward a stacktrace to the developers via e-mail\n return 1\n except BaseException:\n logger.info(\"user stop\") # this deals with a ctrl-c\n return 1\n\nif __name__ == '__main__':\n sys.exit(main())\n\n",
"The ugly error message means that an exception is raised. You need to catch the exception.\nA good place to start is the Python tutorial's section on exceptions.\nBasically you need to wrap your code in a try...except block like this:\ntry:\n do_something_dangerous()\nexcept SomeException:\n handle_the_error()\n\n",
"use try... except idiom\ntry:\n # code that possibly breaks\nexcept RelevantError: # you need to know what kind of errors you code might produce\n # show your message\n\n",
"\nall possible errors\n\nThe other answers pretty much cover how to make your program gracefully fail, but I'd like to mention one thing -- You don't want to gracefully fail all errors. If you hide all your errors, you won't be shown those which signify that the logic of the program is wrong - namely errors you want to see.\nSo while it's important to catch your exceptions, make sure you know exactly which exceptions are actually being caught.\n",
"When Python runs into an error condition, it is throwing a exception. \nThe way to handle this is to catch the exception and maybe handle it. \nYou might check out the section on exceptions on the python tutorial.\nYou expressed an interest in catching all exceptions. This could be done by catching the Exception class. according to the documentation:\n\nAll built-in, non-system-exiting\n exceptions are derived from this\n class. All user-defined exceptions\n should also be derived from this class\n\n"
] | [
25,
8,
4,
2,
0
] | [] | [] | [
"python"
] | stackoverflow_0000497952_python.txt |
Q:
IronPython vs. C# for small-scale projects
I currently use Python for most of my programming projects (mainly rapid development of small programs and prototypes). I'd like to invest time in learning a language that gives me the flexibility to use various Microsoft tools and APIs whenever the opportunity arises. I'm trying to decide between IronPython and C#. Since Python is my favorite programming language (mainly because of its conciseness and clean syntax), IronPython sounds like the ideal option. Yet after reading about it a little bit I have several questions.
For those of you who have used IronPython, does it ever become unclear where classic Python ends and .NET begins? For example, there appears to be significant overlap in functionality between the .NET libraries and the Python standard library, so when I need to do string operations or parse XML, I'm unclear which library I'm supposed to use. Also, I'm unclear when I'm supposed to use Python versus .NET data types in my code. For example, which of the following would I be using in my code?
d = {}
or
d = System.Collections.Hashtable()
(By the way, it seems that if I do a lot of things like the latter I might lose some of the conciseness, which is why I favor Python in the first place.)
Another issue is that a number of Microsoft's developer tools, such as .NET CF and Xbox XNA, are not available in IronPython. Are there more situations where IronPython wouldn't give me the full reach of C#?
A:
I've built a large-scale application in IronPython bound with C#.
It's almost completely seamless. The only things missing in IronPython from the true "python" feel are the C-based libraries (gotta use .NET for those) and IDLE.
The language interacts with other .NET languages like a dream... Specifically if you embed the interpreter and bind variables by reference.
By the way, a hash in IronPython is declared:
d = {}
Just be aware that it's actually an IronPython.Dict object, and not a C# dictionary. That said, the conversions often work invisibly if you pass it to a .NET class, and if you need to convert explicitly, there are built-ins that do it just fine.
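To make that concrete, a tiny IronPython sketch (ArrayList is just an example .NET type):
from System.Collections import ArrayList
py_list = [1, 2, 3]
net_list = ArrayList(py_list)  # the Python list is accepted as an ICollection
print net_list.Count           # -> 3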
All in all, an awesome language to use with .NET, if you have reason to.
Just a word of advice: Avoid the Visual Studio IronPython IDE like the plague. I found the automatic line completions screwed up on indentation, between spaces and tabs. Now -that- is a difficult-to-trace bug inserted into code.
A:
I'd suggest taking a look at Boo [http://boo.codehaus.org/], a .NET-based language with a syntax inspired by Python, but which provides the full range of .NET 3.5 functionality.
IronPython is great for using .NET-centric libraries -- but it isn't well-suited to creating them due to underlying differences in how the languages do typing. As Boo does inference-based typing at compile time except where duck typing is explicitly requested (or a specific type is given by the user), it lets you build .NET-centric libraries easily usable from C# (and other languages') code, which IronPython isn't suitable for; also, as it has to do less introspection at runtime, Boo compiles to faster code.
| IronPython vs. C# for small-scale projects | I currently use Python for most of my programming projects (mainly rapid development of small programs and prototypes). I'd like to invest time in learning a language that gives me the flexibility to use various Microsoft tools and APIs whenever the opportunity arises. I'm trying to decide between IronPython and C#. Since Python is my favorite programming language (mainly because of its conciseness and clean syntax), IronPython sounds like the ideal option. Yet after reading about it a little bit I have several questions.
For those of you who have used IronPython, does it ever become unclear where classic Python ends and .NET begins? For example, there appears to be significant overlap in functionality between the .NET libraries and the Python standard library, so when I need to do string operations or parse XML, I'm unclear which library I'm supposed to use. Also, I'm unclear when I'm supposed to use Python versus .NET data types in my code. For example, which of the following would I be using in my code?
d = {}
or
d = System.Collections.Hashtable()
(By the way, it seems that if I do a lot of things like the latter I might lose some of the conciseness, which is why I favor Python in the first place.)
Another issue is that a number of Microsoft's developer tools, such as .NET CF and Xbox XNA, are not available in IronPython. Are there more situations where IronPython wouldn't give me the full reach of C#?
| [
"I've built a large-scale application in IronPython bound with C#.\nIt's almost completely seamless. The only things missing in IronPython from the true \"python\" feel are the C-based libraries (gotta use .NET for those) and IDLE.\nThe language interacts with other .NET languages like a dream... Specifically if you embed the interpreter and bind variables by reference.\nBy the way, a hash in IronPython is declared:\nd = {}\n\nJust be aware that it's actually an IronPython.Dict object, and not a C# dictionary. That said, the conversions often work invisibly if you pass it to a .NET class, and if you need to convert explicitly, there are built-ins that do it just fine.\nAll in all, an awesome language to use with .NET, if you have reason to.\nJust a word of advice: Avoid the Visual Studio IronPython IDE like the plague. I found the automatic line completions screwed up on indentation, between spaces and tabs. Now -that- is a difficult-to-trace bug inserted into code.\n",
"I'd suggest taking a look at Boo [http://boo.codehaus.org/], a .NET-based language with a syntax inspired by Python, but which provides the full range of .NET 3.5 functionality.\nIronPython is great for using .NET-centric libraries -- but it isn't well-suited to creating them due to underlying differences in how the languages do typing. As Boo does inference-based typing at compile time except where duck typing is explicitly requested (or a specific type is given by the user), it lets you build .NET-centric libraries easily usable from C# (and other languages') code, which IronPython isn't suitable for; also, as it has to do less introspection at runtime, Boo compiles to faster code.\n"
] | [
11,
3
] | [] | [] | [
".net",
"c#",
"ironpython",
"python"
] | stackoverflow_0000497747_.net_c#_ironpython_python.txt |
Q:
What's a good way to keep track of class instance variables in Python?
I'm a C++ programmer just starting to learn Python. I'd like to know how you keep track of instance variables in large Python classes. I'm used to having a .h file that gives me a neat list (complete with comments) of all the class' members. But since Python allows you to add new instance variables on the fly, how do you keep track of them all?
I'm picturing a scenario where I mistakenly add a new instance variable when I already had one - but it was 1000 lines away from where I was working. Are there standard practices for avoiding this?
Edit: It appears I created some confusion with the term "member variable." I really mean instance variable, and I've edited my question accordingly.
A:
I would say, the standard practice to avoid this is to not write classes where you can be 1000 lines away from anything!
Seriously, that's way too much for just about any useful class, especially in a language that is as expressive as Python. Using more of what the Standard Library offers and abstracting away code into separate modules should help keeping your LOC count down.
The largest classes in the standard library have well below 100 lines!
A:
First of all: class attributes, or instance attributes? Or both? =)
Usually you just add instance attributes in __init__, and class attributes in the class definition, often before method definitions... which should probably cover 90% of use cases.
If code adds attributes on the fly, it probably (hopefully :-) has good reasons for doing so... leveraging dynamic features, introspection, etc. Other than that, adding attributes this way is probably less common than you think.
A:
pylint can statically detect attributes that aren't detected in __init__, along with many other potential bugs.
I'd also recommend writing unit tests and running your code often to detect these types of "whoopsie" programming mistakes.
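For instance, running pylint over a module like this hypothetical one will typically flag the second assignment with its "attribute defined outside __init__" warning:
class Counter:
    def __init__(self):
        self.count = 0
    def remember(self, value):
        self.total = value  # first defined here, not in __init__ - pylint warns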
A:
Instance variables should be initialized in the class's __init__() method. (In general)
If that's not possible, you can use __dict__ to get a dictionary of all instance variables of an object during runtime. If you really need to track this in documentation, add a list of instance variables you are using into the docstring of the class.
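For example:
class Foo:
    def __init__(self):
        self.bar = 0
        self.baz = 'x'

f = Foo()
print f.__dict__  # {'baz': 'x', 'bar': 0} - every instance attribute, at runtime (order arbitrary)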
A:
It sounds like you're talking about instance variables and not class variables. Note that in the following code a is a class variable and b is an instance variable.
class foo:
a = 0 #class variable
def __init__(self):
self.b = 0 #instance variable
Regarding the hypothetical where you create an unneeded instance variable because the other one was about one thousand lines away: The best solution is to not have classes that are one thousand lines long. If you can't avoid the length, then your class should have a well defined purpose and that will enable you to keep all of the complexities in your head at once.
A:
A documentation generation system such as Epydoc can be used as a reference for what instance/class variables an object has, and if you're worried about accidentally creating new variables via typos you can use PyChecker to check your code for this.
A:
This is a common concern I hear from many programmers who come from a C, C++, or other statically typed language where variables are pre-declared. In fact it was one of the biggest concerns we heard when we were persuading programmers at our organization to abandon C for high-level programs and use Python instead.
In theory, yes you can add instance variables to an object at any time. Yes it can happen from typos, etc. In practice, it rarely results in a bug. When it does, the bugs are generally not hard to find.
As long as your classes are not bloated (1000 lines is pretty huge!) and you have ample unit tests, you should rarely run in to a real problem. In case you do, it's easy to drop to a Python console at almost any time and inspect things as much as you wish.
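One stdlib way to drop into such a console from inside a running script (a sketch - put it wherever you want to poke around):
import code
code.interact(local=locals())  # Ctrl-D (Ctrl-Z on Windows) resumes the script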
A:
It seems to me that the main issue here is that you're thinking in terms of C++ when you're working in python.
Having a 1000 line class is not a very wise thing anyway in python, (I know it happens a lot in C++ though),
Learn to exploit the dynamism that python gives you, for instance you can combine lists and dictionaries in very creative ways and save yourself hundreds of useless lines of code.
For example, if you're mapping strings to functions (for dispatching), you can exploit the fact that functions are first class objects and have a dictionary that goes like:
d = {'command1' : func1, 'command2': func2, 'command3' : func3}
#then somewhere else use this list to dispatch
#given a string `str`
func = d[str]
func() #call the function!
Something like this in C++ would take up sooo many lines of code!
A:
The easiest is to use an IDE. PyDev is a plugin for eclipse.
I'm not a full on expert in all ways pythonic, but in general I define my class members right under the class definition in python, so if I add members, they're all relative.
My personal opinion is that class members should be declared in one section, for this specific reason.
Local scoped variables, otoh, should be defined closest to when they are used (except in C--which I believe still requires variables to be declared at the beginning of a method).
| What's a good way to keep track of class instance variables in Python? | I'm a C++ programmer just starting to learn Python. I'd like to know how you keep track of instance variables in large Python classes. I'm used to having a .h file that gives me a neat list (complete with comments) of all the class' members. But since Python allows you to add new instance variables on the fly, how do you keep track of them all?
I'm picturing a scenario where I mistakenly add a new instance variable when I already had one - but it was 1000 lines away from where I was working. Are there standard practices for avoiding this?
Edit: It appears I created some confusion with the term "member variable." I really mean instance variable, and I've edited my question accordingly.
| [
"I would say, the standard practice to avoid this is to not write classes where you can be 1000 lines away from anything!\nSeriously, that's way too much for just about any useful class, especially in a language that is as expressive as Python. Using more of what the Standard Library offers and abstracting away code into separate modules should help keeping your LOC count down.\nThe largest classes in the standard library have well below 100 lines! \n",
"First of all: class attributes, or instance attributes? Or both? =)\nUsually you just add instance attributes in __init__, and class attributes in the class definition, often before method definitions... which should probably cover 90% of use cases.\nIf code adds attributes on the fly, it probably (hopefully :-) has good reasons for doing so... leveraging dynamic features, introspection, etc. Other than that, adding attributes this way is probably less common than you think.\n",
"pylint can statically detect attributes that aren't detected in __init__, along with many other potential bugs.\nI'd also recommend writing unit tests and running your code often to detect these types of \"whoopsie\" programming mistakes.\n",
"Instance variables should be initialized in the class's __init__() method. (In general)\nIf that's not possible. You can use __dict__ to get a dictionary of all instance variables of an object during runtime. If you really need to track this in documentation add a list of instance variables you are using into the docstring of the class. \n",
"It sounds like you're talking about instance variables and not class variables. Note that in the following code a is a class variable and b is an instance variable.\nclass foo:\n a = 0 #class variable\n\n def __init__(self):\n self.b = 0 #instance variable\n\nRegarding the hypothetical where you create an unneeded instance variable because the other one was about one thousand lines away: The best solution is to not have classes that are one thousand lines long. If you can't avoid the length, then your class should have a well defined purpose and that will enable you to keep all of the complexities in your head at once.\n",
"A documentation generation system such as Epydoc can be used as a reference for what instance/class variables an object has, and if you're worried about accidentally creating new variables via typos you can use PyChecker to check your code for this.\n",
"This is a common concern I hear from many programmers who come from a C, C++, or other statically typed language where variables are pre-declared. In fact it was one of the biggest concerns we heard when we were persuading programmers at our organization to abandon C for high-level programs and use Python instead.\nIn theory, yes you can add instance variables to an object at any time. Yes it can happen from typos, etc. In practice, it rarely results in a bug. When it does, the bugs are generally not hard to find. \nAs long as your classes are not bloated (1000 lines is pretty huge!) and you have ample unit tests, you should rarely run in to a real problem. In case you do, it's easy to drop to a Python console at almost any time and inspect things as much as you wish.\n",
"It seems to me that the main issue here is that you're thinking in terms of C++ when you're working in python.\nHaving a 1000 line class is not a very wise thing anyway in python, (I know it happens alot in C++ though), \nLearn to exploit the dynamism that python gives you, for instance you can combine lists and dictionaries in very creative ways and save your self hundreds of useless lines of code.\nFor example, if you're mapping strings to functions (for dispatching), you can exploit the fact that functions are first class objects and have a dictionary that goes like:\nd = {'command1' : func1, 'command2': func2, 'command3' : func3}\n#then somewhere else use this list to dispatch\n#given a string `str`\nfunc = d[str]\nfunc() #call the function!\n\nSomething like this in C++ would take up sooo many lines of code!\n",
"The easiest is to use an IDE. PyDev is a plugin for eclipse.\nI'm not a full on expert in all ways pythonic, but in general I define my class members right under the class definition in python, so if I add members, they're all relative.\nMy personal opinion is that class members should be declared in one section, for this specific reason.\nLocal scoped variables, otoh, should be defined closest to when they are used (except in C--which I believe still requires variables to be declared at the beginning of a method).\n"
] | [
10,
8,
5,
4,
3,
3,
2,
2,
0
] | [
"Consider using slots.\nFor example:\n\n class Foo:\n __slots__ = \"a b c\".split()\n x = Foo()\n x.a =1 # ok\n x.b =1 # ok\n x.c =1 # ok\n x.bb = 1 # will raise \"AttributeError: Foo instance has no attribute 'bb'\"\n\nIt is generally a concern in any dynamic programming language -- any language that does not require variable declaration -- that a typo in a variable name will create a new variable instead of raise an exception or cause a compile-time error. Slots helps with instance variables, but doesn't help you with, module-scope variables, globals, local variables, etc. There's no silver bullet for this; it's part of the trade-off of not having to declare variables.\n"
] | [
-2
] | [
"python",
"variables"
] | stackoverflow_0000496582_python_variables.txt |
Q:
XML-RPC and Continuum from Python / Perl
Has anyone had any success with getting data via XML-RPC using Python or Perl...?
I'm using the continuum.py library:
#!/usr/bin/env python
from continuum import *
c = Continuum( "http://localhost:8080/continuum/xmlrpc" )
or:
#!/usr/bin/perl
use RPC::XML::Client;
my $url = "http://dev.server.com:8080/continuum/xmlrpc";
my $client = RPC::XML::Client->new($url);
my $res = $client->send_request('system.listMethods');
print " Response class = ".(ref $res)."\n";
print " Response type = ".$res->type."\n";
print " Response string = ".$res->as_string."\n";
print " Response value = ".$res->value."\n";
Gives: No such handler: system.listMethods
Anyone fared any better...?
A:
Yes... with Perl.
I've used XML::RPC. In fact I wrote the CPAN module WWW::FreshMeat::API using it to access Freshmeat's XML-RPC API, so I know it does work well!
Using XML::RPC with Freshmeat the "system.*" calls work for me....
use XML::RPC;
use Data::Dumper;
use feature 'say';  # needed for say() below (Perl 5.10+)
my $fm = XML::RPC->new( 'http://freshmeat.net/xmlrpc/' );
# need to put in your Freshmeat username/password here
my $session = $fm->call( 'login', { username => 'user', password => 'pass' });
my $x = $fm->call('system.listMethods');
say Dumper( $x );
Gives me....
$VAR1 = [
'system.listMethods',
'system.methodHelp',
'system.methodSignature',
'system.describeMethods',
'system.multiCall',
'system.getCapabilities',
'publish_release',
'fetch_branch_list',
'fetch_project_list',
'fetch_available_licenses',
'fetch_available_release_foci',
'fetch_release',
'login',
'logout',
'withdraw_release'
];
Hope that helps.
A:
What you describe is not part of the client-side library-- it's a matter of whether the server implements these methods.
I'm the author of the RPC::XML Perl module, and the server class I provide also includes an implementation of the basic "introspection" API that has become a sort of semi-standard in the XML-RPC arena. But even then, users of the server class may choose not to activate the introspection API.
Of course, I can't speak to other XML-RPC implementations.
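For reference, here is the same introspection call from Python with the standard-library xmlrpclib client; whether it succeeds depends entirely on whether the Continuum server exposes the introspection API (the URL is the one from the question):
import xmlrpclib

server = xmlrpclib.ServerProxy("http://localhost:8080/continuum/xmlrpc")
try:
    print server.system.listMethods()
except xmlrpclib.Fault, fault:
    # "No such handler" comes back as a fault when introspection is disabled
    print "fault:", fault.faultString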
| XML-RPC and Continuum from Python / Perl | Has anyone had any success with getting data via Xml-rpc using Python or Perl...?
I'm using the continuum.py library:
#!/usr/bin/env python
from continuum import *
c = Continuum( "http://localhost:8080/continuum/xmlrpc" )
or:
#!/usr/bin/perl
use Frontier::Client;
my $url = "http://dev.server.com:8080/continuum/xmlrpc";
my $client = RPC::XML::Client->new($url);
my $res = $client->send_request('system.listMethods');
print " Response class = ".(ref $res)."\n";
print " Response type = ".$res->type."\n";
print " Response string = ".$res->as_string."\n";
print " Response value = ".$res->value."\n";
Gives: No such handler: system.listMethods
Anyone fared any better...?
| [
"Yes... with Perl. \nI've used XML::RPC. In fact I wrote the CPAN module WWW::FreshMeat::API using it to access Freshmeats XML-RPC API so I know it does work well!\nUsing XML::RPC with Freshmeat the \"system.*\" calls work for me....\nuse XML::RPC;\nuse Data::Dumper;\n\nmy $fm = XML::RPC->new( 'http://freshmeat.net/xmlrpc/' );\n\n# need to put in your Freshmeat username/password here\nmy $session = $fm->call( 'login', { username => 'user', password => 'pass' });\n\nmy $x = $fm->call('system.listMethods');\n\nsay Dumper( $x );\n\nGives me....\n$VAR1 = [\n 'system.listMethods',\n 'system.methodHelp',\n 'system.methodSignature',\n 'system.describeMethods',\n 'system.multiCall',\n 'system.getCapabilities',\n 'publish_release',\n 'fetch_branch_list',\n 'fetch_project_list',\n 'fetch_available_licenses',\n 'fetch_available_release_foci',\n 'fetch_release',\n 'login',\n 'logout',\n 'withdraw_release'\n ];\n\nHope that helps.\n",
"What you describe is not part of the client-side library-- it's a matter of whether the server implements these methods.\nI'm the author of the RPC::XML Perl module, and in the server class I provide I also provide implementation of the basic \"introspection\" API that has become a sort of semi-standard in the XML-RPC arena. But even then, users of the server class may choose to not have the introspection API activate.\nOf course, I can't speak to other XML-RPC implementations.\n"
] | [
1,
1
] | [] | [] | [
"continuum",
"perl",
"python",
"xml_rpc"
] | stackoverflow_0000462038_continuum_perl_python_xml_rpc.txt |
Q:
Python and Bluetooth/OBEX
Are there any Python libraries that will let me send files with OBEX (OBject EXchange) and that work cross-platform (Windows, OS X, Linux)? I have found Lightblue, which works for Linux and OS X, but not for Windows.
If no such library exists, are there any decent ones that work only on Windows?
A:
PyOBEX might work, but it has only been tested with a Linux Bluetooth stack:
http://pypi.python.org/pypi/PyOBEX/0.10
It would be good to know if it works correctly on Windows and Mac OS X.
A:
PyBluez - Windows
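Note that PyBluez covers device discovery and RFCOMM/L2CAP sockets rather than OBEX itself, so a file-transfer layer would still be needed on top. A minimal discovery sketch:
import bluetooth  # PyBluez

# scan for nearby devices and resolve their friendly names
for addr, name in bluetooth.discover_devices(lookup_names=True):
    print addr, name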
| Python and Bluetooth/OBEX | Is there any Python libraries that will let me send files with OBEX (OBject EXchange) and that works cross-platform (Windows, OS X, Linux)? I have found Lightblue, which works for Linux and OS X, but not for Windows.
If none such lib exists, are there any decent ones that only works in Windows?
| [
"PyOBEX might work, but it has only been tested with a Linux Bluetooth stack:\nhttp://pypi.python.org/pypi/PyOBEX/0.10\nIt would be good to know if it works correctly on Windows and Mac OS X.\n",
"PyBluez - Windows \n"
] | [
2,
1
] | [] | [] | [
"bluetooth",
"python"
] | stackoverflow_0000452018_bluetooth_python.txt |
Q:
Passing JSON strings larger than 80 characters
I'm having a problem passing strings that exceed 80 characters in JSON. When I pass a string that's exactly 80 characters long it works like magic. But once I add the 81st letter it craps out. I've tried looking at the json object in Firebug and it seems to think the string is an array because it has an expander next to it. Clicking the expander though does nothing. I've tried searching online for caps on JSON string sizes and workarounds but am coming up empty :(. Anybody know anything about this?
edit:
It actually doesn't matter what the string is... using "abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz" yields the same results.
Here's my code: (I'm using python)
result = {"test": "abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz"}
self.response.out.write(simplejson.dumps(result))
would you happen to know the class that encodes strings properly for python? Thanks so much :)
A:
What is the 81st character? Sounds like the string isn't properly escaped, making the JSON decoder think it is an array. If you could post the string here, or at least the 20 or so characters around 80, I could probably tell you what is wrong. Also, it would help to know how the JSON string was made. In most languages you can get a class that will make proper JSON strings out of objects and arrays. For example, PHP has json_encode();
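For what it's worth, neither JSON nor simplejson imposes any length cap on strings; a quick sanity check that long values round-trip intact:
import simplejson

result = {"test": "a" * 500}  # well past 80 characters
s = simplejson.dumps(result)
assert simplejson.loads(s)["test"] == result["test"]  # survives the round trip
print len(s)  # grows with the string, no cap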
| Passing JSON strings larger than 80 characters | I'm having a problem passing strings that exceed 80 characters in JSON. When I pass a string that's exactly 80 characters long it works like magic. But once I add the 81st letter it craps out. I've tried looking at the json object in firebug and it seems to think the string is an array because it has an expander next to it. Clicking the expander though does nothing. I've tried searching online for caps on JSON string sizes and work arounds but am coming up empty :(. Anybody know anything about this?
edit:
It actually doesn't matter what the string is... using "abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz" yields the same results.
Here's my code: (I'm using python)
result = {"test": "abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz"}
self.response.out.write(simplejson.dumps(result))
would you happen to know the class that encodes strings properly for python? Thanks so much :)
| [
"What is the 81st character? Sounds like the string isn't properly escaped, making the json decoder think it is an array. If you could post the string here, or at least the 20 or so characters around 80, I could probably tell you what is wrong. Also, if you could tell how the json string was made. In most languages you can get a class that will make proper json strings out of objects and arrays. For example, php has json_encode();\n"
] | [
1
] | [] | [] | [
"json",
"max",
"python",
"size",
"string"
] | stackoverflow_0000499596_json_max_python_size_string.txt |
Q:
How do you create python methods(signature and content) in code?
I've created a method that generates a new class and adds some methods into the class, but there is a strange bug, and I'm not sure what's happening:
def make_image_form(image_fields):
''' Takes a list of image_fields to generate images '''
images = SortedDict()
for image_name in image_fields:
images[image_name] = forms.ImageField(required=False)
new_form = type('ListingImagesForm2', (forms.BaseForm,), {'base_fields' : images})
#now we add the validation methods to the class
for image_name in image_fields:
print "image name is: ", image_name
setattr(new_form, 'clean_' + image_name, lambda self: self._clean_photo(image_name))
#Add the _clean_photo method to the class
setattr(new_form, '_clean_photo', _clean_photo)
return new_form
This is my method. It takes a list of image_fields (I'm making a site in Django), creates a whole bunch of ImageField fields, builds a class ListingImagesForm2, and assigns the image fields to the class.
The problem is in creating the methods, and more specifically the method content.
In the loop:
for image_name in image_fields:
print "image name is: ", image_name
setattr(new_form, 'clean_' + image_name, lambda self: self._clean_photo(image_name))
The method signatures are created correctly (e.g. clean_pic_1, clean_pic_2, ...), but I think there is a problem in the lambda expression, as the _clean_photo method is always called with the same image name (which happens to be the last image name in the image_fields list).
Is there any nicer way to create dynamic method content (code) than using lambda expressions?
And why would my lambda expression only pass _clean_photo the last image_name in the for loop?
A:
Python closures behave like this: a lambda defined inside a loop captures the variable image_name itself, not its value at the moment the lambda is created, so every lambda ends up seeing the loop variable's final value.
Use this instead:
for image_name in image_fields:
print "image name is: ", image_name
setattr(new_form, 'clean_' + image_name,
lambda self, iname=image_name: self._clean_photo(iname))
The usage of default keyword argument makes Python remember it at the time of lambda function creation rather than at the time of its calling (when it would always take the last image).
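Another common way to freeze the loop variable, instead of the default-argument trick, is a small factory function that makes the binding explicit (a sketch reusing the names from the question):
def make_cleaner(name):
    def cleaner(self):
        return self._clean_photo(name)  # `name` is fixed in this closure
    return cleaner

for image_name in image_fields:
    setattr(new_form, 'clean_' + image_name, make_cleaner(image_name))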
| How do you create python methods(signature and content) in code? | I've created a method that generates a new class and adds some methods into the class, but there is a strange bug, and I'm not sure what's happening:
def make_image_form(image_fields):
''' Takes a list of image_fields to generate images '''
images = SortedDict()
for image_name in image_fields:
images[image_name] = forms.ImageField(required=False)
new_form = type('ListingImagesForm2', (forms.BaseForm,), {'base_fields' : images})
#now we add the validation methods to the class
for image_name in image_fields:
print "image name is: ", image_name
setattr(new_form, 'clean_' + image_name, lambda self: self._clean_photo(image_name))
#Add the _clean_photo method to the class
setattr(new_form, '_clean_photo', _clean_photo)
return new_form
This is my method, which takes a list of image_fields (I'm making a site in Django), and it creates a whole bunch of ImageField fields, and creates a class ListingImagesForm2, and assigns the image fields to the class.
The problem is in creating the methods, and more specifically the method content.
In the loop:
for image_name in image_fields:
print "image name is: ", image_name
setattr(new_form, 'clean_' + image_name, lambda self: self._clean_photo(image_name))
The methods signatures are created correctly (e.g. clean_pic_1, clean_pic_2...) , but I think there is a problem in the lambda expression, as the _clean_photo method is always called with the same image name (which happens to be the last image name in the image_fields list).
Is there any nicer way to create dynamic method content(code), than using lambda expressions?
And why would my lambda expression only pass _clean_photo the last image_name in the for loop?
| [
"Python code behaves like this for functions defined in scope of methods.\nUse this instead:\nfor image_name in image_fields:\n print \"image name is: \", image_name\n setattr(new_form, 'clean_' + image_name, \n lambda self, iname=image_name: self._clean_photo(iname))\n\nThe usage of default keyword argument makes Python remember it at the time of lambda function creation rather than at the time of its calling (when it would always take the last image).\n"
] | [
5
] | [] | [] | [
"django",
"dynamic",
"lambda",
"methods",
"python"
] | stackoverflow_0000499964_django_dynamic_lambda_methods_python.txt |
Q:
Python 3.0 `wsgiref` server not functioning
I can't seem to get the wsgiref module to work at all under Python 3.0. It works fine under 2.5 for me, however. Even when I try the example in the docs, it fails. It fails so hard that even if I have a print call above the line where I do "from wsgiref.simple_server import make_server", it never gets printed for some reason. It doesn't throw any errors when run; it just displays a blank page in the browser and doesn't log any sort of request.
Does anybody know what the problem may be? Thanks!
A:
Issue 4718: wsgiref package totally broken. Sorry about that.
A:
You're in uncharted territory with WSGI on Python 3.0 I'm afraid.
WEB-SIG knew long ago that wsgiref was broken going into 3.0, but chose to do nothing about it. The spec hasn't been updated to cope with 3.0; pushing WSGI forwards even for the things everyone pretty much agrees on is just agonisingly slow. It's depressing and senseless.
So yeah, it's easy to fix the obvious error with header unpacking in simple_server, but you'll still be running on a server that has been converted from Python 2 to 3 automatically and not really tested, with no de jure standard to say exactly what it should do... never mind framework compatibility.
Python 3.0 for web scripting: needs some work.
| Python 3.0 `wsgiref` server not functioning | I can't seem to get the wsgiref module to work at all under Python 3.0. It works fine under 2.5 for me, however. Even when I try the example in the docs, it fails. It fails so hard that even if I have a print function above where I do: "from wsgiref.simple_server import make_server", it never gets printed for some reason. It doesn't thow any errors when run, and it just displays a blank page in the browser and doesn't log any sort of request.
Does anybody know what the problem may be? Thanks!
| [
"issue 4718:wsgiref package totally broken. sorry about that.\n",
"You're in uncharted territory with WSGI on Python 3.0 I'm afraid.\nWEB-SIG knew long ago that wsgiref was broken going into 3.0, but chose to do nothing about it. The spec hasn't been updated to cope with 3.0; pushing WSGI forwards even for the things everyone pretty-much agrees on is just agonisingly slow. It's depressing and senseless.\nSo yeah, it's easy to fix the obvious error with header unpacking in simple_server, but you'll still be running on a server that has been converted from Python 2-to-3 automatically and not really tested, with no de-jure standard to say exactly what it should do... never mind framework compatibility.\nPython 3.0 for web scripting: needs some work.\n"
] | [
2,
0
] | [] | [] | [
"python",
"python_3.x",
"wsgi",
"wsgiref"
] | stackoverflow_0000497704_python_python_3.x_wsgi_wsgiref.txt |
Q:
excluding characters in \S regex match
I have the following regex expression to match html links:
<a\s*href=['|"](http:\/\/(.*?)\S['|"]>
it kind of works. Except not really. Because it grabs everything after the < a href...
and just keeps going. I want to exclude the quote characters from that last \S match. Is there any way of doing that?
EDIT: This would make it grab only up to the quotes instead of everything after the < a href btw
A:
I don't think your regex is doing what you want.
<a\s*href=['|"](http:\/\/(.*?)\S['|"]>
This captures anything non-greedily from http:// up to the first non-space character before a quote, single quote, or pipe. For that matter, I'm not sure how it parses, as it doesn't seem to have enough close parens.
If you are trying to capture the href, you might try something like this:
<a .*?href=['"](http:\/\/.*?)['"].*?>
This uses the .*? (non-greedy match anything) to allow for other attributes (target, title, etc.). It matches an href that begins and ends with either a single or double quote (it does not distinguish, and allows the href to open with one and close with the other).
A:
\S matches any character that is not a whitespace character, just like [^\s]
Written like that, you can easily exclude quotes: [^\s"']
Note that you'll likely have to give the .*? in your regex the same treatment. The dot matches any character that is not a newline, just like [^\r\n]
Again, written like that, you can easily exclude quotes: [^\r\n'"]
A:
>>> import re
>>> regex = '<a\s+href=["\'](http://(.*?))["\']>'
>>> string = '<a href="http://google.com/test/this">'
>>> match = re.search(regex, string)
>>> match.group(1)
'http://google.com/test/this'
>>> match.group(2)
'google.com/test/this'
explanations:
 \s+ = match at least one whitespace character ("<ahref" would be a bad link)
 ["\'] = character class; | has no special meaning within square brackets
 (it would match a literal pipe "|")
A:
Why are you trying to match HTML links with a regex?
Depending on what you're trying to do the appropriate thing to do would vary.
You could try using an HTML Parser. There are several available, there's even one in the Python Library: https://docs.python.org/library/htmlparser.html
Hope this helps!
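A minimal sketch of that approach with the Python 2 standard-library parser, collecting http links from a snippet:
from HTMLParser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and value and value.startswith('http://'):
                    self.links.append(value)

parser = LinkExtractor()
parser.feed('<a href="http://google.com/test/this">link</a>')
print parser.links  # ['http://google.com/test/this']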
A:
Read Jeff Friedl's "Mastering Regular Expressions" book.
As written:
<a\s*href=['|"](http:\/\/(.*?)\S['|"]>
You have unbalanced parentheses in the expression. Maybe the trouble is that the first match is being treated as "read to end of regex". Also, why would you not want the last non-space character of the URL?
The .*? (lazy greedy) operator is interesting. I must say, though, that I'd be more inclined to write:
<a\s+href=['|"]http://([^'"><]+)\1>
This distinguishes between "<ahref" (a non-existent HTML tag) and "<a href" (a valid HTML tag). It doesn't capture the 'http://' prefix. I'm not certain whether you have to escape the slashes -- in Perl, where I mainly work, I wouldn't need to. The capturing part uses the greedy match, but only on characters that might semi-legitimately appear in the URL. Specifically, it excludes both quotes and the end-tag (and, for good measure, the begin-tag too). If you really want the 'http://' prefix, shift the capturing parenthesis appropriately.
A:
I ran into an issue with single quotes in some URLs, such as this one from Fox Sports. I made a slight adjustment that I think should take care of it.
http://msn.foxsports.com/mlb/story/9152594/Fehr:'Heightened'-concern-about-free-agent-market
/<a\s+href\s*=\s*["'](http://.*?)["'][>\s]/i
this requires that the closing quote be followed by a space or closing bracket.
| excluding characters in \S regex match | I have the following regex expression to match html links:
<a\s*href=['|"](http:\/\/(.*?)\S['|"]>
it kind of works. Except not really. Because it grabs everything after the < a href...
and just keeps going. I want to exclude the quote characters from that last \S match. Is there any way of doing that?
EDIT: This would make it grab only up to the quotes instead of everything after the < a href btw
| [
"I don't think your regex is doing what you want.\n<a\\s*href=['|\"](http:\\/\\/(.*?)\\S['|\"]>\n\nThis captures anything non-greedily from http:// up to the first non-space character before a quote, single quote, or pipe. For that matter, I'm not sure how it parses, as it doesn't seem to have enough close parens.\nIf you are trying to capture the href, you might try something like this:\n<a .*?+href=['\"](http:\\/\\/.*?)['\"].*?>\n\nThis uses the .*? (non-greedy match anything) to allow for other attributes (target, title, etc.). It matches an href that begins and ends with either a single or double quote (it does not distinguish, and allows the href to open with one and close with the other).\n",
"\\S matches any character that is not a whitespace character, just like [^\\s]\nWritten like that, you can easily exclude quotes: [^\\s\"']\nNote that you'll likely have to give the .*? in your regex the same treatment. The dot matches any character that is not a newline, just like [^\\r\\n]\nAgain, written like that, you can easily exclude quotes: [^\\r\\n'\"]\n",
">>> import re\n>>> regex = '<a\\s+href=[\"\\'](http://(.*?))[\"\\']>'\n>>> string = '<a href=\"http://google.com/test/this\">'\n>>> match = re.search(regex, string)\n>>> match.group(1)\n'http://google.com/test/this'\n>>> match.group(2)\n'google.com/test/this'\n\nexplanations:\n \\s+ = match at least one white space (<ahref) is a bad link\n [\"\\'] = character class, | has no meaning within square brackets\n (it will match a literal pipe \"|\")\n\n",
"Why are you trying to match HTML links with a regex?\nDepending on what you're trying to do the appropriate thing to do would vary.\nYou could try using an HTML Parser. There are several available, there's even one in the Python Library: https://docs.python.org/library/htmlparser.html\nHope this helps!\n",
"Read Jeff Friedl's \"Mastering Regular Expressions\" book.\nAs written:\n<a\\s*href=['|\"](http:\\/\\/(.*?)\\S['|\"]>\n\nYou have unbalanced parentheses in the expression. Maybe the trouble is that the first match is being treated as \"read to end of regex\". Also, why would you not want the last non-space character of the URL?\nThe .*? (lazy greedy) operator is interesting. I must say, though, that I'd be more inclined to write:\n<a\\s+href=['|\"]http://([^'\"><]+)\\1>\n\nThis distinguishes between \"<ahref\" (a non-existent HTML tag) and \"<a href\" (a valid HTML tag). It doesn't capture the 'http://' prefix. I'm not certain whether you have to escape the slashes -- in Perl, where I mainly work, I wouldn't need to. The capturing part uses the greedy match, but only on characters that might semi-legitimately appear in the URL. Specifically, it excludes both quotes and the end-tag (and, for good measure, the begin-tag too). If you really want the 'http://' prefix, shift the capturing parenthesis appropriately.\n",
"I ran into on issue with single quotes in some urls such as this one from Fox Sports. I made a slight adjustment that I think should take care of it.\nhttp://msn.foxsports.com/mlb/story/9152594/Fehr:'Heightened'-concern-about-free-agent-market\n/<a\\s+href\\s*=\\s*[\"'](http://.*?)[\"'][>\\s]/i\nthis requires that the closing quote be followed by a space or closing bracket.\n"
] | [
4,
3,
1,
0,
0,
0
] | [] | [] | [
"html",
"python",
"regex"
] | stackoverflow_0000292167_html_python_regex.txt |
Q:
Can I log into a web application automatically using a users windows logon?
On the intranet at my part-time job (not IT related) there are various web applications that we use that do not require logging in explicitly. We are required to log in to Windows obviously, and that then authenticates us somehow.
I'm wondering how this is done? Without worrying about security TOO much, how would I go about authenticating a user to a web application, utilizing the windows login information? I'd be using Python (and Django).
Are there limitations on how this can be achieved? For instance, would a specific browser be required? Would the application and intranet backend have to be hosted at the same location or at least have to communicate? Or is it simply getting the users Windows credentials, and passing that to the authentication software of the web application?
A:
Once upon a time Internet Explorer supported NTLM authentication (similar to Basic Auth, but it sent cached credentials to the server, which could be verified with the domain controller). It was used to enable single sign-on within an intranet where everyone was expected to be logged into the domain. I don't recall the details of it and I haven't used it for ages. It may still be an option if it fits your needs.
Maybe someone more familiar with it may have more details.
See: NTLM Authentication Scheme for HTTP
The tricky part of using non-microsoft server framework is going to be talking with the necessary services to verify the credentials.
A:
From here:
-- Added to settings.py --
### ACTIVE DIRECTORY SETTINGS
# AD_DNS_NAME should be set to the AD DNS name of the domain (i.e. example.com)
# If you are not using the AD server as your DNS, it can also be set to
# FQDN or IP of the AD server.
AD_DNS_NAME = 'example.com'
AD_LDAP_PORT = 389
AD_SEARCH_DN = 'CN=Users,dc=example,dc=com'
# This is the NT4/Samba domain name
AD_NT4_DOMAIN = 'EXAMPLE'
AD_SEARCH_FIELDS = ['mail','givenName','sn','sAMAccountName']
AD_LDAP_URL = 'ldap://%s:%s' % (AD_DNS_NAME,AD_LDAP_PORT)
-- In the auth.py file --
from django.contrib.auth.models import User
from django.conf import settings
import ldap
class ActiveDirectoryBackend:
def authenticate(self,username=None,password=None):
if not self.is_valid(username,password):
return None
try:
user = User.objects.get(username=username)
except User.DoesNotExist:
l = ldap.initialize(settings.AD_LDAP_URL)
l.simple_bind_s(username,password)
result = l.search_ext_s(settings.AD_SEARCH_DN,ldap.SCOPE_SUBTREE,
"sAMAccountName=%s" % username,settings.AD_SEARCH_FIELDS)[0][1]
l.unbind_s()
# givenName == First Name
if result.has_key('givenName'):
first_name = result['givenName'][0]
else:
first_name = None
# sn == Last Name (Surname)
if result.has_key('sn'):
last_name = result['sn'][0]
else:
last_name = None
# mail == Email Address
if result.has_key('mail'):
email = result['mail'][0]
else:
email = None
user = User(username=username,first_name=first_name,last_name=last_name,email=email)
user.is_staff = False
user.is_superuser = False
user.set_password(password)
user.save()
return user
def get_user(self,user_id):
try:
return User.objects.get(pk=user_id)
except User.DoesNotExist:
return None
def is_valid (self,username=None,password=None):
## Disallowing null or blank string as password
## as per comment: http://www.djangosnippets.org/snippets/501/#c868
if password == None or password == '':
return False
binddn = "%s@%s" % (username,settings.AD_NT4_DOMAIN)
try:
l = ldap.initialize(settings.AD_LDAP_URL)
l.simple_bind_s(binddn,password)
l.unbind_s()
return True
except ldap.LDAPError:
return False
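For completeness, the backend still has to be registered in settings.py before Django will consult it; the module path below is an assumption about where this auth.py lives:
AUTHENTICATION_BACKENDS = (
    'myproject.auth.ActiveDirectoryBackend',  # hypothetical path to the class above
    'django.contrib.auth.backends.ModelBackend',  # keep the default as a fallback
)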
A:
To the best of my knowledge the only browser that automatically passes your login credentials is Internet Explorer. To enable this feature select "Enable Integrated Windows Authentication" in the advanced Internet options dialog under the security section. This is usually enabled by default.
The web server will have to have the Anonymous user permission removed from the web application and enable windows authentication option checked. Simply add the users you want to have access to the web application to the file/folder permissions.
I have only tried this with IIS so I'm not sure if it will work on other web servers.
| Can I log into a web application automatically using a users windows logon? | On the intranet at my part time job (not IT related) there are various web applications that we use that do not require logging in explicitly. We are required to login to Windows obviously, and that then authenticates us some how.
I'm wondering how this is done? Without worrying about security TOO much, how would I go about authenticating a user to a web application, utilizing the windows login information? I'd be using Python (and Django).
Are there limitations on how this can be achieved? For instance, would a specific browser be required? Would the application and intranet backend have to be hosted at the same location or at least have to communicate? Or is it simply getting the users Windows credentials, and passing that to the authentication software of the web application?
| [
"Once upon a time Internet Explorer supported NTLM authentication (similar to Basic Auth but it sent cached credentials to the server which could be verified with the domain controller). It was used to enable single-signon within an intranet where everyone was expected to be logged into the domain. I don't recall the details of it and I haven't used it for ages. It may still be an option if it fits your needs. \nMaybe someone more familiar with it may have more details.\nSee: NTLM Authentication Scheme for HTTP\nThe tricky part of using non-microsoft server framework is going to be talking with the necessary services to verify the credentials.\n",
"From here:\n-- Added to settings.py --\n\n### ACTIVE DIRECTORY SETTINGS\n\n# AD_DNS_NAME should set to the AD DNS name of the domain (ie; example.com) \n# If you are not using the AD server as your DNS, it can also be set to \n# FQDN or IP of the AD server.\n\nAD_DNS_NAME = 'example.com'\nAD_LDAP_PORT = 389\n\nAD_SEARCH_DN = 'CN=Users,dc=example,dc=com'\n\n# This is the NT4/Samba domain name\nAD_NT4_DOMAIN = 'EXAMPLE'\n\nAD_SEARCH_FIELDS = ['mail','givenName','sn','sAMAccountName']\n\nAD_LDAP_URL = 'ldap://%s:%s' % (AD_DNS_NAME,AD_LDAP_PORT)\n\n\n-- In the auth.py file --\n\nfrom django.contrib.auth.models import User\nfrom django.conf import settings\nimport ldap\n\nclass ActiveDirectoryBackend:\n\n def authenticate(self,username=None,password=None):\n if not self.is_valid(username,password):\n return None\n try:\n user = User.objects.get(username=username)\n except User.DoesNotExist:\n l = ldap.initialize(settings.AD_LDAP_URL)\n l.simple_bind_s(username,password)\n result = l.search_ext_s(settings.AD_SEARCH_DN,ldap.SCOPE_SUBTREE, \n \"sAMAccountName=%s\" % username,settings.AD_SEARCH_FIELDS)[0][1]\n l.unbind_s()\n\n # givenName == First Name\n if result.has_key('givenName'):\n first_name = result['givenName'][0]\n else:\n first_name = None\n\n # sn == Last Name (Surname)\n if result.has_key('sn'):\n last_name = result['sn'][0]\n else:\n last_name = None\n\n # mail == Email Address\n if result.has_key('mail'):\n email = result['mail'][0]\n else:\n email = None\n\n user = User(username=username,first_name=first_name,last_name=last_name,email=email)\n user.is_staff = False\n user.is_superuser = False\n user.set_password(password)\n user.save()\n return user\n\n def get_user(self,user_id):\n try:\n return User.objects.get(pk=user_id)\n except User.DoesNotExist:\n return None\n\n def is_valid (self,username=None,password=None):\n ## Disallowing null or blank string as password\n ## as per comment: http://www.djangosnippets.org/snippets/501/#c868\n if password == None or password == '':\n return False\n binddn = \"%s@%s\" % (username,settings.AD_NT4_DOMAIN)\n try:\n l = ldap.initialize(settings.AD_LDAP_URL)\n l.simple_bind_s(binddn,password)\n l.unbind_s()\n return True\n except ldap.LDAPError:\n return False\n\n",
"To the best of my knowledge the only browser that automatically passes your login credentials is Internet Explorer. To enable this feature select \"Enable Integrated Windows Authentication\" in the advanced Internet options dialog under the security section. This is usually enabled by default. \nThe web server will have to have the Anonymous user permission removed from the web application and enable windows authentication option checked. Simply add the users you want to have access to the web application to the file/folder permissions. \nI have only tried this with IIS so I'm not sure if it will work on other web servers.\n"
] | [
2,
1,
0
] | [] | [] | [
"authentication",
"python",
"web_applications",
"windows_authentication"
] | stackoverflow_0000500134_authentication_python_web_applications_windows_authentication.txt |
Q:
Partial evaluation for parsing
I'm working on a macro system for Python (as discussed here) and one of the things I've been considering are units of measure. Although units of measure could be implemented without macros or via static macros (e.g. defining all your units ahead of time), I'm toying around with the idea of allowing syntax to be extended dynamically at runtime.
To do this, I'm considering using a sort of partial evaluation on the code at compile-time. If parsing fails for a given expression, due to a macro for its syntax not being available, the compiler halts evaluation of the function/block and generates the code it already has with a stub where the unknown expression is. When this stub is hit at runtime, the function is recompiled against the current macro set. If this compilation fails, a parse error would be thrown because execution can't continue. If the compilation succeeds, the new function replaces the old one and execution continues.
The biggest issue I see is that you can't find parse errors until the affected code is run. However, this wouldn't affect many cases, e.g. group operators like [], {}, (), and `` still need to be paired (requirement of my tokenizer/list parser), and top-level syntax like classes and functions wouldn't be affected since their "runtime" is really load time, where the syntax is evaluated and their objects are generated.
Aside from the implementation difficulty and the problem I described above, what problems are there with this idea?
A:
Here are a few possible problems:
You may find it difficult to provide the user with helpful error messages in case of a problem. This seems likely, as any compilation-time syntax error could be just a syntax extension.
Performance hit.
I was trying to find some discussion of the pluses, minuses, and/or implementation of dynamic parsing in Perl 6, but I couldn't find anything appropriate. However, you may find this quote from Niklaus Wirth (designer of Pascal and other languages) interesting:
The phantasies of computer scientists
in the 1960s knew no bounds. Spurred
by the success of automatic syntax
analysis and parser generation, some
proposed the idea of the flexible, or
at least extensible language. The
notion was that a program would be
preceded by syntactic rules which
would then guide the general parser
while parsing the subsequent program.
A step further: The syntax rules would
not only precede the program, but they
could be interspersed anywhere
throughout the text. For example, if
someone wished to use a particularly
fancy private form of for statement,
he could do so elegantly, even
specifying different variants for the
same concept in different sections of
the same program. The concept that
languages serve to communicate between
humans had been completely blended
out, as apparently everyone could now
define his own language on the fly.
The high hopes, however, were soon
damped by the difficulties encountered
when trying to specify, what these
private constructions should mean. As
a consequence, the intriguing idea of
extensible languages faded away rather
quickly.
Edit: Here's Perl 6's Synopsis 6: Subroutines, unfortunately in markup form because I couldn't find an updated, formatted version; search within for "macro". Unfortunately, it's not too interesting, but you may find some things relevant, like Perl 6's one-pass parsing rule, or its syntax for abstract syntax trees. The approach Perl 6 takes is that a macro is a function that executes immediately after its arguments are parsed and returns either an AST or a string; Perl 6 continues parsing as if the source actually contained the return value. There is mention of generation of error messages, but they make it seem like if macros return ASTs, you can do alright.
A:
Pushing this one step further, you could do "lazy" parsing and always only parse enough to evaluate the next statement. Like some kind of just-in-time parser. Then syntax errors could become normal runtime errors that just raise a normal Exception that could be handled by surrounding code:
def fun():
not implemented yet
try:
fun()
except:
pass
That would be an interesting effect, but if it's useful or desirable is a different question. Generally it's good to know about errors even if you don't call the code at the moment.
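Python already gives a taste of this with deferred compilation; for example, compiling a source string only at the moment you are about to run it turns the parse error into a catchable exception:
source = "def fun():\n    not implemented yet\n"
try:
    code = compile(source, "<dynamic>", "exec")  # parsing happens here, at runtime
except SyntaxError, e:
    print "caught as a normal exception:", e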
Macros would not be evaluated until control reaches them and naturally the parser would already know all previous definitions. Also the macro definition could maybe even use variables and data that the program has calculated so far (like adding some syntax for all elements in a previously calculated list). But this is probably a bad idea to start writing self-modifying programs for things that could usually be done as well directly in the language. This could get confusing...
In any case you should make sure to parse code only once, and if it is executed a second time use the already parsed expression, so that it doesn't lead to performance problems.
A:
Here are some ideas from my master's thesis, which may or may not be helpful.
The thesis was about robust parsing of natural language.
The main idea: given a context-free grammar for a language, try to parse a given
text (or, in your case, a python program). If parsing failed, you will have a partially generated parse tree. Use the tree structure to suggest new grammar rules that will better cover the parsed text.
I could send you my thesis, but unless you read Hebrew this will probably not be useful.
In a nutshell:
I used a bottom-up chart parser. This type of parser generates edges for productions from the grammar. Each edge is marked with the part of the tree that was consumed. Each edge gets a score according to how close it was to full coverage, for example:
S -> NP . VP
Has a score of one half (We succeeded in covering the NP but not the VP).
The highest-scored edges suggest a new rule (such as X->NP).
In general, a chart parser is less efficient than a common LALR or LL parser (the types usually used for programming languages) - O(n^3) instead of O(n) complexity, but then again you are trying something more complicated than just parsing an existing language.
If you can do something with the idea, I can send you further details.
I believe looking at natural language parsers may give you some other ideas.
A:
Another thing I've considered is making this the default behavior across the board, but allowing languages (meaning a set of macros to parse a given language) to throw a parse error at compile-time. Python 2.5 in my system, for example, would do this.
Instead of the stub idea, simply recompile functions that couldn't be handled completely at compile-time when they're executed. This will also make self-modifying code easier, as you can modify the code and recompile it at runtime.
A:
You'll probably need to delimit the bits of input text with unknown syntax, so that the rest of the syntax tree can be resolved, apart from some character sequences nodes which will be expanded later. Depending on your top level syntax, that may be fine.
You may find that the parsing algorithm and the lexer and the interface between them all need updating, which might rule out most compiler creation tools.
(The more usual approach is to use string constants for this purpose, which can be parsed to a little interpreter at run time).
A:
I don't think your approach would work very well. Let's take a simple example written in pseudo-code:
define some syntax M1 with definition D1
if _whatever_:
define M1 to do D2
else:
define M1 to do D3
code that uses M1
So there is one example where, if you allow syntax redefinition at runtime, you have a problem (since by your approach the code that uses M1 would be compiled against definition D1). Note that verifying whether syntax redefinition occurs is undecidable in general. An over-approximation could be computed by some kind of typing system or some other kind of static analysis, but Python is not well known for this :D.
Another thing that bothers me is that your solution does not 'feel' right. I find it evil to store source code you can't parse just because you may be able to parse it at runtime.
Another example that jumps to mind is this:
...function definition fun1 that calls fun2...
define M1 (at runtime)
use M1
...function definition for fun2
Technically, when you use M1, you cannot parse it, so you need to keep the rest of the program (including the function definition of fun2) in source code. When you run the entire program, you'll hit a call to fun2 that cannot be resolved at that point, even though fun2 is defined later.
| Partial evaluation for parsing | I'm working on a macro system for Python (as discussed here) and one of the things I've been considering are units of measure. Although units of measure could be implemented without macros or via static macros (e.g. defining all your units ahead of time), I'm toying around with the idea of allowing syntax to be extended dynamically at runtime.
To do this, I'm considering using a sort of partial evaluation on the code at compile-time. If parsing fails for a given expression, due to a macro for its syntax not being available, the compiler halts evaluation of the function/block and generates the code it already has with a stub where the unknown expression is. When this stub is hit at runtime, the function is recompiled against the current macro set. If this compilation fails, a parse error would be thrown because execution can't continue. If the compilation succeeds, the new function replaces the old one and execution continues.
The biggest issue I see is that you can't find parse errors until the affected code is run. However, this wouldn't affect many cases, e.g. group operators like [], {}, (), and `` still need to be paired (requirement of my tokenizer/list parser), and top-level syntax like classes and functions wouldn't be affected since their "runtime" is really load time, where the syntax is evaluated and their objects are generated.
Aside from the implementation difficulty and the problem I described above, what problems are there with this idea?
| [
"Here are a few possible problems:\n\nYou may find it difficult to provide the user with helpful error messages in case of a problem. This seems likely, as any compilation-time syntax error could be just a syntax extension.\nPerformance hit.\n\nI was trying to find some discussion of the pluses, minuses, and/or implementation of dynamic parsing in Perl 6, but I couldn't find anything appropriate. However, you may find this quote from Nicklaus Wirth (designer of Pascal and other languages) interesting:\n\nThe phantasies of computer scientists\n in the 1960s knew no bounds. Spurned\n by the success of automatic syntax\n analysis and parser generation, some\n proposed the idea of the flexible, or\n at least extensible language. The\n notion was that a program would be\n preceded by syntactic rules which\n would then guide the general parser\n while parsing the subsequent program.\n A step further: The syntax rules would\n not only precede the program, but they\n could be interspersed anywhere\n throughout the text. For example, if\n someone wished to use a particularly\n fancy private form of for statement,\n he could do so elegantly, even\n specifying different variants for the\n same concept in different sections of\n the same program. The concept that\n languages serve to communicate between\n humans had been completely blended\n out, as apparently everyone could now\n define his own language on the fly.\n The high hopes, however, were soon\n damped by the difficulties encountered\n when trying to specify, what these\n private constructions should mean. As\n a consequence, the intreaguing idea of\n extensible languages faded away rather\n quickly.\n\nEdit: Here's Perl 6's Synopsis 6: Subroutines, unfortunately in markup form because I couldn't find an updated, formatted version; search within for \"macro\". Unfortunately, it's not too interesting, but you may find some things relevant, like Perl 6's one-pass parsing rule, or its syntax for abstract syntax trees. The approach Perl 6 takes is that a macro is a function that executes immediately after its arguments are parsed and returns either an AST or a string; Perl 6 continues parsing as if the source actually contained the return value. There is mention of generation of error messages, but they make it seem like if macros return ASTs, you can do alright.\n",
"Pushing this one step further, you could do \"lazy\" parsing and always only parse enough to evaluate the next statement. Like some kind of just-in-time parser. Then syntax errors could become normal runtime errors that just raise a normal Exception that could be handled by surrounding code:\ndef fun():\n not implemented yet\n\ntry:\n fun()\nexcept:\n pass\n\nThat would be an interesting effect, but if it's useful or desirable is a different question. Generally it's good to know about errors even if you don't call the code at the moment.\nMacros would not be evaluated until control reaches them and naturally the parser would already know all previous definitions. Also the macro definition could maybe even use variables and data that the program has calculated so far (like adding some syntax for all elements in a previously calculated list). But this is probably a bad idea to start writing self-modifying programs for things that could usually be done as well directly in the language. This could get confusing...\nIn any case you should make sure to parse code only once, and if it is executed a second time use the already parsed expression, so that it doesn't lead to performance problems.\n",
"Here are some ideas from my master's thesis, which may or may not be helpful.\nThe thesis was about robust parsing of natural language.\nThe main idea: given a context-free grammar for a language, try to parse a given \ntext (or, in your case, a python program). If parsing failed, you will have a partially generated parse tree. Use the tree structure to suggest new grammar rules that will better cover the parsed text.\nI could send you my thesis, but unless you read Hebrew this will probably not be useful.\nIn a nutshell:\nI used a bottom-up chart parser. This type of parser generates edges for productions from the grammar. Each edge is marked with the part of the tree that was consumed. Each edge gets a score according to how close it was to full coverage, for example: \nS -> NP . VP\n\nHas a score of one half (We succeeded in covering the NP but not the VP).\nThe highest-scored edges suggest a new rule (such as X->NP).\nIn general, a chart parser is less efficient than a common LALR or LL parser (the types usually used for programming languages) - O(n^3) instead of O(n) complexity, but then again you are trying something more complicated than just parsing an existing language.\nIf you can do something with the idea, I can send you further details.\nI believe looking at natural language parsers may give you some other ideas.\n",
"Another thing I've considered is making this the default behavior across the board, but allow languages (meaning a set of macros to parse a given language) to throw a parse error at compile-time. Python 2.5 in my system, for example, would do this.\nInstead of the stub idea, simply recompile functions that couldn't be handled completely at compile-time when they're executed. This will also make self-modifying code easier, as you can modify the code and recompile it at runtime.\n",
"You'll probably need to delimit the bits of input text with unknown syntax, so that the rest of the syntax tree can be resolved, apart from some character sequences nodes which will be expanded later. Depending on your top level syntax, that may be fine. \nYou may find that the parsing algorithm and the lexer and the interface between them all need updating, which might rule out most compiler creation tools.\n(The more usual approach is to use string constants for this purpose, which can be parsed to a little interpreter at run time).\n",
"I don't think your approach would work very well. Let's take a simple example written in pseudo-code:\ndefine some syntax M1 with definition D1\nif _whatever_:\n define M1 to do D2\nelse:\n define M1 to do D3\ncode that uses M1\n\nSo there is one example where, if you allow syntax redefinition at runtime, you have a problem (since by your approach the code that uses M1 would be compiled by definition D1). Note that verifying if syntax redefinition occurs is undecidable. An over-approximation could be computed by some kind of typing system or some other kind of static analysis, but Python is not well known for this :D.\nAnother thing that bothers me is that your solution does not 'feel' right. I find it evil to store source code you can't parse just because you may be able to parse it at runtime.\nAnother example that jumps to mind is this:\n...function definition fun1 that calls fun2...\ndefine M1 (at runtime)\nuse M1\n...function definition for fun2\n\nTechnically, when you use M1, you cannot parse it, so you need to keep the rest of the program (including the function definition of fun2) in source code. When you run the entire program, you'll see a call to fun2 that you cannot call, even if it's defined.\n"
] | [
3,
2,
2,
1,
0,
0
] | [] | [] | [
"language_design",
"macros",
"parsing",
"python"
] | stackoverflow_0000474275_language_design_macros_parsing_python.txt |
Q:
Python's version of PHP's time() function
I've looked at the Python Time module and can't find anything that gives the integer of how many seconds since 1970 as PHP does with time().
Am I missing something here, or is there a common way to do this that's simply not listed there?
A:
import time
print int(time.time())
A:
time.time() does it, but it returns a float rather than the int I assume you expect; that is, precision can be higher than 1 second on some systems.
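A quick illustration of the difference (the printed values are only illustrative):
import time

t = time.time()
print t       # e.g. 1232010000.123456 -- a float with sub-second precision
print int(t)  # 1232010000 -- the PHP time() equivalent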
A:
I recommend reading "Date and Time Representation in Python". I found it very enlightening.
| Python's version of PHP's time() function | I've looked at the Python Time module and can't find anything that gives the integer of how many seconds since 1970 as PHP does with time().
Am I simply missing something here or is there a common way to do this that's simply not listed there?
| [
"import time\nprint int(time.time())\n\n",
"time.time() does it, but it might be float instead of int which i assume you expect. that is, precision can be higher than 1 sec on some systems.\n",
"I recommend reading \"Date and Time Representation in Python\". I found it very enlightening.\n"
] | [
23,
6,
3
] | [] | [] | [
"python",
"time"
] | stackoverflow_0000495595_python_time.txt |
Q:
In Python - how to execute system command with no output
Is there a built-in method in Python to execute a system command without displaying the output? I only want to grab the return value.
It is important that it be cross-platform, so just redirecting the output to /dev/null won't work on Windows, and the other way around. I know I can just check os.platform and build the redirection myself, but I'm hoping for a built-in solution.
A:
import os
import subprocess

# os.devnull is the platform-appropriate null device ('nul' on Windows,
# '/dev/null' elsewhere), so this redirection stays cross-platform
subprocess.call(["ls", "-l"], stdout=open(os.devnull, "w"), stderr=subprocess.STDOUT)
A:
You can redirect the output into a temp file and delete it afterward. But there's also subprocess.Popen, which pipes the output directly into your program so it never appears on screen.
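A sketch of that approach with subprocess.Popen from the standard library:
import subprocess

# connect the child's stdout/stderr to pipes instead of the console
p = subprocess.Popen(["ls", "-l"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()  # drain the pipes so the child can finish
print p.returncode          # the value you actually want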
| In Python - how to execute system command with no output | Is there a built-in method in Python to execute a system command without displaying the output? I only want to grab the return value.
It is important that it be cross-platform, so just redirecting the output to /dev/null won't work on Windows, and the other way around. I know I can just check os.platform and build the redirection myself, but I'm hoping for a built-in solution.
| [
"import os\nimport subprocess\nsubprocess.call([\"ls\", \"-l\"], stdout=open(os.devnull, \"w\"), stderr=subprocess.STDOUT)\n\n",
"You can redirect output into temp file and delete it afterward. But there's also a method called popen that redirects output directly to your program so it won't go on screen.\n"
] | [
25,
2
] | [] | [] | [
"python"
] | stackoverflow_0000500477_python.txt |
Q:
Python/Twisted multiuser server - what is more efficient?
In Python, if I want my server to scale well CPU-wise, I obviously need to spawn multiple processes. I was wondering which is better (using Twisted):
A) The manager process (the one who holds the actual socket connections) puts received packets into a shared queue (the one from the multiprocessing module), and worker processes pull the packets out of the queue, process them and send the results back to the client.
B) The manager process (the one who holds the actual socket connections) launches a deferred thread and then calls the apply() function on the process pool. Once the result returns from the worker process, the manager sends the result back to the client.
In both implementations, the worker processes use thread pools so they can work on more than one packet at once (since there will be a lot of database querying).
A:
I think that B is problematic. The thread would only run on one CPU, and even if it runs a process, the thread is still running. A may be better.
It is best to try and measure both in terms of time and see which one is faster and which one scales well. However, I'll reiterate that I highly doubt that B will scale well.
A:
I think that "A" is the answer you want, but you don't have to do it yourself.
Have you considered ampoule?
| Python/Twisted multiuser server - what is more efficient? | In Python, if I want my server to scale well CPU-wise, I obviously need to spawn multiple processes. I was wondering which is better (using Twisted):
A) The manager process (the one who holds the actual socket connections) puts received packets into a shared queue (the one from the multiprocessing module), and worker processes pull the packets out of the queue, process them and send the results back to the client.
B) The manager process (the one who holds the actual socket connections) launches a deferred thread and then calls the apply() function on the process pool. Once the result returns from the worker process, the manager sends the result back to the client.
In both implementations, the worker processes use thread pools so they can work on more than one packet at once (since there will be a lot of database querying).
| [
"I think that B is problematic. The thread would only run on one CPU, and even if it runs a process, the thread is still running. A may be better.\nIt is best to try and measure both in terms of time and see which one is faster and which one scales well. However, I'll reiterate that I highly doubt that B will scale well.\n",
"I think that \"A\" is the answer you want, but you don't have to do it yourself.\nHave you considered ampoule?\n"
] | [
2,
1
] | [] | [] | [
"multi_user",
"python",
"twisted"
] | stackoverflow_0000471660_multi_user_python_twisted.txt |
Q:
unit testing for an application server
I wrote an application server (using python & twisted) and I want to start writing some tests. But I do not want to use Twisted's Trial due to time constraints and not having time to play with it now. So here is what I have in mind: write a small test client that connects to the app server and makes the necessary requests (the communication protocol is some in-house XML), store the received XML statically, and then write some tests on those static data using unittest.
My question is: Is this a correct approach and if yes, what kind of tests are covered with this approach?
Also, using this method has several disadvantages, like not being able to access the database layer in order to build/rebuild the schema. And when should the test client connect to the server: per each unit test, or before running the test suite?
A:
You should use Trial. It really isn't very hard. Trial's documentation could stand to be improved, but if you know how to use the standard library unit test, the only difference is that instead of writing
import unittest
you should write
from twisted.trial import unittest
... and then you can return Deferreds from your test_ methods. Pretty much everything else is the same.
The one other difference is that instead of building a giant test object at the bottom of your module and then running
python your/test_module.py
you can simply define your test cases and then run
trial your.test_module
If you don't care about reactor integration at all, in fact, you can just run trial on a set of existing Python unit tests. Trial supports the standard library 'unittest' module.
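As a minimal sketch of what such a test looks like (the class name and values here are illustrative), a test method can simply return a Deferred and Trial will wait for it:
from twisted.internet import reactor, task
from twisted.trial import unittest

class DeferredTests(unittest.TestCase):
    def test_delayed_value(self):
        d = task.deferLater(reactor, 0.01, lambda: 42)
        d.addCallback(self.assertEqual, 42)
        return d  # Trial waits for this Deferred to fire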
A:
"My question is: Is this a correct approach?"
It's what you chose. You made a lot of excuses, so I'm assuming that you're pretty well fixed on this course. It's not the best, but you've already listed all your reasons for doing it (and then asked follow-up questions on this specific course of action). "correct" doesn't enter into it anymore, so there's no answer to this question.
"what kind of tests are covered with this approach?"
They call it "black-box" testing. The application server is a black box that has a few inputs and outputs, and you can't test any of it's internals. It's considered one acceptable form of testing because it tests the bottom-line external interfaces for acceptable behavior.
If you have problems, it turns out to be useless for doing diagnostic work. You'll find that you also need to do white-box testing on the internal structures.
"not being able to access the database layer in order to build/rebuild the schema,"
Why not? This is Python. Write a separate tool that imports that layer and does database builds.
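For instance, a tiny standalone sketch of such a tool (every name here is hypothetical; the point is just that your database layer is importable like any other Python module):
# rebuild_schema.py -- hypothetical helper script
import myapp.db as db    # whatever your database layer module is called

if __name__ == "__main__":
    db.drop_schema()     # assumed helpers exposed by your layer
    db.create_schema()
    print "schema rebuilt"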
"when will the test client going to connect to the server: per each unit test or before running the test suite?"
Depends on the intent of the test. Depends on your use cases. What happens in the "real world" with your actual intended clients?
You'll want to test client-like behavior, making connections the way clients make connections.
Also, you'll want to test abnormal behavior, like clients dropping connections or doing things out of order, or unconnected.
A:
I think you chose the wrong direction. It's true that the Trial docs are very light. But Trial is based on unittest and only adds some machinery to deal with the reactor loop and asynchronous calls (it's not easy to write tests that deal with Deferreds). All your tests that don't include Deferreds/asynchronous calls will be exactly like normal unittest tests.
The trial command is a test runner (a bit like nose), so you don't have to write test suites for your tests. You will save time with it. On top of that, the trial command can output profiling and coverage information. Just run trial -h for more info.
But in any case, the first thing you should ask yourself is which kind of tests you need the most: unit tests, integration tests or system tests (black-box). It's possible to do all of them with Trial, but it's not necessarily always the best fit.
A:
I haven't used Twisted before, and the Twisted/Trial documentation isn't stellar from what I just saw, but it'll likely take you 2-3 days to implement correctly the test system you describe above. Now, like I said I have no idea about Trial, but I GUESS you could probably get it working in 1-2 days, since you already have a Twisted application. Now if Trial gives you more coverage in less time, I'd go with Trial.
But remember this is just an answer from a very cursory look at the docs.
| unit testing for an application server | I wrote an application server (using python & twisted) and I want to start writing some tests. But I do not want to use Twisted's Trial due to time constraints and not having time to play with it now. So here is what I have in mind: write a small test client that connects to the app server and makes the necessary requests (the communication protocol is some in-house XML), store the received XML statically, and then write some tests on those static data using unittest.
My question is: Is this a correct approach and if yes, what kind of tests are covered with this approach?
Also, using this method has several disadvantages, like not being able to access the database layer in order to build/rebuild the schema. And when should the test client connect to the server: per each unit test, or before running the test suite?
| [
"You should use Trial. It really isn't very hard. Trial's documentation could stand to be improved, but if you know how to use the standard library unit test, the only difference is that instead of writing\nimport unittest\n\nyou should write\nfrom twisted.trial import unittest\n\n... and then you can return Deferreds from your test_ methods. Pretty much everything else is the same.\nThe one other difference is that instead of building a giant test object at the bottom of your module and then running\npython your/test_module.py\n\nyou can simply define your test cases and then run\ntrial your.test_module\n\nIf you don't care about reactor integration at all, in fact, you can just run trial on a set of existing Python unit tests. Trial supports the standard library 'unittest' module.\n",
"\"My question is: Is this a correct approach?\"\nIt's what you chose. You made a lot of excuses, so I'm assuming that your pretty well fixed on this course. It's not the best, but you've already listed all your reasons for doing it (and then asked follow-up questions on this specific course of action). \"correct\" doesn't enter into it anymore, so there's no answer to this question.\n\"what kind of tests are covered with this approach?\"\nThey call it \"black-box\" testing. The application server is a black box that has a few inputs and outputs, and you can't test any of it's internals. It's considered one acceptable form of testing because it tests the bottom-line external interfaces for acceptable behavior.\nIf you have problems, it turns out to be useless for doing diagnostic work. You'll find that you need to also to white-box testing on the internal structures. \n\"not being able to access the database layer in order to build/rebuild the schema,\" \nWhy not? This is Python. Write a separate tool that imports that layer and does database builds.\n\"when will the test client going to connect to the server: per each unit test or before running the test suite?\"\nDepends on the intent of the test. Depends on your use cases. What happens in the \"real world\" with your actual intended clients?\nYou'll want to test client-like behavior, making connections the way clients make connections.\nAlso, you'll want to test abnormal behavior, like clients dropping connections or doing things out of order, or unconnected.\n",
"I think you chose the wrong direction. It's true that the Trial docs is very light. But Trial is base on unittest and only add some stuff to deal with the reactor loop and the asynchronous calls (it's not easy to write tests that deal with deffers). All your tests that are not including deffer/asynchronous call will be exactly like normal unittest.\nThe Trial command is a test runner (a bit like nose), so you don't have to write test suites for your tests. You will save time with it. On top of that, the Trial command can output profiling and coverage information. Just do Trial -h for more info.\nBut in any way the first thing you should ask yourself is which kind of tests do you need the most, unit tests, integration tests or system tests (black-box). It's possible to do all with Trial but it's not necessary allways the best fit.\n",
"haven't used twisted before, and the twisted/trial documentation isn't stellar from what I just saw, but it'll likely take you 2-3 days to implement correctly the test system you describe above. Now, like I said I have no idea about Trial, but I GUESS you could probably get it working in 1-2 days, since you already have a Twisted application. Now if Trial gives you more coverage in less time, I'd go with Trial.\nBut remember this is just an answer from a very cursory look at the docs\n"
] | [
2,
1,
1,
0
] | [] | [] | [
"python",
"twisted",
"unit_testing"
] | stackoverflow_0000464543_python_twisted_unit_testing.txt |
Q:
is there an alternative way of calling next on python generators?
I have a generator and I would like to know if I can use it without having to worry about StopIteration, and without the for item in generator loop. I would like to use it with a while statement, for example (or other constructs). How could I do that?
A:
built-in function
next(iterator[, default])
Retrieve the next item from the iterator by calling its __next__() method. If default is given, it is returned if the iterator is exhausted, otherwise StopIteration is raised.
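For example (the built-in next() exists in Python 2.6 and later), the default argument makes exhaustion explicit instead of raising:
>>> it = iter([1, 2])
>>> next(it, 'done')
1
>>> next(it, 'done')
2
>>> next(it, 'done')
'done'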
In Python 2.5 and older:
raiseStopIteration = object()
def next(iterator, default=raiseStopIteration):
if not hasattr(iterator, 'next'):
raise TypeError("not an iterator")
try:
return iterator.next()
except StopIteration:
if default is raiseStopIteration:
raise
else:
return default
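With a default in hand, driving a generator from a while statement becomes straightforward. A sketch, assuming None is never a legitimate yielded value:
def numbers():
    for i in (1, 2, 3):
        yield i

gen = numbers()
while True:
    item = next(gen, None)  # None acts as the end-of-iteration sentinel
    if item is None:
        break
    print item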
A:
Another option is to read all the generator values at once:
>>> alist = list(agenerator)
Example:
>>> def f():
... yield 'a'
...
>>> a = list(f())
>>> a[0]
'a'
>>> len(a)
1
A:
Use this to wrap your generator:
class GeneratorWrap(object):
def __init__(self, generator):
self.generator = generator
def __iter__(self):
return self
def next(self):
for o in self.generator:
return o
raise StopIteration # If you don't care about the iterator protocol, remove this line and the __iter__ method.
Use it like this:
def example_generator():
for i in [1,2,3,4,5]:
yield i
gen = GeneratorWrap(example_generator())
print gen.next() # prints 1
print gen.next() # prints 2
Update: Please use the answer below because it is much better than this one.
| is there an alternative way of calling next on python generators? | I have a generator and I would like to know if I can use it without having to worry about StopIteration, and without the for item in generator loop. I would like to use it with a while statement, for example (or other constructs). How could I do that?
| [
"built-in function\n\nnext(iterator[, default])\n Retrieve the next item from the iterator by calling its __next__() method. If default is given, it is returned if the iterator is exhausted, otherwise StopIteration is raised.\n\nIn Python 2.5 and older:\nraiseStopIteration = object()\ndef next(iterator, default=raiseStopIteration):\n if not hasattr(iterator, 'next'):\n raise TypeError(\"not an iterator\")\n try:\n return iterator.next()\n except StopIteration:\n if default is raiseStopIteration:\n raise\n else:\n return default\n\n",
"Another options is to read all generator values at once:\n>>> alist = list(agenerator)\n\nExample:\n>>> def f():\n... yield 'a'\n...\n>>> a = list(f())\n>>> a[0]\n'a'\n>>> len(a)\n1\n\n",
"Use this to wrap your generator:\nclass GeneratorWrap(object):\n\n def __init__(self, generator):\n self.generator = generator\n\n def __iter__(self):\n return self\n\n def next(self):\n for o in self.generator:\n return o\n raise StopIteration # If you don't care about the iterator protocol, remove this line and the __iter__ method.\n\nUse it like this:\ndef example_generator():\n for i in [1,2,3,4,5]:\n yield i\n\ngen = GeneratorWrap(example_generator())\nprint gen.next() # prints 1\nprint gen.next() # prints 2\n\nUpdate: Please use the answer below because it is much better than this one.\n"
] | [
15,
2,
-1
] | [] | [] | [
"language_features",
"python"
] | stackoverflow_0000500578_language_features_python.txt |
Q:
Python + SQLite query to find entries that sit in a specified time slot
I want to store a row in an SQLite 3 table for each booking in my diary.
Each row will have a 'start time' and an 'end time'.
Does anyone know how I can query the table for an event at a given time?
E.g. Return any rows that happen at say 10:30am
Thanks
A:
SQLite3 doesn't have a datetime type, though it does have date and time functions.
Typically you store dates and times in your database in something like ISO 8601 format: YYYY-MM-DD HH:MM:SS. Then datetimes sort lexicographically into time order.
With your datetimes stored this way, you simply use text comparisons such as
SELECT * FROM tbl WHERE tbl.start = '2009-02-01 10:30:00'
or
SELECT * FROM tbl WHERE '2009-02-01 10:30:00' BETWEEN tbl.start AND tbl.end;
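From Python, a minimal sketch with the sqlite3 module (the table and column names here are assumptions; note the parameterized query):
import sqlite3

conn = sqlite3.connect("diary.db")
when = "2009-02-01 10:30:00"
cursor = conn.execute(
    "SELECT * FROM bookings WHERE ? BETWEEN start_time AND end_time",
    (when,))
for row in cursor.fetchall():
    print row  # every booking covering 10:30am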
| Python + SQLite query to find entries that sit in a specified time slot | I want to store a row in an SQLite 3 table for each booking in my diary.
Each row will have a 'start time' and an 'end time'.
Does anyone know how I can query the table for an event at a given time?
E.g. Return any rows that happen at say 10:30am
Thanks
| [
"SQLite3 doesn't have a datetime type, though it does have date and time functions.\nTypically you store dates and times in your database in something like ISO 8601 format: YYYY-MM-DD HH:MM:SS. Then datetimes sort lexicographically into time order.\nWith your datetimes stored this way, you simply use text comparisons such as \nSELECT * FROM tbl WHERE tbl.start = '2009-02-01 10:30:00'\n\nor\nSELECT * FROM tbl WHERE '2009-02-01 10:30:00' BETWEEN tbl.start AND tbl.end;\n\n"
] | [
2
] | [] | [] | [
"python",
"sql",
"sqlite"
] | stackoverflow_0000501021_python_sql_sqlite.txt |
Q:
list named with a function argument in python
I get the feeling this is probably something I should know but I can't think of it right now. I'm trying to get a function to build a list where the name of the list is an argument given in the function;
e.g.
def make_hand(deck, handname):
handname = []
for c in range(5):
handname.append(deck.pop())
return handname
# deck being a list containing all the cards in a deck of cards earlier
The issue is that this creates a list called handname, when I want it to be called whatever the user enters as handname when creating the hand.
Can anyone help? Thanks.
A:
You can keep a dictionary where the keys are the names of the hands and the values are the lists.
Then you can just say dictionary[handname] to access a particular hand. Along the lines of:
hands = {} # Create a new dictionary to hold the hands.
hands["flush"] = make_hand(deck) # Generate some hands using your function.
hands["straight"] = make_hand(deck) # Generate another hand with a different name.
print hands["flush"] # Access the hand later.
A:
While you can create variables with arbitrary names at runtime, using exec (as sykora suggested), or by meddling with locals, globals or setattr on objects, your question is somewhat moot.
An object (just about anything, from integers to classes with 1000 members) is just a chunk of memory. It does not have a name, it can have arbitrarily many names, and all names are treated equal: they just introduce a reference to some object and prevent it from being collected.
If you want to name items in the sense that a user of your program gives a user-visible name to an object, you should use a dictionary to associate objects with names.
Your approach of user-supplied variable names has several other severe implications:
* what if the user supplies the name of an existing variable?
* what if the user supplies an invalid name?
You're introducing a leaky abstraction, so unless it is really, really important for the purpose of your program that the user can specify a new variable name, the user should not have to worry about how you store your objects - and not be presented with seemingly strange restrictions.
| list named with a function argument in python | I get the feeling this is probably something I should know but I can't think of it right now. I'm trying to get a function to build a list where the name of the list is an argument given in the function;
e.g.
def make_hand(deck, handname):
handname = []
for c in range(5):
handname.append(deck.pop())
return handname
# deck being a list containing all the cards in a deck of cards earlier
The issue is that this creates a list called handname, when I want it to be called whatever the user enters as handname when creating the hand.
Can anyone help? Thanks.
| [
"You can keep a dictionary where the keys are the name of the hand and the values are the list. \nThen you can just say dictionary[handname] to access a particular hand. Along the lines of:\nhands = {} # Create a new dictionary to hold the hands.\nhands[\"flush\"] = make_hand(deck) # Generate some hands using your function.\nhands[\"straight\"] = make_hand(deck) # Generate another hand with a different name.\nprint hands[\"flush\"] # Access the hand later.\n\n",
"While you can create variables with arbitrary names at runtime, using exec (as sykora suggested), or by meddlings with locals, globals or setattr on objects, your question is somewhat moot.\nAn object (just about anything, from integers to classes with 1000 members) is just a chunk of memory. It does not have a name, it can have arbitrarily many names, and all names are treated equal: they just introduce a reference to some object and prevent it from being collected.\nIf you want to name items in the sense that a user of your program gives a user-visible name to an object, you should use a dictionary to associated objects with names.\nYour approach of user-supplied variable names has several other severe implications:\n * what if the user supplies the name of an existing variable?\n * what if the user supplies an invalid name?\nYou're introducing a leaky abstraction, so unless it is really, really important for the purpose of your program that the user can specify a new variable name, the user should not have to worry about how you store your objects - an not be presented with seemingly strange restrictions.\n"
] | [
6,
2
] | [] | [] | [
"arguments",
"list",
"python"
] | stackoverflow_0000501027_arguments_list_python.txt |
Q:
Komodo Edit and Notepad++ ::: Pros & Cons ::: Python dev
I'm using Notepad++ for python development, and few days ago I found out about free Komodo Edit.
I need pros and cons for Python development between these two editors...
A:
I have worked a bit with Python programming for Google App Engine, which I started out in Notepad++ and then recently shifted over to Komodo using two excellent startup tutorials - both of which are conveniently linked from this blog post (direct: here and here).
Komodo supports the basic organization of your work into Projects, which Notepad++ does not (apart from physical folder organization).
The custom commands toolbar is useful to keep track of numerous frequently-used commands and even link to URLs (like online documentation and the like).
It has a working (if sometimes clunky) code-completion mechanism.
In short, it's an IDE which provides all the benefits thereof.
Notepad++ is simpler, much MUCH faster to load, and does support some basic configurable run commands; it's a fine choice if you like doing all your execution and debugging right in the commandline or Python shell. My advice is to try both!
A:
I just downloaded and started using Komodo Edit. I've been using Notepad++ for awhile. Here is what I think about some of the features:
Komodo Edit Pros:
You can jump to a function definition, even if it's in another file (I love this)
There is a plugin that displays the list of classes, functions and such for the current file on the side. Notepad++ used to have a plugin like this, but it no longer works with the current version and hasn't been updated in a while.
Notepad++ Pros:
If you select a word, it will highlight all of those words in the current document (makes it easier to find misspellings), without having to hit Ctrl+F.
When working with HTML, when the cursor is on/in a tag, the starting and ending tags are both highlighted
Anyone know if either of those last 2 things is possible in Komodo Edit?
A:
I use Komodo edit. The main reasons are: Intellisense (not as good as VisualStudio, but Python's a hard language to do intellisense for) and cross-platform compatibility. It's nice being able to use the same editor on my Windows machine, my linux machine, and my macbook with little to no change in feel.
A:
I use both Komodo Edit and Notepad++.
Notepad++ is a lot quicker to launch and it's more lightweight, so I often use it for quick one-off editing.
I use Komodo Edit for major projects, like my django and wxPython applications. KE is a full-featured IDE, so it has a lot more features.
Main advantages of Komodo Edit for programming Python:
Manage groups of files as projects
Use custom commands to run files, run nosetests/pylint, etc.
Auto complete & syntax checking
Mozilla extension system, with several useful extensions available
Write macros in JavaScript or Python
Spell checking
Some of the little things that Notepad++ is missing for Python development:
Doesn't auto-indent after a colon
You can't set tabs/spaces on a file-type basis (I like to use tabs for HTML)
No code completion or tooltips
No on-the-fly syntax checking
A:
As far as I know, Notepad++ doesn't show you the docstring each method has.
A:
A downside I found of Notepad++ for Python is that it tends (for me) to silently mix tabs and spaces. I know this is configurable, but it caught me out, especially when trying to work with other people using different editors / IDE's, so take care.
A:
I haven't used Komodo yet (the download never quite finished on the slow connection I was on at the time), but I use Eclipse with PyDev regularly and enjoy the "IDE" features described by the other respondents. However, I'm also regularly frustrated by how much of a resource hog it is.
I downloaded Notepad++ recently (much smaller download size ;-) ) and have been enjoying it quite a bit. The editor itself is nice and fast and it looks to be extensible. I'm hoping to copy some of my favorite features from the IDE into Notepad++ and migrate, at some distant point in the future.
A:
If I had to choose between Notepad++ and Komodo, I would choose PyScripter ;-)
Seriously, I consider PyScripter a great alternative...
| Komodo Edit and Notepad++ ::: Pros & Cons ::: Python dev | I'm using Notepad++ for python development, and few days ago I found out about free Komodo Edit.
I need pros and cons for Python development between these two editors...
| [
"I have worked a bit with Python programming for Google App Engine, which I started out in Notepad++ and then recently shifted over to Komodo using two excellent startup tutorials - both of which are conveniently linked from this blog post (direct: here and here).\n\nKomodo supports the basic\norganization of your work into\nProjects, which Notepad++ does not\n(apart from physical folder\norganization). \nThe custom commands\ntoolbar is useful to keep track of\nnumerous frequently-used commands\nand even link to URLs (like online\ndocumentation and the like).\nIt has a working (if sometimes clunky)\ncode-completion mechanism.\n\nIn short, it's an IDE which provides all the benefits thereof.\nNotepad++ is simpler, much MUCH faster to load, and does support some basic configurable run commands; it's a fine choice if you like doing all your execution and debugging right in the commandline or Python shell. My advice is to try both!\n",
"I just downloaded and started using Komodo Edit. I've been using Notepad++ for awhile. Here is what I think about some of the features:\nKomodo Edit Pros:\n\nYou can jump to a function definition, even if it's in another file (I love this)\nThere is a plugin that displays the list of classes, functions and such for the current file on the side. Notepad++ used to have a plugin like this, but it no longer works with the current version and hasn't been updated in a while.\n\nNotepad++ Pros:\n\nIf you select a word, it will highlight all of those words in the current document (makes it easier to find misspellings), without having to hit Ctrl+F.\nWhen working with HTML, when the cursor is on/in a tag, the starting and ending tags are both highlighted\n\nAnyone know if either of those last 2 things is possible in Komodo Edit?\n",
"I use Komodo edit. The main reasons are: Intellisense (not as good as VisualStudio, but Python's a hard language to do intellisense for) and cross-platform compatibility. It's nice being able to use the same editor on my Windows machine, my linux machine, and my macbook with little to no change in feel.\n",
"I use both Komodo Edit and Notepad++.\nNotepad++ is a lot quicker to launch and it's more lightweight, so I often use it for quick one-off editing.\nI use Komodo Edit for major projects, like my django and wxPython applications. KE is a full-featured IDE, so it has a lot more features. \nMain advantages of Komodo Edit for programming Python:\n\nManage groups of files as projects\nUse custom commands to run files, run nosetests/pylint, etc.\nAuto complete & syntax checking\nMozilla extension system, with several useful extensions available\nWrite macros in JavaScript or Python\nSpell checking\n\nSome of the little things that Notepad++ is missing for Python development:\n\nDoesn't auto-indent after a colon\nYou can't set tabs/spaces on a file-type basis (I like to use tabs for HTML)\nNo code completion or tooltips\nNo on-the-fly syntax checking\n\n",
"As far as I know , Notepad++ doesn't show you the docstring each method has .\n",
"A downside I found of Notepad++ for Python is that it tends (for me) to silently mix tabs and spaces. I know this is configurable, but it caught me out, especially when trying to work with other people using different editors / IDE's, so take care.\n",
"I haven't used Komodo yet (the download never quite finished on the slow connection I was on at the time), but I use Eclipse with PyDev regularly and enjoy the \"IDE\" features described by the other respondents. However, I'm also regularly frustrated by how much of a resource hog it is.\nI downloaded Notepad++ recently (much smaller download size ;-) ) and have been enjoying it quite a bit. The editor itself is nice and fast and it looks to be extensible. I'm hoping to copy some of my favorite features from IDE into Notepad++ and migrate, at some distant point in the future.\n",
"If I had to choose between Notepad++ and Komodo i would choose PyScripter ;.)\nSeriously I consider PyScripter as a great alternative...\n"
] | [
22,
9,
8,
7,
5,
4,
1,
1
] | [
"Downloaded both myself. Like Komodo better. \nKomodo Pros: Like it better. Does more. Looks like an IDE. Edits Django templates\nNotepad++ Cons: Don't like it as much. Does less. Looks less like and IDE.\n"
] | [
-4
] | [
"editor",
"komodo",
"komodoedit",
"notepad++",
"python"
] | stackoverflow_0000309135_editor_komodo_komodoedit_notepad++_python.txt |
Q:
Minimal, Standalone, Distributable, cross platform web server
I've been writing a fair number of smaller wsgi apps lately and am looking to find a web server that can be distributed, preconfigured to run the specific app. I know there are things like twisted and cherrypy which can serve up wsgi apps, but they seem to be missing a key piece of functionality for me, which is the ability to "pseudostream" large files using the http range header. Is there a web server available under a BSD or similar license which can be distributed as a standalone executable on any of the major platforms which is capable of both proxying to a wsgi server (like cherrypy or the like) AND serving large files using the http range header?
A:
Lighttpd has a BSD license, so you should be able to bundle it if you wanted.
You say it's for small apps, so I guess that means small, local, single-user web interfaces served by a small HTTP server? If that's the case, then any Python implementation should work. Just use something like py2exe to package it (in fact, there was a question relating to packaging Python programs here on SO not too long ago).
Update, re: range header:
The default Python HTTP server may not support the Range header you want, but it's pretty easy to write your own handler, or a small WSGI app to do the logic, especially if all you're doing is streaming a file. It wouldn't be too many lines:
def stream_file(environ, start_response):
    fp = open(base_dir + environ["PATH_INFO"], "rb")
    # e.g. "Range: bytes=1024-" arrives as HTTP_RANGE; only the start offset is handled here
    start = int(environ.get("HTTP_RANGE", "bytes=0-").split("=")[1].split("-")[0])
    fp.seek(start)
    # a real handler would also emit a Content-Range response header
    start_response("206 Partial Content", [("Content-Type", "file/type")])
    return fp
A:
What's wrong with Apache + mod_wsgi? Apache is already multiplatform; it's often already installed (except in Windows).
You might also want to look at lighttpd, there are some blogs on configuring it to work with WSGI. See http://cleverdevil.org/computing/24/python-fastcgi-wsgi-and-lighttpd, and http://redmine.lighttpd.net/issues/show/1523
| Minimal, Standalone, Distributable, cross platform web server | I've been writing a fair number of smaller wsgi apps lately and am looking to find a web server that can be distributed, preconfigured to run the specific app. I know there are things like twisted and cherrypy which can serve up wsgi apps, but they seem to be missing a key piece of functionality for me, which is the ability to "pseudostream" large files using the http range header. Is there a web server available under a BSD or similar license which can be distributed as a standalone executable on any of the major platforms which is capable of both proxying to a wsgi server (like cherrypy or the like) AND serving large files using the http range header?
| [
"Lighttpd has a BSD license, so you should be able to bundle it if you wanted.\nYou say its for small apps, so I guess that means, small, local, single user web interfaces being served by a small http server? If thats is the case, then any python implementation should work. Just use something like py2exe to package it (in fact, there was a question relating to packaging python programs here on SO not too long ago).\nUpdate, re: range header:\nThe default python http server may not support the range header you want, but its pretty easy to write your own handler, or a small wsgi app to do the logic, especially if all you're doing is streaming a file. It wouldn't be too many lines:\ndef stream_file(environ, start_response):\n fp = open(base_dir + environ[\"PATH_INFO\"])\n fp.seek(environ[\"HTTP_CONTENT_RANGE\"]) # just an example\n start_response(\"200 OK\", (('Content-Type', \"file/type\")))\n return fp\n\n",
"What's wrong with Apache + mod_wsgi? Apache is already multiplatform; it's often already installed (except in Windows).\nYou might also want to look at lighttpd, there are some blogs on configuring it to work with WSGI. See http://cleverdevil.org/computing/24/python-fastcgi-wsgi-and-lighttpd, and http://redmine.lighttpd.net/issues/show/1523 \n"
] | [
5,
3
] | [] | [] | [
"http",
"python",
"wsgi"
] | stackoverflow_0000499084_http_python_wsgi.txt |
Q:
How can I execute Python code without the Komodo IDE?
I do that without the IDE:
$ ipython
$ edit file.py
$ :x (save and close)
It executes Python code, but not the script where I use Pygame. It gives:
WARNING: Failure executing file:
In the IDE, my code executes.
A:
If something doesn't work in ipython, try the real Python interpreter (just python); ipython has known bugs, and not infrequently code known to work in the real interpreter fails there.
On UNIXlike platforms, your script should start with a shebang -- that is, a line like the following:
#!/usr/bin/env python
should be the very first line (and should have a standard UNIX line ending). This tells the operating system to execute your code with the first python interpreter found in the PATH, presuming that your script has executable permission set and is invoked as a program.
The other option is to start the program manually -- as per the following example:
$ python yourprogram.py
...or, to use a specific version of the interpreter (if more than one is installed):
$ python2.5 yourprogram.py
| How can I execute Python code without the Komodo IDE? | I do that without the IDE:
$ ipython
$ edit file.py
$ :x (save and close)
It executes Python code, but not the script where I use Pygame. It gives:
WARNING: Failure executing file:
In the IDE, my code executes.
| [
"If something doesn't work in ipython, try the real Python interpreter (just python); ipython has known bugs, and not infrequently code known to work in the real interpreter fails there.\nOn UNIXlike platforms, your script should start with a shebang -- that is, a line like the following:\n#!/usr/bin/env python\n\nshould be the very first line (and should have a standard UNIX line ending). This tells the operating system to execute your code with the first python interpreter found in the PATH, presuming that your script has executable permission set and is invoked as a program.\nThe other option is to start the program manually -- as per the following example:\n$ python yourprogram.py\n\n...or, to use a specific version of the interpreter (if more than one is installed):\n$ python2.5 yourprogram.py\n\n"
] | [
1
] | [] | [] | [
"executable",
"ide",
"python"
] | stackoverflow_0000501817_executable_ide_python.txt |
Q:
How to modularize a Python application
I've got a number of scripts that use common definitions. How do I split them across multiple files? Furthermore, the application cannot be installed in any way in my scenario; it must be possible to have an arbitrary number of versions concurrently running, and it must work without superuser rights. Solutions I've come up with are:
* Duplicate code in every script. Messy, and probably the worst scheme.
* Put all scripts and common code in a single directory, and use from . import to load them. The downside of this approach is that I'd like to put my libraries in a different directory than the applications.
* Put common code in its own directory, write an __init__.py that imports all submodules, and finally use from . import to load them. Keeps code organized, but it's a little bit of overhead to maintain __init__.py and qualify names.
* Add the library directory to sys.path and import. I tend toward this, but I'm not sure whether fiddling with sys.path is nice code.
* Load using execfile (exec in Python 3). Combines the advantages of the previous two approaches: only one line per module needed, and I can use a dedicated namespace. On the other hand, this evades the Python module concept and pollutes the global namespace.
* Write and install a module using distutils. This installs the library for all Python scripts, needs superuser rights, and impacts other applications, and is hence not applicable in my case.
What is the best method?
A:
Adding to sys.path (usually using site.addsitedir) is quite common and not particularly frowned upon. Certainly you will want your common working shared stuff to be in modules somewhere convenient.
If you are using Python 2.6+ there's already a user-level modules folder you can use without having to add to sys.path or PYTHONPATH. It's ~/.local/lib/python2.6/site-packages on Unix-likes - see PEP 370 for more information.
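A minimal sketch of the sys.path approach with site.addsitedir, pointing at a lib directory next to the script (the directory name and module are examples):
import os
import site

here = os.path.dirname(os.path.abspath(__file__))
site.addsitedir(os.path.join(here, "lib"))  # also processes any .pth files

import mylib  # hypothetical shared module living in ./lib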
A:
You can set the PYTHONPATH environment variable to the directory where your library files are located. This adds that path to the library search path and you can use a normal import to import them.
A:
If you have multiple environments which have various combinations of dependencies, a good solution is to use virtualenv to create sandboxed Python environments, each with their own set of installed packages. Each environment will function in the same way as a system-wide Python site-packages setup, but no superuser rights are required to create local environments.
Google has plenty of info, but this looks like a pretty good starting point.
A:
Another alternative to manually adding the path to sys.path is to use the environment variable PYTHONPATH.
Also, distutils allows you to specify a custom installation directory using
python setup.py install --home=/my/dir
However, neither of these may be practical if you need to have multiple versions running simultaneously with the same module names. In that case you're probably best off modifying sys.path.
A:
I've used the third approach (add the directories to sys.path) for more than one project, and I think it's a valid approach.
| How to modularize a Python application | I've got a number of scripts that use common definitions. How do I split them across multiple files? Furthermore, the application cannot be installed in any way in my scenario; it must be possible to have an arbitrary number of versions concurrently running, and it must work without superuser rights. Solutions I've come up with are:
* Duplicate code in every script. Messy, and probably the worst scheme.
* Put all scripts and common code in a single directory, and use from . import to load them. The downside of this approach is that I'd like to put my libraries in a different directory than the applications.
* Put common code in its own directory, write an __init__.py that imports all submodules, and finally use from . import to load them. Keeps code organized, but it's a little bit of overhead to maintain __init__.py and qualify names.
* Add the library directory to sys.path and import. I tend toward this, but I'm not sure whether fiddling with sys.path is nice code.
* Load using execfile (exec in Python 3). Combines the advantages of the previous two approaches: only one line per module needed, and I can use a dedicated namespace. On the other hand, this evades the Python module concept and pollutes the global namespace.
* Write and install a module using distutils. This installs the library for all Python scripts, needs superuser rights, and impacts other applications, and is hence not applicable in my case.
What is the best method?
| [
"Adding to sys.path (usually using site.addsitedir) is quite common and not particularly frowned upon. Certainly you will want your common working shared stuff to be in modules somewhere convenient.\nIf you are using Python 2.6+ there's already a user-level modules folder you can use without having to add to sys.path or PYTHONPATH. It's ~/.local/lib/python2.6/site-packages on Unix-likes - see PEP 370 for more information.\n",
"You can set the PYTHONPATH environment variable to the directory where your library files are located. This adds that path to the library search path and you can use a normal import to import them.\n",
"If you have multiple environments which have various combinations of dependencies, a good solution is to use virtualenv to create sandboxed Python environments, each with their own set of installed packages. Each environment will function in the same way as a system-wide Python site-packages setup, but no superuser rights are required to create local environments.\nGoogle has plenty of info, but this looks like a pretty good starting point.\n",
"Another alternative to manually adding the path to sys.path is to use the environment variable PYTHONPATH.\nAlso, distutils allows you to specify a custom installation directory using\n python setup.py install --home=/my/dir \n\nHowever, neither of these may be practical if you need to have multiple versions running simultaneously with the same module names. In that case you're probably best off modifying sys.path.\n",
"I've used the third approach (add the directories to sys.path) for more than one project, and I think it's a valid approach.\n"
] | [
8,
4,
3,
1,
1
] | [] | [] | [
"python"
] | stackoverflow_0000501945_python.txt |
Q:
How do I send an ARP packet through python on windows without needing winpcap?
Is there any way to send ARP packet on Windows without the use of another library such as winpcap?
I have heard that Windows XP SP2 blocks raw ethernet sockets, but I have also heard that raw sockets are only blocked for administrators. Any clarification here?
A:
There is no way to do that in the general case without the use of an external library.
If there are no requirements on what the packet should contain (i.e., if any ARP packet will do) then you can obviously send an ARP request if you're on an Ethernet network simply by trying to send something to any IP on your own subnet (ensuring beforehand that the destination IP is not in the ARP cache by running an external arp -d tar.get.ip.address command), but this will probably not be what you want.
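A rough sketch of that trick (the address is an example on your own subnet, and the arp command may require administrator rights):
import os
import socket

target = "192.168.1.123"            # some IP on your own subnet
os.system("arp -d %s" % target)     # evict any cached ARP entry
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.sendto("x", (target, 9))          # forces the OS to ARP for the target first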
For more information about raw socket support see the TCP/IP Raw Sockets Docs page, specifically the Limitations on Raw Sockets section.
A:
You could use the OpenVPN tap to send arbitrary packets as if you were using raw sockets.
| How do I send an ARP packet through python on windows without needing winpcap? | Is there any way to send ARP packet on Windows without the use of another library such as winpcap?
I have heard that Windows XP SP2 blocks raw ethernet sockets, but I have also heard that raw sockets are only blocked for administrators. Any clarification here?
| [
"There is no way to do that in the general case without the use of an external library.\nIf there are no requirements on what the packet should contain (i.e., if any ARP packet will do) then you can obviously send an ARP request if you're on an Ethernet network simply by trying to send something to any IP on your own subnet (ensuring beforehand that the destination IP is not in the ARP cache by running an external arp -d tar.get.ip.address command), but this will probably not be what you want.\nFor more information about raw socket support see the TCP/IP Raw Sockets Docs page, specifically the Limitations on Raw Sockets section.\n",
"You could use the OpenVPN tap to send arbitrary packets as if you where using raw sockets.\n"
] | [
3,
0
] | [] | [] | [
"arp",
"ethernet",
"python",
"sockets"
] | stackoverflow_0000395846_arp_ethernet_python_sockets.txt |
Q:
using django-rest-interface with http put
I'm trying to figure out how to implement my first RESTful interface using Django and django-rest-interface. I'm having problems with the HTTP PUT requests.
How do I access the parameters of the PUT request?
I thought they would be in the request.POST array, as PUT is somewhat similar to POST in my understanding, but that array is always empty.
What am I doing wrong?
Thanks for the help
A:
request.POST processes form-encoded data into a dictionary, which only makes sense for web browser form submissions. There is no equivalent for PUT, as web browsers don't PUT forms; the data submitted could have any content type. You'll need to get the raw data out of request.raw_post_data, possibly check the content type, and process it however makes sense for your application.
More information in this thread.
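A minimal sketch of handling that in a plain Django view (the XML handling here is just an assumption about your in-house protocol):
from django.http import HttpResponse
from xml.etree import ElementTree

def resource_view(request):
    if request.method == "PUT":
        doc = ElementTree.fromstring(request.raw_post_data)  # the raw PUT body
        # ... update the resource from doc ...
        return HttpResponse(status=204)  # No Content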
A:
If you look at the dispatch method of ResourceBase, there is a line like:
elif request_method == 'PUT':
load_put_and_files(request)
return target.update(request, *args, **kwargs)
load_put_and_files prepares request.PUT with the data when request.method is PUT, so you don't have to worry about that...
| using django-rest-interface with http put | I'm trying to figure out how to implement my first RESTful interface using Django and django-rest-interface. I'm having problems with the HTTP PUT requests.
How do I access the parameters of the PUT request?
I thought they would be in the request.POST array, as PUT is somewhat similar to POST in my understanding, but that array is always empty.
What am I doing wrong?
Thanks for the help
| [
"request.POST processes form-encoded data into a dictionary, which only makes sense for web browser form submissions. There is no equivalent for PUT, as web browsers don't PUT forms; the data submitted could have any content type. You'll need to get the raw data out of request.raw_post_data, possibly check the content type, and process it however makes sense for your application.\nMore information in this thread.\n",
"if you figure in the dispatch of ResourceBase there are a line like:\nelif request_method == 'PUT':\n load_put_and_files(request)\n return target.update(request, *args, **kwargs)\n\nload_put_and_files let prepare for you the request.PUT with the data y the request.method is PUT, so you dont have to worry about that...\n"
] | [
13,
4
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000500434_django_python.txt |
Q:
Beginner-level Python threading problems
As someone new to GUI development in Python (with pyGTK), I've just started learning about threading. To test out my skills, I've written a simple little GTK interface with a start/stop button. The goal is that when it is clicked, a thread starts that quickly increments a number in the text box, while keeping the GUI responsive.
I've got the GUI working just fine, but am having problems with the threading. It is probably a simple problem, but my mind is about fried for the day. Below I have pasted first the traceback from the Python interpreter, followed by the code. You can go to http://drop.io/pxgr5id to download it. I'm using bzr for revision control, so if you want to make a modification and re-drop it, please commit the changes. I'm also pasting the code at http://dpaste.com/113388/ because it can have line numbers, and this markdown stuff is giving me a headache.
Update 27 January, 15:52 EST:
Slightly updated code can be found here: http://drop.io/threagui/asset/thread-gui-rev3-tar-gz
Traceback
crashsystems@crashsystems-laptop:~/Desktop/thread-gui$ python threadgui.py
btnStartStop clicked
Traceback (most recent call last):
File "threadgui.py", line 39, in on_btnStartStop_clicked
self.thread.stop()
File "threadgui.py", line 20, in stop
self.join()
File "/usr/lib/python2.5/threading.py", line 583, in join
raise RuntimeError("cannot join thread before it is started")
RuntimeError: cannot join thread before it is started
btnStartStop clicked
threadStop = 1
btnStartStop clicked
threadStop = 0
btnStartStop clicked
Traceback (most recent call last):
File "threadgui.py", line 36, in on_btnStartStop_clicked
self.thread.start()
File "/usr/lib/python2.5/threading.py", line 434, in start
raise RuntimeError("thread already started")
RuntimeError: thread already started
btnExit clicked
exit() called
Code
#!/usr/bin/bash
import gtk, threading
class ThreadLooper (threading.Thread):
def __init__ (self, sleep_interval, function, args=[], kwargs={}):
threading.Thread.__init__(self)
self.sleep_interval = sleep_interval
self.function = function
self.args = args
self.kwargs = kwargs
self.finished = threading.Event()
def stop (self):
self.finished.set()
self.join()
def run (self):
while not self.finished.isSet():
self.finished.wait(self.sleep_interval)
self.function(*self.args, **self.kwargs)
class ThreadGUI:
# Define signals
def on_btnStartStop_clicked(self, widget, data=None):
print "btnStartStop clicked"
if(self.threadStop == 0):
self.threadStop = 1
self.thread.start()
else:
self.threadStop = 0
self.thread.stop()
print "threadStop = " + str(self.threadStop)
def on_btnMessageBox_clicked(self, widget, data=None):
print "btnMessageBox clicked"
self.lblMessage.set_text("This is a message!")
self.msgBox.show()
def on_btnExit_clicked(self, widget, data=None):
print "btnExit clicked"
self.exit()
def on_btnOk_clicked(self, widget, data=None):
print "btnOk clicked"
self.msgBox.hide()
def on_mainWindow_destroy(self, widget, data=None):
print "mainWindow destroyed!"
self.exit()
def exit(self):
print "exit() called"
self.threadStop = 1
gtk.main_quit()
def threadLoop(self):
# This will run in a thread
self.txtThreadView.set_text(str(self.threadCount))
print "hello world"
self.threadCount += 1
def __init__(self):
# Connect to the xml GUI file
builder = gtk.Builder()
builder.add_from_file("threadgui.xml")
# Connect to GUI widgets
self.mainWindow = builder.get_object("mainWindow")
self.txtThreadView = builder.get_object("txtThreadView")
self.btnStartStop = builder.get_object("btnStartStop")
self.msgBox = builder.get_object("msgBox")
self.btnMessageBox = builder.get_object("btnMessageBox")
self.btnExit = builder.get_object("btnExit")
self.lblMessage = builder.get_object("lblMessage")
self.btnOk = builder.get_object("btnOk")
# Connect the signals
builder.connect_signals(self)
# This global will be used for signaling the thread to stop.
self.threadStop = 1
# The thread
self.thread = ThreadLooper(0.1, self.threadLoop, (1,0,-1))
self.threadCounter = 0
if __name__ == "__main__":
# Start GUI instance
GUI = ThreadGUI()
GUI.mainWindow.show()
gtk.main()
A:
Threading with PyGTK is a bit tricky if you want to do it right. Basically, you should not update the GUI from within any thread other than the main thread (a common limitation in GUI libs). Usually this is done in PyGTK using a mechanism of queued messages (for communication between workers and the GUI) which are read periodically using a timeout function. Once I gave a presentation on this topic at my local LUG; you can grab the example code for that presentation from a Google Code repository. Have a look at the MainWindow class in forms/frmmain.py, especially the _pulse() method and what is done in on_entry_activate() (a thread is started there, plus the idle timer is created).
def on_entry_activate(self, entry):
text = entry.get_text().strip()
if text:
store = entry.get_completion().get_model()
if text not in [row[0] for row in store]:
store.append((text, ))
thread = threads.RecommendationsFetcher(text, self.queue)# <- 1
self.idle_timer = gobject.idle_add(self._pulse)# <- 2
tv_results = self.widgets.get_widget('tv_results')
model = tv_results.get_model()
model.clear()
thread.setDaemon(True)# <- 3
progress_update = self.widgets.get_widget('progress_update')
progress_update.show()
thread.start()# <- 4
This way, application updates GUI when is "idle" (by GTK means) causing no freezes.
1: create thread
2: create idle timer
3: daemonize thread so the app can be closed without waiting for thread completion
4: start thread
A:
Generally it's better to avoid threads when you can. It's very difficult to write a threaded application correctly, and even more difficult to know you got it right. Since you're writing a GUI application, it's easier for you to visualize how to do so, since you already have to write your application within an asynchronous framework.
The important thing to realize is that a GUI application is doing a whole lot of nothing. It spends most of its time waiting for the OS to tell it that something has happened. You can do a lot of stuff in this idle time as long as you know how to write long-running code so it doesn't block.
You can solve your original problem by using a timeout; telling your GUI framework to call back some function after a delay, and then resetting that delay or starting another delayed call.
Another common question is how to communicate over the network in a GUI application. Network apps are like GUI apps in that they do a whole lot of waiting. Using a network IO framework (like Twisted) makes it easy to have both parts of your application wait cooperatively instead of competitively, and again alleviates the need for extra threads.
Long-running calculations can be written iteratively instead of synchronously, and you can do your processing while the GUI is idle. You can use a generator to do this quite easily in python.
def long_calculation(param, callback):
result = None
while True:
result = calculate_next_part(param, result)
if calculation_is_done(result):
break
else:
yield
callback(result)
Calling long_calculation will give you a generator object, and calling .next() on the generator object will run the generator until it reaches either yield or return. You would just tell the GUI framework to call long_calculation(some_param, some_callback).next when it has time, and eventually your callback will be called with the result.
I don't know GTK very well, so I can't tell you which gobject functions you should be calling. With this explanation, though, you should be able to find the necessary functions in the documentation, or at worst, ask on a relevant IRC channel.
Unfortunately there is no good general-case answer. If you clarify with exactly what you're trying to do, it would be easier to explain why you don't need threads in that situation.
A:
You can't restart a stopped thread object; don't try. Instead, create a new instance of the object if you want to restart it after it's truly stopped and joined.
A:
I've played with different tools to help clean up the work with threads, idle processing, etc.
make_idler is a function decorator that allows you to run a task in the background cooperatively. This is a good middle ground between something short enough to run once in the UI thread without affecting the app's responsiveness, and doing a full-blown thread with special synchronization. Inside the decorated function you use "yield" to hand processing back to the GUI so it can remain responsive, and the next time the UI is idle it will pick up in your function where you left off. So to get this started, you just pass the decorated function to idle_add.
import functools

def make_idler(func):
"""
Decorator that makes a generator-function into a function that will
continue execution on next call
"""
a = []
@functools.wraps(func)
def decorated_func(*args, **kwds):
if not a:
a.append(func(*args, **kwds))
try:
a[0].next()
return True
except StopIteration:
del a[:]
return False
return decorated_func
If you need to do a bit more processing, you can use a context manager to lock the UI thread whenever needed to help make the code a bit safer
import contextlib
import gtk

@contextlib.contextmanager
def gtk_critical_section():
    gtk.gdk.threads_enter()
    try:
        yield
    finally:
        gtk.gdk.threads_leave()
With that in place you can just write:
with gtk_critical_section():
... processing ...
I have not finished with it yet, but by combining things done purely in idle time with things done purely in a thread, I have a decorator (not tested yet, so not posted) that lets you say whether the next section after each yield should run in the UI's idle time or in a thread. This would allow one to do some setup in the UI thread, switch to a new thread for the background work, and then switch back to the UI's idle time to do cleanup, minimizing the need for locks.
A:
I haven't looked in detail at your code. But I see two solutions to your problem:
Don't use threads at all. Instead use a timeout, like this:
import gobject
i = 0
def do_print():
global i
print i
i += 1
if i == 10:
main_loop.quit()
return False
return True
main_loop = gobject.MainLoop()
gobject.timeout_add(250, do_print)
main_loop.run()
When using threads, you must make sure that your GUI code is only called from one thread at a time by guarding it like this:
import threading
import time
import gobject
import gtk
gtk.gdk.threads_init()
def run_thread():
for i in xrange(10):
time.sleep(0.25)
gtk.gdk.threads_enter()
# update the view here
gtk.gdk.threads_leave()
gtk.gdk.threads_enter()
main_loop.quit()
gtk.gdk.threads_leave()
t = threading.Thread(target=run_thread)
t.start()
main_loop = gobject.MainLoop()
main_loop.run()
| Beginner-level Python threading problems | As someone new to GUI development in Python (with pyGTK), I've just started learning about threading. To test out my skills, I've written a simple little GTK interface with a start/stop button. The goal is that when it is clicked, a thread starts that quickly increments a number in the text box, while keeping the GUI responsive.
I've got the GUI working just fine, but am having problems with the threading. It is probably a simple problem, but my mind is about fried for the day. Below I have pasted first the trackback from the Python interpreter, followed by the code. You can go to http://drop.io/pxgr5id to download it. I'm using bzr for revision control, so if you want to make a modification and re-drop it, please commit the changes. I'm also pasting the code at http://dpaste.com/113388/ because it can have line numbers, and this markdown stuff is giving me a headache.
Update 27 January, 15:52 EST:
Slightly updated code can be found here: http://drop.io/threagui/asset/thread-gui-rev3-tar-gz
Traceback
crashsystems@crashsystems-laptop:~/Desktop/thread-gui$ python threadgui.py
btnStartStop clicked
Traceback (most recent call last):
File "threadgui.py", line 39, in on_btnStartStop_clicked
self.thread.stop()
File "threadgui.py", line 20, in stop
self.join()
File "/usr/lib/python2.5/threading.py", line 583, in join
raise RuntimeError("cannot join thread before it is started")
RuntimeError: cannot join thread before it is started
btnStartStop clicked
threadStop = 1
btnStartStop clicked
threadStop = 0
btnStartStop clicked
Traceback (most recent call last):
File "threadgui.py", line 36, in on_btnStartStop_clicked
self.thread.start()
File "/usr/lib/python2.5/threading.py", line 434, in start
raise RuntimeError("thread already started")
RuntimeError: thread already started
btnExit clicked
exit() called
Code
#!/usr/bin/env python
import gtk, threading
class ThreadLooper (threading.Thread):
def __init__ (self, sleep_interval, function, args=[], kwargs={}):
threading.Thread.__init__(self)
self.sleep_interval = sleep_interval
self.function = function
self.args = args
self.kwargs = kwargs
self.finished = threading.Event()
def stop (self):
self.finished.set()
self.join()
def run (self):
while not self.finished.isSet():
self.finished.wait(self.sleep_interval)
self.function(*self.args, **self.kwargs)
class ThreadGUI:
# Define signals
def on_btnStartStop_clicked(self, widget, data=None):
print "btnStartStop clicked"
if(self.threadStop == 0):
self.threadStop = 1
self.thread.start()
else:
self.threadStop = 0
self.thread.stop()
print "threadStop = " + str(self.threadStop)
def on_btnMessageBox_clicked(self, widget, data=None):
print "btnMessageBox clicked"
self.lblMessage.set_text("This is a message!")
self.msgBox.show()
def on_btnExit_clicked(self, widget, data=None):
print "btnExit clicked"
self.exit()
def on_btnOk_clicked(self, widget, data=None):
print "btnOk clicked"
self.msgBox.hide()
def on_mainWindow_destroy(self, widget, data=None):
print "mainWindow destroyed!"
self.exit()
def exit(self):
print "exit() called"
self.threadStop = 1
gtk.main_quit()
def threadLoop(self):
# This will run in a thread
self.txtThreadView.set_text(str(self.threadCount))
print "hello world"
self.threadCount += 1
def __init__(self):
# Connect to the xml GUI file
builder = gtk.Builder()
builder.add_from_file("threadgui.xml")
# Connect to GUI widgets
self.mainWindow = builder.get_object("mainWindow")
self.txtThreadView = builder.get_object("txtThreadView")
self.btnStartStop = builder.get_object("btnStartStop")
self.msgBox = builder.get_object("msgBox")
self.btnMessageBox = builder.get_object("btnMessageBox")
self.btnExit = builder.get_object("btnExit")
self.lblMessage = builder.get_object("lblMessage")
self.btnOk = builder.get_object("btnOk")
# Connect the signals
builder.connect_signals(self)
# This global will be used for signaling the thread to stop.
self.threadStop = 1
# The thread
self.thread = ThreadLooper(0.1, self.threadLoop, (1,0,-1))
self.threadCounter = 0
if __name__ == "__main__":
# Start GUI instance
GUI = ThreadGUI()
GUI.mainWindow.show()
gtk.main()
| [
"Threading with PyGTK is bit tricky if you want to do it right. Basically, you should not update GUI from within any other thread than main thread (common limitation in GUI libs). Usually this is done in PyGTK using mechanism of queued messages (for communication between workers and GUI) which are read periodically using timeout function. Once I had a presentation on my local LUG on this topic, you can grab example code for this presentation from Google Code repository. Have a look at MainWindow class in forms/frmmain.py, specially for method _pulse() and what is done in on_entry_activate() (thread is started there plus the idle timer is created).\ndef on_entry_activate(self, entry):\n text = entry.get_text().strip()\n if text:\n store = entry.get_completion().get_model()\n if text not in [row[0] for row in store]:\n store.append((text, ))\n thread = threads.RecommendationsFetcher(text, self.queue)# <- 1\n self.idle_timer = gobject.idle_add(self._pulse)# <- 2\n tv_results = self.widgets.get_widget('tv_results')\n model = tv_results.get_model()\n model.clear()\n thread.setDaemon(True)# <- 3\n progress_update = self.widgets.get_widget('progress_update')\n progress_update.show()\n thread.start()# <- 4\n\nThis way, application updates GUI when is \"idle\" (by GTK means) causing no freezes.\n\n1: create thread \n2: create idle timer\n3: daemonize thread so the app can be closed without waiting for thread completion\n4: start thread\n\n",
"Generally it's better to avoid threads when you can. It's very difficult to write a threaded application correctly, and even more difficult to know you got it right. Since you're writing a GUI application, it's easier for you to visualize how to do so, since you already have to write your application within an asynchronous framework. \nThe important thing to realize is that a GUI application is doing a whole lot of nothing. It spends most of its time waiting for the OS to tell it that something has happened. You can do a lot of stuff in this idle time as long as you know how to write long-running code so it doesn't block.\nYou can solve your original problem by using a timeout; telling your GUI framework to call back some function after a delay, and then resetting that delay or starting another delayed call.\nAnother common question is how to communicate over the network in a GUI application. Network apps are like GUI apps in that they do a whole lot of waiting. Using a network IO framework (like Twisted) makes it easy to have both parts of your application wait cooperatively instead of competitively, and again alleviates the need for extra threads.\nLong-running calculations can be written iteratively instead of synchronously, and you can do your processing while the GUI is idle. You can use a generator to do this quite easily in python.\ndef long_calculation(param, callback):\n result = None\n while True:\n result = calculate_next_part(param, result)\n if calculation_is_done(result):\n break\n else:\n yield\n callback(result)\n\nCalling long_calculation will give you a generator object, and calling .next() on the generator object will run the generator until it reaches either yield or return. You would just tell the GUI framework to call long_calculation(some_param, some_callback).next when it has time, and eventually your callback will be called with the result.\nI don't know GTK very well, so I can't tell you which gobject functions you should be calling. With this explanation, though, you should be able to find the necessary functions in the documentation, or at worst, ask on a relevant IRC channel.\nUnfortunately there is no good general-case answer. If you clarify with exactly what you're trying to do, it would be easier to explain why you don't need threads in that situation.\n",
"You can't restart a stopped thread object; don't try. Instead, create a new instance of the object if you want to restart it after it's truly stopped and joined.\n",
"I've played with different tools to help clean up the work with threads, idle processing, etc.\nmake_idle is a function decorator that allows you to run a task in the background cooperatively. This is a good middle ground between something short enough to run once in the UI thread and not affect the responsiveness of the app and doing a full out thread in special synchronization. Inside the decorated function you use \"yield\" to hand the processing back over to the GUI so it can remain responsive and the next time the UI is idle it will pick up in your function where you left off. So to get this started you just call idle_add to the decorated function. \ndef make_idler(func):\n \"\"\"\n Decorator that makes a generator-function into a function that will\ncontinue execution on next call\n \"\"\"\n a = []\n\n @functools.wraps(func)\n def decorated_func(*args, **kwds):\n if not a:\n a.append(func(*args, **kwds))\n try:\n a[0].next()\n return True\n except StopIteration:\n del a[:]\n return False\n\n return decorated_func\n\nIf you need to do a bit more processing, you can use a context manager to lock the UI thread whenever needed to help make the code a bit safer\n@contextlib.contextmanager\ndef gtk_critical_section():\n gtk.gdk.threads_enter()\n try:\n yield\n finally:\n gtk.gdk.threads_leave()\n\nwith that you can just\nwith gtk_critical_section():\n ... processing ...\n\nI have not finished with it yet, but in combining doing things purely in idle and purely in a thread, I have a decorator (not tested yet so not posted) that you can tell it whether the next section after the yield is to be run in the UI's idle time or in a thread. This would allow one to do some setup in the UI thread, switch to a new thread for doing background stuff, and then switch over to the UI's idle time to do cleanup, minimizing the need for locks.\n",
"I haven't looked in detail on your code. But I see two solutions to your problem:\nDon't use threads at all. Instead use a timeout, like this:\nimport gobject\n\ni = 0\ndef do_print():\n global i\n print i\n i += 1\n if i == 10:\n main_loop.quit()\n return False\n return True\n\nmain_loop = gobject.MainLoop()\ngobject.timeout_add(250, do_print)\nmain_loop.run()\n\nWhen using threads, you must make sure that your GUI code is only called from one thread at the same time by guarding it like this:\nimport threading\nimport time\n\nimport gobject\nimport gtk\n\ngtk.gdk.threads_init()\n\ndef run_thread():\n for i in xrange(10):\n time.sleep(0.25)\n gtk.gdk.threads_enter()\n # update the view here\n gtk.gdk.threads_leave()\n gtk.gdk.threads_enter()\n main_loop.quit()\n gtk.gdk.threads_leave()\n\nt = threading.Thread(target=run_thread)\nt.start()\nmain_loop = gobject.MainLoop()\nmain_loop.run()\n\n"
] | [
9,
3,
1,
0,
0
] | [] | [] | [
"multithreading",
"pygtk",
"python"
] | stackoverflow_0000482263_multithreading_pygtk_python.txt |
Q:
Query distinct list of choices for Django form with App Engine Datastore
I've been trying to figure this out for hours across a couple of days, and can not get it to work. I've been everywhere. I'll continue trying to figure it out, but was hoping for a quicker solution. I'm using App Engine datastore + Django.
Using a query in a view and custom forms, I was able to get a list to the form but then I was not able to post. I have been trying to figure out how to dynamically add the choices as part of the Django form... I've tried various ways with no success. Help!
Below are the two models. I'd like to get a distinct list of address_id to show in the location field in InfoForm. These fields could (and maybe should) be named the same, but I thought it'd be easier if they were named differently.
class Info(db.Model):
user = db.UserProperty()
location = db.StringProperty()
info = db.StringProperty()
created = db.DateTimeProperty(auto_now_add=True)
modified = db.DateTimeProperty(auto_now=True)
class Locations(db.Model):
user = db.UserProperty()
address_id = db.StringProperty()
address = db.StringProperty()
class InfoForm(djangoforms.ModelForm):
info = forms.ChoiceField(choices=INFO_CHOICES)
location = forms.ChoiceField()
class Meta:
model = Info
exclude = ['user','created','modified']
A:
I'm not familiar with App Engine Datastore, but I'm guessing you probably want to do something along these lines:
class InfoForm(djangoforms.ModelForm):
def __init__(self, *args, **kwargs):
super(InfoForm, self).__init__(*args, **kwargs)
choices = [(r.id, r.info) for r in Info.objects.filter(...)]
        self.fields['info'] = forms.ChoiceField(choices=choices)
By assigning info dynamically, you would then exclude this line from your InfoForm class:
info = forms.ChoiceField(choices=INFO_CHOICES)
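For the App Engine side specifically, here is an untested sketch of the same idea with the google.appengine.ext.db query API; the old datastore has no DISTINCT, so duplicate address_id values are removed in Python:
class InfoForm(djangoforms.ModelForm):
    def __init__(self, *args, **kwargs):
        super(InfoForm, self).__init__(*args, **kwargs)
        # fetch the Locations entities and de-duplicate address_id in memory
        ids = set(loc.address_id for loc in Locations.all())
        self.fields['location'] = forms.ChoiceField(
            choices=[(i, i) for i in sorted(ids)])

    class Meta:
        model = Info
        exclude = ['user', 'created', 'modified']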
| Query distinct list of choices for Django form with App Engine Datastore | I've been trying to figure this out for hours across a couple of days, and can not get it to work. I've been everywhere. I'll continue trying to figure it out, but was hoping for a quicker solution. I'm using App Engine datastore + Django.
Using a query in a view and custom forms, I was able to get a list to the form but then I was not able to post. I have been trying to figure out how to dynamically add the choices as part of the Django form... I've tried various ways with no success. Help!
Below are the two models. I'd like to get a distinct list of address_id to show in the location field in InfoForm. These fields could (and maybe should) be named the same, but I thought it'd be easier if they were named differently.
class Info(db.Model):
user = db.UserProperty()
location = db.StringProperty()
info = db.StringProperty()
created = db.DateTimeProperty(auto_now_add=True)
modified = db.DateTimeProperty(auto_now=True)
class Locations(db.Model):
user = db.UserProperty()
address_id = db.StringProperty()
address = db.StringProperty()
class InfoForm(djangoforms.ModelForm):
info = forms.ChoiceField(choices=INFO_CHOICES)
location = forms.ChoiceField()
class Meta:
model = Info
exclude = ['user','created','modified']
| [
"I'm not familiar with App Engine Datastore, but I'm guessing you probably want to do something along these lines:\nclass InfoForm(djangoforms.ModelForm):\n def __init__(self, *args, **kwargs):\n super(InfoForm, self).__init__(*args, **kwargs)\n choices = [(r.id, r.info) for r in Info.objects.filter(...)]\n self.fields['info'] = ChoiceField(choices=choices)\n\nBy assigning info dynamically, you would then exclude this line from your InfoForm class:\n info = forms.ChoiceField(choices=INFO_CHOICES)\n\n"
] | [
1
] | [] | [] | [
"django",
"google_app_engine",
"python"
] | stackoverflow_0000503295_django_google_app_engine_python.txt |
Q:
Simple simulations for Physics in Python?
I would like to know similar, concrete simulations, as the simulation about watering a field here.
What is your favorite library/internet page for such simulations in Python?
I know little Simpy, Numpy and Pygame. I would like to get examples about them.
A:
If you are looking for some game physics (collisions, deformations, gravity, etc.) which looks real and is reasonably fast consider re-using some physics engine libraries.
As a first reference, you may want to look into pymunk, a Python wrapper of Chipmunk 2D physics library. You can find a list of various Open Source physics engines (2D and 3D) in Wikipedia.
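As a taste of what a wrapped engine buys you, here is a rough pymunk sketch that drops a ball under gravity (untested, and the exact API has shifted between pymunk versions):
import pymunk

space = pymunk.Space()
space.gravity = (0.0, -900.0)    # pixels per second squared, downwards

body = pymunk.Body(1, pymunk.moment_for_circle(1, 0, 10))  # mass, moment
body.position = (50, 200)
space.add(body, pymunk.Circle(body, 10))

for _ in range(60):              # one simulated second at 60 Hz
    space.step(1 / 60.0)
print body.position              # the ball has fallen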
If you are looking for physically correct simulations, no matter what language you want to use, it will be much slower (almost never real-time), and you need to use some numerical analysis software (and probably to write something yourself). Exact answer depends on the problem you want to solve. It is a fairly complicated field (of math).
For example, if you need to do simulations in continuum mechanics or electromagnetism, you probably need Finite Difference, Finite Volume or Finite Element methods. For Python, there are some ready-to-use libraries, for example: FiPy (FVM), GetFem++ (FEM), FEniCS/DOLFIN (FEM), and some others.
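If you just want the flavour of the finite-difference approach with the NumPy you already know, a tiny 1-D heat-equation sketch looks roughly like this (illustrative only; the step sizes are chosen to keep the explicit scheme stable):
import numpy as np

nx, nt = 50, 500
alpha, dx, dt = 0.01, 1.0 / nx, 0.001   # diffusivity, grid step, time step
u = np.zeros(nx)
u[nx // 2] = 1.0                        # an initial spike of heat in the middle

for _ in range(nt):
    # u_t = alpha * u_xx, discretised with central differences
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print u.round(3)                        # the spike has diffused outwards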
A:
Here is some simple astronomy-related Python. And here is some hardcore code from the same guy.
And Eagleclaw solves and plots various hyperbolic equations using some Python. However, most of the code is written in Fortran to do the computations and Python to plot the results. If you are studying physics, you may have to get used to this kind of Fortran-wrapped code. It is a reality. But this isn't really what you're looking for, I guess. The good thing is that it is documented in a literate programming style, so it should be understandable.
A:
Maybe PyODE?
A:
I've heard of PyBox2D, which is a port of the really nice Box2D. To quote the site:
Box2D is a feature rich 2d rigid body physics engine, written in C++ by Erin Catto. It has been used in many games, including Crayon Physics Deluxe, winner of the 2008 Independent Game Festival Grand Prize.
| Simple simulations for Physics in Python? | I would like to know similar, concrete simulations, as the simulation about watering a field here.
What is your favorite library/internet page for such simulations in Python?
I know little Simpy, Numpy and Pygame. I would like to get examples about them.
| [
"If you are looking for some game physics (collisions, deformations, gravity, etc.) which looks real and is reasonably fast consider re-using some physics engine libraries.\nAs a first reference, you may want to look into pymunk, a Python wrapper of Chipmunk 2D physics library. You can find a list of various Open Source physics engines (2D and 3D) in Wikipedia.\nIf you are looking for physically correct simulations, no matter what language you want to use, it will be much slower (almost never real-time), and you need to use some numerical analysis software (and probably to write something yourself). Exact answer depends on the problem you want to solve. It is a fairly complicated field (of math).\nFor example, if you need to do simulations in continuum mechanics or electromagnetism, you probably need Finite Difference, Finite Volume or Finite Element methods. For Python, there are some ready-to-use libraries, for example: FiPy (FVM), GetFem++ (FEM), FEniCS/DOLFIN (FEM), and some other.\n",
"Here is some simple astronomy related python. And here is a hardcore code from the same guy.\nAnd Eagleclaw solves and plots various hyperbolic equations using some python. However, most of the code is written in Fortran to do the computations and python to plot the results. If you are studying physics though you may have to get used to this kind of Fortran wrapped code. It is a reality. But this isn't really what your looking for I guess. The good thing it that it is documented in a literate programming style so it should be understandable.\n",
"Maybe PyODE?\n",
"I've heard of PyBox2D, which is a port of the really nice Box2D. To quote the site:\n\nBox2D is a feature rich 2d rigid body physics engine, written in C++ by Erin Catto. It has been used in many games, including Crayon Physics Deluxe, winner of the 2008 Independent Game Festival Grand Prize.\n\n"
] | [
15,
3,
2,
1
] | [] | [] | [
"modeling",
"python",
"simulation"
] | stackoverflow_0000501940_modeling_python_simulation.txt |
Q:
Organising a GUI application
This is going to be a generic question.
I am struggling in designing a GUI application, esp. with dealing with interactions between different parts.
I don't know how I should deal with shared state. On one hand, shared state is bad, and things should be as explicit as possible. On the other hand, not having shared state introduces unwanted coupling between components.
An example:
I want my application to be extendable in an Emacs/Vim sort of way, via scripts. Clearly, some sort of shared state needs to be modified, so that the GUI will use it. My initial plan was having a global "session" that is accessible from everywhere, but I'm not so sure about it.
One tricky use case is key bindings. I want the user to be able to specify custom keybindings from a script. Each keybinding maps to an arbitrary command, that receives the session as the only argument.
Now, the editor component captures keypresses. It has to have access to the keymappings, which are per-session, so it needs access to the session. Is coupling the editor to the session a good idea? Other components will also need to access the keybindings, so the session now becomes shared and can be a singleton...
Is there any good reading about designing GUI applications that goes beyond MVC?
This is Python and wxPython, FWIW.
[EDIT]: Added concrete usecase.
A:
If you've looked at MVC you're probably moving in the right direction. MVC, MVP, Passive View, Supervising Controller. Those are all different ways, each with their own pros and cons, of accomplishing what you're after. I find that Passive View is the "ideal", but it causes you to introduce far too many widgets into your GUI interfaces (i.e. IInterface). In general I find that Supervising Controller is a good compromise.
A:
Sorry to jump on this question so late, but nothing, I mean nothing can beat looking at the source of an application that does something similar. (I might recommend something like http://pida.co.uk, but there are plenty of extensible wx+Python IDEs out there as that sounds like what you are making).
If I might make a few notes:
message passing is not inherently bad, and it doesn't necessarily cause coupling between components as long as components adhere to interfaces.
shared state is not inherently bad, but I would go with your gut instinct and use as little as possible. Since the universe itself is stateful, you can't really avoid this entirely. I tend to use a shared "Boss" object which is usually a non-singleton single instance per application, and is responsible for brokering other components.
For keybindings, I tend to use some kind of "Action" system. Actions are high level things which a user can do, for example: "Save the current buffer", and they can be conveniently represented in the UI by toolbar buttons or menu items. So your scripts/plugins create actions, and register them with something central (eg some kind of registry object - see 1 and 2). And their involvement ends there. On top of this you have some kind of key-binding service that maps keys to actions (which it lists from the registry, per session or otherwise). This way you have achieved separation of the plugin and keybinding code, separation of the editor and the action code. As an added bonus your task of "Configuring shortcuts" or "User defined key maps" is made particularly easier.
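A rough sketch of the shape I mean (every name here is hypothetical):
class ActionRegistry(object):
    # central place where plugins register high-level actions
    def __init__(self):
        self._actions = {}

    def register(self, name, callback, description=""):
        self._actions[name] = (callback, description)

    def run(self, name, session):
        callback, _ = self._actions[name]
        callback(session)

def save_current_buffer(session):   # hypothetical plugin-provided action
    session.current_buffer.save()

registry = ActionRegistry()
registry.register('buffer.save', save_current_buffer)

# the key-binding service only ever maps keys to action *names*
keymap = {'<Ctrl>s': 'buffer.save'}

def on_key_press(key, session):
    registry.run(keymap[key], session)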
I could go on, but most of what I have to say is in the PIDA codebase, so back to my original point...
A:
In MVC, the Model stuff is the shared state of the information.
The Control stuff is the shared state of the GUI control settings and responses to mouse-clicks and what-not.
Your scripting angle can
1) Update the Model objects. This is good. The Control can be "Observers" of the model objects and the View be updated to reflect the observed changes.
2) Update the Control objects. This is not so good, but... The Control objects can then make appropriate changes to the Model and/or View.
I'm not sure what the problem is with MVC. Could you provide a more detailed design example with specific issues or concerns?
| Organising a GUI application | This is going to be a generic question.
I am struggling in designing a GUI application, esp. with dealing with interactions between different parts.
I don't know how I should deal with shared state. On one hand, shared state is bad, and things should be as explicit as possible. On the other hand, not having shared state introduces unwanted coupling between components.
An example:
I want my application to be extendable in an Emacs/Vim sort of way, via scripts. Clearly, some sort of shared state needs to be modified, so that the GUI will use it. My initial plan was having a global "session" that is accessible from everywhere, but I'm not so sure about it.
One tricky use case is key bindings. I want the user to be able to specify custom keybindings from a script. Each keybinding maps to an arbitrary command, that receives the session as the only argument.
Now, the editor component captures keypresses. It has to have access to the keymappings, which are per-session, so it needs access to the session. Is coupling the editor to the session a good idea? Other components will also need to access the keybindings, so the session now becomes shared and can be a singleton...
Is there any good reading about designing GUI applications that goes beyond MVC?
This is Python and wxPython, FWIW.
[EDIT]: Added concrete usecase.
| [
"If you've looked at MVC you're probably moving in the right direction. MVC, MVP, Passive View, Supervising Controller. Those are all different ways, each with their own pros and cons, of accomplishing what you're after. I find that Passive View is the \"ideal\", but it causes you to introduce far too many widgets into your GUI interfaces (i.e. IInterface). In general I find that Supervising Controller is a good compromise.\n",
"Sorry to jump on this question so late, but nothing, I mean nothing can beat looking at the source of an application that does something similar. (I might recommend something like http://pida.co.uk, but there are plenty of extensible wx+Python IDEs out there as that sounds like what you are making).\nIf I might make a few notes:\n\nmessage passing is not inherently bad, and it doesn't necessarily cause coupling between components as long as components adhere to interfaces.\nshared state is not inherently bad, but I would go with your gut instinct and use as little as possible. Since the universe itself is stateful, you can't really avoid this entirely. I tend to use a shared \"Boss\" object which is usually a non-singleton single instance per application, and is responsible for brokering other components.\nFor keybindings, I tend to use some kind of \"Action\" system. Actions are high level things which a user can do, for example: \"Save the current buffer\", and they can be conveniently represented in the UI by toolbar buttons or menu items. So your scripts/plugins create actions, and register them with something central (eg some kind of registry object - see 1 and 2). And their involvement ends there. On top of this you have some kind of key-binding service that maps keys to actions (which it lists from the registry, per session or otherwise). This way you have achieved separation of the plugin and keybinding code, separation of the editor and the action code. As an added bonus your task of \"Configuring shortcuts\" or \"User defined key maps\" is made particularly easier.\n\nI could go on, but most of what I have to say is in the PIDA codebase, so back to my original point...\n",
"In MVC, the Model stuff is the shared state of the information.\nThe Control stuff is the shared state of the GUI control settings and responses to mouse-clicks and what-not.\nYour scripting angle can \n1) Update the Model objects. This is good. The Control can be \"Observers\" of the model objects and the View be updated to reflect the observed changes.\n2) Update the Control objects. This is not so good, but... The Control objects can then make appropriate changes to the Model and/or View.\nI'm not sure what the problem is with MVC. Could you provide a more detailed design example with specific issues or concerns?\n"
] | [
2,
2,
1
] | [] | [] | [
"architecture",
"model_view_controller",
"python",
"user_interface",
"wxpython"
] | stackoverflow_0000471279_architecture_model_view_controller_python_user_interface_wxpython.txt |
Q:
Extracting data from MS Word
I am looking for a way to extract / scrape data from Word files into a database. Our corporate procedures have Minutes of Meetings with clients documented in MS Word files, mostly due to history and inertia.
I want to be able to pull the action items from these meeting minutes into a database so that we can access them from a web-interface, turn them into tasks and update them as they are completed.
Which is the best way to do this:
VBA macro from inside Word to create CSV and then upload to the DB?
VBA macro in Word with connection to DB (how does one connect to MySQL from VBA?)
Python script via win32com then upload to DB?
The last one is attractive to me as the web-interface is being built with Django, but I've never used win32com or tried scripting Word from python.
EDIT: I've started extracting the text with VBA because it makes it a little easier to deal with the Word Object Model. I am having a problem though - all the text is in Tables, and when I pull the strings out of the CELLS I want, I get a strange little box character at the end of each string. My code looks like:
sFile = "D:\temp\output.txt"
fnum = FreeFile
Open sFile For Output As #fnum

num_rows = Application.ActiveDocument.Tables(2).Rows.Count
For n = 1 To num_rows
    Descr = Application.ActiveDocument.Tables(2).Cell(n, 2).Range.Text
    Assign = Application.ActiveDocument.Tables(2).Cell(n, 3).Range.Text
    Target = Application.ActiveDocument.Tables(2).Cell(n, 4).Range.Text
    If Target = "" Then
        ExportText = ""
    Else
        ExportText = Descr & Chr(44) & Assign & Chr(44) & _
            Target & Chr(13) & Chr(10)
        Print #fnum, ExportText
    End If
Next n

Close #fnum
What's up with the little control character box? Is some kind of character code coming across from Word?
A:
Word has a little marker thingy that it puts at the end of every cell of text in a table.
It is used just like an end-of-paragraph marker in paragraphs: to store the formatting for the entire paragraph.
Just use the Left() function to strip it out, i.e.
Left(Target, Len(Target) - 1)
By the way, instead of
num_rows = Application.ActiveDocument.Tables(2).Rows.Count
For n = 1 To num_rows
Descr = Application.ActiveDocument.Tables(2).Cell(n, 2).Range.Text
Try this:
For Each row in Application.ActiveDocument.Tables(2).Rows
Descr = row.Cells(2).Range.Text
A:
Well, I've never scripted Word, but it's pretty easy to do simple stuff with win32com. Something like:
from win32com.client import Dispatch
word = Dispatch('Word.Application')
doc = word.Documents.Open('d:\\stuff\\myfile.doc')  # documents live on the Documents collection
doc.SaveAs(FileName='d:\\stuff\\text\\myfile.txt', FileFormat=2)  # 2 should be wdFormatText (plain text)
This is untested, but I think something like that will just open the file and save it as plain text (2 should be wdFormatText, but double-check the WdSaveFormat enumeration) – you could then read the text into Python and manipulate it from there. There is probably a way to grab the contents of the file directly, too, but I don't know it off hand; documentation can be hard to find, but if you've got VBA docs or experience, you should be able to carry them across.
Have a look at this post from a while ago: http://mail.python.org/pipermail/python-list/2002-October/168785.html Scroll down to COMTools.py; there's some good examples there.
You can also run makepy.py (part of the pythonwin distribution) to generate python "signatures" for the COM functions available, and then look through it as a kind of documentation.
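Since the action items live in table cells, you could also skip the text-file round trip and read the cells directly. An untested sketch (the path and table index are just examples):
from win32com.client import Dispatch

word = Dispatch('Word.Application')
doc = word.Documents.Open(r'D:\temp\minutes.doc')
table = doc.Tables(2)
rows = []
for n in range(1, table.Rows.Count + 1):
    # cell text ends with Chr(13) + Chr(7); slice both markers off
    descr = table.Cell(n, 2).Range.Text[:-2]
    assign = table.Cell(n, 3).Range.Text[:-2]
    rows.append((descr, assign))
doc.Close(False)
word.Quit()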
A:
You could use OpenOffice. It can open word files, and also can run python macros.
A:
I'd say look at the related questions on the right -->
The top one seems to have some good ideas for going the python route.
A:
How about saving the file as XML, then using Python or something else to pull the data out of Word and into the database.
A:
It is possible to programmatically save a Word document as HTML and to import the table(s) contained into Access. This requires very little effort.
| Extracting data from MS Word | I am looking for a way to extract / scrape data from Word files into a database. Our corporate procedures have Minutes of Meetings with clients documented in MS Word files, mostly due to history and inertia.
I want to be able to pull the action items from these meeting minutes into a database so that we can access them from a web-interface, turn them into tasks and update them as they are completed.
Which is the best way to do this:
VBA macro from inside Word to create CSV and then upload to the DB?
VBA macro in Word with connection to DB (how does one connect to MySQL from VBA?)
Python script via win32com then upload to DB?
The last one is attractive to me as the web-interface is being built with Django, but I've never used win32com or tried scripting Word from python.
EDIT: I've started extracting the text with VBA because it makes it a little easier to deal with the Word Object Model. I am having a problem though - all the text is in Tables, and when I pull the strings out of the CELLS I want, I get a strange little box character at the end of each string. My code looks like:
sFile = "D:\temp\output.txt"
fnum = FreeFile
Open sFile For Output As #fnum

num_rows = Application.ActiveDocument.Tables(2).Rows.Count
For n = 1 To num_rows
    Descr = Application.ActiveDocument.Tables(2).Cell(n, 2).Range.Text
    Assign = Application.ActiveDocument.Tables(2).Cell(n, 3).Range.Text
    Target = Application.ActiveDocument.Tables(2).Cell(n, 4).Range.Text
    If Target = "" Then
        ExportText = ""
    Else
        ExportText = Descr & Chr(44) & Assign & Chr(44) & _
            Target & Chr(13) & Chr(10)
        Print #fnum, ExportText
    End If
Next n

Close #fnum
What's up with the little control character box? Is some kind of character code coming across from Word?
| [
"Word has a little marker thingy that it puts at the end of every cell of text in a table. \nIt is used just like an end-of-paragraph marker in paragraphs: to store the formatting for the entire paragraph.\nJust use the Left() function to strip it out, i.e. \n Left(Target, Len(Target)-1))\n\nBy the way, instead of \n num_rows = Application.ActiveDocument.Tables(2).Rows.Count\n For n = 1 To num_rows\n Descr = Application.ActiveDocument.Tables(2).Cell(n, 2).Range.Text\n\nTry this:\n For Each row in Application.ActiveDocument.Tables(2).Rows\n Descr = row.Cells(2).Range.Text\n\n",
"Well, I've never scripted Word, but it's pretty easy to do simple stuff with win32com. Something like:\nfrom win32com.client import Dispatch\nword = Dispatch('Word.Application')\ndoc = word.Open('d:\\\\stuff\\\\myfile.doc')\ndoc.SaveAs(FileName='d:\\\\stuff\\\\text\\\\myfile.txt', FileFormat=?) # not sure what to use for ?\n\nThis is untested, but I think something like that will just open the file and save it as plain text (provided you can find the right fileformat) – you could then read the text into python and manipulate it from there. There is probably a way to grab the contents of the file directly, too, but I don't know it off hand; documentation can be hard to find, but if you've got VBA docs or experience, you should be able to carry them across.\nHave a look at this post from a while ago: http://mail.python.org/pipermail/python-list/2002-October/168785.html Scroll down to COMTools.py; there's some good examples there.\nYou can also run makepy.py (part of the pythonwin distribution) to generate python \"signatures\" for the COM functions available, and then look through it as a kind of documentation.\n",
"You could use OpenOffice. It can open word files, and also can run python macros.\n",
"I'd say look at the related questions on the right -->\nThe top one seems to have some good ideas for going the python route.\n",
"how about saving the file as xml. then using python or something else and pull the data out of word and into the database. \n",
"It is possible to programmatically save a Word document as HTML and to import the table(s) contained into Access. This requires very little effort.\n"
] | [
4,
1,
1,
0,
0,
0
] | [] | [] | [
"ms_word",
"python",
"pywin32",
"vba"
] | stackoverflow_0000505925_ms_word_python_pywin32_vba.txt |
Q:
Python includes, module scope issue
I'm working on my first significant Python project and I'm having trouble with scope issues and executing code in included files. Previously my experience is with PHP.
What I would like to do is have one single file that sets up a number of configuration variables, which would then be used throughout the code. Also, I want to make certain functions and classes available globally. For example, the main file would include a single other file, and that file would load a bunch of commonly used functions (each in its own file) and a configuration file. Within those loaded files, I also want to be able to access the functions and configuration variables. What I don't want to do, is to have to put the entire routine at the beginning of each (included) file to include all of the rest. Also, these included files are in various sub-directories, which is making it much harder to import them (especially if I have to re-import in every single file).
Anyway I'm looking for general advice on the best way to structure the code to achieve what I want.
Thanks!
A:
In python, it is a common practice to have a bunch of modules that implement various functions and then have one single module that is the point-of-access to all the functions. This is basically the facade pattern.
An example: say you're writing a package foo, which includes the bar, baz, and moo modules.
~/project/foo
~/project/foo/__init__.py
~/project/foo/bar.py
~/project/foo/baz.py
~/project/foo/moo.py
~/project/foo/config.py
What you would usually do is write __init__.py like this:
from foo.bar import func1, func2
from foo.baz import func3, constant1
from foo.moo import func1 as moofunc1
from foo.config import *
Now, when you want to use the functions you just do
import foo
foo.func1()
print foo.constant1
# assuming config defines a config1 variable
print foo.config1
If you wanted, you could arrange your code so that you only need to write
import foo
At the top of every module, and then access everything through foo (which you should probably name "globals" or something to that effect). If you don't like namespaces, you could even do
from foo import *
and have everything as global, but this is really not recommended. Remember: namespaces are one honking great idea!
A:
This is a two-step process:
In your module globals.py import the items from wherever.
In all of your other modules, do "from globals import *"
This brings all of those names into the current module's namespace.
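A minimal sketch of that two-step pattern (the module and the names in it are just examples):
# globals.py -- gathers names from wherever they actually live
from utils.text import slugify, titlecase
from config import DEBUG, DATA_DIR

# any other module:
from globals import *
print DATA_DIR   # visible here without knowing where it came from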
Now, having told you how to do this, let me suggest that you don't. First of all, you are loading up the local namespace with a bunch of "magically defined" entities. This violates precept 2 of the Zen of Python, "Explicit is better than implicit." Instead of "from foo import *", try using "import foo" and then saying "foo.some_value". If you want to use the shorter names, use "from foo import mumble, snort". Either of these methods directly exposes the actual use of the module foo.py. Using the globals.py method is just a little too magic. The primary exception to this is in an __init__.py where you are hiding some internal aspects of a package.
Globals are also semi-evil in that it can be very difficult to figure out who is modifying (or corrupting) them. If you have well-defined routines for getting/setting globals, then debugging them can be much simpler.
I know that PHP has this "everything is one, big, happy namespace" concept, but it's really just an artifact of poor language design.
A:
As far as I know, program-wide global variables/functions/classes/etc. do not exist in Python; everything is "confined" in some module (namespace). So if you want some functions or classes to be used in many parts of your code, one solution is creating modules like "globFunCl" (defining/importing from elsewhere everything you want to be "global") and "config" (containing configuration variables) and importing those everywhere you need them. If you don't like the idea of using nested namespaces you can use:
from globFunCl import *
This way you'll "hide" namespaces (making names look like "globals").
I'm not sure what you mean by not wanting to "put the entire routine at the beginning of each (included) file to include all of the rest"; I'm afraid you can't really escape from this. Check out Python packages, though; they should make it easier for you.
A:
This depends a bit on how you want to package things up. You can either think in terms of files or modules. The latter is "more pythonic", and enables you to decide exactly which items (and they can be anything with a name: classes, functions, variables, etc.) you want to make visible.
The basic rule is that for any file or module you import, anything directly in its namespace can be accessed. So if myfile.py contains definitions def myfun(...): and class myclass(...) as well as myvar = ... then you can access them from another file by
import myfile
y = myfile.myfun(...)
x = myfile.myvar
or
from myfile import myfun, myvar, myclass
Crucially, anything at the top level of myfile is accessible, including imports. So if myfile contains from foo import bar, then myfile.bar is also available.
| Python includes, module scope issue | I'm working on my first significant Python project and I'm having trouble with scope issues and executing code in included files. Previously my experience is with PHP.
What I would like to do is have one single file that sets up a number of configuration variables, which would then be used throughout the code. Also, I want to make certain functions and classes available globally. For example, the main file would include a single other file, and that file would load a bunch of commonly used functions (each in its own file) and a configuration file. Within those loaded files, I also want to be able to access the functions and configuration variables. What I don't want to do, is to have to put the entire routine at the beginning of each (included) file to include all of the rest. Also, these included files are in various sub-directories, which is making it much harder to import them (especially if I have to re-import in every single file).
Anyway I'm looking for general advice on the best way to structure the code to achieve what I want.
Thanks!
| [
"In python, it is a common practice to have a bunch of modules that implement various functions and then have one single module that is the point-of-access to all the functions. This is basically the facade pattern.\nAn example: say you're writing a package foo, which includes the bar, baz, and moo modules.\n~/project/foo\n~/project/foo/__init__.py\n~/project/foo/bar.py\n~/project/foo/baz.py\n~/project/foo/moo.py\n~/project/foo/config.py\n\nWhat you would usually do is write __init__.py like this:\nfrom foo.bar import func1, func2\nfrom foo.baz import func3, constant1\nfrom foo.moo import func1 as moofunc1\nfrom foo.config import *\n\nNow, when you want to use the functions you just do\nimport foo\nfoo.func1()\nprint foo.constant1\n# assuming config defines a config1 variable\nprint foo.config1\n\n\nIf you wanted, you could arrange your code so that you only need to write\nimport foo\n\nAt the top of every module, and then access everything through foo (which you should probably name \"globals\" or something to that effect). If you don't like namespaces, you could even do\nfrom foo import *\n\nand have everything as global, but this is really not recommended. Remember: namespaces are one honking great idea!\n",
"This is a two-step process:\n\nIn your module globals.py import the items from wherever.\nIn all of your other modules, do \"from globals import *\"\n\nThis brings all of those names into the current module's namespace.\nNow, having told you how to do this, let me suggest that you don't. First of all, you are loading up the local namespace with a bunch of \"magically defined\" entities. This violates precept 2 of the Zen of Python, \"Explicit is better than implicit.\" Instead of \"from foo import *\", try using \"import foo\" and then saying \"foo.some_value\". If you want to use the shorter names, use \"from foo import mumble, snort\". Either of these methods directly exposes the actual use of the module foo.py. Using the globals.py method is just a little too magic. The primary exception to this is in an __init__.py where you are hiding some internal aspects of a package.\nGlobals are also semi-evil in that it can be very difficult to figure out who is modifying (or corrupting) them. If you have well-defined routines for getting/setting globals, then debugging them can be much simpler.\nI know that PHP has this \"everything is one, big, happy namespace\" concept, but it's really just an artifact of poor language design.\n",
"As far as I know program-wide global variables/functions/classes/etc. does not exist in Python, everything is \"confined\" in some module (namespace). So if you want some functions or classes to be used in many parts of your code one solution is creating some modules like: \"globFunCl\" (defining/importing from elsewhere everything you want to be \"global\") and \"config\" (containing configuration variables) and importing those everywhere you need them. If you don't like idea of using nested namespaces you can use:\nfrom globFunCl import *\n\nThis way you'll \"hide\" namespaces (making names look like \"globals\").\nI'm not sure what you mean by not wanting to \"put the entire routine at the beginning of each (included) file to include all of the rest\", I'm afraid you can't really escape from this. Check out the Python Packages though, they should make it easier for you.\n",
"This depends a bit on how you want to package things up. You can either think in terms of files or modules. The latter is \"more pythonic\", and enables you to decide exactly which items (and they can be anything with a name: classes, functions, variables, etc.) you want to make visible.\nThe basic rule is that for any file or module you import, anything directly in its namespace can be accessed. So if myfile.py contains definitions def myfun(...): and class myclass(...) as well as myvar = ... then you can access them from another file by\nimport myfile\ny = myfile.myfun(...)\nx = myfile.myvar\n\nor\nfrom myfile import myfun, myvar, myclass\n\nCrucially, anything at the top level of myfile is accessible, including imports. So if myfile contains from foo import bar, then myfile.bar is also available. \n"
] | [
6,
1,
0,
0
] | [] | [] | [
"import",
"include",
"module",
"python"
] | stackoverflow_0000507425_import_include_module_python.txt |
Q:
get site name from a URL in python
I am new to Python and it seems to have a lot of nice functions that I don't know about. What function can I use to get the root site name? For example, how would I get faqs.org if I gave the function the URL "http://www.faqs.org/docs/diveintopython/kgp_commandline.html"?
A:
>>> from urllib.parse import urlparse
>>> urlparse('http://www.cwi.nl:80/%7Eguido/Python.html').hostname
'www.cwi.nl'
A:
The much overlooked urlparse module:
from urlparse import urlparse
scheme, netloc, path, params, query, fragment = urlparse("http://www.faqs.org/docs/diveintopython/kgp_commandline.html")
print netloc
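Note that the question asks for faqs.org rather than www.faqs.org; a naive sketch of trimming the hostname down (real-world domains need a proper public-suffix list) is:
from urlparse import urlparse

netloc = urlparse("http://www.faqs.org/docs/diveintopython/kgp_commandline.html").netloc
host = netloc.split(':')[0]   # drop any :port part
if host.startswith('www.'):
    host = host[4:]           # crude www-stripping
print host                    # faqs.org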
A:
What version of Python are you learning with? Note that SilentGhost's answer is for Python 3.0, while Alabaster Codify's will work with the 2.x series.
| get site name from a URL in python | I am new to Python and it seems to have a lot of nice functions that I don't know about. What function can I use to get the root site name? For example, how would I get faqs.org if I gave the function the URL "http://www.faqs.org/docs/diveintopython/kgp_commandline.html"?
| [
" >>> from urllib.parse import urlparse\n >>> urlparse('http://www.cwi.nl:80/%7Eguido/Python.html').hostname\n 'www.cwi.nl'\n\n",
"The much overlooked urlparse module:\nfrom urlparse import urlparse\nscheme, netloc, path, params, query, fragment = urlparse(\"http://www.faqs.org/docs/diveintopython/kgp_commandline.html\")\nprint netloc\n\n",
"What version of Python are you learning with? Note that SilentGhost's answer is for Python 3.0, while Alabaster Codify's will work with the 2.x series.\n"
] | [
5,
2,
2
] | [] | [] | [
"python"
] | stackoverflow_0000508183_python.txt |
Q:
Why does Django only serve files containing a space?
I'm writing a basic Django application. For testing / development purposes I'm trying to serve the static content of the website using Django's development server as per http://docs.djangoproject.com/en/dev/howto/static-files/#howto-static-files.
My urls.py contains:
(r'^admin/(.*)', admin.site.root),
(r'^(?P<page_name>\S*)$', 'Blah.content.views.index'),
(r'^static/(?P<path>.*)$', 'django.views.static.serve',
{'document_root': 'C:/Users/My User/workspace/Blah/media',
'show_indexes': True})
However, when I try to access a file such as http://localhost:8000/static/images/Logo.jpg Django gives me a 404 error claiming that "No Page matches the given query."
When I try to access a file such as http://localhost:8000/static/images/Blah%20Logo.jpg it serves the file!
What's going on?
A:
You have the wrong pattern order in urls.py.
When you try to retrieve a path without a space, it matches:
(r'^(?P<page_name>\S*)$', 'Blah.content.views.index'),
not static.serve, and of course you have no such page. But when you try to access a path with a space, it matches the static.serve pattern instead, because the index pattern uses \S* (which does not allow spaces) while the static pattern is more generic and allows them.
To solve this problem just swap those patterns.
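Concretely, with the patterns from the question, the fixed ordering would be (assuming the usual from django.conf.urls.defaults import *; the catch-all index pattern goes last):
urlpatterns = patterns('',
    (r'^admin/(.*)', admin.site.root),
    (r'^static/(?P<path>.*)$', 'django.views.static.serve',
        {'document_root': 'C:/Users/My User/workspace/Blah/media',
         'show_indexes': True}),
    (r'^(?P<page_name>\S*)$', 'Blah.content.views.index'),
)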
| Why does Django only serve files containing a space? | I'm writing a basic Django application. For testing / development purposes I'm trying to serve the static content of the website using Django's development server as per http://docs.djangoproject.com/en/dev/howto/static-files/#howto-static-files.
My urls.py contains:
(r'^admin/(.*)', admin.site.root),
(r'^(?P<page_name>\S*)$', 'Blah.content.views.index'),
(r'^static/(?P<path>.*)$', 'django.views.static.serve',
{'document_root': 'C:/Users/My User/workspace/Blah/media',
'show_indexes': True})
However, when I try to access a file such as http://localhost:8000/static/images/Logo.jpg Django gives me a 404 error claiming that "No Page matches the given query."
When I try to access a file such as http://localhost:8000/static/images/Blah%20Logo.jpg it serves the file!
What's going on?
| [
"You have wrong patterns order in urls.py.\nWhen you try to retrieve path without space it matches:\n(r'^(?P<page_name>\\S*)$', 'Blah.content.views.index'),\n\nnot static.serve and of course you have not such page, But when you try to access path with space it matches proper static.serve pattern because it is more generic and allows spaces.\nTo solve this problem just swap those patterns.\n"
] | [
10
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000508609_django_python.txt |
Q:
Problem in understanding Python list comprehensions
What does the last line mean in the following code?
import pickle, urllib
handle = urllib.urlopen("http://www.pythonchallenge.com/pc/def/banner.p")
data = pickle.load(handle)
handle.close()
for elt in data:
print "".join([e[1] * e[0] for e in elt])
My attempt to the problem:
"".join... uses join -method to empty text
e[1] * e[0] multiplies two subsequent values in the sequence, e
I am not sure what is e
I am not sure, what it means, when you have something before for -loop, like: e[1] * e[0] for e in elt
A:
Maybe best explained with an example:
print "".join([e[1] * e[0] for e in elt])
is the short form of
x = []
for e in elt:
x.append(e[1] * e[0])
print "".join(x)
List comprehensions are simply syntactic sugar for for loops, which make an expression out of a sequence of statements.
elt can be an arbitrary object, since you load it from pickles, and e likewise. The usage suggests that is it a sequence type, but it could just be anything that implements the sequence protocol.
A:
Firstly, you need to put http:// in front of the URL, ie:
handle = urllib.urlopen("http://www.pythonchallenge.com/pc/def/banner.p")
An expression [e for e in a_list] is a list comprehension which generates a list of values.
With Python strings, the * operator is used to repeat a string. Try typing in the commands one by one into an interpreter then look at data:
>>> data[0]
[(' ', 95)]
This shows us each line of data is a tuple containing two elements.
Thus the expression e[1] * e[0] is effectively the string in e[0] repeated e[1] times.
Hence the program draws a banner.
A:
[e[1] * e[0] for e in elt] is a list comprehension, which evaluates to a list itself by looping through another list, in this case elt. Each element in the new list is e[1]*e[0], where e is the corresponding element in elt.
A:
The question itself has already been fully answered but I'd like to add that a list comprehension also supports filtering. Your original line
print "".join([e[1] * e[0] for e in elt])
could, as an example, become
print "".join([e[1] * e[0] for e in elt if len(e) == 2])
to only operate on items in elt that have two elements.
A:
join() is a string method: it is called on the separator string and joins the items of a sequence into a new string:
>>> ':'.join(['ab', 'cd'])
'ab:cd'
and list comprehension is not necessary there, generator would suffice
A:
Andy's is a great answer!
If you want to see every step of the loop (with line-breaks) try this out:
for elt in data:
for e in elt:
print "e[0] == %s, e[1] == %d, which gives us: '%s'" % (e[0], e[1], ''.join(e[1] * e[0]))
| Problem in understanding Python list comprehensions | What does the last line mean in the following code?
import pickle, urllib
handle = urllib.urlopen("http://www.pythonchallenge.com/pc/def/banner.p")
data = pickle.load(handle)
handle.close()
for elt in data:
print "".join([e[1] * e[0] for e in elt])
My attempt to the problem:
"".join... uses join -method to empty text
e[1] * e[0] multiplies two subsequent values in the sequence, e
I am not sure what is e
I am not sure, what it means, when you have something before for -loop, like: e[1] * e[0] for e in elt
| [
"Maybe best explained with an example:\nprint \"\".join([e[1] * e[0] for e in elt])\n\nis the short form of\nx = []\nfor e in elt:\n x.append(e[1] * e[0])\nprint \"\".join(x)\n\nList comprehensions are simply syntactic sugar for for loops, which make an expression out of a sequence of statements.\nelt can be an arbitrary object, since you load it from pickles, and e likewise. The usage suggests that is it a sequence type, but it could just be anything that implements the sequence protocol.\n",
"Firstly, you need to put http:// in front of the URL, ie:\nhandle = urllib.urlopen(\"http://www.pythonchallenge.com/pc/def/banner.p\")\n\nAn expression [e for e in a_list] is a list comprehension which generates a list of values.\nWith Python strings, the * operator is used to repeat a string. Try typing in the commands one by one into an interpreter then look at data:\n>>> data[0]\n[(' ', 95)]\n\nThis shows us each line of data is a tuple containing two elements.\nThus the expression e[1] * e[0] is effectively the string in e[0] repeated e[1] times.\nHence the program draws a banner.\n",
"[e[1] * e[0] for e in elt] is a list comprehension, which evaluates to a list itself by looping through another list, in this case elt. Each element in the new list is e[1]*e[0], where e is the corresponding element in elt.\n",
"The question itself has already been fully answered but I'd like to add that a list comprehension also supports filtering. Your original line\nprint \"\".join([e[1] * e[0] for e in elt])\n\ncould, as an example, become\nprint \"\".join([e[1] * e[0] for e in elt if len(e) == 2])\n\nto only operate on items in elt that have two elements.\n",
"join() is a string method, that works on a separator in new string\n>>> ':'.join(['ab', 'cd'])\n>>> 'ab:cd'\n\nand list comprehension is not necessary there, generator would suffice\n",
"Andy's is a great answer!\nIf you want to see every step of the loop (with line-breaks) try this out:\nfor elt in data:\n for e in elt:\n print \"e[0] == %s, e[1] == %d, which gives us: '%s'\" % (e[0], e[1], ''.join(e[1] * e[0]))\n\n"
] | [
21,
7,
4,
2,
1,
1
] | [] | [] | [
"list_comprehension",
"python"
] | stackoverflow_0000501308_list_comprehension_python.txt |
Q:
How to overwrite some bytes in the middle of a file with Python?
I'd like to be able to overwrite some bytes at a given offset in a file using Python.
My attempts have failed miserably and resulted in:
overwriting the bytes at the offset but also truncating the file just after (file mode = "w" or "w+")
appending the bytes at the end of the file (file mode = "a" or "a+")
Is it possible to achieve this with Python in a portable way?
A:
Try this:
fh = open("filename.ext", "r+b")
fh.seek(offset)
fh.write(bytes)
fh.close()
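A hedged, self-contained demonstration of the same idea (the file name, offset and bytes are invented for illustration):
f = open("data.bin", "wb")
f.write("123456789")          # set up a sample nine-byte file
f.close()

fh = open("data.bin", "r+b")  # read/write, binary; nothing is truncated
fh.seek(3)                    # move to byte offset 3
fh.write("xxx")               # overwrite three bytes in place
fh.close()

print open("data.bin", "rb").read()   # prints 123xxx789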
A:
According to this python page you can use file.seek to move to a particular offset. You can then write whatever you want.
To avoid truncating the file, open it with "r+" rather than "w" before seeking; note that in append mode ("a"/"a+") writes land at the end of the file regardless of the seek position.
A:
Very inefficient, but I don't know any other way right now that inserts the bytes in the middle instead of overwriting them (as Ben Blank's approach does):
a=file('/tmp/test123','r+')
s=a.read()
a.seek(0)
a.write(s[:3]+'xxx'+s[3:])
a.close()
will write 'xxx' at offset 3: 123456789 --> 123xxx456789
| How to overwrite some bytes in the middle of a file with Python? | I'd like to be able to overwrite some bytes at a given offset in a file using Python.
My attempts have failed miserably and resulted in:
overwriting the bytes at the offset but also truncating the file just after (file mode = "w" or "w+")
appending the bytes at the end of the file (file mode = "a" or "a+")
Is it possible to achieve this with Python in a portable way?
| [
"Try this:\nfh = open(\"filename.ext\", \"r+b\")\nfh.seek(offset)\nfh.write(bytes)\nfh.close()\n\n",
"According to this python page you can type file.seek to seek to a particualar offset. You can then write whatever you want.\nTo avoid truncating the file, you can open it with \"a+\" then seek to the right offset.\n",
"Very inefficient, but I don't know any other way right now, that doesn't overwrite the bytes in the middle (as Ben Blanks one does):\na=file('/tmp/test123','r+')\ns=a.read()\na.seek(0)\na.write(s[:3]+'xxx'+s[3:])\na.close()\n\nwill write 'xxx' at offset 3: 123456789 --> 123xxx456789\n"
] | [
47,
5,
0
] | [] | [] | [
"file",
"patch",
"python"
] | stackoverflow_0000508983_file_patch_python.txt |
Q:
Symbolic Mathematics Python?
I am extremely interested in math and programming and am planning to start a symbolic math project from scratch.
Is this good project idea?
Where to start?
How should one approach this
project?
Any good resources?
Thanks in advance.
A:
It's a good project to practice programming skills. But if you want to create a real library that other people will want to use, this is a project you do not want to start alone and from scratch ...
Where to start: Have a look at the solutions that are already out there and think about what it is that you want to do different. How will your project differ from others?
Resource: SymPy is a Python library for symbolic mathematics
A:
1.Is this good project idea?
Yes; I would expect it to provide an endless source of interesting work which will, quite quickly, test and extend your programming powers.
2.Where to start?
I second the other suggestions that you should look at existing work. SAGE is very impressive and if you had asked for my advice I would suggest that you firstly write a basic system for doing arithmetic with numbers and symbols; then have a look at SAGE and write a module to extend the system, in other words become a contributor to something larger rather than trying to do it all on your own. Look also at Mathematica and Maple, Macsyma and Axiom. The latter 2 are free (I think) but they are all well documented on-line and a great source of ideas and challenges.
3.How should one approach this project?
As one would approach eating an elephant. One bite at a time. More seriously, I think there are some core issues, such as representation of expressions, and some basic functionality (arithmetic on polynomials) which you could cut your teeth on.
4.Any good resources?
Lots and lots. google for 'computer algebra', 'term rewriting'. Have a look at what's available on Amazon. And, if you have access, check out the ACM digital library
Good luck.
A:
Symbolic math is a fun project. Whether or not anyone uses it doesn't appear to matter in your question, so dive in.
I've written two of these over the years. The coolest was one for SQL where clauses -- it did some trivial symbolic manipulations on the SQL to fold in some additional AND conditions. Not a complete "solver" or "optimizer" or anything, just a few symbolic manipulations of any SQL where clause possible. The less cool one was for a debugger; it did complex math to work out (symbolically) stack offsets for variables.
You start by defining classes for elements of a mathematical expression -- operands, operators, functions, etc.
You have to decide what manipulations these objects have to participate in. Getting a concrete value for an expression is an easy and obvious one. Start with the case where all variables have a binding.
Then handle the case where some variables remain unbound, and you can only evaluate parts of the expression.
Then handle rearranging an expression into a canonical form. I.e., you've done a partial evaluation and have Add( Variable(x), Add( Variable(x), Lit(3) ) ). You need to write rules to transform this into Add( Multiply( Lit(2), Variable(x) ), Lit(3) ).
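A minimal sketch of what such classes might look like (the class names and methods are invented for illustration, not taken from any existing library):
class Lit(object):
    def __init__(self, value):
        self.value = value
    def evaluate(self, env):
        return self.value

class Variable(object):
    def __init__(self, name):
        self.name = name
    def evaluate(self, env):
        return env[self.name]

class Add(object):
    def __init__(self, left, right):
        self.left, self.right = left, right
    def evaluate(self, env):
        return self.left.evaluate(env) + self.right.evaluate(env)

expr = Add(Variable('x'), Add(Variable('x'), Lit(3)))
print expr.evaluate({'x': 5})   # prints 13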
One very cool exercise is optimizing the parentheses so that the printed output has the fewest parentheses necessary to capture the meaning.
There are many, many other "expression transformation" rules that we all learn in school for doing algebraic manipulations. Lots of them.
In particular, rearranging an equation to isolate a variable can be really hard in some cases.
Doing the derivative transformation is easy, but symbolic integration is really, really hard with a ton of special cases.
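As an example of how directly the textbook rules translate, here is a hedged sketch of differentiation over the toy classes above (sum rule only; everything else raises):
def derivative(e, var):
    if isinstance(e, Lit):
        return Lit(0)
    if isinstance(e, Variable):
        return Lit(1 if e.name == var else 0)
    if isinstance(e, Add):
        # d(u + v)/dx = du/dx + dv/dx
        return Add(derivative(e.left, var), derivative(e.right, var))
    raise TypeError('no derivative rule for %r' % (e,))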
The basics are fun. Depending on how far you want to go, it gets progressively harder.
A:
@Resources: You could take a look at pythonica - this was an attempt to implement a Mathematica-type program in Python (source code is available for download).
A:
This pySym Blog might also interest you for getting ideas and starters, and learning what others are doing with python & symbolic math.
A:
More in the way of resources: SympyCore:
The aim of the SympyCore project is to seek out new high performance solutions to represent and manipulate symbolic expressions in the Python programming language, and to try out new symbolic models to achieve a fundamentally consistent and sufficiently general symbolic model that would be easy to extend to a Computer Algebra System (CAS).
A:
I think this is a great project for a programmer of any skill level. It is quite easy to implement a symbolic calculator that is just powerful enough to be useful. If you continue to work on breadth, there are so many fun features to add that you can occupy yourself with it for a long time. If you choose to go for depth, you will find that things soon get very hard. You can challenge yourself indefinitely, if that's what you like.
There are many great resources. I recommend the book "Modern Computer Algebra" by zur Gathen and Gerhard, although it is really more concerned with arithmetic in special forms (polynomials, integers, matrices) than general symbolic manipulation. When you're starting out, you may actually be better helped by looking at some Lisp or Scheme tutorial, because symbolic math is conceptually very straightforward to do in Lisp, and to build a symbolic engine in Python you'll more or less have to implement a mini-Lisp as a foundation.
As others have pointed out, you could look at SymPy and sympycore for inspiration or concrete algorithms. The source code for either project is a bit complex (but certainly not too hard to learn from).
(If I may plug a bit, I wrote a tiny symbolic engine a while back (as a weekend project -- it's very tiny and I haven't worked on it since). It implements a generic symbolic engine in about 200 lines of code, and then there are 300 lines of code implementing symbolic arithmetic and symbolic boolean algebra, with some very rudimentary simplification. Perhaps easier to dig into than SymPy. But everything in there is things that you could easily discover for yourself, and may have more fun doing so.)
| Symbolic Mathematics Python? | I am extremely interested in math and programming and am planning to start a symbolic math project from scratch.
Is this good project idea?
Where to start?
How should one approach this
project?
Any good resources?
Thanks in advance.
| [
"\nIt's a good project to practice programming skills. But if you want to create a real library that other people will want to use this is a project you do not want to start allone and from scratch ...\nWhere to start: Have a look at the solutions that are already out there and think about what it is that you want to do different. How will your project differ from others? \nResource: SymPy is a Python library for symbolic mathematics\n\n",
"1.Is this good project idea?\nYes; I would expect it to provide an endless source of interesting work which will, quite quickly, test and extend your programming powers.\n2.Where to start?\nI second the other suggestions that you should look at existing work. SAGE is very impressive and if you had asked for my advice I would suggest that you firstly write a basic system for doing arithmetic with numbers and symbols; then have a look at SAGE and write a module to extend the system, in other words become a contributor to something larger rather than trying to do it all on your own. Look also at Mathematica and Maple, Macsyma and Axiom. The latter 2 are free (I think) but they are all well documented on-line and a great source of ideas and challenges.\n3.How should one approach this project?\nAs one would approach eating an elephant. One bite at a time. More seriously, I think there are some core issues, such as representation of expressions, and some basic functionality (arithmetic on polynomials) which you could cut your teeth on.\n4.Any good resources?\nLots and lots. google for 'computer algebra', 'term rewriting'. Have a look at what's available on Amazon. And, if you have access, check out the ACM digital library\nGood luck.\n",
"Symbolic math is a fun project. Whether or not anyone uses it doesn't appear to matter in your question, so dive in.\nI've written two of these over the years. The coolest was one for SQL where clauses -- it did some trivial symbolic manipulations on the SQL to fold in some additional AND conditions. Not a complete \"solver\" or \"optimizer\" or anything, just a few symbolic manipulations of any SQL where clause possible. The less cool one was for a debugger; it did complex math to work out (symbolically) stack offsets for variables.\nYou start by defining classes for elements of a mathematical expression -- operands, operators, functions, etc.\nYou have to decide what manipulations these objects have to participate in. Getting a concrete value for an expression is an easy and obvious one. Start with the case where all variables have a binding.\nThen handle the case where some variables remain unbound, and you can only evaluate parts of the expression.\nThen handle rearranging an expression into a canonical form. I.e., you've done a partial evaluation and have Add( Variable(x), Add( Variable(x), Lit(3) ) ). You need to write rules to transform this into Add( Multiply( Lit(2), Variable(x) ), Lit(3) ).\nOne very cool exercise is optimizing the parenthesis so that the printed output has the fewest parenthesis necessary to capture the meaning.\nThere are many, many other \"expression transformation\" rules that we all learn in school for doing algebraic manipulations. Lots of them.\nIn particular, rearranging an equation to isolate a variable can be really hard in some cases.\nDoing the derivative transformation is easy, but symbolic integration is really, really hard with a ton of special cases.\nThe basics are fun. Depending on how far you want to go, it gets progressively harder. \n",
"@Resources: You could take a look at pythonica - this was an attempt to implement a Mathematica-type program in Python (source code is available for download).\n",
"This pySym Blog might also interest you for getting ideas and starters, and learning what others are doing with python & symbolic math.\n",
"More in the way of resources: SympyCore:\n\nThe aim of the SympyCore project is to seek out new high Performance solutions to represent and manipulate symbolic expressions in the Python programming language, and to try out new symbolic models to achive fundamentally consistent and sufficiently general symbolic model that would be easy to extend to a Computer Algebra System (CAS).\n\n",
"I think this is a great project for a programmer of any skill level. It is quite easy to do implement a symbolic calculator that is just powerful enough to be useful. If you continue to work on breadth, there are so many fun features to add that you can occupy yourself with it for a long time. If you choose to go for depth, you will find that things soon get very hard. You can challenge yourself indefinitely, if that's what you like.\nThere are many great resources. I recommend the book \"Modern Computer Algebra\" by zur Gathen and Gerhard, although it is really more concerned with arithmetic in special forms (polynomials, integers, matrices) than general symbolic manipulation. When you're starting out, you may actually be better helped by looking at some Lisp or Scheme tutorial, because symbolic math is conceptually very straightforward to do in Lisp, and to build a symbolic engine in Python you'll more or less have to implement a mini-Lisp as a foundation.\nAs others have pointed out, you could look at SymPy and sympycore for inspiration or concrete algorithms. The source code for either project is a bit complex (but certainly not too hard to learn from).\n(If I may plug a bit, I wrote a tiny symbolic engine a while back (as a weekend project -- it's very tiny and I haven't worked on it since). It implements a generic symbolic engine in about 200 lines of code, and then there are 300 lines of code implementing symbolic arithmetic and symbolic boolean algebra, with some very rudimentary simplification. Perhaps easier to dig into than SymPy. But everything in there is things that you could easily discover for yourself, and may have more fun doing so.)\n"
] | [
19,
10,
7,
4,
3,
1,
1
] | [] | [] | [
"algorithm",
"math",
"python",
"symbolic_math"
] | stackoverflow_0000506748_algorithm_math_python_symbolic_math.txt |
Q:
Passing self to class functions in Python
What's the reason for passing a value for a self reference in class functions in Python? For instance:
class MyClass:
"""A simple example class"""
i = 12345
def f(self):
return 'hello world'
By doing this, aren't you doing the compiler's work?
A:
Many electrons have given their lives to discussing this question over the years.
Guido (python's creator) weighs forth on the issue in his blog here, in response to a proposal last year to get rid of the explicit self. The python FAQ also covers the issue.
Finally, if you don't mind a bit of grey magic, you can use a metaclass to get rid of it. Although there are good reasons why you shouldn't do that (it breaks properties, will make it harder for you to understand other people's code, and it will confuse any other python programmer who looks at your code).
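A small illustration of what the explicit parameter buys you: the two calls below are exactly equivalent, which is why self has to appear in the signature:
class MyClass:
    def f(self):
        return 'hello world'

obj = MyClass()
print obj.f()           # the instance is passed as self implicitly
print MyClass.f(obj)    # the method call above is just sugar for this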
A:
Guido says: http://neopythonic.blogspot.com/2008/10/why-explicit-self-has-to-stay.html
| Passing self to class functions in Python | What's the reason for passing a value for a self reference in class functions in Python? For instance:
class MyClass:
"""A simple example class"""
i = 12345
def f(self):
return 'hello world'
By doing this, aren't you doing the compiler's work?
| [
"Many electrons have given their lives to discussing this question over the years.\nGuido (python's creator) weighs forth on the issue in his blog here, in response to a proposal last year to get rid of the explicit self. The python FAQ also covers the issue.\nFinally, if you don't mind a bit of grey magic, you can use a metaclass to get rid of it. Although there are good reasons why you shouldn't do that (it breaks properties, will make it harder for you to understand other people's code, and it will confuse any other python programmer who looks at your code).\n",
"Guido says: http://neopythonic.blogspot.com/2008/10/why-explicit-self-has-to-stay.html\n"
] | [
9,
2
] | [] | [] | [
"class",
"function",
"python"
] | stackoverflow_0000509421_class_function_python.txt |
Q:
Python M2Crypto - generating a DSA key pair and separating public/private components
Could anybody explain what is the cause of the following:
>>> from M2Crypto import DSA, BIO
>>> dsa = DSA.gen_params(1024)
..+........+++++++++++++++++++++++++++++++++++++++++++++++++++*
............+.+.+..+.........+.............+.....................+.
...+.............+...........+.........................................
+.........+................+..........+......+..+.+...+..........+..+..
..+...+......+....+.............+.................+.......+.........+..
....+......+.+..+..........+........+.+...+..+...............+.........
..+.....+........+..........+++++++++++++++++++++++++++++++++++++++++++
++++++++*
>>> mem = BIO.MemoryBuffer()
>>> dsa.save_key_bio(mem, cipher=None)
1
>>> dsa.save_pub_key_bio(mem)
0
>>> print mem.getvalue()
-----BEGIN DSA PRIVATE KEY-----
MIIBIgIBAAKBgQDPGRFSTqqx8vet5kaW5m99A83REotTcX9HOv+zrqMxQpaTlinS
MDz49I4psDPJ+bWH7vySEdOYO2JGUj6kYZdz/ZwyNjphWNjQkaUrmfaVLzS3PHpW
aMrPEweLesf/PT4KXm2HaDbaW/g2Ds5h+Zlq9LDKcN2vfvyeiTCmf1esyQIVAO9I
ippU4PIdvJVO9HQRkqrD2bxPAoGBAIwVgM7dgNVwihJva6qeeh7ypy3ESNB9k8nY
fOnES+SqZGQbkPrJIusRCJNKERiMATJXNRMfBeWD8htNRezbgtr0OpuYSBurAQjp
hKKVI3DHSv7XT49BQ3tdJww8lQfkOhHOfFTG6U1dJhWdggp0WN3EjYlt77agRsjR
4t5sD1f3
-----END DSA PRIVATE KEY-----
-----BEGIN PUBLIC KEY-----
-----END PUBLIC KEY-----
>>>
clearly I'm missing something. M2Crypto's nonexistent docs don't help.
A:
Call dsa.gen_key(), then save. gen_params() only generates the DSA parameters, so you aren't actually generating the key pair yet.
>>> from M2Crypto import DSA, BIO
>>> dsa = DSA.gen_params(1024)
..+..etc
>>> mem = BIO.MemoryBuffer()
>>> dsa.gen_key()
>>> dsa.save_key_bio(mem, cipher=None)
1
>>> dsa.save_pub_key_bio(mem)
1
>>> print mem.getvalue()
-----BEGIN DSA PRIVATE KEY-----
MIIBuwIBAAKBgQDowiLFDXGwaWIOkZybeeqSXYZ8KCLmXg5XfnAtDBlVOokB91Rj
etc.
-----END DSA PRIVATE KEY-----
-----BEGIN PUBLIC KEY-----
MIIBtzCCASsGByqGSM44BAEwggEeAoGBAOjCIsUNcbBpYg6RnJt56pJdhnwoIuZe
Dld+cC0MGVU6iQH3VGNEzKycBVQeVYke3itZwQALSlT2JfUsmOjeZYIkc9l2YYob
rixObXfQyc0AOBM/J53F0F6R8+xvEwN/Hmdd9SjjbdZi8gve+dr9UfnKHXi0KPUF
s2ougGhXeEjTAhUAiW5bMzG8nCVjXErgwaDEx+JEdtECgYACba2quw3xibhT3JNd
sDh0gIRpHPQgIgxgzGv6A09Vdb4VgtWf0MYAo6gAhxsZIWWKzQ94Oe1nf7OhC+B+
VjT+PW+ExSrbJVONTN5ycE64O7+2L+q/hZSjjkxXgfcApqeVtZp4wKqbS976Kpch
WgNl0zdkvV8JddRs0oKQ0Bl7dwOBhQACgYEAgkdF/+ncobVcYXfXHBUH3H5SLD3y
u2zUWGhXM4/MUTwPromDOQ8Zd0H7myYhmQvVUb+J9mJHMIn7Guf4JDH+8d6rBpzo
U5yEGqgsSqYqgtStzDvsKHfqw3mvjvsktm66N/vm36eai2I6J15QibdtP0lb1Um8
EeECDTxWUWT93rs=
-----END PUBLIC KEY-----
>>>
| Python M2Crypto - generating a DSA key pair and separating public/private components | Could anybody explain what is the cause of the following:
>>> from M2Crypto import DSA, BIO
>>> dsa = DSA.gen_params(1024)
..+........+++++++++++++++++++++++++++++++++++++++++++++++++++*
............+.+.+..+.........+.............+.....................+.
...+.............+...........+.........................................
+.........+................+..........+......+..+.+...+..........+..+..
..+...+......+....+.............+.................+.......+.........+..
....+......+.+..+..........+........+.+...+..+...............+.........
..+.....+........+..........+++++++++++++++++++++++++++++++++++++++++++
++++++++*
>>> mem = BIO.MemoryBuffer()
>>> dsa.save_key_bio(mem, cipher=None)
1
>>> dsa.save_pub_key_bio(mem)
0
>>> print mem.getvalue()
-----BEGIN DSA PRIVATE KEY-----
MIIBIgIBAAKBgQDPGRFSTqqx8vet5kaW5m99A83REotTcX9HOv+zrqMxQpaTlinS
MDz49I4psDPJ+bWH7vySEdOYO2JGUj6kYZdz/ZwyNjphWNjQkaUrmfaVLzS3PHpW
aMrPEweLesf/PT4KXm2HaDbaW/g2Ds5h+Zlq9LDKcN2vfvyeiTCmf1esyQIVAO9I
ippU4PIdvJVO9HQRkqrD2bxPAoGBAIwVgM7dgNVwihJva6qeeh7ypy3ESNB9k8nY
fOnES+SqZGQbkPrJIusRCJNKERiMATJXNRMfBeWD8htNRezbgtr0OpuYSBurAQjp
hKKVI3DHSv7XT49BQ3tdJww8lQfkOhHOfFTG6U1dJhWdggp0WN3EjYlt77agRsjR
4t5sD1f3
-----END DSA PRIVATE KEY-----
-----BEGIN PUBLIC KEY-----
-----END PUBLIC KEY-----
>>>
clearly I'm missing something. M2Crypto's nonexistent docs don't help.
| [
"Call dsa.gen_key(), then save. You aren't actually generating the public key.\n>>> from M2Crypto import DSA, BIO\n>>> dsa = DSA.gen_params(1024)\n..+..etc\n>>> mem = BIO.MemoryBuffer()\n>>> dsa.gen_key()\n>>> dsa.save_key_bio(mem, cipher=None)\n1\n>>> dsa.save_pub_key_bio(mem)\n1\n>>> print mem.getvalue()\n-----BEGIN DSA PRIVATE KEY-----\nMIIBuwIBAAKBgQDowiLFDXGwaWIOkZybeeqSXYZ8KCLmXg5XfnAtDBlVOokB91Rj\netc.\n-----END DSA PRIVATE KEY-----\n-----BEGIN PUBLIC KEY-----\nMIIBtzCCASsGByqGSM44BAEwggEeAoGBAOjCIsUNcbBpYg6RnJt56pJdhnwoIuZe\nDld+cC0MGVU6iQH3VGNEzKycBVQeVYke3itZwQALSlT2JfUsmOjeZYIkc9l2YYob\nrixObXfQyc0AOBM/J53F0F6R8+xvEwN/Hmdd9SjjbdZi8gve+dr9UfnKHXi0KPUF\ns2ougGhXeEjTAhUAiW5bMzG8nCVjXErgwaDEx+JEdtECgYACba2quw3xibhT3JNd\nsDh0gIRpHPQgIgxgzGv6A09Vdb4VgtWf0MYAo6gAhxsZIWWKzQ94Oe1nf7OhC+B+\nVjT+PW+ExSrbJVONTN5ycE64O7+2L+q/hZSjjkxXgfcApqeVtZp4wKqbS976Kpch\nWgNl0zdkvV8JddRs0oKQ0Bl7dwOBhQACgYEAgkdF/+ncobVcYXfXHBUH3H5SLD3y\nu2zUWGhXM4/MUTwPromDOQ8Zd0H7myYhmQvVUb+J9mJHMIn7Guf4JDH+8d6rBpzo\nU5yEGqgsSqYqgtStzDvsKHfqw3mvjvsktm66N/vm36eai2I6J15QibdtP0lb1Um8\nEeECDTxWUWT93rs=\n-----END PUBLIC KEY-----\n\n>>> \n\n"
] | [
7
] | [] | [] | [
"cryptography",
"m2crypto",
"python",
"rsa"
] | stackoverflow_0000509449_cryptography_m2crypto_python_rsa.txt |
Q:
python reading lines w/o \n?
Would this work on all platforms? I know Windows does \r\n, and remember hearing classic Mac does \r while Linux does \n. I ran this code on Windows so it seems fine, but do any of you know if it's cross-platform?
while 1:
line = f.readline()
if line == "":
break
line = line[:-1]
print "\"" + line + "\""
A:
First of all, there is universal newline support
Second: just use line.strip(). Use line.rstrip('\r\n'), if you want to preserve any whitespace at the beginning or end of the line.
Oh, and
print '"%s"' % line
or at least
print '"' + line + '"'
might look a bit nicer.
You can iterate over the lines in a file like this (this will not break on empty lines in the middle of the file like your code):
for line in f:
print '"' + line.strip('\r\n') + '"'
If your input file is short enough, you can use the fact that str.splitlines throws away the line endings by default:
with open('input.txt', 'rU') as f:
for line in f.read().splitlines():
print '"%s"' % line
A:
Try this instead:
line = line.rstrip('\r\n')
A:
line = line[:-1]
A line may have no trailing newline if it's the last line of a file, so unconditionally slicing off the last character can drop real data.
As suggested above, try universal newlines with rstrip().
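A quick demonstration of why rstrip() is the safer choice; it is a no-op when there is nothing to strip:
print repr('last line'.rstrip('\r\n'))    # 'last line' (unchanged)
print repr('a line\r\n'.rstrip('\r\n'))   # 'a line'
print repr('a line\r\n'[:-1])             # 'a line\r' (stray \r left behind)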
| python reading lines w/o \n? | Would this work on all platforms? I know Windows does \r\n, and remember hearing classic Mac does \r while Linux does \n. I ran this code on Windows so it seems fine, but do any of you know if it's cross-platform?
while 1:
line = f.readline()
if line == "":
break
line = line[:-1]
print "\"" + line + "\""
| [
"First of all, there is universal newline support\nSecond: just use line.strip(). Use line.rstrip('\\r\\n'), if you want to preserve any whitespace at the beginning or end of the line.\nOh, and\nprint '\"%s\"' % line\n\nor at least\nprint '\"' + line + '\"'\n\nmight look a bit nicer.\nYou can iterate over the lines in a file like this (this will not break on empty lines in the middle of the file like your code):\nfor line in f:\n print '\"' + line.strip('\\r\\n') + '\"'\n\nIf your input file is short enough, you can use the fact that str.splitlines throws away the line endings by default:\nwith open('input.txt', 'rU') as f:\n for line in f.read().splitlines():\n print '\"%s\"' % line\n\n",
"Try this instead:\nline = line.rstrip('\\r\\n')\n\n",
"\nline = line[:-1]\n\nA line can have no trailing newline, if it's the last line of a file.\nAs suggested above, try universal newlines with rstrip().\n"
] | [
13,
4,
0
] | [] | [] | [
"file",
"newline",
"python"
] | stackoverflow_0000509446_file_newline_python.txt |
Q:
Fast PDF splitter library
pyPdf is a great library to split, merge PDF files.
I'm using it to split pdf documents into 1 page documents. pyPdf is pure python and spends quite a lot of time in the _sweepIndirectReferences() method of the PdfFileWriter object when saving the extracted page. I need something with better performance. I've tried using multi-threading but since most of the time is spent in python code there was no speed gain because of the GIL (it actually ran slower).
Is there any library written in c that provides the same functionality? or does anyone have a good idea on how to improve performance (other than spawning a new process for each pdf file that I want to split)
Thank you in advance.
Follow up.
Links to a couple of command-line solutions that can sometimes prove faster than pyPdf:
http://multivalent.sourceforge.net/Tools/pdf/Split.html
http://www.linuxsolutions.fr/how-to-extract-pages-from-a-pdf/
I modified pyPDF PdfWriter class to keep track of how much time has been spent on the _sweepIndirectReferences() method. If it has been too long (right now I use the magical value of 3 seconds) then I revert to using ghostscript by making a call to it from python.
Thanks for all your answers. (codelogic's xpdf reference is the one that made me look for a different approach)
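For reference, a hedged sketch of that ghostscript fallback; the gs flags are the standard ones for page extraction, while the function itself is invented for illustration:
import subprocess

def split_page_with_gs(src, dst, page):
    # shell out to ghostscript to copy a single page into a new PDF
    subprocess.check_call([
        'gs', '-dBATCH', '-dNOPAUSE', '-q',
        '-sDEVICE=pdfwrite',
        '-dFirstPage=%d' % page, '-dLastPage=%d' % page,
        '-sOutputFile=%s' % dst, src,
    ])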
A:
mbtPdfAsm is a fast, open source command line tool for PDF processing.
Xpdf is also worth mentioning since it's GPL and written in C++. The source code is well modularized and allows for writing command line tools.
A:
Does it have to be python? My pure-Perl library CAM::PDF is pretty fast at appending and deleting PDF document pages. It saves the sweeping for the very end, where possible.
A:
pdfLaTeX can do a lot of PDF manipulation and is very fast.
I've used it for some quite complex imposition workflows. The TeX language is really alien to programming, but it's easy to write a python script that generates the needed LaTeX layout and processes it.
A:
Have you tried using Psyco with pyPdf?
| Fast PDF splitter library | pyPdf is a great library to split, merge PDF files.
I'm using it to split pdf documents into 1 page documents. pyPdf is pure python and spends quite a lot of time in the _sweepIndirectReferences() method of the PdfFileWriter object when saving the extracted page. I need something with better performance. I've tried using multi-threading but since most of the time is spent in python code there was no speed gain because of the GIL (it actually ran slower).
Is there any library written in c that provides the same functionality? or does anyone have a good idea on how to improve performance (other than spawning a new process for each pdf file that I want to split)
Thank you in advance.
Follow up.
Links to a couple of command-line solutions that can sometimes prove faster than pyPdf:
http://multivalent.sourceforge.net/Tools/pdf/Split.html
http://www.linuxsolutions.fr/how-to-extract-pages-from-a-pdf/
I modified pyPDF PdfWriter class to keep track of how much time has been spent on the _sweepIndirectReferences() method. If it has been too long (right now I use the magical value of 3 seconds) then I revert to using ghostscript by making a call to it from python.
Thanks for all your answers. (codelogic's xpdf reference is the one that made me look for a different approach)
| [
"mbtPdfAsm is a fast, open source command line tool for PDF processing.\nXpdf is also worth mentioning since it's GPL and written in C++. The source code is well modularized and allows for writing command line tools. \n",
"Does it have to be python? My pure-Perl library CAM::PDF is pretty fast at appending and deleting PDF document pages. It saves the sweeping for the very end, where possible.\n",
"pdfLaTex can do a lot of PDF managing and is very fast.\ni've used it for some quite complex imposition worflows. the TeX language is really alien to programming, but it's easy to write a python script that generates the needed LaTex layout and processes it.\n",
"Have you tried using Psyco with pyPdf?\n"
] | [
4,
2,
1,
1
] | [] | [] | [
"c",
"pdf",
"pypdf",
"python"
] | stackoverflow_0000508144_c_pdf_pypdf_python.txt |
Q:
How to chain views in Django?
I'm implementing James Bennett's excellent django-contact-form but have hit a snag. My contact page not only contains the form, but also additional flat page information.
Without rewriting the existing view the contact form uses, I'd like to be able to wrap, or chain, the views. This way I could inject some additional information via the context so that both the form and the flat page data could be rendered within the same template.
I've heard it mentioned that this is possible, but I can't seem to figure out how to make it work. I've created my own wrapper view, called the contact form view, and attempted to inspect the HttpResponse object for an attribute I can append to, but I can't seem to figure out which, if any, it is.
EDIT: James commented that the latest code can now be found here at BitBucket.
A:
There's a context processor that may do what you want.
http://docs.djangoproject.com/en/dev/ref/templates/api/
You can probably add your various pieces of "flat page information" to the context.
A:
Write a wrapper which uses the URL to look up the appropriate flat page object.
From your wrapper, call (and return the response from) the contact form view, passing the flat page in the extra_context argument (which is there for, among other things, precisely this sort of use case).
There is no third step.
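A hedged sketch of such a wrapper (the flatpage lookup and the imported view path are assumptions for illustration):
from django.contrib.flatpages.models import FlatPage
from contact_form.views import contact_form

def contact_page(request):
    flatpage = FlatPage.objects.get(url='/contact/')
    # delegate to the packaged view, injecting the page via extra_context
    return contact_form(request, extra_context={'flatpage': flatpage})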
A:
Context processors are what you're thinking of. And render_to_response is irrelevant. The required piece of information is if the view uses RequestContext or not, as that is what activates context processors.
Other than those, there is no way to "chain" views to add to context - you can wrap one view in another and alter the data going into it, but you cannot add to context that way.
| How to chain views in Django? | I'm implementing James Bennett's excellent django-contact-form but have hit a snag. My contact page not only contains the form, but also additional flat page information.
Without rewriting the existing view the contact form uses, I'd like to be able to wrap, or chain, the views. This way I could inject some additional information via the context so that both the form and the flat page data could be rendered within the same template.
I've heard it mentioned that this is possible, but I can't seem to figure out how to make it work. I've created my own wrapper view, called the contact form view, and attempted to inspect the HttpResponse object for an attribute I can append to, but I can't seem to figure out which, if any, it is.
EDIT: James commented that the latest code can now be found here at BitBucket.
| [
"There's a context processor that may do what you want.\nhttp://docs.djangoproject.com/en/dev/ref/templates/api/\nYou can probably add your various pieces of \"flat page information\" to the context.\n",
"\nWrite a wrapper which uses the URL to look up the appropriate flat page object.\nFrom your wrapper, call (and return the response from) the contact form view, passing the flat page in the extra_context argument (which is there for, among other things, precisely this sort of use case).\nThere is no third step.\n\n",
"Context processors are what you're thinking of. And render_to_response is irrelevant. The required piece of information is if the view uses RequestContext or not, as that is what activates context processors.\nOther than those, there is no way to \"chain\" views to add to context - you can wrap one view in another and alter the data going into it, but you cannot add to context that way.\n"
] | [
2,
2,
1
] | [] | [] | [
"django",
"django_views",
"extension_methods",
"python",
"word_wrap"
] | stackoverflow_0000505703_django_django_views_extension_methods_python_word_wrap.txt |
Q:
How do I use django mptt?
I have a model:
class Company(models.Model):
name = models.CharField( max_length=100)
parent = models.ForeignKey('self', null=True, blank=True, related_name='children')
mptt.register(Company, order_insertion_by=['name'])
and
class Financials(models.Model):
year = models.IntegerField()
revenue = models.DecimalField(max_digits = 10, decimal_places = 2)
So how can I add Financials as a child to Company in the mptt tree structure?
A:
I don't quite follow your question. A tree stores one type of object, in your case Company. To link Financials to Company just add a foreign key from Financials to Company.
If this doesn't help please expand your question to give us some more detail about what you are trying to achieve.
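Concretely, a hedged sketch of that foreign key (the field and related_name are invented for illustration):
class Financials(models.Model):
    company = models.ForeignKey(Company, related_name='financials')
    year = models.IntegerField()
    revenue = models.DecimalField(max_digits=10, decimal_places=2)
After that, company.financials.all() gives you every Financials row attached to a node in the tree.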
| How do I use django mptt? | I have a model:
class Company(models.Model):
name = models.CharField( max_length=100)
parent = models.ForeignKey('self', null=True, blank=True, related_name='children')
mptt.register(Company, order_insertion_by=['name'])
and
class Financials(models.Model):
year = models.IntegerField()
revenue = models.DecimalField(max_digits = 10, decimal_places = 2)
So how can I add Financials as a child to Company in the mptt tree structure?
| [
"I don't quite follow your question. A tree stores one type of object, in your case Company. To link Financials to Company just add a foreign key from Financials to Company.\nIf this doesn't help please expand your question to give us some more detail about what you are trying to achieve.\n"
] | [
1
] | [] | [] | [
"django",
"django_models",
"django_mptt",
"python"
] | stackoverflow_0000510339_django_django_models_django_mptt_python.txt |
Q:
Valid use case for django admin?
I want to build a django site where a certain group of trusted users can edit their profile information. Does it make sense to have each trusted user go through the django admin interface? I'd only want them to be able to see and edit their own information (obviously). It doesn't seem like this fits the way the django people define "trust", especially the bolded bit...
From The Django Book, Chapter 18:
The admin is designed to be used by
people who you, the developer, trust.
This doesn’t just mean “people who
have been authenticated;” it means
that Django assumes that your content
editors can be trusted to do the right
thing.
This means that there’s no “approval”
process for editing content — if you
trust your users, nobody needs to
approve of their edits. It also means
that the permission system, while
powerful, has no support for limiting
access on a per-object basis. If you
trust someone to edit their own
stories, you trust them not to edit
anyone else’s without permission.
Is this one of those use cases that fits with django's admin module, or is it just a specialized view for a non-trusted user?
A:
No, the Django admin is not suited for individual user profiles, each user would be able to see, and edit, all other user profiles. This is suited more to an administrator who has to manage all the users at once.
What you need to build is a user profile page. Django already has a nice login system courtesy of the django.contrib.auth module. You can easily integrate this into your pages, and its exactly what the Django admin uses to authenticate users.
Next you'll have to build a simple page that exposes that specific user's profile information based on their User model. This should be relatively painless as it will only require one view and one template, and the template can take advantage of ModelForms.
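A hedged sketch of such a page (the Profile model, form fields and template name are all invented for illustration):
from django import forms
from django.shortcuts import render_to_response

class ProfileForm(forms.ModelForm):
    class Meta:
        model = Profile            # hypothetical profile model
        fields = ('url', 'country', 'state')

def edit_profile(request):
    profile = Profile.objects.get(user=request.user)
    form = ProfileForm(request.POST or None, instance=profile)
    if request.method == 'POST' and form.is_valid():
        form.save()                # the user can only ever touch their own row
    return render_to_response('edit_profile.html', {'form': form})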
A:
I would suggest you create a Person model which contains a OneToOneField to the User model (the admin site's User model), somewhat like this:
from django.contrib.auth.models import User
class Person(models.Model):
"""The person class FKs to the User class and contains additional user information
including userImage, country, etc"""
user = models.OneToOneField(User, related_name='person_fk')
url = models.URLField(max_length=255, blank=True)
country = models.CharField(max_length=2, blank=True)
state = models.CharField(max_length=50, blank=True)
zipCode = models.IntegerField(max_length=7, blank=True, null=True)
userImage = models.ImageField(upload_to=generate_filename, blank=True, null=True)
A:
I wouldn't consider editing my personal profile on a website an administrative task. I think django-profiles is what you are looking for.
A:
Django's model for authorization is a little too simplistic. It just checks for permission on the Model as a whole.
For this kind of thing, you are pretty much forced to write your own view functions that handle the additional check.
After you've written one or two, you'll see the pattern. Then you can think about writing your own decorator to handle this.
from django.http import HttpResponseForbidden
from django.shortcuts import render_to_response

def profileView(request, object_id):
    p = Profile.objects.get(id=int(object_id))
    if request.session['user'] != p.get_user():
        # respond with a 403 not authorized or a helpful message
        return HttpResponseForbidden('This is not your profile.')
    # process normally, since the session['user'] is the user for the Profile
    return render_to_response('profile.html', {'profile': p})  # template name illustrative
For the above to work, you'll need to enable sessions, and be sure you record the user in the session when they login successfully. You'll also need to eradicate the session when they logout.
A:
Just for reference, here's a snippet demonstrating how you can fairly easily achieve this effect (users can only edit their "own" objects) in the Django admin. Caveat: I wouldn't recommend doing this for user profiles, it'll be easier and more flexible to just create your own edit view using a ModelForm.
A:
There are a few django pluggable apps that allow row level permissions on your admin model. However, I'd be more inclined to write my own view that allows users to do it from within the application.
I had similar aspirations (using the admin contrib) when designing the app I'm currently working on, but decided that the admin app really is for admin use, and that regular users should be given their own pages to do the work and customisation (if required).
You can easily generate the CRUD views for a particular model using generic views and modelforms, and just apply style sheets for consistent look with the rest of your application.
| Valid use case for django admin? | I want to build a django site where a certain group of trusted users can edit their profile information. Does it make sense to have each trusted user go through the django admin interface? I'd only want them to be able to see and edit their own information (obviously). It doesn't seem like this fits the way the django people define "trust", especially the bolded bit...
From The Django Book, Chapter 18:
The admin is designed to be used by
people who you, the developer, trust.
This doesn’t just mean “people who
have been authenticated;” it means
that Django assumes that your content
editors can be trusted to do the right
thing.
This means that there’s no “approval”
process for editing content — if you
trust your users, nobody needs to
approve of their edits. It also means
that the permission system, while
powerful, has no support for limiting
access on a per-object basis. If you
trust someone to edit their own
stories, you trust them not to edit
anyone else’s without permission.
Is this one of those use cases that fits with django's admin module, or is it just a specialized view for a non-trusted user?
| [
"No, the Django admin is not suited for individual user profiles, each user would be able to see, and edit, all other user profiles. This is suited more to an administrator who has to manage all the users at once.\nWhat you need to build is a user profile page. Django already has a nice login system courtesy of the django.contrib.auth module. You can easily integrate this into your pages, and its exactly what the Django admin uses to authenticate users.\nNext you'll have to build a simple page that exposes that specific user's profile information based on their User model. This should be relatively painless as it will only require one view and one template, and the template can take advantage of ModelForms.\n",
"I would suggest you to create a Person model which contains a OneToOneField to the User model(Admin site User model.). Some what like this..\nfrom django.contrib.auth.models import User\n\n class Person(models.Model):\n \"\"\"The person class FKs to the User class and contains additional user information\n including userImage, country, etc\"\"\"\n\n user = models.OneToOneField(User, related_name='person_fk')\n url = models.URLField(max_length=255, blank=True)\n country = models.CharField(max_length=2, blank=True)\n state = models.CharField(max_length=50, blank=True)\n zipCode = models.IntegerField(max_length=7, blank=True, null=True)\n userImage = models.ImageField(upload_to=generate_filename, blank=True, null=True)\n\n",
"I wouldn't consider editing my personal profile on a website an administrative task. I think django-profiles is what you are looking for. \n",
"Django's model for authorization is a little too simplistic. It is just checks for permission on the Model as a whole.\nFor this kind of thing, you are pretty much forced to write your own view functions that handle the additional check.\nAfter you've written one or two, you'll see the pattern. Then you can think about writing your own decorator to handle this. \ndef profileView( request, object_id ):\n p= Profile.objects.get( id=int(object_id) )\n if request.session['user'] != p.get_user():\n # respond with a 401 not authorized or a helpful message\n # process normally, since the session['user'] is the user for the Profile.\n\nFor the above to work, you'll need to enable sessions, and be sure you record the user in the session when they login successfully. You'll also need to eradicate the session when they logout. \n",
"Just for reference, here's a snippet demonstrating how you can fairly easily achieve this effect (users can only edit their \"own\" objects) in the Django admin. Caveat: I wouldn't recommend doing this for user profiles, it'll be easier and more flexible to just create your own edit view using a ModelForm.\n",
"There are a few django pluggable apps that allow row level permissions on your admin model. However, I'd be more inclined to write my own view that allows users to do it from within the application.\nI had similar aspirations (using the admin contrib) when designing the app I'm currently working on, but decided that the admin app really is for admin use, and that regular users should be given their own pages to do the work and customisation (if required).\nYou can easily generate the CRUD views for a particular model using generic views and modelforms, and just apply style sheets for consistent look with the rest of your application.\n"
] | [
17,
5,
3,
3,
1,
1
] | [] | [] | [
"django",
"django_admin",
"python"
] | stackoverflow_0000498199_django_django_admin_python.txt |
Q:
How can I tell python which version of libmysqlclient.so to use?
I'm running a python script on a shared hosting server which until this morning had MySQL version 4. Now it has version 5. My python script can no longer connect to MySQL, as it can't find libmysqlclient_r.so.14:
$ python my_script.py
Traceback (most recent call last):
File "my_script.py", line 6, in ?
import MySQLdb
File "/home/lib/python2.4/site-packages/PIL-1.1.6-py2.4-linux-i686.egg/__init__.py", line 19, in ?
File "build/bdist.linux-i686/egg/_mysql.py", line 7, in ?
File "build/bdist.linux-i686/egg/_mysql.py", line 6, in __bootstrap__
ImportError: libmysqlclient_r.so.14: cannot open shared object file: No such file or directory
There are various other versions of libmysqlclient in /usr/lib:
/usr/lib/libmysqlclient.so.15
/usr/lib/libmysqlclient.so.14
/usr/lib/mysql/libmysqlclient.la
/usr/lib/mysql/libmysqlclient.so
/usr/lib/mysql/libmysqlclient_r.so
/usr/lib/mysql/libmysqlclient_r.a
/usr/lib/mysql/libmysqlclient_r.la
/usr/lib/mysql/libmysqlclient.a
/usr/lib/libmysqlclient.so
/usr/lib/libmysqlclient_r.so
/usr/lib/libmysqlclient_r.so.15
/usr/lib/libmysqlclient_r.so.15.0.0
/usr/lib/libmysqlclient.so.15.0.0
So my question is this: how can I tell python (version 2.4.3) which version of libmysqlclient to use?
A:
You can't tell the dynamic linker which version of a library to use, because the SONAME (full name of the library + interface) is part of the binary.
In your case, you can try to upload libmysqlclient_r.so.14 to the host and set LD_LIBRARY_PATH accordingly, which tells the dynamic linker which directories to search in addition to the system dirs when resolving shared objects.
You can use ldd to see if LD_LIBRARY_PATH works:
$ ldd $path_to/_mysql.so
...
libmysqlclient_r.so.14 => $path_to_lib/libmysqlclient_r.so.14
...
Otherwise, there will be an error message about unresolved shared objects.
Of course that can only be a temporary fix until you rebuild MySQLdb to use the new libraries.
A:
You will have to recompile python-mysql (aka MySQLdb) to get it to link to the new version of libmysqlclient.
If your host originally set up the environment rather than you compiling it, you'll have to pester them.
/usr/lib/libmysqlclient.so.14
This looks like a remnant of the old libmysqlclient, and should be removed. The _r and .a (static) versions are gone and you don't really want a mixture of libraries still around, it will only risk confusing automake.
Whilst you could make a symbolic link from libmysqlclient_r.so.14 to .15, that'd only work if the new version of the client happened to have the same ABI for the functions you wanted to use as the old - and that's pretty unlikely, as that's the whole point of changing the version number.
A:
One solution is to set your LD_LIBRARY_PATH environment variable to include some local directory, and copy over (or link, I suppose) the version of the mysql lib you want.
| How can I tell python which version of libmysqlclient.so to use? | I'm running a python script on a shared hosting server which until this morning had MySQL version 4. Now it has version 5. My python script can no longer connect to MySQL, as it can't find libmysqlclient_r.so.14:
$ python my_script.py
Traceback (most recent call last):
File "my_script.py", line 6, in ?
import MySQLdb
File "/home/lib/python2.4/site-packages/PIL-1.1.6-py2.4-linux-i686.egg/__init__.py", line 19, in ?
File "build/bdist.linux-i686/egg/_mysql.py", line 7, in ?
File "build/bdist.linux-i686/egg/_mysql.py", line 6, in __bootstrap__
ImportError: libmysqlclient_r.so.14: cannot open shared object file: No such file or directory
There are various other versions of libmysqlclient in /usr/lib:
/usr/lib/libmysqlclient.so.15
/usr/lib/libmysqlclient.so.14
/usr/lib/mysql/libmysqlclient.la
/usr/lib/mysql/libmysqlclient.so
/usr/lib/mysql/libmysqlclient_r.so
/usr/lib/mysql/libmysqlclient_r.a
/usr/lib/mysql/libmysqlclient_r.la
/usr/lib/mysql/libmysqlclient.a
/usr/lib/libmysqlclient.so
/usr/lib/libmysqlclient_r.so
/usr/lib/libmysqlclient_r.so.15
/usr/lib/libmysqlclient_r.so.15.0.0
/usr/lib/libmysqlclient.so.15.0.0
So my question is this: how can I tell python (version 2.4.3) which version of libmysqlclient to use?
| [
"You can't tell the dynamic linker which version of a library to use, because the SONAME (full name of the library + interface) is part of the binary.\nIn your case, you can try to upload libmysqlclient_r.so.14 to the host and set LD_LIBRARY_PATH accordingly, so tell the dynamic linker which directories to search additionally to the system dirs when resolving shared objects.\nYou can use ldd to see if it LD_LIBRARY_PATH works:\n$ ldd $path_to/_mysql.so\n...\nlibmysqlclient_r.so.14 => $path_to_lib/libmysqlclient_r.so.14\n...\n\nOtherwise, there will be an error message about unresolved shared objects.\nOf course that can only be a temporary fix until you rebuild MySQLdb to use the new libraries.\n",
"You will have to recompile python-mysql (aka MySQLdb) to get it to link to the new version of libmysqlclient.\nIf your host originally set up the environment rather than you compiling it, you'll have to pester them.\n\n/usr/lib/libmysqlclient.so.14\n\nThis looks like a remnant of the old libmysqlclient, and should be removed. The _r and .a (static) versions are gone and you don't really want a mixture of libraries still around, it will only risk confusing automake.\nWhilst you could make a symbolic link from libmysqlclient_r.so.14 to .15, that'd only work if the new version of the client happened to have the same ABI for the functions you wanted to use as the old - and that's pretty unlikely, as that's the whole point of changing the version number.\n",
"One solution is to set your PYTHONPATH environment variable to have some local directory, and copy over (or link, I suppose) the version of the mysql lib you want.\n"
] | [
5,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0000511011_python.txt |
Q:
Django Installed Apps Location
I am an experienced PHP programmer using Django for the first time, and I think it is incredible!
I have a project that has a lot of apps, so I wanted to group them in an apps folder.
So the structure of the project is:
/project/
/project/apps/
/project/apps/app1/
/project/apps/app2
Then in Django settings I have put this:
INSTALLED_APPS = (
'project.apps.app1',
'project.apps.app2',
)
This does not seem to work?
Any ideas on how you can put all your apps into a separate folder and not in the project root?
Many thanks.
A:
Make sure that the '__init__.py' file is in your apps directory, if it's not there it won't be recognized as part of the package.
So each of the folders here should have '__init__.py' file in it. (empty is fine).
/project/
/project/apps/
/project/apps/app1/
/project/apps/app2
Then as long as your root 'module' folder is in your PYTHONPATH you'll be able to import from your apps.
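A quick sanity check, from an interpreter started in the directory that contains project/:
>>> import project.apps.app1   # succeeds only if project/, apps/ and app1/ each have __init__.py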
Here's the documentation regarding the python search path for your reading pleasure:
http://docs.python.org/install/index.html#modifying-python-s-search-path
And a nice simple explanation of what __init__.py file is for:
http://effbot.org/pyfaq/what-is-init-py-used-for.htm
A:
As long as your apps are in your PYTHONPATH, everything should work. Try setting that environment variable to the folder containing your apps.
PYTHONPATH="/path/to/your/apps/dir/:$PYTHONPATH"
A:
Your top-level urls.py (also named in your settings.py) must be able to use a simple "import" statement to get your applications.
Does import project.apps.app1.urls work? If not, then your PYTHONPATH isn't set up properly, or you didn't install your project in Python's site-packages directory.
I suggest using the PYTHONPATH environment variable, instead of installing into site-packages. Django applications (to me, anyway) seem easier to manage when outside site-packages.
We do the following:
Django projects go in /opt/project/.
PYTHONPATH includes /opt/project.
Our settings.py uses apps.this and apps.that (note that the project part of the name is part of the PYTHONPATH, not part of the import).
| Django Installed Apps Location | I am an experienced PHP programmer using Django for the first time, and I think it is incredible!
I have a project that has a lot of apps, so I wanted to group them in an apps folder.
So the structure of the project is:
/project/
/project/apps/
/project/apps/app1/
/project/apps/app2
Then in Django settings I have put this:
INSTALLED_APPS = (
'project.apps.app1',
'project.apps.app2',
)
This does not seem to work?
Any ideas on how you can put all your apps into a separate folder and not in the project root?
Many thanks.
| [
"Make sure that the '__init__.py' file is in your apps directory, if it's not there it won't be recognized as part of the package.\nSo each of the folders here should have '__init__.py' file in it. (empty is fine).\n/project/\n/project/apps/\n/project/apps/app1/\n/project/apps/app2\n\nThen as long as your root 'module' folder is in your PYTHONPATH you'll be able to import from your apps.\nHere's the documentation regarding the python search path for your reading pleasure:\nhttp://docs.python.org/install/index.html#modifying-python-s-search-path\nAnd a nice simple explanation of what __init__.py file is for:\nhttp://effbot.org/pyfaq/what-is-init-py-used-for.htm\n",
"As long as your apps are in your PYTHONPATH, everything should work. Try setting that environment variable to the folder containing your apps.\nPYTHONPATH=\"/path/to/your/apps/dir/:$PYTHONPATH\"\n\n",
"Your top-level urls.py (also named in your settings.py) must be able to use a simple \"import\" statement to get your applications.\nDoes import project.apps.app1.urls work? If not, then your PYTHONPATH isn't set up properly, or you didn't install your project in Python's site-packages directory.\nI suggest using the PYTHONPATH environment variable, instead of installing into site-packages. Django applications (to me, anyway) seem easier to manage when outside site-packages.\nWe do the following:\n\nDjango projects go in /opt/project/. \nPYTHONPATH includes /opt/project.\nOur settings.py uses apps.this and apps.that (note that the project part of the name is part of the PYTHONPATH, not part of the import.\n\n"
] | [
41,
2,
1
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000511291_django_python.txt |
Q:
creating blank field and receiving the INTEGER PRIMARY KEY with sqlite, python
I am using sqlite with python. When I insert into table A I need to feed it an ID from table B. So what I wanted to do is insert default data into B, grab the id (which is auto increment) and use it in table A. What's the best way to retrieve the key from the table I just inserted into?
A:
As Christian said, sqlite3_last_insert_rowid() is what you want... but that's the C level API, and you're using the Python DB-API bindings for SQLite.
It looks like the cursor method lastrowid will do what you want (search for 'lastrowid' in the documentation for more information). Insert your row with cursor.execute( ... ), then do something like lastid = cursor.lastrowid to check the last ID inserted.
That you say you need "an" ID worries me, though... it doesn't matter which ID you have? Unless you are using the data just inserted into B for something, in which case you need that row ID, your database structure is seriously screwed up if you just need any old row ID for table B.
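A hedged sketch of the insert-into-B-then-A flow (the table and column names are invented for illustration):
import sqlite3

conn = sqlite3.connect('example.db')
cur = conn.cursor()
cur.execute("INSERT INTO b (name) VALUES (?)", ('default',))
b_id = cur.lastrowid   # the auto-increment id of the row just inserted
cur.execute("INSERT INTO a (b_id) VALUES (?)", (b_id,))
conn.commit()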
A:
Check out sqlite3_last_insert_rowid() -- it's probably what you're looking for:
Each entry in an SQLite table has a
unique 64-bit signed integer key
called the "rowid". The rowid is
always available as an undeclared
column named ROWID, OID, or _ROWID_ as
long as those names are not also used
by explicitly declared columns. If the
table has a column of type INTEGER
PRIMARY KEY then that column is
another alias for the rowid.
This routine returns the rowid of the
most recent successful INSERT into the
database from the database connection
in the first argument. If no
successful INSERTs have ever occurred
on that database connection, zero is
returned.
Hope it helps! (More info on ROWID is available here and here.)
A:
Simply use:
SELECT last_insert_rowid();
However, if you have multiple connections writing to the database, you might not get back the key that you expect.
| creating blank field and receiving the INTEGER PRIMARY KEY with sqlite, python | I am using sqlite with python. When I insert into table A I need to feed it an ID from table B. So what I wanted to do is insert default data into B, grab the id (which is auto increment) and use it in table A. What's the best way to retrieve the key from the table I just inserted into?
| [
"As Christian said, sqlite3_last_insert_rowid() is what you want... but that's the C level API, and you're using the Python DB-API bindings for SQLite.\nIt looks like the cursor method lastrowid will do what you want (search for 'lastrowid' in the documentation for more information). Insert your row with cursor.execute( ... ), then do something like lastid = cursor.lastrowid to check the last ID inserted.\nThat you say you need \"an\" ID worries me, though... it doesn't matter which ID you have? Unless you are using the data just inserted into B for something, in which case you need that row ID, your database structure is seriously screwed up if you just need any old row ID for table B.\n",
"Check out sqlite3_last_insert_rowid() -- it's probably what you're looking for:\n\nEach entry in an SQLite table has a\n unique 64-bit signed integer key\n called the \"rowid\". The rowid is\n always available as an undeclared\n column named ROWID, OID, or _ROWID_ as\n long as those names are not also used\n by explicitly declared columns. If the\n table has a column of type INTEGER\n PRIMARY KEY then that column is\n another alias for the rowid.\nThis routine returns the rowid of the\n most recent successful INSERT into the\n database from the database connection\n in the first argument. If no\n successful INSERTs have ever occurred\n on that database connection, zero is\n returned.\n\nHope it helps! (More info on ROWID is available here and here.)\n",
"Simply use:\nSELECT last_insert_rowid();\n\nHowever, if you have multiple connections writing to the database, you might not get back the key that you expect.\n"
] | [
7,
1,
1
] | [] | [] | [
"python",
"sqlite"
] | stackoverflow_0000510135_python_sqlite.txt |
Q:
How to call a python function from a foreign language thread (C++)
I am developing a program that uses DirectShow to grab audio data from
media files. DirectShow uses a thread to pass the audio data to the
callback function in my program, and I let that callback function call
another function in Python.
I use Boost.Python to wrap my library; the callback function:
class PythonCallback {
private:
object m_Function;
public:
PythonCallback(object obj)
: m_Function(obj)
{}
void operator() (double time, const AudioData &data) {
// Call the callback function in python
m_Function(time, data);
}
};
Here comes the problem: a thread of DirectShow calls my
PythonCallback, which in turn calls the function in Python. As soon as
it does, my program just crashes. I figured this must be a threading
problem. Then I found this document:
http://docs.python.org/c-api/init.html
It seems that my program can't call a Python function from a foreign
thread directly, because of the Global Interpreter Lock. Python's GIL
is complex and I have no idea how it works; I'm sorry, all I can do
is ask. My question is: what should I do before and after I call a
Python function from threads?
It might look like this:
void operator() (double time, const AudioData &data) {
// acquire lock
m_Function(time, data);
// release lock
}
Thanks.
Victor Lin.
A:
Take a look at PyGILState_Ensure()/PyGILState_Release(), from PEP 311
http://www.python.org/dev/peps/pep-0311/
Here is an example taken from the PEP itself:
void SomeCFunction(void)
{
/* ensure we hold the lock */
PyGILState_STATE state = PyGILState_Ensure();
/* Use the Python API */
...
/* Restore the state of Python */
PyGILState_Release(state);
}
A:
Have the c++ callback place the data in a queue. Have the python code poll the queue to extract the data.
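A rough sketch of the Python side of that approach, assuming the C++ callback can deposit (time, data) pairs into a standard Queue.Queue (the put() call itself still needs the GIL briefly, but all the heavy Python work then happens on a Python-owned thread):
import Queue  # Python 2 standard library

audio_queue = Queue.Queue()   # handed to the extension, which put()s items into it

def consume_audio(handle_chunk):
    while True:
        try:
            timestamp, data = audio_queue.get(timeout=1.0)
        except Queue.Empty:
            continue              # nothing arrived yet; poll again
        handle_chunk(timestamp, data)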
| How to call a python function from a foreign language thread (C++) | I am developing a program that uses DirectShow to grab audio data from
media files. DirectShow uses a thread to pass the audio data to the
callback function in my program, and I let that callback function call
another function in Python.
I use Boost.Python to wrap my library; the callback function:
class PythonCallback {
private:
object m_Function;
public:
PythonCallback(object obj)
: m_Function(obj)
{}
void operator() (double time, const AudioData &data) {
// Call the callback function in python
m_Function(time, data);
}
};
Here comes the problem: a thread of DirectShow calls my
PythonCallback, which in turn calls the function in Python. As soon as
it does, my program just crashes. I figured this must be a threading
problem. Then I found this document:
http://docs.python.org/c-api/init.html
It seems that my program can't call a Python function from a foreign
thread directly, because of the Global Interpreter Lock. Python's GIL
is complex and I have no idea how it works; I'm sorry, all I can do
is ask. My question is: what should I do before and after I call a
Python function from threads?
It might look like this:
void operator() (double time, const AudioData &data) {
// acquire lock
m_Function(time, data);
// release lock
}
Thanks.
Victor Lin.
| [
"Take a look at PyGILState_Ensure()/PyGILState_Release(), from PEP 311\nhttp://www.python.org/dev/peps/pep-0311/\nHere is an example taken from the PEP itself:\nvoid SomeCFunction(void)\n{\n /* ensure we hold the lock */\n PyGILState_STATE state = PyGILState_Ensure();\n /* Use the Python API */\n ...\n /* Restore the state of Python */\n PyGILState_Release(state);\n}\n\n",
"Have the c++ callback place the data in a queue. Have the python code poll the queue to extract the data.\n"
] | [
6,
1
] | [] | [] | [
"boost",
"c++",
"locking",
"multithreading",
"python"
] | stackoverflow_0000510085_boost_c++_locking_multithreading_python.txt |
Q:
Using Python Ctypes for ssdeep's fuzzy.dll but receive error
I am trying to use Python and ctypes to use the fuzzy.dll from ssdeep. So far everything I have tried fails with an access violation error. Here is what I do after changing to the proper directory which contains the fuzzy.dll and fuzzy.def files:
>>> import os,sys
>>> from ctypes import *
>>> fn = create_string_buffer(os.path.abspath("fuzzy.def"))
>>> fuzz = windll.fuzzy
>>> chash = c_char_p(512)
>>> hstat = fuzz.fuzzy_hash_filename(fn,chash)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
WindowsError: exception: access violation writing 0x00000200
>>>
From what I understand, I have passed the proper c_types. From fuzzy.h:
extern int fuzzy_hash_filename(char * filename, char * result)
I just cannot get past that access violation.
A:
There are two problems with your code:
You should not use windll.fuzzy, but cdll.fuzzy -- from ctypes documentation:
cdll loads libraries which export functions using the standard cdecl calling convention, while windll libraries call functions using the stdcall calling convention.
For return value (chash), you should declare a buffer rather than creating a pointer to 0x0000200 (=512) -- this is where the access violation comes from. Use create_string_buffer('\000' * 512) instead.
So your example should look like this:
>>> import os, sys
>>> from ctypes import *
>>> fn = create_string_buffer(os.path.abspath("fuzzy.def"))
>>> fuzz = cdll.fuzzy
>>> chash = create_string_buffer('\000' * 512)
>>> hstat = fuzz.fuzzy_hash_filename(fn,chash)
>>> print hstat
0 # == success
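As a further safeguard against crashes like the one in the question, declaring argtypes and restype up front lets ctypes type-check the call; this is a sketch, assuming (as above) that 512 bytes is enough for the result buffer:
from ctypes import CDLL, c_char_p, c_int, create_string_buffer

fuzzy = CDLL('fuzzy.dll')                      # equivalent to cdll.fuzzy
fuzzy.fuzzy_hash_filename.argtypes = [c_char_p, c_char_p]
fuzzy.fuzzy_hash_filename.restype = c_int

result = create_string_buffer(512)             # output buffer for the hash
status = fuzzy.fuzzy_hash_filename('fuzzy.def', result)
if status == 0:
    print result.value                         # the computed fuzzy hash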
| Using Python Ctypes for ssdeep's fuzzy.dll but receive error | I am trying to use Python and ctypes to use the fuzzy.dll from ssdeep. So far everything I have tried fails with an access violation error. Here is what I do after changing to the proper directory which contains the fuzzy.dll and fuzzy.def files:
>>> import os,sys
>>> from ctypes import *
>>> fn = create_string_buffer(os.path.abspath("fuzzy.def"))
>>> fuzz = windll.fuzzy
>>> chash = c_char_p(512)
>>> hstat = fuzz.fuzzy_hash_filename(fn,chash)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
WindowsError: exception: access violation writing 0x00000200
>>>
From what I understand, I have passed the proper c_types. From fuzzy.h:
extern int fuzzy_hash_filename(char * filename, char * result)
I just cannot get past that access violation.
| [
"There are two problems with your code:\n\nYou should not use windll.fuzzy, but cdll.fuzzy -- from ctypes documentation:\n\ncdll loads libraries which export functions using the standard cdecl calling convention, while windll libraries call functions using the stdcall calling convention.\n\n\nFor return value (chash), you should declare a buffer rather than creating a pointer to 0x0000200 (=512) -- this is where the access violation comes from. Use create_string_buffer('\\000' * 512) instead.\n\n\nSo your example should look like this:\n>>> import os, sys\n>>> from ctypes import *\n>>> fn = create_string_buffer(os.path.abspath(\"fuzzy.def\"))\n>>> fuzz = cdll.fuzzy\n>>> chash = create_string_buffer('\\000' * 512)\n>>> hstat = fuzz.fuzzy_hash_filename(fn,chash)\n>>> print hstat\n0 # == success\n\n"
] | [
4
] | [] | [] | [
"ctypes",
"python"
] | stackoverflow_0000510443_ctypes_python.txt |
Q:
Why doesn't PyRun_String evaluate bool literals?
I need to evaluate a Python expression from C++. This code seems to work:
PyObject * dict = PyDict_New();
PyObject * val = PyRun_String(expression, Py_eval_input, dict, 0);
Py_DECREF(dict);
Unfortunately, it fails horribly if expression is "True" or "False" (that is, val is 0 and PyErr_Occurred() returns true). What am I doing wrong? Shouldn't they evaluate to Py_True and Py_False respectively?
A:
PyObject* PyRun_String(const char *str, int start, PyObject *globals, PyObject *locals);
If you want True and False they will have to be in the *globals dict passed to the interpreter. You might be able to fix that by calling PyEval_GetBuiltins.
From the Python 2.6 source code:
if (PyDict_GetItemString(globals, "__builtins__") == NULL) {
if (PyDict_SetItemString(globals, "__builtins__",
PyEval_GetBuiltins()) != 0)
return NULL;
}
If that doesn't work, you could try PyRun_String("import __builtin__ as __builtins__", Py_file_input, globals, locals) before calling PyRun_String("True", ...).
You might notice the Python interactive interpreter always runs code in the __main__ module which we haven't bothered to create here. I don't know whether you need to have a __main__ module, except that there are a lot of scripts that contain if __name__ == "__main__".
| Why doesn't PyRun_String evaluate bool literals? | I need to evaluate a Python expression from C++. This code seems to work:
PyObject * dict = PyDict_New();
PyObject * val = PyRun_String(expression, Py_eval_input, dict, 0);
Py_DECREF(dict);
Unfortunately, it fails horribly if expression is "True" or "False" (that is, val is 0 and PyErr_Occurred() returns true). What am I doing wrong? Shouldn't they evaluate to Py_True and Py_False respectively?
| [
"PyObject* PyRun_String(const char *str, int start, PyObject *globals, PyObject *locals);\n\nIf you want True and False they will have to be in the *globals dict passed to the interpreter. You might be able to fix that by calling PyEval_GetBuiltins.\nFrom the Python 2.6 source code:\nif (PyDict_GetItemString(globals, \"__builtins__\") == NULL) {\n if (PyDict_SetItemString(globals, \"__builtins__\",\n PyEval_GetBuiltins()) != 0)\n return NULL;\n}\n\nIf that doesn't work, you could try to PyRun_String(\"import __builtin__ as __builtins__\", globals, locals) before calling PyRun_String(\"True\", ...).\nYou might notice the Python interactive interpreter always runs code in the __main__ module which we haven't bothered to create here. I don't know whether you need to have a __main__ module, except that there are a lot of scripts that contain if __name__ == \"__main__\".\n"
] | [
5
] | [] | [] | [
"boolean",
"cpython",
"python"
] | stackoverflow_0000512036_boolean_cpython_python.txt |
Q:
How to access templates in Python?
Sometimes, for a program with a lot of data, it is common to place the data in an external file. An example is a script that produces an HTML report, using an external file to hold a template.
In Java, the most recommended way to retrieve a resource of the program is to use getClass().getClassLoader().getResource() or getClass().getClassLoader().getResourceAsStream() for a stream.
The advantage is that this is independent of the file system. Also, it works whether the classes are in the file system or the application is distributed as a Jar file.
How do you achieve the same in Python ? What if you're using py2exe or Freeze to generate a stand-alone running app, as seen in this question ?
A:
You can use os.path.dirname(__file__) to get the directory of the current module. Then use the path manipulation functions (specifically, os.path.join) and file input/output to open a file under the current module.
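For instance, a minimal sketch of that idea, assuming the templates sit in a templates/ directory next to the module (load_template is a made-up helper name):
import os

_MODULE_DIR = os.path.dirname(os.path.abspath(__file__))

def load_template(name):
    # Resolve relative to this module, not the process working directory
    path = os.path.join(_MODULE_DIR, 'templates', name)
    f = open(path)
    try:
        return f.read()
    finally:
        f.close()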
A:
What Daniel said. :-) Also, py2exe can be told to include external files (this is often used for images etc).
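To cover the py2exe case too, one hedged pattern is to check for the sys.frozen attribute that py2exe sets on the bundled interpreter and fall back to the executable's directory:
import os, sys

def resource_dir():
    if getattr(sys, 'frozen', False):          # running from a py2exe build
        return os.path.dirname(sys.executable)
    return os.path.dirname(os.path.abspath(__file__))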
| How to access templates in Python? | Sometimes, for a program with a lot of data, it is common to place the data in an external file. An example is a script that produces an HTML report, using an external file to hold a template.
In Java, the most recommended way to retrieve a resource of the program is to use getClass().getClassLoader().getResource() or getClass().getClassLoader().getResourceAsStream() for a stream.
The advantage is that this is independent of the file system. Also, it works whether the classes are in the file system or the application is distributed as a Jar file.
How do you achieve the same in Python ? What if you're using py2exe or Freeze to generate a stand-alone running app, as seen in this question ?
| [
"You can use os.path.dirname(__file__) to get the directory of the current module. Then use the path manipulation functions (specifically, os.path.join) and file input/output to open a file under the current module.\n",
"What Daniel said. :-) Also, py2exe can be told to include external files (this is often used for images etc).\n"
] | [
2,
1
] | [] | [] | [
"python",
"templates"
] | stackoverflow_0000512499_python_templates.txt |
Q:
How can I write a wrapper around ngrep that highlights matches?
I just learned about ngrep, a cool program that lets you easily sniff packets that match a particular string.
The only problem is that it can be hard to see the match in the big blob of output. I'd like to write a wrapper script to highlight these matches -- it could use ANSI escape sequences:
echo -e 'This is \e[31mRED\e[0m.'
I'm most familiar with Perl, but I'm perfectly happy with a solution in Python or any other language. The simplest approach would be something like:
while (<STDIN>) {
s/$keyword/\e[31m$keyword\e[0m/g;
print;
}
However, this isn't a nice solution, because ngrep prints out hash marks without newlines whenever it receives a non-matching packet, and the code above will suppress the printing of these hashmarks until the script sees a newline.
Is there any way to do the highlighting without inhibiting the instant appearance of the hashmarks?
A:
This seems to do the trick, at least comparing two windows, one running a straight ngrep (e.g. ngrep whatever) and one being piped into the following program (with ngrep whatever | ngrephl target-string).
#! /usr/bin/perl
use strict;
use warnings;
$| = 1; # autoflush on
my $keyword = shift or die "No pattern specified\n";
my $cache = '';
while (read STDIN, my $ch, 1) {
if ($ch eq '#') {
$cache =~ s/($keyword)/\e[31m$1\e[0m/g;
syswrite STDOUT, "$cache$ch";
$cache = '';
}
else {
$cache .= $ch;
}
}
A:
Ah, forget it. This is too much of a pain. It was a lot easier to get the source to ngrep and make it print the hash marks to stderr:
--- ngrep.c 2006-11-28 05:38:43.000000000 -0800
+++ ngrep.c.new 2008-10-17 16:28:29.000000000 -0700
@@ -687,8 +687,7 @@
}
if (quiet < 1) {
- printf("#");
- fflush(stdout);
+ fprintf (stderr, "#");
}
switch (ip_proto) {
Then, filtering is a piece of cake:
while (<CMD>) {
s/($keyword)/\e[93m$1\e[0m/g;
print;
}
A:
You could also pipe the output through ack. The --passthru flag will help.
A:
It shouldn't be too hard if you have the answer to this question.
(Essentially, read one character at a time and if it's a hash, print it. If it isn't a hash, save the character to print out later.)
A:
why not just call ngrep with the -q parameter to eliminate the hash marks?
A:
This is easy in python.
#!/usr/bin/env python
import sys, re
keyword = 'RED'
while 1:
c = sys.stdin.read(1)
if not c:
break
if c in '#\n':
sys.stdout.write(c)
else:
sys.stdout.write(
(c+sys.stdin.readline()).replace(
keyword, '\x1b[31m%s\x1b[0m\r' % keyword))
| How can I write a wrapper around ngrep that highlights matches? | I just learned about ngrep, a cool program that lets you easily sniff packets that match a particular string.
The only problem is that it can be hard to see the match in the big blob of output. I'd like to write a wrapper script to highlight these matches -- it could use ANSI escape sequences:
echo -e 'This is \e[31mRED\e[0m.'
I'm most familiar with Perl, but I'm perfectly happy with a solution in Python or any other language. The simplest approach would be something like:
while (<STDIN>) {
s/$keyword/\e[31m$keyword\e[0m/g;
print;
}
However, this isn't a nice solution, because ngrep prints out hash marks without newlines whenever it receives a non-matching packet, and the code above will suppress the printing of these hashmarks until the script sees a newline.
Is there any way to do the highlighting without inhibiting the instant appearance of the hashmarks?
| [
"This seems to do the trick, at least comparing two windows, one running a straight ngrep (e.g. ngrep whatever) and one being piped into the following program (with ngrep whatever | ngrephl target-string).\n#! /usr/bin/perl\n\nuse strict;\nuse warnings;\n\n$| = 1; # autoflush on\n\nmy $keyword = shift or die \"No pattern specified\\n\";\nmy $cache = '';\n\nwhile (read STDIN, my $ch, 1) {\n if ($ch eq '#') {\n $cache =~ s/($keyword)/\\e[31m$1\\e[0m/g;\n syswrite STDOUT, \"$cache$ch\";\n $cache = '';\n }\n else {\n $cache .= $ch;\n }\n}\n\n",
"Ah, forget it. This is too much of a pain. It was a lot easier to get the source to ngrep and make it print the hash marks to stderr:\n--- ngrep.c 2006-11-28 05:38:43.000000000 -0800\n+++ ngrep.c.new 2008-10-17 16:28:29.000000000 -0700\n@@ -687,8 +687,7 @@\n }\n\n if (quiet < 1) {\n- printf(\"#\");\n- fflush(stdout);\n+ fprintf (stderr, \"#\");\n }\n\n switch (ip_proto) { \n\nThen, filtering is a piece of cake:\nwhile (<CMD>) {\n s/($keyword)/\\e[93m$1\\e[0m/g;\n print;\n}\n\n",
"You could also pipe the output through ack. The --passthru flag will help.\n",
"It shouldn't be too hard if you have the answer this question.\n(Essentially, read one character at a time and if it's a hash, print it. If it isn't a hash, save the character to print out later.)\n",
"why not just call ngrep with the -q parameter to eliminate the hash marks?\n",
"This is easy in python.\n#!/usr/bin/env python\nimport sys, re\n\nkeyword = 'RED'\n\nwhile 1:\n c = sys.stdin.read(1)\n if not c:\n break\n if c in '#\\n':\n sys.stdout.write(c)\n else:\n sys.stdout.write(\n (c+sys.stdin.readline()).replace(\n keyword, '\\x1b[31m%s\\x1b[0m\\r' % keyword))\n\n"
] | [
4,
3,
3,
1,
1,
0
] | [
"See the script at this post to Linux-IL where someone asked a similar question. It's written in Perl and uses the CPAN Term::ANSIColor module.\n"
] | [
-1
] | [
"networking",
"perl",
"python",
"unix"
] | stackoverflow_0000214059_networking_perl_python_unix.txt |
Q:
Various Python datetime issues
I have two methods that I'm using as custom tags in a template engine:
# Renders a <select> form field
def select_field(options, selected_item, field_name):
options = [(str(v),str(v)) for v in options]
html = ['<select name="%s">' % field_name]
for k,v in options:
tmp = '<option '
if k == selected_item:
tmp += 'selected '
tmp += 'value="%s">%s</option>' % (k,v)
html.append(tmp)
html.append('</select>')
return '\n'.join(html)
# Renders a collection of <select> fields for datetime values
def datetime_field(current_dt, field_name):
if current_dt == None:
current_dt = datetime.datetime.now()
day = select_field(range(1, 32), current_dt.day, field_name + "_day")
month = select_field(range(1, 13), current_dt.month, field_name + "_month")
year = select_field(range(datetime.datetime.now().year, datetime.datetime.now().year + 10), current_dt.year, field_name + "_year")
hour = select_field(range(1, 13), current_dt.hour, field_name + "_hour")
minute = select_field(range(1, 60), current_dt.minute, field_name + "_minute")
period = select_field(['AM', 'PM'], 'AM', field_name + "_period")
return "\n".join([day, '/', month, '/', year, ' at ', hour, ':', minute, period])
As you can see in the comments, I'm attempting to generate select fields for building a date and time value.
My first question is, how do I make the day, month, hour, and minute ranges use two digits? For example, the code generates "1, 2, 3..." while I would like "01, 02, 03".
Next, when the fields get generated, the values of the supplied datetime are not selected automatically, even though I'm telling the select_field method to add a "selected" attribute when the value is equal to the supplied datetime's value.
Finally, how do I get the 12-hour period identifier (AM/PM) for the supplied datetime? For the moment, I'm simply selecting 'AM' by default, but I would like to supply this value from the datetime itself.
A:
To turn an integer range into two digit strings:
>>> range(13)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
>>> [ '%02d' % i for i in range(13) ]
['00', '01', '02', '03', '04', '05', '06', '07', '08', '09', '10', '11', '12']
Then to get the AM/PM indicator:
>>> import datetime
>>> current_dt = datetime.datetime.now()
>>> current_dt
datetime.datetime(2009, 2, 4, 22, 2, 14, 390000)
>>> ['AM','PM'][current_dt.hour>=12]
'PM'
Voila!
A:
Question 1:
>>> '%02d' % 2
'02'
>>> '%02d' % 59
'59'
A:
To add to my previous answer, some comments on making the selected option selected.
First, your HTML generation code would be more robust if rather than splitting the start and end of the HTML tags...
html = ['<select name="%s">' % field_name]
for k,v in options:
tmp = '<option '
if k == selected_item:
tmp += 'selected '
tmp += 'value="%s">%s</option>' % (k,v)
html.append(tmp)
html.append('</select>')
... you generated them as atomic wholes:
opt_bits = ('<option %s value="%s">%s</option>' % (['', 'selected'][str(v) == selected_item], str(v), str(v)) for v in options)
# If running Python 2.5, it's clearer to use: 'selected' if str(v) == selected_item else ''
html = '<select name="%s">%s</select>' % (field_name, '\n'.join(opt_bits))
Second, I'm not sure why you double the options list rather than reusing the same variable name, as I did above:
options = [(str(v),str(v)) for v in options] # ????
This is unnecessary, unless options is really a dict and you intended something like:
[ (k,v) for k,v in options.iteritems() ]
Finally, your code will break if the list items contain quotes. Look in the Python documentation for standard functions for escaping HTML.
Good luck!
A:
Supply a format string to the select_field method to show leading zeros.
>>> print "%02i" % (1, )
01
datetime.hour is 0-23, so you have to adapt it when setting the field. This is also the answer on how to detect AM/PM. Make sure to convert it back correctly.
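Pulling the answers together, here is a sketch of an adjusted datetime_field that reuses the question's select_field unchanged. The key point is that the selected value must be formatted exactly like the option strings, otherwise the k == selected_item test never fires (an int 7 never equals the string '07'):
import datetime

def pad2(values):
    return ['%02d' % v for v in values]   # zero-padded, two-digit option strings

def datetime_field(current_dt, field_name):
    if current_dt is None:
        current_dt = datetime.datetime.now()
    hour12 = current_dt.hour % 12 or 12             # 0-23 clock -> 1-12 clock
    period = ['AM', 'PM'][current_dt.hour >= 12]    # derive AM/PM from the 24h hour
    year_now = datetime.datetime.now().year
    day = select_field(pad2(range(1, 32)), '%02d' % current_dt.day,
                       field_name + '_day')
    month = select_field(pad2(range(1, 13)), '%02d' % current_dt.month,
                         field_name + '_month')
    year = select_field([str(y) for y in range(year_now, year_now + 10)],
                        str(current_dt.year), field_name + '_year')
    hour = select_field(pad2(range(1, 13)), '%02d' % hour12,
                        field_name + '_hour')
    minute = select_field(pad2(range(60)),          # range(60): minute 00 is valid
                          '%02d' % current_dt.minute, field_name + '_minute')
    ampm = select_field(['AM', 'PM'], period, field_name + '_period')
    return '\n'.join([day, '/', month, '/', year, ' at ', hour, ':', minute, ampm])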
| Various Python datetime issues | I have two methods that I'm using as custom tags in a template engine:
# Renders a <select> form field
def select_field(options, selected_item, field_name):
options = [(str(v),str(v)) for v in options]
html = ['<select name="%s">' % field_name]
for k,v in options:
tmp = '<option '
if k == selected_item:
tmp += 'selected '
tmp += 'value="%s">%s</option>' % (k,v)
html.append(tmp)
html.append('</select>')
return '\n'.join(html)
# Renders a collection of <select> fields for datetime values
def datetime_field(current_dt, field_name):
if current_dt == None:
current_dt = datetime.datetime.now()
day = select_field(range(1, 32), current_dt.day, field_name + "_day")
month = select_field(range(1, 13), current_dt.month, field_name + "_month")
year = select_field(range(datetime.datetime.now().year, datetime.datetime.now().year + 10), current_dt.year, field_name + "_year")
hour = select_field(range(1, 13), current_dt.hour, field_name + "_hour")
minute = select_field(range(1, 60), current_dt.minute, field_name + "_minute")
period = select_field(['AM', 'PM'], 'AM', field_name + "_period")
return "\n".join([day, '/', month, '/', year, ' at ', hour, ':', minute, period])
As you can see in the comments, I'm attempting to generate select fields for building a date and time value.
My first question is, how do I make the day, month, hour, and minute ranges use two digits? For example, the code generates "1, 2, 3..." while I would like "01, 02, 03".
Next, when the fields get generated, the values of the supplied datetime are not selected automatically, even though I'm telling the select_field method to add a "selected" attribute when the value is equal to the supplied datetime's value.
Finally, how do I get the 12-hour period identifier (AM/PM) for the supplied datetime? For the moment, I'm simply selecting 'AM' by default, but I would like to supply this value from the datetime itself.
| [
"To turn an integer range into two digit strings:\n>>> range(13)\n[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]\n>>> [ '%02d' % i for i in range(13) ]\n['00', '01', '02', '03', '04', '05', '06', '07', '08', '09', '10', '11', '12']\n\nThen to get the AM/PM indicator:\n>>> import datetime\n>>> current_dt = datetime.datetime.now()\n>>> current_dt\ndatetime.datetime(2009, 2, 4, 22, 2, 14, 390000)\n>>> ['AM','PM'][current_dt.hour>=12]\n'PM'\n\nVoila!\n",
"Question 1:\n>>> '%02d' % 2\n'02'\n>>> '%02d' % 59\n'59'\n\n",
"To add to my previous answer, some comments on making the selected option selected. \nFirst, your HTML generation code would be more robust if rather than splitting the start and end of the HTML tags...\nhtml = ['<select name=\"%s\">' % field_name]\nfor k,v in options:\n tmp = '<option '\n if k == selected_item:\n tmp += 'selected '\n tmp += 'value=\"%s\">%s</option>' % (k,v)\n html.append(tmp)\nhtml.append('</select>')\n\n... you generated them as atomic wholes:\nopt_bits = ('<option %s value=\"%s\">'% (['','selected'][str(v)==selected_item], str(v)) for v in options)\n# If running Python2.5, it's better to use: 'selected' if str(v)==selected_item else '' \nhtml = '<select name=%s>%s</select>' % '\\n'.join(opt_bits)\n\nSecond, I'm not sure why you double the options list rather than reusing the same variable name, as I did above:\noptions = [(str(v),str(v)) for v in options] # ????\n\nThis is unnecessary, unless options is really a dict and you intended something like:\n[ (k,v) for k,v in options.iteritems() ]\n\nFinally, your code will break if the list items contain quotes. Look in the Python documentation for standard functions for escaping HTML.\nGood luck!\n",
"\nSupply a format string to the select_field method to show leading zeros.\n>>> print \"%02i\" % (1, )\n01\n\ndatetime.hour is 0-23, so you have to adapt it when setting the field. This is also the answer on how to detect AM/PM. Make sure to convert it back correctly.\n\n"
] | [
5,
1,
1,
0
] | [] | [] | [
"datetime",
"python"
] | stackoverflow_0000513291_datetime_python.txt |
Q:
Spawn subprocess that expects console input without blocking?
I am trying to do a CVS login from Python by calling the cvs.exe process.
When calling cvs.exe by hand, it prints a message to the console and then waits for the user to input the password.
When calling it with subprocess.Popen, I've noticed that the call blocks. The code is
subprocess.Popen(cvscmd, shell = True, stdin = subprocess.PIPE, stdout = subprocess.PIPE,
stderr = subprocess.PIPE)
I assume that it blocks because it's waiting for input, but my expectation was that calling Popen would return immediately and then I could call subprocess.communicate() to input the actual password. How can I achieve this behaviour and avoid blocking on Popen?
OS: Windows XP
Python: 2.6
cvs.exe: 1.11
A:
Remove the shell=True part. Your shell has nothing to do with it. Using shell=True is a common cause of trouble.
Use a list of parameters for cmd.
Example:
cmd = ['cvs',
'-d:pserver:anonymous@bayonne.cvs.sourceforge.net:/cvsroot/bayonne',
'login']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
This won't block on my system (my script continues executing).
However since cvs reads the password directly from the terminal (not from standard input or output) you can't just write the password to the subprocess' stdin.
What you could do is pass the password as part of the CVSROOT specification instead, like this:
:pserver:<user>[:<passwd>]@<server>:/<path>
I.e. a function to login to a sourceforge project:
import subprocess
def login_to_sourceforge_cvs(project, username='anonymous', password=''):
host = '%s.cvs.sourceforge.net' % project
path = '/cvsroot/%s' % project
cmd = ['cvs',
'-d:pserver:%s:%s@%s:%s' % (username, password, host, path),
'login']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE,
stdout=subprocess.PIPE
stderr=subprocess.STDOUT)
return p
This works for me. Calling
login_to_sourceforge_cvs('bayonne')
Will log in anonymously to the bayonne project's cvs.
A:
If you are automating external programs that need input - like password - your best bet would probably be to use pexpect.
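For completeness, a sketch of what that looks like. Note that pexpect drives a pseudo-terminal and is POSIX-only, so it won't help on the Windows XP setup above, and the prompt text here is an assumption:
import pexpect   # third-party module, POSIX only

child = pexpect.spawn('cvs -d:pserver:user@host:/cvsroot/project login')
child.expect('password:')        # assumed prompt text
child.sendline('secret')
child.expect(pexpect.EOF)
print child.before               # whatever cvs printed after the password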
| Spawn subprocess that expects console input without blocking? | I am trying to do a CVS login from Python by calling the cvs.exe process.
When calling cvs.exe by hand, it prints a message to the console and then waits for the user to input the password.
When calling it with subprocess.Popen, I've noticed that the call blocks. The code is
subprocess.Popen(cvscmd, shell = True, stdin = subprocess.PIPE, stdout = subprocess.PIPE,
stderr = subprocess.PIPE)
I assume that it blocks because it's waiting for input, but my expectation was that calling Popen would return immediately and then I could call subprocess.communicate() to input the actual password. How can I achieve this behaviour and avoid blocking on Popen?
OS: Windows XP
Python: 2.6
cvs.exe: 1.11
| [
"\nRemove the shell=True part. Your shell has nothing to do with it. Using shell=True is a common cause of trouble.\nUse a list of parameters for cmd.\n\nExample:\ncmd = ['cvs', \n '-d:pserver:anonymous@bayonne.cvs.sourceforge.net:/cvsroot/bayonne', \n 'login']\np = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE) \n\nThis won't block on my system (my script continues executing).\nHowever since cvs reads the password directly from the terminal (not from standard input or output) you can't just write the password to the subprocess' stdin.\nWhat you could do is pass the password as part of the CVSROOT specification instead, like this:\n:pserver:<user>[:<passwd>]@<server>:/<path>\n\nI.e. a function to login to a sourceforge project:\nimport subprocess\n\ndef login_to_sourceforge_cvs(project, username='anonymous', password=''):\n host = '%s.cvs.sourceforge.net' % project\n path = '/cvsroot/%s' % project\n cmd = ['cvs', \n '-d:pserver:%s:%s@%s:%s' % (username, password, host, path), \n 'login']\n p = subprocess.Popen(cmd, stdin=subprocess.PIPE, \n stdout=subprocess.PIPE\n stderr=subprocess.STDOUT) \n return p\n\nThis works for me. Calling\nlogin_to_sourceforge_cvs('bayonne')\n\nWill log in anonymously to the bayonne project's cvs.\n",
"If you are automating external programs that need input - like password - your best bet would probably be to use pexpect.\n"
] | [
2,
0
] | [] | [] | [
"python",
"subprocess",
"windows"
] | stackoverflow_0000510751_python_subprocess_windows.txt |
Q:
How to distribute script using gdata-python-client?
I've written several scripts that make use of the gdata API, and they all (obviously) have my API key and client ID in plain-text. How am I supposed to distribute these?
A:
Move the variables into a separate module and replace your values with dummy values. Make sure you trap for an invalid key and provide instructions on how to obtain a key and where to place it. In your code you can just import the values from that module.
import gdata_api_key
print gdata_api_key.key_value
A:
If we assume that you want clients to use their own keys I'd recommend putting them in a configuration file which defaults to an (invalid) sentinel value.
If on the other hand you want the script to use your key the best you can do is obfuscate it. After all, if your program can read it then an attacker (with a debugger) can read it too.
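A sketch of the config-file-with-sentinel approach; the file, section and key names are invented:
import ConfigParser   # Python 2 standard library

SENTINEL = 'PUT-YOUR-KEY-HERE'

config = ConfigParser.ConfigParser()
config.read('settings.ini')                 # distributed with the sentinel in it
api_key = config.get('gdata', 'api_key')
if api_key == SENTINEL:
    raise SystemExit('No API key configured; get one from Google and edit settings.ini')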
| How to distribute script using gdata-python-client? | I've written several scripts that make use of the gdata API, and they all (obviously) have my API key and client ID in plain-text. How am I supposed to distribute these?
| [
"Move the variables into a separate module and replace your values with dummy values. Make sure you trap for an invalid key and provide instructions on how to obtain a key and where to place it. In your code you can just import the values from that module.\nimport gdata_api_key\nprint gdata_api_key.key_value\n\n",
"If we assume that you want clients to use their own keys I'd recommend putting them in a configuration file which defaults to an (invalid) sentinel value.\nIf on the other hand you want the script to use your key the best you can do is obfuscate it. After all, if your program can read it then an attacker (with a debugger) can read it too.\n"
] | [
3,
0
] | [] | [] | [
"gdata_api",
"python"
] | stackoverflow_0000513806_gdata_api_python.txt |
Q:
Newline characters in non ASCII encoded files
I'm using Python 2.6 to read a latin2-encoded file with Windows line endings ('\r\n').
import codecs
file = codecs.open('stackoverflow_secrets.txt', encoding='latin2', mode='rt')
line = file.readline()
print(repr(line))
outputs : u'login: yabcok\n'
file = codecs.open('stackoverflow_secrets.txt', encoding='latin2', mode='r')
line = file.readline()
print(repr(line))
or
file = codecs.open('stackoverflow_secrets.txt', encoding='latin2', mode='rb')
line = file.readline()
print(repr(line))
outputs : u'password: l1x1%Dm\r\n'
My questions:
Why is text mode not the default? The documentation states otherwise. Is the codecs module commonly used with binary files?
Why aren't newline chars stripped from readline() output? This is annoying and redundant.
Is there a way to specify the newline character for files that are not ASCII-encoded?
A:
Are you sure that your examples are correct? The documentation of the codecs module says:
Note: Files are always opened in binary mode, even if no binary mode was specified. This is done to avoid data loss due to encodings using 8-bit values. This means that no automatic conversion of '\n' is done on reading and writing.
On my system, with a Latin-2 encoded file + DOS line endings, there's no difference between "rt", "r" and "rb" (Disclaimer: I'm using 2.5 on Linux).
The documentation for open also mentions no "t" flag, so that behavior seems a little strange.
Newline characters are not stripped from lines because not all lines returned by readline may end in newlines. If the file does not end with a newline, the last line does not carry one. (I obviously can't come up with a better explanation).
Newline characters do not differ based on the encoding (at least not among the ones which use ASCII for 0-127), only based on the platform. You can specify "U" in the mode when opening the file and Python will detect any form of newline, either Windows, Mac or Unix.
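If the goal is simply clean lines regardless of line-ending style, a portable sketch is to strip the terminator yourself instead of relying on mode flags:
import codecs

f = codecs.open('stackoverflow_secrets.txt', encoding='latin2')
for line in f:
    print repr(line.rstrip(u'\r\n'))   # drops \n, \r\n or \r endings alike
f.close()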
A:
mode='rt'
'rt' isn't a real mode as such - that will do the same as 'r'.
Why is text mode not the default?
See Torsten's answer.
Also, if you are using anything but Windows, text mode files behave identically to binary files anyway.
You may instead be thinking of 'U'niversal newlines mode, which attempts to allow other platforms' text-mode files to work. Whilst it is possible to pass a 'U' flag to codecs.open, given the doc as outlined above I think it's a bug. Certainly the results would go wrong on UTF-16 and some East Asian codecs, so don't rely on it.
Why aren't newline chars stripped from readline() output?
It is necessary to be able to tell whether the last line of the file ends with a trailing newline.
| Newline characters in non ASCII encoded files | I'm using Python 2.6 to read a latin2-encoded file with Windows line endings ('\r\n').
import codecs
file = codecs.open('stackoverflow_secrets.txt', encoding='latin2', mode='rt')
line = file.readline()
print(repr(line))
outputs : u'login: yabcok\n'
file = codecs.open('stackoverflow_secrets.txt', encoding='latin2', mode='r')
line = file.readline()
print(repr(line))
or
file = codecs.open('stackoverflow_secrets.txt', encoding='latin2', mode='rb')
line = file.readline()
print(repr(line))
outputs : u'password: l1x1%Dm\r\n'
My questions:
Why is text mode not the default? The documentation states otherwise. Is the codecs module commonly used with binary files?
Why aren't newline chars stripped from readline() output? This is annoying and redundant.
Is there a way to specify the newline character for files that are not ASCII-encoded?
| [
"Are you sure that your examples are correct? The documentation of the codecs module says: \n\nNote: Files are always opened in binary mode, even if no binary mode was specified. This is done to avoid data loss due to encodings using 8-bit values. This means that no automatic conversion of '\\n' is done on reading and writing.\n\nOn my system, with a Latin-2 encoded file + DOS line endings, there's no difference between \"rt\", \"r\" and \"rb\" (Disclaimer: I'm using 2.5 on Linux).\nThe documentation for open also mentions no \"t\" flag, so that behavior seems a little strange. \nNewline characters are not stripped from lines because not all lines returned by readline may end in newlines. If the file does not end with a newline, the last line does not carry one. (I obviously can't come up with a better explanation).\nNewline characters do not differ based on the encoding (at least not among the ones which use ASCII for 0-127), only based on the platform. You can specify \"U\" in the mode when opening the file and Python will detect any form of newline, either Windows, Mac or Unix.\n",
"\nmode='rt'\n\n'rt' isn't a real mode as such - that will do the same as 'r'. \n\nWhy text mode is not the default?\n\nSee Torsten's answer.\nAlso, if you are using anything but Windows, text mode files behave identically to binary files anyway.\nYou may instead be thinking of 'U'niversal newlines mode, which attempts to allow other platforms' text-mode files to work. Whilst it is possible to pass a 'U' flag to codecs.open, given the doc as outlined above I think it's bug. Certainly the results would go wrong on UTF-16 and some East Asian codecs, so don't rely on it.\n\nWhy newline chars aren't stripped from readline() output?\n\nIt is necessary to be able to tell whether the last line of the file ends with a trailing newline.\n"
] | [
3,
0
] | [] | [] | [
"encoding",
"file",
"python"
] | stackoverflow_0000513675_encoding_file_python.txt |
Q:
Combined Python & Ruby extension module
I have a C extension module for Python and I want to make it available to Rubyists.
The source has a number of C modules, with only one being Python-dependent. The rest depend only on each other and the standard library. I can build it with python setup.py build in the usual way.
I've been experimenting with adding Ruby support using newgem and I can build a version of the extension with rake gem. However, the combined source has an ugly directory layout (mixing Gem-style and Setuptools-style structures) and the build process is a kludge.
I can't just keep all the sources in the same directory because mkmf automatically picks up the Python-dependent module and tries to build that, and users shouldn't have to install Python to compile a module that won't be used. My current hack is for extconf.rb to copy the Python-independent source-files into the same directory as the Ruby-dependent extension module.
Is there a saner way to make the code available to both languages? Should I just duplicate the Python-independent code in a separate Gem? Should I release the independent code as a separate lib built with autotools? Is there a version of mkmf that can skip the unwanted module?
A:
One way to solve it is to create three different projects:
The library itself, independent on python & ruby
Python bindings
Ruby bindings
That's probably the cleanest solution, albeit it requires a bit more work when doing releases, but it has the advantage that you can release a new version of the Ruby bindings without having to ship a new library/python bindings version.
A:
Complementing what Johan said, I've used a couple of C/C++ support libraries in Python thanks to SWIG. You write your code in C/C++ then make an intermediary template for each language that you want to support. It's rather painless for Python, but some considerations must be made for Ruby... namely I don't think pthread support is too happy with Ruby or vice versa.
http://www.swig.org/
It's got a somewhat steep learning curve so it might be best to find an example project out there that demonstrates how to use the wrapper for your target languages.
This is definitely a useful tool as it makes your code a lot cleaner while still providing robust bindings to multiple languages (PHP, Python, Ruby, and I believe c#)
| Combined Python & Ruby extension module | I have a C extension module for Python and I want to make it available to Rubyists.
The source has a number of C modules, with only one being Python-dependent. The rest depend only on each other and the standard library. I can build it with python setup.py build in the usual way.
I've been experimenting with adding Ruby support using newgem and I can build a version of the extension with rake gem. However, the combined source has an ugly directory layout (mixing Gem-style and Setuptools-style structures) and the build process is a kludge.
I can't just keep all the sources in the same directory because mkmf automatically picks up the Python-dependent module and tries to build that, and users shouldn't have to install Python to compile a module that won't be used. My current hack is for extconf.rb to copy the Python-independent source-files into the same directory as the Ruby-dependent extension module.
Is there a saner way to make the code available to both languages? Should I just duplicate the Python-independent code in a separate Gem? Should I release the independent code as a separate lib built with autotools? Is there a version of mkmf that can skip the unwanted module?
| [
"One way to solve it is to create three different projects:\n\nThe library itself, independent on python & ruby\nPython bindings\nRuby bindings\n\nThat's probably the cleanest solution, albeit it requires a bit more work when doing releases, but it has the advantage that you can release a new version of the Ruby bindings without having to ship a new library/python bindings version.\n",
"Complementing on what Johan said, I've used a couple c/c++ support libraries in Python thanks to swig. You write your code in c/c++ then make an intermediary template for each language that you want to support. Its rather painless for Python, but some considerations must be made for Ruby... namely I don't think pthread support is to happy with ruby or vice versa.\nhttp://www.swig.org/\nIt's got a somewhat steep learning curve so it might be best to find an example project out there that demonstrates how to use the wrapper for your target languages.\nThis is definitely a useful tool as it makes your code a lot cleaner while still providing robust bindings to multiple languages (PHP, Python, Ruby, and I believe c#)\n"
] | [
5,
0
] | [] | [] | [
"newgem",
"python",
"ruby",
"setuptools"
] | stackoverflow_0000511412_newgem_python_ruby_setuptools.txt |
Q:
Is it possible for a running python program to overwrite itself?
Is it possible for a python script to open its own source file and overwrite it?
The idea was to have a very simple and very dirty way for a python script to download an update of itself so that the next time it is run it would be an updated version.
A:
That's certainly possible. After the script is loaded/imported, the Python interpreter won't access it anymore, except when printing source lines in an exception stack trace. Any pyc file will be regenerated the next time, as the source file is newer than the pyc.
A:
If you put most of the code into a module, you could have the main file (which is the one that is run) check the update location, and automatically download the most recent version and install that, before the module is imported.
That way you wouldn't have to have a restart of the application to run the most recent version, just reimport the module.
# Check version of module
import module
# Check update address
if update_version > module.version:
download(update_module)
import module
reload(module)
module.main()
You can use the reload() function to force a module to reload it's data. Note there are some caveats to this: objects created using classes in this module will not be magically updated to the new version, and "from module import stuff" before the reimport may result in "stuff" referring to the old object "module.stuff".
[Clearly, I didn't read the previous post clearly enough - it does exactly what I suggest!]
A:
Actually, it's preferable that your application starts with a dummy checker-downloader that changes rarely (if ever); before running, it should check if any updates are available; if yes, then it would download them (typically the rest of the app would be modules) and then import them and start the app.
This way, as soon as updates are available, you start the application and will run the latest version; otherwise, a restart of the application is required.
A:
You might also want to check out this module ( which is in 2.5 & 3 as well )
http://www.python.org/doc/2.1.3/lib/os-process.html
Specifically the execv method will overwrite the current process with whatever you call from this method. I've done some personal tests and it works pretty reliably.
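A common restart-in-place sketch built on that idea: after the script has overwritten its own source, it replaces the running process with a fresh interpreter so the new code takes effect immediately:
import os, sys

def restart():
    # execv replaces the current process and does not return;
    # on Windows, arguments containing spaces may need extra quoting
    os.execv(sys.executable, [sys.executable] + sys.argv)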
| Is it possible for a running python program to overwrite itself? | Is it possible for a python script to open its own source file and overwrite it?
The idea was to have a very simple and very dirty way for a python script to download an update of itself so that the next time it is run it would be an updated version.
| [
"That's certainly possible. After the script is loaded/imported, the Python interpreter won't access it anymore, except when printing source line in a exception stack trace. Any pyc file will be regenerated the next time as the source file is newer than the pyc.\n",
"If you put most of the code into a module, you could have the main file (which is the one that is run) check the update location, and automatically download the most recent version and install that, before the module is imported.\nThat way you wouldn't have to have a restart of the application to run the most recent version, just reimport the module. \n# Check version of module\nimport module\n\n# Check update address\nif update_version > module.version:\n download(update_module)\n import module\n reload(module)\n\nmodule.main()\n\nYou can use the reload() function to force a module to reload it's data. Note there are some caveats to this: objects created using classes in this module will not be magically updated to the new version, and \"from module import stuff\" before the reimport may result in \"stuff\" referring to the old object \"module.stuff\".\n[Clearly, I didn't read the previous post clearly enough - it does exactly what I suggest!]\n",
"Actually, it's preferable that your application starts with a dummy checker-downloader that changes rarely (if ever); before running, it should check if any updates are available; if yes, then it would download them (typically the rest of the app would be modules) and then import them and start the app.\nThis way, as soon as updates are available, you start the application and will run the latest version; otherwise, a restart of the application is required.\n",
"You might also want to check out this module ( which is in 2.5 & 3 as well )\nhttp://www.python.org/doc/2.1.3/lib/os-process.html\nSpecifically the execv method will overwrite the current process with whatever you call from this method. I've done some personnel tests and it works pretty reliably.\n"
] | [
25,
15,
8,
1
] | [] | [] | [
"python"
] | stackoverflow_0000291448_python.txt |
Q:
Help building a regular expression in python using the re module
I'm writing a simple propositional logic formula parser in Python which uses the re regular-expression module and the lex/yacc modules for lexing/parsing. Originally my code could pick out implication as ->, but adding logical equivalence (<->) caused issues with the compiled expressions:
IMPLICATION = re.compile('[\s]*\-\>[\s]*')
EQUIVALENCE = re.compile('[\s]*\<\-\>[\s]*')
...
elif self.IMPLICATION.search(formula[0].strip()):
...
elif self.EQUIVALENCE.search(formula[0].strip()):
...
I originally tried adding [^<] to the front of -> to make it ignore instances of equivalence but this just made it not accept any instances of implication at all. Any possible help would be warmly welcome :)
A:
As far as I can tell, your regular expressions are equivalent to the following:
# This is bad, because IMPLICATION also will match every
# string that EQUIVALENCE matches
IMPLICATION = re.compile("->")
EQUIVALENCE = re.compile("<->")
As you've written it, you're also matching for zero or more whitespace characters before the -> and <-> literal. But you're not capturing the spaces, so it's useless to specify "match whether spaces are present or not". Also, note that - and > do not need to be escaped in these regular expressions.
You have two options as I see it. The first is to make sure that IMPLICATION does not match the same strings as EQUIVALENCE
# This ought to work just fine.
IMPLICATION = re.compile("[^<]->")
EQUIVALENCE = re.compile("<->")
Another option is to use the maximal munch method; i.e., match against all regular expressions, and choose the longest match. This would resolve ambiguity by giving EQUIVALENCE a higher precedence than IMPLICATION.
A:
I think you can solve this simply by reordering your checking to match equivalences first, and then implications. However, this seems to work:
>>> IMPLICATION = re.compile(r'\s*[^\<]\-\>\s*')
>>> EQUIVALENCE = re.compile(r'\s*\<\-\>\s*')
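Either way, the safe habit is to test the longer operator first, since every <-> also contains a ->; a small demonstration:
>>> import re
>>> EQUIVALENCE = re.compile(r'\s*<->\s*')
>>> IMPLICATION = re.compile(r'\s*->\s*')
>>> def classify(formula):
...     if EQUIVALENCE.search(formula):   # check the longer token first
...         return 'equivalence'
...     if IMPLICATION.search(formula):
...         return 'implication'
...     return 'other'
...
>>> classify('p <-> q')
'equivalence'
>>> classify('p -> q')
'implication'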
| Help building a regular expression in python using the re module | I'm writing a simple propositional logic formula parser in Python which uses the re regular-expression module and the lex/yacc modules for lexing/parsing. Originally my code could pick out implication as ->, but adding logical equivalence (<->) caused issues with the compiled expressions:
IMPLICATION = re.compile('[\s]*\-\>[\s]*')
EQUIVALENCE = re.compile('[\s]*\<\-\>[\s]*')
...
elif self.IMPLICATION.search(formula[0].strip()):
...
elif self.EQUIVALENCE.search(formula[0].strip()):
...
I originally tried adding [^<] to the front of -> to make it ignore instances of equivalence but this just made it not accept any instances of implication at all. Any possible help would be warmly welcome :)
| [
"As far as I can tell, your regular expressions are equivalent to the following:\n# This is bad, because IMPLICATION also will match every\n# string that EQUIVALENCE matches\nIMPLICATION = re.compile(\"->\")\nEQUIVALENCE = re.compile(\"<->\")\n\nAs you've written it, you're also matching for zero or more whitespace characters before the -> and <-> literal. But you're not capturing the spaces, so it's useless to specify \"match whether spaces are present or not\". Also, note that - and > do not need to be escaped in these regular expressions.\nYou have two options as I see it. The first is to make sure that IMPLICATION does not match the same strings as EQUIVALENCE\n# This ought to work just fine.\nIMPLICATION = re.compile(\"[^<]->\")\nEQUIVALENCE = re.compile(\"<->\")\n\nAnother option is to use the maximal munch method; i.e., match against all regular expressions, and choose the longest match. This would resolve ambiguity by giving EQUIVALENCE a higher precedence than IMPLICATION.\n",
"I think you can solve this simply by reordering your checking to match equivalences first, and then implications. However, this seems to work :\n>>> IMPLICATION = re.compile(r'\\s*[^\\<]\\-\\>\\s*')\n>>> EQUIVALENCE = re.compile(r'\\s*\\<\\-\\>\\s*')\n\n"
] | [
4,
0
] | [] | [] | [
"parsing",
"python",
"regex"
] | stackoverflow_0000514475_parsing_python_regex.txt |
Q:
Random name generator strategy - help me improve it
I have a small project I am doing in Python using web.py. It's a name generator, using 4 "parts" of a name (firstname, middlename, anothername, surname). Each part of the name is a collection of entities in a MySQL database (name_part (id, part, type_id), and name_part_type (id, description)). Basic stuff, I guess.
My generator picks a random entry of each "type", and assembles a comical name. Right now, I am using select * from name_part where type_id=[something] order by rand() limit 1 to select a random entry of each type (so I also have 4 queries that run per pageview, I figured this was better than one fat query returning potentially hundreds of rows; if you have a suggestion for how to pull this off in one query w/o a sproc I'll listen).
Obviously I want to make this more random. Actually, I want to give it better coverage, not necessarily randomness. I want to make sure it's using as many possibilities as possible. That's what I am asking in this question, what sorts of strategies can I use to give coverage over a large random sample?
My idea is to implement a counter column on each name_part, and increment it each time I use it. I would need some logic to then say something like: "get a name_part that is less than the highest "counter" for this "name_part_type", unless there are none then pick a random one". I am not very good at SQL, is this kind of logic even possible? The only way I can think to do this would require up to 3 or 4 queries for each part of the name (so up to 12 queries per pageview).
Can I get some input on my logic here? Am I overthinking it? This actually sounds ideal for a stored procedure... but can you guys at least help me solve how to do it without a sproc? (I don't know if I can even use a sproc with the built-in database stuff of web.py).
I hope this isn't terribly dumb but thanks ahead of time.
edit: Aside from my specific problem I am still curious if there are any alternate strategies I can use that may be better.
A:
I think what you're after is:
select * from name_part
where type_id=[something]
order by used_count asc, rand()
limit 1
This will put the lesser used names at the top of the list and, if there's multiples with the same (lowest) used_count, they'll sort randomly.
A:
I agree with your intuition that using a stored procedure is the right way to go, but then, I almost always try to implement database stuff in the database.
In your proc, I would introduce some kind of logic like say, there's only a 30% chance that returning the result will actually increment the counter. Just to increase the variability.
| Random name generator strategy - help me improve it | I have a small project I am doing in Python using web.py. It's a name generator, using 4 "parts" of a name (firstname, middlename, anothername, surname). Each part of the name is a collection of entities in a MySQL database (name_part (id, part, type_id), and name_part_type (id, description)). Basic stuff, I guess.
My generator picks a random entry of each "type", and assembles a comical name. Right now, I am using select * from name_part where type_id=[something] order by rand() limit 1 to select a random entry of each type (so I also have 4 queries that run per pageview, I figured this was better than one fat query returning potentially hundreds of rows; if you have a suggestion for how to pull this off in one query w/o a sproc I'll listen).
Obviously I want to make this more random. Actually, I want to give it better coverage, not necessarily randomness. I want to make sure it's using as many possibilities as possible. That's what I am asking in this question, what sorts of strategies can I use to give coverage over a large random sample?
My idea is to implement a counter column on each name_part, and increment it each time I use it. I would need some logic to then say something like: "get a name_part that is less than the highest "counter" for this "name_part_type", unless there are none then pick a random one". I am not very good at SQL, is this kind of logic even possible? The only way I can think to do this would require up to 3 or 4 queries for each part of the name (so up to 12 queries per pageview).
Can I get some input on my logic here? Am I overthinking it? This actually sounds ideal for a stored procedure... but can you guys at least help me solve how to do it without a sproc? (I don't know if I can even use a sproc with the built-in database stuff of web.py).
I hope this isn't terribly dumb but thanks ahead of time.
edit: Aside from my specific problem I am still curious if there are any alternate strategies I can use that may be better.
| [
"I think what you're after is:\nselect * from name_part\n where type_id=[something]\n order by used_count asc, rand()\n limit 1\n\nThis will put the lesser used names at the top of the list and, if there's multiples with the same (lowest) used_count, they'll sort randomly.\n",
"I agree with your intuition that using a stored procedure is the right way to go, but then, I almost always try to implement database stuff in the database.\nIn your proc, I would introduce some kind of logic like say, there's only a 30% chance that returning the result will actually increment the counter. Just to increase the variability.\n"
] | [
4,
1
] | [] | [] | [
"mysql",
"python",
"random",
"web.py"
] | stackoverflow_0000514617_mysql_python_random_web.py.txt |
Q:
Web based wizard with Python
What is a good/simple way to create, say, a five-page wizard in Python, where the web server component composes the wizard page content mostly dynamically by fetching the data via calls to an XML-RPC back-end? I have experimented a bit with the XML-RPC Python module, but I don't know which Python module would provide the web server, how to create the static content for the wizard, or how to extend the web server component to make the XML-RPC calls from the web server to the XML-RPC back-end to create the dynamic content.
A:
If we break down to the components you'll need, we get:
HTTP server to receive the request from the clients browser.
A URL router to look at the URL sent from client browser and call your function/method to handle that URL.
An XML-RPC client library to fetch the data for that URL.
A template processor to render the fetched data into HTML.
A way to send the rendered HTML as a response back to the client browser.
These components are handled by almost all, if not all, Python web frameworks. The XML-RPC client might be missing, but you can just use the standard Python module you already know.
Django and Pylons are well documented and can easily handle this kind of project, but they will also have a lot of stuff you won't need. If you want something very easy and absolutely minimal, take a look at juno, which was just released recently and is getting some buzz.
These frameworks will handle #1 and provide a way for you to specify #2, so then you need to write your function/method that handles the incoming request (in Django this is called a 'view').
All you would do is fetch your data via XML-RPC, populate a dictionary with that data (in Django this dictionary is referred to as 'context') and then render a template from the context into HTML by calling the template engine for that framework.
Your function will just return the HTML to the framework which will then format it properly as an HTTP response and send it back to the client browser.
Simple!
UPDATE: Here's a description of how to do wizard style multiple-step forms in Django that should help you out.
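A minimal sketch of such a view, assuming a Django-style setup (the back-end URL and the get_step_data method name are made up for illustration):

import xmlrpclib
from django.shortcuts import render_to_response

def wizard_step(request, step):
    # Fetch the dynamic content for this wizard step from the XML-RPC back-end.
    backend = xmlrpclib.ServerProxy('http://localhost:8000/RPC2')
    data = backend.get_step_data(int(step))
    # The context dictionary feeds the template that renders the page.
    return render_to_response('wizard/step.html', {'step': step, 'fields': data})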
| Web based wizard with Python | What is a good/simple way to create, say, a five-page wizard in Python, where the web server component composes the wizard page content mostly dynamically by fetching the data via calls to an XML-RPC back-end? I have experimented a bit with the XML-RPC Python module, but I don't know which Python module should provide the web server, how to create the static content for the wizard, or how to extend the web server component to make the XML-RPC calls from the web server to the XML-RPC back-end to create the dynamic content.
| [
"If we break down to the components you'll need, we get:\n\nHTTP server to receive the request from the clients browser.\nA URL router to look at the URL sent from client browser and call your function/method to handle that URL.\nAn XML-RPC client library to fetch the data for that URL.\nA template processor to render the fetched data into HTML.\nA way to send the rendered HTML as a response back to the client browser.\n\nThese components are handled by almost all, if not all, Python web frameworks. The XML-RPC client might be missing, but you can just use the standard Python module you already know.\nDjango and Pylons are well documented and can easily handle this kind of project, but they will also have a lot of stuff you won't need. If you want very easy and absolute minimum take a look at using juno, which was just released recently and is getting some buzz.\nThese frameworks will handle #1 and provide a way for you to specify #2, so then you need to write your function/method that handles the incoming request (in Django this is called a 'view').\nAll you would do is fetch your data via XML-RPC, populate a dictionary with that data (in Django this dictionary is referred to as 'context') and then render a template from the context into HTML by calling the template engine for that framework.\nYour function will just return the HTML to the framework which will then format it properly as an HTTP response and send it back to the client browser.\nSimple!\nUPDATE: Here's a description of how to do wizard style multiple-step forms in Django that should help you out.\n"
] | [
3
] | [] | [] | [
"python",
"wizard",
"xml_rpc"
] | stackoverflow_0000514912_python_wizard_xml_rpc.txt |
Q:
How best to pass database objects to a turbogears WidgetList?
I am trying to set up form widgets for adding some objects to the database but I'm getting stuck because it seems impossible to pass any arguments to Widgets contained within a WidgetList. To clarify that, here is my WidgetList:
class ClientFields(forms.WidgetsList):
    """Form to create a client"""
    name = forms.TextField(validator=validators.NotEmpty())
    abbreviated = forms.TextField(validator=validators.NotEmpty(), attrs={'size':2})
    address = forms.TextArea(validator=validators.NotEmpty())
    country = forms.TextField(validator=validators.NotEmpty())
    vat_number = forms.TextField(validator=validators.NotEmpty())
    email_address = forms.TextField(validator=validators.Email(not_empty=True))
    client_group = forms.SingleSelectField(validator=validators.NotEmpty(),
        options=[(g.id, g.name) for g in ClientGroup.all_client_groups().all()])
You see I have had to resort to grabbing objects from the database from within the WidgetList, which means that it's rather tightly coupled with the database code (even though it's using a classmethod in the model).
The problem is that once the WidgetList instance is created, you can't access those fields (otherwise I could just call client_fields.client_group.options=[(key,value)] from the controller) - the fields are removed from the class and added to a list, so to find them again, I'd have to iterate through that list to find the Field class I want to alter - not clean. Here's the output from ipython as I check out the WidgetsList:
In [8]: mad.declared_widgets
Out[8]:
[TextField(name='name', attrs={}, field_class='textfield', css_classes=[], convert=True),
TextField(name='abbreveated', attrs={'size': 2}, field_class='textfield', css_classes=[], convert=True),
TextArea(name='address', rows=7, cols=50, attrs={}, field_class='textarea', css_classes=[], convert=True),
TextField(name='country', attrs={}, field_class='textfield', css_classes=[], convert=True),
TextField(name='vat_number', attrs={}, field_class='textfield', css_classes=[], convert=True),
TextField(name='email_address', attrs={}, field_class='textfield', css_classes=[], convert=True),
SingleSelectField(name='client_group', attrs={}, options=[(1, u"Proporta's Clients")], field_class='singleselectfield', css_classes=[], convert=False)]
So...what would be the right way to set these Widgets and WidgetLists up without coupling them too tightly to the database and so on?
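For illustration, the iterate-through-the-list workaround dismissed above might look like this (just a sketch; whether mutating a widget's options after construction is actually safe depends on the widget implementation):

def set_widget_options(widgets, name, options):
    # The fields end up in a plain list, so lookup has to be by name.
    for widget in widgets:
        if widget.name == name:
            widget.options = options
            return
    raise KeyError(name)

# In the controller, before rendering the form:
set_widget_options(mad.declared_widgets, 'client_group',
                   [(g.id, g.name) for g in ClientGroup.all_client_groups().all()])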
| How best to pass database objects to a turbogears WidgetList? | I am trying to set up form widgets for adding some objects to the database but I'm getting stuck because it seems impossible to pass any arguments to Widgets contained within a WidgetList. To clarify that, here is my WidgetList:
class ClientFields(forms.WidgetsList):
    """Form to create a client"""
    name = forms.TextField(validator=validators.NotEmpty())
    abbreviated = forms.TextField(validator=validators.NotEmpty(), attrs={'size':2})
    address = forms.TextArea(validator=validators.NotEmpty())
    country = forms.TextField(validator=validators.NotEmpty())
    vat_number = forms.TextField(validator=validators.NotEmpty())
    email_address = forms.TextField(validator=validators.Email(not_empty=True))
    client_group = forms.SingleSelectField(validator=validators.NotEmpty(),
        options=[(g.id, g.name) for g in ClientGroup.all_client_groups().all()])
You see I have had to resort to grabbing objects from the database from within the WidgetList, which means that it's rather tightly coupled with the database code (even though it's using a classmethod in the model).
The problem is that once the WidgetList instance is created, you can't access those fields (otherwise I could just call client_fields.client_group.options=[(key,value)] from the controller) - the fields are removed from the class and added to a list, so to find them again, I'd have to iterate through that list to find the Field class I want to alter - not clean. Here's the output from ipython as I check out the WidgetsList:
In [8]: mad.declared_widgets
Out[8]:
[TextField(name='name', attrs={}, field_class='textfield', css_classes=[], convert=True),
TextField(name='abbreveated', attrs={'size': 2}, field_class='textfield', css_classes=[], convert=True),
TextArea(name='address', rows=7, cols=50, attrs={}, field_class='textarea', css_classes=[], convert=True),
TextField(name='country', attrs={}, field_class='textfield', css_classes=[], convert=True),
TextField(name='vat_number', attrs={}, field_class='textfield', css_classes=[], convert=True),
TextField(name='email_address', attrs={}, field_class='textfield', css_classes=[], convert=True),
SingleSelectField(name='client_group', attrs={}, options=[(1, u"Proporta's Clients")], field_class='singleselectfield', css_classes=[], convert=False)]
So...what would be the right way to set these Widgets and WidgetLists up without coupling them too tightly to the database and so on?
| [] | [] | [
"A page in the TurboGears documentation may help.\n"
] | [
-1
] | [
"python",
"turbogears"
] | stackoverflow_0000515522_python_turbogears.txt |
Q:
pure web based versioning system
My hosting service does not currently run/allow svn, git, cvs on their server. I would really like to be able to 'sync' my current source on my development machine with my production server.
I am looking for a pure php/python/ruby version control system (not just a client for a version control system) that does not require any services running on the server machine, something that could use the http interface to upload/download and sync files - basically offering a back end into my 'live' site for version control.
Additionally, I would think that such a system would be easy to develop an 'online' ide for, so that I could develop directly on the production server. (issues of testing aside of course)
Does anyone know if such a system exists?
==Edit==
Really, I want a wiki front end for a version control / development system - Basically look like a wiki and edit development files so that I could easily make and roll back changes via the web. I doubt this exists, but it would be easy to extend an existing php port of svn...
A:
Get a better hosting service. Seriously. Even if you found something that worked in PHP/Ruby/Perl/Whatever, it would still be a sub-par solution. It most likely wouldn't integrate with any IDE you have, and wouldn't have a good tool set available for working with it. It would be really clunky to do correctly.
The other option is to get a free SVN host, or host SVN on your own machine, and then just push updates from your SVN host to your web site via ftp.
A:
Don't host your repository on your web server. Deploy from your server to the ftp/sftp - whatever.
A:
You could look into mercurial or bazaar-ng; they are both written in Python and support at least HTTP downloads AFAIK. Not really web based, but written in one of the languages your hoster supports, if the tags are correct. HTH
A:
Mercurial has a web-interface and allows commits via http. It uses a couple of C extensions, but I would guess that all of them have pure-Python counterparts.
You can also just use WebDAV, if your hoster provides it.
A:
I think it's actually a pretty good idea, but don't believe such a versioning system exists (yet) so hopefully you'll go ahead and make one.
I don't think adapting an existing solution is going to be easy, but it's probably worth looking into because if you use an existing solution you'll have all the client support done, and most of the versioning difficulties taken care of.
Starting from scratch is not going to be trivial.
-Adam
A:
Use Bazaar:
Lightweight. No dedicated server with Bazaar installed is needed, just FTP access to a web server. A smart server is available for those requiring additional performance or security but it is not required in many cases - Bazaar 1.x over plain http performs well.
A:
You could try the reverse way:

use e.g. a free online svn/git service to version control the sources on your dev machine
use the usual ways to update the "production" machine, aka the site, like FTP
A:
Why don't you want a client? A simple client that you can run on your production machine which then syncs to your repository running on another server somewhere.
SVN is available over HTTP so writing a client that is able to sync your code is really easy in python or php.
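For instance, fetching a single file from a repository served over plain HTTP takes only a few lines (the URL is made up; mod_dav_svn answers ordinary GET requests with the latest revision):

import urllib2

url = 'http://svn.example.com/repo/trunk/index.php'
data = urllib2.urlopen(url).read()  # plain GET against mod_dav_svn
open('index.php', 'wb').write(data)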
| pure web based versioning system | My hosting service does not currently run/allow svn, git, cvs on their server. I would really like to be able to 'sync' my current source on my development machine with my production server.
I am looking for a pure php/python/ruby version control system (not just a client for a version control system) that does not require any services running on the server machine, something that could use the http interface to upload/download and sync files - basically offering a back end into my 'live' site for version control.
Additionally, I would think that such a system would be easy to develop an 'online' ide for, so that I could develop directly on the production server. (issues of testing aside of course)
Does anyone know if such a system exists?
==Edit==
Really, I want a wiki front end for a version control / development system - Basically look like a wiki and edit development files so that I could easily make and roll back changes via the web. I doubt this exists, but it would be easy to extend an existing php port of svn...
| [
"Get a better hosting service. Seriously. Even if you found something that worked in PHP/Ruby/Perl/Whatever, it would still be a sub-par solution. It most likely wouldn't integrate with any IDE you have, and wouldn't have a good tool set available for working with it. It would be really clunky to do correctly.\nThe other option is to get a free SVN host, or host SVN on your own machine, and then just push updates from your SVN host to your web site via ftp.\n",
"Don't host your repository on your web server. Deploy from your server to the ftp/sftp - whatever.\n",
"You could look into mercurial or bazaar-ng they are both written in python and support at least http downloads afaik, not really web based but written in one of the languages your hoster supports if the tags are correct. HTH\n",
"Mercurial has a web-interface and allows commits via http. It uses a couple of C extensions, but I would guess that all of them have pure-Python counterparts.\nYou can also just use WebDAV, when your hoster provides it.\n",
"I think it's actually a pretty good idea, but don't believe such a versioning system exists (yet) so hopefully you'll go ahead and make one.\nI don't think adapting an existing solution is going to be easy, but it's probably worth looking into because if you use an existing solution you'll have all the client support done, and most of the versioning difficulties taken care of.\nStarting from scratch is not going to be trivial.\n-Adam\n",
"Use Bazaar:\n\nLightweight. No dedicated server with Bazaar installed is needed, just FTP access to a web server. A smart server is available for those requiring additional performance or security but it is not required in many cases - Bazaar 1.x over plain http performs well.\n\n",
"you could try the reverse way\n\nuse e.g. a free online svn/git Service to version control the sources on your dev machine\nuse usual ways to update the \"production\" machine aka site, like FTP\n\n",
"Why dont you want a client..? A simple client that you can run on your production machine which then syncs to your repository running on another server somewhere.\nSVN is available over HTTP so writing a client that is able to sync your code is really easy in python or php.\n"
] | [
7,
2,
1,
1,
1,
1,
0,
0
] | [] | [] | [
"php",
"python",
"version_control",
"web_applications"
] | stackoverflow_0000513173_php_python_version_control_web_applications.txt |
Q:
Is it possible to google search with the gdata API?
I might be just thick (nothing new), but I can't seem to find anything related to an old-fashioned, vanilla google search in the gdata API docs. Anyone know if it's possible? (I know it probably is with a little tinkering, but I already have a Python web-scraping class created that does it for me, but I was wondering if using gdata would be the right thing to do)
A:
AFAIK, the gdata API is just for the Google Docs applications (Google Spreadsheets etc.).
The search API does expose a REST interface for "Flash and other Non-Javascript Environments" though:
http://code.google.com/apis/ajaxsearch/documentation/#fonje
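A quick sketch of calling that REST interface from Python (endpoint and field names follow the AJAX Search API documentation of the time; treat them as assumptions to verify):

import urllib, urllib2
import simplejson  # `json` in Python 2.6+

params = urllib.urlencode({'v': '1.0', 'q': 'stack overflow'})
url = 'http://ajax.googleapis.com/ajax/services/search/web?%s' % params
data = simplejson.loads(urllib2.urlopen(url).read())
for result in data['responseData']['results']:
    print result['url']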
| Is it possible to google search with the gdata API? | I might be just thick (nothing new), but I can't seem to find anything related to an old-fashioned, vanilla google search in the gdata API docs. Anyone know if it's possible? (I know it probably is with a little tinkering, but I already have a Python web-scraping class created that does it for me, but I was wondering if using gdata would be the right thing to do)
| [
"AFAIK, the gdata API is just for the Google Doc's application (Google Spreadsheet etc). \nThe search API does expose a REST interface for \"Flash and other Non-Javascript Environments\" though:\nhttp://code.google.com/apis/ajaxsearch/documentation/#fonje\n"
] | [
4
] | [] | [] | [
"gdata_api",
"python"
] | stackoverflow_0000516335_gdata_api_python.txt |
Q:
How to vertically align Paragraphs within a Table using Reportlab?
I'm using Reportlab to generate report cards. The report cards are basically one big Table object. Some of the content in the table cells needs to wrap, specifically titles and comments, and I also need to bold certain elements.
To accomplish both the wrapping and the ability to bold, I'm using Paragraph objects within the Table. My table needs several of these elements vertically aligned to 'middle', but the Paragraph alignment snaps my text to the bottom of the cell.
How can I vertically align my Paragraph within a Table cell?
A:
I have to ask: have you tried the tablestyle VALIGN:MIDDLE?
something like:
t = Table(data)
t.setStyle(TableStyle([('VALIGN', (0, 0), (-1, -1), 'MIDDLE')]))  # (col,row) from (0,0) to (-1,-1) covers every cell
(more details in section 7.2 of the ReportLab user guide)
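For instance, a small self-contained sketch (file name and cell text invented):

from reportlab.lib import colors
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.platypus import SimpleDocTemplate, Table, TableStyle, Paragraph

body = getSampleStyleSheet()['BodyText']
data = [
    [Paragraph('<b>Title</b>', body),
     Paragraph('A long comment that wraps onto several lines in the cell...', body)],
    [Paragraph('Math', body),
     Paragraph('Shows steady improvement this term.', body)],
]
t = Table(data, colWidths=[100, 300])
t.setStyle(TableStyle([
    ('VALIGN', (0, 0), (-1, -1), 'MIDDLE'),         # centre every cell vertically
    ('GRID', (0, 0), (-1, -1), 0.5, colors.black),  # so the centring is visible
]))
SimpleDocTemplate('report.pdf').build([t])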
If that doesn't do it, then your paragraph object must be the full height of the cell, and internally aligned to the bottom.
Could you please post a small sample that reproduces the problem?
| How to vertically align Paragraphs within a Table using Reportlab? | I'm using Reportlab to generate report cards. The report cards are basically one big Table object. Some of the content in the table cells needs to wrap, specifically titles and comments, and I also need to bold certain elements.
To accomplish both the wrapping and the ability to bold, I'm using Paragraph objects within the Table. My table needs several of these elements vertically aligned to 'middle', but the Paragraph alignment snaps my text to the bottom of the cell.
How can I vertically align my Paragraph within a Table cell?
| [
"I have to ask: have you tried the tablestyle VALIGN:MIDDLE?\nsomething like:\nt=Table(data) \nt.setStyle(TableStyle([('VALIGN',(-1,-1),(-1,-1),'MIDDLE')])) \n\n(more details in section 7.2 of the ReportLab user guide)\nIf that doesn't do it, then your paragraph object must be the full height of the cell, and internally aligned to the bottom.\nCould you please post a small sample that reproduces the problem?\n"
] | [
12
] | [] | [] | [
"alignment",
"pdf",
"python",
"reportlab"
] | stackoverflow_0000500406_alignment_pdf_python_reportlab.txt |
Q:
Python regex: Turn "ThisFileName.txt" into "This File Name.txt"
I'm trying to add a space before every capital letter, except the first one.
Here's what I have so far, and the output I'm getting:
>>> tex = "ThisFileName.txt"
>>> re.sub('[A-Z].', ' ', tex)
' his ile ame.txt'
I want:
'This File Name.txt'
(It'd be nice if I could also get rid of .txt, but I can do that in a separate operation.)
A:
Key concept here is backreferences in regular expressions:
import re
text = "ThisFileName.txt"
print re.sub('([a-z])([A-Z])', r'\1 \2', text)
# Prints: "This File Name.txt"
For pulling off the '.txt' in a reliable way, I recommend os.path.splitext()
import os
filename = "ThisFileName.txt"
print os.path.splitext(filename)
# Prints: ('ThisFileName', '.txt')
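Putting the two together, one way to get both the spacing and the extension stripped might be:

import os, re

def prettify(filename):
    base, ext = os.path.splitext(filename)           # 'ThisFileName', '.txt'
    return re.sub(r'([a-z])([A-Z])', r'\1 \2', base)

print prettify("ThisFileName.txt")
# Prints: "This File Name"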
A:
re.sub('([a-z])([A-Z])', '\\1 \\2', 'TheFileName.txt')
EDIT: StackOverflow eats some \s, when not in 'code mode'... Because I forgot to add a newline after the code above, it was not interpreted in 'code mode' :-((. Since I added that text here I didn't have to change anything and it's correct now.
A:
Another possible regular expression using a look behind:
(?<!^)([A-Z])
A:
It is not clear what you want to do if the filename is Hello123There.txt. So, if you want a space before all capital letters regardless of what precedes them, you can:
import re

def add_space_before_caps(text):
    "Add a space before all caps except at start of text"
    return re.sub(r"(?<!^)(?=[A-Z])", " ", text)
>>> add_space_before_caps("Hello123ThereIBM.txt")
'Hello123 There I B M.txt'
| Python regex: Turn "ThisFileName.txt" into "This File Name.txt" | I'm trying to add a space before every capital letter, except the first one.
Here's what I have so far, and the output I'm getting:
>>> tex = "ThisFileName.txt"
>>> re.sub('[A-Z].', ' ', tex)
' his ile ame.txt'
I want:
'This File Name.txt'
(It'd be nice if I could also get rid of .txt, but I can do that in a separate operation.)
| [
"Key concept here is backreferences in regular expressions:\nimport re\ntext = \"ThisFileName.txt\"\nprint re.sub('([a-z])([A-Z])', r'\\1 \\2', text)\n# Prints: \"This File Name.txt\"\n\nFor pulling off the '.txt' in a reliable way, I recommend os.path.splitext()\nimport os\nfilename = \"ThisFileName.txt\"\nprint os.path.splitext(filename)\n# Prints: ('ThisFileName', '.txt')\n\n",
"re.sub('([a-z])([A-Z])', '\\\\1 \\\\2', 'TheFileName.txt')\n\nEDIT: StackOverflow eats some \\s, when not in 'code mode'... Because I forgot to add a newline after the code above, it was not interpreted in 'code mode' :-((. Since I added that text here I didn't have to change anything and it's correct now.\n",
"Another possible regular expression using a look behind:\n(?<!^)([A-Z])\n\n",
"It is not clear what you want to do if the filename is Hello123There.txt. So, if you want a space before all capital letters regardless of what precedes them, you can:\nimport re\n\ndef add_space_before_caps(text):\n \"Add a space before all caps except at start of text\"\n return re.sub(r\"(?<!^)(?=[A-Z])\", \" \", text)\n\n>>> add_space_before_caps(\"Hello123ThereIBM.txt\")\n'Hello123 There I B M.txt'\n\n"
] | [
9,
2,
2,
1
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0000516451_python_regex.txt |
Q:
Mixed language source directory layout
We are running a large project with several different languages: Java, Python, PHP, SQL and Perl.
Until now people have been working in their own private repositories, but now we want to merge the entire project in a single repository. The question now is: how should the directory structure look? Should we have separate directories for each language, or should we separate it by component/project? How well does python/perl/java cope with a common directory layout?
A:
My experience indicates that this kind of layout is best:
mylib/
    src/
        java/
        python/
        perl/
        .../
    bin/
        java/
        python/
        perl/
    stage/
    dist/
src is your source, and is the only thing checked in.
bin is where "compilation" occurs to during the build, and is not checked in.
stage is where you copy things during the build to prepare them for packaging
dist is where you put the build artifacts
I put the module/component/library at the top of the hierarchy, because I build every module separately, and use a dependency manager to combine them as needed.
Of course, naming conventions vary. But I've found this to work quite satisfactorily.
A:
I think the best thing to do would be to ensure that your various modules don't depend upon being in the same directory (i.e. separate by component). A lot of people seem to be deathly afraid of this idea, but a good set of build scripts should be able to automate away any pain.
The end goal would be to make it easy to install the infrastructure, and then really easy to work on a single component once the environment is setup.
(It's important to note that I come from the Perl and CL worlds, where we install "modules" into some global location, like ~/perl or ~/.sbcl, rather than including each module with each project, like Java people do. You'd think this would be a maintenance problem, but it ends up not being one. With a script that updates each module from your git repository (or CPAN) on a regular basis, it is really the best way.)
Edit: one more thing:
Projects always have external dependencies. My projects need Postgres and a working Linux install. It would be insane to bundle this with the app code in version control -- but a script to get everything setup on a fresh workstation is very helpful.
I guess what I'm trying to say, in a roundabout way perhaps, is that I don't think you should treat your internal modules differently from external modules.
| Mixed language source directory layout | We are running a large project with several different languages: Java, Python, PHP, SQL and Perl.
Until now people have been working in their own private repositories, but now we want to merge the entire project in a single repository. The question now is: how should the directory structure look? Should we have separate directories for each language, or should we separate it by component/project? How well does python/perl/java cope with a common directory layout?
| [
"My experience indicates that this kind of layout is best:\nmylib/\n src/\n java/\n python/\n perl/\n .../\n bin/\n java/\n python/\n perl/\n stage/\n dist/\n\nsrc is your source, and is the only thing checked in.\nbin is where \"compilation\" occurs to during the build, and is not checked in.\nstage is where you copy things during the build to prepare them for packaging\ndist is where you put the build artifacts\nI put the module/component/library at the top of the hierarchy, because I build every module separately, and use a dependency manager to combine them as needed.\nOf course, naming conventions vary. But I've found this to work quite satisfactorily.\n",
"I think the best thing to do would be to ensure that your various modules don't depend upon being in the same directory (i.e. separate by component). A lot of people seem to be deathly afraid of this idea, but a good set of build scripts should be able to automate away any pain.\nThe end goal would be to make it easy to install the infrastructure, and then really easy to work on a single component once the environment is setup.\n(It's important to note that I come from the Perl and CL worlds, where we install \"modules\" into some global location, like ~/perl or ~/.sbcl, rather than including each module with each project, like Java people do. You'd think this would be a maintenance problem, but it ends up not being one. With a script that updates each module from your git repository (or CPAN) on a regular basis, it is really the best way.)\nEdit: one more thing:\nProjects always have external dependencies. My projects need Postgres and a working Linux install. It would be insane to bundle this with the app code in version control -- but a script to get everything setup on a fresh workstation is very helpful. \nI guess what I'm trying to say, in a roundabout way perhaps, is that I don't think you should treat your internal modules differently from external modules.\n"
] | [
6,
2
] | [] | [] | [
"directory",
"java",
"python",
"sql"
] | stackoverflow_0000516798_directory_java_python_sql.txt |
Q:
List/Arrays - Check Dates
I'm trying to make a program that checks an array to make sure there are four folders with partially matching names.
So
For a date like 0103 (jan 3rd), there should be 0103-1, 0103-2, 0103-3, and 0103-4. Other folders are like 0107-1, 0107-2, 0107-3, 0107-4. How do I go about doing this? I thought about using glob.glob (python) and wildcards to make sure there are only four matches...but I don't like this method.
Any suggestions?
A:
import os

def myfunc(date, num):
    for x in range(1, num + 1):
        filename = str(date) + "-" + str(x)
        if os.path.exists(filename):
            print(filename + " exists")
        else:
            print(filename + " does not exist")

myfunc('0102', 3)
0102-1 does not exist
0102-2 does not exist
0102-3 does not exist
A:
Here's a naive way to find the largest common leading substring given an array of strings:
>>> arr = ['0102-1', '0102-2', '0102-3']
>>> for i in reversed(range(len(arr[0]))):
...     for s in arr:
...         if not s.startswith(arr[0][:i+1]):
...             break
...     else:
...         break
... else:
...     if i == 0: i = -1
...
>>> arr[0][:i+1]
'0102-'
>>> i
4
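As an aside, the standard library already covers this case: os.path.commonprefix does a character-by-character comparison, which is exactly what the loop above computes.

>>> import os.path
>>> os.path.commonprefix(['0102-1', '0102-2', '0102-3'])
'0102-'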
| List/Arrays - Check Dates | I'm trying to make a program that checks an array to make sure there are four folders with partially matching names.
So
For a date like 0103 (jan 3rd), there should be 0103-1, 0103-2, 0103-3, and 0103-4. Other folders are like 0107-1, 0107-2, 0107-3, 0107-4. How do I go about doing this? I thought about using glob.glob (python) and wildcards to make sure there are only four matches...but I don't like this method.
Any suggestions?
| [
"import os\n\ndef myfunc(date, num):\n for x in range(1, num+1):\n filename = str(date) + \"-\" + str(x)\n if os.path.exists(filename):\n print(filename+\" exists\")\n else:\n print(filename+\" does not exist\")\n\nmyfunc('0102', 3);\n\n0102-1 does not exist\n0102-2 does not exist\n0102-3 does not exist\n",
"Here's a naive way to find the largest common leading substring given an array of strings:\n>>> arr = ['0102-1', '0102-2', '0102-3']\n>>> for i in reversed(range(len(arr[0]))):\n... for s in arr:\n... if not s.startswith(arr[0][:i+1]):\n... break\n... else:\n... break\n... else:\n... if i == 0: i = -1\n...\n>>> arr[0][:i+1]\n'0102-'\n>>> i\n4\n\n"
] | [
3,
0
] | [] | [] | [
"python",
"sorting"
] | stackoverflow_0000518782_python_sorting.txt |
Q:
Thread specific data with webpy
I'm writing a little web app with webpy, and I'm wondering if anyone has any information on a little problem I'm having.
I've written a little ORM system, and it seems to be working pretty well. Ideally I'd like to stitch it in with webpy, but it appears that just using it as is causes thread issues (DB connection is instantiated/accessed across thread boundaries, or so the exception states).
Does anyone know how I can (within webpy) create my DB connection on the same thread as the rest of the page-handling code?
A:
We use SQLAlchemy with web.py and use hooks to create and close db connections per request. SQLAlchemy handles pooling, so not every connection is a tcp connection.
The thread local storage you want to use is web.ctx, i.e. any time you access web.ctx you only see properties set by that thread.
Our code looks something like this:
def sa_load_hook():
    web.ctx.sadb = Session()

def sa_unload_hook():
    web.ctx.sadb.close()

web.loadhooks['sasession'] = sa_load_hook
web.unloadhooks['sasession'] = sa_unload_hook
Replace Session with your db connection function and it should work fine for you.
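A handler can then use the per-request session directly; for example (the Client model and render object here are placeholders for your own):

class Clients:
    def GET(self):
        # web.ctx is thread-local, so this session belongs to this request only.
        clients = web.ctx.sadb.query(Client).all()
        return render.clients(clients)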
A:
I'll give this one a try. Disclaimer: I have no experience with the web.py framework.
I suggest you try the following:
(1) Create a global threading.local instance to keep track of your thread local objects (in your case, it will keep track of only one object, a database session).
import threading
serving = threading.local()
(2) At the start of each request, create a db connection/session and save it in the threading.local instance. If I understand the web.py documentation correctly, you can do the following:
def setup_dbconnection(handler):
    serving.dbconnection = create_dbconnection(...)
    try:
        return handler()
    finally:
        serving.dbconnection.close()  # or similar

app.add_processor(setup_dbconnection)
(3) In your controller methods (if they're called that in web.py?), whenever you need a db connection, use serving.dbconnection.
| Thread specific data with webpy | I'm writing a little web app with webpy, and I'm wondering if anyone has any information on a little problem I'm having.
I've written a little ORM system, and it seems to be working pretty well. Ideally I'd like to stitch it in with webpy, but it appears that just using it as is causes thread issues (DB connection is instantiated/accessed across thread boundaries, or so the exception states).
Does anyone know how I can (within webpy) create my DB connection on the same thread as the rest of the page-handling code?
| [
"We use SQLAlchemy with web.py and use hooks to create and close db connections per request. SQLAlchemy handles pooling, so not every connection is a tcp connection.\nThe thread local storage you want to use is web.ctx ie. any time you access web.ctx you only see properties set by that thread.\nOur code looks something like this:\ndef sa_load_hook():\n web.ctx.sadb = Session()\n\ndef sa_unload_hook():\n web.ctx.sadb.close()\n\nweb.loadhooks['sasession'] = sa_load_hook\nweb.unloadhooks['sasession'] = sa_unload_hook\n\nReplace Session with your db connection function and it should work fine for you.\n",
"I'll give this one a try. Disclaimer: I have no experience with the web.py framework.\nI suggest you try the following:\n(1) Create a global threading.local instance to keep track of your thread local objects (in your case, it will keep track of only one object, a database session).\nimport threading\nserving = threading.local()\n\n(2) At the start of each request, create a db connection/session and save it in the threading.local instance. If I understand the web.py documentation correctly, you can do the following:\ndef setup_dbconnection(handler): \n serving.dbconnection = create_dbconnection(...)\n try:\n return handler()\n finally:\n serving.dbconnection.close() # or similar\n\napp.add_processor(setup_dbconnection)\n\n(3) In your controller methods (if they're called that in web.py?), whenever you need a db connection, use serving.dbconnection.\n"
] | [
4,
2
] | [] | [] | [
"database",
"multithreading",
"python",
"web.py"
] | stackoverflow_0000459608_database_multithreading_python_web.py.txt |
Q:
django facebook connect missing libs?
I'm trying to integrate some photo related functionality with my site and facebook. I checked out facebook connect and it seems like the way to go for this (since I don't want to make an app, just have users authenticate and then grab some content from facebook to integrate into our site)
First of all, if you think there is a better way to do this (infinite session maybe?) let me know.
Otherwise here is the problem I'm having... I downloaded django-fbconnect and installed it as an app (as per the readme.txt included in the svn) but python is complaining about a missing signals.py
Error: No module named signals
which I assume should be fbconnect/signals.py because of this line of code:
from fbconnect.signals import facebook_update
Anyway does anyone have experience with django-fbconnect? or any advice on getting the developer to update google code?
Thanks
edit: Found this: "Integrating Facebook Connect with Django in 15 minutes" which uses middleware instead of the django-fbconnect app. I prefer the app because it's lighter and the code is clearer. Also, it sticks to the 'everything is an app' culture of Django, but I guess I'll look into this other possibility.
edit 2: I contacted the original author of django-fbconnect, and he graciously updated the project with the missing file (he also answered on this post)
A:
I've just added the missing file folks. Sorry for the inconveniences. :/
A:
The solution proposed by van gale (removing the lines which reference the signals.py file) works well enough. I think I may end up needing to write my own signals.py eventually... I'll keep you updated.
Anyway, here is the answer:
Remove the functionality provided by signals.py and all uses of it.
It's kind of crappy, but oh well.
The missing file has been added to the google code project! :)
| django facebook connect missing libs? | I'm trying to integrate some photo related functionality with my site and facebook. I checked out facebook connect and it seems like the way to go for this (since I don't want to make an app, just have users authenticate and then grab some content from facebook to integrate into our site)
First of all, if you think there is a better way to do this (infinite session maybe?) let me know.
Otherwise here is the problem I'm having... I downloaded django-fbconnect and installed it as an app (as per the readme.txt included in the svn) but python is complaining about a missing signals.py
Error: No module named signals
which I assume should be fbconnect/signals.py because of this line of code:
from fbconnect.signals import facebook_update
Anyway does anyone have experience with django-fbconnect? or any advice on getting the developer to update google code?
Thanks
edit: Found this: "Integrating Facebook Connect with Django in 15 minutes" which uses middleware instead of the django-fbconnect app. I prefer the app because it's lighter and the code is clearer. Also, it sticks to the 'everything is an app' culture of Django, but I guess I'll look into this other possibility.
edit 2: I contacted the original author of django-fbconnect, and he graciously updated the project with the missing file (he also answered on this post)
| [
"I've just added the missing file folks. Sorry for the inconveniences. :/\n",
"the solution proposeed by van gale (removing the lines which reference the signals.py file) work well enough. I think I may end up needing to write my own signals.py eventually... I keep you updated.\nAnyway here's is the answer:\nRemove the functionality and all uses of it, provided by the signals.py\nit's kind of crappy, but oh well\nThe missing file has been added to the google code project! :)\n"
] | [
2,
0
] | [] | [] | [
"django",
"facebook",
"fbconnect",
"integration",
"python"
] | stackoverflow_0000503462_django_facebook_fbconnect_integration_python.txt |
Q:
How do I remove VSS hooks from a VS Web Site?
I have a Visual Studio 2008 solution with 7 various projects included with it. 3 of these 'projects' are Web Sites (the kind of project without a project file).
I have stripped all the various Visual Sourcesafe files from all the directories, removed the Scc references in the SLN file and all the project files that exist. I deleted the SUO file and all the USER files also. Visual Studio still thinks that 2 of the Web Sites are still under source control, and it adds the Scc entries back into the SLN file for me.
Does anybody know how VS still knows about the old source control?
Edit:
Another thing that I didn't mention: the files I'm trying to remove VSS hooks from have been copied outside of VSS's known working directories, the Python script below was run, and manual edits were made to files, all before the solution was opened in VS 2008 or VS 2005 (I had the problem with both).
Here is a python script that I used to weed out these files and let me know which files needed manually edited.
import os, stat
from os.path import join

def main():
    startDir = r"C:\Documents and Settings\user\Desktop\project"
    manualEdits = []
    # topdown=True is required for the dirs.remove() pruning below to take effect
    for root, dirs, files in os.walk(startDir, topdown=True):
        if '.svn' in dirs:
            dirs.remove('.svn')  # don't descend into Subversion metadata
        for name in files:
            os.chmod(join(root, name), stat.S_IWRITE)  # clear the read-only flag
            if name.endswith((".vssscc", ".scc", ".vspscc", ".suo", ".user")):
                print "Deleting:", join(root, name)
                os.remove(join(root, name))
            if name.endswith((".sln", ".dbp", ".vbproj", ".csproj")):
                manualEdits.append(join(root, name))
    print "Manual Edits are needed for these files:"
    for name in manualEdits:
        print name

if __name__ == "__main__":
    main()
A:
It probably is only trying to add it on your instance of VS. You have to remove the cache so VS thinks it's no longer under SS:
under file -> SourceControl -> Workspaces
Select the SS location
Edit
Choose the working folder
Remove!
A:
Those things are pernicious! Visual Studio sticks links to SourceSafe everywhere, including into the XML that makes up your sln file.
I wrote an article about my experiences converting sourcesafe to subversion, and included with it the python script that I used to clean out the junk. Please note:
1) This is VERY LIGHTLY TESTED. Make backups so you don't screw up your sln/*proj files. Run your test suite before and after to make sure it didn't screw up something (how could it? Who knows! but stranger things have happened.)
2) This may have been with a different version of sourcesafe and visual studio in mind, so you may need to tweak it. Anyway, without further ado:
import os, re

PROJ_RE = re.compile(r"^\s+Scc")
SLN_RE = re.compile(r"GlobalSection\(SourceCodeControl\).*?EndGlobalSection",
                    re.DOTALL)
VDPROJ_RE = re.compile(r"^\"Scc")

for (dir, dirnames, filenames) in os.walk('.'):
    for fname in filenames:
        fullname = os.path.join(dir, fname)
        if fname.endswith('scc'):
            os.unlink(fullname)
        elif fname.endswith('vdproj'):
            # Installer project has a different format
            fin = file(fullname)
            text = fin.readlines()
            fin.close()

            fout = file(fullname, 'w')
            for line in text:
                if not VDPROJ_RE.match(line):
                    fout.write(line)
            fout.close()
        elif fname.endswith('csproj'):
            fin = file(fullname)
            text = fin.readlines()
            fin.close()

            fout = file(fullname, 'w')
            for line in text:
                if not PROJ_RE.match(line):
                    fout.write(line)
            fout.close()
        elif fname.endswith('sln'):
            fin = file(fullname)
            text = fin.read()
            fin.close()

            text = SLN_RE.sub("", text)

            fout = file(fullname, 'w')
            fout.write(text)
            fout.close()  # was missing in the original; close the rewritten file
A:
In your %APPDATA% directory, Visual Studio saves a list of websites used in Visual Studio, with some settings for each site:
On my Vista Machine the exact location of the file is
C:\Users\{name}\AppData\Local\Microsoft\WebsiteCache\Websites.xml
This file contains entries like
<?xml version="1.0" encoding="utf-16"?>
<DesignTimeData>
<Website RootUrl="e:\Documents\Visual Studio 2008\WebSites\WebSite\"
CacheFolder="WebSite" sccprovider="SubversionScc" scclocalpath="Svn"
sccauxpath="Svn" addnewitemlang="Visual Basic" sccprojectname="Svn"
targetframework="3.5" vwdport="60225"
_LastAccess="11-11-2008 10:58:03"/>
<Website RootUrl="E:\siteje.webproj\" CacheFolder="siteje.webproj"
_LastAccess="11-6-2008 14:43:45"/>
<!-- And many more -->
</DesignTimeData>
As you can see it contains the Scc references that are also part of your solution file.
(In this case the SCC provider is AnkhSVN 2.0, so it doesn't contain the actual SCC mapping; just some constant strings that tell the SCC provider to look at the working copy).
I think they tried to cope with the missing project file by caching this information in several locations. But it would be welcome if this file were properly documented.
| How do I remove VSS hooks from a VS Web Site? | I have a Visual Studio 2008 solution with 7 various projects included with it. 3 of these 'projects' are Web Sites (the kind of project without a project file).
I have stripped all the various Visual Sourcesafe files from all the directories, removed the Scc references in the SLN file and all the project files that exist. I deleted the SUO file and all the USER files also. Visual Studio still thinks that 2 of the Web Sites are still under source control, and it adds the Scc entries back into the SLN file for me.
Does anybody know how VS still knows about the old source control?
Edit:
Another thing that I didn't mention: the files I'm trying to remove VSS hooks from have been copied outside of VSS's known working directories, the Python script below was run, and manual edits were made to files, all before the solution was opened in VS 2008 or VS 2005 (I had the problem with both).
Here is a python script that I used to weed out these files and let me know which files needed manually edited.
import os, stat
from os.path import join

def main():
    startDir = r"C:\Documents and Settings\user\Desktop\project"
    manualEdits = []
    # topdown=True is required for the dirs.remove() pruning below to take effect
    for root, dirs, files in os.walk(startDir, topdown=True):
        if '.svn' in dirs:
            dirs.remove('.svn')  # don't descend into Subversion metadata
        for name in files:
            os.chmod(join(root, name), stat.S_IWRITE)  # clear the read-only flag
            if name.endswith((".vssscc", ".scc", ".vspscc", ".suo", ".user")):
                print "Deleting:", join(root, name)
                os.remove(join(root, name))
            if name.endswith((".sln", ".dbp", ".vbproj", ".csproj")):
                manualEdits.append(join(root, name))
    print "Manual Edits are needed for these files:"
    for name in manualEdits:
        print name

if __name__ == "__main__":
    main()
| [
"It probably is only trying to add it on your instance of VS. You have to remove the cache so VS thinks its no longer under SS\n\nunder file -> SourceControl -> Workspaces\nSelect the SS location\nEdit\nChoose the working folder\nRemove!\n\n",
"Those things are pernicious! Visual Studio sticks links to SourceSafe in everywhere, including into the XML that makes up your sln file.\nI wrote an article about my experiences converting sourcesafe to subversion, and included with it the python script that I used to clean out the junk. Please note:\n1) This is VERY LIGHTLY TESTED. Make backups so you don't screw up your sln/*proj files. Run your test suite before and after to make sure it didn't screw up something (how could it? Who knows! but stranger things have happened.)\n2) This may have been with a different version of sourcesafe and visual studio in mind, so you may need to tweak it. Anyway, without further ado:\nimport os, re\n\nPROJ_RE = re.compile(r\"^\\s+Scc\")\nSLN_RE = re.compile(r\"GlobalSection\\(SourceCodeControl\\).*?EndGlobalSection\",\n re.DOTALL)\nVDPROJ_RE = re.compile(r\"^\\\"Scc\")\n\nfor (dir, dirnames, filenames) in os.walk('.'):\n for fname in filenames:\n fullname = os.path.join(dir, fname)\n if fname.endswith('scc'):\n os.unlink(fullname)\n elif fname.endswith('vdproj'):\n #Installer project has a different format\n fin = file(fullname)\n text = fin.readlines()\n fin.close()\n\n fout = file(fullname, 'w')\n for line in text:\n if not VDPROJ_RE.match(line):\n fout.write(line)\n fout.close()\n elif fname.endswith('csproj'):\n fin = file(fullname)\n text = fin.readlines()\n fin.close()\n\n fout = file(fullname, 'w')\n for line in text:\n if not PROJ_RE.match(line):\n fout.write(line)\n fout.close()\n elif fname.endswith('sln'):\n fin = file(fullname)\n text = fin.read()\n fin.close()\n\n text = SLN_RE.sub(\"\", text)\n\n fout = file(fullname, 'w')\n fout.write(text)\n\n",
"In your %APPDATA% directory Visual Studio saves a list of websites used in Visual Studio, with some settings of that site:\nOn my Vista Machine the exact location of the file is\nC:\\Users\\{name}\\AppData\\Local\\Microsoft\\WebsiteCache\\Websites.xml\n\nThis file contains entries like\n<?xml version=\"1.0\" encoding=\"utf-16\"?>\n<DesignTimeData>\n <Website RootUrl=\"e:\\Documents\\Visual Studio 2008\\WebSites\\WebSite\\\"\n CacheFolder=\"WebSite\" sccprovider=\"SubversionScc\" scclocalpath=\"Svn\"\n sccauxpath=\"Svn\" addnewitemlang=\"Visual Basic\" sccprojectname=\"Svn\"\n targetframework=\"3.5\" vwdport=\"60225\" \n _LastAccess=\"11-11-2008 10:58:03\"/>\n <Website RootUrl=\"E:\\siteje.webproj\\\" CacheFolder=\"siteje.webproj\"\n _LastAccess=\"11-6-2008 14:43:45\"/>\n <!-- And many more -->\n</DesignTimeData />\n\nAs you can see it contains the Scc references that are also part of your solution file.\n(In this case the SCC provider is AnkhSVN 2.0, so it doesn't contain the actual SCC mapping; just some constant strings that tell the SCC provider to look at the working copy).\nI think tried to fix the missing project file by caching this information in several locations. But it would be welcome if this file was properly documented.\n"
] | [
1,
1,
0
] | [] | [] | [
"python",
"visual_sourcesafe",
"visual_studio"
] | stackoverflow_0000471190_python_visual_sourcesafe_visual_studio.txt |
Q:
Is there a Vim equivalent to the Linux/Unix "fold" command?
I realize there's a way in Vim to hide/fold lines, but what I'm looking for is a way to select a block of text and have Vim wrap lines at or near column 80.
Mostly I want to use this on comments in situations where I'm adding some text to an existing comment that pushes it over 80 characters. It would also be nice if it could insert the comment marker at the beginning of the line when it wraps too. Also I'd prefer the solution to not autowrap the entire file since I have a particular convention that I use when it comes to keeping my structured code under the 80 character line-length.
This is mostly for Python code, but I'm also interested in learning the general solution to the problem in case I have to apply it to other types of text.
A:
gq
It's controlled by the textwidth option, see ":help gq" for more info.
gq will work on the current line by default, but you can highlight a visual block with Ctrl+V and format multiple lines / paragraphs like that.
gqap does the current "paragraph" of text.
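For example, to wrap at column 80 (assuming otherwise default Vim settings):

:set textwidth=80

Then visually select the comment block and hit gq. Because the default 'formatoptions' includes the q flag and the filetype plugin sets 'comments', gq should re-insert the comment leader (e.g. # for Python) on the wrapped lines.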
A:
Take a look at ":help =" and ":help 'equalprg"
:set equalprg=fold
and in normal mode == filters the current line through the external fold program. Or visual-select something and hit =
| Is there a Vim equivalent to the Linux/Unix "fold" command? | I realize there's a way in Vim to hide/fold lines, but what I'm looking for is a way to select a block of text and have Vim wrap lines at or near column 80.
Mostly I want to use this on comments in situations where I'm adding some text to an existing comment that pushes it over 80 characters. It would also be nice if it could insert the comment marker at the beginning of the line when it wraps too. Also I'd prefer the solution to not autowrap the entire file since I have a particular convention that I use when it comes to keeping my structured code under the 80 character line-length.
This is mostly for Python code, but I'm also interested in learning the general solution to the problem in case I have to apply it to other types of text.
| [
"gq\n\nIt's controlled by the textwidth option, see \":help gq\" for more info. \ngq will work on the current line by default, but you can highlight a visual block with Ctrl+V and format multiple lines / paragraphs like that.\ngqap does the current \"paragraph\" of text.\n",
"Take a look at \":help =\" and \":help 'equalprg\"\n:set equalprg=fold\n\nand in normal mode == filters the current line through the external fold program. Or visual-select something and hit =\n"
] | [
11,
0
] | [] | [] | [
"comments",
"formatting",
"python",
"vim",
"word_wrap"
] | stackoverflow_0000516501_comments_formatting_python_vim_word_wrap.txt |
Q:
Python "property object has no attribute" Exception
confirmation = property(_get_confirmation, _set_confirmation)
confirmation.short_description = "Confirmation"
When I try the above I get an Exception I don't quite understand:
AttributeError: 'property' object has no attribute 'short_description'
This was an answer to another question on here but I couldn't comment on it as I don't have enough points or something. :-(
In other tests I've also got this error under similar circumstances:
TypeError: 'property' object has only read-only attributes (assign to .short_description)
Any ideas anybody?
A:
The result of property() is an object to which you can't add new fields or methods. It's immutable, which is why you get the error.
One way to achieve what you want is with using four arguments to property():
confirmation = property(_get_confirmation, _set_confirmation, None, "Confirmation.")
or put the explanation into the docstring of _get_confirmation.
(see docs here, also supported in Python 2)
[EDIT] As for the answer, you refer to: I think the indentation of the example was completely wrong when you looked at it. This has been fixed, now.
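To illustrate why the error occurs, and one common workaround when the attribute is meant for something like Django's admin (an assumption about the use case): plain functions accept arbitrary attributes, property objects don't, so set it on the getter before wrapping. Whether the framework then finds it via confirmation.fget is framework-dependent.

def _get_confirmation(self):
    return self._confirmation  # placeholder body

_get_confirmation.short_description = "Confirmation"  # fine on a plain function
confirmation = property(_get_confirmation)            # the property itself stays immutable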
| Python "property object has no attribute" Exception | confirmation = property(_get_confirmation, _set_confirmation)
confirmation.short_description = "Confirmation"
When I try the above I get an Exception I don't quite understand:
AttributeError: 'property' object has no attribute 'short_description'
This was an answer to another question on here but I couldn't comment on it as I don't have enough points or something. :-(
In other tests I've also got this error under similar circumstances:
TypeError: 'property' object has only read-only attributes (assign to .short_description)
Any ideas anybody?
| [
"The result of property() is an object where you can't add new fields or methods. It's immutable which is why you get the error.\nOne way to achieve what you want is with using four arguments to property():\nconfirmation = property(_get_confirmation, _set_confirmation, None, \"Confirmation.\")\n\nor put the explanation into the docstring of _get_confirmation.\n(see docs here, also supported in Python 2)\n[EDIT] As for the answer, you refer to: I think the indentation of the example was completely wrong when you looked at it. This has been fixed, now.\n"
] | [
2
] | [] | [] | [
"django",
"properties",
"python"
] | stackoverflow_0000520152_django_properties_python.txt |
Q:
Search Files & Dirs on Website
Hi, I'm writing a tool that searches for dirs and files.
I have it searching for dirs, but I need help making it search for files on websites.
Any idea how this can be done in Python?
A:
Is this tool scanning the directories of your own website (in which the tool is running), or external sites?
A:
You can only do this if you have permission to browse directories on the site and no default page exists.
A:
You cannot get a directory listing on a website.
Pedantically, HTTP has no notion of directory.
Practically, WebDAV provides a directory listing verb, so you can use that if WebDAV is enabled.
Otherwise, the closest thing you can do is similar to what recursive wget does: get a page, parse the HTML, look for hyperlinks (a/@href in xpath), filter out hyperlinks that do not point to URLs below the current page, and recurse into the remaining URLs.
You can do further filtering, depending on your use case, such as removing the query part of the URL (anything after the first ?).
When the server has a directory listing feature enabled, this gives you something usable. This also gives you something usable if the website has no directory listing but is organized in a sensible way.
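A crude sketch of that crawl-style listing (regex parsing is fragile; a real HTML parser would be better):

import re, urllib2, urlparse

def links_below(url):
    # Yield hyperlinks on the page that point below the given URL.
    html = urllib2.urlopen(url).read()
    for href in re.findall(r'href="([^"]+)"', html):
        absolute = urlparse.urljoin(url, href).split('?')[0]  # drop the query part
        if absolute.startswith(url) and absolute != url:
            yield absolute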
A:
If you're getting information on your own website for presentation in your own web application, you should use os.walk.
See http://www.python.org/doc/2.5.2/lib/os-file-dir.html for more information.
| Search Files & Dirs on Website | Hi, I'm writing a tool that searches for dirs and files.
I have it searching for dirs, but I need help making it search for files on websites.
Any idea how this can be done in Python?
| [
"Is this tool scanning the directories of your own website (in which the tool is running), or external sites?\n",
"You can only do this if you have permission to browse directories on the site and no default page exists.\n",
"You cannot get a directory listing on a website.\nPedantically, HTTP has no notion of directory.\nPratically, WebDAV provides a directory listing verb, so you can use that if WebDAV is enabled.\nOtherwise, the closest thing you can do is similar to what recursive wget does: get a page, parse the HTML, look for hyperlinks (a/@href in xpath), filter out hyperlinks that do not point to URL below the current page, recurse into the remaining urls.\nYou can do further filtering, depending on your use case, such as removing the query part of the URL (anything after the first ?).\nWhen the server has a directory listing feature enabled, this gives you something usable. This also gives you something usable if the website has no directory listing but is organized in a sensible way.\n",
"If you're getting information on your own website for presentation in your own web application, you should use os.walk.\nSee http://www.python.org/doc/2.5.2/lib/os-file-dir.html for more information.\n"
] | [
1,
1,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0000520362_python.txt |