content (string, 85 to 101k chars) | title (string, 0 to 150 chars) | question (string, 15 to 48k chars) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (string, 35 to 137 chars)
---|---|---|---|---|---|---|---|---|
Q:
Python Socket help (Syntax error)
import socket
HOST = "swemach.se"
PORT = 21
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT)
data = s.recv(1024)
s.close()
print "%s" % data
Gives me error
File "main.txt", line 7
data = s.recv(1024)
^
SyntaxError: invalid syntax
What am I doing wrong? Any tip/solution?
A:
You've forgotten a parenthesis.
s.connect((HOST, PORT))
A:
(( HOST, PORT)
^ there you go
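For reference, a corrected version of the poster's snippet (Python 2 syntax, keeping the same example host and port):
import socket

HOST = "swemach.se"
PORT = 21
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))  # connect() takes a single (host, port) tuple, hence the doubled parentheses
data = s.recv(1024)
s.close()
print "%s" % data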
| Python Socket help (Syntax error) | import socket
HOST = "swemach.se"
PORT = 21
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT)
data = s.recv(1024)
s.close()
print "%s" % data
Gives me error
File "main.txt", line 7
data = s.recv(1024)
^
SyntaxError: invalid syntax
What am I doing wrong? Any tip/solution?
| [
"You've forgot parenthesis.\ns.connect((HOST, PORT))\n\n",
"(( HOST, PORT)\n\n^ there you go\n"
] | [
4,
4
] | [] | [] | [
"python"
] | stackoverflow_0000577779_python.txt |
Q:
How do I prevent Python's os.walk from walking across mount points?
In Unix all disks are exposed as paths in the main filesystem, so os.walk('/') would traverse, for example, /media/cdrom as well as the primary hard disk, and that is undesirable for some applications.
How do I get an os.walk that stays on a single device?
Related:
Is there a way to determine if a subdirectory is in the same filesystem from python when using os.walk?
A:
From os.walk docs:
When topdown is true, the caller can
modify the dirnames list in-place
(perhaps using del or slice
assignment), and walk() will only
recurse into the subdirectories whose
names remain in dirnames; this can be
used to prune the search
So something like this should work:
for root, dirnames, filenames in os.walk(...):
dirnames[:] = [
dir for dir in dirnames
if not os.path.ismount(os.path.join(root, dir))]
...
A:
I think os.path.ismount might work for you. Your code might look something like this:
import os
import os.path
for root, dirs, files in os.walk('/'):
# Handle files.
dirs[:] = filter(lambda dir: not os.path.ismount(os.path.join(root, dir)),
dirs)
You may also find this answer helpful in building your solution.
*Thanks for the comments on filtering dirs correctly.
A:
os.walk() can't tell (as far as I know) that it is browsing a different drive. You will need to check that yourself.
Try using os.stat(), or checking that the root variable from os.walk() is not /media
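A sketch that combines the two approaches above: prune dirnames in place as the first answer describes, but compare device numbers from os.stat() as the last answer suggests, so the walk stays on whatever device the starting directory lives on (unreadable entries and broken symlinks are not handled here):
import os

def walk_one_device(top):
    # Everything on the same device as `top` shares its st_dev value.
    top_dev = os.stat(top).st_dev
    for root, dirs, files in os.walk(top):
        # Prune subdirectories that live on a different device.
        dirs[:] = [d for d in dirs
                   if os.stat(os.path.join(root, d)).st_dev == top_dev]
        yield root, dirs, files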
| How do I prevent Python's os.walk from walking across mount points? | In Unix all disks are exposed as paths in the main filesystem, so os.walk('/') would traverse, for example, /media/cdrom as well as the primary hard disk, and that is undesirable for some applications.
How do I get an os.walk that stays on a single device?
Related:
Is there a way to determine if a subdirectory is in the same filesystem from python when using os.walk?
| [
"From os.walk docs:\n\nWhen topdown is true, the caller can\n modify the dirnames list in-place\n (perhaps using del or slice\n assignment), and walk() will only\n recurse into the subdirectories whose\n names remain in dirnames; this can be\n used to prune the search\n\nSo something like this should work:\nfor root, dirnames, filenames in os.walk(...):\n dirnames[:] = [\n dir for dir in dirnames\n if not os.path.ismount(os.path.join(root, dir))]\n ...\n\n",
"I think os.path.ismount might work for you. You code might look something like this:\nimport os\nimport os.path\nfor root, dirs, files in os.walk('/'):\n # Handle files.\n dirs[:] = filter(lambda dir: not os.path.ismount(os.path.join(root, dir)), \n dirs)\n\nYou may also find this answer helpful in building your solution.\n*Thanks for the comments on filtering dirs correctly.\n",
"os.walk() can't tell (as far as I know) that it is browsing a different drive. You will need to check that yourself.\nTry using os.stat(), or checking that the root variable from os.walk() is not /media\n"
] | [
19,
3,
1
] | [] | [] | [
"linux",
"python",
"unix"
] | stackoverflow_0000577761_linux_python_unix.txt |
Q:
How to pass values by ref in Python?
Basically I am using the C++ API of an app, but there is no reference for its python access. One variable is passed by ref, like so:
GetPoint ( Point &p, Object obj )
so how can I translate to Python? Is there a pass by ref symbol?
A:
There is no pass by reference symbol in Python.
Just modify the passed in point, your modifications will be visible from the calling function.
>>> def change(obj):
... obj.x = 10
...
>>> class Point(object): x,y = 0,0
...
>>> p = Point()
>>> p.x
0
>>> change(p)
>>> p.x
10
...
So I should pass it like: GetPoint (p, obj)?
Yes, though Iraimbilanja has a good point. The bindings may have changed the call to return the point rather than use an out parameter.
A:
It's likely that your Python bindings have turned that signature into:
Point GetPoint(Object obj)
or even:
Point Object::GetPoint()
So look into the bindings' documentation or sources.
A:
I'm pretty sure Python passes the value of the reference to a variable. This article can probably explain it better than I.
A:
Objects are always passed by reference in Python, so wrapping the value up in an object produces a similar effect.
A:
There is no pass-by-ref symbol in Python - the right thing to do depends on your API. The question to ask yourself is who owns the object passed from C++ to Python. Sometimes, the easiest thing is just to copy the object into a Python object, but that may not always be the best thing to do.
You may be interested in boost.python http://www.boost.org/doc/libs/1_38_0/libs/python/doc/index.html
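If the bindings do follow the return-value convention described above, a pure-Python equivalent of the C++ signature might look like this sketch; Point, obj and the u/v attributes are hypothetical stand-ins for the SDK's actual types:
def get_point(obj):
    # Emulates GetPoint(Point &p, Object obj) by returning the point
    # instead of filling in an out-parameter.
    p = Point()
    p.u, p.v = obj.u, obj.v  # hypothetical attributes
    return p

p = get_point(obj)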
| How to pass values by ref in Python? | Basically I am using the C++ API of an app, but there is no reference for its python access. One variable is passed by ref, like so:
GetPoint ( Point &p, Object obj )
so how can I translate to Python? Is there a pass by ref symbol?
| [
"There is no pass by reference symbol in Python.\nJust modify the passed in point, your modifications will be visible from the calling function.\n>>> def change(obj):\n... obj.x = 10\n...\n>>> class Point(object): x,y = 0,0\n...\n>>> p = Point()\n>>> p.x\n0\n>>> change(p)\n>>> p.x\n10\n\n...\n\nSo I should pass it like: GetPoint (p, obj)?\n\nYes, though Iraimbilanja has a good point. The bindings may have changed the call to return the point rather than use an out parameter.\n",
"It's likely that your Python bindings have turned that signature into:\nPoint GetPoint(Object obj)\n\nor even:\nPoint Object::GetPoint()\n\nSo look into the bindings' documentation or sources.\n",
"I'm pretty sure Python passes the value of the reference to a variable. This article can probably explain it better than I.\n",
"Objects are always passed as reference in Python. So wrapping up in object produces similar effect.\n",
"There is not ref by symbol in python - the right thing to do depends on your API. The question to ask yourself is who owns the object passed from C++ to python. Sometimes, the easiest ting it just to copy the object into a python object, but that may not always be the best thing to do.\nYou may be interested in boost.python http://www.boost.org/doc/libs/1_38_0/libs/python/doc/index.html\n"
] | [
5,
3,
2,
1,
1
] | [] | [] | [
"pass_by_reference",
"python",
"variables"
] | stackoverflow_0000578635_pass_by_reference_python_variables.txt |
Q:
Python 2.5 to Python 2.2 converter
I am working on a PyS60 application for S60 2nd Edition devices. I have coded my application logic in Python 2.5.
Is there any tool that automates the conversion from Python 2.5 to Python 2.2, or do I need to do it manually?
A:
The latest Python for S60, 1.9.0, actually includes Python 2.5.1. So maybe you don't need to convert.
A:
I don't know of any tool that would go from 2.5 to 2.2 automatically; but there was one a while ago that did 2.3 to 2.2 by RADLogic.
Depending on how many recent features your code uses, it may be trivial to convert it manually.
I had to backport some code a while back and all it actually took was to define True and False if they weren't already defined, write up some simple replacements for sum(), enumerate(), etc, and grab old versions of modules like datetime and logging that weren't yet in the 2.2 standard library.
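To illustrate the kind of backporting described above, here is a sketch of compatibility shims one might place at the top of a module destined for Python 2.2 (a real port would guard each definition and handle iterators, not just sized sequences):
# Python 2.2 compatibility shims -- a sketch, not a complete backport.
try:
    True
except NameError:
    True, False = 1, 0

def enumerate(seq):
    # Stand-in for the 2.3+ built-in; only works on sized sequences.
    return zip(range(len(seq)), seq)

def sum(seq, start=0):
    # Stand-in for the 2.3+ built-in.
    total = start
    for item in seq:
        total = total + item
    return total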
| Python 2.5 to Python 2.2 converter | I am working on a PyS60 application for S60 2nd Edition devices. I have coded my application logic in Python 2.5.
Is there any tool that automates the conversion from Python 2.5 to Python 2.2, or do I need to do it manually?
| [
"The latest Python for S60, 1.9.0, actually includes Python 2.5.1. So maybe you don't need to convert.\n",
"I don't know of any tool that would go from 2.5 to 2.2 automatically; but there was one a while ago that did 2.3 to 2.2 by RADLogic.\nDepending on how many recent features your code uses, it may be trivial to convert it manually. \nI had to backport some code a while back and all it actually took was to define True and False if they weren't already defined, write up some simple replacements for sum(), enumerate(), etc, and grab old versions of modules like datetime and logging that weren't yet in the 2.2 standard library.\n"
] | [
3,
1
] | [] | [] | [
"pys60",
"python"
] | stackoverflow_0000578262_pys60_python.txt |
Q:
Python package import error
I'm trying to package my modules, but I can't seem to get it working.
My directory tree is something like the following:
snappy/
__init__.py
main/
__init__.py
main.py
config.py
...
...
and the code I'm using is
from snappy.main.config import *
I'm getting the error:
ImportError: No module named snappy.main.config
Any ideas what's going wrong? This is using Python 2.5 on Ubuntu 8.10.
Thanks in advance for your help.
A:
Is the parent directory of snappy in sys.path? If it's not, that's the only thing I can think of that would be causing your error.
A:
It depends on where your script using the import resides and your system PYTHONPATH. Basically, to have that import working you should run your script (the one having the import) in the parent directory of snappy or your script should change sys.path to include it.
./alex
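Concretely, either run the script from the directory that contains snappy/, or extend sys.path before importing. A sketch, assuming the script sits in the same directory as the snappy/ package:
import os
import sys

# Make the directory containing this script importable, regardless of
# the current working directory.
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

from snappy.main.config import *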
| Python package import error | I'm trying to package my modules, but I can't seem to get it working.
My directory tree is something like the following:
snappy/
__init__.py
main/
__init__.py
main.py
config.py
...
...
and the code I'm using is
from snappy.main.config import *
I'm getting the error:
ImportError: No module named snappy.main.config
Any ideas what's going wrong? This is using Python 2.5 on Ubuntu 8.10.
Thanks in advance for your help.
| [
"Is the parent directory of snappy in sys.path? If it's not, that's the only thing I can think of that would be causing your error.\n",
"It depends on where your script using the import resides and your system PYTHONPATH. Basically, to have that import working you should run your script (the one having the import) in the parent directory of snappy or your script should change sys.path to include it.\n./alex\n"
] | [
5,
5
] | [] | [] | [
"package",
"python",
"python_import"
] | stackoverflow_0000578983_package_python_python_import.txt |
Q:
Is there a pattern for propagating details of both errors and warnings?
Is there a common pattern for propagating details of both errors and warnings? By errors I mean serious problems that should cause the flow of code to stop. By warnings I mean issues that merit informing the user of a problem, but are too trivial to stop program flow.
I currently use exceptions to deal with hard errors, and the Python logging framework to record warnings. But now I want to record warnings in a database field of the record currently being processed instead. I guess, I want the warnings to bubble up in the same manner as exceptions, but without stopping program flow.
>>> import logging
>>>
>>> def process_item(item):
... if item:
... if item == 'broken':
... logging.warning('soft error, continue with next item')
... else:
... raise Exception('hard error, cannot continue')
...
>>> process_item('good')
>>> process_item(None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 6, in process_item
Exception: hard error, cannot continue
>>> process_item('broken')
WARNING:root:soft error, continue with next item
This example (and my current problem) is in Python, but it should apply to other languages with exceptions too.
Following David's suggestion and a brief play with the example below, Python's warnings module is the way to go.
import warnings
class MyWarning(Warning):
pass
def causes_warnings():
print 'enter causes_warnings'
warnings.warn("my warning", MyWarning)
print 'leave causes_warnings'
def do_stuff():
print 'enter do_stuff'
causes_warnings()
causes_warnings()
causes_warnings()
print 'leave do_stuff'
with warnings.catch_warnings(record=True) as w:
# Cause all warnings to always be triggered.
warnings.simplefilter("always")
# Trigger a number of warnings.
do_stuff()
# Do something (not very) useful with the warnings generated
print 'Warnings:',','.join([str(warning.message) for warning in w])
Output:
enter do_stuff
enter causes_warnings
leave causes_warnings
enter causes_warnings
leave causes_warnings
enter causes_warnings
leave causes_warnings
leave do_stuff
Warnings: my warning,my warning,my warning
Note: Python 2.6+ is required for catch_warnings.
A:
Look into Python's warnings module, http://docs.python.org/library/warnings.html
I don't think there's much you can say about this problem without specifying the language, as non-terminal error handling varies greatly from one language to another.
| Is there a pattern for propagating details of both errors and warnings? | Is there a common pattern for propagating details of both errors and warnings? By errors I mean serious problems that should cause the flow of code to stop. By warnings I mean issues that merit informing the user of a problem, but are too trivial to stop program flow.
I currently use exceptions to deal with hard errors, and the Python logging framework to record warnings. But now I want to record warnings in a database field of the record currently being processed instead. I guess, I want the warnings to bubble up in the same manner as exceptions, but without stopping program flow.
>>> import logging
>>>
>>> def process_item(item):
... if item:
... if item == 'broken':
... logging.warning('soft error, continue with next item')
... else:
... raise Exception('hard error, cannot continue')
...
>>> process_item('good')
>>> process_item(None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 6, in process_item
Exception: hard error, cannot continue
>>> process_item('broken')
WARNING:root:soft error, continue with next item
This example (and my current problem) is in Python, but it should apply to other languages with exceptions too.
Following David's suggestion and a brief play with the example below, Python's warnings module is the way to go.
import warnings
class MyWarning(Warning):
pass
def causes_warnings():
print 'enter causes_warnings'
warnings.warn("my warning", MyWarning)
print 'leave causes_warnings'
def do_stuff():
print 'enter do_stuff'
causes_warnings()
causes_warnings()
causes_warnings()
print 'leave do_stuff'
with warnings.catch_warnings(record=True) as w:
# Cause all warnings to always be triggered.
warnings.simplefilter("always")
# Trigger a number of warnings.
do_stuff()
# Do something (not very) useful with the warnings generated
print 'Warnings:',','.join([str(warning.message) for warning in w])
Output:
enter do_stuff
enter causes_warnings
leave causes_warnings
enter causes_warnings
leave causes_warnings
enter causes_warnings
leave causes_warnings
leave do_stuff
Warnings: my warning,my warning,my warning
Note: Python 2.6+ is required for catch_warnings.
| [
"Look into Python's warnings module, http://docs.python.org/library/warnings.html\nI don't think there's much you can say about this problem without specifying the language, as non-terminal error handling varies greatly from one language to another.\n"
] | [
7
] | [
"Serious errors should bubble up, warning should just be logged in place without throwing exceptions.\n"
] | [
-1
] | [
"design_patterns",
"error_handling",
"python",
"warnings"
] | stackoverflow_0000579097_design_patterns_error_handling_python_warnings.txt |
Q:
Need help understanding function passing in Python
I am trying to teach myself Python by working through some problems I came up with, and I need some help understanding how to pass functions.
Let's say I am trying to predict tomorrow's temperature based on today's and yesterday's temperature, and I have written the following function:
def predict_temp(temp_today, temp_yest, k1, k2):
return k1*temp_today + k2*temp_yest
And I have also written an error function to compare a list of predicted temperatures with actual temperatures and return the mean absolute error:
def mean_abs_error(predictions, expected):
return sum([abs(x - y) for (x,y) in zip(predictions,expected)]) / float(len(predictions))
Now if I have a list of daily temperatures for some interval in the past, I can see how my prediction function would have done with specific k1 and k2 parameters like this:
>>> past_temps = [41, 35, 37, 42, 48, 30, 39, 42, 33]
>>> pred_temps = [predict_temp(past_temps[i-1],past_temps[i-2],0.5,0.5) for i in xrange(2,len(past_temps))]
>>> print pred_temps
[38.0, 36.0, 39.5, 45.0, 39.0, 34.5, 40.5]
>>> print mean_abs_error(pred_temps, past_temps[2:])
6.5
But how do I design a function to minimize my parameters k1 and k2 of my predict_temp function given an error function and my past_temps data?
Specifically I would like to write a function minimize(*args) that takes a prediction function, an error function, some training data, and that uses some search/optimization method (gradient descent for example) to estimate and return the values of k1 and k2 that minimize my error given the data.
I am not asking how to implement the optimization method. Assume I can do that. Rather, I would just like to know how to pass my predict and error functions (and my data) to my minimize function, and how to tell my minimize function that it should optimize the parameters k1 and k2, so that my minimize function can automatically search a bunch of different settings of k1 and k2, applying my prediction function with those parameters each time to the data and computing error (like I did manually for k1=0.5 and k2=0.5 above) and then return the best results.
I would like to be able to pass these functions so I can easily swap in different prediction and error functions (differing by more than just parameter settings that is). Each prediction function might have a different number of free parameters.
My minimize function should look something like this, but I don't know how to proceed:
def minimize(prediction_function, which_args_to_optimize, error_function, data):
# 1: guess initial parameters
# 2: apply prediction function with current parameters to data to compute predictions
# 3: use error function to compute error between predictions and data
# 4: if stopping criterion is met, return parameters
# 5: update parameters
# 6: GOTO 2
Edit: It's that easy?? This is no fun. I am going back to Java.
On a more serious note, I think I was also getting hung up on how to use different prediction functions with different numbers of parameters to tune. If I just take all the free parameters in as one tuple I can keep the form of the function the same so it easy to pass and use.
A:
Here is an example of how to pass a function into another function. apply_func_to will take a function f and a number num as parameters and return f(num).
def my_func(x):
return x*x
def apply_func_to(f, num):
return f(num)
>>>apply_func_to(my_func, 2)
4
If you wanna be clever you can use lambda (anonymous functions) too. These allow you to pass functions "on the fly" without having to define them separately.
>>>apply_func_to(lambda x:x*x, 3)
9
Hope this helps.
A:
Function passing in Python is easy, you just use the name of the function as a variable which contains the function itself.
def predict(...):
...
minimize(predict, ..., mean_abs_error, ...)
As for the rest of the question: I'd suggest looking at the way SciPy implements this as a model. Basically, they have a function leastsq which minimizes the sum of the squares of the residuals (I presume you know what least-squares minimization is ;-). What you pass to leastsq is a function to compute the residuals, initial guesses for the parameters, and an arbitrary parameter which gets passed on to your residual-computing function (the closure), which includes the data:
# params will be an array of your k's, i.e. [k1, k2]
def residuals(params, measurements, times):
return predict(params, times) - measurements
leastsq(residuals, initial_parameters, args = (measurements, times))
Note that SciPy doesn't actually concern itself with how you come up with the residuals. The measurements array is just passed unaltered to your residuals function.
I can look up an example I did recently if you want more information - or you can find examples online, of course, but in my experience they're not quite as clear. The particular bit of code I wrote would relate well to your scenario.
A:
As David and Il-Bhima note, functions can be passed into other functions just like any other type of object. When you pass a function in, you simply call it like you ordinarily would. People sometimes refer to this ability by saying that functions are first class in Python. At a slightly greater level of detail, you should think of functions in Python as being one type of callable object. Another important type of callable object in Python is class objects; in this case, calling a class object creates an instance of that object. This concept is discussed in detail here.
Generically, you will probably want to leverage the positional and/or keyword argument feature of Python, as described here. This will allow you to write a generic
minimizer that can minimize prediction functions taking different sets of parameters. I've written an example---it's more complicated than I'd like (uses generators!) but it works for prediction functions with arbitrary parameters. I've glossed over a few details, but this should get you started:
def predict(data, k1=None, k2=None):
"""Make the prediction."""
pass
def expected(data):
"""Expected results from data."""
pass
def mean_abs_err(pred, exp):
"""Compute mean absolute error."""
pass
def gen_args(pred_args, args_to_opt):
"""Update prediction function parameters.
pred_args : a dict to update
args_to_opt : a dict of arguments/iterables to apply to pred_args
This is a generator that updates a number of variables
over a given numerical range. Equivalent to itertools.product.
"""
base_args = pred_args.copy() #don't modify input
argnames = args_to_opt.keys()
argvals = args_to_opt.values()
result = [[]]
# Generate the results
for argv in argvals:
result = [x+[y] for x in result for y in argv]
for prod in result:
base_args.update(zip(argnames, prod))
yield base_args
def minimize(pred_fn, pred_args, args_to_opt, err_fn, data):
"""Minimize pred_fn(data) over a set of parameters.
pred_fn : function used to make predictions
pred_args : dict of keyword arguments to pass to pred_fn
args_to_opt : a dict of arguments/iterables to apply to pred_args
err_fn : function used to compute error
data : data to use in the optimization
Returns a tuple (error, parameters) of the best set of input parameters.
"""
results = []
for new_args in gen_args(pred_args, args_to_opt):
pred = pred_fn(data, **new_args) # Unpack dictionary
err = err_fn(pred, expected(data))
results.append((err, new_args))
return sorted(results)[0]
const_args = {'k1': 1}
opt_args = {'k2': range(10)}
data = [] # Whatever data you like.
minimize(predict, const_args, opt_args, mean_abs_err, data)
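Tying this back to the temperature example: here is a brute-force sketch of the minimize() skeleton from the question, which simply evaluates every candidate (k1, k2) pair instead of doing gradient descent (candidate generation is left to the caller):
def minimize(pred_fn, err_fn, data, candidates):
    # Returns (error, (k1, k2)) for the best-scoring parameter pair.
    best = None
    for k1, k2 in candidates:
        preds = [pred_fn(data[i-1], data[i-2], k1, k2)
                 for i in xrange(2, len(data))]
        err = err_fn(preds, data[2:])
        if best is None or err < best[0]:
            best = (err, (k1, k2))
    return best

grid = [(k1/10.0, k2/10.0) for k1 in range(11) for k2 in range(11)]
print minimize(predict_temp, mean_abs_error, past_temps, grid)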
| Need help understanding function passing in Python | I am trying to teach myself Python by working through some problems I came up with, and I need some help understanding how to pass functions.
Let's say I am trying to predict tomorrow's temperature based on today's and yesterday's temperature, and I have written the following function:
def predict_temp(temp_today, temp_yest, k1, k2):
return k1*temp_today + k2*temp_yest
And I have also written an error function to compare a list of predicted temperatures with actual temperatures and return the mean absolute error:
def mean_abs_error(predictions, expected):
return sum([abs(x - y) for (x,y) in zip(predictions,expected)]) / float(len(predictions))
Now if I have a list of daily temperatures for some interval in the past, I can see how my prediction function would have done with specific k1 and k2 parameters like this:
>>> past_temps = [41, 35, 37, 42, 48, 30, 39, 42, 33]
>>> pred_temps = [predict_temp(past_temps[i-1],past_temps[i-2],0.5,0.5) for i in xrange(2,len(past_temps))]
>>> print pred_temps
[38.0, 36.0, 39.5, 45.0, 39.0, 34.5, 40.5]
>>> print mean_abs_error(pred_temps, past_temps[2:])
6.5
But how do I design a function to minimize my parameters k1 and k2 of my predict_temp function given an error function and my past_temps data?
Specifically I would like to write a function minimize(*args) that takes a prediction function, an error function, some training data, and that uses some search/optimization method (gradient descent for example) to estimate and return the values of k1 and k2 that minimize my error given the data.
I am not asking how to implement the optimization method. Assume I can do that. Rather, I would just like to know how to pass my predict and error functions (and my data) to my minimize function, and how to tell my minimize function that it should optimize the parameters k1 and k2, so that my minimize function can automatically search a bunch of different settings of k1 and k2, applying my prediction function with those parameters each time to the data and computing error (like I did manually for k1=0.5 and k2=0.5 above) and then return the best results.
I would like to be able to pass these functions so I can easily swap in different prediction and error functions (differing by more than just parameter settings that is). Each prediction function might have a different number of free parameters.
My minimize function should look something like this, but I don't know how to proceed:
def minimize(prediction_function, which_args_to_optimize, error_function, data):
# 1: guess initial parameters
# 2: apply prediction function with current parameters to data to compute predictions
# 3: use error function to compute error between predictions and data
# 4: if stopping criterion is met, return parameters
# 5: update parameters
# 6: GOTO 2
Edit: It's that easy?? This is no fun. I am going back to Java.
On a more serious note, I think I was also getting hung up on how to use different prediction functions with different numbers of parameters to tune. If I just take all the free parameters in as one tuple I can keep the form of the function the same so it easy to pass and use.
| [
"Here is an example of how to pass a function into another function. apply_func_to will take a function f and a number num as parameters and return f(num).\ndef my_func(x):\n return x*x\n\ndef apply_func_to(f, num):\n return f(num)\n\n>>>apply_func_to(my_func, 2)\n4\n\nIf you wanna be clever you can use lambda (anonymous functions too). These allow you to pass functions \"on the fly\" without having to define them separately\n>>>apply_func_to(lambda x:x*x, 3)\n9\n\nHope this helps.\n",
"Function passing in Python is easy, you just use the name of the function as a variable which contains the function itself.\ndef predict(...):\n ...\n\nminimize(predict, ..., mean_abs_error, ...)\n\nAs for the rest of the question: I'd suggest looking at the way SciPy implements this as a model. Basically, they have a function leastsq which minimizes the sum of the squares of the residuals (I presume you know what least-squares minimization is ;-). What you pass to leastsq is a function to compute the residuals, initial guesses for the parameters, and an arbitrary parameter which gets passed on to your residual-computing function (the closure), which includes the data:\n# params will be an array of your k's, i.e. [k1, k2]\ndef residuals(params, measurements, times):\n return predict(params, times) - measurements\n\nleastsq(residuals, initial_parameters, args = (measurements, times))\n\nNote that SciPy doesn't actually concern itself with how you come up with the residuals. The measurements array is just passed unaltered to your residuals function.\nI can look up an example I did recently if you want more information - or you can find examples online, of course, but in my experience they're not quite as clear. The particular bit of code I wrote would relate well to your scenario.\n",
"As David and and Il-Bhima note, functions can be passed into other functions just like any other type of object. When you pass a function in, you simply call it like you ordinarily would. People sometimes refer to this ability by saying that functions are first class in Python. At a slightly greater level of detail, you should think of functions in Python as being one type of callable object. Another important type of callable object in Python is class objects; in this case, calling a class object creates an instance of that object. This concept is discussed in detail here.\nGenerically, you will probably want to leverage the positional and/or keyword argument feature of Python, as described here. This will allow you to write a generic \nminimizer that can minimize prediction functions taking different sets of parameters. I've written an example---it's more complicated than I'd like (uses generators!) but it works for prediction functions with arbitrary parameters. I've glossed over a few details, but this should get you started:\ndef predict(data, k1=None, k2=None):\n \"\"\"Make the prediction.\"\"\"\n pass\n\ndef expected(data):\n \"\"\"Expected results from data.\"\"\"\n pass\n\ndef mean_abs_err(pred, exp):\n \"\"\"Compute mean absolute error.\"\"\"\n pass\n\ndef gen_args(pred_args, args_to_opt):\n \"\"\"Update prediction function parameters.\n\n pred_args : a dict to update \n args_to_opt : a dict of arguments/iterables to apply to pred_args\n\n This is a generator that updates a number of variables \n over a given numerical range. Equivalent to itertools.product.\n\n \"\"\"\n\n base_args = pred_args.copy() #don't modify input\n\n argnames = args_to_opt.keys()\n argvals = args_to_opt.values()\n result = [[]]\n # Generate the results\n for argv in argvals:\n result = [x+[y] for x in result for y in argv]\n for prod in result:\n base_args.update(zip(argnames, prod))\n yield base_args\n\ndef minimize(pred_fn, pred_args, args_to_opt, err_fn, data):\n \"\"\"Minimize pred_fn(data) over a set of parameters.\n\n pred_fn : function used to make predictions\n pred_args : dict of keyword arguments to pass to pred_fn\n args_to_opt : a dict of arguments/iterables to apply to pred_args\n err_fn : function used to compute error\n data : data to use in the optimization\n\n Returns a tuple (error, parameters) of the best set of input parameters.\n \"\"\"\n results = []\n for new_args in gen_args(pred_args, args_to_opt):\n pred = pred_fn(data, **new_args) # Unpack dictionary\n err = err_fn(pred, expected(data))\n results.append((err, new_args))\n return sorted(results)[0]\n\nconst_args = {k1: 1}\nopt_args = {k2: range(10)}\ndata = [] # Whatever data you like.\nminimize(predict, const_args, opt_args, mean_abs_err, data)\n\n"
] | [
13,
2,
1
] | [] | [] | [
"python"
] | stackoverflow_0000578812_python.txt |
Q:
PyGame not receiving events when 3+ keys are pressed at the same time
I am developing a simple game in PyGame... A rocket ship flying around and shooting stuff.
Question: Why does pygame stop emitting keyboard events when too many keys are pressed at once?
About the Key Handling: The program has a number of variables like KEYSTATE_FIRE, KEYSTATE_TURNLEFT, etc...
When a KEYDOWN event is handled, it sets the corresponding KEYSTATE_* variable to True.
When a KEYUP event is handled, it sets the same variable to False.
The problem:
If UP-ARROW and LEFT-ARROW are being pressed at the same time, pygame DOES NOT emit a KEYDOWN event when SPACE is pressed. This behavior varies depending on the keys. When pressing letters, it seems that I can hold about 5 of them before pygame stops emitting KEYDOWN events for additional keys.
Verification: In my main loop, I simply printed each event received to verify the above behavior.
The code: For reference, here is the (crude) way of handling key events at this point:
while GAME_RUNNING:
FRAME_NUMBER += 1
CLOCK.tick(FRAME_PER_SECOND)
#----------------------------------------------------------------------
# Check for events
for event in pygame.event.get():
print event
if event.type == pygame.QUIT:
raise SystemExit()
elif event.type == pygame.KEYDOWN and event.dict['key'] == pygame.K_UP:
KEYSTATE_FORWARD = True
elif event.type == pygame.KEYUP and event.dict['key'] == pygame.K_UP:
KEYSTATE_FORWARD = False
elif event.type == pygame.KEYDOWN and event.dict['key'] == pygame.K_DOWN:
KEYSTATE_BACKWARD = True
elif event.type == pygame.KEYUP and event.dict['key'] == pygame.K_DOWN:
KEYSTATE_BACKWARD = False
elif event.type == pygame.KEYDOWN and event.dict['key'] == pygame.K_LEFT:
KEYSTATE_TURNLEFT = True
elif event.type == pygame.KEYUP and event.dict['key'] == pygame.K_LEFT:
KEYSTATE_TURNLEFT = False
elif event.type == pygame.KEYDOWN and event.dict['key'] == pygame.K_RIGHT:
KEYSTATE_TURNRIGHT = True
elif event.type == pygame.KEYUP and event.dict['key'] == pygame.K_RIGHT:
KEYSTATE_TURNRIGHT = False
elif event.type == pygame.KEYDOWN and event.dict['key'] == pygame.K_SPACE:
KEYSTATE_FIRE = True
elif event.type == pygame.KEYUP and event.dict['key'] == pygame.K_SPACE:
KEYSTATE_FIRE = False
# remainder of game loop here...
For pressing this sequence:
a (down)
s (down)
d (down)
f (down)
g (down)
h (down)
j (down)
k (down)
a (up)
s (up)
d (up)
f (up)
g (up)
h (up)
j (up)
k (up)
Here is the output:
<Event(2-KeyDown {'scancode': 30, 'key': 97, 'unicode': u'a', 'mod': 0})>
<Event(2-KeyDown {'scancode': 31, 'key': 115, 'unicode': u's', 'mod': 0})>
<Event(2-KeyDown {'scancode': 32, 'key': 100, 'unicode': u'd', 'mod': 0})>
<Event(2-KeyDown {'scancode': 33, 'key': 102, 'unicode': u'f', 'mod': 0})>
<Event(3-KeyUp {'scancode': 30, 'key': 97, 'mod': 0})>
<Event(3-KeyUp {'scancode': 31, 'key': 115, 'mod': 0})>
<Event(3-KeyUp {'scancode': 32, 'key': 100, 'mod': 0})>
<Event(3-KeyUp {'scancode': 33, 'key': 102, 'mod': 0})>
<Event(2-KeyDown {'scancode': 36, 'key': 106, 'unicode': u'j', 'mod': 0})>
<Event(2-KeyDown {'scancode': 37, 'key': 107, 'unicode': u'k', 'mod': 0})>
<Event(3-KeyUp {'scancode': 36, 'key': 106, 'mod': 0})>
<Event(3-KeyUp {'scancode': 37, 'key': 107, 'mod': 0})>
Is this a common issue? Is there a workaround? If not, what is the best way to handle multiple-key control issues when using pygame?
A:
This sounds like an input problem, not a code problem - are you sure the problem isn't the keyboard itself? Most keyboards have limitations on the number of keys that can be pressed at the same time. Often times you can't press more than a few keys that are close together at a time.
To test it out, just start pressing and holding letters on the keyboard and see when new letters stop appearing.
My suggestion is to try mapping SPACE to a different key somewhere else and see what happens.
A:
As others have alluded to already, certain (especially cheaper, lower-end) keyboards have a low quality keyboard matrix. With these keyboards, certain key combinations will lead to the behavior you're experiencing. Another common side effect can be "ghost keys," where an extra key press will appear in the input stream that was not actually pressed.
The only solution (if the problem is related to the keyboard matrix) is to change your key mapping to use keys on different rows/columns of the matrix, or buy a keyboard with a better matrix.
A:
Some keyboards cannot send certain keys together. Often this limit is reached with 3 keys.
A:
It may very well depend on the keyboard. My current no-name keyboard only supports two keys pressed at the same time, often a pain in games.
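As a side note on the handling code itself: the manual KEYSTATE_* bookkeeping can be replaced by polling pygame.key.get_pressed() once per frame, after the event loop has run. This does not work around the keyboard-matrix limit, but it shortens the handler considerably. A sketch:
keys = pygame.key.get_pressed()
KEYSTATE_FORWARD = keys[pygame.K_UP]
KEYSTATE_BACKWARD = keys[pygame.K_DOWN]
KEYSTATE_TURNLEFT = keys[pygame.K_LEFT]
KEYSTATE_TURNRIGHT = keys[pygame.K_RIGHT]
KEYSTATE_FIRE = keys[pygame.K_SPACE]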
| PyGame not receiving events when 3+ keys are pressed at the same time | I am developing a simple game in PyGame... A rocket ship flying around and shooting stuff.
Question: Why does pygame stop emitting keyboard events when too many keys are pressed at once?
About the Key Handling: The program has a number of variables like KEYSTATE_FIRE, KEYSTATE_TURNLEFT, etc...
When a KEYDOWN event is handled, it sets the corresponding KEYSTATE_* variable to True.
When a KEYUP event is handled, it sets the same variable to False.
The problem:
If UP-ARROW and LEFT-ARROW are being pressed at the same time, pygame DOES NOT emit a KEYDOWN event when SPACE is pressed. This behavior varies depending on the keys. When pressing letters, it seems that I can hold about 5 of them before pygame stops emitting KEYDOWN events for additional keys.
Verification: In my main loop, I simply printed each event received to verify the above behavior.
The code: For reference, here is the (crude) way of handling key events at this point:
while GAME_RUNNING:
FRAME_NUMBER += 1
CLOCK.tick(FRAME_PER_SECOND)
#----------------------------------------------------------------------
# Check for events
for event in pygame.event.get():
print event
if event.type == pygame.QUIT:
raise SystemExit()
elif event.type == pygame.KEYDOWN and event.dict['key'] == pygame.K_UP:
KEYSTATE_FORWARD = True
elif event.type == pygame.KEYUP and event.dict['key'] == pygame.K_UP:
KEYSTATE_FORWARD = False
elif event.type == pygame.KEYDOWN and event.dict['key'] == pygame.K_DOWN:
KEYSTATE_BACKWARD = True
elif event.type == pygame.KEYUP and event.dict['key'] == pygame.K_DOWN:
KEYSTATE_BACKWARD = False
elif event.type == pygame.KEYDOWN and event.dict['key'] == pygame.K_LEFT:
KEYSTATE_TURNLEFT = True
elif event.type == pygame.KEYUP and event.dict['key'] == pygame.K_LEFT:
KEYSTATE_TURNLEFT = False
elif event.type == pygame.KEYDOWN and event.dict['key'] == pygame.K_RIGHT:
KEYSTATE_TURNRIGHT = True
elif event.type == pygame.KEYUP and event.dict['key'] == pygame.K_RIGHT:
KEYSTATE_TURNRIGHT = False
elif event.type == pygame.KEYDOWN and event.dict['key'] == pygame.K_SPACE:
KEYSTATE_FIRE = True
elif event.type == pygame.KEYUP and event.dict['key'] == pygame.K_SPACE:
KEYSTATE_FIRE = False
# remainder of game loop here...
For pressing this sequence:
a (down)
s (down)
d (down)
f (down)
g (down)
h (down)
j (down)
k (down)
a (up)
s (up)
d (up)
f (up)
g (up)
h (up)
j (up)
k (up)
Here is the output:
<Event(2-KeyDown {'scancode': 30, 'key': 97, 'unicode': u'a', 'mod': 0})>
<Event(2-KeyDown {'scancode': 31, 'key': 115, 'unicode': u's', 'mod': 0})>
<Event(2-KeyDown {'scancode': 32, 'key': 100, 'unicode': u'd', 'mod': 0})>
<Event(2-KeyDown {'scancode': 33, 'key': 102, 'unicode': u'f', 'mod': 0})>
<Event(3-KeyUp {'scancode': 30, 'key': 97, 'mod': 0})>
<Event(3-KeyUp {'scancode': 31, 'key': 115, 'mod': 0})>
<Event(3-KeyUp {'scancode': 32, 'key': 100, 'mod': 0})>
<Event(3-KeyUp {'scancode': 33, 'key': 102, 'mod': 0})>
<Event(2-KeyDown {'scancode': 36, 'key': 106, 'unicode': u'j', 'mod': 0})>
<Event(2-KeyDown {'scancode': 37, 'key': 107, 'unicode': u'k', 'mod': 0})>
<Event(3-KeyUp {'scancode': 36, 'key': 106, 'mod': 0})>
<Event(3-KeyUp {'scancode': 37, 'key': 107, 'mod': 0})>
Is this a common issue? Is there a workaround? If not, what is the best way to handle multiple-key control issues when using pygame?
| [
"This sounds like a input problem, not a code problem - are you sure the problem isn't the keyboard itself? Most keyboards have limitations on the number of keys that can be pressed at the same time. Often times you can't press more than a few keys that are close together at a time.\nTo test it out, just start pressing and holding letters on the keyboard and see when new letters stop appearing.\nMy suggestion is to try mapping SPACE to a different key somewhere else and see what happens.\n",
"As others have eluded to already, certain (especially cheaper, lower-end) keyboards have a low quality keyboard matrix. With these keyboards, certain key combinations will lead to the behavior you're experiencing. Another common side effect can be \"ghost keys,\" where the an extra key press will appear in the input stream that was not actually pressed.\nThe only solution (if the problem is related to the keyboard matrix) is to change your key mapping to use keys on different rows/columns of the matrix, or buy a keyboard with a better matrix.\n",
"Some keyboards cannot send certain keys together. Often this limit is reached with 3 keys.\n",
"It may very well depend on the keyboard. My current no-name keyboard only supports two keys pressed at the same time, often a pain in games.\n"
] | [
11,
5,
2,
1
] | [] | [] | [
"keyboard_events",
"pygame",
"python"
] | stackoverflow_0000576634_keyboard_events_pygame_python.txt |
Q:
Is there a known Win32 Tkinter bug with respect to displaying photos on a canvas?
I'm noticing a pretty strange bug with tkinter, and I am wondering if it's because there's something in how the python interacts with the tcl, at least in Win32.
Here I have a super simple program that displays a gif image. It works perfectly.
from Tkinter import *
canvas = Canvas(width=300, height=300, bg='white')
canvas.pack()
photo=PhotoImage(file=sys.argv[1])
canvas.create_image(0, 0, image=photo, anchor=NW) # embed a photo
print canvas
print photo
mainloop( )
Now, I change the program slightly to edit the canvas object from within a function. This time, I just get a blank canvas.
# demo all basic canvas interfaces
from Tkinter import *
canvas = Canvas(width=300, height=300, bg='white')
canvas.pack()
def set_canvas(cv):
photo=PhotoImage(file=sys.argv[1])
cv.create_image(0, 0, image=photo, anchor=NW) # embed a photo
print cv
print photo
set_canvas(canvas)
mainloop( )
The only difference between the two is that in one the canvas object is passed to a function instead of being used directly. Both print statements return identical results. I am wondering if there is perhaps some breakdown in the object model at the tcl/python layer.
Any thoughts, folks?
Thanks,
/YGA
A:
Do that as a quick solution, and I'll try to explain:
def set_canvas(cv):
global photo # here!
photo=PhotoImage(file=sys.argv[1])
cv.create_image(0, 0, image=photo, anchor=NW) # embed a photo
print cv
print photo
A PhotoImage needs to have at least one reference from any Python object, otherwise it's garbage collected. In my solution, I suggest to make photo be a module-level name, so when the function ends, there will still be a reference to the PhotoImage object. You might prefer to create a class and make set_canvas into a method, and store the PhotoImage object as an instance variable.
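A sketch of that class-based alternative, assuming the imports from the question plus import sys; the instance attribute keeps the image referenced after __init__ returns:
class PhotoCanvas:
    def __init__(self, cv, path):
        self.photo = PhotoImage(file=path)  # reference lives as long as the instance
        cv.create_image(0, 0, image=self.photo, anchor=NW)

drawer = PhotoCanvas(canvas, sys.argv[1])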
| Is there a known Win32 Tkinter bug with respect to displaying photos on a canvas? | I'm noticing a pretty strange bug with tkinter, and I am wondering if it's because there's something in how the python interacts with the tcl, at least in Win32.
Here I have a super simple program that displays a gif image. It works perfectly.
from Tkinter import *
canvas = Canvas(width=300, height=300, bg='white')
canvas.pack()
photo=PhotoImage(file=sys.argv[1])
canvas.create_image(0, 0, image=photo, anchor=NW) # embed a photo
print canvas
print photo
mainloop( )
Now, I change the program slightly to edit the canvas object from within a function. This time, I just get a blank canvas.
# demo all basic canvas interfaces
from Tkinter import *
canvas = Canvas(width=300, height=300, bg='white')
canvas.pack()
def set_canvas(cv):
photo=PhotoImage(file=sys.argv[1])
cv.create_image(0, 0, image=photo, anchor=NW) # embed a photo
print cv
print photo
set_canvas(canvas)
mainloop( )
The only difference between the two is that in one the canvas object is passed to a function instead of being used directly. Both print statements return identical results. I am wondering if there is perhaps some breakdown in the object model at the tcl/python layer.
Any thoughts, folks?
Thanks,
/YGA
| [
"Do that as a quick solution, and I'll try to explain:\ndef set_canvas(cv):\n global photo # here!\n photo=PhotoImage(file=sys.argv[1])\n cv.create_image(0, 0, image=photo, anchor=NW) # embed a photo\n print cv\n print photo\n\nA PhotoImage needs to have at least one reference from any Python object, otherwise it's garbage collected. In my solution, I suggest to make photo be a module-level name, so when the function ends, there will still be a reference to the PhotoImage object. You might prefer to create a class and make set_canvas into a method, and store the PhotoImage object as an instance variable.\n"
] | [
6
] | [] | [] | [
"python",
"tkinter",
"user_interface",
"winapi"
] | stackoverflow_0000576843_python_tkinter_user_interface_winapi.txt |
Q:
How do I code a source code converter from python to ruby?
My teacher told me that if I wanted to get the best grade in our programming class, I should code a Simple Source Code Converter.
Python to Ruby (the simplest he said)
Now my question to you: how hard is it to code a simple source code converter from Python to Ruby? (It should convert file handling, control statements, etc.)
Do you have any tips for me?
Which language should I use to code the converter (C#, Python or Ruby)?
A:
I think your teacher is fibbing - this is pretty hard. It is equivalent to writing a compiler/interpreter. I don't know how much time you have available for this project, but you are typically looking at several man-years of work.
A:
There is a name for a program which converts one type of code to another. It's called a compiler (even if the target language is not in fact machine or byte code).
Compilers are not the easiest part of computer science, and this is a project that, if it were to be anything more than a toy implementation of a converter, would be a massive project. Certainly larger than what one would normally do for a class project in most university courses. (Even many/most compilers courses have fairly modest project assignments.)
As to what language to use? Well, whichever one you know best is probably the answer. Though if you want to learn something new, Haskell would be a good choice, with its pattern matching features. (Disclaimer: I'm new to haskell.) (Yacc could also be used, if you're really serious about getting into compilers.)
You'll also want to consult: The Dragon Compiler Book,
which is worth studying even if you don't plan to write compilers.
A:
As simple as coming up with enough clever regexps that convert the syntax correctly.
Ruby and python's syntax is close enough for this to be not very hard.
You might need to do a bit of extra work to rewrite stuff that you have in Python that doesn't exist in Ruby, like list comprehensions for instance.
A:
First simple may mean that it does not take care of all the valid semantics of Python, but only a subset of this.
The first thing I would get would be a copy of the dragon book, which you can find in any university library. The second thing I would do would be to get a copy of the syntax and semantics of Python.
A:
The language should not matter; pick the one you are most comfortable with strings in.
Tips-wise, I would use a dictionary/look-up array for the keywords. The hardest part will be dealing with the whitespace in Python.
A:
It sounds like your teacher is a bit of a practical joker!
A:
The hardest part would be preserving semantics.
Like how do you deal with metaclass assignments, or function decorators, or yield-based generators when going to Ruby? I have no Ruby experience so I don't know what is directly supported.
A:
It would be fairly simple to write a converter between Brainf*** and C. I'm sure that's well below the scope of what you need to do (I'm guessing something that's teaching about parsing context-free grammars), but would be really easy to do.
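As a toy illustration of the regexp approach mentioned above, and of why it breaks down quickly (no handling of nesting, strings, or converting indentation to end keywords), a couple of line-level rewrites might look like this:
import re

def convert_line(line):
    # Two of the many rewrites a regexp-based converter would need.
    line = re.sub(r'^(\s*)print (.*)$', r'\1puts \2', line)
    line = re.sub(r'^(\s*)elif\b', r'\1elsif', line)
    return line

print convert_line('print "hello"')  # -> puts "hello"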
| How do I code a source code converter from python to ruby? | My teacher told me that if I wanted to get the best grade in our programming class, I should code a Simple Source Code Converter.
Python to Ruby (the simplest he said)
Now my question to you: how hard is it to code a simple source code converter from Python to Ruby? (It should convert file handling, control statements, etc.)
Do you have any tips for me?
Which language should I use to code the converter (C#, Python or Ruby)?
| [
"I think your teacher is fibbing - this is pretty hard. It is equivalent to writing a compiler/interpreter. I don't know how much time you have available for this project, but you are typically looking at several man-years of work.\n",
"There is a name for a program which converts one type of code to another. It's called a compiler (even if the target language is not in fact machine or byte code).\nCompilers are not the easiest part of computer science, and this is project that, if it were to be anything more than a toy implementation of a converter, would be a massive project. Certainly larger than what one would normally do for a class project in most university courses. (Even many/most compilers courses have fairly modest project assignments.\nAs to what language to use? Well, whichever one you know best is probably the answer. Though if you want to learn something new, Haskell would be a good choice, with its pattern matching features. (Disclaimer: I'm new to haskell.) (Yacc could also be used, if you're really serious about getting into compilers.)\nYou'll also want to consult: The Dragon Compiler Book,\nwhich is worth studying even if you don't plan to write compilers.\n",
"As simple as coming up with enough clever regexps that convert the syntax correctly.\nRuby and python's syntax is close enough for this to be not very hard.\nYou might need to do abit of extra work to rewrite stuff that you have in python that doesn't exist in ruby like listing comprehension for instance.\n",
"First simple may mean that it does not take care of all the valid semantics of Python, but only a subset of this.\nThe first thing I would get would be a copy of the dragon book, which you can find in any university library. The second thing I would do would be to get a copy of the syntax and semantics of Python.\n",
"the language should not matter. pick the one you are most comfortable with strings in. \ntips wise i would use a dictionary/look-up array for the keywords. The hardest part will be dealing with the white space in python\n",
"It sounds like your teacher is a bit of a practical joker!\n",
"The hardest part would be preserving semantics.\nLike how do you deal with metaclass assignments, or function decorators, or yield-based generators when going to Ruby? I have no Ruby experience so I don't know what is directly supported.\n",
"It would be fairly simple to write a converter between Brainf*** and C. I'm sure that's well below the scope of what you need to do (I'm guessing something that's teaching about parsing context-free grammars), but would be really easy to do.\n"
] | [
13,
2,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"converter",
"python",
"ruby"
] | stackoverflow_0000579524_converter_python_ruby.txt |
Q:
What's the Pythonic way to combine two sequences into a dictionary?
Is there a more concise way of doing this in Python?:
def toDict(keys, values):
d = dict()
for k,v in zip(keys, values):
d[k] = v
return d
A:
Yes:
dict(zip(keys,values))
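For example (key order in the printed dict is arbitrary in Python 2):
>>> dict(zip(['a', 'b', 'c'], [1, 2, 3]))
{'a': 1, 'c': 3, 'b': 2}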
A:
If there may be more keys than values, you could use itertools.izip_longest (Python 2.6), which allows you to specify a default value for the rest of the keys:
from itertools import izip_longest
def to_dict(keys, values, default=None):
return dict(izip_longest(keys, values, fillvalue=default))
Example:
>>> to_dict("abcdef", range(3), 10)
{'a': 0, 'c': 2, 'b': 1, 'e': 10, 'd': 10, 'f': 10}
NOTE: itertools.izip*() functions unlike the zip() function return iterators not lists.
| What's the Pythonic way to combine two sequences into a dictionary? | Is there a more concise way of doing this in Python?:
def toDict(keys, values):
d = dict()
for k,v in zip(keys, values):
d[k] = v
return d
| [
"Yes:\ndict(zip(keys,values))\n\n",
"If keys' size may be larger then values' one then you could use itertools.izip_longest (Python 2.6) which allows to specify a default value for the rest of the keys:\nfrom itertools import izip_longest\n\ndef to_dict(keys, values, default=None):\n return dict(izip_longest(keys, values, fillvalue=default))\n\nExample:\n>>> to_dict(\"abcdef\", range(3), 10)\n{'a': 0, 'c': 2, 'b': 1, 'e': 10, 'd': 10, 'f': 10}\n\nNOTE: itertools.izip*() functions unlike the zip() function return iterators not lists.\n"
] | [
43,
4
] | [] | [] | [
"python"
] | stackoverflow_0000579856_python.txt |
Q:
Easiest way to create a scrollable area using wxPython?
Okay, so I want to display a series of windows within windows and have the whole lot scrollable. I've been hunting through the wxWidgets documentation and a load of examples from various sources on t'internet. Most of those seem to imply that a wx.ScrolledWindow should work if I just pass it a nested group of sizers(?):
The most automatic and newest way is to simply let sizers determine the scrolling area. This is now the default when you set an interior sizer into a wxScrolledWindow with wxWindow::SetSizer. The scrolling area will be set to the size requested by the sizer and the scrollbars will be assigned for each orientation according to the need for them and the scrolling increment set by wxScrolledWindow::SetScrollRate.
...but all the examples I've seen seem to use the older methods listed as ways to achieve scrolling. I've got something basic working, but as soon as you start scrolling you lose the child windows:
import wx
class MyCustomWindow(wx.Window):
def __init__(self, parent):
wx.Window.__init__(self, parent)
self.Bind(wx.EVT_PAINT, self.OnPaint)
self.SetSize((50,50))
def OnPaint(self, event):
dc = wx.BufferedPaintDC(self)
dc.SetPen(wx.Pen('blue', 2))
dc.SetBrush(wx.Brush('blue'))
(width, height)=self.GetSizeTuple()
dc.DrawRoundedRectangle(0, 0,width, height, 8)
class TestFrame(wx.Frame):
def __init__(self):
wx.Frame.__init__(self, None, -1)
self.Bind(wx.EVT_SIZE, self.OnSize)
self.scrolling_window = wx.ScrolledWindow( self )
self.scrolling_window.SetScrollRate(1,1)
self.scrolling_window.EnableScrolling(True,True)
self.sizer_container = wx.BoxSizer( wx.VERTICAL )
self.sizer = wx.BoxSizer( wx.HORIZONTAL )
self.sizer_container.Add(self.sizer,1,wx.CENTER,wx.EXPAND)
self.child_windows = []
for i in range(0,50):
wind = MyCustomWindow(self.scrolling_window)
self.sizer.Add(wind, 0, wx.CENTER|wx.ALL, 5)
self.child_windows.append(wind)
self.scrolling_window.SetSizer(self.sizer_container)
def OnSize(self, event):
self.scrolling_window.SetSize(self.GetClientSize())
if __name__=='__main__':
app = wx.PySimpleApp()
f = TestFrame()
f.Show()
app.MainLoop()
A:
Oops.. turns out I was creating my child windows badly:
wind = MyCustomWindow(self)
should be:
wind = MyCustomWindow(self.scrolling_window)
..which meant the child windows were waiting for the top-level window (the frame) to be re-drawn instead of listening to the scroll window. Changing that makes it all work wonderfully :)
| Easiest way to create a scrollable area using wxPython? | Okay, so I want to display a series of windows within windows and have the whole lot scrollable. I've been hunting through the wxWidgets documentation and a load of examples from various sources on t'internet. Most of those seem to imply that a wx.ScrolledWindow should work if I just pass it a nested group of sizers(?):
The most automatic and newest way is to simply let sizers determine the scrolling area. This is now the default when you set an interior sizer into a wxScrolledWindow with wxWindow::SetSizer. The scrolling area will be set to the size requested by the sizer and the scrollbars will be assigned for each orientation according to the need for them and the scrolling increment set by wxScrolledWindow::SetScrollRate.
...but all the examples I've seen seem to use the older methods listed as ways to achieve scrolling. I've got something basic working, but as soon as you start scrolling you lose the child windows:
import wx
class MyCustomWindow(wx.Window):
def __init__(self, parent):
wx.Window.__init__(self, parent)
self.Bind(wx.EVT_PAINT, self.OnPaint)
self.SetSize((50,50))
def OnPaint(self, event):
dc = wx.BufferedPaintDC(self)
dc.SetPen(wx.Pen('blue', 2))
dc.SetBrush(wx.Brush('blue'))
(width, height)=self.GetSizeTuple()
dc.DrawRoundedRectangle(0, 0,width, height, 8)
class TestFrame(wx.Frame):
def __init__(self):
wx.Frame.__init__(self, None, -1)
self.Bind(wx.EVT_SIZE, self.OnSize)
self.scrolling_window = wx.ScrolledWindow( self )
self.scrolling_window.SetScrollRate(1,1)
self.scrolling_window.EnableScrolling(True,True)
self.sizer_container = wx.BoxSizer( wx.VERTICAL )
self.sizer = wx.BoxSizer( wx.HORIZONTAL )
self.sizer_container.Add(self.sizer,1,wx.CENTER,wx.EXPAND)
self.child_windows = []
for i in range(0,50):
wind = MyCustomWindow(self.scrolling_window)
self.sizer.Add(wind, 0, wx.CENTER|wx.ALL, 5)
self.child_windows.append(wind)
self.scrolling_window.SetSizer(self.sizer_container)
def OnSize(self, event):
self.scrolling_window.SetSize(self.GetClientSize())
if __name__=='__main__':
app = wx.PySimpleApp()
f = TestFrame()
f.Show()
app.MainLoop()
| [
"Oops.. turns out I was creating my child windows badly:\nwind = MyCustomWindow(self)\n\nshould be:\nwind = MyCustomWindow(self.scrolling_window)\n\n..which meant the child windows were waiting for the top-level window (the frame) to be re-drawn instead of listening to the scroll window. Changing that makes it all work wonderfully :)\n"
] | [
0
] | [] | [] | [
"python",
"scroll",
"scrolledwindow",
"wxwidgets"
] | stackoverflow_0000578200_python_scroll_scrolledwindow_wxwidgets.txt |
Q:
Change basic (immutable) types inside a function in Python?
I am using a C++ SDK where there is a function like (it has a python wrapper, but no docs):
getPos ( int uvId, float & u, float & v ) const
How do I specify in Python so that the passed variables are changed?
I tried this example to see if I could modify floats inside a function, but it didn't work; it printed 12.0:
def change ( a ) :
a = 35.0
b = 12.0
change ( b )
print b
So how do I call this function so that I can change 2 external floats in Python?
Related:
Python: variable scope and function calls
A:
In Python:
def getPos(uvID):
# compute u, v
return u, v
#
u, v = getPos(uvID)
A:
As far as I know, Python doesn't support call-by-reference, so the exact code you are suggesting doesn't work (obviously).
The tool (or person) that generated the Python wrapper for the C++ function must have done something special to support this function (hopefully, or you won't be able to use it). Do you know what tool was used to generate the wrapper?
Usually tools like this will generate some sort of container data type:
b.value = 12.0
change(b)
print b.value
A:
You're going to have to resort to ctypes.
Specifically, see https://docs.python.org/library/ctypes.html#passing-pointers-or-passing-parameters-by-reference
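For illustration, a rough sketch of that by-reference pattern (this assumes the SDK also exposes a plain C library; 'sdk.so' and the argument types here are guesses, not documented API):
import ctypes

lib = ctypes.cdll.LoadLibrary('sdk.so')  # hypothetical shared library
u = ctypes.c_float()
v = ctypes.c_float()
lib.getPos(42, ctypes.byref(u), ctypes.byref(v))  # uvId=42 is arbitrary
print u.value, v.value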
A:
For simple cases have the function return the new value.
For more complicated cases you can pass in a object or a list and have that changed:
def foobar(alist):
alist[0] = 10
blist = [42]
foobar(blist)
print blist[0]
Edit:
For wrapping C++ references there isn't any standard way (basic python interfaces are at the C level - not C++) - so it depends how the python interface has been implemented - it might be arrays, or returning multiple values. I'm not sure how boost.python handles it but you might start there, or maybe look under a debugger to see how the parameters are handled.
| Change basic (immutable) types inside a function in Python? | I am using a C++ SDK where there is a function like (it has a python wrapper, but no docs):
getPos ( int uvId, float & u, float & v ) const
How do I specify in Python so that the passed variables are changed?
I tried this example to see if I could modify floats inside a function, but it didn't work; it printed 12.0:
def change ( a ) :
a = 35.0
b = 12.0
change ( b )
print b
So how do I call this function so that I can change 2 external floats in Python?
Related:
Python: variable scope and function calls
| [
"In Python:\ndef getPos(uvID):\n # compute u, v\n return u, v\n\n# \nu, v = getPos(uvID)\n\n",
"As far I know, Python doesn't support call-by-reference, so the exact code you are suggesting doesn't work (obviously).\nThe tool (or person) that generated the Python wrapper for the C++ function must have done something special to support this function (hopefully, or you won't be able to use it). Do you know what tool was used to generate the wrapper?\nUsually tools like this will generate some sort of container data type:\nb.value = 12.0\nchange(b)\nprint b.value\n\n",
"You're going to have to resort to ctypes.\nSpecifically, see https://docs.python.org/library/ctypes.html#passing-pointers-or-passing-parameters-by-reference\n",
"For simple cases have the function return the new value.\nFor more complicated cases you can pass in a object or a list and have that changed:\ndef foobar(alist):\n alist[0] = 10\n\nblist = [42]\nfoobar(blist)\nprint blist[0]\n\nEdit:\nFor wrapping C++ references there isn't any standard way (basic python interfaces are at the C level - not C++) - so it depends how the python interface has been implemented - it might be arrays, or returning multiple values. I'm not sure how boost.python handles it but you might start there, or maybe look under a debugger to see how the parameter are handled.\n"
] | [
5,
2,
2,
0
] | [] | [] | [
"pass_by_reference",
"python",
"variables"
] | stackoverflow_0000579782_pass_by_reference_python_variables.txt |
Q:
How can I change a huge file into csv in python
I'm a beginner in python. I have a huge text file (hundreds of GB) and I want to convert the file into csv file. In my text file, I know the row delimiter is a string "<><><><><><><>". If a line contains that string, I want to replace it with ". Is there a way to do it without having to read the old file and rewrite a new one?
Normally I thought I need to do something like this:
fin = open("input", "r")
fout = open("outpout", "w")
line = f.readline
while line != "":
if line.contains("<><><><><><><>"):
fout.writeline("\"")
else:
fout.writeline(line)
line = f.readline
but copying hundreds of GB is wasteful. Also I don't know if open will eat lots of memory (does it treat file handler as a stream?)
Any help is greatly appreciated.
Note: an example of the file would be
file.txt
<><><><><><><>
abcdefeghsduai
asdjliwa
1231214 ""
<><><><><><><>
would be one row and one column in csv.
A:
@richard-levasseur
I agree, sed seems like the right way to go. Here's a rough cut at what the OP describes:
sed -i -e's/<><><><><><><>/"/g' foo.txt
This will do the replacement in-place in the existing foo.txt. For that reason, I recommend having the original file under some sort of version control; any of the DVCS should fit the bill.
A:
Yes, open() treats the file as a stream, as does readline(). It'll only read the next line. If you call read(), however, it'll read everything into memory.
Your example code looks ok at first glance. Almost every solution will require you to copy the file elsewhere. It's not exactly easy to modify the contents of a file in place without a 1:1 replacement.
It may be faster to use some standard unix utilities (awk and sed most likely), but I lack the unix and bash-fu necessary to provide a full solution.
A:
It's only wasteful if you don't have disk to spare. That is, fix it when it's a problem. Your solution looks ok as a first attempt.
It's not wasteful of memory because a file handler is a stream.
A:
Reading lines is simply done using a file iterator:
for line in fin:
    if "<><><><><><><>" in line:
        fout.write('"\n')
Also consider the CSV writer object to write CSV files, e.g:
import csv
writer = csv.writer(open("some.csv", "wb"))
writer.writerows(someiterable)
A:
With python you will have to create a new file for safety's sake; it will cause a lot fewer headaches than trying to write in place.
The code listed below reads your input one line at a time and buffers the columns (from what I understood of your test input file, these together form one row), and then once the end-of-row delimiter is hit it will write that buffer to disk, flushing manually every 1000 lines of the original file. This will save some IO as well, instead of writing every segment: 1000 writes of 32 bytes each will be faster than 4000 writes of 8 bytes.
fin = open(input_fn, "rb")
fout = open(output_fn, "wb")
row_delim = "<><><><><><><>"
write_buffer = []
for i, line in enumerate(fin):
if not i % 1000:
fout.flush()
if row_delim in line and i:
fout.write('"%s"\r\n'%'","'.join(write_buffer))
write_buffer = []
else:
write_buffer.append(line.strip())
Hope that helps.
EDIT: Forgot to mention that while using .readline() is not a bad thing, don't use .readlines(), which will read the entire content of the file into a list containing each line, which is incredibly inefficient. Using the built-in iterator that comes with a file object gives the best memory usage and speed.
A:
@Constatin suggests that if you would be satisfied with replacing '<><><><><><><>\n' by a '"' padded with spaces out to the same length,
then the replacement string is the same length, and in that case you can craft a solution to in-place editing with mmap. You will need python 2.6. It's vital that the file is opened in the right mode!
import mmap, os
CHUNK = 2**20
oldStr = '<><><><><><><>\n'
newStr = '"             \n'  # same length as oldStr: a quote, 13 spaces, newline
strLen = len(oldStr)
assert strLen==len(newStr)
f = open("myfilename", "r+")
size = os.fstat(f.fileno()).st_size
for offset in range(0,size,CHUNK):
map = mmap.mmap(f.fileno(),
length=min(CHUNK+strLen,size-offset), # not beyond EOF
offset=offset)
index = 0 # start at beginning
while 1:
index = map.find(oldStr,index) # find next match
if index == -1: # no more matches in this map
break
map[index:index+strLen] = newStr
f.close()
This code is not debugged! It works for me on a 3 MB test case, but it may not work on a large ( > 2GB) file - the mmap module still seems a bit immature, so I wouldn't rely on it too much.
Looking at the bigger picture, from what you've posted it isn't clear that your file will end up as valid CSV. Also be aware that the tool you're planning to use to actually process the CSV may be flexible enough to deal with the file as it stands.
A:
[For the problem exactly as stated] There's no way that this can be done without copying the data, in python or any other language. If your processing always replaced substrings with new substrings of equal length, maybe you could do it in-place. But whenever you replace <><><><><><><> with " you are changing the position of all subsequent characters in the file. Copying from one place to another is the only way to handle this.
EDIT:
Note that the use of sed won't actually save any copying...sed doesn't really edit in-place either. From the GNU sed manual:
-i[SUFFIX]
--in-place[=SUFFIX]
This option specifies that files are to be edited in-place. GNU sed does this by creating a temporary file and sending output to this file rather than to the standard output.
(emphasis mine.)
A:
If you're delimiting fields with double quotes, it looks like you need to escape the double quotes you have occurring in your elements (for example 1231214 "" will need to be 1231214 \"\").
Something like
fin = open("input", "r")
fout = open("output", "w")
for line in fin:
    if "<><><><><><><>" in line:
        fout.write('"\n')
    else:
        fout.write(line.replace('"', r'\"'))
fin.close()
fout.close()
| How can I change a huge file into csv in python | I'm a beginner in python. I have a huge text file (hundreds of GB) and I want to convert the file into csv file. In my text file, I know the row delimiter is a string "<><><><><><><>". If a line contains that string, I want to replace it with ". Is there a way to do it without having to read the old file and rewrite a new one?
Normally I thought I need to do something like this:
fin = open("input", "r")
fout = open("outpout", "w")
line = f.readline
while line != "":
if line.contains("<><><><><><><>"):
fout.writeline("\"")
else:
fout.writeline(line)
line = f.readline
but copying hundreds of GB is wasteful. Also I don't know if open will eat lots of memory (does it treat file handler as a stream?)
Any help is greatly appreciated.
Note: an example of the file would be
file.txt
<><><><><><><>
abcdefeghsduai
asdjliwa
1231214 ""
<><><><><><><>
would be one row and one column in csv.
| [
"@richard-levasseur\nI agree, sed seems like the right way to go. Here's a rough cut at what the OP describes:\n sed -i -e's/<><><><><><><>/\"/g' foo.txt \n\nThis will do the replacement in-place in the existing foo.txt. For that reason, I recommend having the original file under some sort of version control; any of the DVCS should fit the bill.\n",
"Yes, open() treats the file as a stream, as does readline(). It'll only read the next line. If you call read(), however, it'll read everything into memory.\nYour example code looks ok at first glance. Almost every solution will require you to copy the file elsewhere. Its not exactly easy to modify the contents of a file inplace without a 1:1 replacement.\nIt may be faster to use some standard unix utilities (awk and sed most likely), but I lack the unix and bash-fu necessary to provide a full solution.\n",
"It's only wasteful if you don't have disk to spare. That is, fix it when it's a problem. Your solution looks ok as a first attempt.\nIt's not wasteful of memory because a file handler is a stream.\n",
"Reading lines is simply done using a file iterator:\nfor line in fin:\n if line.contains(\"<><><><><><><>\"):\n fout.writeline(\"\\\"\")\n\nAlso consider the CSV writer object to write CSV files, e.g:\nimport csv\nwriter = csv.writer(open(\"some.csv\", \"wb\"))\nwriter.writerows(someiterable)\n\n",
"With python you will have to create a new file for safety sake, it will cause alot less headaches than trying to write in place.\nThe below listed reads your input 1 line at a time and buffers the columns (from what I understood of your test input file was 1 row) and then once the end of row delimiter is hit it will write that buffer to disk, flushing manually every 1000 lines of the original file. This will save some IO as well instead of writing every segment, 1000 writes of 32 bytes each will be faster than 4000 writes of 8 bytes.\nfin = open(input_fn, \"rb\")\nfout = open(output_fn, \"wb\")\nrow_delim = \"<><><><><><><>\"\nwrite_buffer = []\n\nfor i, line in enumerate(fin):\n if not i % 1000:\n fout.flush()\n if row_delim in line and i:\n fout.write('\"%s\"\\r\\n'%'\",\"'.join(write_buffer))\n write_buffer = []\n else:\n write_buffer.append(line.strip())\n\nHope that helps.\nEDIT: Forgot to mention, while using .readline() is not a bad thing don't use .readlines() which will go and read the entire content of the file into a list containing each line which is incredibly inefficient. Using the built in iterator that comes with a file object is the best memory usage and speed.\n",
"@Constatin suggests that if you would be satisfied with replacing '<><><><><><><>\\n' by '\" \\n'\nthen the replacement string is the same length, and in that case you can craft a solution to in-place editing with mmap. You will need python 2.6. It's vital that the file is opened in the right mode!\n\nimport mmap, os\nCHUNK = 2**20\n\noldStr = ''\nnewStr = '\" '\nstrLen = len(oldStr)\nassert strLen==len(newStr)\n\nf = open(\"myfilename\", \"r+\")\nsize = os.fstat(f.fileno()).st_size\n\nfor offset in range(0,size,CHUNK):\n map = mmap.mmap(f.fileno(),\n length=min(CHUNK+strLen,size-offset), # not beyond EOF\n offset=offset)\n index = 0 # start at beginning\n while 1:\n index = map.find(oldStr,index) # find next match\n if index == -1: # no more matches in this map\n break\n map[index:index+strLen] = newStr\n\nf.close()\n\nThis code is not debugged! It works for me on a 3 MB test case, but it may not work on a large ( > 2GB) file - the mmap module still seems a bit immature, so I wouldn't rely on it too much.\nLooking at the bigger picture, from what you've posted it isn't clear that your file will end up as valid CSV. Also be aware that the tool you're planning to use to actually process the CSV may be flexible enough to deal with the file as it stands.\n",
"[For the problem exactly as stated] There's no way that this can be done without copying the data, in python or any other language. If your processing always replaced substrings with new substrings of equal length, maybe you could do it in-place. But whenever you replace <><><><><><><> with \" you are changing the position of all subsequent characters in the file. Copying from one place to another is the only way to handle this.\nEDIT:\nNote that the use of sed won't actually save any copying...sed doesn't really edit in-place either. From the GNU sed manual:\n\n-i[SUFFIX]\n--in-place[=SUFFIX]\n This option specifies that files are to be edited in-place. GNU sed does this by creating a temporary file and sending output to this file rather than to the standard output.\n\n(emphasis mine.)\n",
"If you're delimiting fields with double quotes, it looks like you need to escape the double quotes you have occurring in your elements (for example 1231214 \"\" will need to be \\n1231214 \\\"\\\").\nSomething like\nfin = open(\"input\", \"r\")\nfout = open(\"output\", \"w\")\nfor line in fin:\n if line.contains(\"<><><><><><><>\"):\n fout.writeline(\"\\\"\")\n else:\n fout.writeline(line.replace('\"',r'\\\"')\nfin.close()\nfout.close()\n\n"
] | [
5,
4,
1,
1,
1,
1,
0,
0
] | [] | [] | [
"csv",
"file",
"python"
] | stackoverflow_0000576967_csv_file_python.txt |
Q:
Python ORM that auto-generates/updates tables and uses SQLite?
I am doing some prototyping for a new desktop app I am writing in Python, and I want to use SQLite and an ORM to store data.
My question is, are there any ORM libraries that support auto-generating/updating the database schema and work with SQLite?
A:
SQLAlchemy is a great choice in the Python ORM space that supports SQLite.
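A minimal sketch of the auto-generation part (declarative style; the Note model is just an example):
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Note(Base):
    __tablename__ = 'notes'
    id = Column(Integer, primary_key=True)
    text = Column(String)

engine = create_engine('sqlite:///app.db')
Base.metadata.create_all(engine)  # creates any missing tables
Schema updates (migrations) are where sqlalchemy-migrate, mentioned below, comes in.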
A:
SQLAlchemy, when used with the sqlalchemy-migrate library.
| Python ORM that auto-generates/updates tables and uses SQLite? | I am doing some prototyping for a new desktop app I am writing in Python, and I want to use SQLite and an ORM to store data.
My question is, are there any ORM libraries that support auto-generating/updating the database schema and work with SQLite?
| [
"SQLAlchemy is a great choice in the Python ORM space that supports SQLite.\n",
"SQLAlchemy, when used with the sqlalchemy-migrate library.\n"
] | [
16,
2
] | [] | [] | [
"auto_generate",
"orm",
"python",
"sqlite"
] | stackoverflow_0000579770_auto_generate_orm_python_sqlite.txt |
Q:
Need help debugging python html generator
The program is supposed to take user input, turn it into html and pass it into the clipboard.
Start the program with welcome_msg()
If you enter 1 in the main menu, it takes you through building an anchor tag. You'll add the link text, the url, then the title. After you enter the title, I get the following errors:
File "<pyshell#23>", line 1, in <module>
welcome_msg()
File "C:\Python26\html_hax.py", line 24, in welcome_msg
anchor()
File "C:\Python26\html_hax.py", line 71, in anchor
copy_to_clipboard(anchor_output)
File "C:\Python26\html_hax.py", line 45, in copy_to_clipboard
wc.SetClipboardData(win32con.CF_TEXT, msg)
error: (0, 'SetClipboardData', 'No error message is available')
Here's the Code:
http://pastie.org/398163
What is causing the errors above?
A:
In your make_link function you construct a link_output, but you don't actually return it as the function's result. Use return to do this:
def make_link(in_link):
...
if title == '':
link_output = ...
else:
link_output = ...
return link_output
This way you get the value passed to your anchor_output variable here:
anchor_output = make_link(anchor_text)
This was None because the function didn't return any value, and setting the clipboard to None failed. With the function returning a real string it should work as expected.
| Need help debugging python html generator | The program is supposed to take user input, turn it into html and pass it into the clipboard.
Start the program with welcome_msg()
If you enter 1 in the main menu, it takes you through building an anchor tag. You'll add the link text, the url, then the title. After you enter the title, I get the following errors:
File "<pyshell#23>", line 1, in <module>
welcome_msg()
File "C:\Python26\html_hax.py", line 24, in welcome_msg
anchor()
File "C:\Python26\html_hax.py", line 71, in anchor
copy_to_clipboard(anchor_output)
File "C:\Python26\html_hax.py", line 45, in copy_to_clipboard
wc.SetClipboardData(win32con.CF_TEXT, msg)
error: (0, 'SetClipboardData', 'No error message is available')
Here's the Code:
http://pastie.org/398163
What is causing the errors above?
| [
"In your make_link function you construct a link_output, but you don't actually return it as the functions result. Use return to do this:\ndef make_link(in_link):\n ...\n if title == '':\n link_output = ...\n else:\n link_output = ...\n return link_output\n\nThis way you get the value passed to your anchor_output variable here:\nanchor_output = make_link(anchor_text)\n\nThis was None because the function didn't return any value, and setting the clipboard to None failed. With the function returning a real string it should work as expected.\n"
] | [
3
] | [] | [] | [
"python",
"pywin32"
] | stackoverflow_0000580397_python_pywin32.txt |
Q:
AuiNotebook, where did the event happen
How can I find out from which AuiNotebook page an event occurred?
EDIT: Sorry about that. Here is a code example. How do I find the notebook page in which the mouse was clicked?
#!/usr/bin/python
#12_aui_notebook1.py
import wx
import wx.lib.inspection
class MyFrame(wx.Frame):
def __init__(self, *args, **kwds):
wx.Frame.__init__(self, *args, **kwds)
self.nb = wx.aui.AuiNotebook(self)
self.new_panel('Pane 1')
self.new_panel('Pane 2')
self.new_panel('Pane 3')
def new_panel(self, nm):
pnl = wx.Panel(self)
self.nb.AddPage(pnl, nm)
self.sizer = wx.BoxSizer()
self.sizer.Add(self.nb, 1, wx.EXPAND)
self.SetSizer(self.sizer)
pnl.Bind(wx.EVT_LEFT_DOWN, self.click)
def click(self, event):
print 'Mouse click'
        #How can I find out which page this click came from?
class MyApp(wx.App):
def OnInit(self):
frame = MyFrame(None, -1, '12_aui_notebook1.py')
frame.Show()
self.SetTopWindow(frame)
return 1
if __name__ == "__main__":
app = MyApp(0)
# wx.lib.inspection.InspectionTool().Show()
app.MainLoop()
Oerjan Pettersen
A:
For a mouse click you can assume the currently selected page is the one that got the click. I added a few lines to your code. See comments
def new_panel(self, nm):
pnl = wx.Panel(self)
# just to debug, I added a string attribute to the panel
# don't you love dynamic languages? :)
pnl.identifierTag = nm
self.nb.AddPage(pnl, nm)
self.sizer = wx.BoxSizer()
self.sizer.Add(self.nb, 1, wx.EXPAND)
self.SetSizer(self.sizer)
pnl.Bind(wx.EVT_LEFT_DOWN, self.click)
def click(self, event):
print 'Mouse click'
# get the current selected page
page = self.nb.GetPage(self.nb.GetSelection())
# notice that it is the panel that you created in new_panel
print page.identifierTag
| AuiNotebook, where did the event happen | How can I find out from which AuiNotebook page an event occurred?
EDIT: Sorry about that. Here is a code example. How do I find the notebook page in which the mouse was clicked?
#!/usr/bin/python
#12_aui_notebook1.py
import wx
import wx.lib.inspection
class MyFrame(wx.Frame):
def __init__(self, *args, **kwds):
wx.Frame.__init__(self, *args, **kwds)
self.nb = wx.aui.AuiNotebook(self)
self.new_panel('Pane 1')
self.new_panel('Pane 2')
self.new_panel('Pane 3')
def new_panel(self, nm):
pnl = wx.Panel(self)
self.nb.AddPage(pnl, nm)
self.sizer = wx.BoxSizer()
self.sizer.Add(self.nb, 1, wx.EXPAND)
self.SetSizer(self.sizer)
pnl.Bind(wx.EVT_LEFT_DOWN, self.click)
def click(self, event):
print 'Mouse click'
        #How can I find out which page this click came from?
class MyApp(wx.App):
def OnInit(self):
frame = MyFrame(None, -1, '12_aui_notebook1.py')
frame.Show()
self.SetTopWindow(frame)
return 1
if __name__ == "__main__":
app = MyApp(0)
# wx.lib.inspection.InspectionTool().Show()
app.MainLoop()
Oerjan Pettersen
| [
"For a mouse click you can assume the current selected page is the one that got the click. I added a few lines to your code. See comments\ndef new_panel(self, nm):\n pnl = wx.Panel(self)\n # just to debug, I added a string attribute to the panel\n # don't you love dynamic languages? :)\n pnl.identifierTag = nm \n self.nb.AddPage(pnl, nm)\n self.sizer = wx.BoxSizer()\n self.sizer.Add(self.nb, 1, wx.EXPAND)\n self.SetSizer(self.sizer)\n\n pnl.Bind(wx.EVT_LEFT_DOWN, self.click)\n\ndef click(self, event):\n print 'Mouse click'\n # get the current selected page\n page = self.nb.GetPage(self.nb.GetSelection())\n # notice that it is the panel that you created in new_panel\n print page.identifierTag\n\n"
] | [
1
] | [] | [] | [
"python",
"wxpython",
"wxwidgets"
] | stackoverflow_0000578800_python_wxpython_wxwidgets.txt |
Q:
Python list serialization - fastest method
I need to load (de-serialize) a pre-computed list of integers from a file in a Python script (into a Python list). The list is large (up to millions of items), and I can choose the format I store it in, as long as loading is fastest.
Which is the fastest method, and why?
Using import on a .py file that just contains the list assigned to a variable
Using cPickle's load
Some other method (perhaps numpy?)
Also, how can one benchmark such things reliably?
Addendum: measuring this reliably is difficult, because import is cached so it can't be executed multiple times in a test. The loading with pickle also gets faster after the first time, probably because of page precaching by the OS. Loading 1 million numbers with cPickle takes 1.1 sec the first time run, and 0.2 sec on subsequent executions of the script.
Intuitively I feel cPickle should be faster, but I'd appreciate numbers (this is quite a challenge to measure, I think).
And yes, it's important for me that this performs quickly.
Thanks
A:
I would guess cPickle will be fastest if you really need the thing in a list.
If you can use an array, which is a built-in sequence type, I timed this at a quarter of a second for 1 million integers:
from array import array
from datetime import datetime
def WriteInts(theArray,filename):
f = file(filename,"wb")
theArray.tofile(f)
f.close()
def ReadInts(filename):
d = datetime.utcnow()
theArray = array('i')
f = file(filename,"rb")
try:
theArray.fromfile(f,1000000000)
except EOFError:
pass
print "Read %d ints in %s" % (len(theArray),datetime.utcnow() - d)
return theArray
if __name__ == "__main__":
a = array('i')
a.extend(range(0,1000000))
filename = "a_million_ints.dat"
WriteInts(a,filename)
r = ReadInts(filename)
print "The 5th element is %d" % (r[4])
A:
For benchmarking, see the timeit module in the Python standard library. To see what is the fastest way, implement all the ways you can think of and measure them with timeit.
Random thought: depending on what you're doing exactly, you may find it fastest to store "sets of integers" in the style used in .newsrc files:
1, 3-1024, 11000-1200000
If you need to check whether something is in that set, then loading and matching with such a representation should be among the fastest ways. This assumes your sets of integers are reasonably dense, with long consecutive sequences of adjacent values.
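A rough sketch of reading such a representation back into a set (assuming comma-separated entries and inclusive ranges):
def parse_ranges(text):
    result = set()
    for part in text.split(','):
        part = part.strip()
        if '-' in part:
            lo, hi = part.split('-')
            result.update(xrange(int(lo), int(hi) + 1))
        else:
            result.add(int(part))
    return result

ints = parse_ranges("1, 3-1024, 11000-1200000")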
A:
"how can one benchmark such things reliably?"
I don't get the question.
You write a bunch of little functions to create and save your list in various forms.
You write a bunch of little functions to load your lists in their various forms.
You write a little timer function to get start time, execute the load procedure a few dozen times (to get a solid average that's long enough that OS scheduling noise doesn't dominate your measurements).
You summarize your data in a little report.
What's unreliable about this?
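The timer function from the third step might be as simple as this (a rough sketch using time.time(); the timeit module mentioned in other answers is the more careful tool):
import time

def time_loader(load_func, repeats=30):
    start = time.time()
    for _ in xrange(repeats):
        load_func()
    return (time.time() - start) / repeats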
Here are some unrelated questions that show how to measure and compare performance.
Convert list of ints to one number?
String concatenation vs. string substitution in Python
A:
To help you with timing, the Python Library provides the timeit module:
This module provides a simple way to time small bits of Python code. It has both command line as well as callable interfaces. It avoids a number of common traps for measuring execution times.
An example (from the manual) that compares the cost of using hasattr() vs. try/except to test for missing and present object attributes:
% timeit.py 'try:' ' str.__nonzero__' 'except AttributeError:' ' pass'
100000 loops, best of 3: 15.7 usec per loop
% timeit.py 'if hasattr(str, "__nonzero__"): pass'
100000 loops, best of 3: 4.26 usec per loop
% timeit.py 'try:' ' int.__nonzero__' 'except AttributeError:' ' pass'
1000000 loops, best of 3: 1.43 usec per loop
% timeit.py 'if hasattr(int, "__nonzero__"): pass'
100000 loops, best of 3: 2.23 usec per loop
A:
Do you need to always load the whole file? If not, unpack_from() might be the best solution. Suppose that you have 1000000 integers, but you'd like to load just the ones from 50000 to 50099; you'd do:
import struct, mmap
intSize = struct.calcsize('i') #this value would be constant for a given arch
intFile = open('/your/file.of.integers', 'rb')
buf = mmap.mmap(intFile.fileno(), 0, access=mmap.ACCESS_READ) #unpack_from needs a buffer, not a file
intTuple5K100 = struct.unpack_from('i'*100, buf, 50000*intSize)
A:
cPickle will be the fastest since it is saved in binary and no real python code has to be parsed.
Other advantages are that it is more secure (since, unlike importing a .py file, it does not execute arbitrary top-level code) and you have no problems with setting $PYTHONPATH correctly.
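For reference, a minimal cPickle round trip of the kind being compared here (Python 2.6+ for the with statement; protocol 2 is the binary format):
import cPickle

with open('ints.pkl', 'wb') as f:
    cPickle.dump(range(1000000), f, 2)

with open('ints.pkl', 'rb') as f:
    ints = cPickle.load(f)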
| Python list serialization - fastest method | I need to load (de-serialize) a pre-computed list of integers from a file in a Python script (into a Python list). The list is large (up to millions of items), and I can choose the format I store it in, as long as loading is fastest.
Which is the fastest method, and why?
Using import on a .py file that just contains the list assigned to a variable
Using cPickle's load
Some other method (perhaps numpy?)
Also, how can one benchmark such things reliably?
Addendum: measuring this reliably is difficult, because import is cached so it can't be executed multiple times in a test. The loading with pickle also gets faster after the first time, probably because of page precaching by the OS. Loading 1 million numbers with cPickle takes 1.1 sec the first time run, and 0.2 sec on subsequent executions of the script.
Intuitively I feel cPickle should be faster, but I'd appreciate numbers (this is quite a challenge to measure, I think).
And yes, it's important for me that this performs quickly.
Thanks
| [
"I would guess cPickle will be fastest if you really need the thing in a list.\nIf you can use an array, which is a built-in sequence type, I timed this at a quarter of a second for 1 million integers:\nfrom array import array\nfrom datetime import datetime\n\ndef WriteInts(theArray,filename):\n f = file(filename,\"wb\")\n theArray.tofile(f)\n f.close()\n\ndef ReadInts(filename):\n d = datetime.utcnow()\n theArray = array('i')\n f = file(filename,\"rb\")\n try:\n theArray.fromfile(f,1000000000)\n except EOFError:\n pass\n print \"Read %d ints in %s\" % (len(theArray),datetime.utcnow() - d)\n return theArray\n\nif __name__ == \"__main__\":\n a = array('i')\n a.extend(range(0,1000000))\n filename = \"a_million_ints.dat\"\n WriteInts(a,filename)\n r = ReadInts(filename)\n print \"The 5th element is %d\" % (r[4])\n\n",
"For benchmarking, see the timeit module in the Python standard library. To see what is the fastest way, implement all the ways you can think of and measure them with timeit.\nRandom thought: depending on what you're doing exactly, you may find it fastest to store \"sets of integers\" in the style used in .newsrc files:\n1, 3-1024, 11000-1200000\n\nIf you need to check whether something is in that set, then loading and matching with such a representation should be among the fastest ways. This assumes your sets of integers are reasonably dense, with long consecutive sequences of adjacent values.\n",
"\"how can one benchmark such things reliably?\" \nI don't get the question.\nYou write a bunch of little functions to create and save your list in various forms.\nYou write a bunch of little functions to load your lists in their various forms.\nYou write a little timer function to get start time, execute the load procedure a few dozen times (to get a solid average that's long enough that OS scheduling noise doesn't dominate your measurements).\nYou summarize your data in a little report. \nWhat's unreliable about this?\nHere are some unrelated questions that shows how to measure and compare performance. \nConvert list of ints to one number?\nString concatenation vs. string substitution in Python\n",
"To help you with timing, the Python Library provides the timeit module:\n\nThis module provides a simple way to time small bits of Python code. It has both command line as well as callable interfaces. It avoids a number of common traps for measuring execution times.\n\nAn example (from the manual) that compares the cost of using hasattr() vs. try/except to test for missing and present object attributes:\n% timeit.py 'try:' ' str.__nonzero__' 'except AttributeError:' ' pass'\n100000 loops, best of 3: 15.7 usec per loop\n% timeit.py 'if hasattr(str, \"__nonzero__\"): pass'\n100000 loops, best of 3: 4.26 usec per loop\n% timeit.py 'try:' ' int.__nonzero__' 'except AttributeError:' ' pass'\n1000000 loops, best of 3: 1.43 usec per loop\n% timeit.py 'if hasattr(int, \"__nonzero__\"): pass'\n100000 loops, best of 3: 2.23 usec per loop\n\n",
"Do you need to always load the whole file? If not, upack_from() might be the best solution. Suppose, that you have 1000000 integers, but you'd like to load just the ones from 50000 to 50099, you'd do:\nimport struct\nintSize = struct.calcsize('i') #this value would be constant for a given arch\nintFile = open('/your/file.of.integers')\nintTuple5K100 = struct.unpack_from('i'*100,intFile,50000*intSize)\n\n",
"cPickle will be the fastest since it is saved in binary and no real python code has to be parsed.\nOther advantates are that it is more secure (since it does not execute commands) and you have no problems with setting $PYTHONPATH correctly.\n"
] | [
7,
3,
2,
2,
2,
1
] | [] | [] | [
"caching",
"python",
"serialization"
] | stackoverflow_0000556730_caching_python_serialization.txt |
Q:
Alternative to 'for i in xrange(len(x))'
So I see in another post the following "bad" snippet, but the only alternatives I have seen involve patching Python.
for i in xrange(len(something)):
workwith = something[i]
# do things with workwith...
What do I do to avoid this "antipattern"?
A:
If you need to know the index in the loop body:
for index, workwith in enumerate(something):
print "element", index, "is", workwith
A:
See Pythonic
for workwith in something:
# do things with workwith
A:
As there are two answers to the question that are perfectly valid (with an assumption each) and the author of the question didn't inform us about the fate of index, the full answer should read:
If you do not need index at all:
for workwith in something:
print "element", workwith
If you need index:
for index, workwith in enumerate(something):
print "element", index, "is", workwith
If my answer is not appropriate, comment please, and I'll delete it :)
A:
for example:
[workwith(i) for i in something]
| Alternative to 'for i in xrange(len(x))' | So I see in another post the following "bad" snippet, but the only alternatives I have seen involve patching Python.
for i in xrange(len(something)):
workwith = something[i]
# do things with workwith...
What do I do to avoid this "antipattern"?
| [
"If you need to know the index in the loop body:\nfor index, workwith in enumerate(something):\n print \"element\", index, \"is\", workwith\n\n",
"See Pythonic\nfor workwith in something:\n # do things with workwith\n\n",
"As there are two answers to question that are perfectly valid (with an assumption each) and author of the question didn't inform us about the destiny of index, the valid answer should read:\n\nIf you do not need index at all:\nfor workwith in something:\n print \"element\", workwith\n\nIf you need index:\nfor index, workwith in enumerate(something):\n print \"element\", index, \"is\", workwith\n\n\nIf my answer is not appropriate, comment please, and I'll delete it :)\n",
"for example:\n[workwith(i) for i in something]\n\n"
] | [
23,
22,
12,
0
] | [
"What is x? If its a sequence or iterator or string then \nfor i in x:\n workwith = i\n\nwill work fine.\n"
] | [
-3
] | [
"anti_patterns",
"for_loop",
"python"
] | stackoverflow_0000578677_anti_patterns_for_loop_python.txt |
Q:
Double buffering with wxpython
I'm working on a multiplatform application with wxpython and I had flickering problems on Windows, while drawing on a Panel.
I used to draw on a buffer (wx.Bitmap) during mouse motion events and my OnPaint method was composed of just one line:
dc = wx.BufferedPaintDC(self, self.buffer)
Pretty standard but still I had flickering problems on Windows, while everything worked fine on Linux.
I solved my problem by calling SetDoubleBuffered(True) in the __init__ method.
The strange thing is that now everything works even if I don't use BufferedPaintDC anymore. I changed my application so that all the drawing part is done in the OnPaint method. I don't use a buffer and drawing is done directly on a wx.PaintDC with no flickering problems at all.
So my question is: is BufferedPaintDC totally useless? Or deprecated somehow? I own the book "WxPython in Action" (2006) and it doesn't even mention SetDoubleBuffered
A:
There is a high probability that the SetDoubleBuffered actually makes your panel use a buffered dc automatically, the documentation doesn't mention that those classes are deprecated (and I rather think they would if that were the case).
About wxPython in Action... 2006 was a long time ago... it is possible that the SetDoubleBuffered method didn't exist back then or that the author wanted to show how things work at the lower level.
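For anyone landing here, a minimal sketch of the SetDoubleBuffered approach from the question (assuming a wxPython version where the method is available, as it was for the asker):
import wx

class DrawPanel(wx.Panel):
    def __init__(self, parent):
        wx.Panel.__init__(self, parent)
        self.SetDoubleBuffered(True)  # let wx buffer the paints
        self.Bind(wx.EVT_PAINT, self.on_paint)

    def on_paint(self, event):
        dc = wx.PaintDC(self)  # plain PaintDC; no manual buffer needed
        dc.DrawText('flicker-free', 10, 10)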
| Double buffering with wxpython | I'm working on a multiplatform application with wxpython and I had flickering problems on Windows, while drawing on a Panel.
I used to draw on a buffer (wx.Bitmap) during mouse motion events and my OnPaint method was composed of just one line:
dc = wx.BufferedPaintDC(self, self.buffer)
Pretty standard but still I had flickering problems on Windows, while everything worked fine on Linux.
I solved my problem by calling SetDoubleBuffered(True) in the __init__ method.
The strange thing is that now everything works even if I don't use BufferedPaintDC anymore. I changed my application so that all the drawing part is done in the OnPaint method. I don't use a buffer and drawing is done directly on a wx.PaintDC with no flickering problems at all.
So my question is: is BufferedPaintDC totally useless? Or deprecated somehow? I own the book "WxPython in Action" (2006) and it doesn't even mention SetDoubleBuffered
| [
"There is a high probability that the SetDoubleBuffered actually makes your panel use a buffered dc automatically, the documentation doesn't mention that those classes are deprecated (and I rather think they would if that were the case).\nAbout wxPython in Action... 2006 was a long time ago... it is possible that the SetDoubleBuffered method didn't exist back then or that the author wanted to show how things work at the lower level.\n"
] | [
5
] | [] | [] | [
"doublebuffered",
"python",
"user_interface",
"wxpython"
] | stackoverflow_0000581085_doublebuffered_python_user_interface_wxpython.txt |
Q:
Python web programming
Good morning.
As the title indicates, I've got some questions about using python for web development.
What is the best setup for a development environment, more specifically, what webserver to use, how to bind python with it. Preferably, I'd like it to be implementable in both *nix and win environments.
My major concern when I last tried apache + mod_python + CherryPy was having to reload the webserver to see the changes. Is it considered normal? For some reason cherrypy's autoreload didn't work at all.
What is the best setup to deploy a working Python app to production and why? I'm now using lighttpd for my PHP web apps, but how would it do for python compared to nginx for example?
Is it worth diving straight with a framework or to roll something simple of my own? I see that Django has got quite a lot of fans, but I'm thinking it would be overkill for my needs, so I've started looking into CherryPy.
How exactly are Python apps served if I have to reload httpd to see the changes? Something like a permanent process spawning child processes, with all the major file includes happening on server start and then just lazy loading needed resources?
Python supports multithreading, do I need to look into using that for a benefit when developing web apps? What would be that benefit and in what situations?
Big thanks!
A:
What is the best setup for a development environment?
Doesn't much matter. We use Django, which runs in Windows and Unix nicely. For production, we use Apache in Red Hat.
Is having to reload webserver to see the changes considered normal?
Yes. Not clear why you'd want anything different. Web application software shouldn't be dynamic. Content yes. Software no.
In Django, we develop without using a web server of any kind on our desktop. The Django "runserver" command reloads the application under most circumstances. For development, this works great. The times when it won't reload are when we've damaged things so badly that the app doesn't run properly.
What is the best setup to deploy a working Python app to production and why?
"Best" is undefined in this context. Therefore, please provide some qualification for "nest" (e.g., "fastest", "cheapest", "bluest")
Is it worth diving straight with a framework or to roll something simple of my own?
Don't waste time rolling your own. We use Django because of the built-in admin page that we don't have to write or maintain. Saves mountains of work.
How exactly are Python apps served if I have to reload httpd to see the changes?
Two methods:
Daemon - mod_wsgi or mod_fastcgi have a Python daemon process to which they connect. Change your software. Restart the daemon. (A minimal script file for this setup is sketched after this list.)
Embedded - mod_wsgi or mod_python have an embedded mode in which the Python interpreter is inside the mod, inside Apache. You have to restart httpd to restart that embedded interpreter.
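A minimal mod_wsgi script file for the Django setup described here might look like this (paths and project name are illustrative):
import os, sys

sys.path.append('/srv/myproject')
os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()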
Do I need to look into using multi-threaded?
Yes and no. Yes you do need to be aware of this. No, you don't need to do very much. Apache and mod_wsgi and Django should handle this for you.
A:
So here are my thoughts about it:
I am using Python Paste for developing my app and eventually also running it (or any other python web server). I am usually not using mod_python or mod_wsgi as it makes development setup more complex.
I am using zc.buildout for managing my development environment and all dependencies together with virtualenv. This gives me an isolated sandbox which does not interfere with any Python modules installed system wide.
For deployment I am also using buildout/virtualenv, eventually with a different buildout.cfg. I am also using Paste Deploy and its configuration mechanism where I have different config files for development and deployment.
As I am usually running paste/cherrypy etc. standalone I am using Apache, NGINX or maybe just a Varnish alone in front of it. It depends on what configuration options you need. E.g. if no virtual hosting, rewrite rules etc. are needed, then I don't need a full featured web server in front. When using a web server I usually use ProxyPass or some more complex rewriting using mod_rewrite.
The Python web framework I use at the moment is repoze.bfg right now btw.
As for your questions about reloading: I know about these problems when running it with e.g. mod_python, but when using a standalone "paster serve ... -reload" etc. it so far works really well. repoze.bfg additionally has some setting for automatically reloading templates when they change. If the framework you use has that, it should be documented.
As for multithreading, that's usually handled inside the python web server. As CherryPy supports this I guess you don't have to worry about that; it should be used automatically. You should just eventually run some benchmarks to find out under what number of threads your application performs the best.
Hope that helps.
A:
+1 to MrTopf's answer, but I'll add some additional opinions.
Webserver
Apache is the webserver that will give you the most configurability. Avoid mod_python because it is basically unsupported. On the other hand, mod_wsgi is very well supported and gives you better stability (in other words, easier to configure for cpu/memory usage to be stable as opposed to spikey and unpredictable).
Another huge benefit, you can configure mod_wsgi to reload your application if the wsgi application script is touched, no need to restart Apache. For development/testing servers you can even configure mod_wsgi to reload when any file in your application is changed. This is so helpful I even run Apache+mod_wsgi on my laptop during development.
Nginx and lighttpd are commonly used for webservers, either by serving Python apps directly through a fastCGI interface (don't bother with any WSGI interfaces on these servers yet) or by using them as a front end in front of Apache. Calls into the app get passed through (by proxy) to Apache+mod_wsgi and then nginx/lighttpd serve the static content directly.
Nginx has the added advantage of being able to serve content directly from memcached if you want to get that sophisticated. I've heard disparaging comments about lighttpd and it does seem to have some development problems, but there are certainly some big companies using it successfully.
Python stack
At the lowest level you can program to WSGI directly for the best performance. There are lots of helpful WSGI modules out there to help you in areas you don't want to develop yourself. At this level you'll probably want to pick third-party WSGI components to do things like URL resolving and HTTP request/response handling. A great request/response component is WebOb.
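At that level, a complete WSGI application is just a callable; a sketch:
def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello from plain WSGI\n']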
If you look at Pylons you can see their idea of "best-of-breed" WSGI components and a framework that makes it easier than Django to choose your own components like templating engine.
Django might be overkill but I don't think that's a really good argument against. Django makes the easy stuff easier. When you start to get into very complicated applications is where you really need to look at moving to lower level frameworks.
A:
Look at Google App Engine. From their website:
Google App Engine lets you run your
web applications on Google's
infrastructure. App Engine
applications are easy to build, easy
to maintain, and easy to scale as your
traffic and data storage needs grow.
With App Engine, there are no servers
to maintain: You just upload your
application, and it's ready to serve
your users.
You can serve your app using a free
domain name on the appspot.com domain,
or use Google Apps to serve it from
your own domain. You can share your
application with the world, or limit
access to members of your
organization.
App Engine costs nothing to get
started. Sign up for a free account,
and you can develop and publish your
application for the world to see, at
no charge and with no obligation. A
free account can use up to 500MB of
persistent storage and enough CPU and
bandwidth for about 5 million page
views a month.
Best part of all: It includes Python support, including Django. Go to http://code.google.com/appengine/docs/whatisgoogleappengine.html
A:
When you use mod_python on a threaded Apache server (the default on Windows), CherryPy runs in the same process as Apache. In that case, you almost certainly don't want CP to restart the process.
Solution: use mod_rewrite or mod_proxy so that CherryPy runs in its own process. Then you can autoreload to your heart's content. :)
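A minimal standalone CherryPy app of that kind (assuming CherryPy 3; it runs in its own process, so autoreload behaves):
import cherrypy

class Root(object):
    def index(self):
        return 'Hello'
    index.exposed = True

cherrypy.quickstart(Root())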
| Python web programming | Good morning.
As the title indicates, I've got some questions about using python for web development.
What is the best setup for a development environment, more specifically, what webserver to use, how to bind python with it. Preferably, I'd like it to be implementable in both *nix and win environments.
My major concern when I last tried apache + mod_python + CherryPy was having to reload the webserver to see the changes. Is it considered normal? For some reason cherrypy's autoreload didn't work at all.
What is the best setup to deploy a working Python app to production and why? I'm now using lighttpd for my PHP web apps, but how would it do for python compared to nginx for example?
Is it worth diving straight with a framework or to roll something simple of my own? I see that Django has got quite a lot of fans, but I'm thinking it would be overkill for my needs, so I've started looking into CherryPy.
How exactly are Python apps served if I have to reload httpd to see the changes? Something like a permanent process spawning child processes, with all the major file includes happening on server start and then just lazy loading needed resources?
Python supports multithreading, do I need to look into using that for a benefit when developing web apps? What would be that benefit and in what situations?
Big thanks!
| [
"What is the best setup for a development environment?\nDoesn't much matter. We use Django, which runs in Windows and Unix nicely. For production, we use Apache in Red Hat.\nIs having to reload webserver to see the changes considered normal?\nYes. Not clear why you'd want anything different. Web application software shouldn't be dynamic. Content yes. Software no.\nIn Django, we develop without using a web server of any kind on our desktop. The Django \"runserver\" command reloads the application under most circumstances. For development, this works great. The times when it won't reload are when we've damaged things so badly that the app doesn't properly.\nWhat is the best setup to deploy a working Python app to production and why?\n\"Best\" is undefined in this context. Therefore, please provide some qualification for \"nest\" (e.g., \"fastest\", \"cheapest\", \"bluest\")\nIs it worth diving straight with a framework or to roll something simple of my own?\nDon't waste time rolling your own. We use Django because of the built-in admin page that we don't have to write or maintain. Saves mountains of work.\nHow exactly are Python apps served if I have to reload httpd to see the changes?\nTwo methods:\n\nDaemon - mod_wsgi or mod_fastcgi have a Python daemon process to which they connect. Change your software. Restart the daemon.\nEmbedded - mod_wsgi or mod_python have an embedded mode in which the Python interpreter is inside the mod, inside Apache. You have to restart httpd to restart that embedded interpreter.\n\nDo I need to look into using multi-threaded?\nYes and no. Yes you do need to be aware of this. No, you don't need to do very much. Apache and mod_wsgi and Django should handle this for you.\n",
"So here are my thoughts about it:\nI am using Python Paste for developing my app and eventually also running it (or any other python web server). I am usually not using mod_python or mod_wsgi as it makes development setup more complex.\nI am using zc.buildout for managing my development environment and all dependencies together with virtualenv. This gives me an isolated sandbox which does not interfere with any Python modules installed system wide. \nFor deployment I am also using buildout/virtualenv, eventually with a different buildout.cfg. I am also using Paste Deploy and it's configuration mechanism where I have different config files for development and deployment.\nAs I am usually running paste/cherrypy etc. standalone I am using Apache, NGINX or maybe just a Varnish alone in front of it. It depends on what configuration options you need. E.g. if no virtual hosting, rewrite rules etc. are needed, then I don't need a full featured web server in front. When using a web server I usually use ProxyPass or some more complex rewriting using mod_rewrite.\nThe Python web framework I use at the moment is repoze.bfg right now btw.\nAs for your questions about reloading I know about these problems when running it with e.g. mod_python but when using a standalone \"paster serve ... -reload\" etc. it so far works really well. repoze.bfg additionally has some setting for automatically reloading templates when they change. If the framework you use has that should be documented.\nAs for multithreading that's usually used then inside the python web server. As CherryPy supports this I guess you don't have to worry about that, it should be used automatically. You should just eventually make some benchmarks to find out under what number of threads your application performs the best.\nHope that helps.\n",
"+1 to MrTopf's answer, but I'll add some additional opinions.\nWebserver\nApache is the webserver that will give you the most configurability. Avoid mod_python because it is basically unsupported. On the other hand, mod_wsgi is very well supported and gives you better stability (in other words, easier to configure for cpu/memory usage to be stable as opposed to spikey and unpredictable).\nAnother huge benefit, you can configure mod_wsgi to reload your application if the wsgi application script is touched, no need to restart Apache. For development/testing servers you can even configure mod_wsgi to reload when any file in your application is changed. This is so helpful I even run Apache+mod_wsgi on my laptop during development.\nNginx and lighttpd are commonly used for webservers, either by serving Python apps directly through a fastCGI interface (don't bother with any WSGI interfaces on these servers yet) or by using them as a front end in front of Apache. Calls into the app get passed through (by proxy) to Apache+mod_wsgi and then nginx/lighttpd serve the static content directly.\nNginx has the added advantage of being able to serve content directly from memcached if you want to get that sophisticated. I've heard disparaging comments about lighttpd and it does seem to have some development problems, but there are certainly some big companies using it successfully.\nPython stack\nAt the lowest level you can program to WSGI directly for the best performance. There are lots of helpful WSGI modules out there to help you in areas you don't want to develop yourself. At this level you'll probably want to pick third-party WSGI components to do things like URL resolving and HTTP request/response handling. A great request/response component is WebOb.\nIf you look at Pylons you can see their idea of \"best-of-breed\" WSGI components and a framework that makes it easier than Django to choose your own components like templating engine.\nDjango might be overkill but I don't think that's a really good argument against. Django makes the easy stuff easier. When you start to get into very complicated applications is where you really need to look at moving to lower level frameworks.\n",
"Look at Google App Engine. From their website: \n\nGoogle App Engine lets you run your\n web applications on Google's\n infrastructure. App Engine\n applications are easy to build, easy\n to maintain, and easy to scale as your\n traffic and data storage needs grow.\n With App Engine, there are no servers\n to maintain: You just upload your\n application, and it's ready to serve\n your users.\nYou can serve your app using a free\n domain name on the appspot.com domain,\n or use Google Apps to serve it from\n your own domain. You can share your\n application with the world, or limit\n access to members of your\n organization.\nApp Engine costs nothing to get\n started. Sign up for a free account,\n and you can develop and publish your\n application for the world to see, at\n no charge and with no obligation. A\n free account can use up to 500MB of\n persistent storage and enough CPU and\n bandwidth for about 5 million page\n views a month.\n\nBest part of all: It includes Python support, including Django. Go to http://code.google.com/appengine/docs/whatisgoogleappengine.html\n",
"When you use mod_python on a threaded Apache server (the default on Windows), CherryPy runs in the same process as Apache. In that case, you almost certainly don't want CP to restart the process.\nSolution: use mod_rewrite or mod_proxy so that CherryPy runs in its own process. Then you can autoreload to your heart's content. :)\n"
] | [
8,
6,
6,
2,
1
] | [] | [] | [
"cherrypy",
"python"
] | stackoverflow_0000581038_cherrypy_python.txt |
Q:
Wrapping objects to extend/add functionality while working around isinstance
In Python, I've seen the recommendation to use holding or wrapping to extend the functionality of an object or class, rather than inheritance. In particular, I think that Alex Martelli spoke about this in his Python Design Patterns talk. I've seen this pattern used in libraries for dependency injection, like pycontainer.
One problem that I've run into is that when I have to interface with code that uses the
isinstance anti-pattern, this pattern fails because the holding/wrapping object fails the isinstance test. How can I set up the holding/wrapping object to get around unnecessary type checking? Can this be done generically? In some sense, I need something for class instances analogous to signature-preserving function decorators (e.g., simple_decorator or Michele Simionato's decorator).
A qualification: I'm not asserting that all isinstance usage is inappropriate; several answers make good points about this. That said, it should be recognized that isinstance usage poses significant limitations on object interactions---it forces inheritance to be the source of polymorphism, rather than behavior.
There seems to be some confusion about exactly how/why this is a problem, so let me provide a simple example (broadly lifted from pycontainer). Let's say we have a class Foo, as well as a FooFactory. For the sake of the example, assume we want to be able to instantiate Foo objects that log every function call, or don't---think AOP. Further, we want to do this without modifying the Foo class/source in any way (e.g., we may actually be implementing a generic factory that can add logging ability to any class instance on the fly). A first stab at this might be:
class Foo(object):
    def bar(self):
        print 'We\'re out of Red Leicester.'

class LogWrapped(object):
    def __init__(self, wrapped):
        self.wrapped = wrapped

    def __getattr__(self, name):
        attr = getattr(self.wrapped, name)
        if not callable(attr):
            return attr
        else:
            def fun(*args, **kwargs):
                print 'Calling ', name
                result = attr(*args, **kwargs)
                print 'Called ', name
                return result
            return fun

class FooFactory(object):
    def get_foo(self, with_logging=False):
        if not with_logging:
            return Foo()
        else:
            return LogWrapped(Foo())
foo_fact = FooFactory()
my_foo = foo_fact.get_foo(True)
isinstance(my_foo, Foo) # False!
There are many reasons why you might want to do things exactly this way (use decorators instead, etc.) but keep in mind:
We don't want to touch the Foo class. Assume we're writing framework code that could be used by clients we don't know about yet.
The point is to return an object that is essentially a Foo, but with added functionality. It should appear to be a Foo---as much as possible---to any other client code expecting a Foo. Hence the desire to work around isinstance.
Yes, I know that I don't need the factory class (preemptively defending myself here).
A:
If the library code you depend on uses isinstance and relies on inheritance why not follow this route? If you cannot change the library then it is probably best to stay consistent with it.
I also think that there are legitimate uses for isinstance, and with the introduction of abstract base classes in 2.6 this has been officially acknowledged. There are situations where isinstance really is the right solution, as opposed to duck typing with hasattr or using exceptions.
Some dirty options if for some reason you really don't want to use inheritance:
You could modify only the class instances by using instance methods. With new.instancemethod you create the wrapper methods for your instance, which then call the original methods defined in the original class (see the sketch below). This seems to be the only option which neither modifies the original class nor defines new classes.
If you can modify the class at runtime there are many options:
Use a runtime mixin, i.e. just add a class to the __bases__ attribute of your class. But this is more used for adding specific functionality, not for indiscriminate wrapping where you don't know what needs to be wrapped.
The options in Dave's answer (class decorators in Python >= 2.6 or Metaclasses).
Edit: For your specific example I guess only the first option works. But I would still consider the alternative of creating a LogFoo or choosing an altogether different solution for something specific like logging.
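A minimal, untested sketch of that first option; add_logging is an invented helper name, and the wrapping happens on one instance only, so isinstance checks keep passing:
import new

def add_logging(obj, name):
    # Replace obj.<name> with a logging wrapper on this one instance only;
    # the class itself stays untouched.
    original = getattr(obj, name)            # the bound method being wrapped
    def wrapper(self, *args, **kwargs):
        print 'Calling', name
        result = original(*args, **kwargs)
        print 'Called', name
        return result
    setattr(obj, name, new.instancemethod(wrapper, obj, obj.__class__))

my_foo = Foo()
add_logging(my_foo, 'bar')
my_foo.bar()
print isinstance(my_foo, Foo)   # True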
A:
One thing to keep in mind is that you don't necessarily have to use anything in a base class if you go the inheritance route. You can make a stub class to inherit from that doesn't add any concrete implementation. I've done something like this several times:
class Message(object):
pass
class ClassToBeWrapped(object):
#...
class MessageWithConcreteImplementation(Message):
def __init__(self):
self.x = ClassToBeWrapped()
#... add concrete implementation here
x = MessageWithConcreteImplementation()
isinstance(x, Message)
If you need to inherit from other things, I suppose you could run into some problems with multiple inheritance, but this should be fairly minimal if you don't provide any concrete implementation.
One problem that I've run into is that when I have to interface with code that uses the isinstance anti-pattern
I agree that isinstance is to be avoided if possible, but I'm not sure I'd call it an antipattern. There are some valid reasons to use isinstance. For instance, there are some message passing frameworks that use this to define messages. For example, if you get a class that inherits from Shutdown, it's time for a subsystem to shut down.
A:
Python 2.6 has the class-level decorators you desire (a sketch follows below). http://docs.python.org/3.0/whatsnew/2.6.html#pep-3129-class-decorators
You could maybe use Metaclasses: http://www.google.com/search?q=python+metaclasses (this way lies madness). I'm not sure it would completely solve your problem, but it might be fun trying. ;-)
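For the class-decorator route, a rough Python 2.6 sketch might look like this; log_calls and LoggedFoo are invented names, and it only wraps public methods defined directly on the first base class:
import functools

def log_calls(cls):
    # Wrap every public method of the first base in-place on the subclass,
    # so instances stay real instances of the base (isinstance still passes).
    base = cls.__bases__[0]
    for name, attr in vars(base).items():
        if callable(attr) and not name.startswith('_'):
            def make_wrapper(name, func):
                @functools.wraps(func)
                def wrapper(self, *args, **kwargs):
                    print 'Calling', name
                    result = func(self, *args, **kwargs)
                    print 'Called', name
                    return result
                return wrapper
            setattr(cls, name, make_wrapper(name, attr))
    return cls

@log_calls
class LoggedFoo(Foo):
    pass

my_foo = LoggedFoo()
my_foo.bar()
print isinstance(my_foo, Foo)   # True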
A:
My first impulse would be to try to fix the offending code which uses isinstance. Otherwise you're just propagating its design mistakes in to your own design. Any reason you can't modify it?
Edit:
So your justification is that you're writing framework/library code that you want people to be able to use in all cases, even if they want to use isinstance?
I think there's several things wrong with this:
You're trying to support a broken paradigm
You're the one defining the library and its interfaces, it's up to the users to use it properly.
There's no way you can possibly anticipate all the bad programming your library users will do, so it's pretty much a futile effort to try to support bad programming practices
I think you're best off writing idiomatic, well designed code. Good code (and bad code) has a tendency to spread, so make yours an example. Hopefully it will lead to an overall increase in code quality. Going the other way will only continue the quality decline.
A:
If you're writing a framework that needs to accept some sort of inputs from your API users, then there's no reason I can think of to use isinstance. Ugly as it might be, I always just check to see if it actually provides the interface I mean to use:
def foo(bar):
if hasattr(bar, "baz") and hasattr(bar, "quux"):
twiddle(bar.baz, bar.quux())
elif hasattr(bar, "quuux"):
etc...
And I also often provide a nice class to inherit default functionality, if the API user wants to use it:
class Bar:
    def baz(self):
        return self.quux("glu")

    def quux(self, x):
        raise NotImplementedError
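And a hypothetical user of that default class (MyBar is made up), relying on the inherited baz:
class MyBar(Bar):
    def quux(self, x):
        return x.upper()

print MyBar().baz()   # prints 'GLU'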
| Wrapping objects to extend/add functionality while working around isinstance | In Python, I've seen the recommendation to use holding or wrapping to extend the functionality of an object or class, rather than inheritance. In particular, I think that Alex Martelli spoke about this in his Python Design Patterns talk. I've seen this pattern used in libraries for dependency injection, like pycontainer.
One problem that I've run into is that when I have to interface with code that uses the
isinstance anti-pattern, this pattern fails because the holding/wrapping object fails the isinstance test. How can I set up the holding/wrapping object to get around unnecessary type checking? Can this be done generically? In some sense, I need something for class instances analogous to signature-preserving function decorators (e.g., simple_decorator or Michele Simionato's decorator).
A qualification: I'm not asserting that all isinstance usage is inappropriate; several answers make good points about this. That said, it should be recognized that isinstance usage poses significant limitations on object interactions---it forces inheritance to be the source of polymorphism, rather than behavior.
There seems to be some confusion about exactly how/why this is a problem, so let me provide a simple example (broadly lifted from pycontainer). Let's say we have a class Foo, as well as a FooFactory. For the sake of the example, assume we want to be able to instantiate Foo objects that log every function call, or don't---think AOP. Further, we want to do this without modifying the Foo class/source in any way (e.g., we may actually be implementing a generic factory that can add logging ability to any class instance on the fly). A first stab at this might be:
class Foo(object):
    def bar(self):
        print 'We\'re out of Red Leicester.'

class LogWrapped(object):
    def __init__(self, wrapped):
        self.wrapped = wrapped

    def __getattr__(self, name):
        attr = getattr(self.wrapped, name)
        if not callable(attr):
            return attr
        else:
            def fun(*args, **kwargs):
                print 'Calling ', name
                result = attr(*args, **kwargs)
                print 'Called ', name
                return result
            return fun

class FooFactory(object):
    def get_foo(self, with_logging=False):
        if not with_logging:
            return Foo()
        else:
            return LogWrapped(Foo())
foo_fact = FooFactory()
my_foo = foo_fact.get_foo(True)
isinstance(my_foo, Foo) # False!
There are many reasons why you might want to do things exactly this way (use decorators instead, etc.) but keep in mind:
We don't want to touch the Foo class. Assume we're writing framework code that could be used by clients we don't know about yet.
The point is to return an object that is essentially a Foo, but with added functionality. It should appear to be a Foo---as much as possible---to any other client code expecting a Foo. Hence the desire to work around isinstance.
Yes, I know that I don't need the factory class (preemptively defending myself here).
| [
"If the library code you depend on uses isinstance and relies on inheritance why not follow this route? If you cannot change the library then it is probably best to stay consistend with it.\nI also think that there are legitimate uses for isinstance, and with the introduction of abstract base classes in 2.6 this has been officially acknowledged. There are situations where isinstance really is the right solution, as opposed to duck typing with hasattr or using exceptions.\nSome dirty options if for some reason you really don't want to use inheritance:\n\nYou could modify only the class instances by using instance methods. With new.instancemethod you create the wrapper methods for your instance, which then calls the original method defined in the original class. This seems to be the only option which neither modifies the original class nor defines new classes.\n\nIf you can modify the class at runtime there are many options:\n\nUse a runtime mixin, i.e. just add a class to the __base__ attribute of your class. But this is more used for adding specific functionality, not for indiscriminate wrapping where you don't know what need to be wrapped.\nThe options in Dave's answer (class decorators in Python >= 2.6 or Metaclasses).\n\nEdit: For your specific example I guess only the first option works. But I would still consider the alternative of creating a LogFoo or chosing an altogether different solution for something specific like logging.\n",
"One thing to keep in mind is that you don't necessarily have to use anything in a base class if you go the inheritance route. You can make a stub class to inherit from that doesn't add any concrete implementation. I've done something like this several times:\nclass Message(object):\n pass\n\nclass ClassToBeWrapped(object):\n #...\n\nclass MessageWithConcreteImplementation(Message):\n def __init__(self):\n self.x = ClassToBeWrapped()\n #... add concrete implementation here\n\nx = MessageWithConcreteImplementation()\nisinstance(x, Message)\n\nIf you need to inherit from other things, I suppose you could run into some problems with multiple inheritance, but this should be fairly minimal if you don't provide any concrete implementation.\n\nOne problem that I've run into is that when I have to interface with code that uses the isinstance anti-pattern\n\nI agree that isinstance is to be avoided if possible, but I'm not sure I'd call it an antipattern. There are some valid reasons to use isinstance. For instance, there are some message passing frameworks that use this to define messages. For example, if you get a class that inherits from Shutdown, it's time for a subsystem to shut down.\n",
"\nPython 2.6 has the class-level decorators you desire. http://docs.python.org/3.0/whatsnew/2.6.html#pep-3129-class-decorators\nYou could maybe use Metaclasses: http://www.google.com/search?q=python+metaclasses (this way lies madness). I'm not sure it would completely solve your problem, but it might be fun trying. ;-)\n\n",
"My first impulse would be to try to fix the offending code which uses isinstance. Otherwise you're just propagating its design mistakes in to your own design. Any reason you can't modify it?\nEdit:\nSo your justification is that you're writing framework/library code that you want people to be able to use in all cases, even if they want to use isinstance?\nI think there's several things wrong with this:\n\nYou're trying to support a broken paradigm\nYou're the one defining the library and its interfaces, it's up to the users to use it properly.\nThere's no way you can possibly anticipate all the bad programming your library users will do, so it's pretty much a futile effort to try to support bad programming practices\n\nI think you're best off writing idiomatic, well designed code. Good code (and bad code) has a tendency to spread, so make yours an example. Hopefully it will lead to an overall increase in code quality. Going the other way will only continue the quality decline.\n",
"If you're writing a framework that needs to accept some sort of inputs from your API users, then there's no reason I can think of to use isinstance. Ugly as it might be, I always just check to see if it actually provides the interface I mean to use:\ndef foo(bar):\n if hasattr(bar, \"baz\") and hasattr(bar, \"quux\"):\n twiddle(bar.baz, bar.quux())\n elif hasattr(bar, \"quuux\"):\n etc...\n\nAnd I also often provide a nice class to inherit default functionality, if the API user wants to use it:\nclass Bar:\n def baz(self):\n return self.quux(\"glu\")\n\n def quux(self):\n raise NotImplemented\n\n"
] | [
2,
1,
0,
0,
0
] | [] | [] | [
"design_patterns",
"python"
] | stackoverflow_0000579620_design_patterns_python.txt |
Q:
regex '|' operator vs separate runs for each sub-expression
I've got a fairly large string (~700k) against which I need to run 10 regexes and count all the matches of any of the regexes. My quick and dirty impl was to do something like re.search('(expr1)|(expr2)|...'), but I was wondering if we'd see any performance gains by matching in a loop instead:
In other words, I want to compare the performance of:
def CountMatchesInBigstring(bigstring, my_regexes):
"""Counts how many of the expressions in my_regexes match bigstring."""
count = 0
combined_expr = '|'.join(['(%s)' % r for r in my_regexes])
matches = re.search(combined_expr, bigstring)
if matches:
count += NumMatches(matches)
return count
vs
def CountMatchesInBigstring(bigstring, my_regexes):
"""Counts how many of the expressions in my_regexes match bigstring."""
count = 0
for reg in my_regexes:
matches = re.search(reg, bigstring)
if matches:
count += NumMatches(matches)
return count
I'll stop being lazy and run some tests tomorrow (and post the results), but I wondered whether the answer will jump out to someone who actually understands how regexes work :)
A:
The two things will give slightly different results, unless it is guaranteed that a match will match one and only one regex. Otherwise if something matches 2 it will be counted twice.
In theory your solution ought to be quicker (if the expressions are mutually exclusive) because the regex compiler ought to be able to make a more efficient search state machine, so only one pass is needed. I would expect the difference to be tiny though, unless the expressions are very similar.
Also, if it were a huge string (bigger than 700k) there might be gains from doing one pass, and so a factor of n fewer memory swaps would be needed (to disk or cpu cache).
My bet is in your tests it isn't really noticeable though. I'm interested in the actual result - please do post the results.
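In the meantime, a rough benchmark along these lines should settle it; the text and patterns are stand-ins, and timeit.timeit with a callable needs Python 2.6+:
import re
import timeit

big = 'filler text with expr1 and expr7 scattered about ' * 15000   # ~700k chars
regexes = ['expr%d' % i for i in range(10)]

combined = re.compile('|'.join('(%s)' % r for r in regexes))
separate = [re.compile(r) for r in regexes]

print timeit.timeit(lambda: combined.findall(big), number=10)
print timeit.timeit(lambda: sum(len(r.findall(big)) for r in separate), number=10)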
A:
To understand how the re module works, compile _sre.c in debug mode (put #define VERBOSE at line 103 in _sre.c and recompile Python). After this you will see something like this:
>>> import re
>>> p = re.compile('(a)|(b)|(c)')
>>> p.search('a'); print '\n\n'; p.search('b')
|0xb7f9ab10|(nil)|SEARCH
prefix = (nil) 0 0
charset = (nil)
|0xb7f9ab1a|0xb7fb75f4|SEARCH
|0xb7f9ab1a|0xb7fb75f4|ENTER
allocating sre_match_context in 0 (32)
allocate/grow stack 1064
|0xb7f9ab1c|0xb7fb75f4|BRANCH
allocating sre_match_context in 32 (32)
|0xb7f9ab20|0xb7fb75f4|MARK 0
|0xb7f9ab24|0xb7fb75f4|LITERAL 97
|0xb7f9ab28|0xb7fb75f5|MARK 1
|0xb7f9ab2c|0xb7fb75f5|JUMP 20
|0xb7f9ab56|0xb7fb75f5|SUCCESS
discard data from 32 (32)
looking up sre_match_context at 0
|0xb7f9ab1c|0xb7fb75f4|JUMP_BRANCH
discard data from 0 (32)
|0xb7f9ab10|0xb7fb75f5|END
|0xb7f9ab10|(nil)|SEARCH
prefix = (nil) 0 0
charset = (nil)
|0xb7f9ab1a|0xb7fb7614|SEARCH
|0xb7f9ab1a|0xb7fb7614|ENTER
allocating sre_match_context in 0 (32)
allocate/grow stack 1064
|0xb7f9ab1c|0xb7fb7614|BRANCH
allocating sre_match_context in 32 (32)
|0xb7f9ab20|0xb7fb7614|MARK 0
|0xb7f9ab24|0xb7fb7614|LITERAL 97
discard data from 32 (32)
looking up sre_match_context at 0
|0xb7f9ab1c|0xb7fb7614|JUMP_BRANCH
allocating sre_match_context in 32 (32)
|0xb7f9ab32|0xb7fb7614|MARK 2
|0xb7f9ab36|0xb7fb7614|LITERAL 98
|0xb7f9ab3a|0xb7fb7615|MARK 3
|0xb7f9ab3e|0xb7fb7615|JUMP 11
|0xb7f9ab56|0xb7fb7615|SUCCESS
discard data from 32 (32)
looking up sre_match_context at 0
|0xb7f9ab2e|0xb7fb7614|JUMP_BRANCH
discard data from 0 (32)
|0xb7f9ab10|0xb7fb7615|END
>>>
A:
I believe your first implementation will be faster:
One of the key principles for Python performance is "move logic to the C level" -- meaning built-in functions (written in C) are faster than pure-Python implementations. So, when the loop is performed by the built-in Regex module, it should be faster
One regex can search for multiple patterns in one pass, meaning it only has to run through your file contents once, whereas multiple regexes will have to read the whole file multiple times.
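For instance, counting every hit of the combined pattern in a single pass could look like this sketch (my_regexes and bigstring as in the question):
import re

combined = re.compile('|'.join('(%s)' % r for r in my_regexes))
count = sum(1 for _ in combined.finditer(bigstring))   # one pass over the text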
A:
I suspect that the regex will also do what you are trying to do ... only much better :)
so the "|" would win
A:
I agree with amartynov but I wanted to add that you also might consider compiling the regex first (re.compile()), esp. in the second variant as then you might save some setup time in the loop. Maybe you can measure this as well while you are on it.
The reason I think the one shot performs better is that I assume that it's fully done in C space and not so much python code needs to be interpreted.
But looking forward to numbers.
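For reference, a sketch of the second variant with the patterns precompiled up front, using findall in place of the question's unspecified NumMatches helper:
import re

def count_matches(bigstring, my_regexes):
    compiled = [re.compile(r) for r in my_regexes]   # pay the compile cost once
    return sum(len(r.findall(bigstring)) for r in compiled)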
A:
A single compile and search should yield faster results; with a small number of expressions the gain could be negligible, but the more patterns you run through, the greater the gain. Think of it as compiling once and matching vs compiling 10 times and matching.
A:
The fewer passes the better: It'll just use more memory, which is typically not an issue.
If anything can be left to the interpreter to handle, it will always find a faster solution (both in time to implement and time to execute) than the typical human counterpart.
| regex '|' operator vs separate runs for each sub-expression | I've got a fairly large string (~700k) against which I need to run 10 regexes and count all the matches of any of the regexes. My quick and dirty impl was to do something like re.search('(expr1)|(expr2)|...'), but I was wondering if we'd see any performance gains by matching in a loop instead:
In other words, I want to compare the performance of:
def CountMatchesInBigstring(bigstring, my_regexes):
"""Counts how many of the expressions in my_regexes match bigstring."""
count = 0
combined_expr = '|'.join(['(%s)' % r for r in my_regexes])
matches = re.search(combined_expr, bigstring)
if matches:
count += NumMatches(matches)
return count
vs
def CountMatchesInBigstring(bigstring, my_regexes):
"""Counts how many of the expressions in my_regexes match bigstring."""
count = 0
for reg in my_regexes:
matches = re.search(reg, bigstring)
if matches:
count += NumMatches(matches)
return count
I'll stop being lazy and run some tests tomorrow (and post the results), but I wondered whether the answer will jump out to someone who actually understands how regexes work :)
| [
"The two things will give slightly different results, unless it is guaranteed that a match will match one and only one regex. Otherwise if something matches 2 it will be counted twice.\nIn theory your solution ought to be quicker (if the expression are mutually exclusive) because the regex compiler ought to be able to make a more efficient search state machine, so only one pass is needed. I would expect the difference to be tiny though, unless the expressions are very similar. \nAlso, if it were a huge string (bigger than 700k) there might be gains from doing one pass, and so a factor of n fewer memory swaps would be needed (to disk or cpu cache).\nMy bet is in your tests it isn't really noticeable though. I'm interested in the actual result - please do post the results.\n",
"To understand how re module works - compile _sre.c in debug mode (put #define VERBOSE at 103 line in _sre.c and recompile python). After this you ill see something like this:\n\n\n>>> import re\n>>> p = re.compile('(a)|(b)|(c)')\n>>> p.search('a'); print '\\n\\n'; p.search('b')\n|0xb7f9ab10|(nil)|SEARCH\nprefix = (nil) 0 0\ncharset = (nil)\n|0xb7f9ab1a|0xb7fb75f4|SEARCH\n|0xb7f9ab1a|0xb7fb75f4|ENTER\nallocating sre_match_context in 0 (32)\nallocate/grow stack 1064\n|0xb7f9ab1c|0xb7fb75f4|BRANCH\nallocating sre_match_context in 32 (32)\n|0xb7f9ab20|0xb7fb75f4|MARK 0\n|0xb7f9ab24|0xb7fb75f4|LITERAL 97\n|0xb7f9ab28|0xb7fb75f5|MARK 1\n|0xb7f9ab2c|0xb7fb75f5|JUMP 20\n|0xb7f9ab56|0xb7fb75f5|SUCCESS\ndiscard data from 32 (32)\nlooking up sre_match_context at 0\n|0xb7f9ab1c|0xb7fb75f4|JUMP_BRANCH\ndiscard data from 0 (32)\n|0xb7f9ab10|0xb7fb75f5|END\n\n\n\n\n|0xb7f9ab10|(nil)|SEARCH\nprefix = (nil) 0 0\ncharset = (nil)\n|0xb7f9ab1a|0xb7fb7614|SEARCH\n|0xb7f9ab1a|0xb7fb7614|ENTER\nallocating sre_match_context in 0 (32)\nallocate/grow stack 1064\n|0xb7f9ab1c|0xb7fb7614|BRANCH\nallocating sre_match_context in 32 (32)\n|0xb7f9ab20|0xb7fb7614|MARK 0\n|0xb7f9ab24|0xb7fb7614|LITERAL 97\ndiscard data from 32 (32)\nlooking up sre_match_context at 0\n|0xb7f9ab1c|0xb7fb7614|JUMP_BRANCH\nallocating sre_match_context in 32 (32)\n|0xb7f9ab32|0xb7fb7614|MARK 2\n|0xb7f9ab36|0xb7fb7614|LITERAL 98\n|0xb7f9ab3a|0xb7fb7615|MARK 3\n|0xb7f9ab3e|0xb7fb7615|JUMP 11\n|0xb7f9ab56|0xb7fb7615|SUCCESS\ndiscard data from 32 (32)\nlooking up sre_match_context at 0\n|0xb7f9ab2e|0xb7fb7614|JUMP_BRANCH\ndiscard data from 0 (32)\n|0xb7f9ab10|0xb7fb7615|END\n\n>>> \n\n\n",
"I believe your first implementation will be faster:\n\nOne of the key principles for Python performance is \"move logic to the C level\" -- meaning built-in functions (written in C) are faster than pure-Python implementations. So, when the loop is performed by the built-in Regex module, it should be faster\nOne regex can search for multiple pattens in one pass, meaning it only has to run through your file contents once, whereas multiple regex will have to read the whole file multiple times.\n\n",
"I suspect that the regex will also do what you are trying to do ... only much better :)\nso the \"|\" would win\n",
"I agree with amartynov but I wanted to add that you also might consider compiling the regex first (re.compile()), esp. in the second variant as then you might save some setup time in the loop. Maybe you can measure this as well while you are on it.\nThe reason I think the one shot performs better is that I assume that it's fully done in C space and not so much python code needs to be interpreted.\nBut looking forward to numbers.\n",
"A single compile and search should yield faster results, on a lower scale of expressions the gain could be negligible but the more you run through the greater gain. Think of it as compiling once and matching vs compiling 10 times and matching.\n",
"The fewer passes the better: It'll just use more memory, which is typically not an issue.\nIf anything can be left to the interpreter to handle, it will always find a faster solution (both in time to implement and time to execute) than the typical human counterpart.\n"
] | [
7,
5,
2,
1,
0,
0,
0
] | [] | [] | [
"performance",
"python",
"regex"
] | stackoverflow_0000580993_performance_python_regex.txt |
Q:
PyGTK widget opacity
Is there any way to set a widget's opacity in PyGTK?
I know there's a function for windows:
gtk.Window.set_opacity(0.85)
but there seems to be no equivalent for arbitrary widgets.
Anyone have any ideas?
Thanks in advance for your help.
A:
From pygtk reference:
For setting up per-pixel alpha, see gtk.gdk.Screen.get_rgba_colormap(). For making non-toplevel windows translucent, see gtk.gdk.Window.set_composited().
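A minimal, untested sketch of the first suggestion (it only has an effect under a compositing window manager):
import gtk

win = gtk.Window()
screen = win.get_screen()
rgba = screen.get_rgba_colormap()
if rgba is not None and screen.is_composited():
    win.set_colormap(rgba)   # the window can now use per-pixel alpha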
A:
It might also be worth looking into pygtkglext for fancier widget stuff.
| PyGTK widget opacity | Is there any way to set a widget's opacity in PyGTK?
I know there's a function for windows:
gtk.Window.set_opacity(0.85)
but there seems to be no equivalent for arbitrary widgets.
Anyone have any ideas?
Thanks in advance for your help.
| [
"From pygtk reference:\n\nFor setting up per-pixel alpha, see gtk.gdk.Screen.get_rgba_colormap(). For making non-toplevel windows translucent, see gtk.gdk.Window.set_composited().\n\n",
"It might also be worth looking into pygtkglext for fancier widget stuff.\n"
] | [
3,
0
] | [] | [] | [
"gtk",
"pygtk",
"python",
"user_interface"
] | stackoverflow_0000583906_gtk_pygtk_python_user_interface.txt |
Q:
How would I make the output of this for loop into a string, into a variable?
In this loop, I'm trying to take user input and continually put it in a list till they write "stop". When the loop is broken, the for loop prints out all of the li's.
How would I take the output of the for loop and make it a string so that I can load it into a variable?
x = []
while True:
item = raw_input('Enter List Text (e.g. <li><a href="#">LIST TEXT</a></li>) (Enter "stop" to end loop):\n')
if item == 'stop':
print 'Loop Stopped.'
break
else:
item = make_link(item)
x.append(item)
print 'List Item Added\n'
for i in range(len(x)):
print '<li>' + x[i] + '</li>\n'
I want it to end up like this:
Code:
print list_output
Output:
<li>Blah</li>
<li>Blah</li>
<li>etc.</li>
A:
In python, strings support a join method (conceptually the opposite of split) that allows you to join elements of a list (technically, of an iterable) together using the string. One very common use case is ', '.join(<list>) to copy the elements of the list into a comma separated string.
In your case, you probably want something like this:
list_output = ''.join('<li>' + item + '</li>\n' for item in x)
If you want the elements of the list separated by newlines, but no newline at the end of the string, you can do this:
list_output = '\n'.join('<li>' + item + '</li>' for item in x)
If you want to get really crazy, this might be the most efficient (although I don't recommend it):
list_output = '<li>' + '</li>\n<li>'.join(item for item in x) + '</li>\n'
A:
s = "\n".join(['<li>' + i + '</li>' for i in x])
A:
I hate to be the person to answer a different question, but hand-coded HTML generation makes me feel ill. Even if you're doing nothing more than this super-simple list generation, I'd strongly recommend looking at a templating language like Genshi.
A Genshi version of your program (a little longer, but way, way nicer):
from genshi.template import MarkupTemplate
TMPL = '''<html xmlns="http://www.w3.org/1999/xhtml"
xmlns:py="http://genshi.edgewall.org/">
<li py:for="item in items">$item</li>
</html>'''
make_link = lambda x: x
item, x = None, []
while True:
item = raw_input('Enter List Text (Enter "stop" to end loop):\n')
if item == 'stop':
break
x.append(make_link(item))
print 'List Item Added\n'
template = MarkupTemplate(TMPL)
stream = template.generate(items = x)
print stream.render()
A:
list_output = "<li>%s</li>\n" * len(x) % tuple(x)
A:
Replace your for loop at the bottom with the following:
list_output = ""
for aLine in x:
    list_output += '<li>' + aLine + '</li>\n'
Note also that since x is a list, Python lets you iterate through the elements of the list instead of having to iterate on an index variable that is then used to look up elements in the list.
| How would I make the output of this for loop into a string, into a variable? | In this loop, I'm trying to take user input and continually put it in a list till they write "stop". When the loop is broken, the for loop prints out all of the li's.
How would I take the output of the for loop and make it a string so that I can load it into a variable?
x = []
while True:
item = raw_input('Enter List Text (e.g. <li><a href="#">LIST TEXT</a></li>) (Enter "stop" to end loop):\n')
if item == 'stop':
print 'Loop Stopped.'
break
else:
item = make_link(item)
x.append(item)
print 'List Item Added\n'
for i in range(len(x)):
print '<li>' + x[i] + '</li>\n'
I want it to end up like this:
Code:
print list_output
Output:
<li>Blah</li>
<li>Blah</li>
<li>etc.</li>
| [
"In python, strings support a join method (conceptually the opposite of split) that allows you to join elements of a list (technically, of an iterable) together using the string. One very common use case is ', '.join(<list>) to copy the elements of the list into a comma separated string.\nIn your case, you probably want something like this:\nlist_output = ''.join('<li>' + item + '</li>\\n' for item in x)\n\nIf you want the elements of the list separated by newlines, but no newline at the end of the string, you can do this:\nlist_output = '\\n'.join('<li>' + item + '</li>' for item in x)\n\nIf you want to get really crazy, this might be the most efficient (although I don't recommend it):\nlist_output = '<li>' + '</li>\\n<li>'.join(item for item in x) + '</li>\\n'\n\n",
"s = \"\\n\".join(['<li>' + i + '</li>' for i in x])\n\n",
"I hate to be the person to answer a different question, but hand-coded HTML generation makes me feel ill. Even if you're doing nothing more than this super-simple list generation, I'd strongly recommend looking at a templating language like Genshi.\nA Genshi version of your program (a little longer, but way, way nicer):\nfrom genshi.template import MarkupTemplate\n\nTMPL = '''<html xmlns=\"http://www.w3.org/1999/xhtml\"\n xmlns:py=\"http://genshi.edgewall.org/\">\n <li py:for=\"item in items\">$item</li>\n </html>'''\n\nmake_link = lambda x: x\n\nitem, x = None, []\nwhile True:\n item = raw_input('Enter List Text (Enter \"stop\" to end loop):\\n')\n if item == 'stop':\n break\n x.append(make_link(item))\n print 'List Item Added\\n'\n\ntemplate = MarkupTemplate(TMPL)\nstream = template.generate(items = x)\nprint stream.render()\n\n",
"list_output = \"<li>%s</li>\\n\" * len(x) % tuple(x)\n\n",
"Replace your for loop at the bottom with the following:\nlist_output=\"\"\nfor aLine in x:\n list_output += '<li>'+aLine+'</li>\\n'\n\nNote also that since x is a list, Python lets you iterate through the elements of the list instead of having to iterate on an index variable that is then used to lookup elements in the list.\n"
] | [
3,
2,
2,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0000583986_python.txt |
Q:
How does one enable authentication across a Django site, and transparently preserving any POST or GET data?
Suppose someone is editing an HTML form and their session times out; how can one have Django re-authenticate that individual without losing the content the user had entered into the form?
The snippet Django Snippets: Require login across entire site suggests how to do site-wide authentication, but I expect it will lose the GET component of the string (namely because request.path does not include it), and definitely lose the POST data.
How can one preserve the POST and GET across those inconvenient timeouts? I find that finessed websites tend to handle this intelligently, and I'd like to be able to do it in Django (as would others, I imagine!).
Thoughts would be appreciated. Thank you.
A:
I have two suggestions.
Redirect/Middleware
Since you're already using middleware to handle the login requirement, you could modify this middleware. Or possibly, create another middleware class that is called after the login middleware. These ideas are intertwined so it may make more sense to modify the existing one.
If not logged in, capture the GET and POST data in the middleware, and store it in the session
If the user is authenticated, check for the value(s) set in #1. If they exist, modify request.GET and request.POST to reflect it, and delete the session data.
I think this should work cleanly, and many people would find it useful. It'd be a great post on djangosnippets.org.
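A rough, untested sketch of that middleware; the class and session-key names are invented, and it assumes the auth middleware has already run:
class PreserveFormDataMiddleware(object):
    def process_request(self, request):
        if request.user.is_authenticated():
            # step 2: restore anything captured before the login
            saved = request.session.pop('saved_form_data', None)
            if saved is not None:
                request.GET, request.POST = saved
        elif request.method == 'POST':
            # step 1: capture the data the login redirect would otherwise lose
            request.session['saved_form_data'] = (request.GET.copy(),
                                                  request.POST.copy())
        return None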
Ajax technique
This is less practical if you already have your form handling in place, but could create a better user experience. If you POST asynchronously, your Javascript handler could recognize a "login required" response code, and then display a popup dialog requesting login. On completion, the user could resubmit the form.
A:
Add an onsubmit handler to all your forms that would check the session via JS and prompt the user to log in before proceeding. This way the form submit would not really happen before the user is logged in again.
And make sure you verify that the logged-in user stays the same across sessions.
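On the server side, the session check polled by that onsubmit handler could be as small as this hypothetical view:
from django.http import HttpResponse

def session_alive(request):
    # Polled by the onsubmit handler before the real submit goes out.
    return HttpResponse('1' if request.user.is_authenticated() else '0')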
A:
This is not exactly Django-specific but specific to HTTP (the stateless protocol)... In case the system ends up issuing a redirect while handling a POST (switching from the original POST to a GET) and thereby risks losing data, one should store the data somewhere (db, memcached, etc.) and make the key under which they are stored carry through the authentication (or other) process.
The simplest is a cookie, as it requires zero care about the key. The more difficult but more bullet-proof approach (against read-only cookie jars) is a key in the URL the user is redirected to, relayed consecutively from request to request (like the SESSION-in-URL solutions from almost a decade ago).
Upon finishing the authentication (or other process), the data (and the interrupted process) can be picked up from the datastore by the key passed (either from cookies or from a GET request variable).
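As a sketch, the stash-and-redirect step might look like this (names are hypothetical, using Django's cache as the datastore):
import uuid

from django.core.cache import cache
from django.http import HttpResponseRedirect

def stash_and_redirect(request, login_url):
    # Park the interrupted request in the cache under a random key,
    # and carry that key through the login redirect.
    key = uuid.uuid4().hex
    cache.set('saved-request-' + key,
              (request.GET.copy(), request.POST.copy()),
              600)   # keep the stashed data around for ten minutes
    return HttpResponseRedirect('%s?resume=%s' % (login_url, key))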
A:
I don't like sessions in general, though I suppose with an authenticated site you are already using them so maybe the answers above fit into your approach.
Without sessions, I'd do something similar to Daniel's answer, i.e. catch the original POST/GET in the middleware, but I'd change the redirect to include the posted info.
This is easier on GETs and normally is just the full GET string encoded in a redirect component of the login URL.
For POSTs you can either convert to the GET method, which works fine for smaller forms, but for bigger forms that would make the URL too long; instead I'd do a re-POST, posting the data to the login form, possibly encoded and stored in a single hidden field (almost like a .NET ViewState, actually).
Doing this in Django is tricky, as you can't redirect a POST, so I'd use the middleware to manually call the login view and write to the HttpResponse from there.
EDIT
After some more looking into this, apparently the magic admin side of Django has something similar already implemented, as found by Jerry Stratton.
Looks like a good option. I'll try it out and report back.
| How does one enable authentication across a Django site, and transparently preserving any POST or GET data? | Suppose someone is editing a HTML form, and their session times out, how can one have Django re-authenticate that individual without losing the content the user had entered into the form?
The snippet Django Snippets: Require login across entire site suggests how to do site-wide authentication, but I expect it will lose the GET component of the string (namely because request.path does not include it), and definitely lose the POST data.
How can one preserve the POST and GET across those inconvenient timeouts? I find that finessed websites tend to handle this intelligently, and I'd like to be able to do it in Django (as would others, I imagine!).
Thoughts would be appreciated. Thank you.
| [
"I have two suggestions.\nRedirect/Middleware\nSince you're already using middleware to handle the login requirement, you could modify this middleware. Or possibly, create another middleware class that is called after the login middleware. These ideas are intertwined so it may make more sense to modify the existing one.\n\nIf not logged in, capture the GET and POST data in the middleware, and store it in the session\nIf the user is authenticated, check for the value(s) set in #1. If they exist, modify request.GET and request.POST to reflect it, and delete the session data.\n\nI think this should work cleanly, and many people would find it useful. It'd be a great post on djangosnippets.org.\nAjax technique\nThis is less practical if you already have your form handling in place, but could create a better user experience. If you POST asynchronously, your Javascript handler could recognize a \"login required\" response code, and then display a popup dialog requesting login. On completion, the user could resubmit the form.\n",
"Add onsubmit handler to all your forms that would check session via JS and prompt use to login before proceeding. This way form submit would not really happen before user is logged in again.\nAnd make sure you verify that logged in user stays the same across sessions.\n",
"It is not exactly Django-specific but HTTP (The Stateless) specific... In the case system ends in issuing Redirect while handling POST (switching to GET from original POST) and risking loosing data one should store the data somewhere (db, memcached, etc.) and make the key under which they are stored be carried through authentication (or other) process. \nThe simplest is Cookie as requires zero-care about the key. The more difficult but more bullet-proof (against read-oly Cookie jars) is key in URL user is redirected to and consecutive relay from request to request (like SESSION in solution from almost decade ago).\nUpon finishing the authentication (or other process) data (and process interrupted) can be picked from datastore by the key passed (either from Cookies, or from GET request variable).\n",
"I don't like sessions in general, though I suppose with an authenticated site you are already using them so maybe the answers above fit into your approach.\nWithout sessions, I'd do something similar to Daniels answer, i.e. catch the original POST/GET in the middleware, but I'd change the redirect to include the posted info.\nThis is easier on GETs and normally is just the full GETstring encoded in a redirect component of the login url.\nFor POSTs you can either convert to the get method which works fine for smaller forms but for bigger forms that would make the url too long, I'd do a rePOST, posting the data to a login form possibly encoded and storing it in a single hidden (almost like a .net viewstate actually)\nDoing this in django is tricky as you can't do a redirect, so I'd use the middleware to manually call the login view, and write to HttpResponse from there.\nEDIT\nAfter some more looking into this, apparently the magic admin side fo django has something similar already implemented, as found by Jerry Stratton\nLooks like a good option. I'll try it out and feedback.\n"
] | [
3,
2,
2,
0
] | [] | [] | [
"authentication",
"django",
"middleware",
"python",
"wsgi"
] | stackoverflow_0000583857_authentication_django_middleware_python_wsgi.txt |
Q:
How to determine number of files on a drive with Python?
I have been trying to figure out how to retrieve (quickly) the number of files on a given HFS+ drive with python.
I have been playing with os.statvfs and such, but can't quite get anything (that seems helpful to me).
Any ideas?
Edit: Let me be a bit more specific. =]
I am writing a timemachine-like wrapper around rsync for various reasons, and would like a very fast estimate (does not have to be perfect) of the number of files on the drive rsync is going to scan. This way I can watch the progress from rsync (if you call it like rsync -ax --progress, or with the -P option) as it builds its initial file list, and report a percentage and/or ETA back to the user.
This is completely separate from the actual backup, which is no problem tracking progress. But with the drives I am working on with several million files, it means the user is watching a counter of the number of files go up with no upper bound for a few minutes.
I have tried playing with os.statvfs with exactly the method described in one of the answers so far, but the results do not make sense to me.
>>> import os
>>> os.statvfs('/').f_files - os.statvfs('/').f_ffree
64171205L
The more portable way gives me around 1.1 million on this machine, which is the same as every other indicator I have seen on this machine, including rsync running its preparations:
>>> sum(len(filenames) for path, dirnames, filenames in os.walk("/"))
1084224
Note that the first method is instantaneous, while the second one made me come back 15 minutes later to update because it took just that long to run.
Does anyone know of a similar way to get this number, or what is wrong with how I am treating/interpreting the os.statvfs numbers?
A:
The right answer for your purpose is to live without a progress bar once, store the number rsync came up with and assume you have the same number of files as last time for each successive backup.
I didn't believe it, but this seems to work on Linux:
os.statvfs('/').f_files - os.statvfs('/').f_ffree
This computes the total number of file blocks minus the free file blocks. It seems to show results for the whole filesystem even if you point it at another directory. os.statvfs is implemented on Unix only.
OK, I admit, I didn't actually let the 'slow, correct' way finish before marveling at the fast method. Just a few drawbacks: I suspect .f_files would also count directories, and the result is probably totally wrong. It might work to count the files the slow way, once, and adjust the result from the 'fast' way?
The portable way:
import os
files = sum(len(filenames) for path, dirnames, filenames in os.walk("/"))
os.walk returns a 3-tuple (dirpath, dirnames, filenames) for each directory in the filesystem starting at the given path. This will probably take a long time for "/", but you knew that already.
The easy way:
Let's face it, nobody knows or cares how many files they really have, it's a humdrum and nugatory statistic. You can add this cool 'number of files' feature to your program with this code:
import random
num_files = random.randint(69000, 4000000)
Let us know if any of these methods works for you.
See also How do I prevent Python's os.walk from walking across mount points?
A:
You could use a number from a previous rsync run. It is quick, portable, and for 10**6 files and any reasonable backup strategy it will give you 1% or better precision.
A:
If traversing the directory tree is an option (would be slower than querying the drive directly):
import os
dirs = 0
files = 0
for r, d, f in os.walk('/path/to/drive'):
dirs += len(d)
files += len(f)
A:
Edit: Spotlight does not track every file, so its metadata will not suffice.
| How to determine number of files on a drive with Python? | I have been trying to figure out how to retrieve (quickly) the number of files on a given HFS+ drive with python.
I have been playing with os.statvfs and such, but can't quite get anything (that seems helpful to me).
Any ideas?
Edit: Let me be a bit more specific. =]
I am writing a timemachine-like wrapper around rsync for various reasons, and would like a very fast estimate (does not have to be perfect) of the number of files on the drive rsync is going to scan. This way I can watch the progress from rsync (if you call it like rsync -ax --progress, or with the -P option) as it builds its initial file list, and report a percentage and/or ETA back to the user.
This is completely separate from the actual backup, which is no problem tracking progress. But with the drives I am working on with several million files, it means the user is watching a counter of the number of files go up with no upper bound for a few minutes.
I have tried playing with os.statvfs with exactly the method described in one of the answers so far, but the results do not make sense to me.
>>> import os
>>> os.statvfs('/').f_files - os.statvfs('/').f_ffree
64171205L
The more portable way gives me around 1.1 million on this machine, which is the same as every other indicator I have seen on this machine, including rsync running its preparations:
>>> sum(len(filenames) for path, dirnames, filenames in os.walk("/"))
1084224
Note that the first method is instantaneous, while the second one made me come back 15 minutes later to update because it took just that long to run.
Does anyone know of a similar way to get this number, or what is wrong with how I am treating/interpreting the os.statvfs numbers?
| [
"The right answer for your purpose is to live without a progress bar once, store the number rsync came up with and assume you have the same number of files as last time for each successive backup.\nI didn't believe it, but this seems to work on Linux:\nos.statvfs('/').f_files - os.statvfs('/').f_ffree\n\nThis computes the total number of file blocks minus the free file blocks. It seems to show results for the whole filesystem even if you point it at another directory. os.statvfs is implemented on Unix only.\nOK, I admit, I didn't actually let the 'slow, correct' way finish before marveling at the fast method. Just a few drawbacks: I suspect .f_files would also count directories, and the result is probably totally wrong. It might work to count the files the slow way, once, and adjust the result from the 'fast' way?\nThe portable way:\nimport os\nfiles = sum(len(filenames) for path, dirnames, filenames in os.walk(\"/\"))\n\nos.walk returns a 3-tuple (dirpath, dirnames, filenames) for each directory in the filesystem starting at the given path. This will probably take a long time for \"/\", but you knew that already.\nThe easy way:\nLet's face it, nobody knows or cares how many files they really have, it's a humdrum and nugatory statistic. You can add this cool 'number of files' feature to your program with this code:\nimport random\nnum_files = random.randint(69000, 4000000)\n\nLet us know if any of these methods works for you.\nSee also How do I prevent Python's os.walk from walking across mount points?\n",
"You could use a number from a previous rsync run. It is quick, portable, and for 10**6 files and any reasonable backup strategy it will give you 1% or better precision.\n",
"If traversing the directory tree is an option (would be slower than querying the drive directly):\nimport os\n\ndirs = 0\nfiles = 0\n\nfor r, d, f in os.walk('/path/to/drive'):\n dirs += len(d)\n files += len(f)\n\n",
"Edit: Spotlight does not track every file, so its metadata will not suffice.\n"
] | [
7,
2,
1,
0
] | [] | [] | [
"filesystems",
"hard_drive",
"macos",
"python"
] | stackoverflow_0000574236_filesystems_hard_drive_macos_python.txt |
Q:
tkinter - set geometry without showing window
I'm trying to line up some label and canvas widgets. To do so I need to know how wide my label boxes are. I'd like my widget to auto-adjust if the user changes the system font size, so I don't want to hard code 12 pixels per character. If I measure the label widget it's always 1 pixel wide. Until I call .update(), then I get the correct value. But .update() puts a window onscreen with my label, said window then goes away when I finally pack my final widgets. But this causes an unwelcome flash when I first put up the widget.
So, how can I measure a label widget without .update()'ing it? Or how can I .update() a widget without having it display onscreen? I'm using Python if it matters.
A:
Withdraw the window before calling update. The command you want is wm_withdraw
root = Tk()
root.wm_withdraw()
<your code here>
root.wm_deiconify()
However, if your real problem is lining up widgets you usually don't need to know the size of widgets. Use the grid geometry manager. Get out a piece of graph paper and lay your widgets out on it. Feel free to span as many squares as necessary for each widget. The design can then translate easily to a series of grid calls.
| tkinter - set geometry without showing window | I'm trying to line up some label and canvas widgets. To do so I need to know how wide my label boxes are. I'd like my widget to auto-adjust if the user changes the system font size, so I don't want to hard code 12 pixels per character. If I measure the label widget it's always 1 pixel wide. Until I call .update(), then I get the correct value. But .update() puts a window onscreen with my label, said window then goes away when I finally pack my final widgets. But this causes an unwelcome flash when I first put up the widget.
So, how can I measure a label widget without .update()'ing it? Or how can I .update() a widget without having it display onscreen? I'm using Python if it matters.
| [
"Withdraw the window before calling update. The command you want is wm_withdraw\nroot = Tk()\nroot.wm_withdraw()\n<your code here>\nroot.wm_deiconify()\n\nHowever, if your real problem is lining up widgets you usually don't need to know the size of widgets. Use the grid geometry manager. Get out a piece of graph paper and lay your widgets out on it. Feel free to span as many squares as necessary for each widget. The design can then translate easily to a series of grid calls.\n"
] | [
1
] | [] | [] | [
"python",
"tkinter"
] | stackoverflow_0000584127_python_tkinter.txt |
Q:
Django development server shutdown error
Whenever i shut down my development server (./manage.py runserver) with CTRL+c i get following message:
[24/Feb/2009 22:05:23] "GET /home/ HTTP/1.1" 200 1571
[24/Feb/2009 22:05:24] "GET /contact HTTP/1.1" 301 0
[24/Feb/2009 22:05:24] "GET /contact/ HTTP/1.1" 200 2377
^C
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/logging/__init__.py", line 1354, in shutdown
h.flush()
TypeError: flush() takes exactly 2 arguments (1 given)
Error in sys.exitfunc:
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/logging/__init__.py", line 1354, in shutdown
h.flush()
TypeError: flush() takes exactly 2 arguments (1 given)
I recently moved the project to another directory, but everything else works fine, so i don't know if that has anything to do with it ...
If i just start the development server and then shut it down immediately, i do not see the error. Only when i click around some in the browser and then shut down the server...
Can anyone point me in the right direction to sort this one out plz?
Thanks in advance.
A:
It appears that you are using the Mac's default python install. I know this has been reputed to have odd issues from time to time. I would recommend install MacPython and installing Django into that python instance.
| Django development server shutdown error | Whenever i shut down my development server (./manage.py runserver) with CTRL+c i get following message:
[24/Feb/2009 22:05:23] "GET /home/ HTTP/1.1" 200 1571
[24/Feb/2009 22:05:24] "GET /contact HTTP/1.1" 301 0
[24/Feb/2009 22:05:24] "GET /contact/ HTTP/1.1" 200 2377
^C
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/logging/__init__.py", line 1354, in shutdown
h.flush()
TypeError: flush() takes exactly 2 arguments (1 given)
Error in sys.exitfunc:
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/logging/__init__.py", line 1354, in shutdown
h.flush()
TypeError: flush() takes exactly 2 arguments (1 given)
I recently moved the project to another directory, but everything else works fine, so i don't know if that has anything to do with it ...
If i just start the development server and then shut it down immediately, i do not see the error. Only when i click around some in the browser and then shut down the server...
Can anyone point me in the right direction to sort this one out plz?
Thanks in advance.
| [
"It appears that you are using the Mac's default python install. I know this has been reputed to have odd issues from time to time. I would recommend install MacPython and installing Django into that python instance. \n"
] | [
2
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000583740_django_python.txt |
Q:
Database design of survey query system
I am working on a so-called Behavioral Risk Factor Surveillance System (BRFSS), a web query system dealing with questionnaires coming every year.
I had hard time in coming up with a suitable database design for it. Here is the problem: Each questionnaire contains about 80 questions, with demographic info, e.g. age, education, etc, and survey questions, e.g. smoking, health, etc. Every year, some questions change, some don't. Data source is an Excel file with 80+ columns. The system has to support queries like:
SELECT [question var], [demo var], count(*)
FROM survey
WHERE age in (...) AND educ in (...) [etc]
GROUP BY <question var>
The data is read only, ie. never change once imported. So it does not have to be normalized too much. Intuitively, a spreadsheet-like table will do a good job wrt. speed and space. This becomes a problem, though, because questions will change, then we can't keep all the data in this table, which is necessary because of cross year queries.
I tried normalize the responses into three tables: questions, responses, and response_values, which can support question variations. But then the response table spans over 98*14268 = 1,398,264 rows for one year! That's really huge. Query is slow like crazy!
How should I design the database? Any help is appreciated! Thanks in advance!
ps. I am using Python+Django+Sqlite.
A:
Have you checked DatabaseAnswers to see if there is a schema you could use as a starting point?
A:
Sounds like a case for a star schema.
You would have a (huge) fact table like this:
question_id, survey_id, age_group_id, health_classifier_id, is_smoking ... , answer_value
and denormalised dimension tables:
age_group:
group_name, min_age, max_age, age_group_id
1.4 million rows doesn't sound like much for a system like this.
Some databases have special features to support querying on this kind of schema:
On Oracle those would be:
'dimensions' for supporting aggragation allong dimensions
bitmap index for filtering on low cardinality attributes like age_group_id and is_smoking
bitmap joind index for filtering on low cardinality attributes in a joined table, i.e. selecting from the fact table but filtering on min_age in the age_group table.
partitioning tables to handle large tables
materialized views for precalculating aggregation results
There are also specialised db systems for this kind of data called multidimensional database.
check if there are similiar constructs for your database or consider switching the database engine
A:
you need at least 3 tables:
1) Questions which contains the text for each question, with autoincrement id key
eg: (123, "What is the colour of your hair?")
2) Questionaires, which map Q#'s onto questions.
eg) question #10 on questionaire #3 maps on to question #123.
3) Answers, which link each respondant with their questionaire and the data
eg) Bob's response to question #10 on questionaire #3 is "brown".
You should see how easy it is to add new questionaires using existing questions and adding new questions. Yes, there are going to be huge tables, but a good database engine should be able to handle 1M entries easily. You could use partitioning to make it more efficient, such as partition by year.
I'll leave it as an exercise on how to convert this into sql.
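As a rough starting point in SQLite, with made-up names:
import sqlite3

conn = sqlite3.connect("survey.db")
conn.executescript("""
    CREATE TABLE question (
        question_id INTEGER PRIMARY KEY AUTOINCREMENT,
        text        TEXT
    );
    -- maps Q# on a questionnaire onto a question
    CREATE TABLE questionnaire_question (
        questionnaire_id INTEGER,
        number           INTEGER,
        question_id      INTEGER REFERENCES question,
        PRIMARY KEY (questionnaire_id, number)
    );
    -- links each respondent with their questionnaire and the data
    CREATE TABLE answer (
        respondent_id    INTEGER,
        questionnaire_id INTEGER,
        number           INTEGER,
        value            TEXT
    );
""")

# Bob (respondent 7) answers question #10 on questionnaire #3 with "brown"
conn.execute("INSERT INTO answer VALUES (7, 3, 10, 'brown')")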
A:
I've been also thinking after my post on stackoverflow. Here is how I can use the denormalized wide table (80+ columns) to support question changing every year and also aggregate cross tabulation. Please comment on. Thanks
Create a new table for each year with the questions placed on columns
e.g.
id year age sex educ income ... smoke hiv drink ...
Create two tables, Question and Query_Year, plus a many-to-many table Question_Year. Then we can populate a list of questions that are available for a specified year, and vice versa.
Queries within one year are easy. And for queries across years, we can use a UNION operator. Since the questions should be compatible among the selected years, UNION is legitimate.
e.g
SELECT * FROM (
    SELECT id, <question var>, <demo var>, COUNT(*) FROM survey_2001
    UNION ALL
    SELECT id, <question var>, <demo var>, COUNT(*) FROM survey_2003
    UNION ALL
    SELECT id, <question var>, <demo var>, COUNT(*) FROM survey_2004
    UNION ALL
    etc etc
)
WHERE (
    AGE in (...) AND
    EDUC in (...) AND
    etc etc
)
GROUP BY <question var>, <demo var>
I suppose UNION is a relational operator which should not decrease the efficiency of an RDBMS. So it does not hurt if I combine many tables by UNION. The engine can also do some query analysis to boost the speed.
I think this one is adequate and simple enough. Please comment on. Thanks!
| Database design of survey query system | I am working on a so-called Behavioral Risk Factor Surveillance System (BRFSS), a web query system dealing with questionnaires coming every year.
I had hard time in coming up with a suitable database design for it. Here is the problem: Each questionnaire contains about 80 questions, with demographic info, e.g. age, education, etc, and survey questions, e.g. smoking, health, etc. Every year, some questions change, some don't. Data source is an Excel file with 80+ columns. The system has to support queries like:
SELECT [question var], [demo var], count(*)
FROM survey
WHERE age in (...) AND educ in (...) [etc]
GROUP BY <question var>
The data is read only, ie. never change once imported. So it does not have to be normalized too much. Intuitively, a spreadsheet-like table will do a good job wrt. speed and space. This becomes a problem, though, because questions will change, then we can't keep all the data in this table, which is necessary because of cross year queries.
I tried normalize the responses into three tables: questions, responses, and response_values, which can support question variations. But then the response table spans over 98*14268 = 1,398,264 rows for one year! That's really huge. Query is slow like crazy!
How should I design the database? Any help is appreciated! Thanks in advance!
ps. I am using Python+Django+Sqlite.
| [
"Have you checked DatabaseAnswers to see if there is a schema you could use as a starting point?\n",
"Sounds like a case for a star schema.\nYou would have a (huge) fact table like this:\nquestion_id, survey_id, age_group_id, health_classifier_id, is_smoking ... , answer_value\nand denormalised dimension tables:\nage_group:\ngroup_name, min_age, max_age, age_group_id\n1.4 million rows doesn't sound like much for a system like this. \nSome databases have special features to support querying on this kind of schema:\nOn Oracle those would be:\n\n'dimensions' for supporting aggragation allong dimensions\nbitmap index for filtering on low cardinality attributes like age_group_id and is_smoking\nbitmap joind index for filtering on low cardinality attributes in a joined table, i.e. selecting from the fact table but filtering on min_age in the age_group table.\npartitioning tables to handle large tables\nmaterialized views for precalculating aggregation results\n\nThere are also specialised db systems for this kind of data called multidimensional database.\ncheck if there are similiar constructs for your database or consider switching the database engine\n",
"you need at least 3 tables:\n1) Questions which contains the text for each question, with autoincrement id key\neg: (123, \"What is the colour of your hair?\")\n2) Questionaires, which map Q#'s onto questions.\neg) question #10 on questionaire #3 maps on to question #123.\n3) Answers, which link each respondant with their questionaire and the data\neg) Bob's response to question #10 on questionaire #3 is \"brown\".\nYou should see how easy it is to add new questionaires using existing questions and adding new questions. Yes, there are going to be huge tables, but a good database engine should be able to handle 1M entries easily. You could use partitioning to make it more efficient, such as partition by year.\nI'll leave it as an exercise on how to convert this into sql.\n",
"I've been also thinking after my post on stackoverflow. Here is how I can use the denormalized wide table (80+ columns) to support question changing every year and also aggregate cross tabulation. Please comment on. Thanks\n\nCreate a new table for each year with the questions placed on columns\ne.g. \nid year age sex educ income ... smoke hiv drink ...\nCreate two tables: Question and Query_Year, a many-to-many table Question_Year. Then we can populate a list of questions that are available for a specified year, and vice versa. \nQueries within one year is easy. And queries cross years, we can use a UNION operator. Since the questions should be compatible among the selected years, UNION is legitimate. \ne.g \nSELECT * FROM (\n SELECT id, , , COUNT() FROM survey_2001\n UNION ALL\n SELECT id, , , COUNT() FROM survey_2003\n UNION ALL\n SELECT id, , , COUNT(*) FROM survey_2004\n UNION ALL\n etc etc\n)\nWHERE (\n AGE in (...) AND\n EDUC in (...) AND\n etc etc\n)\nGROUP BY , \n\nI suppose UNION is a relational operator which should not decrease the efficiency of a RDBMS. So it does not hurt if I combine many tables by union. The engine can also do some query analysis to boost the speed. \nI think this one is adequate and simple enough. Please comment on. Thanks! \n"
] | [
6,
1,
1,
0
] | [] | [] | [
"database",
"django",
"python",
"sqlite"
] | stackoverflow_0000585006_database_django_python_sqlite.txt |
Q:
What are good ways to upload bulk .csv data into a webapp using Django/Python?
I have a very basic CSV file upload module working to bulk upload my user's data into my site. I process the CSV file in the backend with a python script that runs on crontab and then email the user the results of the bulk upload. This process works ok operationally, but my issue is with the format of the csv file.
Are there good tools or even basic rules on how to accept different formats of the csv file? The user may have a different order of data columns, slightly different names for the column headers (I want the email column to be entitled "Email", but it may say "Primary Email", "Email Address"), or missing additional data columns. Any good examples of CSV upload functionality that is very permissive and user friendly?
Also, how do I tell the user to export as CSV data? I'm importing address book information, so this data often comes from Outlook, Thunderbird, other software packages that have address books. Are there other popular data formats that I should accept?
A:
I'd check out Python's built-in csv module. Frankly a .replace() on your first row should cover your synonyms issue, and if you're using csv.DictReader you should be able to deal with missing columns very easily:
my_dict_reader = csv.DictReader(somecsvfile)
for row in my_dict_reader:
SomeDBModel.address2=row.get('address2', None)
assuming you wanted to store a None value for missing fields.
A:
You should force the first row to be the headers, make the user match up their headers to your field names on the next page, and remember that mapping for their future dumps.
Whenever I've done CSV imports, the data has really come from an Excel spreadsheet. I've been able to save time by using pyexcelerator to import the .xls directly. My .csv or .xls code is a generator that yields {'field_name':'data', ...} dictionaries that can be assigned to model objects.
If you're doing address data, you should accept vCard.
A:
I would handle the random column header mapping in your script once it's uploaded. It's hard to make a "catch all" that would handle whatever the users might enter. I would have it evolve as you go and slowly build a list of one-to-one relations based on what your users upload.
Or!
Check the column headers to make sure the file is properly formatted, and advise the user how to fix it if it is not.
"Primary Email" not recognized, our
schema is "Email", "Address", "Phone",
etc.
You could also accept XML and this would allow you to create your own schema that they would have to adhere to. Check out this tutorial.
A:
Take a look at this project: django-batchimport
It might be overkill for you, but it can still give you some good ideas on improving your own code.
Edit: also, ignore that it is only using xlrd for importing Excel. The base concepts are the same, just that you will use the csv module instead of xlrd.
A:
If you copy an Excel table to the clipboard and then paste the result into Notepad, you'll notice that it's tab-separated. I once used this to do bulk imports from most table editors, by copy-pasting data from the editor into a textarea on an HTML page.
You can use a background image for the textarea as a hint for the number of columns, and place your headers at the top to suggest the order to the user.
JavaScript processes the pasted data and displays it to the user immediately, with simple pre-validation making it easy to fix an error and repaste.
When the import button is clicked, the data is validated again and the import results are displayed.
Unfortunately, I've never heard any feedback about whether this was easy to use or not.
Anyway, I still see it as an option when implementing bulk import.
A:
Look at the csv module from the stdlib. It contains presets for popular CSV dialects, like the one produced by Excel.
The reader classes support field mapping, and if the file contains a column header row, the import does not depend on column order. For more complex logic, like looking up several alternative names for a field, you'll need to write your own implementation.
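For example, a small sketch of that lookup; the synonym lists here are invented:
import csv

SYNONYMS = {
    'email': ('email', 'primary email', 'email address'),
    'phone': ('phone', 'phone number', 'primary phone'),
}

def canonical(header):
    """Map a user-supplied column header onto our field name, if we know it."""
    h = header.strip().lower()
    for field, names in SYNONYMS.items():
        if h in names:
            return field
    return None    # unknown column: ignore it

reader = csv.reader(open('upload.csv', 'rb'))
fields = [canonical(h) for h in reader.next()]
for row in reader:
    record = dict((f, v) for f, v in zip(fields, row) if f is not None)
    # 'record' now holds only recognised fields, whatever the column order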
| What are good ways to upload bulk .csv data into a webapp using Django/Python? | I have a very basic CSV file upload module working to bulk upload my user's data into my site. I process the CSV file in the backend with a python script that runs on crontab and then email the user the results of the bulk upload. This process works ok operationally, but my issue is with the format of the csv file.
Are there good tools or even basic rules on how to accept different formats of the csv file? The user may have a different order of data columns, slightly different names for the column headers (I want the email column to be entitled "Email", but it may say "Primary Email", "Email Address"), or missing additional data columns. Any good examples of CSV upload functionality that is very permissive and user friendly?
Also, how do I tell the user to export as CSV data? I'm importing address book information, so this data often comes from Outlook, Thunderbird, other software packages that have address books. Are there other popular data formats that I should accept?
| [
"I'd check out Python's built-in csv module. Frankly a .replace() on your first row should cover your synonyms issue, and if you're using csv.DictReader you should be able to deal with missing columns very easily:\nmy_dict_reader = csv.DictReader(somecsvfile)\nfor row in my_dict_reader:\n SomeDBModel.address2=row.get('address2', None)\n\nassuming you wanted to store a None value for missing fields.\n",
"You should force the first row to be the headers, make the user match up their headers to your field names on the next page, and remember that mapping for their future dumps.\nWhenever I do CSV imports the data really came from an Excel spreadsheet. I've been able to save time by using pyexcelerator to import the .xls directly. My .csv or .xls code is a generator that yields {'field_name':'data', ...} dictionaries that can be assigned to model objects.\nIf you're doing address data, you should accept vCard.\n",
"I would handle the random column header mapping in your script once it's uploaded. It's hard to make a \"catch all\" that would handle whatever the users might enter. I would have it evolve as you go and slowly build a list of one-one relations based on what your user uploads.\nOr!\nCheck the column headers and make sure it's properly formatted and advise them how to fix it if it is not.\n\n\"Primary Email\" not recognized, our\n schema is \"Email\", \"Address\", \"Phone\",\n etc.\n\nYou could also accept XML and this would allow you to create your own schema that they would have to adhere to. Check out this tutorial.\n",
"Take a look at this project: django-batchimport\nIt might be overkill for you, but it can still give you some good ideas on improving your own code.\nEdit: also, ignore that it is only using xlrd for importing Excel. The base concepts are the same, just that you will use the csv module instead of xlrd.\n",
"If you'll copy excel table into clipboard and then paste results into notepad, you'll notice that it's tab separated. I once used it to make bulk import from most of table editors by copy-pasting data from the editor into textarea on html page.\nYou can use a background for textarea as a hint for number of columns and place your headers at the top suggesting the order for a user. \nJavascript will process pasted data and display them to the user immediately with simple prevalidation making it easy to fix an error and repaste.\nThen import button is clicked, data is validated again and import results are displayed.\nUnfortunately, I've never heard any feedback about whenever this was easy to use or not.\nAnyway, I still see it as an option when implementing bulk import.\n",
"Look at csv module from stdlib. It contains presets for popualr CSV dialects like one produced by Excel.\nReader class support field mapping and if file contains column header it coes not depend on column order. For more complex logic, like looking up several alternative names for a field, you'll need to write your own implementation.\n"
] | [
4,
3,
1,
1,
1,
1
] | [] | [] | [
"csv",
"django",
"django_models",
"jquery",
"python"
] | stackoverflow_0000586517_csv_django_django_models_jquery_python.txt |
Q:
How to produce a colored GUI in a console application?
For the following questions, answers may be for C/C++, C#, or Python. I would like the answers to be cross platform if possible but I realize I will probably need conio or ncurses
How do I output colored text?
How would I do a GUI like top or nethack where certain things are "drawn" to certain spaces in the terminal?
If possible a small oneliner code example would be great.
A:
Yes, these are VT100 escape codes. The simplest thing is to use some flavor of curses. Once you choose a curses flavor, it is pretty simple to do both 1 and 2.
Here's a HowTo on ncurses.
http://web.cs.mun.ca/~rod/ncurses/ncurses.html
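For instance, a minimal Python curses program that covers both points (colored text drawn at a fixed position):
import curses

def main(stdscr):
    curses.start_color()
    curses.init_pair(1, curses.COLOR_RED, curses.COLOR_BLACK)  # pair 1 = red on black
    stdscr.addstr(5, 10, "hello in red", curses.color_pair(1)) # row 5, column 10
    stdscr.refresh()
    stdscr.getch()  # wait for a keypress before tearing the screen down

curses.wrapper(main)  # sets up and restores the terminal for us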
A:
Most terminal windows understand the ANSI escape sequences, which allow coloring, cursor movement etc. You can find a list of them here.
Use of these sequences can seem a bit "old school", but you can use them in cases where curses isn't really applicable. For example, I use the following function in my bash scripts to display error messages in red:
color_red()
{
echo -e "\033[01;31m$1\033[00m"
}
You can then say things like:
color_red "something has gone horribly wrong!"
exit 1
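The Python equivalent is essentially a one-liner (assuming the terminal honours ANSI codes):
print "\033[01;31m" + "something has gone horribly wrong!" + "\033[00m"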
A:
From this point of view, the console is in many ways just an emulation of a classic terminal device. Curses was created originally to support a way of doing common operations on different terminal types, where the actual terminal in use could be selected by the user as part of the login sequence. That heritage survives today in ncurses.
The ncurses library provides functions to call to directly position the cursor and emit text, and it is known to work for the Windows Console (where CMD.EXE runs), as well as on various *nix platform equivalents such as XTerms and the like. It probably even works with a true DEC VT100 over a serial line if you had such a thing...
The escape sequences understood by the VT100 and later models became the basis for the ANSI standard terminal. But you really don't want to have to know about that. Use ncurses and you won't have to.
Leaning on conio won't get you cross platform, as that is a DOS/Windows specific API.
Edit: Apparently the ncurses library itself is not easily built on mingw, at least as observed from a quick attempt to Google it up. However, all is not lost, as ncurses is only one of the descendants of the original curses library.
Another is PDCurses which is known to compile and run for Windows Consoles, as well as for X11 and a variety of *nix platforms.
(I was just reminded from chasing references at Wikipedia that curses came out of writing the game rogue, which is the ancestor of nethack. Some of its code was "borrowed" from the cursor management module of the vi editor, as well. So spelunking in the nethack source kit for ideas may not be a crazy idea at all...)
A:
Not cross-platform, but for Windows / C# colour, see
Color your Console text (C#)
(a corresponding article exists for C++)
A:
In C#, you can set the text color and the background color via the Console.ForegroundColor and Console.BackgroundColor properties, respectively. For a list of valid colors, see this MSDN doc.
| How to produce a colored GUI in a console application? | For the following questions, answers may be for C/C++, C#, or Python. I would like the answers to be cross platform if possible but I realize I will probably need conio or ncurses
How do I output colored text?
How would I do a GUI like top or nethack where certain things are "drawn" to certain spaces in the terminal?
If possible a small oneliner code example would be great.
| [
"Yes, these are VT100 escape codes. The simplest thing is to use some flavor of Curses. Once, you choose a curses flavor it is pretty simple to do both 1 and 2.\nHere's a HowTo on ncurses.\nhttp://web.cs.mun.ca/~rod/ncurses/ncurses.html\n",
"Most terminal windows understand the ANSI escape sequences, which allow coloring, cursor movement etc. You can find a list of them here.\nUse of these sequences can seem a bit \"old school\", but you can use them in cases where curses isn't really applicable. For example, I use the folowing function in my bash scripts to display error messages in red:\ncolor_red()\n{\n echo -e \"\\033[01;31m$1\\033[00m\"\n}\n\nYou can then say things like:\ncolor_red \"something has gone horribly wrong!\"\nexit 1\n\n",
"From this point of view, the console is in many ways just an emulation of a classic terminal device. Curses was created originally to support a way of doing common operations on different terminal types, where the actual terminal in use could be selected by the user as part of the login sequence. That heritage survives today in ncurses. \nThe ncurses library provides functions to call to directly position the cursor and emit text, and it is known to work for the Windows Console (where CMD.EXE runs), as well as on various *nix platform equivalents such as XTerms and the like. It probably even works with a true Dec VT100 over a serial line if you had such a thing...\nThe escape sequences understood by the VT100 and later models became the basis for the ANSI standard terminal. But you really don't want to have to know about that. Use ncurses and you won't have to.\nLeaning on conio won't get you cross platform, as that is a DOS/Windows specific API.\nEdit: Apparently the ncurses library itself is not easily built on mingw, at least as observed from a quick attempt to Google it up. However, all is not lost, as ncurses is only one of the descendants of the original curses library. \nAnother is PDCurses which is known to compile and run for Windows Consoles, as well as for X11 and a variety of *nix platforms.\n(I was just reminded from chasing references at Wikipedia that curses came out of writing the game rogue, which is the ancestor of nethack. Some of its code was \"borrowed\" from the cursor management module of the vi editor, as well. So spelunking in the nethack source kit for ideas may not be a crazy idea at all...)\n",
"Not cross platform but for Windows / C# colour, see\nColor your Console text (C#)\nc++\n",
"In C#, you can set the text color and the background color via the Console.ForegroundColor and Console.BackgroundColor properties, respectively. For a list of valid colors, see this MSDN doc.\n"
] | [
4,
1,
1,
0,
0
] | [] | [] | [
"c",
"c#",
"c++",
"console_application",
"python"
] | stackoverflow_0000588622_c_c#_c++_console_application_python.txt |
Q:
Python persistent Popen
Is there a way to do multiple calls in the same "session" in Popen? For instance, can I make a call through it and then another one after it without having to concatenate the commands into one long string?
A:
You're not "making a call" when you use popen, you're running an executable and talking to it over stdin, stdout, and stderr. If the executable has some way of doing a "session" of work (for instance, by reading lines from stdin) then, yes, you can do it. Otherwise, you'll need to exec multiple times.
subprocess.Popen is (mostly) just a wrapper around execvp(3)
A:
Assuming you want to be able to run a shell and send it multiple commands (and read their output), it appears you can do something like this:
from subprocess import *
p = Popen(['/bin/sh'], shell=False, stdin=PIPE, stdout=PIPE, stderr=PIPE)
After which, e.g.,:
>>> p.stdin.write("cat /etc/motd\n")
>>> p.stdout.readline()
'Welcome to dev-linux.mongo.com.\n'
(Of course, you should check stderr too, or else ask Popen to merge it with stdout). One major problem with the above is that the stdin and stdout pipes are in blocking mode, so it's easy to get "stuck" waiting forever for output from the shell. Although I haven't tried it, there's a recipe at the ActiveState site that shows how to address this.
Update: after looking at the related questions/answers, it looks like it might be simpler to just use Python's built-in select module to see if there's data to read on stdout (you should also do the same for stderr, of course), e.g.:
>>> select.select([p.stdout], [], [], 0)
([<open file '<fdopen>', mode 'rb' at 0x10341690>], [], [])
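Putting that together, a rough polling helper might look like this (Unix-only, since select on pipes doesn't work on Windows):
import select

def read_available(p):
    """Drain whatever the child has written so far, without blocking."""
    chunks = []
    while select.select([p.stdout], [], [], 0)[0]:
        c = p.stdout.read(1)   # one byte at a time, so we never block mid-call
        if not c:              # EOF: the child closed its end of the pipe
            break
        chunks.append(c)
    return ''.join(chunks)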
A:
For instance, can I make a call through it and then another one after it without having to concatenate the commands into one long string?
Sounds like you're using shell=True. Don't, unless you need to. Instead use shell=False (the default) and pass in a command/arg list.
Is there a way to do multiple calls in the same "session" in Popen? For instance, can I make a call through it and then another one after it without having to concatenate the commands into one long string?
Any reason you can't just create two Popen instances and wait/communicate on each as necessary? That's the normal way to do it, if I understand you correctly.
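i.e. something along these lines:
from subprocess import Popen, PIPE

p1 = Popen(['ls', '-l'], stdout=PIPE)
out1, _ = p1.communicate()        # waits for the first command to finish

p2 = Popen(['du', '-sh', '.'], stdout=PIPE)
out2, _ = p2.communicate()        # then run the second one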
| Python persistent Popen | Is there a way to do multiple calls in the same "session" in Popen? For instance, can I make a call through it and then another one after it without having to concatenate the commands into one long string?
| [
"You're not \"making a call\" when you use popen, you're running an executable and talking to it over stdin, stdout, and stderr. If the executable has some way of doing a \"session\" of work (for instance, by reading lines from stdin) then, yes, you can do it. Otherwise, you'll need to exec multiple times.\nsubprocess.Popen is (mostly) just a wrapper around execvp(3)\n",
"Assuming you want to be able to run a shell and send it multiple commands (and read their output), it appears you can do something like this:\nfrom subprocess import *\np = Popen(['/bin/sh'], shell=False, stdin=PIPE, stdout=PIPE, stderr=PIPE)\n\nAfter which, e.g.,:\n>>> p.stdin.write(\"cat /etc/motd\\n\")\n>>> p.stdout.readline()\n'Welcome to dev-linux.mongo.com.\\n'\n\n(Of course, you should check stderr too, or else ask Popen to merge it with stdout). One major problem with the above is that the stdin and stdout pipes are in blocking mode, so it's easy to get \"stuck\" waiting forever for output from the shell. Although I haven't tried it, there's a recipe at the ActiveState site that shows how to address this.\nUpdate: after looking at the related questions/answers, it looks like it might be simpler to just use Python's built-in select module to see if there's data to read on stdout (you should also do the same for stderr, of course), e.g.:\n>>> select.select([p.stdout], [], [], 0)\n([<open file '<fdopen>', mode 'rb' at 0x10341690>], [], [])\n\n",
"\nFor instance, can I make a call through it and then another one after it without having to concatenate the commands into one long string?\n\nSounds like you're using shell=True. Don't, unless you need to. Instead use shell=False (the default) and pass in a command/arg list.\n\nIs there a way to do multiple calls in the same \"session\" in Popen? For instance, can I make a call through it and then another one after it without having to concatenate the commands into one long string?\n\nAny reason you can't just create two Popen instances and wait/communicate on each as necessary? That's the normal way to do it, if I understand you correctly.\n"
] | [
3,
1,
0
] | [] | [] | [
"popen",
"python",
"subprocess"
] | stackoverflow_0000589093_popen_python_subprocess.txt |
Q:
What's the fastest way to test the validity of a large number of well-formed URLs
My project requires me to validate a large number of web URLs. These URLs have been captured by a very unreliable process which I do not control. All of the URLs have already been regexp validated and are known to be well-formed. I also know that they all have valid TLDs
I want to be able to filter these URLs quickly in order to determine which of these are incorrect. At this point I do not care what content is on the pages - I'd just like to know as quickly as possible which of the pages are inaccessible (e.g. produce a 404 error).
Given that there are a lot of these I do not want to download the entire page, just the HTTP header and then take a good guess from the content of the header whether the page is likely to exist.
Can it be done?
A:
To really make this fast you might also use eventlet which uses non-blocking IO to speed things up.
You can use a head request like this:
from eventlet import httpc
try:
res = httpc.head(url)
except httpc.NotFound:
# handle 404
You can then put this into some simple script like the example script here. With that you should get pretty good concurrency by using a coroutine pool.
A:
I'm assuming you want to do it in Python based on your tags. In that case, I'd use httplib. Optionally, somehow group the URLs by host so you can make multiple requests in one connection for those URLs that have the same host. Use the HEAD request.
conn = httplib.HTTPConnection("example.com")
conn.request("HEAD", "/index.html")
resp = conn.getresponse()
print resp.status
A:
Using httplib and urlparse:
def checkURL(url):
import httplib
import urlparse
protocol, host, path, query, fragment = urlparse.urlsplit(url)
if protocol == "http":
conntype = httplib.HTTPConnection
elif protocol == "https":
conntype = httplib.HTTPSConnection
else:
raise ValueError("unsupported protocol: " + protocol)
conn = conntype(host)
conn.request("HEAD", path)
resp = conn.getresponse()
conn.close()
if resp.status < 400:
        return True
    return False
A:
Just send HTTP HEAD requests as shown in the accepted answer to this question.
A:
Instead of sending an HTTP GET request for each URL you can try sending an HTTP HEAD request. They are described in this document.
A:
This is a trivial case for twisted. There are a couple of concurrency tools you can use to slow it down, otherwise, it'll pretty much do it all at once.
Twisted is definitely my favorite thing about python. :)
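A rough sketch of what that could look like (error handling kept minimal; getPage passes the method keyword through to its HTTP client factory):
from twisted.internet import reactor
from twisted.internet.defer import DeferredList
from twisted.web.client import getPage

urls = ['http://example.com/', 'http://example.org/missing']

def report(url, alive):
    print url, 'alive' if alive else 'dead'

def check(url):
    d = getPage(url, method='HEAD')  # HEAD request, so no body is fetched
    d.addCallbacks(lambda _: report(url, True),
                   lambda _: report(url, False))
    return d

DeferredList([check(u) for u in urls]).addCallback(lambda _: reactor.stop())
reactor.run()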
A:
This might help you get started. The file sitelist.txt contains a list of URIs. You might have to install httplib2; it's highly recommended. I put a sleep between each request, so if you have many URIs on the same site your client will not be blacklisted for abusing resources.
import httplib2
import time
h = httplib2.Http(".cache")
f = open("sitelist.txt", "r")
urllist = f.readlines()
f.close()
for url in urllist:
# wait 10 seconds before the next request - be nice with the site
time.sleep(10)
resp= {}
urlrequest = url.strip()
try:
resp, content = h.request(urlrequest, "HEAD")
if resp['status'] == "200":
print url, "200 - Good"
else:
print url, resp['status'], " you might want to double check"
    except Exception:
        pass  # swallow network errors so one bad URL doesn't stop the whole run
A:
A Python program which does similar work (for a list of URLs stored at del.icio.us) is disastrous.
And, yes, it uses HEAD and not GET, but do note that some (non-standards-compliant) servers send different results for HEAD and for GET: the Python environment Zope is a typical culprit. (Also, in some cases, network problems, for instance tunnels plus broken firewalls which block ICMP, prevent big packets from getting through, so HEAD works and GET does not.)
| What's the fastest way to test the validity of a large number of well-formed URLs | My project requires me to validate a large number of web URLs. These URLs have been captured by a very unreliable process which I do not control. All of the URLs have already been regexp validated and are known to be well-formed. I also know that they all have valid TLDs
I want to be able to filter these URLs quickly in order to determine which of these are incorrect. At this point I do not care what content is on the pages - I'd just like to know as quickly as possible which of the pages are inaccessible (e.g. produce a 404 error).
Given that there are a lot of these I do not want to download the entire page, just the HTTP header and then take a good guess from the content of the header whether the page is likely to exist.
Can it be done?
| [
"To really make this fast you might also use eventlet which uses non-blocking IO to speed things up.\nYou can use a head request like this:\nfrom eventlet import httpc\ntry:\n res = httpc.head(url)\nexcept httpc.NotFound:\n # handle 404\n\nYou can then put this into some simple script like that example script here. With that you should get pretty much concurrency by using a coroutines pool.\n",
"I'm assuming you want to do it in Python based on your tags. In that case, I'd use httplib. Optionally, somehow group the URLs by host so you can make multiple requests in one connection for those URLs that have the same host. Use the HEAD request.\nconn = httplib.HTTPConnection(\"example.com\")\nconn.request(\"HEAD\", \"/index.html\")\nresp = conn.getresponse()\nprint resp.status\n\n",
"Using httplib and urlparse:\ndef checkURL(url):\n import httplib\n import urlparse\n\n protocol, host, path, query, fragment = urlparse.urlsplit(url)\n\n if protocol == \"http\":\n conntype = httplib.HTTPConnection\n elif protocol == \"https\":\n conntype = httplib.HTTPSConnection\n else:\n raise ValueError(\"unsupported protocol: \" + protocol)\n\n conn = conntype(host)\n conn.request(\"HEAD\", path)\n resp = conn.getresponse()\n conn.close()\n\n if resp.status < 400:\n return true\n\n return false\n\n",
"Just send HTTP HEAD requests as shown in the accepted answer to this question.\n",
"Instead of sending an HTTP GET request for each URL you can try sending an HTTP HEAD request. They are described in this document.\n",
"This is a trivial case for twisted. There are a couple of concurrency tools you can use to slow it down, otherwise, it'll pretty much do it all at once.\nTwisted is definitely my favorite thing about python. :)\n",
"This might help you to start. The file sitelist.txt contains a list of URIs. You might have to install httplib2, highly recommended. I put a sleep between each request so if you have many URIs on the same site, your client will not be blacklisted for abusing resources.\n import httplib2\n import time\n\n h = httplib2.Http(\".cache\")\n\n f = open(\"sitelist.txt\", \"r\")\n urllist = f.readlines()\n f.close()\n\n for url in urllist:\n # wait 10 seconds before the next request - be nice with the site\n time.sleep(10)\n resp= {}\n urlrequest = url.strip()\n try:\n resp, content = h.request(urlrequest, \"HEAD\")\n if resp['status'] == \"200\":\n print url, \"200 - Good\"\n else:\n print url, resp['status'], \" you might want to double check\"\n except:\n pass\n\n",
"A Python program which does a similar work (for a list of URL stored at del.icio.us) is disastrous. \nAnd, yes, it uses HEAD and not GET but do note some (not HTTP standard) servers send different results for HEAD and for GET: the Python environment Zope is a typical culprit.(Also, in some case, network problems, for instance tunnels + broken firewalls which block ICMP, prevent big packets to get through so HEAD works and not GET.)\n"
] | [
8,
6,
4,
3,
1,
0,
0,
0
] | [] | [] | [
"http",
"python"
] | stackoverflow_0000563384_http_python.txt |
Q:
Python or IronPython
How does IronPython stack up to the default Windows implementation of Python from python.org? If I am learning Python, will I be learning a subtley different language with IronPython, and what libraries would I be doing without?
Are there, alternatively, any pros to IronPython (not including .NET IL compiled classes) that would make it more attractive an option?
A:
There are a number of important differences:
Interoperability with other .NET languages. You can use other .NET libraries from an IronPython application, or use IronPython from a C# application, for example. This interoperability is increasing, with a movement toward greater support for dynamic types in .NET 4.0. For a lot of detail on this, see these two presentations at PDC 2008.
Better concurrency/multi-core support, due to lack of a GIL. (Note that the GIL doesn't inhibit threading on a single-core machine---it only limits performance on multi-core machines.)
Limited ability to consume Python C extensions. The Ironclad project is making significant strides toward improving this---they've nearly gotten Numpy working!
Less cross-platform support; basically, you've got the CLR and Mono. Mono is impressive, though, and runs on many platforms---and they've got an implementation of Silverlight, called Moonlight.
Reports of improved performance, although I have not looked into this carefully.
Feature lag: since CPython is the reference Python implementation, it has the "latest and greatest" Python features, whereas IronPython necessarily lags behind. Many people do not find this to be a problem.
A:
There are some subtle differences in how you write your code, but the biggest difference is in the libraries you have available.
With IronPython, you have all the .Net libraries available, but at the expense of some of the "normal" python libraries that haven't been ported to the .Net VM I think.
Basically, you should expect the syntax and the idioms to be the same, but a script written for IronPython wont run if you try giving it to the "regular" Python interpreter. The other way around is probably more likely, but there too you will find differences I think.
A:
Well, it's generally faster.
Can't use C extension modules, and only has a subset of the standard library.
Here's a list of differences.
A:
See the blog post IronPython is a one-way gate. It summarizes some things I've learned about IronPython from asking questions on StackOverflow.
A:
Python is Python, the only difference is that IronPython was designed to run on the CLR (.NET Framework), and as such, can inter-operate and consume .NET assemblies written in other .NET languages. So if your platform is Windows and you also use .NET or your company does then should consider IronPython.
A:
One of the pros of IronPython is that, unlike CPython, IronPython doesn't use the Global Interpreter Lock, thus making threading more effective.
In the standard Python implementation, threads grab the GIL on each object access. This limits parallel execution, which matters especially if you expect to fully utilize multiple CPUs.
A:
Pro: You can run IronPython in a browser if SilverLight is installed.
A:
It also depends on whether you want your code to work on Linux. Dunno if IronPython will work on anything besides Windows platforms.
| Python or IronPython | How does IronPython stack up to the default Windows implementation of Python from python.org? If I am learning Python, will I be learning a subtley different language with IronPython, and what libraries would I be doing without?
Are there, alternatively, any pros to IronPython (not including .NET IL compiled classes) that would make it more attractive an option?
| [
"There are a number of important differences:\n\nInteroperability with other .NET languages. You can use other .NET libraries from an IronPython application, or use IronPython from a C# application, for example. This interoperability is increasing, with a movement toward greater support for dynamic types in .NET 4.0. For a lot of detail on this, see these two presentations at PDC 2008.\nBetter concurrency/multi-core support, due to lack of a GIL. (Note that the GIL doesn't inhibit threading on a single-core machine---it only limits performance on multi-core machines.)\nLimited ability to consume Python C extensions. The Ironclad project is making significant strides toward improving this---they've nearly gotten Numpy working!\nLess cross-platform support; basically, you've got the CLR and Mono. Mono is impressive, though, and runs on many platforms---and they've got an implementation of Silverlight, called Moonlight.\nReports of improved performance, although I have not looked into this carefully.\nFeature lag: since CPython is the reference Python implementation, it has the \"latest and greatest\" Python features, whereas IronPython necessarily lags behind. Many people do not find this to be a problem.\n\n",
"There are some subtle differences in how you write your code, but the biggest difference is in the libraries you have available.\nWith IronPython, you have all the .Net libraries available, but at the expense of some of the \"normal\" python libraries that haven't been ported to the .Net VM I think.\nBasically, you should expect the syntax and the idioms to be the same, but a script written for IronPython wont run if you try giving it to the \"regular\" Python interpreter. The other way around is probably more likely, but there too you will find differences I think.\n",
"Well, it's generally faster.\nCan't use modules, and only has a subset of the library.\nHere's a list of differences.\n",
"See the blog post IronPython is a one-way gate. It summarizes some things I've learned about IronPython from asking questions on StackOverflow.\n",
"Python is Python, the only difference is that IronPython was designed to run on the CLR (.NET Framework), and as such, can inter-operate and consume .NET assemblies written in other .NET languages. So if your platform is Windows and you also use .NET or your company does then should consider IronPython.\n",
"One of the pros of IronPython is that, unlike CPython, IronPython doesn't use the Global Interpreter Lock, thus making threading more effective. \nIn the standard Python implementation, threads grab the GIL on each object access. This limits parallel execution, which matters especially if you expect to fully utilize multiple CPUs.\n",
"Pro: You can run IronPython in a browser if SilverLight is installed.\n",
"It also depends on whether you want your code to work on Linux. Dunno if IronPython will work on anything beside windows platforms.\n"
] | [
32,
13,
5,
3,
2,
2,
2,
0
] | [] | [] | [
"cpython",
"ironpython",
"python"
] | stackoverflow_0000590007_cpython_ironpython_python.txt |
Q:
python variables not accepting names
I'm trying to declare a few simple variables as part of a function in a very basic collision detection programme. For some reason it's rejecting my variables (although only some of them even though they're near identical). Here's the code for the function;
def TimeCheck():
timechecknumber = int(time.time())
timecheckdiv = backcolourcheck % 5
if timecheckdiv < 1:
timecheck = true
else:
timecheck = false
if timecheck == true:
backgroundr = (int(random.random()*255)+1
backgroundg = (int(random.random()*255)+1
backgroundb = (int(random.random()*255)+1
for some reason it accepts backgroundr but not backgroundg, anyone got any ideas why? thanks
A:
You have mismatched parentheses on the line beginning with backgroundr. I think maybe you want this:
backgroundr = int(random.random() * 255) + 1
Note that each of the next two lines also have mismatched parentheses, so you'll have to fix those, too.
A:
An earlier revision of mipadi's answer cast to int before multiplying, which would always yield 1. You need to multiply by 255 before you cast to int. Try this.
backgroundr = int(random.random() * 255) + 1
| python variables not accepting names | I'm trying to declare a few simple variables as part of a function in a very basic collision detection programme. For some reason it's rejecting my variables (although only some of them even though they're near identical). Here's the code for the function;
def TimeCheck():
timechecknumber = int(time.time())
timecheckdiv = backcolourcheck % 5
if timecheckdiv < 1:
timecheck = true
else:
timecheck = false
if timecheck == true:
backgroundr = (int(random.random()*255)+1
backgroundg = (int(random.random()*255)+1
backgroundb = (int(random.random()*255)+1
for some reason it accepts backgroundr but not backgroundg, anyone got any ideas why? thanks
| [
"You have mismatched parentheses on the line beginning with backgroundr. I think maybe you want this:\nbackgroundr = int(random.random() * 255) + 1\n\nNote that each of the next two lines also have mismatched parentheses, so you'll have to fix those, too.\n",
"mipadi's answer will always yield a 1. You need to multiply by 255 before you cast to int. Try this.\nbackgroundr = int(random.random() * 255) + 1\n\n"
] | [
8,
2
] | [] | [] | [
"python",
"syntax_error",
"variables"
] | stackoverflow_0000591421_python_syntax_error_variables.txt |
Q:
Python Win32 - DriveInfo On Mapped Drive
Does anyone know how I can determine the server and share name of a mapped network drive?
For example:
import win32file, win32api
for logDrive in win32api.GetLogicalDriveStrings().split("\x00"):
if win32file.GetDriveType(logDrive) != win32file.DRIVE_REMOTE: continue
# get server and share name here
Is there a handy api call for this?
Thanks.
A:
You'll have to call the win32 API: WNetGetUniversalName
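pywin32 exposes it as win32wnet.WNetGetUniversalName; a rough sketch plugged into your loop (level 1 asks for the \\server\share form):
import win32file, win32api, win32wnet

for logDrive in win32api.GetLogicalDriveStrings().split("\x00"):
    if not logDrive: continue   # the split leaves a trailing empty string
    if win32file.GetDriveType(logDrive) != win32file.DRIVE_REMOTE: continue
    print logDrive, "->", win32wnet.WNetGetUniversalName(logDrive, 1)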
| Python Win32 - DriveInfo On Mapped Drive | Does anyone know how I can determine the server and share name of a mapped network drive?
For example:
import win32file, win32api
for logDrive in win32api.GetLogicalDriveStrings().split("\x00"):
if win32file.GetDriveType(logDrive) != win32file.DRIVE_REMOTE: continue
# get server and share name here
Is there a handy api call for this?
Thanks.
| [
"You'll have to call the win32 API: WNetGetUniversalName\n"
] | [
2
] | [] | [] | [
"python",
"winapi"
] | stackoverflow_0000591443_python_winapi.txt |
Q:
pygame function appears to be being ignored
I'm building a relatively simple programme to test collision detection, it's all working fine at the moment except one thing, I'm trying to make the background colour change randomly, the only issue is that it appears to be completely skipping the function to do this;
import pygame
from pygame.locals import *
import random, math, time, sys
pygame.init()
Surface = pygame.display.set_mode((800,600))
backgroundr = int(random.random()*255)+1
backgroundg = int(random.random()*255)+1
backgroundb = int(random.random()*255)+1
Circles = []
class Circle:
def __init__(self):
self.radius = int(random.random()*50) + 1
self.x = random.randint(self.radius, 800-self.radius)
self.y = random.randint(self.radius, 600-self.radius)
self.speedx = 0.5*(random.random()+1.0)
self.speedy = 0.5*(random.random()+1.0)
self.r = int(random.random()*255)+1
self.g = int(random.random()*255)+1
self.b = int(random.random()*255)+1
## self.mass = math.sqrt(self.radius)
for x in range(int(random.random()*30) + 1):
Circles.append(Circle())
def CircleCollide(C1,C2):
C1Speed = math.sqrt((C1.speedx**2)+(C1.speedy**2))
XDiff = -(C1.x-C2.x)
YDiff = -(C1.y-C2.y)
if XDiff > 0:
if YDiff > 0:
Angle = math.degrees(math.atan(YDiff/XDiff))
XSpeed = -C1Speed*math.cos(math.radians(Angle))
YSpeed = -C1Speed*math.sin(math.radians(Angle))
elif YDiff < 0:
Angle = math.degrees(math.atan(YDiff/XDiff))
XSpeed = -C1Speed*math.cos(math.radians(Angle))
YSpeed = -C1Speed*math.sin(math.radians(Angle))
elif XDiff < 0:
if YDiff > 0:
Angle = 180 + math.degrees(math.atan(YDiff/XDiff))
XSpeed = -C1Speed*math.cos(math.radians(Angle))
YSpeed = -C1Speed*math.sin(math.radians(Angle))
elif YDiff < 0:
Angle = -180 + math.degrees(math.atan(YDiff/XDiff))
XSpeed = -C1Speed*math.cos(math.radians(Angle))
YSpeed = -C1Speed*math.sin(math.radians(Angle))
elif XDiff == 0:
if YDiff > 0:
Angle = -90
else:
Angle = 90
XSpeed = C1Speed*math.cos(math.radians(Angle))
YSpeed = C1Speed*math.sin(math.radians(Angle))
elif YDiff == 0:
if XDiff < 0:
Angle = 0
else:
Angle = 180
XSpeed = C1Speed*math.cos(math.radians(Angle))
YSpeed = C1Speed*math.sin(math.radians(Angle))
C1.speedx = XSpeed
C1.speedy = YSpeed
C1.r = int(random.random()*255)+1
C1.g = int(random.random()*255)+1
C1.b = int(random.random()*255)+1
C2.r = int(random.random()*255)+1
C2.g = int(random.random()*255)+1
C2.b = int(random.random()*255)+1
def ColourCheck():
checknumber = int(random.random()*50)+1
if checknumber == 50:
backgroundr = int(random.random()*255)+1
backgroundg = int(random.random()*255)+1
backgroundb = int(random.random()*255)+1
def Move():
for Circle in Circles:
Circle.x += Circle.speedx
Circle.y += Circle.speedy
def CollisionDetect():
for Circle in Circles:
if Circle.x < Circle.radius or Circle.x > 800-Circle.radius:
Circle.speedx *= -1
Circle.r = int(random.random()*255)+1
Circle.g = int(random.random()*255)+1
Circle.b = int(random.random()*255)+1
if Circle.y < Circle.radius or Circle.y > 600-Circle.radius:
Circle.speedy *= -1
Circle.r = int(random.random()*255)+1
Circle.g = int(random.random()*255)+1
Circle.b = int(random.random()*255)+1
for Circle in Circles:
for Circle2 in Circles:
if Circle != Circle2:
if math.sqrt( ((Circle.x-Circle2.x)**2) + ((Circle.y-Circle2.y)**2) ) <= (Circle.radius+Circle2.radius):
CircleCollide(Circle,Circle2)
def Draw():
Surface.fill((backgroundr,backgroundg,backgroundb))
for Circle in Circles:
pygame.draw.circle(Surface,(Circle.r,Circle.g,Circle.b),(int(Circle.x),int(600-Circle.y)),Circle.radius)
pygame.display.flip()
def GetInput():
keystate = pygame.key.get_pressed()
for event in pygame.event.get():
if event.type == QUIT or keystate[K_ESCAPE]:
pygame.quit(); sys.exit()
def main():
while True:
ColourCheck()
GetInput()
Move()
CollisionDetect()
Draw()
if __name__ == '__main__': main()
it's the ColourCheck function that's being ignored, any ideas why?
A:
I believe backgroundr, backgroundg, and backgroundb are local variables to your ColourCheck() function.
If you're determined to use global variables, try this at the top of your file:
global backgroundr
global backgroundg
global backgroundb
backgroundr = int(random.random()*255)+1
backgroundg = int(random.random()*255)+1
backgroundb = int(random.random()*255)+1
and this in your function:
def ColourCheck():
    global backgroundr
    global backgroundg
    global backgroundb
checknumber = int(random.random()*50)+1
if checknumber == 50:
backgroundr = int(random.random()*255)+1
backgroundg = int(random.random()*255)+1
backgroundb = int(random.random()*255)+1
A:
Move(), CollisionDetect(), and Draw() all refer to Circles without declaring it global; that is actually fine, because they only read the name (and mutate the list's contents) rather than rebind it. The global statement is only required in a function that assigns to the name, which is exactly what ColourCheck() does with the background variables. Also, I'd recommend changing your variables to lower-case; not only does an initial cap typically indicate a class in Python, but you're actually generating (insignificant) collisions between the Circle variable and the Circle class.
For example:
circles = []
# ...
for x in range(int(random.random()*30) + 1):
circles.append(Circle())
# ...
def Move():
    # no 'global circles' needed: we only read the module-level name here
for circle in circles:
circle.x += circle.speedx
circle.y += circle.speedy
# ...
Edit:
And as Nathan notes, your backgroundX variables need to be declared global in ColourCheck(), since that function assigns to them; Draw() only reads them, so it works without the declaration.
You might consider wrapping all of these functions into a Game class (or some such), to avoid working with so many globals.
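A rough outline of that refactoring (only the skeleton; reuse your existing Circle class and move your function bodies into the methods):
import pygame, random

class Game(object):
    def __init__(self):
        self.surface = pygame.display.set_mode((800, 600))
        self.background = [random.randint(1, 255) for _ in range(3)]
        self.circles = [Circle() for _ in range(random.randint(1, 30))]

    def colour_check(self):
        if random.randint(1, 50) == 50:   # state lives on self: no globals needed
            self.background = [random.randint(1, 255) for _ in range(3)]

    def run(self):
        while True:
            self.colour_check()
            self.get_input()        # move your existing GetInput() body here
            self.move()             # ...and so on for Move(),
            self.collision_detect() # CollisionDetect(),
            self.draw()             # and Draw()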
| pygame function appears to be being ignored | I'm building a relatively simple programme to test collision detection, it's all working fine at the moment except one thing, I'm trying to make the background colour change randomly, the only issue is that it appears to be completely skipping the function to do this;
import pygame
from pygame.locals import *
import random, math, time, sys
pygame.init()
Surface = pygame.display.set_mode((800,600))
backgroundr = int(random.random()*255)+1
backgroundg = int(random.random()*255)+1
backgroundb = int(random.random()*255)+1
Circles = []
class Circle:
def __init__(self):
self.radius = int(random.random()*50) + 1
self.x = random.randint(self.radius, 800-self.radius)
self.y = random.randint(self.radius, 600-self.radius)
self.speedx = 0.5*(random.random()+1.0)
self.speedy = 0.5*(random.random()+1.0)
self.r = int(random.random()*255)+1
self.g = int(random.random()*255)+1
self.b = int(random.random()*255)+1
## self.mass = math.sqrt(self.radius)
for x in range(int(random.random()*30) + 1):
Circles.append(Circle())
def CircleCollide(C1,C2):
C1Speed = math.sqrt((C1.speedx**2)+(C1.speedy**2))
XDiff = -(C1.x-C2.x)
YDiff = -(C1.y-C2.y)
if XDiff > 0:
if YDiff > 0:
Angle = math.degrees(math.atan(YDiff/XDiff))
XSpeed = -C1Speed*math.cos(math.radians(Angle))
YSpeed = -C1Speed*math.sin(math.radians(Angle))
elif YDiff < 0:
Angle = math.degrees(math.atan(YDiff/XDiff))
XSpeed = -C1Speed*math.cos(math.radians(Angle))
YSpeed = -C1Speed*math.sin(math.radians(Angle))
elif XDiff < 0:
if YDiff > 0:
Angle = 180 + math.degrees(math.atan(YDiff/XDiff))
XSpeed = -C1Speed*math.cos(math.radians(Angle))
YSpeed = -C1Speed*math.sin(math.radians(Angle))
elif YDiff < 0:
Angle = -180 + math.degrees(math.atan(YDiff/XDiff))
XSpeed = -C1Speed*math.cos(math.radians(Angle))
YSpeed = -C1Speed*math.sin(math.radians(Angle))
elif XDiff == 0:
if YDiff > 0:
Angle = -90
else:
Angle = 90
XSpeed = C1Speed*math.cos(math.radians(Angle))
YSpeed = C1Speed*math.sin(math.radians(Angle))
elif YDiff == 0:
if XDiff < 0:
Angle = 0
else:
Angle = 180
XSpeed = C1Speed*math.cos(math.radians(Angle))
YSpeed = C1Speed*math.sin(math.radians(Angle))
C1.speedx = XSpeed
C1.speedy = YSpeed
C1.r = int(random.random()*255)+1
C1.g = int(random.random()*255)+1
C1.b = int(random.random()*255)+1
C2.r = int(random.random()*255)+1
C2.g = int(random.random()*255)+1
C2.b = int(random.random()*255)+1
def ColourCheck():
checknumber = int(random.random()*50)+1
if checknumber == 50:
backgroundr = int(random.random()*255)+1
backgroundg = int(random.random()*255)+1
backgroundb = int(random.random()*255)+1
def Move():
for Circle in Circles:
Circle.x += Circle.speedx
Circle.y += Circle.speedy
def CollisionDetect():
for Circle in Circles:
if Circle.x < Circle.radius or Circle.x > 800-Circle.radius:
Circle.speedx *= -1
Circle.r = int(random.random()*255)+1
Circle.g = int(random.random()*255)+1
Circle.b = int(random.random()*255)+1
if Circle.y < Circle.radius or Circle.y > 600-Circle.radius:
Circle.speedy *= -1
Circle.r = int(random.random()*255)+1
Circle.g = int(random.random()*255)+1
Circle.b = int(random.random()*255)+1
for Circle in Circles:
for Circle2 in Circles:
if Circle != Circle2:
if math.sqrt( ((Circle.x-Circle2.x)**2) + ((Circle.y-Circle2.y)**2) ) <= (Circle.radius+Circle2.radius):
CircleCollide(Circle,Circle2)
def Draw():
Surface.fill((backgroundr,backgroundg,backgroundb))
for Circle in Circles:
pygame.draw.circle(Surface,(Circle.r,Circle.g,Circle.b),(int(Circle.x),int(600-Circle.y)),Circle.radius)
pygame.display.flip()
def GetInput():
keystate = pygame.key.get_pressed()
for event in pygame.event.get():
if event.type == QUIT or keystate[K_ESCAPE]:
pygame.quit(); sys.exit()
def main():
while True:
ColourCheck()
GetInput()
Move()
CollisionDetect()
Draw()
if __name__ == '__main__': main()
it's the ColourCheck function that's being ignored, any ideas why?
| [
"I believe backgroundr, backgroundg, and backgroundb are local variables to your ColourCheck() function.\nIf you're determined to use global variables, try this at the top of your file:\nglobal backgroundr;\nglobal backgroundg;\nglobal backgroundb;\nbackgroundr = int(random.random()*255)+1\nbackgroundg = int(random.random()*255)+1\nbackgroundb = int(random.random()*255)+1\n\nand this in your function:\ndef ColourCheck():\n global backgroundr;\n global backgroundg;\n global backgroundb;\n checknumber = int(random.random()*50)+1\n if checknumber == 50:\n backgroundr = int(random.random()*255)+1\n backgroundg = int(random.random()*255)+1\n backgroundb = int(random.random()*255)+1\n\n",
"Move(), CollisionDetect(), and Draw() all refer to Circles, but don't declare it global. Try adding a global Circles line to the beginning of each function. Also, I'd recommend changing your variables to lower-case; not only does an initial cap typically indicate a class in Python, but you're actually generating (insignificant) collisions between the Circle variable and the Circle class.\nFor example:\ncircles = []\n\n# ...\n\nfor x in range(int(random.random()*30) + 1):\n circles.append(Circle())\n\n# ...\n\ndef Move():\n global circles\n\n for circle in circles:\n circle.x += circle.speedx\n circle.y += circle.speedy\n\n# ...\n\nEdit:\nAnd as Nathan notes, your backgroundX variables also need to be declared global in ColorCheck() and Draw().\nYou might consider wrapping all of these functions into a Game class (or some such), to avoid working with so many globals.\n"
] | [
6,
0
] | [] | [] | [
"function",
"pygame",
"python"
] | stackoverflow_0000591776_function_pygame_python.txt |
Q:
Python - How to check if a file is used by another application?
I want to open a file which is periodically written to by another application. This application cannot be modified. I'd therefore like to only open the file when I know it is not being written to by another application.
Is there a pythonic way to do this? Otherwise, how do I achieve this in Unix and Windows?
edit: I'll try and clarify. Is there a way to check if the current file has been opened by another application?
I'd like to start with this question. Whether those other applications read or write is irrelevant for now.
I realize it is probably OS dependent, so this may not really be python related right now.
A:
Will your python script desire to open the file for writing or for reading? Is the legacy application opening and closing the file between writes, or does it keep it open?
It is extremely important that we understand what the legacy application is doing, and what your python script is attempting to achieve.
This area of functionality is highly OS-dependent, and the fact that you have no control over the legacy application only makes things harder unfortunately. Whether there is a pythonic or non-pythonic way of doing this will probably be the least of your concerns - the hard question will be whether what you are trying to achieve will be possible at all.
UPDATE
OK, so knowing (from your comment) that:
the legacy application is opening and
closing the file every X minutes, but
I do not want to assume that at t =
t_0 + n*X + eps it already closed
the file.
then the problem's parameters are changed. It can actually be done in an OS-independent way given a few assumptions, or as a combination of OS-dependent and OS-independent techniques. :)
OS-independent way: if it is safe to assume that the legacy application keeps the file open for at most some known quantity of time, say T seconds (e.g. opens the file, performs one write, then closes the file), and re-opens it more or less every X seconds, where X is larger than 2*T.
stat the file
subtract file's modification time from now(), yielding D
if T <= D < X then open the file and do what you need with it (a minimal sketch of this check appears at the end of this answer)
This may be safe enough for your application. Safety increases as T/X decreases. On *nix you may have to double check /etc/ntpd.conf for proper time-stepping vs. slew configuration (see tinker). For Windows see MSDN
Windows: in addition (or in-lieu) of the OS-independent method above, you may attempt to use either:
sharing (locking): this assumes that the legacy program also opens the file in shared mode (usually the default in Windows apps); moreover, if your application acquires the lock just as the legacy application is attempting the same (race condition), the legacy application will fail.
this is extremely intrusive and error prone. Unless both the new application and the legacy application need synchronized access for writing to the same file and you are willing to handle the possibility of the legacy application being denied opening of the file, do not use this method.
attempting to find out what files are open in the legacy application, using the same techniques as ProcessExplorer (the equivalent of *nix's lsof)
you are even more vulnerable to race conditions than the OS-independent technique
Linux/etc.: in addition (or in-lieu) of the OS-independent method above, you may attempt to use the same technique as lsof or, on some systems, simply check which file the symbolic link /proc/<pid>/fd/<fdes> points to
you are even more vulnerable to race conditions than the OS-independent technique
it is highly unlikely that the legacy application uses locking, but if it does, locking is not a real option unless the legacy application can handle a locked file gracefully (by blocking, not by failing) and your own application can guarantee that it will not keep the file locked, blocking the legacy application for extended periods of time.
UPDATE 2
If favouring the "check whether the legacy application has the file open" (intrusive approach prone to race conditions) then you can solve the said race condition by:
checking whether the legacy application has the file open (a la lsof or ProcessExplorer)
suspending the legacy application process
repeating the check in step 1 to confirm that the legacy application did not open the file between steps 1 and 2; delay and restart at step 1 if so, otherwise proceed to step 4
doing your business on the file -- ideally simply renaming it for subsequent, independent processing in order to keep the legacy application suspended for a minimal amount of time
resuming the legacy application process
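A minimal sketch of the OS-independent timing check from the first method above (T and X are the assumed hold time and cycle length; both are parameters you must estimate yourself, not values the code can discover):
import os, time

def safe_to_open(path, T, X):
    """True if the legacy app has plausibly closed the file (heuristic)."""
    D = time.time() - os.stat(path).st_mtime  # seconds since last write
    return T <= D < X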
A:
Unix does not have mandatory file locking by default. The best suggestion I have for a Unix environment would be to look at the sources for the lsof command. It has deep knowledge about which processes have which files open. You could use that as the basis of your solution. Here are the Ubuntu sources for lsof.
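If shelling out is acceptable, a rough sketch that simply asks lsof (rather than re-implementing it) might look like this; lsof exits with status 0 when it finds at least one process using the file:
import os
import subprocess

def is_open_by_someone(path):
    # lsof prints the owning processes; we only care about its exit status
    devnull = open(os.devnull, 'w')
    try:
        return subprocess.call(['lsof', '--', path],
                               stdout=devnull, stderr=devnull) == 0
    finally:
        devnull.close()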
A:
One thing I've done is have python very temporarily rename the file. If we're able to rename it, then no other process is using it. I only tested this on Windows.
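A hedged sketch of that rename trick (the temporary name is made up for illustration; on Windows, renaming a file that another process holds open raises an OSError):
import os

def file_in_use(path):
    tmp = path + '.renametest'  # hypothetical scratch name
    try:
        os.rename(path, tmp)
        os.rename(tmp, path)
        return False
    except OSError:
        return True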
| Python - How to check if a file is used by another application? | I want to open a file which is periodically written to by another application. This application cannot be modified. I'd therefore like to open the file only when I know it is not being written to by another application.
Is there a pythonic way to do this? Otherwise, how do I achieve this in Unix and Windows?
edit: I'll try and clarify. Is there a way to check if the current file has been opened by another application?
I'd like to start with this question. Whether those other application read/write is irrelevant for now.
I realize it is probably OS dependent, so this may not really be python related right now.
| [
"Will your python script desire to open the file for writing or for reading? Is the legacy application opening and closing the file between writes, or does it keep it open?\nIt is extremely important that we understand what the legacy application is doing, and what your python script is attempting to achieve.\nThis area of functionality is highly OS-dependent, and the fact that you have no control over the legacy application only makes things harder unfortunately. Whether there is a pythonic or non-pythonic way of doing this will probably be the least of your concerns - the hard question will be whether what you are trying to achieve will be possible at all.\n\nUPDATE\nOK, so knowing (from your comment) that:\n\nthe legacy application is opening and\n closing the file every X minutes, but\n I do not want to assume that at t =\n t_0 + n*X + eps it already closed\n the file.\n\nthen the problem's parameters are changed. It can actually be done in an OS-independent way given a few assumptions, or as a combination of OS-dependent and OS-independent techniques. :)\n\nOS-independent way: if it is safe to assume that the legacy application keeps the file open for at most some known quantity of time, say T seconds (e.g. opens the file, performs one write, then closes the file), and re-opens it more or less every X seconds, where X is larger than 2*T.\n\n\nstat the file\nsubtract file's modification time from now(), yielding D\nif T <= D < X then open the file and do what you need with it\nThis may be safe enough for your application. Safety increases as T/X decreases. On *nix you may have to double check /etc/ntpd.conf for proper time-stepping vs. slew configuration (see tinker). For Windows see MSDN\n\nWindows: in addition (or in-lieu) of the OS-independent method above, you may attempt to use either:\n\n\nsharing (locking): this assumes that the legacy program also opens the file in shared mode (usually the default in Windows apps); moreover, if your application acquires the lock just as the legacy application is attempting the same (race condition), the legacy application will fail.\n\n\nthis is extremely intrusive and error prone. 
Unless both the new application and the legacy application need synchronized access for writing to the same file and you are willing to handle the possibility of the legacy application being denied opening of the file, do not use this method.\n\nattempting to find out what files are open in the legacy application, using the same techniques as ProcessExplorer (the equivalent of *nix's lsof)\n\n\nyou are even more vulnerable to race conditions than the OS-independent technique\n\n\nLinux/etc.: in addition (or in-lieu) of the OS-independent method above, you may attempt to use the same technique as lsof or, on some systems, simply check which file the symbolic link /proc/<pid>/fd/<fdes> points to\n\n\nyou are even more vulnerable to race conditions than the OS-independent technique\nit is highly unlikely that the legacy application uses locking, but if it is, locking is not a real option unless the legacy application can handle a locked file gracefully (by blocking, not by failing - and if your own application can guarantee that the file will not remain locked, blocking the legacy application for extender periods of time.)\n\n\n\nUPDATE 2\nIf favouring the \"check whether the legacy application has the file open\" (intrusive approach prone to race conditions) then you can solve the said race condition by:\n\nchecking whether the legacy application has the file open (a la lsof or ProcessExplorer)\nsuspending the legacy application process\nrepeating the check in step 1 to confirm that the legacy application did not open the file between steps 1 and 2; delay and restart at step 1 if so, otherwise proceed to step 4\ndoing your business on the file -- ideally simply renaming it for subsequent, independent processing in order to keep the legacy application suspended for a minimal amount of time\nresuming the legacy application process\n\n",
"Unix does not have file locking as a default. The best suggestion I have for a Unix environment would be to look at the sources for the lsof command. It has deep knowledge about which process have which files open. You could use that as the basis of your solution. Here are the Ubuntu sources for lsof.\n",
"One thing I've done is have python very temporarily rename the file. If we're able to rename it, then no other process is using it. I only tested this on Windows.\n"
] | [
7,
0,
0
] | [] | [] | [
"file_io",
"logging",
"python",
"unix",
"windows"
] | stackoverflow_0000589407_file_io_logging_python_unix_windows.txt |
Q:
In Python what is the preferred way to create and manage threads?
Python provides several methods to create threads. Which provides
the best API and the most control?
Thanks.
A:
When necessary, the threading module and its high-level interface is preferred. Of course, many people suggest that it's rarely/never necessary, and threads aren't very nice to deal with. The thread module may be necessary for some weird use-case or other, but I've never needed it (and of course, I've only rarely used threading, a long time ago). There's some other modules that do neater things, such as multiprocessing, which may be of interest. That doesn't do threading, it just shares the interface (quite cool). I've heard good things about it, but haven't wanted anything like either of them for quite a while.
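For reference, a minimal example of the high-level threading interface mentioned above:
import threading

def worker(n):
    print "worker %d running" % n

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for every worker to finish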
A:
I believe the threading module is the recommended one. The thread module is being renamed to _thread in Python 3.x, and is meant as a lower-level interface. See the note at the top of this page:
http://docs.python.org/library/thread.html
| In Python what is the preferred way to create and manage threads? | Python provides several methods to create threads. Which provides
the best API and the most control?
Thanks.
| [
"When necessary, the threading module and its high-level interface is preferred. Of course, many people suggest that it's rarely/never necessary, and threads aren't very nice to deal with. The thread module may be necessary for some weird use-case or other, but I've never needed it (and of course, I've only rarely used threading, a long time ago). There's some other modules that do neater things, such as multiprocessing, which may be of interest. That doesn't do threading, it just shares the interface (quite cool). I've heard good things about it, but haven't wanted anything like either of them for quite a while.\n",
"I believe the threading module is the recommended one. The thread module is being renamed to _thread in Python 3.x, and is meant as a lower-level interface. See the note at the top of this page:\nhttp://docs.python.org/library/thread.html\n"
] | [
8,
4
] | [] | [] | [
"multithreading",
"python"
] | stackoverflow_0000592143_multithreading_python.txt |
Q:
PyQt4: Databinding?
Coming from the .NET world over to Python and PyQt4. Was wondering if anyone is familiar with any functionality that would allow me to bind data to Qt widgets? For example (using sqlalchemy for data):
gems = session.query(Gem).all()
list = QListWidget()
list.datasource = gems
Is such a thing possible?
A:
Although not a direct replacement, you might find it useful to look at the QDataWidgetMapper class:
http://pyqt.sourceforge.net/Docs/PyQt4/qdatawidgetmapper.html
If you're not scared of reading C++ code, this example might also prove to be helpful:
https://doc.qt.io/qt-4.8/qt-sql-sqlwidgetmapper-example.html
Note that the mapper operates within Qt's Model/View framework. In this example, the model just happens to be a SQL database model.
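A minimal PyQt4 sketch of the mapper, assuming a made-up two-column model (the gem data here is purely illustrative):
from PyQt4 import QtGui

app = QtGui.QApplication([])

# hypothetical model: column 0 = name, column 1 = carats
model = QtGui.QStandardItemModel(1, 2)
model.setItem(0, 0, QtGui.QStandardItem("ruby"))
model.setItem(0, 1, QtGui.QStandardItem("2.5"))

name_edit = QtGui.QLineEdit()
carat_edit = QtGui.QLineEdit()

mapper = QtGui.QDataWidgetMapper()
mapper.setModel(model)
mapper.addMapping(name_edit, 0)   # bind column 0 to name_edit
mapper.addMapping(carat_edit, 1)  # bind column 1 to carat_edit
mapper.toFirst()                  # load the first row into the widgets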
A:
One option would be to have a function that returns a list (or tuple) of values from a query, and then use that to update the QListWidget. Remember that QListWidget items are QStrings (added here via a QStringList). Your update function might look like this:
def updateQListWidget(qlistwidget, values):
""" Updates a QListWidget object with a list of values
ARGS:
qlistwidget - QListWidget object
values - list of values to add to list widget
"""
qlistwidget.clear()
qlist = QtCore.QStringList()
for v in values:
s = QtCore.QString(v)
qlist.append(s)
qlistwidget.addItems(qlist)
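Usage might then look like this (my_list_widget is a hypothetical existing QListWidget; str() is applied because the helper expects plain values it can wrap in QStrings):
gems = session.query(Gem).all()
updateQListWidget(my_list_widget, [str(g) for g in gems])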
| PyQt4: Databinding? | Coming from the .NET world over to Python and PyQt4. Was wondering if anyone is familiar with any functionality that would allow me to bind data to Qt widgets? For example (using sqlalchemy for data):
gems = session.query(Gem).all()
list = QListWidget()
list.datasource = gems
Is such a thing possible?
| [
"Although not a direct replacement, you might find it useful to look at the QDataWidgetMapper class:\nhttp://pyqt.sourceforge.net/Docs/PyQt4/qdatawidgetmapper.html\nIf you're not scared of reading C++ code, this example might also prove to be helpful:\nhttps://doc.qt.io/qt-4.8/qt-sql-sqlwidgetmapper-example.html\nNote that the mapper operates within Qt's Model/View framework. In this example, the model just happens to be a SQL database model.\n",
"One option would have a function that returns a list (or tuple) object from a query, and then use that to update the QListWidget. Remember that the QListWidget stores QListStrings. Your update function might look like this:\ndef updateQListWidget(qlistwidget, values):\n \"\"\" Updates a QListWidget object with a list of values\n ARGS:\n qlistwidget - QListWidget object\n values - list of values to add to list widget\n \"\"\"\n qlistwidget.clear()\n qlist = QtCore.QStringList()\n for v in values:\n s = QtCore.QString(v)\n qlist.append(s)\n qlistwidget.addItems(qlist) \n\n"
] | [
4,
3
] | [] | [] | [
"data_binding",
"pyqt4",
"python",
"qt4"
] | stackoverflow_0000592404_data_binding_pyqt4_python_qt4.txt |
Q:
How can you print a variable name in python?
Say I have a variable named choice; it is equal to 2. How would I access the name of the variable? Something equivalent to
In [53]: namestr(choice)
Out[53]: 'choice'
for use in making a dictionary. There's a good way to do this and I'm just missing it.
EDIT:
The reason to do this is thus. I am running some data analysis stuff where I call the program with multiple parameters that I would like to tweak, or not tweak, at runtime. I read in the parameters I used in the last run from a .config file formatted as
filename
no_sig_resonance.dat
mass_peak
700
choice
1,2,3
When prompted for values, the previously used is displayed and an empty string input will use the previously used value.
My question comes about when it comes to writing the dictionary that these values have been scanned into. If a parameter is needed, I run get_param, which accesses the file and finds the parameter.
I think I will avoid the problem altogether by reading the .config file once and producing a dictionary from that. I avoided that originally for... reasons I no longer remember. Perfect situation to update my code!
A:
If you insist, here is some horrible inspect-based solution.
import inspect, re
def varname(p):
for line in inspect.getframeinfo(inspect.currentframe().f_back)[3]:
m = re.search(r'\bvarname\s*\(\s*([A-Za-z_][A-Za-z0-9_]*)\s*\)', line)
if m:
return m.group(1)
if __name__ == '__main__':
spam = 42
print varname(spam)
I hope it will inspire you to reevaluate the problem you have and look for another approach.
A:
To answer your original question:
def namestr(obj, namespace):
return [name for name in namespace if namespace[name] is obj]
Example:
>>> a = 'some var'
>>> namestr(a, globals())
['a']
As @rbright already pointed out whatever you do there are probably better ways to do it.
A:
If you are trying to do this, it means you are doing something wrong. Consider using a dict instead.
def show_val(vals, name):
print "Name:", name, "val:", vals[name]
vals = {'a': 1, 'b': 2}
show_val(vals, 'b')
Output:
Name: b val: 2
A:
You can't, as there are no variables in Python but only names.
For example:
> a = [1,2,3]
> b = a
> a is b
True
Which of those two is now the correct variable? There's no difference between a and b.
There's been a similar question before.
A:
Rather than ask for details to a specific solution, I recommend describing the problem you face; I think you'll get better answers. I say this since there's almost certainly a better way to do whatever it is you're trying to do. Accessing variable names in this way is not commonly needed to solve problems in any language.
That said, all of your variable names are already in dictionaries which are accessible through the built-in functions locals and globals. Use the correct one for the scope you are inspecting.
One of the few common idioms for inspecting these dictionaries is for easy string interpolation:
>>> first = 'John'
>>> last = 'Doe'
>>> print '%(first)s %(last)s' % globals()
John Doe
This sort of thing tends to be a bit more readable than the alternatives even though it requires inspecting variables by name.
A:
With eager evaluation, variables essentially turn into their values any time you look at them (to paraphrase). That said, Python does have built-in namespaces. For example, locals() will return a dictionary mapping a function's variables' names to their values, and globals() does the same for a module. Thus:
for name, value in globals().items():
if value is unknown_variable:
... do something with name
Note that you don't need to import anything to be able to access locals() and globals().
Also, if there are multiple aliases for a value, iterating through a namespace only finds the first one.
A:
Will something like this work for you?
>>> def namestr(**kwargs):
... for k,v in kwargs.items():
... print "%s = %s" % (k, repr(v))
...
>>> namestr(a=1, b=2)
a = 1
b = 2
And in your example:
>>> choice = {'key': 24, 'data': None}
>>> namestr(choice=choice)
choice = {'data': None, 'key': 24}
>>> namestr(**globals())
__builtins__ = <module '__builtin__' (built-in)>
__name__ = '__main__'
__doc__ = None
namestr = <function namestr at 0xb7d8ec34>
choice = {'data': None, 'key': 24}
A:
For the revised question of how to read in configuration parameters, I'd strongly recommend saving yourself some time and effort and use ConfigParser or (my preferred tool) ConfigObj.
They can do everything you need, they're easy to use, and someone else has already worried about how to get them to work properly!
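For example, if the .config file were reorganized into INI style with a [parameters] section (an assumption; ConfigParser needs section headers), reading it back is a few lines:
import ConfigParser

config = ConfigParser.SafeConfigParser()
config.read('run.config')  # hypothetical file name

filename = config.get('parameters', 'filename')
mass_peak = config.getint('parameters', 'mass_peak')
choice = config.get('parameters', 'choice').split(',')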
| How can you print a variable name in python? | Say I have a variable named choice; it is equal to 2. How would I access the name of the variable? Something equivalent to
In [53]: namestr(choice)
Out[53]: 'choice'
for use in making a dictionary. There's a good way to do this and I'm just missing it.
EDIT:
The reason to do this is thus. I am running some data analysis stuff where I call the program with multiple parameters that I would like to tweak, or not tweak, at runtime. I read in the parameters I used in the last run from a .config file formatted as
filename
no_sig_resonance.dat
mass_peak
700
choice
1,2,3
When prompted for values, the previously used is displayed and an empty string input will use the previously used value.
My question comes about when it comes to writing the dictionary that these values have been scanned into. If a parameter is needed, I run get_param, which accesses the file and finds the parameter.
I think I will avoid the problem altogether by reading the .config file once and producing a dictionary from that. I avoided that originally for... reasons I no longer remember. Perfect situation to update my code!
| [
"If you insist, here is some horrible inspect-based solution.\nimport inspect, re\n\ndef varname(p):\n for line in inspect.getframeinfo(inspect.currentframe().f_back)[3]:\n m = re.search(r'\\bvarname\\s*\\(\\s*([A-Za-z_][A-Za-z0-9_]*)\\s*\\)', line)\n if m:\n return m.group(1)\n\nif __name__ == '__main__':\n spam = 42\n print varname(spam)\n\nI hope it will inspire you to reevaluate the problem you have and look for another approach.\n",
"To answer your original question:\ndef namestr(obj, namespace):\n return [name for name in namespace if namespace[name] is obj]\n\nExample:\n>>> a = 'some var'\n>>> namestr(a, globals())\n['a']\n\nAs @rbright already pointed out whatever you do there are probably better ways to do it.\n",
"If you are trying to do this, it means you are doing something wrong. Consider using a dict instead.\ndef show_val(vals, name):\n print \"Name:\", name, \"val:\", vals[name]\n\nvals = {'a': 1, 'b': 2}\nshow_val(vals, 'b')\n\nOutput:\nName: b val: 2\n\n",
"You can't, as there are no variables in Python but only names.\nFor example:\n> a = [1,2,3]\n> b = a\n> a is b\nTrue\n\nWhich of those two is now the correct variable? There's no difference between a and b.\nThere's been a similar question before.\n",
"Rather than ask for details to a specific solution, I recommend describing the problem you face; I think you'll get better answers. I say this since there's almost certainly a better way to do whatever it is you're trying to do. Accessing variable names in this way is not commonly needed to solve problems in any language.\nThat said, all of your variable names are already in dictionaries which are accessible through the built-in functions locals and globals. Use the correct one for the scope you are inspecting.\nOne of the few common idioms for inspecting these dictionaries is for easy string interpolation:\n>>> first = 'John'\n>>> last = 'Doe'\n>>> print '%(first)s %(last)s' % globals()\nJohn Doe\n\nThis sort of thing tends to be a bit more readable than the alternatives even though it requires inspecting variables by name.\n",
"With eager evaluation, variables essentially turn into their values any time you look at them (to paraphrase). That said, Python does have built-in namespaces. For example, locals() will return a dictionary mapping a function's variables' names to their values, and globals() does the same for a module. Thus:\nfor name, value in globals().items():\n if value is unknown_variable:\n ... do something with name\n\nNote that you don't need to import anything to be able to access locals() and globals().\nAlso, if there are multiple aliases for a value, iterating through a namespace only finds the first one.\n",
"Will something like this work for you?\n>>> def namestr(**kwargs):\n... for k,v in kwargs.items():\n... print \"%s = %s\" % (k, repr(v))\n...\n>>> namestr(a=1, b=2)\na = 1\nb = 2\n\nAnd in your example:\n>>> choice = {'key': 24; 'data': None}\n>>> namestr(choice=choice)\nchoice = {'data': None, 'key': 24}\n>>> printvars(**globals())\n__builtins__ = <module '__builtin__' (built-in)>\n__name__ = '__main__'\n__doc__ = None\nnamestr = <function namestr at 0xb7d8ec34>\nchoice = {'data': None, 'key': 24}\n\n",
"For the revised question of how to read in configuration parameters, I'd strongly recommend saving yourself some time and effort and use ConfigParser or (my preferred tool) ConfigObj.\nThey can do everything you need, they're easy to use, and someone else has already worried about how to get them to work properly!\n"
] | [
144,
103,
16,
10,
9,
4,
4,
3
] | [] | [] | [
"dictionary",
"introspection",
"python",
"variables"
] | stackoverflow_0000592746_dictionary_introspection_python_variables.txt |
Q:
Can I use a decorator to mutate the local scope of a function in Python?
Is there any way of writing a decorator such that the following would work?
assert 'z' not in globals()
@my_decorator
def func(x, y):
print z
EDIT: moved from anwser
In answer to hop's "why?": syntax sugar / DRY.
It's not about caching, it's about calculating z (and z1, z2, z3, ...) based upon the values of x & y.
I have lots of functions which do related things, and I don't want to do have to write
z1, z2, z3=calculate_from(x, y)
at the beginning of every single function - I'll get it wrong somewhere. If this were c I'd do this with cpp (if this were lisp, I'd do this with macros ...), but I wanted to see if decorators could do the same thing.
If it helps, I'd almost certainly call the decorator "precalculate_z", and it certainly wouldn't be part of any public API.
I could probably get a similar effect from using the class infrastructure as well, but I wanted to see if it was doable with raw functions.
A:
Echoing Hop's answer
Don't do it.
Seriously, don't do this. Lisp and Ruby are more appropriate languages for writing your own custom syntax. Use one of those. Or find a cleaner way to do this
If you must, you want dynamic scoped variables, not lexically scoped.
Python doesn't have dynamically scoped variables, but you can simulate it. Here's an example that simulates it by creating a global binding, but restores the previous value on exit:
http://codepad.org/6vAY8Leh
def adds_dynamic_z_decorator(f):
def replacement(*arg,**karg):
# create a new 'z' binding in globals, saving previous
if 'z' in globals():
oldZ = (globals()['z'],)
else:
oldZ = None
try:
globals()['z'] = None
#invoke the original function
res = f(*arg, **karg)
finally:
#restore any old bindings
if oldZ:
globals()['z'] = oldZ[0]
else:
del(globals()['z'])
return res
return replacement
@adds_dynamic_z_decorator
def func(x,y):
print z
def other_recurse(x):
global z
print 'x=%s, z=%s' %(x,z)
recurse(x+1)
print 'x=%s, z=%s' %(x,z)
@adds_dynamic_z_decorator
def recurse(x=0):
global z
z = x
if x < 3:
other_recurse(x)
print 'calling func(1,2)'
func(1,2)
print 'calling recurse()'
recurse()
I make no warranties on the utility or sanity of the above code. Actually, I warrant that it is insane, and you should avoid using it unless you want a flogging from your Python peers.
This code is similar to both eduffy's and John Montgomery's code, but ensures that 'z' is created and properly restored "like" a local variable would be -- for instance, note how 'other_recurse' is able to see the binding for 'z' specified in the body of 'recurse'.
A:
I don't know about the local scope, but you could provide an alternative global name space temporarily. Something like:
import types
def my_decorator(fn):
def decorated(*args,**kw):
my_globals={}
my_globals.update(globals())
my_globals['z']='value of z'
call_fn=types.FunctionType(fn.func_code,my_globals)
return call_fn(*args,**kw)
return decorated
@my_decorator
def func(x, y):
print z
func(0,1)
Which should print "value of z"
A:
a) don't do it.
b) seriously, why would you do that?
c) you could declare z as global within your decorator, so z will not be in globals() until after the decorator has been called for the first time, so the assert won't bark.
d) why???
A:
I'll first echo the "please don't", but that's your choice. Here's a solution for you:
assert 'z' not in globals ()
class my_dec:
def __init__ (self, f):
self.f = f
def __call__ (self,x,y):
z = x+y
self.f(x,y,z)
@my_dec
def func (x,y,z):
print z
func (1,3)
It does require z in the formal parameters, but not the actual.
A:
I could probably get a similar effect from using the class infrastructure as well, but I wanted to see if it was doable with raw functions.
Well, Python is an object-oriented language. You should do this in a class, in my opinion. Making a nice class interface would surely simplify your problem. This isn't what decorators were made for.
A:
Explicit is better than implicit.
Is this good enough?
def provide_value(f):
f.foo = "Bar"
return f
@provide_value
def g(x):
print g.foo
(If you really want evil, assigning to f.func_globals seems fun.)
A:
Others have given a few ways of making a working decorator; many have advised against doing so because it's so stylistically different from normal Python behavior that it'll really confuse anyone trying to understand the code.
If you're needing to recalculate things a lot, would it make sense to group them together in an object? Compute z1...zN in the constructor, then the functions that use these values can access the pre-computed answers as part of the instance.
| Can I use a decorator to mutate the local scope of a function in Python? | Is there any way of writing a decorator such that the following would work?
assert 'z' not in globals()
@my_decorator
def func(x, y):
print z
EDIT: moved from anwser
In answer to hop's "why?": syntax sugar / DRY.
It's not about caching, it's about calculating z (and z1, z2, z3, ...) based upon the values of x & y.
I have lots of functions which do related things, and I don't want to do have to write
z1, z2, z3=calculate_from(x, y)
at the beginning of every single function - I'll get it wrong somewhere. If this were c I'd do this with cpp (if this were lisp, I'd do this with macros ...), but I wanted to see if decorators could do the same thing.
If it helps, I'd almost certainly call the decorator "precalculate_z", and it certainly wouldn't be part of any public API.
I could probably get a similar effect from using the class infrastructure as well, but I wanted to see if it was doable with raw functions.
| [
"Echoing Hop's answer\n\nDon't do it.\nSeriously, don't do this. Lisp and Ruby are more appropriate languages for writing your own custom syntax. Use one of those. Or find a cleaner way to do this\nIf you must, you want dynamic scoped variables, not lexically scoped.\n\nPython doesn't have dynamically scoped variables, but you can simulate it. Here's an example that simulates it by creating a global binding, but restores the previous value on exit:\nhttp://codepad.org/6vAY8Leh\ndef adds_dynamic_z_decorator(f):\n def replacement(*arg,**karg):\n # create a new 'z' binding in globals, saving previous\n if 'z' in globals():\n oldZ = (globals()['z'],)\n else:\n oldZ = None\n try:\n globals()['z'] = None\n #invoke the original function\n res = f(*arg, **karg)\n finally:\n #restore any old bindings\n if oldZ:\n globals()['z'] = oldZ[0]\n else:\n del(globals()['z'])\n return res\n return replacement\n\n@adds_dynamic_z_decorator\ndef func(x,y):\n print z\n\ndef other_recurse(x):\n global z\n print 'x=%s, z=%s' %(x,z)\n recurse(x+1)\n print 'x=%s, z=%s' %(x,z)\n\n@adds_dynamic_z_decorator\ndef recurse(x=0):\n global z\n z = x\n if x < 3:\n other_recurse(x)\n\nprint 'calling func(1,2)'\nfunc(1,2)\n\nprint 'calling recurse()'\nrecurse()\n\nI make no warranties on the utility or sanity of the above code. Actually, I warrant that it is insane, and you should avoid using it unless you want a flogging from your Python peers.\nThis code is similar to both eduffy's and John Montgomery's code, but ensures that 'z' is created and properly restored \"like\" a local variable would be -- for instance, note how 'other_recurse' is able to see the binding for 'z' specified in the body of 'recurse'. \n",
"I don't know about the local scope, but you could provide an alternative global name space temporarily. Something like:\n\n\n\nimport types\n\ndef my_decorator(fn):\n def decorated(*args,**kw):\n my_globals={}\n my_globals.update(globals())\n my_globals['z']='value of z'\n call_fn=types.FunctionType(fn.func_code,my_globals)\n return call_fn(*args,**kw)\n return decorated\n\n@my_decorator\ndef func(x, y):\n print z\n\nfunc(0,1)\n\n\n\nWhich should print \"value of z\"\n",
"a) don't do it.\nb) seriously, why would you do that?\nc) you could declare z as global within your decorator, so z will not be in globals() until after the decorator has been called for the first time, so the assert won't bark.\nd) why???\n",
"I'll first echo the \"please don't\", but that's your choice. Here's a solution for you:\nassert 'z' not in globals ()\n\nclass my_dec:\n def __init__ (self, f):\n self.f = f\n def __call__ (self,x,y):\n z = x+y\n self.f(x,y,z)\n\n@my_dec\ndef func (x,y,z):\n print z\n\nfunc (1,3)\n\nIt does require z in the formal parameters, but not the actual.\n",
"I could probably get a similar effect from using the class infrastructure as well, but I wanted to see if it was doable with raw functions.\nWell, Python is an object-oriented language. You should do this in a class, in my opinion. Making a nice class interface would surely simplify your problem. This isn't what decorators were made for.\n",
"Explicit is better than implicit.\nIs this good enough?\ndef provide_value(f):\n f.foo = \"Bar\"\n return f\n\n@provide_value\ndef g(x):\n print g.foo\n\n(If you really want evil, assigning to f.func_globals seems fun.)\n",
"Others have given a few ways of making a working decorator, many have advised against doing so because it's so stylistically different from normal python behavior that it'll really confuse anyone trying to understand the code.\nIf you're needing to recalculate things a lot, would it make sense to group them together in an object? Compute z1...zN in the constructor, then the functions that use these values can access the pre-computed answers as part of the instance.\n"
] | [
11,
8,
7,
2,
1,
1,
0
] | [] | [] | [
"decorator",
"python"
] | stackoverflow_0000591200_decorator_python.txt |
Q:
Parsing an HTML file with selectorgadget.com
How can I use Beautiful Soup and SelectorGadget to scrape a website? For example, I have a website - (a newegg product) and I would like my script to return all of the specifications of that product (click on SPECIFICATIONS) - by this I mean: Intel, Desktop, ......, 2.4GHz, 1066Mhz, ...... , 3 years limited.
After using selectorgadget I get the string-
.desc
How do I use this?
Thanks :)
A:
Inspecting the page, I can see that the specifications are placed in a div with the ID pcraSpecs:
<div id="pcraSpecs">
<script type="text/javascript">...</script>
<TABLE cellpadding="0" cellspacing="0" class="specification">
<TR>
<TD colspan="2" class="title">Model</TD>
</TR>
<TR>
<TD class="name">Brand</TD>
<TD class="desc"><script type="text/javascript">document.write(neg_specification_newline('Intel'));</script></TD>
</TR>
<TR>
<TD class="name">Processors Type</TD>
<TD class="desc"><script type="text/javascript">document.write(neg_specification_newline('Desktop'));</script></TD>
</TR>
...
</TABLE>
</div>
desc is the class of the table cells.
What you want to do is to extract the contents of this table.
soup.find(id="pcraSpecs").findAll("td") should get you started.
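Building on that, a hedged sketch of the whole scrape (product_url is a placeholder; note that the values on this page are wrapped in document.write() calls, so they are pulled out of the script text with a regex keyed to the neg_specification_newline call shown above):
import re
import urllib2
from BeautifulSoup import BeautifulSoup

soup = BeautifulSoup(urllib2.urlopen(product_url).read())
specs = {}
for row in soup.find(id="pcraSpecs").findAll("tr"):
    name = row.find("td", {"class": "name"})
    desc = row.find("td", {"class": "desc"})
    if name is None or desc is None:
        continue  # skip the "title" rows, which have no name/desc pair
    match = re.search(r"neg_specification_newline\('([^']*)'\)", str(desc))
    if match:
        specs[name.string] = match.group(1)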
A:
Have you tried using Feedity - http://feedity.com for creating a custom RSS feed from any webpage.
| Parsing an HTML file with selectorgadget.com | How can I use Beautiful Soup and SelectorGadget to scrape a website? For example, I have a website - (a newegg product) and I would like my script to return all of the specifications of that product (click on SPECIFICATIONS) - by this I mean: Intel, Desktop, ......, 2.4GHz, 1066Mhz, ...... , 3 years limited.
After using selectorgadget I get the string-
.desc
How do I use this?
Thanks :)
| [
"Inspecting the page, I can see that the specifications are placed in a div with the ID pcraSpecs:\n<div id=\"pcraSpecs\">\n <script type=\"text/javascript\">...</script>\n <TABLE cellpadding=\"0\" cellspacing=\"0\" class=\"specification\">\n <TR>\n <TD colspan=\"2\" class=\"title\">Model</TD>\n </TR>\n <TR>\n <TD class=\"name\">Brand</TD>\n <TD class=\"desc\"><script type=\"text/javascript\">document.write(neg_specification_newline('Intel'));</script></TD>\n </TR>\n <TR>\n <TD class=\"name\">Processors Type</TD>\n <TD class=\"desc\"><script type=\"text/javascript\">document.write(neg_specification_newline('Desktop'));</script></TD> \n </TR>\n ...\n </TABLE>\n</div>\n\ndesc is the class of the table cells.\nWhat you want to do is to extract the contents of this table.\nsoup.find(id=\"pcraSpecs\").findAll(\"td\") should get you started.\n",
"Have you tried using Feedity - http://feedity.com for creating a custom RSS feed from any webpage.\n"
] | [
1,
0
] | [] | [] | [
"beautifulsoup",
"css",
"html_content_extraction",
"python",
"screen_scraping"
] | stackoverflow_0000592910_beautifulsoup_css_html_content_extraction_python_screen_scraping.txt |
Q:
Rotating a glViewport?
In a "multitouch" environment, any application shown on a surface can be rotated/scaled to face the user. The current solution is to draw the application into an FBO and then draw a rotated/scaled rectangle with that texture on it. I don't think that's good for performance, and not all graphics cards provide FBOs.
The idea is to clip the rendering viewport in the direction of the user.
Since glViewport cannot be used for that, does another way exist to achieve this?
(glViewport takes (x, y, width, height), and I would like (x, y, width, height, rotation from center?))
PS: rotating the modelview or projection matrix will not help; I would like to "rotate the clipping plane" generated by glViewport (only part of the whole scene).
A:
If you already have the code set up to render your scene, try adding a glRotate() call to the modelview matrix setup, to "rotate the camera" before rendering the scene.
A:
There's no way to have a rotated viewport in OpenGL; you have to handle it manually. I see the following possible solutions:
Keep on using textures, perhaps using glCopyTexSubImage instead of FBOs, as this is basic OpenGL feature. If your target platforms are hardware accelerated, performance should be ok, depending on the number of viewports you need on your desk, as this is a very common use case nowadays.
Without textures, you could setup your glViewport to the screen-aligned bounding rectangle (rA) of your rotated viewport (rB) (setting also proper scissor testing area). Then draw a masking area, possibly only in depth or stencil buffer, filling the (rA - rB) area, that will prevent further drawing on those pixels. Then draw normally your application, using a glRotate to adjust you projection matrix, so that the rendering is properly oriented according to rB.
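A partial PyOpenGL sketch of the second approach, fixed-function style (the stencil/depth mask for the rA - rB corner regions is omitted; angle, and the bounding-rectangle math that produces x, y, w, h, are assumed to be computed elsewhere):
from OpenGL.GL import *

def begin_rotated_view(x, y, w, h, angle):
    # (x, y, w, h) is rA, the screen-aligned bounding rect of rB
    glViewport(x, y, w, h)
    glEnable(GL_SCISSOR_TEST)
    glScissor(x, y, w, h)
    glMatrixMode(GL_PROJECTION)
    glPushMatrix()
    glRotatef(angle, 0.0, 0.0, 1.0)  # orient rendering to match rB
    glMatrixMode(GL_MODELVIEW)

def end_rotated_view():
    glMatrixMode(GL_PROJECTION)
    glPopMatrix()
    glMatrixMode(GL_MODELVIEW)
    glDisable(GL_SCISSOR_TEST)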
| Rotating a glViewport? | In a "multitouch" environment, any application shown on a surface can be rotated/scaled to face the user. The current solution is to draw the application into an FBO and then draw a rotated/scaled rectangle with that texture on it. I don't think that's good for performance, and not all graphics cards provide FBOs.
The idea is to clip the rendering viewport in the direction of the user.
Since glViewport cannot be used for that, does another way exist to achieve this?
(glViewport takes (x, y, width, height), and I would like (x, y, width, height, rotation from center?))
PS: rotating the modelview or projection matrix will not help; I would like to "rotate the clipping plane" generated by glViewport (only part of the whole scene).
| [
"If you already have the code set up to render your scene, try adding a glRotate() call to the viewmodel matrix setup, to \"rotate the camera\" before rendering the scene.\n",
"There's no way to have a rotated viewport in OpenGL, you have to handle it manually. I see the following possible solutions :\n\nKeep on using textures, perhaps using glCopyTexSubImage instead of FBOs, as this is basic OpenGL feature. If your target platforms are hardware accelerated, performance should be ok, depending on the number of viewports you need on your desk, as this is a very common use case nowadays.\nWithout textures, you could setup your glViewport to the screen-aligned bounding rectangle (rA) of your rotated viewport (rB) (setting also proper scissor testing area). Then draw a masking area, possibly only in depth or stencil buffer, filling the (rA - rB) area, that will prevent further drawing on those pixels. Then draw normally your application, using a glRotate to adjust you projection matrix, so that the rendering is properly oriented according to rB.\n\n"
] | [
2,
2
] | [] | [] | [
"math",
"opengl",
"python"
] | stackoverflow_0000577639_math_opengl_python.txt |
Q:
How do I find images with a similar color using Python and PIL?
I have a lot of images in a folder, and I would like to find images with a similar color to a pre chosen image.
I would like to be able to do something like:
python find_similar.py sample.jpg
and have that return something like:
234324.jpg
55.jpg
9945.jpg
345434.jpg
104.jpg
Is this doable?
A:
I cannot give you a canned solution, but here's an angle to tackle the problem. It's not PIL-specific, and it might be entirely bogus, since I have no experience in image processing.
Perform color quantization on the image. That gives you a palette that encodes the color information in the image without any shape information.
Run a principal components analysis to get the dominant components in the color cube. Strictly, you could run this without quantization first, but it might be too expensive.
Do a least-squares fitting on the principal components of different images.
Hope this helps.
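A hedged sketch of just the first step, leaning on PIL's adaptive-palette quantization instead of hand-rolled clustering (the resize is only there to keep it cheap):
import Image  # PIL

def dominant_colors(path, ncolors=8):
    im = Image.open(path).convert('RGB').resize((64, 64))
    # quantize to an adaptive palette of ncolors entries
    pal = im.convert('P', palette=Image.ADAPTIVE, colors=ncolors).getpalette()
    return [tuple(pal[i:i + 3]) for i in range(0, ncolors * 3, 3)]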
A:
The algorithm for finding similar images is discussed in a Question on Stackoverflow, you might want to implement one of those in Python & PIL.
Also, you can straightaway use the ImageChops module from PIL and use the difference method to compare two images like this:
import Image
import ImageChops
im1 = Image.open("original.jpg")
im2 = Image.open("sample.jpg")
diff = ImageChops.difference(im2, im1)
That might help you in getting some idea about the difference in your original image and the others.
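To boil that diff down to a single similarity score, ImageStat works on it directly (note this only makes sense for images of the same size):
import ImageStat

stat = ImageStat.Stat(diff)
print stat.rms  # per-band root-mean-square difference; smaller = more alike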
There is another similar question on Stackoverflow which discusses this.
| How do I find images with a similar color using Python and PIL? | I have a lot of images in a folder, and I would like to find images with a similar color to a pre chosen image.
I would like to be able to do something like:
python find_similar.py sample.jpg
and have that return something like:
234324.jpg
55.jpg
9945.jpg
345434.jpg
104.jpg
Is this doable?
| [
"I cannot give you a canned solution, but here's an angle to tackle the problem. It's not PIL-specific, and it might be entirely bogus, since I have no experience in image processing.\n\nPerform color quantization on the image. That gives you a palette that encodes the color information in the image without any shape information.\nRun a principal components analysis to get the dominant components in the color cube. Strictly, you could run this without quantization first, but it might be too expensive.\nDo a least-squares fitting on the principal components of different images.\n\nHope this helps.\n",
"The algorithm for finding similar images is discussed in a Question on Stackoverflow, you might want to implement one of those in Python & PIL.\nAlso, you can straightaway use the ImageChops module from PIL and use the difference method to compare two images like this:\nimport Image\nimport ImageChops\n\nim1 = Image.open(\"original.jpg\")\nim2 = Image.open(\"sample.jpg\")\n\ndiff = ImageChops.difference(im2, im1)\n\nThat might help you in getting some idea about the difference in your original image and the others.\nThere is another similar question on Stackoverflow which discusses this.\n"
] | [
4,
1
] | [] | [] | [
"image_processing",
"python",
"python_imaging_library"
] | stackoverflow_0000593925_image_processing_python_python_imaging_library.txt |
Q:
Choosing between different switch-case replacements in Python - dictionary or if-elif-else?
I recently read the questions that recommend against using switch-case statements in languages that do support it. As far as Python goes, I've seen a number of switch case replacements, such as:
Using a dictionary (Many variants)
Using a Tuple
Using a function decorator (http://code.activestate.com/recipes/440499/)
Using Polymorphism (Recommended method instead of type checking objects)
Using an if-elif-else ladder
Someone even recommended the Visitor pattern (Possibly Extrinsic)
Given the wide variety of options, I am having a bit of difficulty deciding what to do for a particular piece of code. I would like to learn the criteria for selecting one of these methods over the other in general. In addition, I would appreciate advice on what to do in the specific cases where I am having trouble deciding (with an explanation of the choice).
Here is the specific problem:
(1)
def _setCurrentCurve(self, curve):
if curve == "sine":
self.currentCurve = SineCurve(startAngle = 0, endAngle = 14,
lineColor = (0.0, 0.0, 0.0), expansionFactor = 1,
centerPos = (0.0, 0.0))
elif curve == "quadratic":
self.currentCurve = QuadraticCurve(lineColor = (0.0, 0.0, 0.0))
This method is called by a qt-slot in response to choosing to draw a curve from a menu. The above method will contain a total of 4-7 curves once the application is complete. Is it justified to use a throwaway dictionary in this case? Since the most obvious way to do this is if-elif-else, should I stick with that? I have also considered using **kargs here (with a friend's help) since all the curve classes use **kargs...
(2)
This second piece of code is a qt-slot that is called when the user changes a property of a curve. Basically the slot takes the data from the gui (spinBox) and puts it in an instance variable of the appropriate curve class. In this case, I again have the same question - should I use a dict?
Here is the aforementioned slot-
def propertyChanged(self, name, value):
"""A Qt slot, to react to changes of SineCurve's properties."""
if name == "amplitude":
self.amplitude = value
elif name == "expansionFactor":
self.expansionFactor = value
elif name == "startAngle":
self.startAngle = value
elif name == "endAngle":
self.endAngle = value
For reference, here is the code for connecting to the above slot -
def _connectToPage(self, page):
for connectionData in page.getConnectibles():
self.connect(connectionData["object"],
SIGNAL(connectionData["signal"]),
lambda value, name = connectionData["property"]:\
self.currentCurve.propertyChanged(name, value))
self.connect(connectionData["object"],
SIGNAL(connectionData["signal"]),
self.hackedDisplayArea.update)
Note - The self.endAngle etc. are initialized in the constructor.
As far as I know, the reason for choosing a dict is fast lookup. When is that warranted? When I have 100 cases or more? Is it a good idea to keep building and throwing away a dictionary each time the function is called? If I build a dict for this purpose outside a function, should I check if it is needed elsewhere? What happens if it is not needed elsewhere?
My question is what is the best-practice if there is one? What is the best/most elegant way to go about things? Put in yet another way, when to use if-elif-else, when to use each of the other options?
A:
Sigh. Too much hand-wringing over the wrong part of the problem. The switch statement is not the issue. There are many ways of expressing "alternative" that don't add meaning.
The issue is meaning -- not technical statement choices.
There are three common patterns.
Mapping a key to an object. Use a dictionary if it is almost totally static and you have a mapping between a simple key and another more complex thing. Building a dictionary on the fly each time you need it is silly. You can use this if it's what you mean: your "conditions" are simple, static key values that map to objects.
Variant behavior among subclasses. Use Polymorphism instead of type checking objects. Correct. If you have similar objects in multiple classes with variant behavior, they should be polymorphic. Use this as often as possible.
Other variant behavior. Use an if-elif-else ladder. Use this when you don't have largely static key-to-value mapping. Use this when the conditions are complex, or you mean procedures, not objects.
Everything else is just tricky code that can achieve similar results.
Using a Tuple. This is just a dictionary without the mapping. This requires search, and search should be avoided whenever possible. Don't do this, it's inefficient. Use a dictionary.
Using a function decorator (http://code.activestate.com/recipes/440499/). Icky. This conceals the if-elif-elif nature of the problem you're solving. Don't do this, it isn't obvious that the choices are exclusive. Use anything else.
Someone even recommended the Visitor pattern. Use this when you have an object which follows the Composite design pattern. This depends on polymorphism to work, so it's not really a different solution.
A:
In the first example I would certainly stick with the if-else statement. In fact I don't see a reason not to use if-else unless
You find (using e.g. the profile module) that the if statement is a bottleneck (very unlikely IMO unless you have a huge number of cases that do very little)
The code using a dictionary is clearer / has less repetition.
Your second example I would actually rewrite
setattr(self, name, value)
(probably adding an assert statement to catch invalid names).
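Concretely, for the propertyChanged slot from the question, that might look like this (the valid-name tuple is copied from the question's branches):
VALID_PROPERTIES = ('amplitude', 'expansionFactor', 'startAngle', 'endAngle')

def propertyChanged(self, name, value):
    """A Qt slot, to react to changes of SineCurve's properties."""
    assert name in VALID_PROPERTIES, "unknown property %r" % name
    setattr(self, name, value)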
A:
Considering that this is done in response to a user action (picking something from a menu), and the number of choices you anticipate is very small, I'd definitely go with a simple if-elif-else ladder.
There's no point in optimizing for speed, since it only happens as fast as the user can make the selection anyway; this is not "inner loop of a raytracer" territory. Sure, it matters to give the user quick feedback, but since the number of cases is so small, there is no danger of that either.
There's no point in optimizing for conciseness, since the (imo clearer, zero-readability-overhead) if-ladder will be so very short anyway.
A:
Regarding your dictionary questions:
As far as I know, the reason for choosing a dict is fast lookup. When is that warranted? When I have 100 cases or more? Is it a good idea to keep building and throwing away a dictionary each time the function is called? If I build a dict for this purpose outside a function, should I check if it is needed elsewhere? What happens if it is not needed elsewhere?
Another issue is maintainability. Having the string->curveFunction dictionary allows you to data-drive the menu. Then adding another option is just a matter of putting another string->function entry in the dictionary (which lives in a part of the code dedicated to configuration.
Even if you have only a few entries, it "separates concerns"; _setCurrentCurve is responsible for wiring up at run time, not defining the box of components.
Build the dictionary and hold onto it.
Even if it's not used elsewhere, you get the above benefits (locatability, maintainability).
My rule of thumb is to ask "What's going on here?" for each component of my code. If the answer is of the form
... and ... and ...
(as in "defining the library of functions and associating each with a value in the menu") then there are some concerns begging to be separated.
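For instance, using the classes from the question (a sketch; the constructor arguments are copied verbatim from _setCurrentCurve):
PRESET_CURVES = {
    "sine": lambda: SineCurve(startAngle=0, endAngle=14,
                              lineColor=(0.0, 0.0, 0.0),
                              expansionFactor=1, centerPos=(0.0, 0.0)),
    "quadratic": lambda: QuadraticCurve(lineColor=(0.0, 0.0, 0.0)),
}

def _setCurrentCurve(self, curve):
    self.currentCurve = PRESET_CURVES[curve]()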
A:
Each of the exposed options fit well some scenarios:
if-elif-else: simplicity, clarity
dictionary: useful when you configure it dynamically (imagine that you need a particular functionality to be executed on a branch)
tuple: simplicity over if-else case for multiple choices per branch.
polymorphism: automatic object oriented branching
etc.
Python is about readability and consistency and even if your decision will always be a subjective and it will depend on your style, you should always think about Python mantras.
./alex
A:
I agree with df regarding the second example. The first example I would probably try to rewrite using a dictionary, particularly if all the curve constructors have the same type signature (perhaps using *args and/or **kwargs). Something like
def _setCurrentCurve(self, new_curve):
self.currentCurve = self.preset_curves[new_curve](options_here)
or perhaps even
def _setCurrentCurve(self, new_curve):
self.currentCurve = self.preset_curves[new_curve](**preset_curve_defaults[new_curve])
A:
In Python, don't even think about how to replace a switch statement.
Use classes and polymorphism instead. Try to keep the information about each available choice and how to implement it in one place (i.e. the class that implements it).
Otherwise you will end up having lots of places that each contain a tiny fraction of each choice, and updating/extending will be a maintenance nightmare.
This is exactly the kind of problem that OOD tries to solve by abstraction, information hiding, polymorphism and the lot.
Think about what classes of objects you have and their properties, then create an OO architecture around them. This way you will never ever have to worry about a missing "switch" statement again.
| Choosing between different switch-case replacements in Python - dictionary or if-elif-else? | I recently read the questions that recommend against using switch-case statements in languages that do support it. As far as Python goes, I've seen a number of switch case replacements, such as:
Using a dictionary (Many variants)
Using a Tuple
Using a function decorator (http://code.activestate.com/recipes/440499/)
Using Polymorphism (Recommended method instead of type checking objects)
Using an if-elif-else ladder
Someone even recommended the Visitor pattern (Possibly Extrinsic)
Given the wide variety of options, I am having a bit of difficulty deciding what to do for a particular piece of code. I would like to learn the criteria for selecting one of these methods over the other in general. In addition, I would appreciate advice on what to do in the specific cases where I am having trouble deciding (with an explanation of the choice).
Here is the specific problem:
(1)
def _setCurrentCurve(self, curve):
if curve == "sine":
self.currentCurve = SineCurve(startAngle = 0, endAngle = 14,
lineColor = (0.0, 0.0, 0.0), expansionFactor = 1,
centerPos = (0.0, 0.0))
elif curve == "quadratic":
self.currentCurve = QuadraticCurve(lineColor = (0.0, 0.0, 0.0))
This method is called by a qt-slot in response to choosing to draw a curve from a menu. The above method will contain a total of 4-7 curves once the application is complete. Is it justified to use a throwaway dictionary in this case? Since the most obvious way to do this is if-elif-else, should I stick with that? I have also considered using **kargs here (with a friend's help) since all the curve classes use **kargs...
(2)
This second piece of code is a qt-slot that is called when the user changes a property of a curve. Basically the slot takes the data from the gui (spinBox) and puts it in an instance variable of the appropriate curve class. In this case, I again have the same question - should I use a dict?
Here is the aforementioned slot-
def propertyChanged(self, name, value):
"""A Qt slot, to react to changes of SineCurve's properties."""
if name == "amplitude":
self.amplitude = value
elif name == "expansionFactor":
self.expansionFactor = value
elif name == "startAngle":
self.startAngle = value
elif name == "endAngle":
self.endAngle = value
For reference, here is the code for connecting to the above slot -
def _connectToPage(self, page):
for connectionData in page.getConnectibles():
self.connect(connectionData["object"],
SIGNAL(connectionData["signal"]),
lambda value, name = connectionData["property"]:\
self.currentCurve.propertyChanged(name, value))
self.connect(connectionData["object"],
SIGNAL(connectionData["signal"]),
self.hackedDisplayArea.update)
Note - The self.endAngle etc. are initialized in the constructor.
As far as I know, the reason for choosing a dict is fast lookup. When is that warranted? When I have 100 cases or more? Is it a good idea to keep building and throwing away a dictionary each time the function is called? If I build a dict for this purpose outside a function, should I check if it is needed elsewhere? What happens if it is not needed elsewhere?
My question is what is the best-practice if there is one? What is the best/most elegant way to go about things? Put in yet another way, when to use if-elif-else, when to use each of the other options?
| [
"Sigh. Too much hand-wringing over the wrong part of the problem. The switch statement is not the issue. There are many ways of expressing \"alternative\" that don't add meaning.\nThe issue is meaning -- not technical statement choices. \nThere are three common patterns.\n\nMapping a key to an object. Use a dictionary if it is almost totally static and you have a mapping between a simple key and another more complex thing. Building a dictionary on the fly each time you need it is silly. You can use this if it's what you mean: your \"conditions\" are simple, static key values that map to objects.\nVariant behavior among subclasses. Use Polymorphism instead of type checking objects. Correct. If you have similar objects in multiple classes with variant behavior, they should be polymorphic. Use this as often as possible.\nOther variant behavior. Use an if-elif-else ladder. Use this when you don't have largely static key-to-value mapping. Use this when the conditions are complex, or you mean procedures, not objects.\n\nEverything else is just tricky code that can achieve similar results.\nUsing a Tuple. This is just dictionary without the mapping. This requires search, and search should be avoided whenever possible. Don't do this, it's inefficient. Use a dictionary.\nUsing a function decorator (http://code.activestate.com/recipes/440499/). Icky. This conceals the if-elif-elif nature of the problem you're solving. Don't do this, it isn't obvious that the choices are exclusive. Use anything else.\nSomeone even recommended the Visitor pattern. Use this when you have an object which follows the Composite design pattern. This depends on polymorphism to work, so it's not really a different solution.\n",
"In the first example I would certainly stick with the if-else statement. In fact I don't see a reason not to use if-else unless\n\nYou find (using e.g. the profile module) that the if statement is a bottleneck (very unlikely IMO unless you have a huge number of cases that do very little)\nThe code using a dictionary is clearer / has less repetition. \n\nYour second example I would actually rewrite\nsetattr(self, name, value)\n\n(probably adding an assert statement to catch invalid names).\n",
"Considering that this is done in response to a user action (pickings something from a menu), and the number of choices you anticipate is very small, I'd definitely go with a simple if-elif-else ladder.\nThere's no point in optinizing for speed, since it only happens as fast as the user can make the selection anyway, this is not \"inner loop of a raytracer\"-territory. Sure, it matters to give the user quick feedback, but since the number of cases is so small, there is no danger of that either.\nThere's no point in optimizing for conciseness, since the (imo clearer, zero-readability-overhead) if-ladder will be so very short anyway.\n",
"Regarding your dictionary questions:\n\nAs far as I know, the reasons for choosing a dict is for fast lookup. When is that warranted? when I have 100 cases or more? Is it a good idea to keep building and throwing away a dictionary each time the function is called? If I build a dict for this purpose outside a function, should I check If it is needed elswhere? What happens if it is not needed elsewhere?\n\n\nAnother issue is maintainability. Having the string->curveFunction dictionary allows you to data-drive the menu. Then adding another option is just a matter of putting another string->function entry in the dictionary (which lives in a part of the code dedicated to configuration.\nEven if you have only a few entries, it \"separates concerns\"; _setCurrentCurve is responsible for wiring up at run time, not defining the box of components.\nBuild the dictionary and hold onto it.\nEven if it's not used elsewhere, you get the above benefits (locatability, maintainability).\n\nMy rule of thumb is to ask \"What's going on here?\" for each component of my code. If the answer is of the form\n\n... and ... and ...\n\n(as in \"defining the library of functions and associating each with a value in the menu\") then there are some concerns begging to be separated.\n",
"Each of the exposed options fit well some scenarios:\n\nif-elif-else: simplicity, clarity\ndictionary: useful when you configure it dynamically (imagine that you need a particular functionality to be executed on a branch)\ntuple: simplicity over if-else case for multiple choices per branch.\npolymorphism: automatic object oriented branching\netc.\n\nPython is about readability and consistency and even if your decision will always be a subjective and it will depend on your style, you should always think about Python mantras.\n./alex\n",
"I agree with df regarding the second example. The first example I would probably try to rewrite using a dictionary, particularly if all the curve constructors have the same type signature (perhaps using *args and/or **kwargs). Something like\ndef _setCurrentCurve(self, new_curve):\n self.currentCurve = self.preset_curves[new_curve](options_here)\n\nor perhaps even\ndef _setCurrentCurve(self, new_curve):\n self.currentCurve = self.preset_curves[new_curve](**preset_curve_defaults[new_curve])\n\n",
"In Python, don't event think about how to replace a switch statement.\nUse classes and polymorphism instead. Try to keep the information about each availble choice and how to implement it in one place (i.e. the class that implements it).\nOtherwise you will end up having lots of places that each contain a tiny fraction of each choice, and updating/extending will be a maintenance nightmare. \nThis is exactly the kind of problem that OOD tries to solve by abstraction, information hiding, polymorphism and the lot.\nThink about what classes of objects you have and their properties, then create an OO architecture around them. This way you will never ever have to worry about a missing \"switch\" statement again.\n"
] | [
24,
8,
2,
2,
1,
1,
1
] | [] | [] | [
"python",
"switch_statement"
] | stackoverflow_0000594442_python_switch_statement.txt |
Q:
Make a python property with the same name as the class member name
Is it possible in python to create a property with the same name as the member variable name of the class? e.g.
class X:
...
self.i = 10 # marker
...
property(fget = get_i, fset = set_i)
Please tell me how I can do so, because if I do, the statement at the marker gives me a stack overflow on the assignment.
A:
Is it possible in python to create a property with the same name as the member variable name
No. properties, members and methods all share the same namespace.
the statement at marker I get stack overflow
Clearly. You try to set i, which calls the setter for property i, which tries to set i, which calls the setter for property i... ad stackoverflowum.
The usual pattern is to make the backend value member conventionally non-public, by prefixing it with ‘_’:
class X(object):
def get_i(self):
return self._i
def set_i(self, value):
self._i= value
i= property(get_i, set_i)
Note you must use new-style objects (subclass ‘object’) for ‘property’ to work properly.
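A quick check of how that behaves (a sketch, assuming the class above):
x = X()
x.i = 10    # goes through set_i and lands in x._i
print x.i   # goes through get_i and prints 10
print x._i  # the conventional backing attribute, prints 10 as well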
 | Make a python property with the same name as the class member name | Is it possible in python to create a property with the same name as the member variable name of the class? e.g.
class X:
...
self.i = 10 # marker
...
property(fget = get_i, fset = set_i)
Please tell me how I can do so, because if I do, the statement at the marker gives me a stack overflow on the assignment.
| [
"\nIs it possible in python to create a property with the same name as the member variable name\n\nNo. properties, members and methods all share the same namespace.\n\nthe statement at marker I get stack overflow\n\nClearly. You try to set i, which calls the setter for property i, which tries to set i, which calls the setter for property i... ad stackoverflowum.\nThe usual pattern is to make the backend value member conventionally non-public, by prefixing it with ‘_’:\nclass X(object):\n def get_i(self):\n return self._i\n def set_i(self, value):\n self._i= value\n i= property(get_i, set_i)\n\nNote you must use new-style objects (subclass ‘object’) for ‘property’ to work properly.\n"
] | [
23
] | [] | [] | [
"python"
] | stackoverflow_0000594856_python.txt |
Q:
admin template for manytomany
I have a manytomany relationship between publication and pathology. Each publication can have many pathologies. When a publication appears in the admin template, I need to be able to see the many pathologies associated with that publication. Here is the model statement:
class Pathology(models.Model):
pathology = models.CharField(max_length=100)
def __unicode__(self):
return self.pathology
class Meta:
ordering = ["pathology"]
class Publication(models.Model):
pubtitle = models.TextField()
pathology = models.ManyToManyField(Pathology)
def __unicode__(self):
return self.pubtitle
class Meta:
ordering = ["pubtitle"]
Here is the admin.py. I have tried variations of the following, but always
get an error saying either publication or pathology doesn't have a foreign key
associated.
from myprograms.cpssite.models import Pathology
class PathologyAdmin(admin.ModelAdmin):
# ...
list_display = ('pathology', 'id')
admin.site.register(Pathology, PathologyAdmin)
class PathologyInline(admin.TabularInline):
#...
model = Pathology
extra = 3
class PublicationAdmin(admin.ModelAdmin):
# ...
ordering = ('pubtitle', 'year')
inlines = [PathologyInline]
admin.site.register(Publication,PublicationAdmin)
Thanks for any help.
A:
Unless you are using an intermediate table as documented here http://docs.djangoproject.com/en/dev/ref/contrib/admin/#working-with-many-to-many-intermediary-models, I don't think you need to create an Inline class. Try removing the line inlines = [PathologyInline] and see what happens.
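For a plain ManyToManyField like this one, the admin already renders a multiple-select widget on the Publication page, so a minimal registration along these lines should show the pathologies (a sketch; filter_horizontal is optional and just makes the widget friendlier):
class PublicationAdmin(admin.ModelAdmin):
    ordering = ('pubtitle',)
    filter_horizontal = ('pathology',)

admin.site.register(Publication, PublicationAdmin)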
A:
I realize now that Django is great for the administration (data entry) of a website, simple searching and template inheritance, but Django and Python are not very good for complex web applications, where data is moved back and forth between a database and an html template. I have decided to combine Django and PHP, hopefully applying the strengths of both. Thanks for your help!
A:
That looks more like a one-to-many relationship to me, tho I'm somewhat unclear on what exactly Pathologies are. Also, so far as I understand, Inlines don't work on manytomany. That should work if you flip the order of the models, remove the manytomany and add a ForeignKey field to Publication in Pathology.
class Publication(models.Model):
pubtitle = models.TextField()
def __unicode__(self):
return self.pubtitle
class Meta:
ordering = ["pubtitle"]
class Pathology(models.Model):
pathology = models.CharField(max_length=100)
publication = models.ForeignKey(Publication)
def __unicode__(self):
return self.pathology
class Meta:
ordering = ["pathology"]
| admin template for manytomany | I have a manytomany relationship between publication and pathology. Each publication can have many pathologies. When a publication appears in the admin template, I need to be able to see the many pathologies associated with that publication. Here is the model statement:
class Pathology(models.Model):
pathology = models.CharField(max_length=100)
def __unicode__(self):
return self.pathology
class Meta:
ordering = ["pathology"]
class Publication(models.Model):
pubtitle = models.TextField()
pathology = models.ManyToManyField(Pathology)
def __unicode__(self):
return self.pubtitle
class Meta:
ordering = ["pubtitle"]
Here is the admin.py. I have tried variations of the following, but always
get an error saying either publication or pathology doesn't have a foreign key
associated.
from myprograms.cpssite.models import Pathology
class PathologyAdmin(admin.ModelAdmin):
# ...
list_display = ('pathology', 'id')
admin.site.register(Pathology, PathologyAdmin)
class PathologyInline(admin.TabularInline):
#...
model = Pathology
extra = 3
class PublicationAdmin(admin.ModelAdmin):
# ...
ordering = ('pubtitle', 'year')
inlines = [PathologyInline]
admin.site.register(Publication,PublicationAdmin)
Thanks for any help.
| [
"Unless you are using a intermediate table as documented here http://docs.djangoproject.com/en/dev/ref/contrib/admin/#working-with-many-to-many-intermediary-models, I don't think you need to create an Inline class. Try removing the line includes=[PathologyInline] and see what happens.\n",
"I realize now that Django is great for the administration (data entry) of a website, simple searching and template inheritance, but Django and Python are not very good for complex web applications, where data is moved back and forth between a database and an html template. I have decided to combine Django and PHP, hopefully, applying the strengths of both. Thanks for you help!\n",
"That looks more like a one-to-many relationship to me, tho I'm somewhat unclear on what exactly Pathologies are. Also, so far as I understand, Inlines don't work on manytomany. That should work if you flip the order of the models, remove the manytomany and add a ForeignKey field to Publication in Pathology.\nclass Publication(models.Model):\n pubtitle = models.TextField()\n def __unicode__(self):\n return self.pubtitle\n class Meta:\n ordering = [\"pubtitle\"]\n\nclass Pathology(models.Model):\n pathology = models.CharField(max_length=100)\n publication = models.ForeignKey(Publication)\n def __unicode__(self):\n return self.pathology\n class Meta:\n ordering = [\"pathology\"]\n\n"
] | [
1,
0,
0
] | [] | [] | [
"django",
"django_admin",
"many_to_many",
"python"
] | stackoverflow_0000570138_django_django_admin_many_to_many_python.txt |
Q:
Django - designing models with virtual fields?
I'd like to ask about the most elegant approach when it comes to designing models with virtual fields such as below in Django...
Let's say we're building an online store and all the products in the system are defined by the model "Product".
class Product(models.Model):
# common fields that all products share
name = ...
brand = ...
price = ...
But the store will have lots of product types completely unrelated to each other, so I need some way to store those virtual fields of different product types (i.e. capacity of an MP3 player, pagecount of a book, ...).
The solutions I could come up with using my raw Django skills are far from perfect so far:
Having a "custom_fields" property
and intermediate tables that I
manage manually. (screaming ugly in
my face :))
Or inheriting classes from
"Product" on the fly with Python's
dangerous exec-eval statements (that is too
much voodoo magic for maintenance
and also implementation would
require knowledge of Django internals).
What's your take on this?
TIA.
A:
Products have Features.
class Feature( models.Model ):
feature_name = models.CharField( max_length=128 )
feature_value = models.TextField()
part_of = models.ForeignKey( Product )
Like that.
Just a list of features.
p= Product( "iPhone", "Apple", 350 )
p.save()
f= Feature( "mp3 capacity", "16Gb", p )
f.save()
If you want, you can have a master list of feature names in a separate table. Don't over-analyze features. You can't do any processing on them. All you do is present them.
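Reading the features back is then just the reverse foreign-key lookup Django generates for you — for example (a sketch using the models above):
for f in p.feature_set.all():
    print f.feature_name, "=", f.feature_value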
A:
Ruby on Rails has a "serialized" field which allows you to pack a dictionary into a text field. Perhaps DJango offers something similar?
This article has an implementation of a SerializedDataField.
A:
Personally, I'd go with S. Lott's answer. However, you might want to create a custom JSON Field:
http://svn.navi.cx/misc/trunk/djblets/djblets/util/fields.py
http://www.djangosnippets.org/snippets/377/
A:
Go with the inheritance. Create Product subclasses with their own, additional fields.
| Django - designing models with virtual fields? | I'd like to ask about the most elegant approach when it comes to designing models with virtual fields such as below in Django...
Let's say we're building an online store and all the products in the system are defined by the model "Product".
class Product(models.Model):
# common fields that all products share
name = ...
brand = ...
price = ...
But the store will have lots of product types completely unrelated to each other, so I need some way to store those virtual fields of different product types (i.e. capacity of an MP3 player, pagecount of a book, ...).
The solutions I could come up with using my raw Django skills are far from perfect so far:
Having a "custom_fields" property
and intermediate tables that I
manage manually. (screaming ugly in
my face :))
Or inheriting classes from
"Product" on the fly with Python's
dangerous exec-eval statements (that is too
much voodoo magic for maintenance
and also implementation would
require knowledge of Django internals).
What's your take on this?
TIA.
| [
"Products have Features.\nclass Feature( models.Model ):\n feature_name = models.CharField( max_length=128 )\n feature_value = models.TextField()\n part_of = models.ForeignKey( Product )\n\nLike that.\nJust a list of features. \np= Product( \"iPhone\", \"Apple\", 350 )\np.save()\nf= Feature( \"mp3 capacity\", \"16Gb\", p )\nf.save()\n\nIf you want, you can have a master list of feature names in a separate table. Don't over-analyze features. You can't do any processing on them. All you do is present them. \n",
"Ruby on Rails has a \"serialized\" field which allows you to pack a dictionary into a text field. Perhaps DJango offers something similar?\nThis article has an implementation of a SerializedDataField.\n",
"Personally, I'd go with S. Lott's answer. However, you might want to create a custom JSON Field:\nhttp://svn.navi.cx/misc/trunk/djblets/djblets/util/fields.py\nhttp://www.djangosnippets.org/snippets/377/\n",
"Go with the inheritance. Create Produce subclasses with their own, additional fields. \n"
] | [
13,
3,
2,
0
] | [] | [] | [
"django",
"django_models",
"python"
] | stackoverflow_0000590921_django_django_models_python.txt |
Q:
Python: efficiently join chunks of bytes into one big chunk?
I'm trying to jury-rig the Amazon S3 python library to allow chunked handling of large files. Right now it does a "self.body = http_response.read()", so if you have a 3G file you're going to read the entire thing into memory before getting any control over it.
My current approach is to try to keep the interface for the library the same but provide a callback after reading each chunk of data. Something like the following:
data = []
while True:
chunk = http_response.read(CHUNKSIZE)
if not chunk:
break
if callback:
callback(chunk)
data.append(chunk)
Now I need to do something like:
self.body = ''.join(data)
Is join the right way to do this or is there another (better) way of putting all the chunks together?
A:
''.join() is the best method for joining chunks of data. The alternative boils down to repeated concatenation, which is O(n**2) due to the immutability of strings and the need to create more at every concatenation. Granted, this repeated concatenation is optimized by recent versions of CPython if used with += to become O(n), but that optimization only gives it a rough equivalent to ''.join() anyway, which is explicitly O(n) over the number of bytes.
A:
hm - what problem are you trying to solve? I suspect the answer depends on what you are trying to do with the data.
Since in general you don't want a whole 3Gb file in memory, I'd not store the chunks in an array, but iterate over the http_response and write it straight to disk, in a temporary or persistent file using the normal write() method on an appropriate file handle.
if you do want two copies of the data in memory, your method will require at least 6Gb for your hypothetical 3Gb file, which presumably is significant for most hardware. I know that array join methods are fast and all that, but since this is a really ram-constrained process maybe you want to find some way of doing it better? StringIO (http://docs.python.org/library/stringio.html) creates string objects that can be appended to in memory; the pure python one, since it has to work with immutable strings, just uses your array join trick internally, but the c-based cStringIO might actually append to a memory buffer internally. I don't have its source code to hand, so that would bear checking.
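For instance, the accumulation loop from the question could be rewritten over cStringIO like this (an untested sketch):
import cStringIO

buf = cStringIO.StringIO()
while True:
    chunk = http_response.read(CHUNKSIZE)
    if not chunk:
        break
    if callback:
        callback(chunk)
    buf.write(chunk)  # appended to one buffer, no intermediate list
self.body = buf.getvalue()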
if you do wish to do some kind of analysis on the data and really wish to keep it in memory with minimal overhead, you might want to consider some of the byte array objects from Numeric/NumPy as an alternative to StringIO. They are high-performance code optimised for large arrays and might be what you need.
as a useful example, for a general-purpose file-handling object which has a memory-efficient, iterator-friendly approach you might want to check out the django File object chunk handling code:
http://code.djangoproject.com/browser/django/trunk/django/core/files/base.py.
A:
In python3, bytes objects are distinct from str, but I don't know any reason why there would be anything wrong with this.
A:
join seems fine if you really do need to put the entire string together, but then you just wind up storing the whole thing in RAM anyway. In a situation like this, I would try to see if there's a way to process each part of the string and then discard the processed part, so you only need to hold a fixed number of bytes in memory at a time. That's usually the point of the callback approach. (If you can only process part of a chunk at a time, use a buffer as a queue to store the unprocessed data.)
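One shape that buffer-as-queue idea could take (a sketch; process_complete_records is a hypothetical function that consumes what it can and returns the unprocessed tail):
pending = ""
while True:
    chunk = http_response.read(CHUNKSIZE)
    if not chunk:
        break
    # keep only the bytes that could not be processed yet
    pending = process_complete_records(pending + chunk)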
| Python: efficiently join chunks of bytes into one big chunk? | I'm trying to jury-rig the Amazon S3 python library to allow chunked handling of large files. Right now it does a "self.body = http_response.read()", so if you have a 3G file you're going to read the entire thing into memory before getting any control over it.
My current approach is to try to keep the interface for the library the same but provide a callback after reading each chunk of data. Something like the following:
data = []
while True:
chunk = http_response.read(CHUNKSIZE)
if not chunk:
break
if callback:
callback(chunk)
data.append(chunk)
Now I need to do something like:
self.body = ''.join(data)
Is join the right way to do this or is there another (better) way of putting all the chunks together?
| [
"''join() is the best method for joining chunks of data. The alternative boils down to repeated concatenation, which is O(n**2) due to the immutability of strings and the need to create more at every concatenation. Given, this repeated concatenation is optimized by recent versions of CPython if used with += to become O(n), but that optimization only gives it a rough equivalent to ''.join() anyway, which is explicitly O(n) over the number of bytes.\n",
"hm - what problem are you trying to solve? I suspect the answer depends on what you are trying to do with the data.\nSince in general you don't want a whole 3Gb file in memory, I'd not store the chunks in an array, but iterate over the http_response and write it straight to disk, in a temporary or persistent file using the normal write() method on an appropriate file handle.\nif you do want two copies of the data in memory, your method will require be at least 6Gb for your hypothetical 3Gb file, which presumably is significant for most hardware. I know that array join methods are fast and all that, but since this is a really ram-constrained process maybe you want to find some way of doing it better? StringIO (http://docs.python.org/library/stringio.html) creates string objects that can be appended to in memory; the pure python one, since it has to work with immutable strings, just uses your array join trick internally, but the c-based cStringIO might actually append to a memory buffer internall. I don't have its source code to hand, so that would bear checking.\nif you do wish to do some kind of analysis on the data and really wish to keep in in memory with minimal overhead, you might want to consider some of the byte array objets from Numeric/NumPy as an alternative to StringIO. they are high-performance code optimised for large arrays and might be what you need.\nas a useful example, for a general-purpose file-handling object which has memory-efficient iterator-friendly approach you might want to check out the django File obeject chunk handling code: \nhttp://code.djangoproject.com/browser/django/trunk/django/core/files/base.py.\n",
"In python3, bytes objects are distinct from str, but I don't know any reason why there would be anything wrong with this.\n",
"join seems fine if you really do need to put the entire string together, but then you just wind up storing the whole thing in RAM anyway. In a situation like this, I would try to see if there's a way to process each part of the string and then discard the processed part, so you only need to hold a fixed number of bytes in memory at a time. That's usually the point of the callback approach. (If you can only process part of a chunk at a time, use a buffer as a queue to store the unprocessed data.)\n"
] | [
3,
2,
1,
0
] | [] | [] | [
"amazon_s3",
"python"
] | stackoverflow_0000597289_amazon_s3_python.txt |
Q:
AttributeError: 'str' object has no attribute 'readline'
Update: My current question is how I can get my code to read to the EOF, starting from the beginning, with each new search phrase.
This is an assignment I am doing and currently stuck on. Mind you this is a beginner's programming class using Python.
jargon = open("jargonFile.txt","r")
searchPhrase = raw_input("Enter the search phrase: ")
while searchPhrase != "":
result = jargon.readline().find(searchPhrase)
if result == -1:
print "Cannot find this term."
else:
print result
searchPhrase = raw_input("Enter the search phrase: ")
jargon.close()
The assignment is to take a user's searchPhrase and find it in a file (jargonFile.txt) and then have it print the result (which is the line it occurred on and the character occurrence). I will be using a counter to find the line number of the occurrence but I will implement this later. For now my question is the error I am getting. I can't find a way for it to search the entire file.
Sample run:
Enter the search phrase: dog
16
Enter the search phrase: hack
Cannot find this term.
Enter the search phrase:
"dog" is found in the first line however it is also found in other lines of the jargonFile (multiple times as a string) but it is only showing the first occurence in the first line. The string hack is found numerous times in the jargonFile but my code is setup to only search the first line. How may I go about solving this problem?
If this is not clear enough I can post up the assignment if need be.
A:
First you open the file and read it into a string with readline(). Later on you try to readline() from the string you obtained in the first step.
You need to take care what object (thing) you're handling: open() gave you a file "jargon", readline on jargon gave you the string "jargonFile".
So jargonFile.readline does not make sense anymore
Update as answer to comment:
Okay, now that the str error problem is solved think about the program structure:
big loop
enter a search term
open file
inner loop
read a line
print result if string found
close file
You'd need to change your program so it follows that description
Update II:
SD, if you want to avoid reopening the file you'd still need two loops, but this time one loop reads the file into memory, when that's done the second loop asks for the search term. So you would structure it like
create empty list
open file
read loop:
read a line from the file
    append the line to the list
close file
query loop:
ask the user for input
    for each line in the list:
print result if string found
For extra points from your professor add some comments to your solution that mention both possible solutions and say why you chose the one you did. Hint: In this case it is a classic tradeoff between execution time (memory is fast) and memory usage (what if your jargon file contains 100 million entries ... ok, you'd use something more complicated than a flat file in that case, but you can't load it in memory either.)
Oh and one more hint to the second solution: Python supports tuples ("a","b","c") and lists ["a","b","c"]. You want to use the latter one, because a list can be modified (a tuple can't.)
myList = ["Hello", "SD"]
myList.append("How are you?")
for line in myList:
print line
==>
Hello
SD
How are you?
Okay that last example contains all the new stuff (define list, append to list, loop over list) for the second solution of your program. Have fun putting it all together.
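For instance, one possible way the finished program could look — a sketch only, with the line/position reporting kept deliberately simple:
lines = []
jargon = open("jargonFile.txt", "r")
for line in jargon:
    lines.append(line)
jargon.close()

searchPhrase = raw_input("Enter the search phrase: ")
while searchPhrase != "":
    found = False
    for lineNumber, line in enumerate(lines):
        position = line.find(searchPhrase)
        if position != -1:
            print lineNumber + 1, position  # line number, character position
            found = True
    if not found:
        print "Cannot find this term."
    searchPhrase = raw_input("Enter the search phrase: ")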
A:
Your file is jargon, not jargonFile (a string). That's probably what's causing your error message. You'll also need a second loop to read each line of the file from the beginning until you find the word you're looking for. Your code currently stops searching if the word is not found in the current line of the file.
How about trying to write code that only gives the user one chance to enter a string? Input that string, search the file until you find it (or not) and output a result. After you get that working you can go back and add the code that allows multiple searches and ends on an empty string.
Update:
To avoid iterating the file multiple times, you could start your program by slurping the entire file into a list of strings, one line at a time. Look up the readlines method of file objects. You can then search that list for each user input instead of re-reading the file.
A:
Hmm, I don't know anything at all about Python, but it looks to me like you are not iterating through all the lines of the file for the search string entered.
Typically, you need to do something like this:
enter search string
open file
if file has data
start loop
get next line of file
search the line for your string and do something
Exit loop if line was end of file
So for your code:
jargon = open("jargonFile.txt","r")
searchPhrase = raw_input("Enter the search phrase: ")
while searchPhrase != "":
<<if file has data?>>
<<while>>
result = jargon.readline().find(searchPhrase)
if result == -1:
print "Cannot find this term."
else:
print result
<<result is not end of file>>
searchPhrase = raw_input("Enter the search phrase: ")
jargon.close()
Cool, did a little research on the page DNS provided and Python happens to have the "with" keyword. Example:
with open("hello.txt") as f:
for line in f:
print line
So another form of your code could be:
searchPhrase = raw_input("Enter the search phrase: ")
while searchPhrase != "":
with open("jargonFile.txt") as f:
for line in f:
result = line.find(searchPhrase)
if result == -1:
print "Cannot find this term."
else:
print result
searchPhrase = raw_input("Enter the search phrase: ")
Note that "with" automatically closes the file when you're done.
A:
You shouldn't try to re-invent the wheel; just use the
re module functions.
Your program could work better if you used:
result = jargon.read()
instead of:
result = jargon.readline()
Then you could use the re.findall() function
and join the strings (with their indexes) with str.join().
This could get a little messy, but if you take some time to work it out, it could fix your problem.
The Python documentation has this perfectly documented.
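A rough sketch of that idea, using re.finditer so you also get the character index of every occurrence (assumes the whole file fits in memory):
import re

text = open("jargonFile.txt").read()
for match in re.finditer(re.escape(searchPhrase), text):
    print match.start()  # character offset of each occurrence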
A:
Every time you enter a search phrase, it looks for it on the next line, not the first one. You need to re-open the file for every search phrase, if you want it to behave like you describe.
A:
Take a look at the documentation for File objects:
http://docs.python.org/library/stdtypes.html#file-objects
You might be interested in the readlines method. For a simple case where your file is not enormous, you could use that to read all the lines into a list. Then, whenever you get a new search string, you can run through the whole list to see whether it's there.
 | AttributeError: 'str' object has no attribute 'readline' | Update: My current question is how I can get my code to read to the EOF, starting from the beginning, with each new search phrase.
This is an assignment I am doing and currently stuck on. Mind you this is a beginner's programming class using Python.
jargon = open("jargonFile.txt","r")
searchPhrase = raw_input("Enter the search phrase: ")
while searchPhrase != "":
result = jargon.readline().find(searchPhrase)
if result == -1:
print "Cannot find this term."
else:
print result
searchPhrase = raw_input("Enter the search phrase: ")
jargon.close()
The assignment is to take a user's searchPhrase and find it in a file (jargonFile.txt) and then have it print the result (which is the line it occurred on and the character occurrence). I will be using a counter to find the line number of the occurrence but I will implement this later. For now my question is the error I am getting. I can't find a way for it to search the entire file.
Sample run:
Enter the search phrase: dog
16
Enter the search phrase: hack
Cannot find this term.
Enter the search phrase:
"dog" is found in the first line however it is also found in other lines of the jargonFile (multiple times as a string) but it is only showing the first occurence in the first line. The string hack is found numerous times in the jargonFile but my code is setup to only search the first line. How may I go about solving this problem?
If this is not clear enough I can post up the assignment if need be.
| [
"First you open the file and read it into a string with readline(). Later on you try to readline() from the string you obtained in the first step.\nYou need to take care what object (thing) you're handling: open() gave you a file \"jargon\", readline on jargon gave you the string \"jargonFile\".\nSo jargonFile.readline does not make sense anymore \nUpdate as answer to comment:\nOkay, now that the str error problem is solved think about the program structure:\nbig loop\n enter a search term\n open file\n inner loop\n read a line\n print result if string found\n close file\n\nYou'd need to change your program so it follows that descripiton\nUpdate II:\nSD, if you want to avoid reopening the file you'd still need two loops, but this time one loop reads the file into memory, when that's done the second loop asks for the search term. So you would structure it like\ncreate empty list\nopen file\nread loop:\n read a line from the file\n append the file to the list\nclose file\nquery loop:\n ask the user for input\n for each line in the array:\n print result if string found\n\nFor extra points from your professor add some comments to your solution that mention both possible solutions and say why you choose the one you did. Hint: In this case it is a classic tradeoff between execution time (memory is fast) and memory usage (what if your jargon file contains 100 million entries ... ok, you'd use something more complicated than a flat file in that case, bu you can't load it in memory either.)\nOh and one more hint to the second solution: Python supports tuples (\"a\",\"b\",\"c\") and lists [\"a\",\"b\",\"c\"]. You want to use the latter one, because list can be modified (a tuple can't.)\nmyList = [\"Hello\", \"SD\"]\nmyList.append(\"How are you?\")\nforeach line in myList:\n print line\n\n==>\nHello\nSD\nHow are you?\n\nOkay that last example contains all the new stuff (define list, append to list, loop over list) for the second solution of your program. Have fun putting it all together.\n",
"Your file is jargon, not jargonFile (a string). That's probably what's causing your error message. You'll also need a second loop to read each line of the file from the beginning until you find the word you're looking for. Your code currently stops searching if the word is not found in the current line of the file.\nHow about trying to write code that only gives the user one chance to enter a string? Input that string, search the file until you find it (or not) and output a result. After you get that working you can go back and add the code that allows multiple searches and ends on an empty string.\nUpdate:\nTo avoid iterating the file multiple times, you could start your program by slurping the entire file into a list of strings, one line at a time. Look up the readlines method of file objects. You can then search that list for each user input instead of re-reading the file.\n",
"Hmm, I don't know anything at all about Python, but it looks to me like you are not iterating through all the lines of the file for the search string entered.\nTypically, you need to do something like this:\nenter search string\nopen file\nif file has data\n start loop\n get next line of file\n search the line for your string and do something\n\n Exit loop if line was end of file\n\nSo for your code:\njargon = open(\"jargonFile.txt\",\"r\")\nsearchPhrase = raw_input(\"Enter the search phrase: \")\nwhile searchPhrase != \"\":\n <<if file has data?>>\n <<while>>\n result = jargon.readline().find(searchPhrase)\n if result == -1:\n print \"Cannot find this term.\"\n else:\n print result\n <<result is not end of file>>\n searchPhrase = raw_input(\"Enter the search phrase: \")\njargon.close()\n\nCool, did a little research on the page DNS provided and Python happens to have the \"with\" keyword. Example:\nwith open(\"hello.txt\") as f:\n for line in f:\n print line\n\nSo another form of your code could be:\nsearchPhrase = raw_input(\"Enter the search phrase: \")\nwhile searchPhrase != \"\":\n with open(\"jargonFile.txt\") as f:\n for line in f:\n result = line.find(searchPhrase)\n if result == -1:\n print \"Cannot find this term.\"\n else:\n print result\n searchPhrase = raw_input(\"Enter the search phrase: \")\n\nNote that \"with\" automatically closes the file when you're done.\n",
"you shouldn't try to re-invent the wheel. just use the\nre module functions.\nyour program could work better if you used:\nresult = jargon.read() .\ninstead of:\nresult = jargon.readline() .\nthen you could use the re.findall() function\nand join the strings (with the indexes) you searched for with str.join() \nthis could get a little messy but if take some time to work it out, this could fix your problem.\nthe python documentation has this perfectly documented\n",
"Everytime you enter a search phrase, it looks for it on the next line, not the first one. You need to re-open the file for every search phrase, if you want it behave like you describe. \n",
"Take a look at the documentation for File objects:\nhttp://docs.python.org/library/stdtypes.html#file-objects\nYou might be interested in the readlines method. For a simple case where your file is not enormous, you could use that to read all the lines into a list. Then, whenever you get a new search string, you can run through the whole list to see whether it's there.\n"
] | [
3,
2,
2,
2,
1,
1
] | [] | [] | [
"file",
"python",
"readline"
] | stackoverflow_0000596886_file_python_readline.txt |
Q:
wxPython toolbar help
I am new to Python. I am writing an application using wxPython, and currently my code that generates a toolbar looks like this:
class Window(wx.Frame):
def __init__(self, parent, plot):
wx.Frame.__init__(self, parent, wx.ID_ANY, "Name", size =(900, 600))
self.Centre()
self.toolbar = self.CreateToolBar(style=(wx.TB_HORZ_LAYOUT | wx.TB_TEXT))
self.toolbar.SetToolBitmapSize((32,32))
self.toolbar.AddLabelTool(3, '', wx.Bitmap('GUI/icons/fileopen.png'))
self.toolbar.AddLabelTool(3, '', wx.Bitmap('GUI/icons/filesave.png'))
self.toolbar.AddSeparator()
self.toolbar.Realize()
I am trying to clean up the code a bit and I want the toolbar to have its own class so when I want to create a toolbar, I simply call it something like this:
toolbar = Toolbar()
My question is how can I rewrite it so it works like that? Currently my code looks like this:
class Toolbar():
def __init__(self):
self.toolbar = self.CreateToolBar(style=(wx.TB_HORZ_LAYOUT | wx.TB_TEXT))
self.toolbar.SetToolBitmapSize((32,32))
self.toolbar.AddLabelTool(3, '', wx.Bitmap('GUI/icons/fileopen.png'))
self.toolbar.AddLabelTool(3, '', wx.Bitmap('GUI/icons/filesave.png'))
self.toolbar.AddSeparator()
self.toolbar.Realize()
I am not quite sure how 'self' works. Do I need to rewrite the init function? How do I fix it? Any help is greatly appreciated. Thanks
A:
Instead of a class that sets up your toolbar, use a function. The function can be a member function of your Window that subclasses wx.Frame. That way, the toolbar will get Created from the correct window, and be attached the way you would expect.
The class that you're writing above would work, if it knew which wx.Frame (your class called Window) to connect the toolbar to. To get it to work you would have to pass the frame object to the toolbar creator class...
class Toolbar():
def __init__(self, frame_to_connect_to):
frame_to_connect_to.toolbar = frame_to_connect_to.CreateToolBar(style=(wx.TB_HORZ_LAYOUT | wx.TB_TEXT))
frame_to_connect_to.toolbar.SetToolBitmapSize((32,32))
frame_to_connect_to.toolbar.AddLabelTool(3, '', wx.Bitmap('GUI/icons/fileopen.png'))
frame_to_connect_to.toolbar.AddLabelTool(3, '', wx.Bitmap('GUI/icons/filesave.png'))
frame_to_connect_to.toolbar.AddSeparator()
frame_to_connect_to.toolbar.Realize()
It looks like a quick fix... but really using a class to do this is not a good use of classes. (I'd even go so far as to say it was incorrect.)
Really, what would clean things up a bit would be just to move the toolbar stuff to its own member function:
class Window(wx.Frame):
def __init__(self, parent, plot):
wx.Frame.__init__(self, parent, wx.ID_ANY, "Name", size =(900, 600))
self.Centre()
self._init_toolbar()
def _init_toolbar(self):
self.toolbar = self.CreateToolBar(style=(wx.TB_HORZ_LAYOUT | wx.TB_TEXT))
self.toolbar.SetToolBitmapSize((32,32))
self.toolbar.AddLabelTool(3, '', wx.Bitmap('GUI/icons/fileopen.png'))
self.toolbar.AddLabelTool(3, '', wx.Bitmap('GUI/icons/filesave.png'))
self.toolbar.AddSeparator()
self.toolbar.Realize()
You get all the benefits.
A:
Yes. Break your wxpython code into objects like this. It is much easier to maintain if you are going to code your GUI by hand (I do).
You need to subclass wx.ToolBar (which itself is a subclass of wx.ToolBarBase, and most of wx.ToolBar's functions are derived from that namespace):
class MyToolBar(wx.ToolBar):
def __init__(self, parent, *args, **kwargs):
wx.ToolBar.__init__(self, parent, *args, **kwargs)
#note self here and not self.toolbar
self.SetToolBitmapSize((32,32))
#add other code here
Then in your __init__ for your wx.Frame call your toolbar:
class MyFrame(wx.Frame):
def __init__(self, parent, *args, **kwargs):
wx.Frame.__init__(self, parent, *args, **kwargs)
#note that below, self refers to the wx.Frame
#self(wx.Frame) = parent for the toolbar constructor
toolbar = MyToolBar(self)
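One caveat: unlike CreateToolBar(), a toolbar constructed by hand is not attached to the frame automatically, so the __init__ above would likely also need:
        self.SetToolBar(toolbar)
        toolbar.Realize()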
wxPython Style Guide
Another thing to note is that often the wxWidgets docs are much easier to navigate and to decipher.
 | wxPython toolbar help | I am new to Python. I am writing an application using wxPython, and currently my code that generates a toolbar looks like this:
class Window(wx.Frame):
def __init__(self, parent, plot):
wx.Frame.__init__(self, parent, wx.ID_ANY, "Name", size =(900, 600))
self.Centre()
self.toolbar = self.CreateToolBar(style=(wx.TB_HORZ_LAYOUT | wx.TB_TEXT))
self.toolbar.SetToolBitmapSize((32,32))
self.toolbar.AddLabelTool(3, '', wx.Bitmap('GUI/icons/fileopen.png'))
self.toolbar.AddLabelTool(3, '', wx.Bitmap('GUI/icons/filesave.png'))
self.toolbar.AddSeparator()
self.toolbar.Realize()
I am trying to clean up the code a bit and I want the toolbar to have its own class so when I want to create a toolbar, I simply call it something like this:
toolbar = Toolbar()
My question is how can I rewrite it so it works like that? Currently my code looks like this:
class Toolbar():
def __init__(self):
self.toolbar = self.CreateToolBar(style=(wx.TB_HORZ_LAYOUT | wx.TB_TEXT))
self.toolbar.SetToolBitmapSize((32,32))
self.toolbar.AddLabelTool(3, '', wx.Bitmap('GUI/icons/fileopen.png'))
self.toolbar.AddLabelTool(3, '', wx.Bitmap('GUI/icons/filesave.png'))
self.toolbar.AddSeparator()
self.toolbar.Realize()
I am not quite sure how 'self' works. Do I need to rewrite the init function? How do I fix it? Any help is greatly appreciated. Thanks
| [
"Instead of a class that sets up your toolbar, use a function. The function can be a member function of your Window that subclasses wx.Frame. That way, the toolbar will get Created from the correct window, and be attached the way you would expect.\nThe class that you're writing above would work, if it knew which wx.Frame (your class called Window) to connect the toolbar to. To get it to work you would have to pass the frame object to the toolbar creator class...\nclass Toolbar():\n def __init__(self, frame_to_connect_to):\n frame_to_connect_to.toolbar = frame_to_connect_to.CreateToolBar(style=(wx.TB_HORZ_LAYOUT | wx.TB_TEXT))\n frame_to_connect_to.toolbar.SetToolBitmapSize((32,32))\n frame_to_connect_to.toolbar.AddLabelTool(3, '', wx.Bitmap('GUI/icons/fileopen.png'))\n frame_to_connect_to.toolbar.AddLabelTool(3, '', wx.Bitmap('GUI/icons/filesave.png'))\n frame_to_connect_to.toolbar.AddSeparator()\n frame_to_connect_to.toolbar.Realize()\n\nIt looks like a quick fix... but really using a class to do this is not a good use of classes. (I'd even go so far as to say it was incorrect.)\nReally, what would clean things up a bit would be just to move the toolbar stuff to its own member function:\nclass Window(wx.Frame)\n def __init__(self, parent, plot):\n wx.Frame.__init__(self, parent, wx.ID_ANY, \"Name\", size =(900, 600))\n self.Centre()\n self._init_toolbar()\n\n def _init_toolbar(self):\n self.toolbar = self.CreateToolBar(style=(wx.TB_HORZ_LAYOUT | wx.TB_TEXT))\n self.toolbar.SetToolBitmapSize((32,32))\n self.toolbar.AddLabelTool(3, '', wx.Bitmap('GUI/icons/fileopen.png'))\n self.toolbar.AddLabelTool(3, '', wx.Bitmap('GUI/icons/filesave.png'))\n self.toolbar.AddSeparator()\n self.toolbar.Realize()\n\nYou get all the benefits.\n",
"Yes. Break your wxpython code into objects like this. It is much easier to maintain if you are going to code your GUI by hand (I do).\nYou need to subclass wx.ToolBar (which itself is a subclass of wx.ToolBarBase, and most of wx.ToolBar's functions are derived from that namespace):\nclass MyToolBar(wx.ToolBar):\n def __init__(self, parent, *args, **kwargs):\n wx.ToolBar.__init__(self, parent, *args, **kwargs)\n #note self here and not self.toolbar\n self.SetToolBitmapSize((32,32))\n #add other code here\n\nThen in your __init__ for your wx.Frame call your toolbar:\nclass MyFrame(wx.Frame):\n def __init__(self, parent, *args, **kwargs):\n wx.Frame.__init__(self, parent, *args, **kwargs)\n #note that below, self refers to the wx.Frame\n #self(wx.Frame) = parent for the toolbar constructor\n toolbar = MyToolBar(self)\n\nwxPython Style Guide\nAnother thing to note is that often the wxWidgets docs are much easier to navigate and to decipher.\n"
] | [
3,
1
] | [] | [] | [
"python",
"toolbars",
"user_interface",
"wxpython"
] | stackoverflow_0000596190_python_toolbars_user_interface_wxpython.txt |
Q:
Is it possible to make text translucent in wxPython?
I am adding some wx.StaticText objects on top of my main wx.Frame, which already has a background image applied. However, the StaticText always seems to draw with a solid (opaque) background color, hiding the image. I have tried creating a wx.Color object and changing the alpha value there, but that yields no results. Is there any way I can put text on the frame and have the background shine through? And furthermore, is it possible to make the text itself translucent? Thanks.
A:
You probably need some graphics rendering widget. As far as I know, in wxPython you can use either built-in wxGraphicsContext or pyCairo directly. Cairo is more powerful. However, I don't know the details.
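A rough wx.GraphicsContext sketch — drawing the text yourself in a paint handler instead of using wx.StaticText; the alpha component of wx.Colour is what makes it translucent (exact signatures may differ between wxPython versions, so treat this as a starting point):
def on_paint(self, event):
    dc = wx.PaintDC(self)
    gc = wx.GraphicsContext.Create(dc)
    # alpha 128 of 255 -- roughly half transparent
    gc.SetFont(self.GetFont(), wx.Colour(0, 0, 0, 128))
    gc.DrawText("Translucent text", 10, 10)

# bound in __init__ with: self.Bind(wx.EVT_PAINT, self.on_paint)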
A:
I would try aggdraw into a small canvas.
Any Static Text uses the platform's native label machinery, so you don't get that sort of control over it.
| Is it possible to make text translucent in wxPython? | I am adding some wx.StaticText objects on top of my main wx.Frame, which already has a background image applied. However, the StaticText always seems to draw with a solid (opaque) background color, hiding the image. I have tried creating a wx.Color object and changing the alpha value there, but that yields no results. Is there any way I can put text on the frame and have the background shine through? And furthermore, is it possible to make the text itself translucent? Thanks.
| [
"You probably need some graphics rendering widget. As far as I know, in wxPython you can use either built-in wxGraphicsContext or pyCairo directly. Cairo is more powerful. However, I don't know the details.\n",
"I would try aggdraw into a small canvas.\nAny Static Text uses the platform's native label machinery, so you don't get that sort of control over it.\n"
] | [
1,
0
] | [] | [] | [
"opacity",
"python",
"transparency",
"wxpython"
] | stackoverflow_0000462933_opacity_python_transparency_wxpython.txt |
Q:
Calling function defined in exe
I need to know a way to call a function defined in the exe from a python script.
I know how to call an entire exe from a py file.
A:
Unless your EXE is a COM object, or specifically exports certain functions like a dll does, this is not possible.
For the COM method take a look at these resources:
Python Programming On Win32 book, by Mark Hammond and Andy Robinson.
COM and Python quick start on learning COM with python
For the exported functions like a dll method, please use python's win32 module along with the Win32 API LoadLibrary and related functions.
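For instance, ctypes can load the exe much like a dll — a sketch where the exe name and the exported function are hypothetical, and which only works if the exe really exports them:
import ctypes

lib = ctypes.windll.LoadLibrary("myprogram.exe")  # hypothetical exe with exports
result = lib.SomeExportedFunction(42)             # hypothetical exported function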
If you have access to the EXE's source code:
If you have access to the source code of this EXE though, you should define command line arguments and tie that into calling the function you want to call. In this case you can use the python os.system call to start your application or subprocess.call().
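For instance (a sketch; the flag is hypothetical and would be whatever your program defines):
import subprocess

subprocess.call(["myprogram.exe", "--run-function", "foo"])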
A:
Not sure if this is Windows-specific, but you can treat an exe like a dll (if its functions are exported), and it can then be used by other programs.
A:
Unless the said executable takes command line arguments which will specify which function to use, I don't think this is possible.
With that being said, if you created the EXE, command line arguments are a good way to implement the functionality you're looking for.
| Calling function defined in exe | I need to know a way to call a function defined in the exe from a python script.
I know how to call an entire exe from a py file.
| [
"Unless your EXE is a COM object, or specifically exports certain functions like a dll does, then this is not possible. \nFor the COM method take a look at these resources:\n\nPython Programming On Win32 book, by Mark Hammond and Andy Robinson.\nCOM and Python quick start on learning COM with python\n\nFor the exported functions like a dll method, please use python's win32 module along with the Win32 API LoadLibrary and related functions.\nIf you have access to the EXE's source code:\nIf you have access to the source code of this EXE though, you should define command line arguments and tie that into calling the function you want to call. In this case you can use the python os.system call to start your application or subprocess.call(). \n",
"Not sure if it is for windows. But you can treat an exe like a dll (if functions are exported). And they can be used by other programs.\n",
"Unless the said executable takes command line arguments which will specify which function to use, I don't think this is possible.\nWith that being said, if you created the EXE, command line arguments are a good way to implement the functionality you're looking for.\n"
] | [
7,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0000598569_python.txt |
Q:
Difference between class (Python) and struct (C)
I'm new to python. I've studied C and I noticed that the C structure (struct) seemed to have the same task as "class" in python. So what is, conceptually, the difference?
A:
Structs encapsulate data.
Classes encapsulate behavior and data.
A:
Aside from numerous technical differences between how they're implemented, they serve roughly the same purpose: the organization of data.
The big difference is that in Python (and other object oriented languages such as C++, Java, or C#), a class can also have functions associated with it that operate only on the instance of the class, whereas in C, a function that wishes to operate on a struct must accept the structure as a parameter in some way, usually by pointer.
I won't delve into the technical differences between the two, as they are fairly significant, but I suggest you look into the concept of Object Oriented Programming.
A:
Without taking pages and pages to go into the details, think of a C struct as a way to organize data, while a Python (or C++ or Objective-C) "class" is a way to organize not only your data, but the operations for that data. A Python "class," for example, can inherit from other objects, to give you an iterator for the data associated with it (or you can write your own iterator in the class). There's more to say, I'm sure, but this gets into "what is OOP" pretty quickly, well beyond the scope of this thread.
A:
One thing that hasn't been mentioned is that C structs are value types, whereas Python classes are reference types.
For example, consider this statement:
var1 = var2
If var1 and var2 were C structs, then that statement would copy the contents of var2 into var1. That's copy by value.
If however var1 and var2 were Python objects, then that statement would make var1 refer to the object that var2 was referring to. (They are like pointers to structs in C.) That's copy by reference.
The same thing happens when passing arguments to functions (because they have to be copied to get into the function).
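A tiny demonstration of that reference behaviour (Python 2 syntax):
class Point(object):
    pass

a = Point()
a.x = 1
b = a        # no copy is made; b is just another name for the same object
b.x = 99
print a.x    # prints 99 -- a and b refer to one object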
A:
Classes typically have methods (which are mostly just functions) associated with them, whereas C structs don't. You'll probably want to learn about what object-oriented programming is, if you want to make effective use of classes in Python (or any other object-oriented language, like Java or C++).
Of course, it is possible to use a Python class the same way you'd use a C struct, as a container for data. But that's not done very often.
 | Difference between class (Python) and struct (C) | I'm new to python. I've studied C and I noticed that the C structure (struct) seemed to have the same task as "class" in python. So what is, conceptually, the difference?
| [
"Structs encapsulate data.\nClasses encapsulate behavior and data.\n",
"Aside from numerous technical differences between how they're implemented, they serve roughly the same purpose: the organization of data. \nThe big difference is that in Python (and other object oriented languages such as C++, Java, or C#), a class can also have functions associated with it that operate only on the instance of the class, whereas in C, a function that wishes to operate on a struct must accept the structure as a parameter in some way, usually by pointer. \nI won't delve into the technical differences between the two, as they are fairly significant, but I suggest you look into the concept of Object Oriented Programming.\n",
"Without taking pages and pages to go into the details, think of a C struct as a way to organize data, while a Python (or C++ or Objective-C) \"class\" is a way to organize not only your data, but the operations for that data. A Python \"class,\" for example, can inherit from other objects, to give you an interator for the data associated with it (or you can write your own interator in the class). There's more to say, I'm sure, but this gets into \"what is OOP\" pretty quickly, well beyond the scope of this thread.\n",
"One thing that hasn't been mentioned is that C structs are value types, whereas Python classes are reference types.\nFor example, consider this statement:\nvar1 = var2\n\nIf var1 and var2 were C structs, then that statement would copy the contents of var2 into var1. That's copy by value.\nIf however var1 and var2 were Python objects, then that statement would make var1 refer to the object that var2 was referring to. (They are like pointers to structs in C.) That's copy by by reference.\nThe same thing happens when passing arguments to functions (because they have to be copied to get into the function).\n",
"Classes typically have methods (which are mostly just functions) associated with them, whereas C structs don't. You'll probably want to learn about what object-oriented programming is, if you want to make effective use of classes in Python (or any other object-oriented language, like Java or C++).\nOf course, it is possible to use a Python class the same way you'd use a C struct, as a container for data. But that's not done very often.\n"
] | [
25,
13,
5,
5,
3
] | [] | [] | [
"c",
"python"
] | stackoverflow_0000598931_c_python.txt |
Q:
What SHOULDN'T Django's admin interface be used for?
I've been applying Django's automatic administration capabilities to some applications that had previously been very difficult to administer. I'm thinking of a lot of ways to apply it to other applications we use (including using it to replace some internal apps altogether). Before I go overboard though, is there anything in particular I shouldn't use it for?
A:
User-specific privileges. I myself had been trying to work it into that-- some of the new (and at least at the time, undocumented) features (from newforms-admin) make it actually possible. Depending on how fine you want the control to be, though, you can end up getting very, very deep into the Django/admin internals. Just because you can doesn't mean you should-- it's easier and less fragile to do so with a custom admin app.
A:
Generally, you shouldn't use the admin for access by people you don't really trust. Even though there's plenty of flexibility in terms of locking things down and controlling access (much more so since Django 1.0), the admin is still designed on the assumption that the people using it are trusted members of your staff.
 | What SHOULDN'T Django's admin interface be used for? | I've been applying Django's automatic administration capabilities to some applications that had previously been very difficult to administer. I'm thinking of a lot of ways to apply it to other applications we use (including using it to replace some internal apps altogether). Before I go overboard though, is there anything in particular I shouldn't use it for?
| [
"User-specific privileges. I myself had been trying to work it into that-- some of the new (and at least at the time, undocumented) features (from newforms-admin) make it actually possible. Depending on how fine you want the control to be, though, you can end up getting very, very deep into the Django/admin internals. Just because you can doesn't mean you should-- it's easier and less fragile to do so with a custom admin app.\n",
"Generally, you shouldn't use the admin for access by people you don't really trust. Even though there's plenty of flexibility in terms of locking things down and controlling access (much more so since Django 1.0), the admin is still designed on the assumption that the people using it are trusted members of your staff.\n"
] | [
7,
5
] | [] | [] | [
"administration",
"django",
"python"
] | stackoverflow_0000598577_administration_django_python.txt |
Q:
How to match a text node then follow parent nodes using XPath
I'm trying to parse some HTML with XPath. Following the simplified XML example below, I want to match the string 'Text 1', then grab the contents of the relevant content node.
<doc>
<block>
<title>Text 1</title>
<content>Stuff I want</content>
</block>
<block>
<title>Text 2</title>
<content>Stuff I don't want</content>
</block>
</doc>
My Python code throws a wobbly:
>>> from lxml import etree
>>>
>>> tree = etree.XML("<doc><block><title>Text 1</title><content>Stuff
I want</content></block><block><title>Text 2</title><content>Stuff I d
on't want</content></block></doc>")
>>>
>>> # get all titles
... tree.xpath('//title/text()')
['Text 1', 'Text 2']
>>>
>>> # match 'Text 1'
... tree.xpath('//title/text()="Text 1"')
True
>>>
>>> # Follow parent from selected nodes
... tree.xpath('//title/text()/../..//text()')
['Text 1', 'Stuff I want', 'Text 2', "Stuff I don't want"]
>>>
>>> # Follow parent from selected node
... tree.xpath('//title/text()="Text 1"/../..//text()')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "lxml.etree.pyx", line 1330, in lxml.etree._Element.xpath (src/
lxml/lxml.etree.c:14542)
File "xpath.pxi", line 287, in lxml.etree.XPathElementEvaluator.__ca
ll__ (src/lxml/lxml.etree.c:90093)
File "xpath.pxi", line 209, in lxml.etree._XPathEvaluatorBase._handl
e_result (src/lxml/lxml.etree.c:89446)
File "xpath.pxi", line 194, in lxml.etree._XPathEvaluatorBase._raise
_eval_error (src/lxml/lxml.etree.c:89281)
lxml.etree.XPathEvalError: Invalid type
Is this possible in XPath? Do I need to express what I want to do in a different way?
A:
Do you want that?
//title[text()='Text 1']/../content/text()
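Applied to the tree from the question, that expression should pull out just the wanted content (a quick check against the same sample document):
>>> tree.xpath("//title[text()='Text 1']/../content/text()")
['Stuff I want']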
A:
Use:
string(/*/*/title[. = 'Text 1']/following-sibling::content)
This represents at least two improvements as compared to the currently accepted solution of Johannes Weiß:
The very expensive abbreviation "//" (usually causing the whole XML document to be scanned) is avoided as it should be whenever the structure of the XML document is known in advance.
There is no return back to the parent (the location step "/.." is avoided)
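To see both expressions in action, here is a minimal sketch (assuming lxml is installed, using the sample document from the question):
from lxml import etree

tree = etree.XML("<doc><block><title>Text 1</title>"
                 "<content>Stuff I want</content></block>"
                 "<block><title>Text 2</title>"
                 "<content>Stuff I don't want</content></block></doc>")

# Predicate on the title element, then step over to the sibling content node
print tree.xpath("//title[text()='Text 1']/../content/text()")
# -> ['Stuff I want']

# string() variant: no '//' scan and no step back to the parent
print tree.xpath("string(/*/*/title[. = 'Text 1']/following-sibling::content)")
# -> 'Stuff I want'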
| How to match a text node then follow parent nodes using XPath | I'm trying to parse some HTML with XPath. Following the simplified XML example below, I want to match the string 'Text 1', then grab the contents of the relevant content node.
<doc>
<block>
<title>Text 1</title>
<content>Stuff I want</content>
</block>
<block>
<title>Text 2</title>
<content>Stuff I don't want</content>
</block>
</doc>
My Python code throws a wobbly:
>>> from lxml import etree
>>>
>>> tree = etree.XML("<doc><block><title>Text 1</title><content>Stuff
I want</content></block><block><title>Text 2</title><content>Stuff I d
on't want</content></block></doc>")
>>>
>>> # get all titles
... tree.xpath('//title/text()')
['Text 1', 'Text 2']
>>>
>>> # match 'Text 1'
... tree.xpath('//title/text()="Text 1"')
True
>>>
>>> # Follow parent from selected nodes
... tree.xpath('//title/text()/../..//text()')
['Text 1', 'Stuff I want', 'Text 2', "Stuff I don't want"]
>>>
>>> # Follow parent from selected node
... tree.xpath('//title/text()="Text 1"/../..//text()')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "lxml.etree.pyx", line 1330, in lxml.etree._Element.xpath (src/
lxml/lxml.etree.c:14542)
File "xpath.pxi", line 287, in lxml.etree.XPathElementEvaluator.__ca
ll__ (src/lxml/lxml.etree.c:90093)
File "xpath.pxi", line 209, in lxml.etree._XPathEvaluatorBase._handl
e_result (src/lxml/lxml.etree.c:89446)
File "xpath.pxi", line 194, in lxml.etree._XPathEvaluatorBase._raise
_eval_error (src/lxml/lxml.etree.c:89281)
lxml.etree.XPathEvalError: Invalid type
Is this possible in XPath? Do I need to express what I want to do in a different way?
| [
"Do you want that?\n//title[text()='Text 1']/../content/text()\n\n",
"Use:\nstring(/*/*/title[. = 'Text 1']/following-sibling::content)\n\nThis represents at least two improvements as compared to the currently accepted solution of Johannes Weiß:\n\nThe very expensive abbreviation \"//\" (usually causing the whole XML document to be scanned) is avoided as it should be whenever the structure of the XML document is known in advance.\nThere is no return back to the parent (the location step \"/..\" is avoided)\n\n"
] | [
23,
16
] | [] | [] | [
"html",
"lxml",
"python",
"xpath"
] | stackoverflow_0000598722_html_lxml_python_xpath.txt |
Q:
choosing between Modules and Classes
In my application I have to maintain some global application state and global application wide methods like currently connected users, total number of answers, create an application config file etc. There are two options:
Make a separate appstate.py file with global variables with functions over them. It looks fine initially but it seems that I am missing something in clarity of my code.
Create a class AppState with class functions in an appstate.py file, while all other modules are defined by their specific jobs. This looks fine. But now I have to write longer lines like appstate.AppState.get_user_list(). Moreover, the methods are not so much related to each other. I can create separate classes, but that would be too many classes.
EDIT: If I use classes I will be using classmethods. I don't think there is a need to instantiate the class to an object.
A:
Sounds like the classic conundrum :-).
In Python, there's nothing dirty or shameful about choosing to use a module if that's the best approach. After all, modules, functions, and the like are, in fact, first-class citizens in the language, and offer introspection and properties that many other programming languages offer only by the use of objects.
The way you've described your options, it kinda sounds like you're not too crazy about a class-based approach in this case.
I don't know if you've used the Django framework, but if not, have a look at the documentation on how it handles settings. These are app-wide, they are defined in a module, and they are available globally. The way it parses the options and exposes them globally is quite elegant, and you may find such an approach inspiring for your needs.
A:
The second approach is only significantly different from the first approach if you have application state stored in an instance of AppState, in which case your complaint doesn't apply. If you're just storing stuff in a class and using static/class methods, your class is no different than a module, and it would be pythonic to instead actually have it as a module.
A:
The second approach seems better. I'd use the first one only for configuration files or something.
Anyway, to avoid the problem you could always:
from myapp.appstate import AppState
That way you don't have to write the long line anymore.
A:
Why not go with an instance of that class? That way you might even be able later on to have 2 different "sessions" running, depending on what instance you use. It might make it more flexible. Maybe add some method get_appstate() to the module so it instantiates the class once. Later on, if you want several instances, you can change this method to take a parameter and use some dictionary etc. to store those instances.
You could also use property decorators btw to make things more readable and have the flexibility of storing it how and where you want it stored.
I agree that it would be more pythonic to use the module approach instead of classmethods.
BTW, I am not such a big fan of having things available globally by some "magic". I'd rather use some explicit call to obtain that information. Then I know where things come from and how to debug it when things fail.
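A rough sketch of that explicit-call idea (AppState and get_appstate are illustrative names, not an existing API):
class AppState(object):
    def __init__(self):
        self.connected_users = []
        self.total_answers = 0

_instance = None

def get_appstate():
    # Lazily create the single shared instance on first use
    global _instance
    if _instance is None:
        _instance = AppState()
    return _instance

Callers then do state = get_appstate() and work with that object, which leaves the door open to supporting several instances later.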
A:
Consider this example:
configuration
|
+-> graphics
| |
| +-> 3D
| |
| +-> 2D
|
+-> sound
The real question is: What is the difference between classes and modules in this hierarchy, as it could be represented by both means?
Classes represent types. If you implement your solution with classes instead of modules, you are able to check a graphics object for it's proper type, but write generic graphics functions.
With classes you can generate parametrized values. This means it is possible to initialize differently the sounds class with a constructor, but it is hard to initialize a module with different parameters.
The point is that these really are different things from a modeling standpoint.
A:
I would go with the classes route as it will better organize your code. Remember that for readability you can do this:
from appstate import AppState
A:
I'd definitely go for the second option: having already used the first one, I'm now forced to refactor, as my application evolved and now has to support more modular constructs, so I need to handle multiple simultaneous 'configurations'.
The second approach is, IMO, more flexible and future proof. To avoid the longer lines of code, you could use from appstate import AppState instead of just import appstate.
| choosing between Modules and Classes | In my application I have to maintain some global application state and global application wide methods like currently connected users, total number of answers, create an application config file etc. There are two options:
Make a separate appstate.py file with global variables with functions over them. It looks fine initially but it seems that I am missing something in clarity of my code.
Create a class AppState with class functions in an appstate.py file, while all other modules are defined by their specific jobs. This looks fine. But now I have to write longer lines like appstate.AppState.get_user_list(). Moreover, the methods are not so much related to each other. I can create separate classes, but that would be too many classes.
EDIT: If I use classes I will be using classmethods. I don't think there is a need to instantiate the class to an object.
| [
"Sounds like the classic conundrum :-).\nIn Python, there's nothing dirty or shameful about choosing to use a module if that's the best approach. After all, modules, functions, and the like are, in fact, first-class citizens in the language, and offer introspection and properties that many other programming languages offer only by the use of objects.\nThe way you've described your options, it kinda sounds like you're not too crazy about a class-based approach in this case.\nI don't know if you've used the Django framework, but if not, have a look at the documentation on how it handle settings. These are app-wide, they are defined in a module, and they are available globally. The way it parses the options and expose them globally is quite elegant, and you may find such an approach inspiring for your needs.\n",
"The second approach is only significantly different from the first approach if you have application state stored in an instance of AppState, in which case your complaint doesn't apply. If you're just storing stuff in a class and using static/class methods, your class is no different than a module, and it would be pythonic to instead actually have it as a module.\n",
"The second approach seems better. I'd use the first one only for configuration files or something.\nAnyway, to avoid the problem you could always:\nfrom myapp.appstate import AppState\n\nThat way you don't have to write the long line anymore.\n",
"Why not go with an instance of that class? That way you might even be able later on to have 2 different \"sessions\" running, depending on what instance you use. It might make it more flexible. Maybe add some method get_appstate() to the module so it instanciates the class once. Later on if you might want several instances you can change this method to eventually take a parameter and use some dictionary etc. to store those instances.\nYou could also use property decorators btw to make things more readable and have the flexibility of storing it how and where you want it stores.\nI agree that it would be more pythonic to use the module approach instead of classmethods. \nBTW, I am not such a big fan of having things available globally by some \"magic\". I'd rather use some explicit call to obtain that information. Then I know where things come from and how to debug it when things fail.\n",
"Consider this example:\nconfiguration\n|\n+-> graphics\n| |\n| +-> 3D\n| |\n| +-> 2D\n|\n+-> sound\n\nThe real question is: What is the difference between classes and modules in this hierarchy, as it could be represented by both means?\nClasses represent types. If you implement your solution with classes instead of modules, you are able to check a graphics object for it's proper type, but write generic graphics functions.\nWith classes you can generate parametrized values. This means it is possible to initialize differently the sounds class with a constructor, but it is hard to initialize a module with different parameters.\nThe point is, that you really something different from the modeling standpoint.\n",
"I would go with the classes route as it will better organize your code. Remember that for readability you can do this:\nfrom appstate import AppSate\n\n",
"I'd definitely go for the second option : having already used the first one, I'm now forced to refactor, as my application evolved and have to support more modular constructs, so I now need to handle multiple simulataneous 'configurations'.\nThe second approach is, IMO, more flexible and future proof. To avoid the longer lines of code, you could use from appstate import AppState instead of just import appstate.\n"
] | [
28,
6,
4,
1,
1,
0,
0
] | [] | [] | [
"module",
"oop",
"python"
] | stackoverflow_0000600190_module_oop_python.txt |
Q:
Cheap exception handling in Python?
I read in an earlier answer that exception handling is cheap in Python so we shouldn't do pre-conditional checking.
I have not heard of this before, but I'm relatively new to Python. Exception handling means a dynamic call and a static return, whereas an if statement is static call, static return.
How can doing the checking be bad and the try-except be good? It seems like it should be the other way around. Can someone explain this to me?
A:
Don't sweat the small stuff. You've already picked one of the slower scripting languages out there, so trying to optimize down to the opcode is not going to help you much. The reason to choose an interpreted, dynamic language like Python is to optimize your time, not the CPU's.
If you use common language idioms, then you'll see all the benefits of fast prototyping and clean design and your code will naturally run faster as new versions of Python are released and the computer hardware is upgraded.
If you have performance problems, then profile your code and optimize your slow algorithms. But in the mean time, use exceptions for exceptional situations since it will make any refactoring you ultimately do along these lines a lot easier.
A:
You might find this post helpful: Try / Except Performance in Python: A Simple Test where Patrick Altman did some simple testing to see what the performance is in various scenarios pre-conditional checking (specific to dictionary keys in this case) and using only exceptions. Code is provided as well if you want to adapt it to test other conditionals.
The conclusions he came to:
From these results, I think it is fair
to quickly determine a number of
conclusions:
If there is a high likelihood that the element doesn't exist, then
you are better off checking for it
with has_key.
If you are not going to do anything with the Exception if it is
raised, then you are better off not
having the try/except at all
If it is likely that the element does exist, then there is a very
slight advantage to using a try/except
block instead of using has_key,
however, the advantage is very slight.
A:
Putting aside the performance measurements that others have said, the guiding principle is often structured as "it is easier to ask forgiveness than ask permission" vs. "look before you leap."
Consider these two snippets:
# Look before you leap
if not os.path.exists(filename):
raise SomeError("Cannot open configuration file")
f = open(filename)
vs.
# Ask forgiveness ...
try:
f = open(filename)
except IOError:
raise SomeError("Cannot open configuration file")
Equivalent? Not really. OSes are multi-tasking systems. What happens if the file was deleted between the test for 'exists' and the 'open' call?
What happens if the file exists but it's not readable? What if it's a directory name instead of a file? There can be many possible failure modes, and checking all of them is a lot of work. Especially since the 'open' call already checks and reports all of those possible failures.
The guideline should be to reduce the chance of inconsistent state, and the best way for that is to use exceptions instead of test/call.
A:
"Can someone explain this to me?"
Depends.
Here's one explanation, but it's not helpful. Your question stems from your assumptions. Since the real world conflicts with your assumptions, it must mean your assumptions are wrong. Not much of an explanation, but that's why you're asking.
"Exception handling means a dynamic call and a static return, whereas an if statement is static call, static return."
What does "dynamic call" mean? Searching stack frames for a handler? I'm assuming that's what you're talking about. And a "static call" is somehow locating the block after the if statement.
Perhaps this "dynamic call" is not the most costly part of the operation. Perhaps the if-statement expression evaluation is slightly more expensive than the simpler "try-it-and-fail".
Turns out that Python's internal integrity checks are almost the same as your if-statement, and have to be done anyway. Since Python's always going to check, your if-statement is (mostly) redundant.
You can read about low-level exception handling in http://docs.python.org/c-api/intro.html#exceptions.
Edit
More to the point: The if vs. except debate doesn't matter.
Since exceptions are cheap, do not label them as a performance problem.
Use what makes your code clear and meaningful. Don't waste time on micro-optimizations like this.
A:
With Python, it is easy to check different possibilities for speed - get to know the timeit module :
... example session (using the command line) that compares the cost of using hasattr() vs. try/except to test for missing and present object attributes.
% timeit.py 'try:' ' str.__nonzero__' 'except AttributeError:' ' pass'
100000 loops, best of 3: 15.7 usec per loop
% timeit.py 'if hasattr(str, "__nonzero__"): pass'
100000 loops, best of 3: 4.26 usec per loop
% timeit.py 'try:' ' int.__nonzero__' 'except AttributeError:' ' pass'
1000000 loops, best of 3: 1.43 usec per loop
% timeit.py 'if hasattr(int, "__nonzero__"): pass'
100000 loops, best of 3: 2.23 usec per loop
These timing results show that in the hasattr() case raising an exception is slow, but performing a test is slower than not raising the exception. So, in terms of running time, using an exception for handling exceptional cases makes sense.
EDIT: The command line option -n will default to a large enough count so that the run time is meaningful. A quote from the manual:
If -n is not given, a suitable number of loops is calculated by trying successive powers of 10 until the total time is at least 0.2 seconds.
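The same comparisons can be run from a script through timeit's Python API; a small sketch for the dictionary case (absolute numbers will vary by machine):
import timeit

setup = "d = {'a': 1}"
t_try = timeit.Timer("try:\n d['a']\nexcept KeyError:\n pass", setup)
t_if = timeit.Timer("if 'a' in d:\n d['a']", setup)
print t_try.timeit(1000000)
print t_if.timeit(1000000)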
A:
I am a python beginner as well. While I cannot say why exactly Exception handling has been called cheap in the context of that answer, here are my thoughts:
Note that checking with if-elif-else has to evaluate a condition every time. Exception handling, including the search for an exception handler, occurs only in an exceptional condition, which is likely to be rare in most cases. That is a clear efficiency gain.
As pointed out by Jay, it is better to use conditional logic rather than exceptions when there is a high likelihood of the key being absent. This is because if the key is absent most of the time, it is not an exceptional condition.
That said, I suggest that you don't worry about efficiency and rather about meaning. Use exception handling to detect exceptional cases and checking conditions when you want to decide upon something. I was reminded about the importance of meaning by S.Lott just yesterday.
Case in point:
def xyz(key):
dictOb = {x:1, y:2, z:3}
#Condition evaluated every time
if dictOb.has_key(key): #Access 1 to dict
print dictOb[key] #Access 2
Versus
#Exception mechanism is in play only when the key isn't found.
def xyz(key):
dictOb = {x:1, y:2, z:3}
try:
print dictOb[key] #Access 1
except KeyError:
print "Not Found"
Overall, having some code that handles something like a missing key just in case calls for exception handling, but in situations where the key isn't present most of the time, what you really want to do is to decide if the key is present => if-else. Python emphasizes and encourages saying what you mean.
Why Exceptions are preferred to if-elif ->
It expresses the meaning more clearly when you are looking for exceptional (aka unusual/unexpected) conditions in your code.
It is cleaner and a whole lot more readable.
It is more flexible.
It can be used to write more concise code.
Avoids a lot of nasty checking.
It is more maintainable.
Note
When we avoid using try-except, Exceptions continue being raised. Exceptions which aren't handled simply go to the default handler. When you use try-except, you can handle the error yourself. It might be more efficient because if-else requires condition evaluation, while looking for an exception handler may be cheaper. Even if this is true, the gain from it will be too minor to bother thinking about.
I hope my answer helps.
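For the specific dictionary-lookup case there is also a third option that avoids both the explicit check and the exception; a sketch (string keys used here so the snippet runs on its own):
def xyz(key):
    dictOb = {'x': 1, 'y': 2, 'z': 3}
    # Single access, no KeyError: get() returns the default when the key is missing
    print dictOb.get(key, "Not Found")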
A:
What are static versus dynamic calls and returns, and why do you think that calls and returns are any different in Python depending on if you are doing it in a try/except block? Even if you aren't catching an exception, Python still has to handle the call possibly raising something, so it doesn't make a difference to Python in regards to how the calls and returns are handled.
Every function call in Python involves pushing the arguments onto the stack, and invoking the callable. Every single function termination is followed by the caller, in the internal wiring of Python, checking for a successful or exception termination, and handling it accordingly. In other words, if you think that there is some additional handling when you are in a try/except block that is somehow skipped when you are not in one, you are mistaken. I assume that is what your "static" versus "dynamic" distinction was about.
Further, it is a matter of style, and experienced Python developers come to read exception catching well, so that when they see the appropriate try/except around a call, it is more readable than a conditional check.
A:
The general message, as S.Lott said, is that try/except doesn't hurt so you should feel free to use it whenever it seems appropriate.
This debate is often called "LBYL vs EAFP" – that's "look before you leap" vs "easier to ask forgiveness than permission". Alex Martelli weighs in on the subject here: http://mail.python.org/pipermail/python-list/2003-May/205182.html This debate is almost six years old, but I don't think the basic issues have changed very much.
| Cheap exception handling in Python? | I read in an earlier answer that exception handling is cheap in Python so we shouldn't do pre-conditional checking.
I have not heard of this before, but I'm relatively new to Python. Exception handling means a dynamic call and a static return, whereas an if statement is static call, static return.
How can doing the checking be bad and the try-except be good? It seems like it should be the other way around. Can someone explain this to me?
| [
"Don't sweat the small stuff. You've already picked one of the slower scripting languages out there, so trying to optimize down to the opcode is not going to help you much. The reason to choose an interpreted, dynamic language like Python is to optimize your time, not the CPU's.\nIf you use common language idioms, then you'll see all the benefits of fast prototyping and clean design and your code will naturally run faster as new versions of Python are released and the computer hardware is upgraded.\nIf you have performance problems, then profile your code and optimize your slow algorithms. But in the mean time, use exceptions for exceptional situations since it will make any refactoring you ultimately do along these lines a lot easier.\n",
"You might find this post helpful: Try / Except Performance in Python: A Simple Test where Patrick Altman did some simple testing to see what the performance is in various scenarios pre-conditional checking (specific to dictionary keys in this case) and using only exceptions. Code is provided as well if you want to adapt it to test other conditionals.\nThe conclusions he came to: \n\nFrom these results, I think it is fair\n to quickly determine a number of\n conclusions:\n\nIf there is a high likelihood that the element doesn't exist, then\n you are better off checking for it\n with has_key.\nIf you are not going to do anything with the Exception if it is\n raised, then you are better off not\n putting one have the except\nIf it is likely that the element does exist, then there is a very\n slight advantage to using a try/except\n block instead of using has_key,\n however, the advantage is very slight.\n\n\n",
"Putting aside the performance measurements that others have said, the guiding principle is often structured as \"it is easier to ask forgiveness than ask permission\" vs. \"look before you leap.\"\nConsider these two snippets:\n# Look before you leap\nif not os.path.exists(filename):\n raise SomeError(\"Cannot open configuration file\")\nf = open(filename)\n\nvs.\n# Ask forgiveness ...\ntry:\n f = open(filename)\nexcept IOError:\n raise SomeError(\"Cannot open configuration file\")\n\nEquivalent? Not really. OSes are multi-taking systems. What happens if the file was deleted between the test for 'exists' and 'open' call?\nWhat happens if the file exists but it's not readable? What if it's a directory name instead of a file. There can be many possible failure modes and checking all of them is a lot of work. Especially since the 'open' call already checks and reports all of those possible failures.\nThe guideline should be to reduce the chance of inconsistent state, and the best way for that is to use exceptions instead of test/call.\n",
"\"Can someone explain this to me?\"\nDepends. \nHere's one explanation, but it's not helpful. Your question stems from your assumptions. Since the real world conflicts with your assumptions, it must mean your assumptions are wrong. Not much of an explanation, but that's why you're asking.\n\"Exception handling means a dynamic call and a static return, whereas an if statement is static call, static return.\"\nWhat does \"dynamic call\" mean? Searching stack frames for a handler? I'm assuming that's what you're talking about. And a \"static call\" is somehow locating the block after the if statement.\nPerhaps this \"dynamic call\" is not the most costly part of the operation. Perhaps the if-statement expression evaluation is slightly more expensive than the simpler \"try-it-and-fail\".\nTurns out that Python's internal integrity checks are almost the same as your if-statement, and have to be done anyway. Since Python's always going to check, your if-statement is (mostly) redundant.\nYou can read about low-level exception handling in http://docs.python.org/c-api/intro.html#exceptions. \n\nEdit\nMore to the point: The if vs. except debate doesn't matter.\nSince exceptions are cheap, do not label them as a performance problem.\nUse what makes your code clear and meaningful. Don't waste time on micro-optimizations like this. \n",
"With Python, it is easy to check different possibilities for speed - get to know the timeit module :\n\n... example session (using the command line) that compare the cost of using hasattr() vs. try/except to test for missing and present object attributes.\n\n% timeit.py 'try:' ' str.__nonzero__' 'except AttributeError:' ' pass'\n100000 loops, best of 3: 15.7 usec per loop\n% timeit.py 'if hasattr(str, \"__nonzero__\"): pass'\n100000 loops, best of 3: 4.26 usec per loop\n% timeit.py 'try:' ' int.__nonzero__' 'except AttributeError:' ' pass'\n1000000 loops, best of 3: 1.43 usec per loop\n% timeit.py 'if hasattr(int, \"__nonzero__\"): pass'\n100000 loops, best of 3: 2.23 usec per loop\n\nThese timing results show in the hasattr() case, raising an exception is slow, but performing a test is slower than not raising the exception. So, in terms of running time, using an exception for handling exceptional cases makes sense.\nEDIT: The command line option -n will default to a large enough count so that the run time is meaningful. A quote from the manual:\n\nIf -n is not given, a suitable number of loops is calculated by trying successive powers of 10 until the total time is at least 0.2 seconds.\n\n",
"I am a python beginner as well. While I cannot say why exactly Exception handling has been called cheap in the context of that answer, here are my thoughts:\nNote that checking with if-elif-else has to evaluate a condition every time. Exception handling, including the search for an exception handler occurs only in an exceptional condition, which is likely to be rare in most cases. That is a clear efficiency gain.\nAs pointed out by Jay, it is better to use conditional logic rather than exceptions when there is a high likelihood of the key being absent. This is because if the key is absent most of the time, it is not an exceptional condition. \nThat said, I suggest that you don't worry about efficiency and rather about meaning. Use exception handling to detect exceptional cases and checking conditions when you want to decide upon something. I was reminded about the importance of meaning by S.Lott just yesterday.\nCase in point: \ndef xyz(key):\n dictOb = {x:1, y:2, z:3}\n #Condition evaluated every time\n if dictOb.has_key(key): #Access 1 to dict\n print dictOb[key] #Access 2\n\nVersus\n#Exception mechanism is in play only when the key isn't found.\ndef xyz(key):\n dictOb = {x:1, y:2, z:3}\n try:\n print dictOb[key] #Access 1\n except KeyError:\n print \"Not Found\"\n\nOverall, having some code that handles something,like a missing key, just in case needs exception handling, but in situations like when the key isn't present most of the time, what you really want to do is to decide if the key is present => if-else. Python emphasizes and encourages saying what you mean.\nWhy Exceptions are preferred to if-elif ->\n\nIt expresses the meaning more clearly when you are looking foe exceptional aka unusual/unexpected conditions in your code.\nIt is cleaner and a whole lot more readable.\nIt is more flexible.\nIt can be used to write more concise code.\nAvoids a lot of nasty checking. \nIt is more maintainable.\n\nNote \nWhen we avoid using try-except, Exceptions continue being raised. Exceptions which aren't handled simply go to the default handler. When you use try-except, you can handle the error yourself. It might be more efficient because if-else requires condition evaluation, while looking for an exception handler may be cheaper. Even if this is true, the gain from it will be too minor to bother thinking about. \nI hope my answer helps.\n",
"What are static versus dynamic calls and returns, and why do you think that calls and returns are any different in Python depending on if you are doing it in a try/except block? Even if you aren't catching an exception, Python still has to handle the call possibly raising something, so it doesn't make a difference to Python in regards to how the calls and returns are handled.\nEvery function call in Python involves pushing the arguments onto the stack, and invoking the callable. Every single function termination is followed by the caller, in the internal wiring of Python, checking for a successful or exception termination, and handles it accordingly. In other words, if you think that there is some additional handling when you are in a try/except block that is somehow skipped when you are not in one, you are mistaken. I assume that is what you \"static\" versus \"dynamic\" distinction was about.\nFurther, it is a matter of style, and experienced Python developers come to read exception catching well, so that when they see the appropriate try/except around a call, it is more readable than a conditional check.\n",
"The general message, as S.Lott said, is that try/except doesn't hurt so you should feel free to use it whenever it seems appropriate.\nThis debate is often called \"LBYL vs EAFP\" – that's \"look before you leap\" vs \"easier to ask forgiveness than permission\". Alex Martelli weighs forth on the subject here: http://mail.python.org/pipermail/python-list/2003-May/205182.html This debate is almost six years old, but I don't think the basic issues have changed very much.\n"
] | [
36,
26,
23,
9,
8,
4,
1,
1
] | [] | [] | [
"exception_handling",
"performance",
"python"
] | stackoverflow_0000598157_exception_handling_performance_python.txt |
Q:
Python error when using urllib.open
When I run this:
import urllib
feed = urllib.urlopen("http://www.yahoo.com")
print feed
I get this output in the interactive window (PythonWin):
<addinfourl at 48213968 whose fp = <socket._fileobject object at 0x02E14070>>
I'm expecting to get the source of the above URL. I know this has worked on other computers (like the ones at school) but this is on my laptop and I'm not sure what the problem is here. Also, I don't understand this error at all. What does it mean? Addinfourl? fp? Please help.
A:
Try this:
print feed.read()
See Python docs here.
A:
urllib.urlopen actually returns a file-like object so to retrieve the contents you will need to use:
import urllib
feed = urllib.urlopen("http://www.yahoo.com")
print feed.read()
A:
In python 3.0:
import urllib.request

url = "http://www.yahoo.com"
fh = urllib.request.urlopen(url)
html = fh.read().decode("iso-8859-1")
fh.close()
print (html)
| Python error when using urllib.open | When I run this:
import urllib
feed = urllib.urlopen("http://www.yahoo.com")
print feed
I get this output in the interactive window (PythonWin):
<addinfourl at 48213968 whose fp = <socket._fileobject object at 0x02E14070>>
I'm expecting to get the source of the above URL. I know this has worked on other computers (like the ones at school) but this is on my laptop and I'm not sure what the problem is here. Also, I don't understand this error at all. What does it mean? Addinfourl? fp? Please help.
| [
"Try this:\nprint feed.read()\nSee Python docs here.\n",
"urllib.urlopen actually returns a file-like object so to retrieve the contents you will need to use:\nimport urllib\n\nfeed = urllib.urlopen(\"http://www.yahoo.com\")\n\nprint feed.read()\n\n",
"In python 3.0:\nimport urllib\nimport urllib.request\n\nfh = urllib.request.urlopen(url)\nhtml = fh.read().decode(\"iso-8859-1\")\nfh.close()\n\nprint (html)\n\n"
] | [
55,
17,
7
] | [] | [] | [
"python",
"urllib"
] | stackoverflow_0000600389_python_urllib.txt |
Q:
What's the best technology for connecting from linux to MS SQL Server using python? ODBC?
By best, I mean most-common, easiest to setup, free. Performance doesn't matter.
A:
I decided that pyodbc was the best fit. Very simple, stable, supported:
http://code.google.com/p/pyodbc/
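For illustration, a minimal connection sketch from Linux with pyodbc on top of FreeTDS (server, database and credentials are placeholders; the exact DRIVER value depends on your odbcinst.ini):
import pyodbc

conn = pyodbc.connect(
    'DRIVER={FreeTDS};SERVER=myserver;PORT=1433;'
    'DATABASE=mydb;UID=myuser;PWD=mypassword;TDS_Version=8.0')
cursor = conn.cursor()
cursor.execute('SELECT @@VERSION')
print cursor.fetchone()[0]
conn.close()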
A:
pymssql, the simple MS SQL Python extension module.
A:
FreeTDS
A:
I'm just learning Python myself, but it seems like there are Python libraries that look more like Java JDBC drivers. Google found these articles about SQLObject and cx_Oracle.
| What's the best technology for connecting from linux to MS SQL Server using python? ODBC? | By best, I mean most-common, easiest to setup, free. Performance doesn't matter.
| [
"I decided that pyodbc was the best fit. Very simple, stable, supported:\nhttp://code.google.com/p/pyodbc/\n",
"pymssql, the simple MS SQL Python extension module.\n",
"FreeTDS\n",
"I'm just learning Python myself, but it seems like there are Python libraries that look more like Java JDBC drivers. Google found these articles about SQLObject and cx_Oracle.\n"
] | [
3,
2,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0000598979_python.txt |
Q:
Solution basis of underdetermined equation set in python
I have an underdetermined equation set (m equations in n variables, m smaller than n). As such, if it is solvable then the set of solutions is a linear space (if it is a homogeneous set) or an affine space (if non-homogeneous).
Is there an easy way in Python (possibly with other libraries) to obtain this space - for example, a basis for it?
Thanks.
A:
Use linalg package from SciPy
A:
Like the previous poster said, you'll want linalg from SciPy, but focus on the Singular Value Decomposition solution. The matrix U is the basis for the output vectors.
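To make that concrete: for the solution space itself you want the null space of the coefficient matrix, which the SVD exposes through the rows of Vh; a sketch assuming SciPy/NumPy:
import numpy as np
from scipy import linalg

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])     # 2 equations, 3 unknowns
b = np.array([6., 15.])

x0 = linalg.lstsq(A, b)[0]       # one particular solution
U, s, Vh = linalg.svd(A)
rank = (s > 1e-10).sum()
null_basis = Vh[rank:]           # these rows span the null space of A

# Every solution is x0 plus any linear combination of the null_basis rows
print x0
print null_basis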
| Solution basis of underdetermined equation set in python | I have an underdetermined equation set (m equations in n variables, m smaller than n). As such, if it is solvable then the set of solutions is a linear space (if it is a homogeneous set) or an affine space (if non-homogeneous).
Is there an easy way in Python (possibly with other libraries) to obtain this space - for example, a basis for it?
Thanks.
| [
"Use linalg package from SciPy\n",
"Like the previous poster said, you'll want linalg from SciPy, but focus on the Singular Value Decomposition solution. The matrix U is the basis for the output vectors.\n"
] | [
2,
1
] | [] | [] | [
"linear_equation",
"python"
] | stackoverflow_0000601941_linear_equation_python.txt |
Q:
Making a 2-player web-based textual game
I'm making a simple web-based, turn-based game and am trying to determine what modules exist out there to help me on this task.
Here's the web app I'm looking to build:
User visits the homepage, clicks on a "play game" link
This takes the user to a "game room" where he either joins someone else who has been waiting for a partner to play with or waits for someone to join him
As soon as there are two users in the room, the game starts. It's a very simple turn-based textual game. One user enters a number, then the other user responds by entering another number, and so on, until some conditions are met and the game is over; each player is shown their final score.
My default plan has been to do this using Django and AJAX. Are there any existing modules/frameworks out there that would potentially save me some of the work of writing this from scratch? (Note: I might be able to negotiate to have this done in .NET if there are some great .NET libraries.)
A:
Try the Jabber protocol ... It works great for IM, but was designed for use by other types of systems as well and there's already a set of bindings for Python since it has become so popular.
A:
If you're not going to have huge numbers of concurrent users or want it done quickly I would go for holding game state on the server and polling via Ajax.
The js library of your choice will make that polling easier.
If you want it to be larger and hairier, you might look at Strophe, a js library for writing XMPP clients -- it has a handful of example sites.
| Making a 2-player web-based textual game | I'm making a simple web-based, turn-based game and am trying to determine what modules exist out there to help me on this task.
Here's the web app I'm looking to build:
User visits the homepage, clicks on a "play game" link
This takes the user to a "game room" where he either joins someone else who has been waiting for a partner to play with or waits for someone to join him
As soon as there are two users in the room, the game starts. It's a very simple turn-based textual game. One user enters a number, then the other user responds by entering another number, and so on, until some conditions are met and the game is over; each player is shown their final score.
My default plan has been to do this using Django and AJAX. Are there any existing modules/frameworks out there that would potentially save me some of the work of writing this from scratch? (Note: I might be able to negotiate to have this done in .NET if there are some great .NET libraries.)
| [
"Try the Jabber protocol ... It works great for IM, but was designed for use by other types of systems as well and there's already a set of bindings for Python since it has become so popular.\n",
"If you're not going to have huge numbers of concurrent users or want it done quickly I would go for holding game state on the server and polling via Ajax.\nThe js library of your choice will make that polling easier.\nIf you want it to be larger and hairier, you might look at Strophe, a js library for writing XMPP clients -- it has a handful of example sites.\n"
] | [
1,
1
] | [] | [] | [
".net",
"ajax",
"javascript",
"python"
] | stackoverflow_0000600621_.net_ajax_javascript_python.txt |
Q:
Evil code from the Python standard library
So, we have had this: The 1000% Speedup, or, the stdlib sucks. It demonstrates a rather bad bug that is probably costing the universe a load of cycles even as we speak. It's fixed now, which is great.
So what parts of the standard library have you noticed to be evil?
I would expect all the responsible people to match up an answer with a bug report (if suitable) and a patch (if superman).
A:
The rexec module has so many security holes in it that it's almost useless.
A:
(since this is a different module, placing it in a different answer)
cgitb has some weird threading issues. See this bug report.
| Evil code from the Python standard library | So, we have had this: The 1000% Speedup, or, the stdlib sucks. It demonstrates a rather bad bug that is probably costing the universe a load of cycles even as we speak. It's fixed now, which is great.
So what parts of the standard library have you noticed to be evil?
I would expect all the responsible people to match up an answer with a bug report (if suitable) and a patch (if superman).
| [
"The rexec module has so many security holes in it that it's almost useless.\n",
"(since this is a different module, placing it in a different answer)\ncgitb has some weird threading issues. See this bug report.\n"
] | [
3,
2
] | [] | [] | [
"python",
"standard_library"
] | stackoverflow_0000602445_python_standard_library.txt |
Q:
How to show the output of 'l' in python pdb after every command entered
I would like to have the output of the python pdb 'l' command printed to the screen after every command I enter in an interactive debugging session.
Is there a way to setup python pdb to do this?
A:
One way to do this is to alias your favourite commands to run the command and then l.
e.g.
(Pdb) alias s step ;; l
(Pdb) s
> /usr/lib/python2.5/distutils/core.py(14)<module>()
-> from types import *
9 # This module should be kept compatible with Python 2.1.
10
11 __revision__ = "$Id: core.py 38672 2005-03-20 22:19:47Z fdrake $"
12
13 import sys, os
14 -> from types import *
15
16 from distutils.debug import DEBUG
17 from distutils.errors import *
18 from distutils.util import grok_environment_error
19
In your ~/.pdbrc you can add the aliases so you have them every time:
alias s step ;; l
A:
';;' allows you to separate commands
[crchemist@test tmp]$ python t.py
> /home/crchemist/tmp/t.py(7)<module>()
-> a()
(Pdb) p a ;; l
<function a at 0xb7e96df4>
2 b = 49 + 45
3 v = 'fff'
4 return v
5
6 import pdb; pdb.set_trace()
7 -> a() [EOF]
(Pdb) s ;; l
--Call--
> /home/crchemist/tmp/t.py(1)a()
-> def a():
1 -> def a():
2 b = 49 + 45
3 v = 'fff'
4 return v
5
6 import pdb; pdb.set_trace()
7 a() [EOF]
(Pdb) s ;; l
> /home/crchemist/tmp/t.py(2)a()
-> b = 49 + 45
1 def a():
2 -> b = 49 + 45
3 v = 'fff'
4 return v
5
6 import pdb; pdb.set_trace()
7 a() [EOF]
(Pdb)
| How to show the output of 'l' in python pdb after every command entered | I would like to have the output of the python pdb 'l' command printed to the screen after every command I enter in an interactive debugging session.
Is there a way to setup python pdb to do this?
| [
"One way to do this is to alias your favourite commands to run the command and then l.\ne.g.\n(Pdb) alias s step ;; l\n(Pdb) s\n> /usr/lib/python2.5/distutils/core.py(14)<module>()\n-> from types import *\n 9 # This module should be kept compatible with Python 2.1.\n10 \n11 __revision__ = \"$Id: core.py 38672 2005-03-20 22:19:47Z fdrake $\"\n12 \n13 import sys, os\n14 -> from types import *\n15 \n16 from distutils.debug import DEBUG\n17 from distutils.errors import *\n18 from distutils.util import grok_environment_error\n19 \n\nIn your ~/.pdbrc you can add the aliases so you have them every time:\nalias s step ;; l\n\n",
"';;' allow to separate commands\n\n\n[crchemist@test tmp]$ python t.py\n> /home/crchemist/tmp/t.py(7)()\n-> a()\n(Pdb) p a ;; l\nfunction a at 0xb7e96df4\n 2 b = 49 + 45\n 3 v = 'fff'\n 4 return v\n 5\n 6 import pdb; pdb.set_trace()\n 7 -> a() [EOF]\n(Pdb) s ;; l\n--Call--\n> /home/crchemist/tmp/t.py(1)a()\n-> def a():\n 1 -> def a():\n 2 b = 49 + 45\n 3 v = 'fff'\n 4 return v\n 5\n 6 import pdb; pdb.set_trace()\n 7 a() [EOF]\n(Pdb) s ;; l\n> /home/crchemist/tmp/t.py(2)a()\n-> b = 49 + 45\n 1 def a():\n 2 -> b = 49 + 45\n 3 v = 'fff'\n 4 return v\n 5\n 6 import pdb; pdb.set_trace()\n 7 a() [EOF]\n(Pdb)\n\n\n"
] | [
6,
2
] | [] | [] | [
"debugging",
"pdb",
"python"
] | stackoverflow_0000602599_debugging_pdb_python.txt |
Q:
How can I modify password expiration in Windows using Python?
How can I modify the password expiration to "never" on Windows XP for a local user with Python? I have the PyWIN and WMI modules on board but have no solution. I managed to query the current settings via WMI(based on Win32_UserAccount class), but how can modify it?
A:
If you are running your python script with ActvePython against Active Directory, then you can use something like this:
import win32com.client
ads = win32com.client.Dispatch('ADsNameSpaces')
user = ads.getObject("", "WinNT://DOMAIN/username,user")
user.Getinfo()
user.Put('userAccountControl', 65536 | user.Get('userAccountControl'))
user.Setinfo()
But if your python is running under unix, you need two things to talk to Active Directory: Kerberos and LDAP. Once you have a SASL(GSSAPI(KRB5)) authenticated LDAP connection to your Active Directory server, then you access the target user's "userAccountControl" attribute.
userAccountControl is an integer attribute, treated as a bit field, on which you must set the DONT EXPIRE PASSWORD bit. See this KB article for bit values.
A:
That change would require administrator permissions, which may (or may not) cause issues inside of PyWin32. I don't see any straight-forward way of making this change from a Python script, but I'm sure this can be automated using a different method.
This MSFN thread seems to have info that will help you, or at least a start:
http://www.msfn.org/board/Password-Expires-Chang-t115757.html
A:
You might need admin privileges to do that, so look into elevating the current process or launching a new process with more privileges. (I.e. something like Vista's UAC but on XP.)
Can't help with details though. :-/
| How can I modify password expiration in Windows using Python? | How can I modify the password expiration to "never" on Windows XP for a local user with Python? I have the PyWIN and WMI modules on board but have no solution. I managed to query the current settings via WMI(based on Win32_UserAccount class), but how can modify it?
| [
"If you are running your python script with ActvePython against Active Directory, then you can use something like this:\nimport win32com.client\nads = win32com.client.Dispatch('ADsNameSpaces')\nuser = ads.getObject(\"\", \"WinNT://DOMAIN/username,user\")\nuser.Getinfo()\nuser.Put('userAccountControl', 65536 | user.Get('userAccountControl'))\nuser.Setinfo()\n\nBut if your python is running under unix, you need two things to talk to Active Directory: Kerberos and LDAP. Once you have a SASL(GSSAPI(KRB5)) authenticated LDAP connection to your Active Directory server, then you access the target user's \"userAccountControl\" attribute. \nuserAccountControl is an integer attribute, treated as a bit field, on which you must set the DONT EXPIRE PASSWORD bit. See this KB article for bit values.\n",
"That change would require administrator permissions, which may (or may not) cause issues inside of PyWin32. I don't see any straight-forward way of making this change from a Python script, but I'm sure this can be automated using a different method.\nThis MSFN thread seems to have info that will help you, or at least a start:\nhttp://www.msfn.org/board/Password-Expires-Chang-t115757.html\n",
"You might need admin priviliges to do that, so look into elevating the current process or launch a new process with more priviliges. (I.e. something like vista's UAC but on XP.)\nCan't help with details though. :-/\n"
] | [
1,
0,
0
] | [] | [] | [
"passwords",
"python",
"windows"
] | stackoverflow_0000591300_passwords_python_windows.txt |
Q:
What is the proper way to address permissions?
I added a new model with one permission, and now I need to add that permission to a few users on the production machine after deploying the code and running syncdb for the new app involved. I haven't found the correct way to do this. The auth docs mention User.user_permissions.add(permission), but never tell me what 'permission' is or the best way to get it.
A:
Permission (which lives in django.contrib.auth.models) is a database object. You'll be able to see all of them with Permission.objects.all(). They are created automatically by a post-sync signal for each model (and as the docs mention, you can also define your own).
To assign the permissions to a User, you will first have to get the Permission objects (using Permission.objects.get(*args)), and then you can add it to the User with User.user_permissions.add(permission) as you mentioned.
Alternatively, and the easier way if you can do this, is just to use the Django admin site. In the detail page for each user, there is a section relating to permissions. I'm guessing you aren't using these permissions outside of the admin, so that's the only area they will affect. If you want all of your users to have all permissions, you can make them superusers by setting the is_superuser flag on each User to True.
| What is the proper way to address permissions? | I added a new model with one permission, and now I need to add that permission to a few users on the production machine after deploying the code and running syncdb for the new app involved. I haven't found the correct way to do this. The auth docs mention User.user_permissions.add(permission), but never tell me what 'permission' is or the best way to get it.
| [
"Permission (which lives in django.contrib.auth.models) is a database object. You'll be able to see all of them with Permission.objects.all(). They are created automatically by a post-sync signal for each model (and as the docs mention, you can also define your own).\nTo assign the permissions to a User, you will first have to get the Permission objects (using Permission.objects.get(*args)), and then you can add it to the User with User.user_permissions.add(permission) as you mentioned.\nAlternatively, and the easier way if you can do this, is just to use the Django admin site. In the detail page for each user, there is a section relating to permissions. I'm guessing you aren't using these permissions outside of the admin, so that's the only area they will affect. If you want all of your users to have all permissions, you can make them superusers by setting the is_superuser flag on each User to True.\n"
] | [
4
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000603595_django_python.txt |
Q:
Need python lxml syntax help for parsing html
I am brand new to python, and I need some help with the syntax for finding and iterating through html tags using lxml. Here are the use-cases I am dealing with:
HTML file is fairly well formed (but not perfect). Has multiple tables on screen, one containing a set of search results, and one each for a header and footer. Each result row contains a link for the search result detail.
I need to find the middle table with the search result rows (this one I was able to figure out):
self.mySearchTables = self.mySearchTree.findall(".//table")
self.myResultRows = self.mySearchTables[1].findall(".//tr")
I need to find the links contained in this table (this is where I'm getting stuck):
for searchRow in self.myResultRows:
    searchLink = searchRow.findall(".//a")
It doesn't seem to actually locate the link elements.
I need the plain text of the link. I imagine it would be something like searchLink.text if I actually got the link elements in the first place.
Finally, in the actual API reference for lxml, I wasn't able to find information on the find and the findall calls. I gleaned these from bits of code I found on google. Am I missing something about how to effectively find and iterate over HTML tags using lxml?
A:
Okay, first, in regards to parsing the HTML: if you follow the recommendation of zweiterlinde and S.Lott at least use the version of beautifulsoup included with lxml. That way you will also reap the benefit of a nice xpath or css selector interface.
However, I personally prefer Ian Bicking's HTML parser included in lxml.
Secondly, .find() and .findall() come from lxml trying to be compatible with ElementTree, and those two methods are described in XPath Support in ElementTree.
Those two functions are fairly easy to use but they are very limited XPath. I recommend trying to use either the full lxml xpath() method or, if you are already familiar with CSS, using the cssselect() method.
Here are some examples, with an HTML string parsed like this:
from lxml.html import fromstring
mySearchTree = fromstring(your_input_string)
Using the css selector class your program would roughly look something like this:
# Find all 'a' elements inside 'tr' table rows with css selector
for a in mySearchTree.cssselect('tr a'):
print 'found "%s" link to href "%s"' % (a.text, a.get('href'))
The equivalent using xpath method would be:
# Find all 'a' elements inside 'tr' table rows with xpath
for a in mySearchTree.xpath('.//tr/*/a'):
print 'found "%s" link to href "%s"' % (a.text, a.get('href'))
A:
Is there a reason you're not using Beautiful Soup for this project? It will make dealing with imperfectly formed documents much easier.
| Need python lxml syntax help for parsing html | I am brand new to python, and I need some help with the syntax for finding and iterating through html tags using lxml. Here are the use-cases I am dealing with:
HTML file is fairly well formed (but not perfect). Has multiple tables on screen, one containing a set of search results, and one each for a header and footer. Each result row contains a link for the search result detail.
I need to find the middle table with the search result rows (this one I was able to figure out):
self.mySearchTables = self.mySearchTree.findall(".//table")
self.myResultRows = self.mySearchTables[1].findall(".//tr")
I need to find the links contained in this table (this is where I'm getting stuck):
for searchRow in self.myResultRows:
    searchLink = searchRow.findall(".//a")
It doesn't seem to actually locate the link elements.
I need the plain text of the link. I imagine it would be something like searchLink.text if I actually got the link elements in the first place.
Finally, in the actual API reference for lxml, I wasn't able to find information on the find and the findall calls. I gleaned these from bits of code I found on google. Am I missing something about how to effectively find and iterate over HTML tags using lxml?
| [
"Okay, first, in regards to parsing the HTML: if you follow the recommendation of zweiterlinde and S.Lott at least use the version of beautifulsoup included with lxml. That way you will also reap the benefit of a nice xpath or css selector interface.\nHowever, I personally prefer Ian Bicking's HTML parser included in lxml.\nSecondly, .find() and .findall() come from lxml trying to be compatible with ElementTree, and those two methods are described in XPath Support in ElementTree.\nThose two functions are fairly easy to use but they are very limited XPath. I recommend trying to use either the full lxml xpath() method or, if you are already familiar with CSS, using the cssselect() method.\nHere are some examples, with an HTML string parsed like this:\nfrom lxml.html import fromstring\nmySearchTree = fromstring(your_input_string)\n\nUsing the css selector class your program would roughly look something like this:\n# Find all 'a' elements inside 'tr' table rows with css selector\nfor a in mySearchTree.cssselect('tr a'):\n print 'found \"%s\" link to href \"%s\"' % (a.text, a.get('href'))\n\nThe equivalent using xpath method would be:\n# Find all 'a' elements inside 'tr' table rows with xpath\nfor a in mySearchTree.xpath('.//tr/*/a'):\n print 'found \"%s\" link to href \"%s\"' % (a.text, a.get('href'))\n\n",
"Is there a reason you're not using Beautiful Soup for this project? It will make dealing with imperfectly formed documents much easier.\n"
] | [
27,
5
] | [] | [] | [
"html_parsing",
"lxml",
"python"
] | stackoverflow_0000603287_html_parsing_lxml_python.txt |
Q:
Differential AJAX updates for HTML table?
I have a game that's based on a 25x20 HTML table (the game board). Every 3 seconds the user can "move," which sends an AJAX request to the server, at which time the server rerenders the entire HTML table and sends it to the user.
This was easy to write, but it wastes a lot of bandwidth. Are there any libraries, client (preferably jquery) or server-side, that help send differential instead of full updates for large tables? Usually only 5-10 tiles change on a given reload, so I feel like I could cut bandwidth use by an order of magnitude by sending just those tiles instead of all 500 every 3 seconds.
I'm also open to "you idiot, why are you using HTML tables"-type comments if you can suggest a better alternative. For example are there any CSS/DOM manipulation techniques I should be considering instead of using an HTML table? Should I use a table but give each td coordinates for an id (like "12x08") and then use jquery to replace cells by id?
A clarification: the tiles are text, not images.
A:
If you know the state between refreshes on the server side (see comment on question), you can send the data using JSON like so (not sure about exact syntax):
[
  { "x": 3, "y": 5, "class": "asdf", "content": "1234" },
  { "x": 6, "y": 5, "class": "asdf", "content": "8156" },
  { "x": 2, "y": 2, "class": "qwer", "content": "1337" }
]
Compact that (remove extra whitespace, etc.), gzip it, and send it to your Javascript. Surprisingly, the Javascript code to read this isn't that complicated (simply DOM manipulations).
A:
You can model your game board as a multidimensional javascript array:
[[x0, x1, x2, x3 ... xn],
.....
.....]
each entry is an array representing a row. Each cell holds the numerical value of the game piece/square.
This model can be the "contract" you send to the server via ajax as JSON. The server calculates the same array and sends it back to the UI. You can render that array into a table, divs or whatever you like. Prototype.js and jQuery make creating dhtml super easy.
This array format will be much smaller than a whole HTML response laden with markup. It also gives you freedom to render the board in whatever way you like.
You can further compress this format and just send the deltas. For example: save the coordinates of tiles changed by the user and send those to the server:
[(x1, y2),.....(xn, yn)]
Or you can do it the other way around: send the full model array to the server, and have the server calculate the deltas.
Check out Sponty, and watch the ajax traffic every few minutes or so, we do something very similar: http://www.thesponty.com/
The client sends the full model to the server, and the server sends the diffs.
A:
Without thinking of deltas:
You can use JSON quite easily to do this sort of thing. You can roll out your own compressed format, too.
I think compressing the data using gzip would help a lot. Most browsers nowadays support it, and it will greatly reduce the size of your responses.
| Differential AJAX updates for HTML table? | I have a game that's based on a 25x20 HTML table (the game board). Every 3 seconds the user can "move," which sends an AJAX request to the server, at which time the server rerenders the entire HTML table and sends it to the user.
This was easy to write, but it wastes a lot of bandwidth. Are there any libraries, client (preferably jquery) or server-side, that help send differential instead of full updates for large tables? Usually only 5-10 tiles change on a given reload, so I feel like I could cut bandwidth use by an order of magnitude by sending just those tiles instead of all 500 every 3 seconds.
I'm also open to "you idiot, why are you using HTML tables"-type comments if you can suggest a better alternative. For example are there any CSS/DOM manipulation techniques I should be considering instead of using an HTML table? Should I use a table but give each td coordinates for an id (like "12x08") and then use jquery to replace cells by id?
A clarification: the tiles are text, not images.
| [
"If you known the state between refreshes on the server side (see comment on question), you an send the data using JSON like so (not sure about exact syntax):\n[\n { x: 3, y: 5, class: \"asdf\", content: \"1234\" },\n { x: 6, y: 5, class: \"asdf\", content: \"8156\" },\n { x: 2, y: 2, class: \"qwer\", content: \"1337\" }\n]\n\nCompact that (remove extra whitespace, etc.), gzip it, and send it to your Javascript. Surprisingly, the Javascript code to read this isn't that complicated (simply DOM manipulations).\n",
"You can model your game board as a multidimensional javascript array:\n[[x0, x1, x2, x3 ... xn],\n.....\n.....]\n\neach entry is an array representing a row. Each cell holds the numerical value of the game piece/square.\nThis model can be the \"contract\" you send to the server via ajax as JSON. The server calculates the same array and sends it back to the UI. You can render that array into a table, divs or whatever you like. Prototype.js and jQuery make creating dhtml super easy.\nThis array format will be much smaller than a whole HTML response laden with markup. It also gives you freedom to render the board in whatever way you like.\nYou can further compress this format and just send the deltas. For example: save the coordinates of tiles changed by the user and send those to the server:\n[(x1, y2),.....(xn, yn)]\n\nOr you can do it the other way around: send the full model array to the server, and have the server calculate the deltas.\nCheck out Sponty, and watch the ajax traffic every few minutes or so, we do something very similar: http://www.thesponty.com/\nThe client sends the full model to the server, and the server sends the diffs.\n",
"Without thinking of deltas:\nYou can use JSON quite easily to do this sort of thing. You can roll out your own compressed format, too.\nI think compressing the data using gzip would help a lot. Most browsers nowadays support it, and it will greatly reduce the size of your responses.\n"
] | [
2,
2,
1
] | [] | [] | [
"dhtml",
"html",
"jquery",
"python"
] | stackoverflow_0000602322_dhtml_html_jquery_python.txt |
Q:
Find all strings in python code files
I would like to list all strings within my large python project.
Imagine the different possibilities to create a string in python:
mystring = "hello world"
mystring = ("hello "
"world")
mystring = "hello " \
"world"
I need a tool that outputs "filename, linenumber, string" for each string in my project. Strings that are spread over multiple lines using "\" or "('')" should be shown in a single line.
Any ideas how this could be done?
A:
unwind's suggestion of using the ast module in 2.6 is a good one. (There's also the undocumented _ast module in 2.5.) Here's example code for that
code = """a = 'blah'
b = '''multi
line
string'''
c = u"spam"
"""
import ast
root = ast.parse(code)
class ShowStrings(ast.NodeVisitor):
def visit_Str(self, node):
print "string at", node.lineno, node.col_offset, repr(node.s)
show_strings = ShowStrings()
show_strings.visit(root)
The problem is multiline strings. If you run the above you'll get.
string at 1 4 'blah'
string at 4 -1 'multi\nline\nstring'
string at 5 4 u'spam'
You see that it doesn't report the start of the multiline string, only the end. There's no good solution for that using the builtin Python tools.
Another option is that you can use my 'python4ply' module. This is a grammar definition for Python for PLY, which is a parser generator. Here's how you might use it:
import compiler
import compiler.visitor
# from python4ply; requires the ply parser generator
import python_yacc
code = """a = 'blah'
b = '''multi
line
string'''
c = u"spam"
d = 1
"""
tree = python_yacc.parse(code, "<string>")
#print tree
class ShowStrings(compiler.visitor.ASTVisitor):
def visitConst(self, node):
if isinstance(node.value, basestring):
print "string at", node.lineno, repr(node.value)
visitor = ShowStrings()
compiler.walk(tree, visitor)
The output from this is
string at 1 'blah'
string at 2 'multi\nline\nstring'
string at 5 u'spam'
There's no support for column information. (There is some mostly complete commented out code to support that, but it's not fully tested.) Then again, I see you don't need it. It also means working with Python's 'compiler' module, which is clumsier than the AST module.
Still, with 30-40 lines of code you should have exactly what you want.
A:
Python's included tokenize module will also do the trick.
from __future__ import with_statement
import sys
import tokenize
for filename in sys.argv[1:]:
with open(filename) as f:
for toktype, tokstr, (lineno, _), _, _ in tokenize.generate_tokens(f.readline):
if toktype == tokenize.STRING:
strrepr = repr(eval(tokstr))
print filename, lineno, strrepr
A:
If you can do this in Python, I'd suggest starting by looking at the ast (Abstract Syntax Tree) module, and going from there.
A:
Are you asking about the I18N utilities in Python?
http://docs.python.org/library/gettext.html#internationalizing-your-programs-and-modules
There's a utility called po-utils (formerly xpot) that can help with this.
http://po-utils.progiciels-bpi.ca/README.html
A:
You may also consider parsing your code with pygments.
I don't know the other solutions, but it sure is very simple to use.
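For illustration, a rough sketch of the pygments route (not tested against your files; note that pygments emits the quote characters and the string body as separate String sub-tokens, so adjacent pieces may need joining):

from pygments.lexers import PythonLexer
from pygments.token import String

filename = 'somefile.py'
code = open(filename).read()
for index, tokentype, value in PythonLexer().get_tokens_unprocessed(code):
    if tokentype in String:
        # get_tokens_unprocessed yields a character offset; turn it into a line number
        lineno = code.count('\n', 0, index) + 1
        print filename, lineno, repr(value)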
A:
Gettext might help you. Put your strings in _(...) structures:
a = _('Test')
b = a
c = _('Another text')
Then run in the shell prompt:
pygettext test.py
You'll get a messages.pot file with the information you need:
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR ORGANIZATION
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"POT-Creation-Date: 2009-02-25 08:48+BRT\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=CHARSET\n"
"Content-Transfer-Encoding: ENCODING\n"
"Generated-By: pygettext.py 1.5\n"
#: teste.py:1
msgid "Test"
msgstr ""
#: teste.py:3
msgid "Another text"
msgstr ""
| Find all strings in python code files | I would like to list all strings within my large python project.
Imagine the different possibilities to create a string in python:
mystring = "hello world"
mystring = ("hello "
"world")
mystring = "hello " \
"world"
I need a tool that outputs "filename, linenumber, string" for each string in my project. Strings that are spread over multiple lines using "\" or "('')" should be shown in a single line.
Any ideas how this could be done?
| [
"unwind's suggestion of using the ast module in 2.6 is a good one. (There's also the undocumented _ast module in 2.5.) Here's example code for that\ncode = \"\"\"a = 'blah'\nb = '''multi\nline\nstring'''\nc = u\"spam\"\n\"\"\"\n\nimport ast\nroot = ast.parse(code)\n\nclass ShowStrings(ast.NodeVisitor):\n def visit_Str(self, node):\n print \"string at\", node.lineno, node.col_offset, repr(node.s)\n\nshow_strings = ShowStrings()\nshow_strings.visit(root)\n\nThe problem is multiline strings. If you run the above you'll get.\nstring at 1 4 'blah'\nstring at 4 -1 'multi\\nline\\nstring'\nstring at 5 4 u'spam'\n\nYou see that it doesn't report the start of the multiline string, only the end. There's no good solution for that using the builtin Python tools.\nAnother option is that you can use my 'python4ply' module. This is a grammar definition for Python for PLY, which is a parser generator. Here's how you might use it:\nimport compiler\nimport compiler.visitor\n\n# from python4ply; requires the ply parser generator\nimport python_yacc\n\ncode = \"\"\"a = 'blah'\nb = '''multi\nline\nstring'''\nc = u\"spam\"\nd = 1\n\"\"\"\n\ntree = python_yacc.parse(code, \"<string>\")\n#print tree\n\nclass ShowStrings(compiler.visitor.ASTVisitor):\n def visitConst(self, node):\n if isinstance(node.value, basestring):\n print \"string at\", node.lineno, repr(node.value)\n\nvisitor = ShowStrings()\ncompiler.walk(tree, visitor)\n\nThe output from this is\nstring at 1 'blah'\nstring at 2 'multi\\nline\\nstring'\nstring at 5 u'spam'\n\nThere's no support for column information. (There is some mostly complete commented out code to support that, but it's not fully tested.) Then again, I see you don't need it. It also means working with Python's 'compiler' module, which is clumsier than the AST module.\nStill, with a 30-40 lines of code you should have exactly what you want.\n",
"Python's included tokenize module will also do the trick.\nfrom __future__ import with_statement\nimport sys\nimport tokenize\n\nfor filename in sys.argv[1:]:\n with open(filename) as f:\n for toktype, tokstr, (lineno, _), _, _ in tokenize.generate_tokens(f.readline):\n if toktype == tokenize.STRING:\n strrepr = repr(eval(tokstr))\n print filename, lineno, strrepr\n\n",
"If you can do this in Python, I'd suggest starting by looking at the ast (Abstract Syntax Tree) module, and going from there.\n",
"Are you asking about the I18N utilities in Python?\nhttp://docs.python.org/library/gettext.html#internationalizing-your-programs-and-modules\nThere's a utility called po-utils (formerly xpot) that can help with this.\nhttp://po-utils.progiciels-bpi.ca/README.html\n",
"You may also consider to parse your code with \npygments.\nI don't know the other solution, but it sure is very\nsimple to use.\n",
"Gettext might help you. Put your strings in _(...) structures:\na = _('Test')\nb = a\nc = _('Another text')\n\nThen run in the shell prompt:\npygettext test.py\n\nYou'll get a messages.pot file with the information you need:\n# SOME DESCRIPTIVE TITLE.\n# Copyright (C) YEAR ORGANIZATION\n# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.\n#\nmsgid \"\"\nmsgstr \"\"\n\"Project-Id-Version: PACKAGE VERSION\\n\"\n\"POT-Creation-Date: 2009-02-25 08:48+BRT\\n\"\n\"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\\n\"\n\"Last-Translator: FULL NAME <EMAIL@ADDRESS>\\n\"\n\"Language-Team: LANGUAGE <LL@li.org>\\n\"\n\"MIME-Version: 1.0\\n\"\n\"Content-Type: text/plain; charset=CHARSET\\n\"\n\"Content-Transfer-Encoding: ENCODING\\n\"\n\"Generated-By: pygettext.py 1.5\\n\"\n\n\n#: teste.py:1\nmsgid \"Test\"\nmsgstr \"\"\n\n#: teste.py:3\nmsgid \"Another text\"\nmsgstr \"\"\n\n"
] | [
12,
9,
3,
2,
2,
0
] | [] | [] | [
"python"
] | stackoverflow_0000585529_python.txt |
Q:
How would you draw cell borders in a wxPython FlexGridSizer?
I'm new to Python, but I can't really find much decent documentation on the web, so I'm hoping somebody will know the answer or where the answer is...
I have a wxPython FlexGridSizer bound to a panel that contains other FlexGridSizers, I'd like to display some cell borders on the main FlexGridSizer, so each section looks encapsulated but I can't find any documentation to do it.
I tried using a panel to add to my main FlexGridView, with the panel's border on, but the panel's border doesn't always fill up the entire FlexGridView cell, so it looks choppy and uneven.
Does anybody know how to properly simulate this?
A:
Sizers are used just to organize widgets spatially; as a matter of fact they are 'invisible'.
I think you're on the right track with putting a panel inside each cell and turning on its borders. Try adding it with the wx.EXPAND flag; that may help.
Concerning documentation:
wxPython is essentially a wrapper (well, with a few extras) for the wxWidgets C++ library, so virtually everything you need can be found in the wxWidgets documentation.
I find this documentation browser useful. And here are some notes on interpreting C++ documentation for wxPython users, but usually everything is obvious enough.
Also for borders you might be interested in wx.StaticBox or wx.StaticBoxSizer (that etched-line box around a group of controls, often with a label).
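For what it's worth, a self-contained sketch of that last suggestion: four bordered "cells" built from wx.StaticBoxSizer inside a FlexGridSizer (the labels and contents are placeholders):

import wx

app = wx.App(False)
frame = wx.Frame(None, title='Bordered cells')
panel = wx.Panel(frame)
grid = wx.FlexGridSizer(2, 2, 4, 4)  # rows, cols, vgap, hgap
for label in ('one', 'two', 'three', 'four'):
    box = wx.StaticBox(panel, label=label)
    cell = wx.StaticBoxSizer(box, wx.VERTICAL)
    cell.Add(wx.StaticText(panel, label='content'), 1, wx.EXPAND | wx.ALL, 4)
    grid.Add(cell, 1, wx.EXPAND)
panel.SetSizer(grid)
frame.Show()
app.MainLoop()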
| How would you draw cell borders in a wxPython FlexGridSizer? | I'm new to Python, but I can't really find much decent documentation on the web, so I'm hoping somebody will know the answer or where the answer is...
I have a wxPython FlexGridSizer bound to a panel that contains other FlexGridSizers, I'd like to display some cell borders on the main FlexGridSizer, so each section looks encapsulated but I can't find any documentation to do it.
I tried using a panel to add to my main FlexGridView, with the panel's border on, but the panel's border doesn't always fill up the entire FlexGridView cell, so it looks choppy and uneven.
Does anybody know how to properly simulate this?
| [
"Sizers are used just to organize widgets spatially, as a matter of a fact they are 'invisible'.\nI think you're on the right track with putting a panel inside each cell and turning on it's borders. Try adding it with wx.EXPAND flag, it has a chance to help.\nConcerning documentation:\nwxPython is essentially a wrapper (well, with few extras) for the wxWidgets C++ library, so virtually everything you need can be found in wxwidgets documentation.\nI find this documentation browser useful. And here are some notes on interpreting C++ documentation for wxPython users, but usually everything is obvious enough.\nAlso for borders you might be interested in wx.StaticBox or wx.StaticBoxSizer (that etched-line box around a group of controls, often with a label)\n"
] | [
1
] | [] | [] | [
"drawing",
"python",
"wxpython"
] | stackoverflow_0000603831_drawing_python_wxpython.txt |
Q:
Finding anchor text when there are tags there
I want to find the text between a pair of <a> tags that link to a given site
Here's the re string that I'm using to find the content:
r'''(<a([^<>]*)href=("|')(http://)?(www\.)?%s([^'"]*)("|')([^<>]*)>([^<]*))</a>''' % our_url
The result will be something like this:
r'''(<a([^<>]*)href=("|')(http://)?(www\.)?stackoverflow.com([^'"]*)("|')([^<>]*)>([^<]*))</a>'''
This is great for most links but it errors with a link with tags within it. I tried changing the final part of the regex from:
([^<]*))</a>'''
to:
(.*))</a>'''
But that just got everything on the page after the link, which I don't want. Are there any suggestions on what I can do to solve this?
A:
Instead of:
[^<>]*
Try:
((?!</a).)*
In other words, match any character that isn't the start of a </a sequence.
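Putting it together, a simplified sketch (the URL part is trimmed down from the question's pattern; re.DOTALL lets the tempered dot cross newlines inside the anchor):

import re

link_re = re.compile(r'<a[^<>]*href=["\']([^"\']*)["\'][^<>]*>((?:(?!</a).)*)</a>',
                     re.IGNORECASE | re.DOTALL)

html = '<a href="http://stackoverflow.com/">the <b>anchor</b> text</a>'
for href, text in link_re.findall(html):
    print href, '->', text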
A:
>>> import re
>>> pattern = re.compile(r'<a.+href=[\'|\"](.+)[\'|\"].*?>(.+)</a>', re.IGNORECASE)
>>> link = '<a href="http://stackoverflow.com/questions/603199/finding-anchor-text-when-there-are-tags-there">Finding anchor text when there are tags there</a>'
>>> re.match(pattern, link).group(1)
'http://stackoverflow.com/questions/603199/finding-anchor-text-when-there-are-tags-there'
>>> re.match(pattern, link).group(2)
'Finding anchor text when there are tags there'
A:
I would not use a regex - use an HTML parser like Beautiful Soup.
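For example, a short sketch with Beautiful Soup 3 (the current version at the time; in bs4 the call is find_all), where html stands in for your page source:

import re
from BeautifulSoup import BeautifulSoup

soup = BeautifulSoup(html)
for a in soup.findAll('a', href=re.compile(r'stackoverflow\.com')):
    print a['href'], a.renderContents()  # inner HTML, nested tags included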
A:
Do a non greedy search i.e.
(.*?)
| Finding anchor text when there are tags there | I want to find the text between a pair of <a> tags that link to a given site
Here's the re string that I'm using to find the content:
r'''(<a([^<>]*)href=("|')(http://)?(www\.)?%s([^'"]*)("|')([^<>]*)>([^<]*))</a>''' % our_url
The result will be something like this:
r'''(<a([^<>]*)href=("|')(http://)?(www\.)?stackoverflow.com([^'"]*)("|')([^<>]*)>([^<]*))</a>'''
This is great for most links but it errors with a link with tags within it. I tried changing the final part of the regex from:
([^<]*))</a>'''
to:
(.*))</a>'''
But that just got everything on the page after the link, which I don't want. Are there any suggestions on what I can do to solve this?
| [
"Instead of:\n[^<>]*\n\nTry:\n((?!</a).)*\n\nIn other words, match any character that isn't the start of a </a sequence.\n",
">>> import re\n>>> pattern = re.compile(r'<a.+href=[\\'|\\\"](.+)[\\'|\\\"].*?>(.+)</a>', re.IGNORECASE)\n>>> link = '<a href=\"http://stackoverflow.com/questions/603199/finding-anchor-text-when-there-are-tags-there\">Finding anchor text when there are tags there</a>'\n>>> re.match(pattern, link).group(1)\n'http://stackoverflow.com/questions/603199/finding-anchor-text-when-there-are-tags-there'\n>>> re.match(pattern, link).group(2)\n'Finding anchor text when there are tags there'\n\n",
"I would not use a regex - use an HTML parser like Beautiful Soup.\n",
"Do a non greedy search i.e. \n(.*?)\n\n"
] | [
3,
3,
2,
1
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0000603199_python_regex.txt |
Q:
Should I check the types of constructor arguments (and at other places too)?
Python discourages checking the types. But in many cases this may be useful:
Checking constructor arguments. e.g. checking for Boolean, string, dict etc. If I don't and set the object's members to the arguments it will cause problems later.
Checking functions arguments.
In properties. If someone sets a wrong value or different type, I should respond quickly.
A:
The answer is almost always "no". The general idea in Python, Ruby, and some other languages is called "Duck Typing". You shouldn't care what something is, only how it works. In other words, "if all you want is something that quacks, you don't need to check that it's actually a duck."
In real life, the problem with putting in all those type checks is the inability to replace inputs with alternate implementations. You may check for dict, but I may want to pass something in which is not a dict, but implements the dict API.
Type checking only checks for one of many possible errors in code. For example, it doesn't include range checking (at least not in Python). A modern response to the assertion that there needs to be type checking is that it's more effective to develop unit tests which ensure that not only are the types correct, but also that the functionality is correct.
Another viewpoint is that you should treat your API users like consenting adults, and trust them to use the API correctly. Of course there are times when input checking is helpful, but that's less common than you think. One example is input from untrusted sources, like from the public web.
A:
The simple answer is No, use Polymorphism, Exceptions etc.
In the case of constructor arguments being of the wrong type, an exception will be thrown when executing code that depends on the parameter being of a particular type. If it is a weird, domain-specific thing, raise your own Exception. Surround blocks of code which are likely to fail with try-except and handle errors. So it is better to use Exception handling. (Same goes for function arguments)
In properties, the same argument applies. If you are validating the value received, use an assertion to check its range etc. If the value is of the wrong type, it will fail anyway. Then, handle AssertionError.
In Python, you treat programmers as intelligent beings!! Just document your code well (make things obvious), raise Exceptions where appropriate, write polymorphic code etc. Leave the Exception handling(where it is appropriate only)/errors in construction to the client code.
Warning
Leaving Exception handling to clients doesn't mean that you should chuck a lot of garbage errors at the unwitting user. If at all possible, handle exceptions that might occur due to bad construction or any other reason in your code itself. Your code should be robust. Where it is impossible for you to handle the error, politely inform the user/client code programmer!
Note
In general, bad arguments to a constructor isn't something I worry about too much.
A:
Check all you like, you just have to be explicit. The following example is a constructor from a module in the standard library - it checks the extrasaction arg:
class DictWriter:
def __init__(self, f, fieldnames, restval="", extrasaction="raise",
dialect="excel", *args, **kwds):
self.fieldnames = fieldnames # list of keys for the dict
self.restval = restval # for writing short dicts
if extrasaction.lower() not in ("raise", "ignore"):
raise ValueError, \
("extrasaction (%s) must be 'raise' or 'ignore'" %
extrasaction)
self.extrasaction = extrasaction
self.writer = writer(f, dialect, *args, **kwds)
A:
AFAIU, you want to make sure that some objects behave ("follow an interface") at an earlier time than that of the actual use. In your example, you want to know that objects are appropriate at instance creation time, not when they will actually be used.
Keeping in mind that we're talking Python here, I won't suggest assert (what if python -O or an environment variable PYTHONOPTIMIZE is set to 1 when your program runs?) or checking for specific types (because that unnecessarily restricts the types you can use), but I will suggest early testing functionality, something along the lines:
def __init__(self, a_number, a_boolean, a_duck, a_sequence):
self.a_number= a_number + 0
self.a_boolean= not not a_boolean
try:
a_duck.quack
except AttributeError:
raise TypeError, "can't use it if it doesn't quack"
else:
self.a_duck= a_duck
try:
iter(a_sequence)
except TypeError:
raise TypeError, "expected an iterable sequence"
else:
self.a_sequence= a_sequence
I used try… except… else in this suggestion because I want to set the instance members only if the test succeeded, even if the code is changed or augmented. You don't have to do it so, obviously.
For function arguments and setting properties, I wouldn't do these tests in advance, I'd just use the provided objects and act on thrown exceptions, unless the suspect objects are going to be used after a lengthy process.
A:
It is often a good thing to do. Checking for explicit types is probably not so useful in Python (as others have said), but checking for legal values can be a good idea. The reason it's a good idea is that the software will fail closer to the source of the bug (it follows the Fail Fast Principle). Also, the checks act as documentation to other programmers and yourself. Even better, it is "executable documentation", which is good because it's documentation that can't lie.
A quick and dirty but reasonable way to check your arguments is to use assert:
def my_sqrt(x):
assert x >= 0, "must be greater or equal to zero"
# ...
Asserting your arguments is a kind of poor man's Design by Contract. (You might like to look up Design by Contract; it is interesting.)
A:
"If I don't and set the object's members to the arguments it will cause problems later."
Please be very clear on the exact list of "problems" which will be caused later.
Will it not work at all? That what try/except blocks are for.
Will it behave "oddly"? This is really rare, and is limited to "near-miss" types and operators. The standard example is division. If you expected integers, but got floating-point, then division may not do what you wanted. But that's fixed with the //, vs. / division operators.
Will it be simply wrong, but still appear to complete? This is really rare, and would require a pretty-carefully crafted type that used standard names, but did non-standard things. For example
class MyMaliciousList( list ):
def append( self, value ):
super( MyMaliciousList, self ).remove( value )
Other than that, it's hard to have things "cause problems later". Please update your question with specific examples of "problems".
A:
As dalke says, the answer is almost always "no". In Python, you generally do not care that a parameter is a certain type but rather that it behaves like a certain type. This is known as "Duck Typing". There are two ways to test whether a parameter behaves like a given type: (1) you can use it as if it behaved as you expect and throw an exception when/if it doesn't or (2) you can define an interface that describes how that type should behave and test conformance with that interface.
zope.interface is my preferred interface system for Python, but there are several others. With any one of them, you define an interface, then declare that a given type conforms to that interface or define an adaptor that turns your type into something that does conform to that interface. You can then assert (or test as you wish) that parameters provide (in the zope.interface terminology) that interface.
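A minimal sketch of that style, using the zope.interface 3.x class-advice spelling (later versions prefer the @implementer decorator):

from zope.interface import Interface, implements
from zope.interface.verify import verifyObject

class IQuacker(Interface):
    def quack():
        """Make duck noises."""

class Duck(object):
    implements(IQuacker)
    def quack(self):
        print 'quack'

verifyObject(IQuacker, Duck())  # raises if the interface is not satisfied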
| Should I check the types of constructor arguments (and at other places too)? | Python discourages checking the types. But in many cases this may be useful:
Checking constructor arguments. e.g. checking for Boolean, string, dict etc. If I don't and set the object's members to the arguments it will cause problems later.
Checking functions arguments.
In properties. If someone sets a wrong value or different type, I should respond quickly.
| [
"The answer is almost always \"no\". The general idea in Python, Ruby, and some other languages us called \"Duck Typing\". You shouldn't care what something is, only how it works. In other words, \"if all you want is something that quacks, you don't need to check that it's actually a duck.\"\nIn real life, the problem with putting in all those type checks is the inability to replace inputs with alternate implementations. You may check for dict, but I may want to pass something in which is not a dict, but implements the dict API.\nType checking only checks for one of many possible errors in code. For example, it doesn't include range checking (at least not in Python). A modern response to the assertion that there needs to be type checking is that it's more effective to develop unit tests which ensure that not only are the types correct, but also that the functionality is correct.\nAnother viewpoint is that you should treat your API users like consenting adults, and trust them to use the API correctly. Of course there are times when input checking is helpful, but that's less common than you think. One example is input from untrusted sources, like from the public web.\n",
"The simple answer is No, use Polymorphism, Exceptions etc. \n\nIn the case of constructor arguments being of the wrong type, an exception will be thrown when executing code that depend s on the parameter being of a particular type. If it is a weird, domain specific thing, raise your own Exception. Surround blocks of code which are likely to fail with try-except and handle errors. So it is better to use Exception handling. (Same goes for function arguments)\nIn properties, the same argument applies. If you are validating the value received, use an assertion to check its range etc. If the value is of the wrong type, it will fail anyway. Then, handle AssertionError.\n\nIn Python, you treat programmers as intelligent beings!! Just document your code well (make things obvious), raise Exceptions where appropriate, write polymorphic code etc. Leave the Exception handling(where it is appropriate only)/errors in construction to the client code.\nWarning\nLeaving Exception handling to clients doesn't mean that you should chuck a lot of garbage errors at the unwitting user. If at all possible, handle exceptions that might occur due to bad construction or any other reason in your code itself. Your code should be robust. Where it is impossible for you to handle the error, politely inform the user/client code programmer!\nNote\nIn general, bad arguments to a constructor isn't something I worry about too much.\n",
"Check all you like, you just have to be explicit. The following example is a constructor from a module in the standard library - it checks the extrasaction arg:\nclass DictWriter:\n def __init__(self, f, fieldnames, restval=\"\", extrasaction=\"raise\",\n dialect=\"excel\", *args, **kwds):\n self.fieldnames = fieldnames # list of keys for the dict\n self.restval = restval # for writing short dicts\n if extrasaction.lower() not in (\"raise\", \"ignore\"):\n raise ValueError, \\\n (\"extrasaction (%s) must be 'raise' or 'ignore'\" %\n extrasaction)\n self.extrasaction = extrasaction\n self.writer = writer(f, dialect, *args, **kwds)\n\n",
"AFAIU, you want to make sure that some objects behave (\"follow an interface\") at an earlier time than that of the actual use. In your example, you want to know that objects are appropriate at instance creation time, not when they will actually be used.\nKeeping in mind that we're talking Python here, I won't suggest assert (what if python -O or an environment variable PYTHONOPTIMIZE is set to 1 when your program runs?) or checking for specific types (because that unnecessarily restricts the types you can use), but I will suggest early testing functionality, something along the lines:\ndef __init__(self, a_number, a_boolean, a_duck, a_sequence):\n\n self.a_number= a_number + 0\n\n self.a_boolean= not not a_boolean\n\n try:\n a_duck.quack\n except AttributeError:\n raise TypeError, \"can't use it if it doesn't quack\"\n else:\n self.a_duck= a_duck\n\n try:\n iter(a_sequence)\n except TypeError:\n raise TypeError, \"expected an iterable sequence\"\n else:\n self.a_sequence= a_sequence\n\nI used try… except… else in this suggestion because I want to set the instance members only if the test succeeded, even if the code is changed or augmented. You don't have to do it so, obviously.\nFor function arguments and setting properties, I wouldn't do these tests in advance, I'd just use the provided objects and act on thrown exceptions, unless the suspect objects are going to be used after a lengthy process.\n",
"It is often a good thing to do. Checking for explicit types is probably not so useful in Python (as others have said), but checking for legal values can be a good idea. The reason it's a good idea is that the software will fail closer to the source of the bug (it follows the Fail Fast Principle). Also, the checks act as documentation to other programmers and yourself. Even better, it is \"executable documentation\", which is good because it's documentation that can't lie.\nA quick and dirty but reasonable way to check your arguments is to use assert:\ndef my_sqrt(x):\n assert x >= 0, \"must be greater or equal to zero\"\n # ...\n\nAsserting your arguments is a kind of poor man's Design by Contract. (You might like to look up Design by Contract; it is interesting.) \n",
"\"If I don't and set the object's members to the arguments it will cause problems later.\"\nPlease be very clear on the exact list of \"problems\" which will be caused later.\n\nWill it not work at all? That what try/except blocks are for.\nWill it behave \"oddly\"? This is really rare, and is limited to \"near-miss\" types and operators. The standard example is division. If you expected integers, but got floating-point, then division may not do what you wanted. But that's fixed with the //, vs. / division operators.\nWill it be simply wrong, but still appear to complete? This is really rare, and would require a pretty-carefully crafted type that used standard names, but did non-standard things. For example\nclass MyMaliciousList( list ):\n def append( self, value ):\n super( MyMaliciousList, self ).remove( value )\n\n\nOther than that, it's hard to have things \"cause problems later\". Please update your question with specific examples of \"problems\".\n",
"As dalke says, the answer is almost always \"no\". In Python, you generally do not care that a a parameter is a certain type but rather that it behaves like a certain type. This is known as \"Duck Typing\". There are two ways to test whether a parameter behaves like a given type: (1) you can use it as if it behaved as you expect and throw an exception when/if it doesn't or (2) you can define an interface that describes how that type should behave and test conformance with that interface.\nzope.interface is my prefered interface system for Python, but there are several others. With any one of them, you define an interface, then declare that a given type conforms to that interface or define an adaptor that turns your type into something that does conform to that interface. You can then assert (or test as you wish) that paramters provide (in the zope.interface terminology) that interface.\n"
] | [
14,
13,
5,
3,
2,
0,
0
] | [] | [] | [
"python",
"typechecking"
] | stackoverflow_0000602046_python_typechecking.txt |
Q:
What's the search engine used in the new Python documentation?
Is it built-in in Sphinx?
A:
It looks like Sphinx contains its own search engine for English. See http://sphinx.pocoo.org/_static/searchtools.js and searchindex.js/.json (see Sphinx docs index 36Kb, Python docs index 857Kb, and Grok docs 37Kb). 
The index is precomputed when the docs are generated.
When one searches, a static page is loaded and then _static/searchtools.js extracts the search terms from the query string, normalizes them (case, stemming, etc.) and looks them up in searchindex.js as it is loaded.
The first search attempt takes a rather long time; consecutive ones are much faster as the index is cached in your browser.
A:
The Sphinx search engine is built in Javascript. It uses JQuery and a (sometimes very big) javascript file containing the search terms.
| What's the search engine used in the new Python documentation? | Is it built-in in Sphinx?
| [
"It look like Sphinx contains own search engine for English language. See http://sphinx.pocoo.org/_static/searchtools.js and searchindex.js/.json (see Sphinx docs index 36Kb, Python docs index 857Kb, and Grok docs 37Kb). \nIndex is being precomputed when docs are generated.\nWhen one searches, static page is being loaded and then _static/searchtools.js extract search terms from query string, normalizes (case, stemming, etc.) them and looks up in searchindex.js as it is being loaded.\nFirst search attempt takes rather long time, consecutive are much faster as index is cached in your browser.\n",
"The Sphinx search engine is built in Javascript. It uses JQuery and a (sometimes very big) javascript file containing the search terms.\n"
] | [
24,
5
] | [
"Yes. Sphinx is not built-in, however. The search widget is part of sphinx. What context did you mean by \"built-in\"? \nOn the page iteself: http://docs.python.org/about.html\nhttp://sphinx.pocoo.org/\n"
] | [
-3
] | [
"python",
"python_sphinx"
] | stackoverflow_0000605888_python_python_sphinx.txt |
Q:
How do I access my webcam in Python?
I would like to access my webcam from Python.
I tried using the VideoCapture extension (tutorial), but that didn't work very well for me; I had to work around some problems, such as it being a bit slow with resolutions >320x230, and sometimes returning None for no apparent reason.
Is there a better way to access my webcam from Python?
A:
OpenCV has support for getting data from a webcam, and it comes with Python wrappers by default; you also need to install numpy for the OpenCV Python extension (called cv2) to work.
As of 2019, you can install both of these libraries with pip:
pip install numpy
pip install opencv-python
More information on using OpenCV with Python.
An example copied from Displaying webcam feed using opencv and python:
import cv2
cv2.namedWindow("preview")
vc = cv2.VideoCapture(0)
if vc.isOpened(): # try to get the first frame
rval, frame = vc.read()
else:
rval = False
while rval:
cv2.imshow("preview", frame)
rval, frame = vc.read()
key = cv2.waitKey(20)
if key == 27: # exit on ESC
break
vc.release()
cv2.destroyWindow("preview")
A:
gstreamer can handle webcam input. If I remember correctly, there are Python bindings for it!
| How do I access my webcam in Python? | I would like to access my webcam from Python.
I tried using the VideoCapture extension (tutorial), but that didn't work very well for me; I had to work around some problems, such as it being a bit slow with resolutions >320x230, and sometimes returning None for no apparent reason.
Is there a better way to access my webcam from Python?
| [
"OpenCV has support for getting data from a webcam, and it comes with Python wrappers by default, you also need to install numpy for the OpenCV Python extension (called cv2) to work.\nAs of 2019, you can install both of these libraries with pip:\npip install numpy\npip install opencv-python\nMore information on using OpenCV with Python.\nAn example copied from Displaying webcam feed using opencv and python:\nimport cv2\n\ncv2.namedWindow(\"preview\")\nvc = cv2.VideoCapture(0)\n\nif vc.isOpened(): # try to get the first frame\n rval, frame = vc.read()\nelse:\n rval = False\n\nwhile rval:\n cv2.imshow(\"preview\", frame)\n rval, frame = vc.read()\n key = cv2.waitKey(20)\n if key == 27: # exit on ESC\n break\n\nvc.release()\ncv2.destroyWindow(\"preview\")\n\n",
"gstreamer can handle webcam input. If I remeber well, there are python bindings for it!\n"
] | [
107,
3
] | [] | [] | [
"python",
"webcam"
] | stackoverflow_0000604749_python_webcam.txt |
Q:
Python variable assigned by an outside module is accessible for printing but not for assignment in the target module
I have two files, one is in the webroot, and another is a bootstrap located one folder above the web root (this is CGI programming by the way).
The index file in the web root imports the bootstrap and assigns a variable to it, then calls a function to initialize the application. Everything up to here works as expected.
Now, in the bootstrap file I can print the variable, but when I try to assign a value to the variable an error is thrown. If you take away the assignment statement no errors are thrown.
I'm really curious about how the scoping works in this situation. I can print the variable, but I can't assign to it. This is on Python 3.
index.py
# Import modules
import sys
import cgitb;
# Enable error reporting
cgitb.enable()
#cgitb.enable(display=0, logdir="/tmp")
# Add the application root to the include path
sys.path.append('path')
# Include the bootstrap
import bootstrap
bootstrap.VAR = 'testVar'
bootstrap.initialize()
bootstrap.py
def initialize():
print('Content-type: text/html\n\n')
print(VAR)
VAR = 'h'
print(VAR)
Thanks.
Edit: The error message
UnboundLocalError: local variable 'VAR' referenced before assignment
args = ("local variable 'VAR' referenced before assignment",)
with_traceback = <built-in method with_traceback of UnboundLocalError object at 0x00C6ACC0>
A:
try this:
def initialize():
global VAR
print('Content-type: text/html\n\n')
print(VAR)
VAR = 'h'
print(VAR)
Without 'global VAR', Python wants to use a local variable VAR and gives you "UnboundLocalError: local variable 'VAR' referenced before assignment".
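The decision is made at compile time: any name assigned anywhere in a function body is treated as local throughout that function, which is easy to demonstrate:

VAR = 'module level'

def broken():
    print(VAR)  # UnboundLocalError: the assignment below makes VAR local
    VAR = 'h'

def works():
    global VAR
    print(VAR)  # prints 'module level'
    VAR = 'h'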
A:
Don't declare it global, pass it instead and return it if you need to have a new value, like this:
def initialize(a):
print('Content-type: text/html\n\n')
    print(a)
return 'h'
----
import bootstrap
b = bootstrap.initialize('testVar')
| Python variable assigned by an outside module is accessible for printing but not for assignment in the target module | I have two files, one is in the webroot, and another is a bootstrap located one folder above the web root (this is CGI programming by the way).
The index file in the web root imports the bootstrap and assigns a variable to it, then calls a function to initialize the application. Everything up to here works as expected.
Now, in the bootstrap file I can print the variable, but when I try to assign a value to the variable an error is thrown. If you take away the assignment statement no errors are thrown.
I'm really curious about how the scoping works in this situation. I can print the variable, but I can't assign to it. This is on Python 3.
index.py
# Import modules
import sys
import cgitb;
# Enable error reporting
cgitb.enable()
#cgitb.enable(display=0, logdir="/tmp")
# Add the application root to the include path
sys.path.append('path')
# Include the bootstrap
import bootstrap
bootstrap.VAR = 'testVar'
bootstrap.initialize()
bootstrap.py
def initialize():
print('Content-type: text/html\n\n')
print(VAR)
VAR = 'h'
print(VAR)
Thanks.
Edit: The error message
UnboundLocalError: local variable 'VAR' referenced before assignment
args = ("local variable 'VAR' referenced before assignment",)
with_traceback = <built-in method with_traceback of UnboundLocalError object at 0x00C6ACC0>
| [
"try this:\n\n\ndef initialize():\n global VAR\n print('Content-type: text/html\\n\\n')\n print(VAR)\n VAR = 'h'\n print(VAR)\n\n\nWithout 'global VAR' python want to use local variable VAR and give you \"UnboundLocalError: local variable 'VAR' referenced before assignment\" \n",
"Don't declare it global, pass it instead and return it if you need to have a new value, like this:\ndef initialize(a):\n print('Content-type: text/html\\n\\n')\n print a\n return 'h'\n\n----\n\nimport bootstrap\nb = bootstrap.initialize('testVar')\n\n"
] | [
3,
0
] | [] | [] | [
"python",
"scoping"
] | stackoverflow_0000605399_python_scoping.txt |
Q:
How do I configure my sys.path variable in linux?
I want to automatically add entries to python's sys.path variable when run by my user in linux.
Is there something I can tweak in my home directory to get it done?
A:
The environment variable PYTHONPATH sets the initial sys.path value.
You can set that in your shell initialization script (e.g. .bashrc or .cshrc).
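For example, a line like this in ~/.bashrc (the ~/mylibs directory is just a placeholder) prepends that directory to sys.path for every Python run as your user:

export PYTHONPATH="$HOME/mylibs:$PYTHONPATH"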
| How do I configure my sys.path variable in linux? | I want to automatically add entries to python's sys.path variable when run by my user in linux.
Is there something I can tweak in my home directory to get it done?
| [
"The environment variable PYTHONPATH sets the initial sys.path value.\nYou can set that it your shell initialization script (e.g. .bashrc or .cshrc)\n"
] | [
6
] | [] | [] | [
"configuration",
"linux",
"path",
"python"
] | stackoverflow_0000606226_configuration_linux_path_python.txt |
Q:
python regex trouble
I have the following code :
what = re.match("get|post|put|head\s+(\S+) ",data,re.IGNORECASE)
and in the data variable let's say I have this line :
GET some-site.com HTTP/1.0 ...
If I stop the script in the debugger, and inspect the what variable, I can see it only matched GET. Why doesn't it match some-site.com ?
A:
Regex alternation has low precedence, so the original pattern's 4th alternative is head\s+(\S+). The parentheses in @Mykola Kharechko's answer make head the 4th alternative on its own, so \s+(\S+) is appended to whichever alternative the group matched.
A:
>>> re.match("(get|post|put|head)\s+(\S+) ",'GET some-site.com HTTP/1.0 ...',re.IGNORECASE).groups()
('GET', 'some-site.com')
>>>
A:
+1 Mykola's answer and gimel's explanation. In addition, do you really want to use regex for this? As you've found out, they are not as straightforward as they look. Here's a non-regex-based method:
def splitandpad(s, find, limit):
seq= s.split(find, limit)
return seq+['']*(limit-len(seq)+1)
method, path, protocol= splitandpad(data, ' ', 2)
if method.lower() not in ('get', 'head', 'post', 'put'):
# complain, unknown method
if protocol.lower() not in ('http/1.0', 'http/1.1'):
# complain, unknown protocol
| python regex trouble | I have the following code :
what = re.match("get|post|put|head\s+(\S+) ",data,re.IGNORECASE)
and in the data variable let's say I have this line :
GET some-site.com HTTP/1.0 ...
If I stop the script in the debugger and inspect the what variable, I can see it only matched GET. Why doesn't it match some-site.com?
| [
"Regex language operator precedence puts head\\s+(\\S+) as the 4th alternative. The parenthesis in @Mykola Kharechko's answer arrange for head as the 4th alternative, and \\s+(\\S+) is appended to whatever alternative matched the group.\n",
"\n\n>>> re.match(\"(get|post|put|head)\\s+(\\S+) \",'GET some-site.com HTTP/1.0 ...',re.IGNORECASE).groups()\n('GET', 'some-site.com')\n>>> \n\n\n",
"+1 Mykola's answer and gimel's explanation. In addition, do you really want to use regex for this? As you've found out, they are not as straightforward as they look. Here's a non-regex-based method:\ndef splitandpad(s, find, limit):\n seq= s.split(find, limit)\n return seq+['']*(limit-len(seq)+1)\n\nmethod, path, protocol= splitandpad(data, ' ', 2)\nif method.lower() not in ('get', 'head', 'post', 'put'):\n # complain, unknown method\nif protocol.lower() not in ('http/1.0', 'http/1.1'):\n # complain, unknown protocol\n\n"
] | [
4,
3,
1
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0000606221_python_regex.txt |
Q:
How do I install python's sphinx documentation generator in linux?
And how do I run it?
A:
Sphinx website says:
easy_install -U Sphinx
If you want that installed in system python you'd probably need elevated permissions with sudo:
sudo easy_install -U Sphinx
If you do not have easy_install yet, see http://peak.telecommunity.com/DevCenter/EasyInstall
A:
How do I run it?
http://sphinx-doc.org/tutorial.html#running-the-build
Basically, the easiest way is to start with the sphinx-quickstart command.
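A typical first run looks something like this (paths are the quickstart defaults; adjust to whatever you answer at the prompts):

sphinx-quickstart
sphinx-build -b html . _build/html

If you let sphinx-quickstart generate a Makefile, make html performs the second step for you.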
A:
http://showmedo.com/videos/video?name=2910020&fromSeriesID=291
This demo shows how you can use sphinx to document your own program.
| How do I install python's sphinx documentation generator in linux? | And how do I run it?
| [
"Sphinx website says:\neasy_install -U Sphinx\n\nIf you want that installed in system python you'd probably need elevated permissions with sudo:\nsudo easy_install -U Sphinx\n\nIf you do not have easy_install yet, see http://peak.telecommunity.com/DevCenter/EasyInstall\n",
"How do I run it?\nhttp://sphinx-doc.org/tutorial.html#running-the-build\nBasically, the easiest way is to start with sphinx-quickstart command.\n",
"http://showmedo.com/videos/video?name=2910020&fromSeriesID=291\nThis demo shows how you can use sphinx to document your own program.\n"
] | [
6,
1,
1
] | [] | [] | [
"python",
"python_sphinx"
] | stackoverflow_0000606283_python_python_sphinx.txt |
Q:
How to load an RSA key from a PEM file and use it in python-crypto
I have not found a way to load an RSA private key from a PEM file to use it in python-crypto (signature).
python-openssl can load a PEM file but the PKey object can't be used to retrieved key information (p, q, ...) to use with Crypto.PublicKey.construct().
A:
I recommend M2Crypto instead of python-crypto. You will need M2Crypto to parse PEM anyway and its EVP api frees your code from depending on a particular algorithm.
private = """
-----BEGIN RSA PRIVATE KEY-----
MIIBOwIBAAJBANQNY7RD9BarYRsmMazM1hd7a+u3QeMPFZQ7Ic+BmmeWHvvVP4Yj
yu1t6vAut7mKkaDeKbT3yiGVUgAEUaWMXqECAwEAAQJAIHCz8h37N4ScZHThYJgt
oIYHKpZsg/oIyRaKw54GKxZq5f7YivcWoZ8j7IQ65lHVH3gmaqKOvqdAVVt5imKZ
KQIhAPPsr9i3FxU+Mac0pvQKhFVJUzAFfKiG3ulVUdHgAaw/AiEA3ozHKzfZWKxH
gs8v8ZQ/FnfI7DwYYhJC0YsXb6NSvR8CIHymwLo73mTxsogjBQqDcVrwLL3GoAyz
V6jf+/8HvXMbAiEAj1b3FVQEboOQD6WoyJ1mQO9n/xf50HjYhqRitOnp6ZsCIQDS
AvkvYKc6LG8IANmVv93g1dyKZvU/OQkAZepqHZB2MQ==
-----END RSA PRIVATE KEY-----
"""
message = "python-crypto sucks"
# Grab RSA parameters e, n
from M2Crypto import RSA, BIO
bio = BIO.MemoryBuffer(private)
rsa = RSA.load_key_bio(bio)
n, e = rsa.n, rsa.e
# In Python-crypto:
import Crypto.PublicKey.RSA
pycrypto_key = Crypto.PublicKey.RSA.construct((n, e))
# Use EVP api to sign message
from M2Crypto import EVP
key = EVP.load_key_string(private)
# if you need a different digest than the default 'sha1':
key.reset_context(md='sha256')
key.sign_init()
key.sign_update(message)
signature = key.sign_final()
# Use EVP api to verify signature
public = """
-----BEGIN PUBLIC KEY-----
MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBANQNY7RD9BarYRsmMazM1hd7a+u3QeMP
FZQ7Ic+BmmeWHvvVP4Yjyu1t6vAut7mKkaDeKbT3yiGVUgAEUaWMXqECAwEAAQ==
-----END PUBLIC KEY-----
"""
from M2Crypto import BIO, RSA, EVP
bio = BIO.MemoryBuffer(public)
rsa = RSA.load_pub_key_bio(bio)
pubkey = EVP.PKey()
pubkey.assign_rsa(rsa)
pubkey.reset_context(md="sha256")
pubkey.verify_init()
pubkey.verify_update(message)
assert pubkey.verify_final(signature) == 1
See http://svn.osafoundation.org/m2crypto/trunk/tests/test_rsa.py, but I prefer using the algorithm-independent EVP API http://svn.osafoundation.org/m2crypto/trunk/tests/test_evp.py.
How do you verify an RSA SHA1 signature in Python? addresses a similar issue.
A:
is this (close to) what you tried doing?
import M2Crypto

public_key_filename = 'public_key.pem'
rsa = M2Crypto.RSA.load_pub_key(public_key_filename)
That should work. The issue might be with openssl too, does it work when you just use openssl (not in Python)?
Link to Me Too Crypto
| How to load an RSA key from a PEM file and use it in python-crypto | I have not found a way to load an RSA private key from a PEM file to use it in python-crypto (signature).
python-openssl can load a PEM file but the PKey object can't be used to retrieved key information (p, q, ...) to use with Crypto.PublicKey.construct().
| [
"I recommend M2Crypto instead of python-crypto. You will need M2Crypto to parse PEM anyway and its EVP api frees your code from depending on a particular algorithm.\nprivate = \"\"\"\n-----BEGIN RSA PRIVATE KEY-----\nMIIBOwIBAAJBANQNY7RD9BarYRsmMazM1hd7a+u3QeMPFZQ7Ic+BmmeWHvvVP4Yj\nyu1t6vAut7mKkaDeKbT3yiGVUgAEUaWMXqECAwEAAQJAIHCz8h37N4ScZHThYJgt\noIYHKpZsg/oIyRaKw54GKxZq5f7YivcWoZ8j7IQ65lHVH3gmaqKOvqdAVVt5imKZ\nKQIhAPPsr9i3FxU+Mac0pvQKhFVJUzAFfKiG3ulVUdHgAaw/AiEA3ozHKzfZWKxH\ngs8v8ZQ/FnfI7DwYYhJC0YsXb6NSvR8CIHymwLo73mTxsogjBQqDcVrwLL3GoAyz\nV6jf+/8HvXMbAiEAj1b3FVQEboOQD6WoyJ1mQO9n/xf50HjYhqRitOnp6ZsCIQDS\nAvkvYKc6LG8IANmVv93g1dyKZvU/OQkAZepqHZB2MQ==\n-----END RSA PRIVATE KEY-----\n\"\"\" \nmessage = \"python-crypto sucks\"\n\n# Grab RSA parameters e, n\nfrom M2Crypto import RSA, BIO\nbio = BIO.MemoryBuffer(private)\nrsa = RSA.load_key_bio(bio)\nn, e = rsa.n, rsa.e\n\n# In Python-crypto:\nimport Crypto.PublicKey.RSA\npycrypto_key = Crypto.PublicKey.RSA.construct((n, e))\n\n# Use EVP api to sign message\nfrom M2Crypto import EVP\nkey = EVP.load_key_string(private)\n# if you need a different digest than the default 'sha1':\nkey.reset_context(md='sha256')\nkey.sign_init()\nkey.sign_update(message)\nsignature = key.sign_final()\n\n# Use EVP api to verify signature\npublic = \"\"\"\n-----BEGIN PUBLIC KEY-----\nMFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBANQNY7RD9BarYRsmMazM1hd7a+u3QeMP\nFZQ7Ic+BmmeWHvvVP4Yjyu1t6vAut7mKkaDeKbT3yiGVUgAEUaWMXqECAwEAAQ==\n-----END PUBLIC KEY-----\n\"\"\" \nfrom M2Crypto import BIO, RSA, EVP\nbio = BIO.MemoryBuffer(public)\nrsa = RSA.load_pub_key_bio(bio)\npubkey = EVP.PKey()\npubkey.assign_rsa(rsa)\npubkey.reset_context(md=\"sha256\")\npubkey.verify_init()\npubkey.verify_update(message)\nassert pubkey.verify_final(signature) == 1\n\nSee http://svn.osafoundation.org/m2crypto/trunk/tests/test_rsa.py, but I prefer using the algorithm-independent EVP API http://svn.osafoundation.org/m2crypto/trunk/tests/test_evp.py.\nHow do you verify an RSA SHA1 signature in Python? addresses a similar issue.\n",
"is this (close to) what you tried doing?\npublic_key_filename = 'public_key.pem'\nrsa = M2Crypto.RSA.load_pub_key(pk)\n\nThat should work. The issue might be with openssl too, does it work when you just use openssl (not in Python)?\nLink to Me Too Crypto\n"
] | [
15,
7
] | [] | [] | [
"cryptography",
"python"
] | stackoverflow_0000595114_cryptography_python.txt |
Q:
Django Override form
Another question on some forms
Here is my model
class TankJournal(models.Model):
user = models.ForeignKey(User)
tank = models.ForeignKey(TankProfile)
ts = models.IntegerField(max_length=15)
title = models.CharField(max_length=50)
body = models.TextField()
Here is my modelform
class JournalForm(ModelForm):
tank = forms.IntegerField(widget=forms.HiddenInput())
class Meta:
model = TankJournal
exclude = ('user','ts')
Here is my method to save it
def addJournal(request, id=0):
if not request.user.is_authenticated():
return HttpResponseRedirect('/')
#
# checking if they own the tank
#
from django.contrib.auth.models import User
user = User.objects.get(pk=request.session['id'])
if request.method == 'POST':
form = JournalForm(request.POST)
if form.is_valid():
obj = form.save(commit=False)
#
# setting the user and ts
#
from time import time
obj.ts = int(time())
obj.user = user
obj.tank = TankProfile.objects.get(pk=form.cleaned_data['tank'])
#
# saving the test
#
obj.save()
else:
print form.errors
else:
form = JournalForm(initial={'tank': id})
When it saves, it complains that tank is not a TankProfile but an Integer. How can I override the form object to make tank a TankProfile?
thanks
A:
I think you want this:
class JournalForm(ModelForm):
tank = forms.ModelChoiceField(label="",
queryset=TankProfile.objects.all(),
widget=forms.HiddenInput)
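A nice side effect: with a ModelChoiceField, form.cleaned_data['tank'] already comes back as a TankProfile instance, so the explicit TankProfile.objects.get(...) lookup in the view becomes unnecessary.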
A:
Why are you overriding the definition of tank?
class JournalForm(ModelForm):
tank = forms.IntegerField(widget=forms.HiddenInput())
If you omit this override, Django handles the foreign key reference for you.
"how can I override the form object to make tank a TankProfile?"
I don't understand this question, since it looks like you specifically overrode the form to prevent the foreign key from working.
| Django Override form | Another question on some forms
Here is my model
class TankJournal(models.Model):
user = models.ForeignKey(User)
tank = models.ForeignKey(TankProfile)
ts = models.IntegerField(max_length=15)
title = models.CharField(max_length=50)
body = models.TextField()
Here is my modelform
class JournalForm(ModelForm):
tank = forms.IntegerField(widget=forms.HiddenInput())
class Meta:
model = TankJournal
exclude = ('user','ts')
Here is my method to save it
def addJournal(request, id=0):
if not request.user.is_authenticated():
return HttpResponseRedirect('/')
#
# checking if they own the tank
#
from django.contrib.auth.models import User
user = User.objects.get(pk=request.session['id'])
if request.method == 'POST':
form = JournalForm(request.POST)
if form.is_valid():
obj = form.save(commit=False)
#
# setting the user and ts
#
from time import time
obj.ts = int(time())
obj.user = user
obj.tank = TankProfile.objects.get(pk=form.cleaned_data['tank'])
#
# saving the test
#
obj.save()
else:
print form.errors
else:
form = JournalForm(initial={'tank': id})
When it saves, it complains that tank is not a TankProfile but an Integer. How can I override the form object to make tank a TankProfile?
thanks
| [
"I think you want this:\nclass JournalForm(ModelForm):\n tank = forms.ModelChoiceField(label=\"\",\n queryset=TankProfile.objects.all(),\n widget=forms.HiddenInput)\n\n",
"Why are you overriding the definition of tank?\nclass JournalForm(ModelForm):\n tank = forms.IntegerField(widget=forms.HiddenInput()) \n\nIf you omit this override, Django handles the foreign key reference for you.\n\"how can I override the form object to make tank a TankProfile?\"\nI don't understand this question, since it looks like you specifically overrode the form to prevent the foreign key from working.\n"
] | [
5,
2
] | [] | [] | [
"django",
"forms",
"python"
] | stackoverflow_0000606946_django_forms_python.txt |
Q:
Preserving last new line when reading a file
I'm reading a file in Python where each record is separated by an empty new line. If the file ends in two or more new lines, the last record is processed as expected, but if the file ends in a single new line it's not processed. Here's the code:
def fread():
record = False
for line in open('somefile.txt'):
if line.startswith('Record'):
record = True
d = SomeObject()
# do some processing with line
d.process(line)
if not line.strip() and record:
yield d
record = False
for record in fread():
print(record)
In this data sample, everything works as expected ('\n' marks an empty line):
Record 1
data a
data b
data c
\n
Record 2
data a
data b
data c
\n
\n
But in this, the last record isn't returned:
Record 1
data a
data b
data c
\n
Record 2
data a
data b
data c
\n
How can I preserve the last new line from the file to get the last record?
PS.: I'm using the term "preserve" as I couldn't find a better name.
Thanks.
Edit
The original code was a stripped-down version, just to illustrate the problem, but it seems that I stripped too much. I have now posted the function's full code.
A little more explanation: a SomeObject is created for each record in the file, and the records are separated by empty new lines. At the end of each record it yields the object back so I can use it (save to a db, compare to other objects, etc.).
The main problem is that when the file ends in a single new line, the last record isn't yielded. It seems that Python does not read the last line when it's blank.
A:
The way it's written now probably doesn't work anyway; with d = SomeObject() inside your loop, a new SomeObject is being created for every line. Yet, if I understand correctly, what you want is for all of the lines in between empty lines to contribute to that one object. You could do something like this instead:
def fread():
d = None
for line in open('somefile.txt'):
if d is None:
d = SomeObject()
if line.strip():
# do some processing
else:
yield d
d = None
if d: yield d
This isn't great code, but it does work; that last object that misses its empty line is yielded when the loop is done.
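For reference, the same blank-line grouping can also be written with itertools.groupby. This is only a sketch, and it assumes SomeObject can be constructed from the list of lines for one record (which the question doesn't actually specify):
import itertools

def fread():
    # Group consecutive lines by blank/non-blank; every non-blank group
    # is one record, including a final record with no blank line after it.
    for is_blank, lines in itertools.groupby(open('somefile.txt'),
                                             lambda line: not line.strip()):
        if not is_blank:
            yield SomeObject(list(lines))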
A:
You might find a slight twist in a more classically pythonic direction improves the predictability of the code:
def fread():
for line in open('text.txt'):
if line.strip():
d = SomeObject()
yield d
raise StopIteration
for record in fread():
print record
The preferred way to end a generator in Python, though often not strictly necessary, is with the StopIteration exception. Using if line.strip() simply means that you'll do the yield if there's anything remaining in line after stripping whitespace. The construction of SomeObject() can be anywhere... I just happened to move it in case construction of SomeObject was expensive, or had side-effects that shouldn't happen if the line is empty.
EDIT: I'll leave my answer here for posterity's sake, but DNS below got the original intent right, where several lines contribute to the same SomeObject() record (which I totally glossed over).
A:
line.strip() will result in an empty string on an empty line. An empty string is False, so you swallow the empty line
>>> bool("\n".strip())
False
>>> bool("\n")
True
A:
If you call readline repeatedly (in a loop) on your file object (instead of using in) it should work as you expect. Compare these:
>>> x = open('/tmp/xyz')
>>> x.readline()
'x\n'
>>> x.readline()
'\n'
>>> x.readline()
'y\n'
>>> x.readline()
''
>>> open('/tmp/xyz').readlines()
['x\n', '\n', 'y\n']
A:
replace open('somefile.txt'): with open('somefile.txt').read().split('\n'): and your code will work.
But Jarret Hardie's answer is better.
| Preserving last new line when reading a file | I'm reading a file in Python where each record is separated by an empty new line. If the file ends in two or more new lines, the last record is processed as expected, but if the file ends in a single new line it's not processed. Here's the code:
def fread():
record = False
for line in open('somefile.txt'):
if line.startswith('Record'):
record = True
d = SomeObject()
# do some processing with line
d.process(line)
if not line.strip() and record:
yield d
record = False
for record in fread():
print(record)
In this data sample, everything works as expected ('\n' marks an empty line):
Record 1
data a
data b
data c
\n
Record 2
data a
data b
data c
\n
\n
But in this, the last record isn't returned:
Record 1
data a
data b
data c
\n
Record 2
data a
data b
data c
\n
How can I preserve the last new line from the file to get the last record?
PS.: I'm using the term "preserve" as I couldn't find a better name.
Thanks.
Edit
The original code was a stripped-down version, just to illustrate the problem, but it seems that I stripped too much. I have now posted the function's full code.
A little more explanation: a SomeObject is created for each record in the file, and the records are separated by empty new lines. At the end of each record it yields the object back so I can use it (save to a db, compare to other objects, etc.).
The main problem is that when the file ends in a single new line, the last record isn't yielded. It seems that Python does not read the last line when it's blank.
| [
"The way it's written now probably doesn't work anyway; with d = SomeObject() inside your loop, a new SomeObject is being created for every line. Yet, if I understand correctly, what you want is for all of the lines in between empty lines to contribute to that one object. You could do something like this instead:\ndef fread():\n d = None\n for line in open('somefile.txt'):\n\n if d is None:\n d = SomeObject()\n\n if line.strip():\n # do some processing\n else:\n yield d\n d = None\n\n if d: yield d\n\nThis isn't great code, but it does work; that last object that misses its empty line is yielded when the loop is done.\n",
"You might find a slight twist in a more classically pythonic direction improves the predicability of the code:\ndef fread():\n for line in open('text.txt'):\n if line.strip():\n d = SomeObject()\n yield d\n\n raise StopIteration\n\nfor record in fread():\n print record\n\nThe preferred way to end a generator in Python, though often not strictly necessary, is with the StopIteration exception. Using if line.strip() simply means that you'll do the yield if there's anything remaining in line after stripping whitespace. The construction of SomeObject() can be anywhere... I just happened to move it in case construction of SomeObject was expensive, or had side-effects that shouldn't happen if the line is empty.\nEDIT: I'll leave my answer here for posterity's sake, but DNS below got the original intent right, where several lines contribute to the same SomeObject() record (which I totally glossed over).\n",
"line.strip() will result in an empty string on an empty line. An empty string is False, so you swallow the empty line\n>>> bool(\"\\n\".strip())\nFalse\n>>> bool(\"\\n\")\nTrue\n\n",
"If you call readline repeatedly (in a loop) on your file object (instead of using in) it should work as you expect. Compare these:\n>>> x = open('/tmp/xyz')\n>>> x.readline()\n'x\\n'\n>>> x.readline()\n'\\n'\n>>> x.readline()\n'y\\n'\n>>> x.readline()\n''\n>>> open('/tmp/xyz').readlines()\n['x\\n', '\\n', 'y\\n']\n\n",
"replace open('somefile.txt'): with open('somefile.txt').read().split('\\n'): and your code will work.\n\nBut Jarret Hardie's answer is better.\n"
] | [
6,
5,
0,
0,
0
] | [] | [] | [
"file",
"python"
] | stackoverflow_0000607375_file_python.txt |
Q:
Integrate postfix mail into my (python)webapp
I have a postfix server listening and receiving all emails received at mywebsite.com. Now I want to show these postfix emails in a customized interface, and separately for each user.
To be clear, all the users of mywebsite.com will be given mail addresses like someguy@mywebsite.com; the mail is received on my production machine, but each user sees it in his own console built into his dashboard at mywebsite.com.
So to make the user see the mail he received, I need to create an email replica of the postfix mail so that mywebsite (which runs on Django/Python) can readily reflect it. How do I achieve this? To be precise, this is my question: how do I convert a postfix mail to a Python mail object (so that my system/website understands it)?
Just to be clear, I have written pseudocode to achieve what I want:
email_as_python_object = postfix_email_convertor(postfix_email)
attachments_list = email_as_python_object.attachments
body = email_as_python_object.body # be it html or whatever
And by the way, I have tried the default email module which comes with Python, but that's not handy for all the cases. I also need to deal with mail attachments manually (which I hate). I just need a simple way to deal with cases like these (I was wondering how Postfix understands a received email, i.e. how it automatically figures out the different headers, attachments, etc.). Please help me.
A:
You want to have postfix deliver to a local mailbox, and then use a webmail system for people to access that stored mail.
Don't get hung up on postfix - it's just a transfer agent - it takes messages from one place, and puts them somewhere else; it doesn't store messages.
So postfix will take the messages over SMTP, and put them in local mail files.
Then IMAP or some webmail system will display those messages to your users.
If you want the mail integrated in your webapp, then you should probably run an IMAP server, and use python IMAP libraries to get the messages.
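If you go the local-mailbox route rather than IMAP, Python's standard library mailbox module can read Maildir-style storage directly. A rough sketch — the path below is an assumption, not something from your setup:
import mailbox

# Wherever Postfix delivers this user's mail; adjust to your config.
md = mailbox.Maildir('/var/mail/vhosts/mywebsite.com/someguy', factory=None)
for key, msg in md.items():
    # msg behaves like an email.message.Message: headers and
    # attachments are already parsed for you.
    print msg['From'], msg['Subject']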
A:
First of all, Postfix mail routing rules can be very complex and your presumably preferred solution involves a lot of trickery in the wrong places. You do not want to accidentally show some user another's mails, do you? Second, although Postfix can do almost anything, it shouldn't, as it is only an MTA (mail transfer agent).
Your problem is best solved by using a POP3 or IMAP server (Cyrus IMAPd, Courier, etc.). IMAP servers can have "superuser accounts" that can read the mails of all users. Your web application can then connect to the user's mailbox and retrieve the headers and bodies.
If you only want to show the subject line you can fetch just those with a special IMAP command and very low overhead. The Python IMAP library does not have the easiest-to-understand API, though. I'll give it a shot (not checked!) with an example adapted from the standard library:
import imaplib
sess = imaplib.IMAP4()
sess.login('superuser', 'password')
# Honor the mailbox syntax of your server!
sess.select('INBOX/Luke') # Or something similar.
typ, data = sess.search(None, 'ALL') # All Messages.
subjectlines = []
for num in data[0].split():
typ, msgdata = sess.fetch(num, '(RFC822.SIZE BODY[HEADER.FIELDS (SUBJECT)])')
    subject = msgdata[0][1].replace('Subject:', '', 1).strip()  # lstrip() strips characters, not a prefix
subjectlines.append(subject)
This logs into the IMAP server, selects the users mailbox, fetches all the message-ids then fetches (hopefully) only the subjectlines and appends the resulting data onto the subjectlines list.
To fetch other parts of the mail vary the line with sess.fetch. For the specific syntax of fetch have a look at RFC 2060 (Section 6.4.5).
Good luck!
A:
I'm not sure that I understand the question.
If you want your remote web application to be able to view users' mailbox, you could install a pop or imap server and use a mail client (you should be able to find one off the shelf) to read the emails. Alternatively, you could write something to interrogate the pop/imap server using the relevant libraries that come with Python itself.
If you want to replicate the mail to another machine, you could use procmail and set up actions to do this. Postfix can be set up to invoke procmail in this wayy.
| Integrate postfix mail into my (python)webapp | I have a postfix server listening and receiving all emails received at mywebsite.com. Now I want to show these postfix emails in a customized interface, and separately for each user.
To be clear, all the users of mywebsite.com will be given mail addresses like someguy@mywebsite.com; the mail is received on my production machine, but each user sees it in his own console built into his dashboard at mywebsite.com.
So to make the user see the mail he received, I need to create an email replica of the postfix mail so that mywebsite (which runs on Django/Python) can readily reflect it. How do I achieve this? To be precise, this is my question: how do I convert a postfix mail to a Python mail object (so that my system/website understands it)?
Just to be clear, I have written pseudocode to achieve what I want:
email_as_python_object = postfix_email_convertor(postfix_email)
attachments_list = email_as_python_object.attachments
body = email_as_python_object.body # be it html or whatever
And by the way, I have tried the default email module which comes with Python, but that's not handy for all the cases. I also need to deal with mail attachments manually (which I hate). I just need a simple way to deal with cases like these (I was wondering how Postfix understands a received email, i.e. how it automatically figures out the different headers, attachments, etc.). Please help me.
| [
"You want to have postfix deliver to a local mailbox, and then use a webmail system for people to access that stored mail.\nDon't get hung up on postfix - it just a transfer agent - it takes messages from one place, and puts them somewhere else, it doesn't store messages.\nSo postfix will take the messages over SMTP, and put them in local mail files.\nThen IMAP or some webmail system will display those messages to your users.\nIf you want the mail integrated in your webapp, then you should probably run an IMAP server, and use python IMAP libraries to get the messages.\n",
"First of all, Postfix mail routing rules can be very complex and your presumably preferred solution involves a lot of trickery in the wrong places. You do not want to accidentally show some user anothers mails, do you? Second, although Postfix can do almost anything, it shouldn't as it only is a MDA (mail delivery agent).\nYour solution is best solved by using a POP3 or IMAP server (Cyrus IMAPd, Courier, etc). IMAP servers can have \"superuser accounts\" who can read mails of all users. Your web application can then connect to the users mailbox and retreive the headers and bodys.\nIf you only want to show the subject-line you can fetch those with a special IMAP command and very low overhead. The Python IMAP library has not the easiest to understand API though. I'll give it a shot (not checked!) with an example taken from the standard library:\nimport imaplib\n\nsess = imaplib.IMAP4()\nsess.login('superuser', 'password')\n# Honor the mailbox syntax of your server!\nsess.select('INBOX/Luke') # Or something similar. \ntyp, data = sess.search(None, 'ALL') # All Messages.\n\nsubjectlines = []\nfor num in data[0].split():\n typ, msgdata = sess.fetch(num, '(RFC822.SIZE BODY[HEADER.FIELDS (SUBJECT)])')\n subject = msgdata[0][1].lstrip('Subject: ').strip()\n subjectlines.append(subject)\n\nThis logs into the IMAP server, selects the users mailbox, fetches all the message-ids then fetches (hopefully) only the subjectlines and appends the resulting data onto the subjectlines list.\nTo fetch other parts of the mail vary the line with sess.fetch. For the specific syntax of fetch have a look at RFC 2060 (Section 6.4.5).\nGood luck!\n",
"I'm not sure that I understand the question.\nIf you want your remote web application to be able to view users' mailbox, you could install a pop or imap server and use a mail client (you should be able to find one off the shelf) to read the emails. Alternatively, you could write something to interrogate the pop/imap server using the relevant libraries that come with Python itself.\nIf you want to replicate the mail to another machine, you could use procmail and set up actions to do this. Postfix can be set up to invoke procmail in this wayy.\n"
] | [
9,
7,
0
] | [] | [] | [
"email",
"message",
"postfix_mta",
"python"
] | stackoverflow_0000607548_email_message_postfix_mta_python.txt |
Q:
Python parsing
I'm trying to parse the title tag in an RSS 2.0 feed into three different variables for each entry in that feed. Using ElementTree I've already parsed the RSS so that I can print each title [minus the trailing )] with the code below:
feed = getfeed("http://www.tourfilter.com/dallas/rss/by_concert_date")
for item in feed:
print repr(item.title[0:-1])
I include that because, as you can see, the item.title is a repr() data type, which I don't know much about.
A particular repr(item.title[0:-1]) printed in the interactive window looks like this:
'randy travis (Billy Bobs 3/21'
'Michael Schenker Group (House of Blues Dallas 3/26'
The user selects a band and I hope to, after parsing each item.title into 3 variables (one each for band, venue, and date... or possibly an array or I don't know...) select only those related to the band selected. Then they are sent to Google for geocoding, but that's another story.
I've seen some examples of regex and I'm reading about them, but it seems very complicated. Is it? I thought maybe someone here would have some insight as to exactly how to do this in an intelligent way. Should I use the re module? Does it matter that the output is currently repr()s? Is there a better way? I was thinking I'd use a loop like (and this is my pseudoPython, just kind of notes I'm writing):
list = bandRaw,venue,date,latLong
for item in feed:
parse item.title for bandRaw, venue, date
if bandRaw == str(band)
send venue name + ", Dallas, TX" to google for geocoding
return lat,long
list = list + return character + bandRaw + "," + venue + "," + date + "," + lat + "," + long
else
In the end, I need to have the chosen entries in a .csv (comma-delimited) file looking like this:
band,venue,date,lat,long
randy travis,Billy Bobs,3/21,1234.5678,1234.5678
Michael Schenker Group,House of Blues Dallas,3/26,4321.8765,4321.8765
I hope this isn't too much to ask. I'll be looking into it on my own, just thought I should post here to make sure it got answered.
So, the question is, how do I best parse each repr(item.title[0:-1]) in the feed into the 3 separate values that I can then concatenate into a .csv file?
A:
Don't let regex scare you off... it's well worth learning.
Given the examples above, you might try putting the trailing parenthesis back in, and then using this pattern:
import re
pat = re.compile('([\w\s]+)\(([\w\s]+)(\d+/\d+)\)')
info = pat.match(s)
print info.groups()
('Michael Schenker Group ', 'House of Blues Dallas ', '3/26')
To get at each group individual, just call them on the info object:
print info.group(1) # or info.groups()[0]
print '"%s","%s","%s"' % (info.group(1), info.group(2), info.group(3))
"Michael Schenker Group","House of Blues Dallas","3/26"
The hard thing about regex in this case is making sure you know all the known possible characters in the title. If there are non-alpha chars in the 'Michael Schenker Group' part, you'll have to adjust the regex for that part to allow them.
The pattern above breaks down as follows, which is parsed left to right:
([\w\s]+) : Match any word or space characters (the plus symbol indicates that there should be one or more such characters). The parentheses mean that the match will be captured as a group. This is the "Michael Schenker Group " part. If there can be numbers and dashes here, you'll want to modify the pieces between the square brackets, which are the possible characters for the set.
\( : A literal parenthesis. The backslash escapes the parenthesis, since otherwise it counts as a regex command. This is the "(" part of the string.
([\w\s]+) : Same as the one above, but this time matches the "House of Blues Dallas " part. In parentheses so they will be captured as the second group.
(\d+/\d+) : Matches the digits 3 and 26 with a slash in the middle. In parentheses so they will be captured as the third group.
\) : Closing parenthesis for the above.
The python intro to regex is quite good, and you might want to spend an evening going over it http://docs.python.org/library/re.html#module-re. Also, check Dive Into Python, which has a friendly introduction: http://diveintopython3.ep.io/regular-expressions.html.
EDIT: See zacherates below, who has some nice edits. Two heads are better than one!
A:
Regular expressions are a great solution to this problem:
>>> import re
>>> s = 'Michael Schenker Group (House of Blues Dallas 3/26'
>>> re.match(r'(.*) \((.*) (\d+/\d+)', s).groups()
('Michael Schenker Group', 'House of Blues Dallas', '3/26')
As a side note, you might want to look at the Universal Feed Parser for handling the RSS parsing as feeds have a bad habit of being malformed.
Edit
In regards to your comment... The strings occasionally being wrapped in "s rather than 's has to do with the fact that you're using repr. The repr of a string is usually delimited with 's, unless that string contains one or more 's, where instead it uses "s so that the 's don't have to be escaped:
>>> "Hello there"
'Hello there'
>>> "it's not its"
"it's not its"
Notice the different quote styles.
A:
Regarding the repr(item.title[0:-1]) part, not sure where you got that from but I'm pretty sure you can simply use item.title. All you're doing is removing the last char from the string and then calling repr() on it, which does nothing.
Your code should look something like this:
import geocoders # from GeoPy
us = geocoders.GeocoderDotUS()
import feedparser # from www.feedparser.org
feedurl = "http://www.tourfilter.com/dallas/rss/by_concert_date"
feed = feedparser.parse(feedurl)
lines = []
for entry in feed.entries:
m = re.search(r'(.*) \((.*) (\d+/\d+)\)', entry.title)
if m:
bandRaw, venue, date = m.groups()
if band == bandRaw:
place, (lat, lng) = us.geocode(venue + ", Dallas, TX")
            lines.append(",".join([band, venue, date, str(lat), str(lng)]))  # lat/lng are floats
result = "\n".join(lines)
EDIT: replaced list with lines as the var name. list is a builtin and should not be used as a variable name. Sorry.
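One more hedged suggestion: since the goal is a .csv file, the csv module handles quoting and commas for you. A sketch, assuming rows holds (band, venue, date, lat, lng) tuples instead of the pre-joined strings above:
import csv

writer = csv.writer(open('concerts.csv', 'wb'))  # example filename
writer.writerow(['band', 'venue', 'date', 'lat', 'long'])
for row in rows:
    writer.writerow(row)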
| Python parsing | I'm trying to parse the title tag in an RSS 2.0 feed into three different variables for each entry in that feed. Using ElementTree I've already parsed the RSS so that I can print each title [minus the trailing )] with the code below:
feed = getfeed("http://www.tourfilter.com/dallas/rss/by_concert_date")
for item in feed:
print repr(item.title[0:-1])
I include that because, as you can see, the item.title is a repr() data type, which I don't know much about.
A particular repr(item.title[0:-1]) printed in the interactive window looks like this:
'randy travis (Billy Bobs 3/21'
'Michael Schenker Group (House of Blues Dallas 3/26'
The user selects a band and I hope to, after parsing each item.title into 3 variables (one each for band, venue, and date... or possibly an array or I don't know...) select only those related to the band selected. Then they are sent to Google for geocoding, but that's another story.
I've seen some examples of regex and I'm reading about them, but it seems very complicated. Is it? I thought maybe someone here would have some insight as to exactly how to do this in an intelligent way. Should I use the re module? Does it matter that the output is currently repr()s? Is there a better way? I was thinking I'd use a loop like (and this is my pseudoPython, just kind of notes I'm writing):
list = bandRaw,venue,date,latLong
for item in feed:
parse item.title for bandRaw, venue, date
if bandRaw == str(band)
send venue name + ", Dallas, TX" to google for geocoding
return lat,long
list = list + return character + bandRaw + "," + venue + "," + date + "," + lat + "," + long
else
In the end, I need to have the chosen entries in a .csv (comma-delimited) file looking like this:
band,venue,date,lat,long
randy travis,Billy Bobs,3/21,1234.5678,1234.5678
Michael Schenker Group,House of Blues Dallas,3/26,4321.8765,4321.8765
I hope this isn't too much to ask. I'll be looking into it on my own, just thought I should post here to make sure it got answered.
So, the question is, how do I best parse each repr(item.title[0:-1]) in the feed into the 3 separate values that I can then concatenate into a .csv file?
| [
"Don't let regex scare you off... it's well worth learning.\nGiven the examples above, you might try putting the trailing parenthesis back in, and then using this pattern:\nimport re\npat = re.compile('([\\w\\s]+)\\(([\\w\\s]+)(\\d+/\\d+)\\)')\ninfo = pat.match(s)\nprint info.groups()\n\n('Michael Schenker Group ', 'House of Blues Dallas ', '3/26')\n\nTo get at each group individual, just call them on the info object:\nprint info.group(1) # or info.groups()[0]\n\nprint '\"%s\",\"%s\",\"%s\"' % (info.group(1), info.group(2), info.group(3))\n\"Michael Schenker Group\",\"House of Blues Dallas\",\"3/26\"\n\nThe hard thing about regex in this case is making sure you know all the known possible characters in the title. If there are non-alpha chars in the 'Michael Schenker Group' part, you'll have to adjust the regex for that part to allow them.\nThe pattern above breaks down as follows, which is parsed left to right:\n([\\w\\s]+) : Match any word or space characters (the plus symbol indicates that there should be one or more such characters). The parentheses mean that the match will be captured as a group. This is the \"Michael Schenker Group \" part. If there can be numbers and dashes here, you'll want to modify the pieces between the square brackets, which are the possible characters for the set.\n\\( : A literal parenthesis. The backslash escapes the parenthesis, since otherwise it counts as a regex command. This is the \"(\" part of the string.\n([\\w\\s]+) : Same as the one above, but this time matches the \"House of Blues Dallas \" part. In parentheses so they will be captured as the second group.\n(\\d+/\\d+) : Matches the digits 3 and 26 with a slash in the middle. In parentheses so they will be captured as the third group.\n\\) : Closing parenthesis for the above.\nThe python intro to regex is quite good, and you might want to spend an evening going over it http://docs.python.org/library/re.html#module-re. Also, check Dive Into Python, which has a friendly introduction: http://diveintopython3.ep.io/regular-expressions.html.\nEDIT: See zacherates below, who has some nice edits. Two heads are better than one!\n",
"Regular expressions are a great solution to this problem:\n>>> import re\n>>> s = 'Michael Schenker Group (House of Blues Dallas 3/26'\n>>> re.match(r'(.*) \\((.*) (\\d+/\\d+)', s).groups()\n('Michael Schenker Group', 'House of Blues Dallas', '3/26')\n\nAs a side note, you might want to look at the Universal Feed Parser for handling the RSS parsing as feeds have a bad habit of being malformed.\nEdit\nIn regards to your comment... The strings occasionally being wrapped in \"s rather than 's has to do with the fact that you're using repr. The repr of a string is usually delimited with 's, unless that string contains one or more 's, where instead it uses \"s so that the 's don't have to be escaped:\n>>> \"Hello there\"\n'Hello there'\n>>> \"it's not its\"\n\"it's not its\"\n\nNotice the different quote styles.\n",
"Regarding the repr(item.title[0:-1]) part, not sure where you got that from but I'm pretty sure you can simply use item.title. All you're doing is removing the last char from the string and then calling repr() on it, which does nothing.\nYour code should look something like this:\nimport geocoders # from GeoPy\nus = geocoders.GeocoderDotUS()\n\nimport feedparser # from www.feedparser.org\nfeedurl = \"http://www.tourfilter.com/dallas/rss/by_concert_date\"\nfeed = feedparser.parse(feedurl)\n\nlines = []\nfor entry in feed.entries:\n m = re.search(r'(.*) \\((.*) (\\d+/\\d+)\\)', entry.title) \n if m:\n bandRaw, venue, date = m.groups()\n\n if band == bandRaw:\n place, (lat, lng) = us.geocode(venue + \", Dallas, TX\")\n lines.append(\",\".join([band, venue, date, lat, lng]))\n\nresult = \"\\n\".join(lines)\n\nEDIT: replaced list with lines as the var name. list is a builtin and should not be used as a variable name. Sorry.\n"
] | [
17,
7,
0
] | [] | [] | [
"parsing",
"python",
"regex",
"text_parsing"
] | stackoverflow_0000607760_parsing_python_regex_text_parsing.txt |
Q:
Model and Validation Confusion - Looking for advice
I'm somewhat new to Python and Django, and I'd like some advice on how to lay out the code I'd like to write.
I have the model written that allows a file to be uploaded. In the model's save method I'm checking whether the file has a specific extension. If it has an XML extension I'm opening the file and grabbing some information from it to save in the database. I have this model working. I've tested it in the built-in administration. It works.
Currently, when there's an error (it's not an XML file; the file can't be opened; a specific attribute doesn't exist) I'm throwing a custom "Exception" error. What I would like to do is somehow pass these "Exception" error messages to the view (whether that's a custom view or the built-in administration view) and have an error message displayed as if the forms library were being used. Is that possible?
I'm starting to think I'm going to have to write the validation checks again using the forms library. If that's the case, is it possible to still use the built-in administration template, but extend the form it uses to add these custom validations?
Anything to help my confusion would be appreciated.
UPDATE:
Here's my model so far, for those who are asking, "nzb" is the XML file field.
http://dpaste.com/hold/6101/
The admin interface will use the Form you associate with your model; your own views can also use the form.
This is exactly what I'd like to do. However, I don't know how to associate my forms with my models. Whenever I've created forms in the past they've always acted as their own entity. I could never get the administration views to use them while using the ModelForm class. Can you shed any light on this?
I've read over the link you gave me and it seems to be what I've done in the past, with no luck.
Getting attributes from the file, should probably be a method.
Sorry, could you please elaborate on this? A method where?
UPDATE:
It seems I've been completely missing this step to link a form to the administration view.
http://docs.djangoproject.com/en/dev/ref/contrib/admin/#adding-custom-validation-to-the-admin
This should now allow me to do the validation in a Form. However, I'm still confused about how to actually handle the validation. S.Lott says it should be a method?
A:
The Form errors are automatically part of the administrative view.
See http://docs.djangoproject.com/en/dev/ref/forms/validation/#ref-forms-validation
You're happiest if you validate in a Form -- that's what Forms are for. The admin interface will use the Form you associate with your model; your own views can also use the form.
Getting attributes from the file, should probably be a separate method of the model class. The separate method of the model class can be used by the save() method of the model class or invoked at other times by view functions.
"I could never get the administration views to use them while using the ModelForm class."
http://docs.djangoproject.com/en/dev/ref/contrib/admin/#form
http://docs.djangoproject.com/en/dev/ref/contrib/admin/#adding-custom-validation-to-the-admin
"I'm still confused about how to actually handle the validation. S.Lott says it should be a method?"
Validation in a form is done with a clean() method or a clean_somefield() method.
The "Adding custom validation to the admin" link (above) shows how to add the clean_name method to the "MyArticleAdminForm" form.
If you're still confused, try actually typing in the code from the Django web page and see what it does.
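To make that concrete, here is a minimal sketch of such a form. The nzb field name comes from the question; the model name and the exact checks are assumptions:
from xml.etree import ElementTree
from django import forms

class NzbUploadForm(forms.ModelForm):
    class Meta:
        model = NzbUpload  # placeholder for your model

    def clean_nzb(self):
        nzb = self.cleaned_data['nzb']
        if not nzb.name.lower().endswith('.xml'):
            raise forms.ValidationError('Only XML files are accepted.')
        try:
            ElementTree.parse(nzb)
        except Exception:
            raise forms.ValidationError('The file could not be parsed as XML.')
        nzb.seek(0)  # rewind so the full file is saved afterwards
        return nzb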
A:
I guess the best way would be to implement a special field class that extends FileField with custom validation of the uploaded file.
The validation is implemented in the field's clean method. It should check the XML file and raise ValidationErrors if it encounters errors. The admin system should then treat your custom errors like any other field errors.
The ImageField class is a good example of special validation like this — I recommend just reading through the source.
A:
You can provide a form that will be used by the admin site. You can then perform validations in the form code that will be displayed in the admin area.
See the docs on the admin site, and in particular the form attribute of ModelAdmin.
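A two-line sketch of the wiring, with placeholder names matching the form above:
from django.contrib import admin

class NzbUploadAdmin(admin.ModelAdmin):
    form = NzbUploadForm  # the admin now runs your form's validation

admin.site.register(NzbUpload, NzbUploadAdmin)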
A:
"I'm throwing an custom "Exception" error " - Where exactly are you throwing the exception ? In your model or in your view ?
I am confused with your question, so I am assuming that you should be asking 'Where should I catch input errors if any ? ' to yourself.
The Model and View as I see are like pieces in a small assembly line.
View/ Form validation is the first action which should be performed. If there is any issue with the input data through the forms. It should be prevented at the form level using form.is_valid() etc.
The models functionality should be to provide meta information about the entity itself apart from performing CRUD. Ideally it should not be bothered about the data it is getting for the CRUD operations.
| Model and Validation Confusion - Looking for advice | I'm somewhat new to Python and Django, and I'd like some advice on how to lay out the code I'd like to write.
I have the model written that allows a file to be uploaded. In the model's save method I'm checking whether the file has a specific extension. If it has an XML extension I'm opening the file and grabbing some information from it to save in the database. I have this model working. I've tested it in the built-in administration. It works.
Currently, when there's an error (it's not an XML file; the file can't be opened; a specific attribute doesn't exist) I'm throwing a custom "Exception" error. What I would like to do is somehow pass these "Exception" error messages to the view (whether that's a custom view or the built-in administration view) and have an error message displayed as if the forms library were being used. Is that possible?
I'm starting to think I'm going to have to write the validation checks again using the forms library. If that's the case, is it possible to still use the built-in administration template, but extend the form it uses to add these custom validations?
Anything to help my confusion would be appreciated.
UPDATE:
Here's my model so far, for those who are asking, "nzb" is the XML file field.
http://dpaste.com/hold/6101/
The admin interface will use the Form you associate with your model; your own views can also use the form.
This is exactly what I'd like to do. However, I don't know how to associate my forms with my models. Whenever I've created forms in the past they've always acted as their own entity. I could never get the administration views to use them while using the ModelForm class. Can you shed any light on this?
I've read over the link you gave me and it seems to be what I've done in the past, with no luck.
Getting attributes from the file, should probably be a method.
Sorry, could you please elaborate on this? A method where?
UPDATE:
It seems I've been completely missing this step to link a form to the administration view.
http://docs.djangoproject.com/en/dev/ref/contrib/admin/#adding-custom-validation-to-the-admin
This should now allow me to do the validation in a Form. However, I'm still confused about how to actually handle the validation. S.Lott says it should be a method?
| [
"The Form errors are automatically part of the administrative view.\nSee http://docs.djangoproject.com/en/dev/ref/forms/validation/#ref-forms-validation\nYou're happiest if you validate in a Form -- that's what Forms are for. The admin interface will use the Form you associate with your model; your own views can also use the form.\nGetting attributes from the file, should probably be a separate method of the model class. The separate method of the model class can be used by the save() method of the model class or invoked at other times by view functions.\n\n\"I could never get the administration views to use them while using the ModelForm class.\"\nhttp://docs.djangoproject.com/en/dev/ref/contrib/admin/#form\nhttp://docs.djangoproject.com/en/dev/ref/contrib/admin/#adding-custom-validation-to-the-admin\n\n\"I'm still confused about how to actually handle the validation. S.Lott says it should be a method?\"\nValidation in a form is done with a clean() method or a clean_somefield() method.\nThe \"Adding custom validation to the admin\" link (above) shows how to add the clean_name method to the \"MyArticleAdminForm\" form. \nIf you're still confused, trying actually typing the code from the Django web page and see what it does.\n",
"I guess the best way would be to implement a special field class that extends FileField with custom validation of the uploaded file. \nThe validation is implemented in the field's clean method. It should check the XML file and raise ValidationErrors if it encounters errors. The admin system should then treat your custom errors like any other field errors. \nThe ImageField class is a good example of special validation like this — I recommend just reading through the source.\n",
"You can provide a form that will be used by the admin site. You can then perform validations in the form code that will be displayed in the admin area.\nSee the docs on the admin site, and in particular the form attribute of ModelAdmin.\n",
"\"I'm throwing an custom \"Exception\" error \" - Where exactly are you throwing the exception ? In your model or in your view ?\nI am confused with your question, so I am assuming that you should be asking 'Where should I catch input errors if any ? ' to yourself.\nThe Model and View as I see are like pieces in a small assembly line.\nView/ Form validation is the first action which should be performed. If there is any issue with the input data through the forms. It should be prevented at the form level using form.is_valid() etc.\nThe models functionality should be to provide meta information about the entity itself apart from performing CRUD. Ideally it should not be bothered about the data it is getting for the CRUD operations.\n"
] | [
4,
1,
1,
0
] | [] | [] | [
"django",
"python",
"validation"
] | stackoverflow_0000606782_django_python_validation.txt |
Q:
Django email
I am using the Gmail SMTP server to send out emails from users of my website.
These are the default settings in my settings.py
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_USER = 'example@example.com'
EMAIL_HOST_PASSWORD = 'pwd'
EMAIL_PORT = 587
EMAIL_USE_TLS = True
SERVER_EMAIL = EMAIL_HOST_USER
DEFAULT_FROM_EMAIL = EMAIL_HOST_USER
If I want a user to send an email, I am overriding these settings and sending the email using Django's email-sending methods. When an exception occurs in the system, I receive an email from example@example.com. Sometimes I receive an email from some logged-in user, which could also mean that when a user receives an email sent from my website, it has a sender address different from the actual user's.
What should be done to avoid this situation?
A:
Django only uses settings.DEFAULT_FROM_EMAIL when any of the mail sending functions pass None or empty string as the sender address. This can be verified in django/core/mail.py.
When there is an unhandled exception Django calls the mail_admins() function in django/core/mail.py which always uses settings.SERVER_EMAIL and is only sent to addresses listed in settings.ADMINS. This can also be verified in django/core/mail.py.
The only other place Django itself sends e-mails is if settings.SEND_BROKEN_LINK_EMAILS is True, then CommonMiddleware will send mail to all addresses listed in settings.MANAGERS and the e-mail sender is settings.SERVER_EMAIL.
Therefore, the only time a regular user will receive e-mail from your site is when you call send_mail(). So, always pass a real address as the from_email argument and you will avoid users receiving email from settings.SERVER_EMAIL or settings.DEFAULT_FROM_EMAIL.
Side note: django-registration is at least one example of a Django pluggable that will send mail from settings.DEFAULT_FROM_EMAIL so in cases like this you need to make sure it is a proper e-mail address such as support@yoursite.com or webmaster@yoursite.com.
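To illustrate, a minimal call with an explicit sender (all addresses here are placeholders):
from django.core.mail import send_mail

send_mail('Subject here', 'Message body.',
          'actual.user@example.com',    # shown to the recipient as the sender
          ['recipient@example.com'],
          fail_silently=False)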
| Django email | I am using the Gmail SMTP server to send out emails from users of my website.
These are the default settings in my settings.py
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_USER = 'example@example.com'
EMAIL_HOST_PASSWORD = 'pwd'
EMAIL_PORT = 587
EMAIL_USE_TLS = True
SERVER_EMAIL = EMAIL_HOST_USER
DEFAULT_FROM_EMAIL = EMAIL_HOST_USER
If I want a user to send an email, I am overriding these settings and sending the email using Django's email-sending methods. When an exception occurs in the system, I receive an email from example@example.com. Sometimes I receive an email from some logged-in user, which could also mean that when a user receives an email sent from my website, it has a sender address different from the actual user's.
What should be done to avoid this situation?
| [
"Django only uses settings.DEFAULT_FROM_EMAIL when any of the mail sending functions pass None or empty string as the sender address. This can be verified in django/core/mail.py.\nWhen there is an unhandled exception Django calls the mail_admins() function in django/core/mail.py which always uses settings.SERVER_EMAIL and is only sent to addresses listed in settings.ADMINS. This can also be verified in django/core/mail.py.\nThe only other place Django itself sends e-mails is if settings.SEND_BROKEN_LINK_EMAILS is True, then CommonMiddleware will send mail to all addresses listed in settings.MANAGERS and the e-mail sender is settings.SERVER_EMAIL.\nTherefore, the only time a regular user will receive e-mail from your site is when you call send_mail(). So, always pass a real address as the from_mail argument and you will avoid users receiving email from settings.SERVER_EMAIL or settings.DEFAULT_FROM_EMAIL.\nSide note: django-registration is at least one example of a Django pluggable that will send mail from settings.DEFAULT_FROM_EMAIL so in cases like this you need to make sure it is a proper e-mail address such as support@yoursite.com or webmaster@yoursite.com.\n"
] | [
23
] | [] | [] | [
"django",
"django_email",
"email",
"python"
] | stackoverflow_0000607819_django_django_email_email_python.txt |
Q:
What will I lose or gain from switching database APIs? (from pywin32 and pysqlite to QSql)
I am writing a Python (2.5) GUI Application that does the following:
Imports from Access to an Sqlite database
Saves ui form settings to an Sqlite database
Currently I am using pywin32 to read Access, and pysqlite2/dbapi2 to read/write Sqlite.
However, certain Qt objects don't automatically cast to Python or Sqlite equivalents when updating the Sqlite database. For example, a QDate, QDateTime, QString and others raise an error. Currently I am maintaining conversion functions.
I investigated using QSql, which appears to overcome the casting problem. In addition, it is able to connect to both Access and Sqlite. These two benefits would appear to allow me to refactor my code to use less modules and not maintain my own conversion functions.
What I am looking for is a list of important side-effects, performance gains/losses, functionality gains/losses that any of the SO community has experienced as a result from the switch to QSql.
One functionality loss I have experienced thus far is the inability to use Access functions using the QODBC driver (e.g., 'SELECT LCASE(fieldname) from tablename' fails, as does 'SELECT FORMAT(fieldname, "General Number") from tablename')
A:
When dealing with databases and PyQt UIs, I'll use something similar to the model-view-controller pattern to help organize and simplify the code.
View module
uses/holds any QObjects that are necessary for the UI
contains simple functions/methods for updating your Qt GUI objects, as well as extracting input from GUI objects
Controller module
will perform all DB interactions
the more complex code lives here
By using an MVC, you will not need to rely on the Qt library as much, and you will run into fewer problems linking Qt with Python.
So I guess my suggestion is to continue using pysqlite (since that's what you are used to), but refactor your design a little so the only thing dealing with the QT libraries is the UI. From the description of your GUI, it should be fairly straightforward.
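As one concrete example of keeping Qt types out of everything but the UI, a small conversion helper can sit at that boundary. This is only a sketch: toPyDate/toPyDateTime are PyQt4 method names, and QString only exists under PyQt4's older string API, so verify the equivalents for your binding:
from PyQt4 import QtCore

def to_python(value):
    # Convert common Qt value types before they reach pysqlite.
    if isinstance(value, QtCore.QDateTime):
        return value.toPyDateTime()
    if isinstance(value, QtCore.QDate):
        return value.toPyDate()
    if isinstance(value, QtCore.QString):
        return unicode(value)
    return value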
| What will I lose or gain from switching database APIs? (from pywin32 and pysqlite to QSql) | I am writing a Python (2.5) GUI Application that does the following:
Imports from Access to an Sqlite database
Saves ui form settings to an Sqlite database
Currently I am using pywin32 to read Access, and pysqlite2/dbapi2 to read/write Sqlite.
However, certain Qt objects don't automatically cast to Python or Sqlite equivalents when updating the Sqlite database. For example, a QDate, QDateTime, QString and others raise an error. Currently I am maintaining conversion functions.
I investigated using QSql, which appears to overcome the casting problem. In addition, it is able to connect to both Access and Sqlite. These two benefits would appear to allow me to refactor my code to use less modules and not maintain my own conversion functions.
What I am looking for is a list of important side-effects, performance gains/losses, functionality gains/losses that any of the SO community has experienced as a result from the switch to QSql.
One functionality loss I have experienced thus far is the inability to use Access functions using the QODBC driver (e.g., 'SELECT LCASE(fieldname) from tablename' fails, as does 'SELECT FORMAT(fieldname, "General Number") from tablename')
| [
"When dealing with databases and PyQt UIs, I'll use something similar to model-view-controller model to help organize and simplify the code. \nView module\n\nuses/holds any QObjects that are necessary\nfor the UI \ncontain simple functions/methods\nfor updating your QTGui Object, as\nwell as extracting input from GUI\nobjects\n\nController module\n\nwill perform all DB interactions\nthe more complex code lives here\n\nBy using a MVC, you will not need to rely on the QT Library as much, and you will run into less problems linking QT with Python.\nSo I guess my suggestion is to continue using pysqlite (since that's what you are used to), but refactor your design a little so the only thing dealing with the QT libraries is the UI. From the description of your GUI, it should be fairly straightforward.\n"
] | [
0
] | [] | [] | [
"pyqt4",
"python",
"pywin32",
"qt",
"sqlite"
] | stackoverflow_0000608098_pyqt4_python_pywin32_qt_sqlite.txt |
Q:
Python port binding
I've recently been learning python and I just started playing with networking using python's socket library. Everything has been going well until recently when my script terminated without closing the connection. The next time I ran the script, I got:
File "./alert_server.py", line 9, in <module>
s.bind((HOST, PORT))
File "<string>", line 1, in bind
socket.error: (98, 'Address already in use')
So it seems that something is still bound to the port, even though the Python script isn't running (and I've verified this using $ ps aux). What's weird is that after a minute or so, I can run the script again on the same port and it will be fine. Is there any way to prevent/unbind the port when this happens in the future?
A:
What you want to do is just before the bind, do:
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
The reason you are seeing the behaviour you are is that the OS is reserving that particular port for some time after the last connection terminated. This is so that it can properly discard any stray further packets that might come in after the application has terminated.
By setting the SO_REUSEADDR socket option, you are telling the OS that you know what you're doing and you still want to bind to the same port.
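In context, a minimal server setup looks like this (host and port are just examples):
import socket

HOST, PORT = '', 50007

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # must come before bind()
s.bind((HOST, PORT))  # no more 'Address already in use' after a crash
s.listen(1)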
| Python port binding | I've recently been learning python and I just started playing with networking using python's socket library. Everything has been going well until recently when my script terminated without closing the connection. The next time I ran the script, I got:
File "./alert_server.py", line 9, in <module>
s.bind((HOST, PORT))
File "<string>", line 1, in bind
socket.error: (98, 'Address already in use')
So it seems that something is still bound to the port, even though the Python script isn't running (and I've verified this using $ ps aux). What's weird is that after a minute or so, I can run the script again on the same port and it will be fine. Is there any way to prevent/unbind the port when this happens in the future?
| [
"What you want to do is just before the bind, do:\ns.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n\nThe reason you are seeing the behaviour you are is that the OS is reserving that particular port for some time after the last connection terminated. This is so that it can properly discard any stray further packets that might come in after the application has terminated.\nBy setting the SO_REUSEADDR socket option, you are telling the OS that you know what you're doing and you still want to bind to the same port.\n"
] | [
16
] | [] | [] | [
"python"
] | stackoverflow_0000608558_python.txt |
Q:
django- run a script from admin
I would like to write a script that is not activated by a certain URL, but by clicking on a link from the admin interface.
How do I do this?
Thanks!
A:
But a link has to go to a URL, so I think what you mean is you want to have a view function that is only visible in the admin interface, and that view function runs a script?
If so, override admin/base_site.html template with something this simple:
{% extends "admin/base.html" %}
{% block nav-global %}
<p><a href="{% url your-named-url %}">Do Something</a></p>
{% endblock %}
This (should) put the link at the top of the admin interface.
Add your url with named pattern to your urls.py
Then just make a normal Django view and at the top of the view check that the user is a staff member, like this:
if not request.user.is_staff:
    raise Http404
That will prevent unauthorized people from accessing this view.
Next, in your view after the above code, just run the script.
Do that with Python's subprocess module, for example:
from subprocess import call
retcode = call(["/full/path/myscript.py", "arg1"])
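And a sketch of the matching named URL pattern (module and names are examples):
# urls.py
from django.conf.urls.defaults import *

urlpatterns = patterns('',
    url(r'^admin/run-script/$', 'myapp.views.run_script',
        name='your-named-url'),
)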
| django- run a script from admin | I would like to write a script that is not activated by a certain URL, but by clicking on a link from the admin interface.
How do I do this?
Thanks!
| [
"But a link has to go to a URL, so I think what you mean is you want to have a view function that is only visible in the admin interface, and that view function runs a script?\nIf so, override admin/base_site.html template with something this simple:\n{% extends \"admin/base.html\" %}\n{% block nav-global %}\n <p><a href=\"{% url your-named-url %}\">Do Something</a></p>\n{% endblock %}\n\nThis (should) put the link at the top of the admin interface.\nAdd your url with named pattern to your urls.py\nThen just make a normal django view and at the top of the view check to make sure the user is superuser like this:\nif not request.user.is_staff:\n return Http404\n\nThat will prevent unauthorized people from accessing this view.\nNext, in your view after the above code, just run the script.\nDo that with Python's subprocess module, for example:\nfrom subprocess import call\nretcode = call([\"/full/path/myscript.py\", \"arg1\"])\n\n"
] | [
11
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000608789_django_python.txt |
Q:
Monitoring user idle time
Developing a mac app, how can I tell whether the user is currently at their computer or not? Or how long ago they last pressed a key or moved the mouse?
A:
it turns out the answer was here
http://osdir.com/ml/python.pyobjc.devel/2006-09/msg00013.html
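For the archives, the approach from that thread boils down to asking Quartz for the time since the last HID event. A sketch using PyObjC's Quartz bindings — check that these symbols are exposed by your PyObjC version:
from Quartz import (CGEventSourceSecondsSinceLastEventType,
                    kCGEventSourceStateHIDSystemState,
                    kCGAnyInputEventType)

# Seconds since the user last pressed a key, moved the mouse, etc.
idle = CGEventSourceSecondsSinceLastEventType(
    kCGEventSourceStateHIDSystemState, kCGAnyInputEventType)
print idle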
A:
You can use Quartz event taps and an NSTimer. Any time one of your event taps lights up, postpone the timer by setting its fire date. When the timer fires, the user is idle.
I'm not sure whether Quartz event taps are exposed to Python, though. The drawing APIs are, but I'm not sure about event taps.
| Monitoring user idle time | Developing a mac app, how can I tell whether the user is currently at their computer or not? Or how long ago they last pressed a key or moved the mouse?
| [
"it turns out the answer was here\nhttp://osdir.com/ml/python.pyobjc.devel/2006-09/msg00013.html\n",
"You can use Quartz event taps and an NSTimer. Any time one of your event taps lights up, postpone the timer by setting its fire date. When the timer fires, the user is idle.\nI'm not sure whether Quartz event taps are exposed to Python, though. The drawing APIs are, but I'm not sure about event taps.\n"
] | [
1,
0
] | [] | [] | [
"cocoa",
"idle_processing",
"macos",
"python"
] | stackoverflow_0000608710_cocoa_idle_processing_macos_python.txt |
Q:
Fast way to determine if a PID exists on (Windows)?
I realize "fast" is a bit subjective so I'll explain with some context. I'm working on a Python module called psutil for reading process information in a cross-platform way. One of the functions is a pid_exists(pid) function for determining if a PID is in the current process list.
Right now I'm doing this the obvious way, using EnumProcesses() to pull the process list, then iterating through the list and looking for the PID. However, some simple benchmarking shows this is dramatically slower than the pid_exists function on UNIX-based platforms (Linux, OS X, FreeBSD) where we're using kill(pid, 0) with a 0 signal to determine if a PID exists. Additional testing shows it's EnumProcesses that's taking up almost all the time.
Anyone know a faster way than using EnumProcesses to determine if a PID exists? I tried OpenProcess() and checking for an error opening the nonexistent process, but this turned out to be over 4x slower than iterating through the EnumProcesses list, so that's out as well. Any other (better) suggestions?
NOTE: This is a Python library intended to avoid third-party lib dependencies like pywin32 extensions. I need a solution that is faster than our current code, and that doesn't depend on pywin32 or other modules not present in a standard Python distribution.
EDIT: To clarify - we're well aware that there are race conditions inherent in reading process information. We raise exceptions if the process goes away during the course of data collection or we run into other problems. The pid_exists() function isn't intended to replace proper error handling.
UPDATE: Apparently my earlier benchmarks were flawed - I wrote some simple test apps in C, and EnumProcesses consistently comes out slower, while OpenProcess (in conjunction with GetExitCodeProcess, in case the PID is valid but the process has stopped) is actually much faster, not slower.
A:
OpenProcess could tell you w/o enumerating all. I have no idea how fast.
EDIT: note that you also need GetExitCodeProcess to verify the state of the process even if you get a handle from OpenProcess.
A:
Turns out that my benchmarks evidently were flawed somehow, as later testing reveals OpenProcess and GetExitCodeProcess are much faster than using EnumProcesses after all. I'm not sure what happened but I did some new tests and verified this is the faster solution:
int pid_is_running(DWORD pid)
{
HANDLE hProcess;
DWORD exitCode;
//Special case for PID 0 System Idle Process
if (pid == 0) {
return 1;
}
//skip testing bogus PIDs
if (pid < 0) {
return 0;
}
hProcess = handle_from_pid(pid);
if (NULL == hProcess) {
//invalid parameter means PID isn't in the system
if (GetLastError() == ERROR_INVALID_PARAMETER) {
return 0;
}
//some other error with OpenProcess
return -1;
}
if (GetExitCodeProcess(hProcess, &exitCode)) {
CloseHandle(hProcess);
return (exitCode == STILL_ACTIVE);
}
//error in GetExitCodeProcess()
CloseHandle(hProcess);
return -1;
}
Note that you do need to use GetExitCodeProcess() because OpenProcess() will succeed on processes that have died recently so you can't assume a valid process handle means the process is running.
Also note that OpenProcess() succeeds for PIDs that are within 3 of any valid PID (See Why does OpenProcess succeed even when I add three to the process ID?)
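Since the surrounding project is a Python library, here is a rough ctypes translation of the same idea — a sketch with error handling trimmed; the constants are the standard Win32 values:
import ctypes

PROCESS_QUERY_INFORMATION = 0x0400
STILL_ACTIVE = 259

def pid_is_running(pid):
    kernel32 = ctypes.windll.kernel32
    handle = kernel32.OpenProcess(PROCESS_QUERY_INFORMATION, 0, pid)
    if not handle:
        return False  # could also check GetLastError() for ERROR_INVALID_PARAMETER
    exit_code = ctypes.c_ulong(0)
    ok = kernel32.GetExitCodeProcess(handle, ctypes.byref(exit_code))
    kernel32.CloseHandle(handle)
    return bool(ok) and exit_code.value == STILL_ACTIVE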
A:
There is an inherent race condition in the use of pid_exists function: by the time the calling program gets to use the answer, the process may have already disappeared, or a new process with the queried id may have been created. I would dare say that any application that uses this function is flawed by design and that optimizing this function is therefore not worth the effort.
A:
I'd code Jay's last function this way.
int pid_is_running(DWORD pid){
HANDLE hProcess;
DWORD exitCode;
//Special case for PID 0 System Idle Process
if (pid == 0) {
return 1;
}
//skip testing bogus PIDs
if (pid < 0) {
return 0;
}
hProcess = handle_from_pid(pid);
if (NULL == hProcess) {
//invalid parameter means PID isn't in the system
if (GetLastError() == ERROR_INVALID_PARAMETER) {
return 0;
}
//some other error with OpenProcess
return -1;
}
DWORD dwRetval = WaitForSingleObject(hProcess, 0);
CloseHandle(hProcess); // otherwise you'll be losing handles
switch(dwRetval) {
    case WAIT_OBJECT_0:
        return 0;
    case WAIT_TIMEOUT:
        return 1;
default:
return -1;
}
}
The main difference is closing the process handle (important when the client of this function is running for a long time) and the process termination detection strategy. WaitForSingleObject gives you the opportunity to wait for a while (changing the 0 to a function parameter value) until the process ends.
| Fast way to determine if a PID exists on (Windows)? | I realize "fast" is a bit subjective so I'll explain with some context. I'm working on a Python module called psutil for reading process information in a cross-platform way. One of the functions is a pid_exists(pid) function for determining if a PID is in the current process list.
Right now I'm doing this the obvious way, using EnumProcesses() to pull the process list, then iterating through the list and looking for the PID. However, some simple benchmarking shows this is dramatically slower than the pid_exists function on UNIX-based platforms (Linux, OS X, FreeBSD) where we're using kill(pid, 0) with a 0 signal to determine if a PID exists. Additional testing shows it's EnumProcesses that's taking up almost all the time.
Anyone know a faster way than using EnumProcesses to determine if a PID exists? I tried OpenProcess() and checking for an error opening the nonexistent process, but this turned out to be over 4x slower than iterating through the EnumProcesses list, so that's out as well. Any other (better) suggestions?
NOTE: This is a Python library intended to avoid third-party lib dependencies like pywin32 extensions. I need a solution that is faster than our current code, and that doesn't depend on pywin32 or other modules not present in a standard Python distribution.
EDIT: To clarify - we're well aware that there are race conditions inherent in reading process information. We raise exceptions if the process goes away during the course of data collection or we run into other problems. The pid_exists() function isn't intended to replace proper error handling.
UPDATE: Apparently my earlier benchmarks were flawed - I wrote some simple test apps in C and EnumProcesses consistently comes out slower, and OpenProcess (in conjunction with GetExitCodeProcess in case the PID is valid but the process has stopped) is actually much faster, not slower.
| [
"OpenProcess could tell you w/o enumerating all. I have no idea how fast.\nEDIT: note that you also need GetExitCodeProcess to verify the state of the process even if you get a handle from OpenProcess.\n",
"Turns out that my benchmarks evidently were flawed somehow, as later testing reveals OpenProcess and GetExitCodeProcess are much faster than using EnumProcesses after all. I'm not sure what happened but I did some new tests and verified this is the faster solution: \nint pid_is_running(DWORD pid)\n{\n HANDLE hProcess;\n DWORD exitCode;\n\n //Special case for PID 0 System Idle Process\n if (pid == 0) {\n return 1;\n }\n\n //skip testing bogus PIDs\n if (pid < 0) {\n return 0;\n }\n\n hProcess = handle_from_pid(pid);\n if (NULL == hProcess) {\n //invalid parameter means PID isn't in the system\n if (GetLastError() == ERROR_INVALID_PARAMETER) { \n return 0;\n }\n\n //some other error with OpenProcess\n return -1;\n }\n\n if (GetExitCodeProcess(hProcess, &exitCode)) {\n CloseHandle(hProcess);\n return (exitCode == STILL_ACTIVE);\n }\n\n //error in GetExitCodeProcess()\n CloseHandle(hProcess);\n return -1;\n}\n\nNote that you do need to use GetExitCodeProcess() because OpenProcess() will succeed on processes that have died recently so you can't assume a valid process handle means the process is running. \nAlso note that OpenProcess() succeeds for PIDs that are within 3 of any valid PID (See Why does OpenProcess succeed even when I add three to the process ID?)\n",
"There is an inherent race condition in the use of pid_exists function: by the time the calling program gets to use the answer, the process may have already disappeared, or a new process with the queried id may have been created. I would dare say that any application that uses this function is flawed by design and that optimizing this function is therefore not worth the effort.\n",
"I'd code Jay's last function this way.\nint pid_is_running(DWORD pid){\n HANDLE hProcess;\n DWORD exitCode;\n //Special case for PID 0 System Idle Process\n if (pid == 0) {\n return 1;\n }\n //skip testing bogus PIDs\n if (pid < 0) {\n return 0;\n }\n hProcess = handle_from_pid(pid);\n if (NULL == hProcess) {\n //invalid parameter means PID isn't in the system\n if (GetLastError() == ERROR_INVALID_PARAMETER) {\n return 0;\n }\n //some other error with OpenProcess\n return -1;\n }\n DWORD dwRetval = WaitForSingleObject(hProcess, 0);\n CloseHandle(hProcess); // otherwise you'll be losing handles\n\n switch(dwRetval) {\n case WAIT_OBJECT_0;\n return 0;\n case WAIT_TIMEOUT;\n return 1;\n default:\n return -1;\n }\n}\n\nThe main difference is closing the process handle (important when the client of this function is running for a long time) and the process termination detection strategy. WaitForSingleObject gives you the opportunity to wait for a while (changing the 0 to a function parameter value) until the process ends.\n"
] | [
8,
4,
3,
3
] | [] | [] | [
"c",
"pid",
"python",
"winapi"
] | stackoverflow_0000592256_c_pid_python_winapi.txt |
Q:
is there COMMIT analog in python for writing into a file?
I have a file open for writing, and a process running for days -- something is written into the file at relatively random moments. My understanding is -- until I do file.close() -- there is a chance nothing is really saved to disk. Is that true?
What if the system crashes when the main process is not finished yet? Is there a way to do a kind of commit once every... say -- 10 minutes (and I call this commit myself -- no need to run a timer)? Is file.close() and open(file,'a') the only way, or are there better alternatives?
A:
You should be able to use file.flush() to do this.
A:
If you don't want to kill the current process to add f.flush() (it sounds like it's been running for days already?), you should be OK. If you see the file you are writing to getting bigger, you will not lose that data...
From Python docs:
write(str)
Write a string to the file. There is no return value. Due to buffering,
the string may not actually show up in
the file until the flush() or close()
method is called.
It sounds like Python's buffering system will automatically flush file objects, but it is not guaranteed when that happens.
A:
To make sure that your data is written to disk, use file.flush() followed by os.fsync(file.fileno()).
A:
As has already been stated, use the .flush() method to force the write out of the buffer, but avoid a lot of calls to flush, as this can actually slow your writing down (if the application relies on fast writes): you'll be forcing your filesystem to write changes that are smaller than its buffer size, which can bring you to your knees. :)
| is there COMMIT analog in python for writing into a file? | I have a file open for writing, and a process running for days -- something is written into the file at relatively random moments. My understanding is -- until I do file.close() -- there is a chance nothing is really saved to disk. Is that true?
What if the system crashes when the main process is not finished yet? Is there a way to do a kind of commit once every... say -- 10 minutes (and I call this commit myself -- no need to run a timer)? Is file.close() and open(file,'a') the only way, or are there better alternatives?
| [
"You should be able to use file.flush() to do this.\n",
"If you don't want to kill the current process to add f.flush() (it sounds like it's been running for days already?), you should be OK. If you see the file you are writing to getting bigger, you will not lose that data...\nFrom Python docs:\n\nwrite(str)\n Write a string to the file. There is no return value. Due to buffering,\n the string may not actually show up in\n the file until the flush() or close()\n method is called.\n\nIt sounds like Python's buffering system will automatically flush file objects, but it is not guaranteed when that happens. \n",
"To make sure that you're data is written to disk, use file.flush() followed by os.fsync(file.fileno()).\n",
"As has already been stated use the .flush() method to force the write out of the buffer, but avoid using a lot of calls to flush as this can actually slow your writing down (if the application relies on fast writes) as you'll be forcing your filesystem to write changes that are smaller than it's buffer size which can bring you to your knees. :)\n"
] | [
21,
3,
2,
2
] | [] | [] | [
"buffering",
"commit",
"file_io",
"python"
] | stackoverflow_0000608316_buffering_commit_file_io_python.txt |
Q:
Is there anything that cannot appear inside parentheses?
I was intrigued by this answer to my question about getting vim to highlight unmatched brackets in python code. Specifically, I'm talking about the second part of his answer where he mentions that the C syntax highlighting is actually flagging as an error any instance of curly braces inside parens. It is an unobtrusive cue that you have unclosed parens when all of your downstream curly braces light up in red.
That trick works because C syntax doesn't allow curly braces inside parentheses. To satisfy my (morbid?) curiosity, can I do something similar with python code? Is there anything in python syntax that isn't legal inside parentheses?
Note: I'm not trolling for a better answer to my other question (there are plenty of good answers there already). I'm merely curious if this trick is even possible with python code.
A:
No Python statement (import, if, for, while, def, class, etc.) can appear inside parentheses:
In [1]: (import sys)
------------------------------------------------------------
   File "<ipython console>", line 1
     (import sys)
      ^
<type 'exceptions.SyntaxError'>: invalid syntax
A:
Here's an exact answer:
http://docs.python.org/reference/expressions.html#grammar-token-expression_list
http://docs.python.org/reference/compound_stmts.html#function
A:
I'm not sure what you are trying to do, but how about "def" or "class"?
This snippet is valid only when it's not inside parentheses:
class dummy: pass
| Is there anything that cannot appear inside parentheses? | I was intrigued by this answer to my question about getting vim to highlight unmatched brackets in python code. Specifically, I'm talking about the second part of his answer where he mentions that the C syntax highlighting is actually flagging as an error any instance of curly braces inside parens. It is an unobtrusive cue that you have unclosed parens when all of your downstream curly braces light up in red.
That trick works because C syntax doesn't allow curly braces inside parentheses. To satisfy my (morbid?) curiosity, can I do something similar with python code? Is there anything in python syntax that isn't legal inside parentheses?
Note: I'm not trolling for a better answer to my other question (there are plenty of good answers there already). I'm merely curious if this trick is even possible with python code.
| [
"Any Python statement (import, if, for, while, def, class etc.) cannot be in the parentheses:\nIn [1]: (import sys)\n------------------------------------------------------------\nFile \"<ipython console>\", line 1\n (import sys)\n ^\n<type 'exceptions.SyntaxError'>: invalid syntax\n\n",
"Here's an exact answer:\n\nhttp://docs.python.org/reference/expressions.html#grammar-token-expression_list\nhttp://docs.python.org/reference/compound_stmts.html#function\n\n",
"I'm not sure what are you trying to do, but how about \"def\" or \"class\"?\nthis snippet is valid when it's not inside parenthesis\nclass dummy: pass\n\n"
] | [
5,
4,
0
] | [] | [] | [
"python",
"syntax",
"syntax_highlighting",
"vim"
] | stackoverflow_0000609169_python_syntax_syntax_highlighting_vim.txt |
Q:
When is it (not) appropriate to bundle dependencies with an application?
Summary
I recently had a conversation with the creator of a framework that one of my applications depends on. During that conversation he mentioned as a sort of aside that it would make my life simpler if I just bundled his framework with my application and delivered to the end user a version that I knew was consistent with my code. Intuitively I have always tried to avoid doing this and, in fact, I have taken pains to segment my own code so that portions of it could be redistributed without taking the entire project (even when there was precious little chance anyone would ever reuse any of it). However, after mulling it over for some time I have not been able to come up with a particularly good reason why I do this. In fact, now that I have thought about it, I'm seeing a pretty compelling case to bundle all my smaller dependencies. I have come up with a list of pros and cons and I'm hoping someone can point out anything that I'm missing.
Pros
Consistency of versions means easier
testing and troubleshooting.
Application may reach a wider
audience since there appear to be
fewer components to install.
Small tweaks to the dependency can
more easily be made downstream and
delivered with the application,
rather than waiting for them to
percolate into the upstream code base.
Cons
More complex packaging process to include
dependencies.
User may end up with multiple copies
of a dependency on their machine.
Per bortzmeyer's response, there are potential security concerns with not being able to upgrade individual components.
Notes
For reference, my application is written in Python and the dependencies I'm referencing are "light", by which I mean small and not in very common use. (So they do not exist on all machines or even in all repositories.) And when I say "package with" my application, I mean distribute under my own source tree, not install with a script that resides inside my package, so there would be no chance of conflicting versions. I am also developing solely on Linux so there are no Windows installation issues to worry about.
All that being said, I am interested in hearing any thoughts on the broader (language-independent) issue of packaging dependencies as well. Is there something I am missing or is this an easy decision that I am just over-thinking?
Addendum 1
It is worth mentioning that I am also quite sensitive to the needs of downstream packagers. I would like it to be as straightforward as possible to wrap the application up in a distribution-specific Deb or RPM.
A:
I favor bundling dependencies, if it's not feasible to use a system for automatic dependency resolution (i.e. setuptools), and if you can do it without introducing version conflicts. You still have to consider your application and your audience; serious developers or enthusiasts are more likely to want to work with a specific (latest) version of the dependency. Bundling stuff in may be annoying for them, since it's not what they expect.
But, especially for end-users of an application, I seriously doubt most people enjoy having to search for dependencies. As far as having duplicate copies goes, I would much rather spend an extra 10 milliseconds downloading some additional kilobytes, or spend whatever fraction of a cent on the extra meg of disk space, than spend 10+ minutes searching through websites (which may be down), downloading, installing (which may fail if versions are incompatible), etc.
I don't care how many copies of a library I have on my disk, as long as they don't get in each others' way. Disk space is really, really cheap.
A:
An important point seems to have been forgotten in the Cons of bundling libraries/frameworks/etc with the application: security updates.
Most Web frameworks are full of security holes and require frequent patching. Any library, anyway, may have to be upgraded one day or another for a security bug.
If you do not bundle, sysadmins will just upgrade one copy of the library and restart the dependent applications.
If you bundle, sysadmins will probably not even know they have to upgrade something.
So, the issue with bundling is not the disk space, it's the risk of leaving old and dangerous copies lying around.
A:
Can't you just rely on a certain version of those dependencies? E.g. in Python with setuptools you can specify which exact version it needs, or even give some conditions like <=, >, etc. This of course only applies to Python and to the specific package manager, but I would personally always first try not to bundle everything. With shipping it as a Python egg you will also have all the dependencies installed automatically.
You might of course also use a two-way strategy in providing your own package with just links to the dependencies and nevertheless provide a complete setup in some installer-like fashion. But even then (in the python case) I would suggest simply bundling the eggs with it.
For some introduction into eggs see this post of mine.
Of course this is very Python specific but I assume that other languages might have similar packaging tools.
A:
If you're producing software for an end-user, the goal is to let the customer use your software. Anything that stands in the way is counter-productive. If they have to download dependencies themselves, there's a possibility that they'll decide to avoid your software instead. You can't control whether libraries will be backwards compatible, and you don't want your software to stop working because the user updated their system. Similarly, you don't want a customer to install an old version of your software with old libraries and have the rest of the system break.
This means bundling is generally the way to go. If you can ensure that your software will install smoothly without bundling dependencies, and that's less work, then that may be a better option. It's about what satisfies your customers.
A:
For Linux, don't even think about bundling. You aren't smarter than the package manager or the packagers, and each distribution takes its own approach - they won't be happy if you attempt to go your own way. At best, they won't bother with packaging your app, which isn't great.
Keep in mind that in Linux, dependencies are automatically pulled in for you. It's not a matter of making the user get them. It's already done for you.
For windows, feel free to bundle, you're on your own there.
A:
Just my experience, take it with a grain of salt.
My preference for a couple of open-source libraries that I author is for independence from additional libs as much as possible. Reason being, not only am I on the hook for distribution of additional libraries along with mine, I'm also obliged to update my application for compatibility as those other libraries are updated as well.
From the libraries I've used from others that carry dependencies of "common" libraries, invariably I end up requiring multiple versions of the common library on my system. The relative update speed of the niche libraries I'm using just isn't that fast, while the common libraries are updated much more often. Versioning hell.
But that's speaking generically. If I can aid my project and users NOW by incorporating a dependency, I'm always looking at the downstream and later-date effects of that decision. If I can manage it, I can confidently include the dependency.
As always, your mileage may vary.
A:
I always include all dependencies for my web applications. Not only does this make installation simpler, the application remains stable and working the way you expect it to even when other components on the system are upgraded.
A:
Beware reproducing the classic Windows DLL hell. By all means minimize the number of dependencies: ideally, just depend on your language and its framework, nothing else, if you can.
After all, preserving hard disk space is hardly the objective any more, so users need not care about having multiple copies. Also, unless you have a minuscule number of users, be sure to take the onus of packaging on yourself rather than requiring them to obtain all dependencies!
| When is it (not) appropriate to bundle dependencies with an application? | Summary
I recently had a conversation with the creator of a framework that one of my applications depends on. During that conversation he mentioned as a sort of aside that it would make my life simpler if I just bundled his framework with my application and delivered to the end user a version that I knew was consistent with my code. Intuitively I have always tried to avoid doing this and, in fact, I have taken pains to segment my own code so that portions of it could be redistributed without taking the entire project (even when there was precious little chance anyone would ever reuse any of it). However, after mulling it over for some time I have not been able to come up with a particularly good reason why I do this. In fact, now that I have thought about it, I'm seeing a pretty compelling case to bundle all my smaller dependencies. I have come up with a list of pros and cons and I'm hoping someone can point out anything that I'm missing.
Pros
Consistency of versions means easier
testing and troubleshooting.
Application may reach a wider
audience since there appear to be
fewer components to install.
Small tweaks to the dependency can
more easily be made downstream and
delivered with the application,
rather than waiting for them to
percolate into the upstream code base.
Cons
More complex packaging process to include
dependencies.
User may end up with multiple copies
of a dependency on their machine.
Per bortzmeyer's response, there are potential security concerns with not being able to upgrade individual components.
Notes
For reference, my application is written in Python and the dependencies I'm referencing are "light", by which I mean small and not in very common use. (So they do not exist on all machines or even in all repositories.) And when I say "package with" my application, I mean distribute under my own source tree, not install with a script that resides inside my package, so there would be no chance of conflicting versions. I am also developing solely on Linux so there are no Windows installation issues to worry about.
All that being said, I am interested in hearing any thoughts on the broader (language-independent) issue of packaging dependencies as well. Is there something I am missing or is this an easy decision that I am just over-thinking?
Addendum 1
It is worth mentioning that I am also quite sensitive to the needs of downstream packagers. I would like it to be as straightforward as possible to wrap the application up in a distribution-specific Deb or RPM.
| [
"I favor bundling dependencies, if it's not feasible to use a system for automatic dependency resolution (i.e. setuptools), and if you can do it without introducing version conflicts. You still have to consider your application and your audience; serious developers or enthusiasts are more likely to want to work with a specific (latest) version of the dependency. Bundling stuff in may be annoying for them, since it's not what they expect.\nBut, especially for end-users of an application, I seriously doubt most people enjoy having to search for dependencies. As far as having duplicate copies goes, I would much rather spend an extra 10 milliseconds downloading some additional kilobytes, or spend whatever fraction of a cent on the extra meg of disk space, than spend 10+ minutes searching through websites (which may be down), downloading, installing (which may fail if versions are incompatible), etc.\nI don't care how many copies of a library I have on my disk, as long as they don't get in each others' way. Disk space is really, really cheap.\n",
"An important point seems to have been forgotten in the Cons of bundling libraries/frameworks/etc with the application: security updates.\nMost Web frameworks are full of security holes and require frequent patching. Any library, anyway, may have to be upgraded one day or the other for a security bug.\nIf you do not bundle, sysadmins will just upgrade one copy of the library and restart depending applications. \nIf you bundle, sysadmins will probably not even know they have to upgrade something.\nSo, the issue with bundling is not the disk space, it's the risk of letting old and dangerous copies around.\n",
"Can't you just rely on a certain version of those dependencies? E.g. in Python with setuptools you can specify which exact version it needs or even give some conditions like <= > etc. This of course only applies to Python and on the specifc package manager but I would personally always first try not to bundle everything. With shipping it as a Python egg you will also have all the dependencies installed automatically. \nYou might of course also use a two-way strategy in providing your own package with just links to the dependencies and nevertheless provide a complete setup in some installer like fashion. But even then (in the python case) I would suggest to simply bundle the eggs with it. \nFor some introduction into eggs see this post of mine.\nOf course this is very Python specific but I assume that other language might have similar packaging tools.\n",
"If you're producing software for an end-user, the goal is to let the customer use your software. Anything that stands in the way is counter-productive. If they have to download dependencies themselves, there's a possibility that they'll decide to avoid your software instead. You can't control whether libraries will be backwards compatible, and you don't want your software to stop working because the user updated their system. Similarly, you don't want a customer to install an old version of your software with old libraries and have the rest of the system break. \nThis means bundling is generally the way to go. If you can ensure that your software will install smoothly without bundling dependencies, and that's less work, then that may be a better option. It's about what satisfies your customers.\n",
"For Linux, don't even think about bundling. You aren't smarter than the package manager or the packagers, and each distribution takes approach their own way - they won't be happy if you attempt to go your way. At best, they won't bother with packaging your app, which isn't great.\nKeep in mind that in Linux, dependencies are automatically pulled in for you. It's not a matter of making the user get them. It's already done for you.\nFor windows, feel free to bundle, you're on your own there.\n",
"Just my experience, take it with a grain of salt.\nMy preference for a couple of open-source libraries that I author is for independence from additional libs as much as possible. Reason being, not only am I on the hook for distribution of additional libraries along with mine, I'm also obliged to update my application for compatibility as those other libraries are updated as well.\nFrom the libraries I've used from others that carry dependencies of \"common\" libraries, invariably I end up with requiring multiple versions of the common library on my system. The relative update speed of the niche libraries I'm using just isn't that fast, while the common libraries are updated much more often. Versioning hell.\nBut that's speaking generically. If I can aid my project and users NOW by incorporating a dependency, I'm always looking at the downstream and later-date effects of that decision. If I can manage it, I can confidently include the dependency.\nAs always, your mileage may vary.\n",
"I always include all dependancies for my web applications. Not only does this make installation simpler, the application remains stable and working the way you expect it to even when other components on the system are upgraded.\n",
"Beware reproducing the classic Windows DLL hell. By all means minimize the number of dependencies: ideally, just depend on your language and its framework, nothing else, if you can.\nAfter all, preserving hard disk space is hardly the objective any more, so users need not care about having multiple copies. Also, unless you have a minuscule number of users, be sure to take the onus of packaging on yourself rather than requiring them to obtain all dependencies!\n"
] | [
10,
4,
3,
3,
2,
1,
1,
1
] | [] | [] | [
"dependencies",
"distribution",
"packaging",
"python"
] | stackoverflow_0000598299_dependencies_distribution_packaging_python.txt |
Q:
Catching Python exceptions using 'expect' method?
import sys
try:
    file = open("words.txt")
expect(IOError):
if file:
    print "%s" % file
else:
    print "Cant the %s file" % "words.txt"
this gives me an error -
File "main.py", line 4
expect(IOError):
SyntaxError: invaild syntax
What am I doing wrong / how do you fix this?
A:
Actually, it is except as in exception:
For instance:
except IOError:
    print "Error opening file!"
A:
I assume you are trying to handle exceptions. In that case, use except, not expect. In any case except is not a function; rather, it precedes a block of error handling code. When using files, you may want to look at the with statement and try-except-finally. The correction to your code is:
import sys
file = None  # so the check below works even if open() fails
try:
    file = open("words.txt")
except IOError:
    #Handle error
    pass
if file:
    print "%s" % file
else:
    print "Cant the %s file" % "words.txt"
I hope this helps.
A:
It's except. Read this.
A:
I think you're looking for except. The error handling part of the python tutorial explains it well.
-John
A:
>>> try:
...     f = open('words.txt')
... except IOError:
...     print "Cant the %s file" % "words.txt"
... else:
...     print "%s" % f
| Catching Python exceptions using 'expect' method? | import sys
try:
    file = open("words.txt")
expect(IOError):
if file:
    print "%s" % file
else:
    print "Cant the %s file" % "words.txt"
this gives me an error -
File "main.py", line 4
expect(IOError):
SyntaxError: invaild syntax
What am I doing wrong / how do you fix this?
| [
"Actually, it is except as in exception:\nFor instance:\nexcept IOError:\n print \"Error opening file!\"\n\n",
"I assume you are trying to handle exceptions. In that case, use except, not expect. In any case except is not a function, rather it precedes a block of error handling code. When using files, you may want to look at the with statement and try-except-finally. The correction to your code is-\nimport sys\ntry:\n file = open(\"words.txt\")\nexcept IOError:\n #Handle error\n pass\nif file:\n print \"%s\" % file\nelse:\n print \"Cant the %s file\" % \"words.txt\"\n\nI hope this helps.\n",
"It's except. Read this.\n",
"I think you're looking for except. The error handling part of the python tutorial explains it well.\n-John\n",
"\n\n>>> try:\n... f = open('words.txt')\n... except IOError:\n... print \"Cant the %s file\" % \"words.txt\"\n... else:\n... print \"%s\" % f\n\n\n"
] | [
12,
4,
1,
1,
1
] | [] | [] | [
"python"
] | stackoverflow_0000610403_python.txt |
Q:
How to use boolean 'and' in Python
In C# we can use && (boolean and) like this:
int i = 5;
int ii = 10;
if(i == 5 && ii == 10) {
    Console.WriteLine("i is 5, and ii is 10");
}
Console.ReadKey(true);
But try that with python:
i = 5
ii = 10
if i == 5 && ii == 10:
print "i is 5 and ii is 10";
I get an error: SyntaxError: invalid syntax
If I use a single &, at least I get no syntax error. How do I do a boolean && in Python?
A:
Try this:
i = 5
ii = 10
if i == 5 and ii == 10:
print "i is 5 and ii is 10"
Edit: Oh, and you don't need that semicolon on the last line (edited to remove it from my code).
A:
As pointed out, "&" in python performs a bitwise and operation, just as it does in C#. and is the appropriate equivalent to the && operator.
Since we're dealing with booleans (i == 5 is True and ii == 10 is also True), you may wonder why this didn't either work anyway (True being treated as an integer quantity should still mean True & True is a True value), or throw an exception (e.g. by forbidding bitwise operations on boolean types).
The reason is operator precedence. The "and" operator binds more loosely than ==, so the expression: "i==5 and ii==10" is equivalent to: "(i==5) and (ii==10)"
However, bitwise & has a higher precedence than "==" (since you wouldn't want expressions like "a & 0xff == ch" to mean "a & (0xff == ch)"), so the expression would actually be interpreted as:
if i == (5 & ii) == 10:
Which is using python's operator chaining to mean: does the value of ii anded with 5 equal both i and 10. Obviously this will never be true.
You would actually get (seemingly) the right answer if you had included brackets to force the precedence, so:
if (i==5) & (ii==10)
would cause the statement to be printed. It's the wrong thing to do, however - "&" has many different semantics from "and" - (precedence, short-circuiting, behaviour with integer arguments etc), so it's fortunate that you caught this here rather than being fooled till it produced less obvious bugs.
A:
The correct operators to use are the keywords 'or' and 'and'; in your example, the correct way to express this would be:
if i == 5 and ii == 10:
print "i is 5 and ii is 10"
You can refer to the details in the "Boolean Operations" section in the language reference.
A:
You can also test them as a couple.
if (i,ii)==(5,10):
print "i is 5 and ii is 10"
A:
& is used for bitwise operations. Use and instead. And btw, you don't need a semicolon at the end of the print statement.
A:
In python, use and instead of && like this:
#!/usr/bin/python
foo = True;
bar = True;
if foo and bar:
print "both are true";
This prints:
both are true
| How to use boolean 'and' in Python | In C# we can use && (boolean and) like this:
int i = 5;
int ii = 10;
if(i == 5 && ii == 10) {
    Console.WriteLine("i is 5, and ii is 10");
}
Console.ReadKey(true);
But try that with python:
i = 5
ii = 10
if i == 5 && ii == 10:
print "i is 5 and ii is 10";
I get an error: SyntaxError: invalid syntax
If I use a single &, at least I get no syntax error. How do I do a boolean && in Python?
| [
"Try this:\ni = 5\nii = 10\nif i == 5 and ii == 10:\n print \"i is 5 and ii is 10\"\n\nEdit: Oh, and you dont need that semicolon on the last line (edit to remove it from my code).\n",
"As pointed out, \"&\" in python performs a bitwise and operation, just as it does in C#. and is the appropriate equivalent to the && operator.\nSince we're dealing with booleans (i == 5 is True and ii == 10 is also True), you may wonder why this didn't either work anyway (True being treated as an integer quantity should still mean True & True is a True value), or throw an exception (eg. by forbidding bitwise operations on boolean types)\nThe reason is operator precedence. The \"and\" operator binds more loosely than ==, so the expression: \"i==5 and ii==10\" is equivalent to: \"(i==5) and (ii==10)\"\nHowever, bitwise & has a higher precedence than \"==\" (since you wouldn't want expressions like \"a & 0xff == ch\" to mean \"a & (0xff == ch)\"), so the expression would actually be interpreted as:\nif i == (5 & ii) == 10:\n\nWhich is using python's operator chaining to mean: does the valuee of ii anded with 5 equal both i and 10. Obviously this will never be true.\nYou would actually get (seemingly) the right answer if you had included brackets to force the precedence, so:\nif (i==5) & (ii=10)\n\nwould cause the statement to be printed. It's the wrong thing to do, however - \"&\" has many different semantics to \"and\" - (precedence, short-cirtuiting, behaviour with integer arguments etc), so it's fortunate that you caught this here rather than being fooled till it produced less obvious bugs.\n",
"The correct operator to be used are the keywords 'or' and 'and', which in your example, the correct way to express this would be:\nif i == 5 and ii == 10:\n print \"i is 5 and ii is 10\"\n\nYou can refer the details in the \"Boolean Operations\" section in the language reference.\n",
"You can also test them as a couple.\nif (i,ii)==(5,10):\n print \"i is 5 and ii is 10\"\n\n",
"& is used for bit-wise comparison. use and instead. and btw, you don't need semicolon at the end of print statement.\n",
"In python, use and instead of && like this:\n#!/usr/bin/python\nfoo = True;\nbar = True;\nif foo and bar:\n print \"both are true\";\n\nThis prints:\nboth are true\n\n"
] | [
75,
29,
16,
7,
6,
6
] | [] | [] | [
"boolean_logic",
"python"
] | stackoverflow_0000609972_boolean_logic_python.txt |
Q:
has no foreign key to in Django when trying to inline models
I need to be able to create a quiz-type application with 20-odd multiple-choice questions.
I have 3 models: Quizzes, Questions, and Answers.
I want, in the admin interface, to create a quiz and inline the question and answer elements.
The goal is to click "Add Quiz", and be transferred to a page with 20 question fields, with 4 answer fields each, already in place.
Here's what I have currently:
class Quiz(models.Model):
    label = models.CharField(blank=True, max_length=50)

class Question(models.Model):
    label = models.CharField(blank=True, max_length=50)
    quiz = models.ForeignKey(Quiz)

class Answer(models.Model):
    label = models.CharField(blank=True, max_length=50)
    question = models.ForeignKey(Question)
class QuestionInline(admin.TabularInline):
    model = Question
    extra = 20

class QuestionAdmin(admin.ModelAdmin):
    inlines = [QuestionInline]

class AnswerInline(admin.TabularInline):
    model = Answer
    extra = 4

class AnswerAdmin(admin.ModelAdmin):
    inlines = [AnswerInline]

class QuizAdmin(admin.ModelAdmin):
    inlines = [QuestionInline, AnswerInline]

admin.site.register(Question, QuestionAdmin)
admin.site.register(Answer, AnswerAdmin)
admin.site.register(Quiz, QuizAdmin)
I get the following error when I try to add a quiz:
<class 'quizzer.quiz.models.Answer'> has no ForeignKey to <class 'quizzer.quiz.models.Quiz'>
Is this doable, or am I trying to pull too much out of the Django Admin app?
A:
You can't do "nested" inlines in the Django admin (i.e. you can't have a Quiz with inline Questions, with each inline Question having inline Answers). So you'll need to lower your sights to just having inline Questions (then if you navigate to view a single Question, it could have inline Answers).
So your models are fine, but your admin code should look like this:
class QuestionInline(admin.TabularInline):
    model = Question
    extra = 20

class AnswerInline(admin.TabularInline):
    model = Answer
    extra = 4

class QuestionAdmin(admin.ModelAdmin):
    inlines = [AnswerInline]

class AnswerAdmin(admin.ModelAdmin):
    pass

class QuizAdmin(admin.ModelAdmin):
    inlines = [QuestionInline]
It doesn't make sense for AnswerAdmin to have an AnswerInline, or QuestionAdmin to have a QuestionInline (unless these were models with a self-referential foreign key). And QuizAdmin can't have an AnswerInline, because Answer has no foreign key to Quiz.
If Django did support nested inlines, the logical syntax would be for QuestionInline to accept an "inlines" attribute, which you'd set to [AnswerInline]. But it doesn't.
Also note that "extra = 20" means you'll have 20 blank Question forms at the bottom of every Quiz, every time you load it up (even if it already has 20 actual Questions). Maybe that's what you want - makes for a long page, but makes it easy to add lots of questions at once.
A:
Let's follow through step by step.
The error: "Answer has no FK to Quiz".
That's correct. The Answer model has no FK to Quiz. It has an FK to Question, but not Quiz.
Why does Answer need an FK to quiz?
The QuizAdmin has an AnswerInline and a QuestionInline. For an admin to have inlines, it means the inlined models (Answer and Question) must have FK's to the parent admin.
Let's check. Question has an FK to Quiz.
And. Answer has no FK to Quiz. So your Quiz admin demands an FK that your model lacks. That's the error.
A:
Correct: trying to pull too much out of admin app :) Inline models need a foreign key to the parent model.
| has no foreign key to in Django when trying to inline models | I need to be able to create a quiz-type application with 20-odd multiple-choice questions.
I have 3 models: Quizzes, Questions, and Answers.
I want, in the admin interface, to create a quiz and inline the question and answer elements.
The goal is to click "Add Quiz", and be transferred to a page with 20 question fields, with 4 answer fields each, already in place.
Here's what I have currently:
class Quiz(models.Model):
    label = models.CharField(blank=True, max_length=50)

class Question(models.Model):
    label = models.CharField(blank=True, max_length=50)
    quiz = models.ForeignKey(Quiz)

class Answer(models.Model):
    label = models.CharField(blank=True, max_length=50)
    question = models.ForeignKey(Question)

class QuestionInline(admin.TabularInline):
    model = Question
    extra = 20

class QuestionAdmin(admin.ModelAdmin):
    inlines = [QuestionInline]

class AnswerInline(admin.TabularInline):
    model = Answer
    extra = 4

class AnswerAdmin(admin.ModelAdmin):
    inlines = [AnswerInline]

class QuizAdmin(admin.ModelAdmin):
    inlines = [QuestionInline, AnswerInline]

admin.site.register(Question, QuestionAdmin)
admin.site.register(Answer, AnswerAdmin)
admin.site.register(Quiz, QuizAdmin)
I get the following error when I try to add a quiz:
<class 'quizzer.quiz.models.Answer'> has no ForeignKey to <class 'quizzer.quiz.models.Quiz'>
Is this doable, or am I trying to pull too much out of the Django Admin app?
| [
"You can't do \"nested\" inlines in the Django admin (i.e. you can't have a Quiz with inline Questions, with each inline Question having inline Answers). So you'll need to lower your sights to just having inline Questions (then if you navigate to view a single Question, it could have inline Answers).\nSo your models are fine, but your admin code should look like this:\nclass QuestionInline(admin.TabularInline):\n model = Question\n extra = 20\n\nclass AnswerInline(admin.TabularInline):\n model = Answer\n extra = 4\n\nclass QuestionAdmin(admin.ModelAdmin):\n inlines = [AnswerInline]\n\nclass AnswerAdmin(admin.ModelAdmin):\n pass\n\nclass QuizAdmin(admin.ModelAdmin):\n inlines = [QuestionInline]\n\nIt doesn't make sense for AnswerAdmin to have an AnswerInline, or QuestionAdmin to have a QuestionInline (unless these were models with a self-referential foreign key). And QuizAdmin can't have an AnswerInline, because Answer has no foreign key to Quiz.\nIf Django did support nested inlines, the logical syntax would be for QuestionInline to accept an \"inlines\" attribute, which you'd set to [AnswerInline]. But it doesn't.\nAlso note that \"extra = 20\" means you'll have 20 blank Question forms at the bottom of every Quiz, every time you load it up (even if it already has 20 actual Questions). Maybe that's what you want - makes for a long page, but makes it easy to add lots of questions at once.\n",
"Let's follow through step by step.\nThe error: \"Answer has no FK to Quiz\".\nThat's correct. The Answer model has no FK to Quiz. It has an FK to Question, but not Quiz.\nWhy does Answer need an FK to quiz? \nThe QuizAdmin has an AnswerInline and a QuestionInline. For an admin to have inlines, it means the inlined models (Answer and Question) must have FK's to the parent admin.\nLet's check. Question has an FK to Quiz.\nAnd. Answer has no FK to Quiz. So your Quiz admin demands an FK that your model lacks. That's the error. \n",
"Correct: trying to pull too much out of admin app :) Inline models need a foreign key to the parent model.\n"
] | [
15,
3,
2
] | [] | [] | [
"django",
"django_admin",
"python"
] | stackoverflow_0000609556_django_django_admin_python.txt |
Q:
pysqlite user types in select statement
Using pysqlite, how can a user-defined type be used as a value in a comparison, e.g.: "... WHERE columnName > userType"?
For example, I've defined a bool type with the requisite registration, converter, etc. Pysqlite/Sqlite responds as expected for INSERT and SELECT operations (bool 'True' stored as an integer 1 and returned as True).
But it fails when the bool is used in either "SELECT * from tasks WHERE display = True" or "... WHERE display = 'True'". In the first case Sqlite reports an error that there is no column named True. And in the second case no records are returned. The select works if a 1 is used in place of True. I seem to have the same problem when using pysqlite's own date and timestamp adaptors.
I can work around this behavior for this and other user-types but that's not as fun. I'd like to know if using a user-defined type in a query is or is not possible so that I don't keep banging my head on this particular wall.
Thank you.
A:
Use the correct way of passing variables to queries: Don't build the query, use question marks and pass the parameters as a tuple to execute().
myvar = True
cur.execute('SELECT * FROM tasks WHERE display = ?', (myvar,))
That way the sqlite driver will use the value directly. No escaping, quoting, or conversion is needed anymore.
Use this technique for every parameter, even string, integer, etc. That way you avoid SQL injection automatically.
A:
You probably have to cast it to the correct type. Try "SELECT * FROM tasks WHERE (display = CAST ('True' AS bool))".
| pysqlite user types in select statement | Using pysqlite, how can a user-defined type be used as a value in a comparison, e.g.: "... WHERE columnName > userType"?
For example, I've defined a bool type with the requisite registration, converter, etc. Pysqlite/Sqlite responds as expected for INSERT and SELECT operations (bool 'True' stored as an integer 1 and returned as True).
But it fails when the bool is used in either "SELECT * from tasks WHERE display = True" or "... WHERE display = 'True'". In the first case Sqlite reports an error that there is no column named True. And in the second case no records are returned. The select works if a 1 is used in place of True. I seem to have the same problem when using pysqlite's own date and timestamp adaptors.
I can work around this behavior for this and other user-types but that's not as fun. I'd like to know if using a user-defined type in a query is or is not possible so that I don't keep banging my head on this particular wall.
Thank you.
| [
"Use the correct way of passing variables to queries: Don't build the query, use question marks and pass the parameters as a tuple to execute().\nmyvar = True\ncur.execute('SELECT * FROM tasks WHERE display = ?', (myvar,))\n\nThat way the sqlite driver will use the value directly. No escapeing, quoting, conversion is needed anymore.\nUse this technique for every parameter, even string, integer, etc. That way you avoid SQL injection automatically.\n",
"You probably have to cast it to the correct type. Try \"SELECT * FROM tasks WHERE (display = CAST ('True' AS bool))\".\n"
] | [
1,
0
] | [] | [] | [
"pysqlite",
"python",
"sqlite"
] | stackoverflow_0000609516_pysqlite_python_sqlite.txt |
Q:
Is there something like CherryPy or Cerise in the Java world?
CherryPy and Cerise are two small frameworks that implement nothing but the barebones of a web-framework and I love their simplicity: in fact I reckon that if Classic ASP was implemented that way (and didn't pretty much require VBScript) I could have settled for it and lived happily ever after.
But now I'm living at the borders of the Java world and would like to know if there's something similar to these 2 frameworks that doesn't try to take control away from you. My requirements would be that they have:
a dispatcher that maps urls to methods (like CherryPy, Django, Cerise, Rails, etc...)
bonus points if it has a simple, yet powerful templating language (a la JSP/ASP) that is not too religious in separation of concerns
bonus points if it has some sort of library that helps in validating forms
Thanks
--
A:
Stripes
URLs to methods, check, form validation, check. Powerful but stays out of your way unless you need it.
A:
OOWeb, essentially a port of CherryPy.
A:
Groovy and Grails. If you like MVC or even have an existing library written in Java/JVM, those are the tools you're looking for!
Grails aims to bring the "coding by
convention" paradigm to Groovy. It's
an open-source web application
framework that leverages the Groovy
language and complements Java Web
development. You can use Grails as a
standalone development environment
that hides all configuration details
or integrate your Java business logic.
Grails aims to make development as
simple as possible and hence should
appeal to a wide range of developers
not just those from the Java
community.
| Is there something like CherryPy or Cerise in the Java world? | CherryPy and Cerise are two small frameworks that implement nothing but the barebones of a web-framework and I love their simplicity: in fact I reckon that if Classic ASP was implemented that way (and didn't pretty much require VBScript) I could have settled for it and lived happily ever after.
But now I'm living at the borders of the Java world and would like to know if there's something similar to these 2 frameworks that doesn't try to take control away from you. My requirements would be that they have:
a dispatcher that maps urls to methods (like CherryPy, Django, Cerise, Rails, etc...)
bonus points if it has a simple, yet powerful templating language (a la JSP/ASP) that is not too religious in separation of concerns
bonus points if it has some sort of library that helps in validating forms
Thanks
--
| [
"Stripes\nURLs to methods, check, form validation, check. Powerful but stays out of your way unless you need it.\n",
"OOWeb, essentially a port of CherryPy.\n",
"Groovy and Grails. If you like MVC or even have a existing library written in Java/JVM, that are the tools you're looking for! \n\nGrails aims to bring the \"coding by\n convention\" paradigm to Groovy. It's\n an open-source web application\n framework that leverages the Groovy\n language and complements Java Web\n development. You can use Grails as a\n standalone development environment\n that hides all configuration details\n or integrate your Java business logic.\n Grails aims to make development as\n simple as possible and hence should\n appeal to a wide range of developers\n not just those from the Java\n community.\n\n"
] | [
2,
2,
0
] | [] | [] | [
"java",
"python"
] | stackoverflow_0000610516_java_python.txt |