Q:
Equivalent of an HTML multiple SELECT box in wxPython
I'd like to create a ListBox in wxPython with the same semantics as a multiple select box in HTML. Specifically, I'd like the following semantics:
When the user clicks on an entry in the list, all other entries become de-selected and the clicked entry becomes selected. If the entry was already selected then it stays selected.
When the user holds down the Ctrl key while clicking on an entry, all other entries stay unchanged, but it toggles whether the clicked entry is selected.
When the user holds down shift and clicks on an entry, that entry and every entry between it and the last clicked entry become selected.
In Java I get this by using the JList class in Swing and setting the selection mode to MULTIPLE_INTERVAL_SELECTED. I assume that there's a way to do this with the wxPython toolkit, but I can't figure out how to get a ListBox or ListCtrl or any other class to do this short of doing an enormous amount of event-driven programming myself.
A:
I think what you're looking for is the wxLB_EXTENDED list box style. Specify style = wx.LB_EXTENDED when you create the ListBox.
You can then use the GetSelections method to obtain a list of the selected items.
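For illustration, a minimal sketch of that (not from the original answer; the frame and its choices are invented):

import wx

# A minimal sketch: wx.LB_EXTENDED gives the click / Ctrl-click /
# Shift-click selection semantics described in the question.
class Frame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title="Multi-select ListBox")
        self.listbox = wx.ListBox(self, choices=["one", "two", "three"],
                                  style=wx.LB_EXTENDED)
        self.listbox.Bind(wx.EVT_LISTBOX, self.on_select)

    def on_select(self, event):
        # GetSelections returns the indices of all currently selected items
        print self.listbox.GetSelections()

app = wx.App(False)
Frame().Show()
app.MainLoop()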
Q:
Best way to monitor services on a few servers with python
What would be the best way to monitor services like HTTP/FTP/IMAP/POP3/SMTP for a few servers from Python?
Should I use sockets and try to connect to each service port (HTTP - 80, FTP - 21, etc.), assuming the service is OK if the connection succeeds, or should I use Python libraries to connect to the specified services and handle exceptions/return codes/etc.?
For example, for FTP, is it better to
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.connect(("ftp.domain.tld", 21))
    print "ok"
except:
    print "failed"
s.close()
or to
from ftplib import FTP
try:
    ftp = FTP('ftp.domain.tld')
    print "FTP service OK"
    ftp.close()
except:
    print "FTP err"
The same goes for the rest: socket vs urllib2, imaplib, etc...
And if I go the library way, how do I test SMTP?
A:
update 1:
Actually, in your code there is no difference between the two options (for FTP). The second option should be preferred for code readability. But why not log in to the FTP server, and maybe read some file?
update 0:
When monitoring, it is better to test the full stack, because otherwise you can miss problems that don't manifest themselves at the socket level. SMTP can be tested with smtplib; the best way is to actually send mail with it and read it from the target with imaplib. Or you can just use smtplib to converse with the SMTP server. Doing the whole end-to-end thing is better, but development and configuration resources should be weighed against the impact of missing problems.
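As an illustration of the smtplib approach (a hedged sketch; the hostname is a placeholder):

import smtplib

# Talking SMTP instead of just opening a socket: issuing NOOP after the
# connect exercises the protocol, not only the TCP handshake.
try:
    server = smtplib.SMTP("mail.domain.tld")
    status, message = server.noop()  # a healthy server answers 250
    server.quit()
    if status == 250:
        print "SMTP service OK"
    else:
        print "SMTP err: unexpected status %d" % status
except Exception, e:
    print "SMTP err:", e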
Q:
Threaded code on mod_python
I have written a Django app that makes use of Python threading to create a web spider; the spider operates as a series of threads to check links.
When I run this app using the django test server (built in), the app runs fine and the threads seem to start and stop on time.
However, running the app on Apache, it seems the threads aren't kicking off and running (after about 80 seconds there should be a queued database update, and these changes aren't occurring).
Does anyone have an idea what I'm missing here?
-- Edit: My question is, how does Apache handle threaded applications, i.e. is there a limit on how many threads can be run from a single app?
Any help would be appreciated!
A:
Most likely, you are missing the creation of new processes. Apache will not run in a single process, but fork new processes for requests every now and then (depending on a dozen or so configuration parameters). If you run django in each process, they will share no memory, and the results produced in one worker won't be visible to any of the others. In addition, the Apache process might terminate (on idle, or after a certain time), discarding your in-memory results.
Q:
MVC and django fundamentals
Pretty new to this scene and trying to find some documentation to adopt best practices. We're building a fairly large content site which will consist of various media catalogs and I'm trying to find some comparable data / architectural models so that we can get a better idea of the approach we should use using a framework we've never made use of before. Any insight / help would be greatly appreciated!
A:
"data / architectural models so that we can get a better idea of the approach we should use using a framework we've never made use of before"
Django imposes best practices on you. You don't have a lot of choices and can't make a lot of mistakes.
MVC (while a noble aspiration) is implemented as follows:
Data is defined in "models.py" files using the Django ORM models.
urls.py file maps URL to view function. Pick your URLs wisely.
View function does all processing, making use of models and methods in models
Presentation (via HTML templates) invoked by View function. Essentially no processing can be done in presentation, just lightweight iteration and decision-making
The model is defined for you. Just stick to what Django does naturally and you'll be happy.
Architecturally, you usually have a stack like this.
Apache does two things.
serves static content directly and immediately
hands dynamic URLs to Django (via mod_python, mod_wsgi or mod_fastcgi). Django apps map URLs to view functions, which access the database (via the ORM/models) and display results via templates.
Database used by Django view functions.
The architecture is well-defined for you. Just stick to what Django does naturally and you'll be happy.
Feel free to read the Django documentation. It's excellent; perhaps the best there is.
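As a hedged illustration of the pieces described above (the model, URL and view names are invented, not taken from the answer):

# models.py -- data definition via the Django ORM
from django.db import models

class MediaItem(models.Model):
    title = models.CharField(max_length=200)
    catalog = models.CharField(max_length=50)

# urls.py -- map a URL to a view function
from django.conf.urls.defaults import patterns

urlpatterns = patterns('',
    (r'^catalog/(?P<catalog>\w+)/$', 'myapp.views.catalog_view'),
)

# views.py -- do the processing, then hand off to a template
from django.shortcuts import render_to_response
from myapp.models import MediaItem

def catalog_view(request, catalog):
    items = MediaItem.objects.filter(catalog=catalog)
    return render_to_response('catalog.html', {'items': items})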
A:
first, forget all MVC mantra. it's important to have a good layered structure, but MVC (as defined originally) isn't one; it was a modular structure, where each GUI module is split into these three submodules. nothing to use on the web here.
in web development, it really pays to have a layered structure, where the most important layer is the storage/modelling one, which came to be called the model layer. on top of that, you need a few other layers, but they're really not anything like views and controllers in the GUI world.
the Django layers are roughly:
storage/modelling: models.py, obviously. try to put most of the 'working' concepts there. all the relationships, all the operations should be implemented here.
dispatching: mostly in urls.py. here you turn your URL scheme into code paths. think of it like a big switch() statement. try hard to have readable URLs, which map into user intentions. it will help a lot to add new functionality, or new ways to do the same things (like an AJAX UI later).
gathering: mostly the view functions, both yours and the prebuilt generic views. here you simply gather all the data from the models to satisfy a user request. in surprisingly many cases, the view just has to pick a single model instance, and everything else can be retrieved from relationships. for these URLs, a generic view is enough.
presentation: the templates. if the view gives you the data you need, it's simple enough to turn it into a webpage. it's here that you'll be glad the model classes have good accessors to get any kind of relevant data from any given instance.
A:
To understand Django fundamentals and the Django take on MVC, consult the following:
http://www.djangobook.com/
As a starting point to getting your hands dirty with ...
"...trying to find some comparable data / architectural models"
Here is a quick and dirty way to reverse engineer a database to get a models.py file, which you can then inspect to see how Django would handle it.
1.) Get an ER diagram that closely matches your target. For example, something like this:
http://www.databaseanswers.org/data_models/product_catalogs/index.htm
2.) Create an SQL script from the ER diagram and create the database. I suggest PostgreSQL, as some MySQL table types will not have foreign key constraints, but in a pinch MySQL or SQLite will do.
3.) Create and configure a Django app to use that database. Then run:
python manage.py inspectdb
This will at least give you a models.py file, which you can read to see how Django attempts to model it.
Note that the inspectdb command is intended to be a shortcut for dealing with legacy databases when developing in Django, and as such is not perfect. Be sure to read the following before attempting this:
http://docs.djangoproject.com/en/dev/ref/django-admin/#ref-django-admin
Q:
Can I use chart modules with wxpython?
Is it possible to use any chart modules with wxpython? And are there any good ones out there?
I'm thinking of the likes of PyCha (http://www.lorenzogil.com/projects/pycha/) or any equivalent. Many modules seem to require PyCairo, but I can't figure out if I can use those with my wxpython app.
My app has a notebook pane, and I'd like to place the chart inside it. The chart has to be dynamic -- i.e. the user can choose what kind of data to view -- so I'm guessing modules that make chart images are out.
Just for clarity, by charts I mean things like pies, lines and bars etc.
A:
I recently revisited matplotlib, and am pretty happy with the results.
If you're on Windows, there are Windows installers available to make your installation process a little less painful.
One potential drawback though is that it requires numpy to be installed.
I don't have experience with the interactivity of it, but it does support event handling.
A:
matplotlib does embed quite well in wxPython. I have only used it in Tkinter, which went smoothly for me. I like the optional toolbar that allows direct manipulation of the plot (resizing and panning and such).
A:
Use matplotlib. It integrates nicely with wxPython. Here's a sample of an interactive chart with wxPython and matplotlib.
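A minimal sketch of such an embedding (hedged; the panel layout and data are invented):

import wx
from matplotlib.figure import Figure
from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg as FigureCanvas

# Embed a matplotlib figure in a wx.Panel, e.g. a page of a wx.Notebook.
class ChartPanel(wx.Panel):
    def __init__(self, parent):
        wx.Panel.__init__(self, parent)
        self.figure = Figure()
        self.axes = self.figure.add_subplot(111)
        self.canvas = FigureCanvas(self, -1, self.figure)

    def plot(self, values):
        # Redrawing on demand keeps the chart dynamic when the user
        # picks different data to view.
        self.axes.clear()
        self.axes.bar(range(len(values)), values)
        self.canvas.draw()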
Q:
Detect windows logout in Python
How can I detect, or be notified, when Windows is logging out, from Python?
Edit:
Martin v. Löwis's answer is good, and works for a full logout, but it does not work for a 'fast user switching' event like pressing Win+L, which is what I really need it for.
Edit: I'm not using a GUI; this is running as a service.
A:
You can detect fast user switching events using the Terminal Services API, which you can access from Python using the win32ts module from pywin32. In a GUI application, call WTSRegisterSessionNotification to receive notification messages, WTSUnRegisterSessionNotification to stop receiving notifications, and handle the WM_WTSSESSION_CHANGE message in your window procedure.
If you're writing a Windows service in Python, use the RegisterServiceCtrlHandlerEx function to detect fast user switching events. This is available in the pywin32 library as the RegisterServiceCtrlHandler function in the servicemanager module. In your handler function, look for the SERVICE_CONTROL_SESSIONCHANGE notification. See also the WM_WTSSESSION_CHANGE documentation for details of the specific notification codes.
There's some more detail in this thread from the python-win32 mailing list, which may be useful.
I hope this helps!
A:
In a console application, you can use win32api.SetConsoleCtrlHandler and look for CTRL_LOGOFF_EVENT. In a GUI application, you need a window open and wait for the WM_QUERYENDSESSION message. How precisely that works (and if it works at all) depends on your GUI library.
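A hedged sketch of that console variant (untested):

import win32api
import win32con

def on_console_event(event):
    # CTRL_LOGOFF_EVENT arrives when the user logs off the session
    if event == win32con.CTRL_LOGOFF_EVENT:
        print "user is logging off"
    return False  # let any other handlers (and the default) run as well

win32api.SetConsoleCtrlHandler(on_console_event, True)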
Q:
run a function in another function in N times
I have asked this kind of question before, but it seems my previous question was a bit misleading due to my poor English. I'm asking again to make it clear. I am really confused about it. Thanks in advance.
Suppose I have a function A for generating the state of a cell according to a certain rule, and I have another function which generates the state of the cell N times, each time applying the same rule as the first function. And, yeah, I don't know how to do it...
def first_function(a_matrixA):
    # apply some rule on a_matrixA and return a new matrix (the next state of the cell)
    return new_matrix

def second_function(a_matrixB, repeat_times=n):
    # how do I run first_function n times here and return the final_matrix?
    # I know if n == 1, I can just make final_matrix = first_function(a_matrixB)
    return final_matrix
A:
def first_function(a_matrixA):
    # apply some rule on a_matrixA and return a new matrix (the next state of the cell)
    return new_matrix

def second_function(a_matrixB, repeat_times):
    for i in range(repeat_times):
        a_matrixB = first_function(a_matrixB)
    return a_matrixB
Q:
how to determine the period of a function
Suppose I have a function A which can apply a certain rule to a given matrix to generate another matrix, which I call the next state of the original matrix. The function can also determine the final state of the matrix for a given number of times N (apply the rule to the original matrix, apply the rule to the resulting next state again, and so on, N times).
So suppose that for a given matrix, applying the rule 5 times makes the final matrix the same as the original matrix; we then say that matrix's period is 5.
I also have another function B. How can I make function B determine the period of a given matrix under the same rule as function A, and return the period? I just have no idea how to start... Thanks in advance.
def functionA(origin_matrix, N_times):
    # apply rule on the origin_matrix to generate another matrix, the next state of it
    # apply rule on origin_matrix N_times
    return the_final_matrix

def functionB(origin_matrix):
    # determine the period of the origin_matrix
    return period
A:
Use a for loop, or a while loop with a temporary result and a counter. The latter method is most efficient (in general).
Simple version, in pseudocode:
iterations = 0;
tmp = origin_matrix;

do
    tmp = operation(tmp);
    iterations += 1;
while tmp != origin_matrix;

return iterations;
EDIT: You can also use a simple while construct:
while True:
    tmp = operation(tmp)
    iterations += 1

    if tmp == origin_matrix:
        break  # Or you could return here.
EDIT: That was for functionB. I didn't know they were separate questions. For that example, operation(x) = functionA(x, 1).
For functionA, you'd use a for loop, most likely. Pseudocode:
matrix = origin_matrix

for i in range(N_times):
    matrix = operation(matrix)

return matrix
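Putting the two together in plain Python (a sketch: it assumes the state really does return to the original, and for NumPy arrays the equality test would need (matrix == origin_matrix).all()):

def functionB(origin_matrix):
    # apply the rule one step at a time until the original matrix comes back
    period = 0
    matrix = origin_matrix
    while True:
        matrix = functionA(matrix, 1)
        period += 1
        if matrix == origin_matrix:
            return period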
Q:
How to truncate matrix using NumPy (Python)
Just a quick question: if I have a matrix with n rows and m columns, how can I cut off the 4 sides of the matrix and return a new matrix? (The new matrix would have n-2 rows and m-2 columns.)
Thanks in advance
A:
a[1:-1, 1:-1]
A:
A more general answer is:
a[[slice(1, -1) for _ in a.shape]]
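For example (an illustrative check, not part of either answer):

import numpy as np

a = np.arange(20).reshape(4, 5)  # a 4x5 matrix
b = a[1:-1, 1:-1]                # borders removed: a 2x3 matrix
print b.shape                    # (2, 3)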
Q:
Tracking redirects and cookies with Python
I would like to do be able to follow and track redirects and the cookies that are set by the different webpages with Python (a bit like the tamper plugin for Firefox).
So if website1 redirects to website2 which then redirects to website3, I would like to follow that and also see what cookies each website sets. I have been looking at Urllib2 but it redirects by itself and I haven't seen a way to track the redirects.
A:
There are detailed tutorials on this, in Dive Into Python and on Voidspace. The short version is that urllib2 provides handlers (that you can override) to control redirects and cookies.
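A hedged sketch of what that can look like (the URL is a placeholder):

import urllib2
import cookielib

# Log every redirect while a CookieJar collects the cookies set along the way.
class LoggingRedirectHandler(urllib2.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        print "redirect %d -> %s" % (code, newurl)
        return urllib2.HTTPRedirectHandler.redirect_request(
            self, req, fp, code, msg, headers, newurl)

jar = cookielib.CookieJar()
opener = urllib2.build_opener(LoggingRedirectHandler,
                              urllib2.HTTPCookieProcessor(jar))
opener.open("http://website1.example/")
for cookie in jar:
    print cookie.domain, cookie.name, cookie.value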
Q:
how to browse to an external url from turbogears/cherrypy application?
I am writing a TinyURL clone to learn TurboGears. I am wondering how I can redirect the browser to an external website (say www.yahoo.com) from my CherryPy/TurboGears app?
I googled about it, but could not find much useful info.
A:
Just raise a HTTPRedirect exception, which lives in the cherrypy namespace. Like this:
raise cherrypy.HTTPRedirect("http://www.yahoo.com")
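In a tinyurl-style controller that could look like this (a sketch; lookup_url is a hypothetical helper):

import cherrypy

class Root(object):
    def go(self, code):
        target = lookup_url(code)  # hypothetical: short code -> long URL
        raise cherrypy.HTTPRedirect(target)
    go.exposed = True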
Q:
long <-> str binary conversion
Is there any lib that converts very long numbers to strings by just copying the data?
These one-liners are too slow:
def xlong(s):
    return sum([ord(c) << e*8 for e,c in enumerate(s)])

def xstr(x):
    return chr(x&255) + xstr(x >> 8) if x else ''
print xlong('abcd'*1024) % 666
print xstr(13**666)
A:
You want the struct module.
packed = struct.pack('l', 123456)
assert struct.unpack('l', packed)[0] == 123456
A:
How about
from binascii import hexlify, unhexlify
def xstr(x):
    hex = '%x' % x
    return unhexlify('0'*(len(hex)%2) + hex)[::-1]

def xlong(s):
    return int(hexlify(s[::-1]), 16)
I didn't time it but it should be faster and also work on larger numbers, since it doesn't use recursion.
A:
In fact, what I am missing is long(s, 256). I lurked more and saw that there are 2 functions in the Python C API file "longobject.h":
PyObject * _PyLong_FromByteArray( const unsigned char* bytes, size_t n, int little_endian, int is_signed);
int _PyLong_AsByteArray(PyLongObject* v, unsigned char* bytes, size_t n, int little_endian, int is_signed);
They do the job. I don't know why they are not included in some Python module; correct me if I'm wrong.
A:
If you need fast serialization, use the marshal module. It's around 400x faster than your methods.
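For example (illustrative):

import marshal

s = marshal.dumps(13**666)  # a str holding the serialized long
n = marshal.loads(s)
assert n == 13**666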
A:
I'm guessing you don't care about the string format, you just want a serialization? If so, why not use Python's built-in serializer, the cPickle module? The dumps function will convert any python object including a long integer to a string, and the loads function is its inverse. If you're doing this for saving out to a file, check out the dump and load functions, too.
>>> import cPickle
>>> print cPickle.loads(cPickle.dumps(13**666)) % 666
73
>>> print (13**666) % 666
73
"Performance of cPickle vs. marshal (Python 2.5.2, Windows):\npython -mtimeit -s\"from cPickle import loads,dumps;d=13**666\" \"loads(dumps(d))\"\n1000 loops, best of 3: 600 usec per loop\n\npython -mtimeit -s\"from marshal import loads,dumps;d=13**666\" \"loads(dumps(d))\"\n100000 loops, best of 3: 7.79 usec per loop\n\npython -mtimeit -s\"from pickle import loads,dumps;d= 13**666\" \"loads(dumps(d))\"\n1000 loops, best of 3: 644 usec per loop\n\nmarshal is much faster.\n"
Q:
Beginner looking for beautiful and instructional Python code
As a complete beginner with no programming experience, I am trying to find beautiful Python code to study and play with. Please answer by pointing to a website, a book or some software project.
I have the following criteria:
complete code listings (working, hackable code)
beautiful code (highly readable, simple but effective)
instructional for the beginner (yes, hand-holding is needed)
I've been trying to learn how to program for too long now, and have never gotten to the point where the rubber hits the road. My main agenda is best spelled out by Nat Friedman's "How to become a hacker".
I'm aware of O'Reilly's "Beautiful Code", but think of it as too advanced and confusing for a beginner.
A:
Buy Programming Collective Intelligence. Great book of interesting AI algorithms based on mining data and all of the examples are in very easy to read Python.
The other great book is Text Processing in Python
A:
Read the Python libraries themselves. They're working, hackable, elegant, and instructional. Some is simple, some is complex.
Best of all, you got it when you downloaded Python itself. It's in your Python library directory. Nothing more to do except start poking around.
A:
Just do it.
Seriously, you're never going to learn to be a good programmer until you write some programs. First you'll write bad programs, then you'll fix them, then you'll write better ones, etc...
If you aren't insatiably motivated to try coding, then maybe it isn't for you. One way to get motivated is to get a job that requires you to code... for me, there's nothing like having my salary and pride on the line to get me working :)
A:
The Python project itself maintains a nice list of beginner's guides.
A:
Beautiful is so hard to define; there's no real answer to this question. Your best advice is to follow what Nat says in the post you linked:
Download the source code to the program you want to change
Untar it on your hard drive
Get it to build and run
Open the source code in an editor
Find the part of the code that you need to change to make the program do what you want it to do
Make the changes you need to make to the code and test it to make sure it works
Run the diff -u command and email the output to the mailing list
There is no point looking for beautiful code. Just look at and fix bugs in projects that you use (Django & Twisted might be good candidates).
A:
I've seen How to Think Like a Computer Scientist recommended in many blogs.
A:
I personally think that reading good code won't work until you have a firm understanding of the language, especially of its idioms. First, I recommend the basic Wikibook "Non-Programmer's Tutorial for Python" to start out. If most of that makes sense, you have a good understanding of the basics already.
After that, I recommend Dive into Python. You'll see a lot of other people recommending this book, because it's comprehensive and free. You'll learn a lot of language specific idioms in Dive into Python, especially in the first few chapters. As you're reading it, try to do basic programs using the techniques Mark Pilgrim shows.
Dive into Python gets into specific modules later in the book. That will probably get a little boring, and when it does, you might want to look at code. I don't feel qualified to rank the code used by these, but Django and Deluge are both bigger projects that will show you the organization of large programs. Though they will probably be overwhelming unless you take the time to really attack them one piece at a time and get a firm understanding.
A:
I've learned quite a bit of beautiful and useful Python from O'Reilly's Python Cookbook. http://oreilly.com/catalog/9780596001674/
I've also learned much from ActiveState's Python Recipe's web page. http://code.activestate.com/recipes/langs/python/
A:
I'd recommend you review Exaile music player for linux. It includes a lot of practically useful things like plugins, lambda, decorators, settings manager, gui (using GTK+) and much more.
Exaile's source code is not ideal, but it will give you enough helpful information and basic Python coding concepts.
Q:
No print output from child multiprocessing.Process unless the program crashes
I am having trouble with the Python multiprocessing module. I am using the Process class to spawn a new process in order to utilize my second core. This second process loads a bunch of data into RAM and then waits patiently instead of consuming.
I wanted to see what that process printed with the print command; however, I do not see anything that it prints. I only see what the parent process prints. Now this makes sense to me since they live in two different processes. The second process doesn't spawn its own shell/standard output window, nor is its output sent to the parent. Yet when this process crashes, it prints everything that my script told it to print, plus the stack trace and error.
I am wondering if there is a simple way to send the child process's print output to the first process, or have it spawn a shell/standard output so that I may debug it. I know I could create a multiprocessing.Queue dedicated to transmitting prints to the parent so that it may print these to standard output, but I do not feel like doing this if a simpler solution exists.
A:
Have you tried flushing stdout?
import sys
print "foo"
sys.stdout.flush()
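In the child process that might look like this (a hedged sketch):

import sys
from multiprocessing import Process

def worker():
    # without the flush, this output sits in the child's buffer and only
    # shows up when the buffer is flushed -- e.g. when the process crashes
    print "child: data loaded, waiting"
    sys.stdout.flush()

if __name__ == "__main__":
    p = Process(target=worker)
    p.start()
    p.join()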
Q:
best way to print data in columnar format?
I am using Python to read in data in a user-unfriendly format and transform it into an easier-to-read format. The records I am outputting are usually going to be just a last name, first name, and room code.
I would like to output a series of pages, each containing a contiguous subset of the total records, divided into multiple columns, each of which contains a contiguous subset of the total records on the page. (So in other words, you'd read down the first column, move to the next column, move to the next column, etc., and then start over on the next page...)
The problem I am facing now is that for output formats, I'm almost certainly limited to HTML (and Javascript, CSS, etc.) What is the best way to get the data into this columnar format? If I knew for certain that the printable area of the paper would hold 20 records vertically and five horizontally, for instance, I could easily print tables of 5x20, but I don't know if there's a way to indicate a page break -- and I don't know if there's any way to calculate programmatically how many records will fit on the page.
How would you approach this?
EDIT: The reason I said that I was limited in output: I have to produce the file on one computer, then bring it to a different computer upon which we cannot install new software and on which the selection of existing software is not optimal. The file itself is only going to be used to make a physical printout (which is what the end users will actually work with), but my time on the computer that I can print from is going to be limited, so I need to have the file all ready to go and print right away without a lot of tweaking.
Right now I've managed to find a word processor that I can use on the target machine, so I'm going to see if I can target a format that the word processor uses.
EDIT: Once I knew there was a word processor I could use, I made a simple skeleton file with the settings that I wanted (column and tab settings, monospaced font in a small point size, etc.) and then measured how many characters I got per line of a column and how many lines I got per column. I've watched the runs pretty carefully to make sure that there weren't some strange lines that somehow overflowed the characters-per-line guideline (which shouldn't happen with monospaced font, of course, but how many times do you end up having to figure out why that thing that "shouldn't" happen is happening anyways?)
If there hadn't been a word processor on the target machine that I could use, I probably would have looked at PDF as an output format.
A:
"If I knew for certain that the printable area of the paper would hold 20 records vertically and five horizontally"
You do know that.
You know the size of your paper. You know the size of your font. You can easily do the math.
"almost certainly limited to HTML..." doesn't make much sense. Is this a web application? The page can have a "Previous" and "Next" button to step through the pages? Pick a size that looks good to you and display one page full with "Previous" and "Next" buttons.
If it's supposed to be one HTML page that prints correctly, that's hard. There are CSS things you can do, but you'll be happier creating a PDF file.
Get PyX or ReportLab and create a PDF that prints properly.
I -- personally -- have no patience with any of this. I try to put this kind of thing into a CSV file. My users can then open the CSV with a spreadsheet tool (OpenOffice.org has a good one) and then adjust the columns and print with it.
Q:
Ruby timeout for Python?
Does anyone know a good solution for implementing a function similar to Ruby's timeout in Python? I've googled it and didn't really see anything very good. Thanks for the help.
Here's a link to the Ruby documentation
http://www.ruby-doc.org/stdlib/libdoc/timeout/rdoc/index.html
A:
def timeout(func, args=(), kwargs={}, timeout_duration=1, default=None):
    import threading

    class InterruptableThread(threading.Thread):
        def __init__(self):
            threading.Thread.__init__(self)
            self.result = None

        def run(self):
            try:
                self.result = func(*args, **kwargs)
            except:
                self.result = default

    it = InterruptableThread()
    it.start()
    it.join(timeout_duration)
    if it.isAlive():
        return default
    else:
        return it.result
from:
http://code.activestate.com/recipes/473878/
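Example use (illustrative):

import time

def slow_add(a, b):
    time.sleep(5)
    return a + b

# slow_add needs 5 seconds but only gets 1, so the default comes back
print timeout(slow_add, args=(1, 2), timeout_duration=1, default="too slow")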
Q:
Setting up a foreign key to an abstract base class with Django
I've factored out common attributes from two classes into an abstract base class, however I have another model that needs to reference either one of those classes. It's not possible to reference an ABC as it doesn't actually have a database table.
The following example should illustrate my problem:
class Answer(models.Model):
ovramt = models.ForeignKey("Ovramt")
question = models.ForeignKey("Question")
answer = models.CharField(max_length=3, choices=(("yes","yes"),("no","no"),("NA","N/A")))
likelihood = models.IntegerField(choices=LIKELY_CHOICES)
consequence = models.IntegerField(choices=CONSEQUENCE_CHOICES)
class Meta:
abstract = True
class Answer_A(Answer):
resident = models.ForeignKey("Resident")
def __unicode__(self):
return u"%s - %s - %s" %(self.ovramt.ssa.name, self.resident, self.question)
class Answer_B(Answer):
def __unicode__(self):
return u"%s - %s" %(self.ovramt.ssa.name, self.question)
class Answer_Risk(models.Model):
answer = models.ForeignKey("Answer")
risk = models.CharField(max_length=200)
def __unicode__(self):
return self.risk
Answer_A and Answer_B are slightly different in that Answer_A also needs a FK relationship to another table. Answer_B may also require some specific attributes later. The problem would STILL exist if I had Answer_B be the superclass - and have Answer_A subclass or compose it.
A 'Risk' is the same whether it's Answer_A or Answer_B. I also have other models that need to reference an 'Answer' regardless of its sub-type. How can this be done? How can you reference a type regardless of its sub-type?
Update:
I was trying to avoid a join operation but I don't think I'm going to be able to. Would it be worth having the reference to 'Resident' in all 'Answer's and just nulling it where required? Or is that considered very bad practice?
A:
A generic relation seems to be the solution. But it will complicate things even further.
It seems to me; your model structure is already more complex than necessary. I would simply merge all three Answer models into one. This way:
Answer_Risk would work without modification.
You can set resident to None (NULL) in case of an Answer_A.
You can return different string representations depending on resident == None. (in other words; same functionality)
One more thing; are your answers likely to have more than one risk? If they'll have none or one risk you should consider the following alternative implementations:
Using a one-to-one relationship
Demoting risk as a field (or any number of fields) inside Answer class.
My main concern is neither database structure nor performance (although these changes should improve performance) but code maintainability.
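A rough sketch of the merged model (field list abridged; the key point is the nullable resident):
class Answer(models.Model):
    ovramt = models.ForeignKey("Ovramt")
    question = models.ForeignKey("Question")
    resident = models.ForeignKey("Resident", null=True, blank=True)
    # ... answer, likelihood, consequence as before ...

    def __unicode__(self):
        if self.resident is None:
            return u"%s - %s" % (self.ovramt.ssa.name, self.question)
        return u"%s - %s - %s" % (self.ovramt.ssa.name, self.resident, self.question)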
A:
My gut would be to suggest removing the abstract modifier on the base class. You'll get the same model structure, but Answer will be its own table. The downside of this is that if these are large tables and/or your queries are complex, queries against it could be noticeably slower.
Alternatively, you could keep your models as is, but replace the ForeignKey to Answer with a GenericForeignKey. What you lose in the syntactic sugar of model inheritance, you gain a bit in query speed.
I don't believe it's possible to reference an abstract base model by ForeignKey (or anything functionally the same).
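For completeness, a sketch of the GenericForeignKey route (untested against your models, but the shape is this):
from django.contrib.contenttypes.models import ContentType
from django.contrib.contenttypes import generic

class Answer_Risk(models.Model):
    content_type = models.ForeignKey(ContentType)
    object_id = models.PositiveIntegerField()
    answer = generic.GenericForeignKey('content_type', 'object_id')  # points at Answer_A or Answer_B
    risk = models.CharField(max_length=200)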
| Setting up a foreign key to an abstract base class with Django | I've factored out common attributes from two classes into an abstract base class, however I have another model that needs to reference either one of those classes. It's not possible to reference an ABC as it doesn't actually have a database table.
The following example should illustrate my problem:
class Answer(models.Model):
ovramt = models.ForeignKey("Ovramt")
question = models.ForeignKey("Question")
answer = models.CharField(max_length=3, choices=(("yes","yes"),("no","no"),("NA","N/A")))
likelihood = models.IntegerField(choices=LIKELY_CHOICES)
consequence = models.IntegerField(choices=CONSEQUENCE_CHOICES)
class Meta:
abstract = True
class Answer_A(Answer):
resident = models.ForeignKey("Resident")
def __unicode__(self):
return u"%s - %s - %s" %(self.ovramt.ssa.name, self.resident, self.question)
class Answer_B(Answer):
def __unicode__(self):
return u"%s - %s" %(self.ovramt.ssa.name, self.question)
class Answer_Risk(models.Model):
answer = models.ForeignKey("Answer")
risk = models.CharField(max_length=200)
def __unicode__(self):
return self.risk
Answer_A and Answer_B are slightly different in that Answer_A also needs a FK relationship to another table. Answer_B may also require some specific attributes later. The problem would STILL exist if I had Answer_B be the superclass - and have Answer_A subclass or compose it.
A 'Risk' is the same whether it's Answer_A or Answer_B. I also have other models that need to reference an 'Answer' regardless of its sub-type. How can this be done? How can you reference a type regardless of its sub-type?
Update:
I was trying to avoid a join operation but I don't think I'm going to be able to. Would it be worth having the reference to 'Resident' in all 'Answer's and just nulling it where required? Or is that considered very bad practice?
| [
"A generic relation seems to be the solution. But it will complicate things even further.\nIt seems to me; your model structure is already more complex than necessary. I would simply merge all three Answer models into one. This way:\n\nAnswer_Risk would work without modification.\nYou can set resident to None (NULL) in case of an Answer_A.\nYou can return different string represantations depending on resident == None. (in other words; same functionality)\n\nOne more thing; are your answers likely to have more than one risk? If they'll have none or one risk you should consider following alternative implementations:\n\nUsing a one-to-one relationship\nDemoting risk as a field (or any number of fields) inside Answer class.\n\nMy main concern is neither database structure nor performance (although these changes should improve performance) but code maintainability.\n",
"My gut would be to suggest removing the abstract modifier on the base class. You'll get the same model structure, but Answer will be it's own table. The downside of this is that if these are large tables and/or your queries are complex, queries against it could be noticeably slower.\nAlternatively, you could keep your models as is, but replace the ForeignKey to Answer with a GenericForeignKey. What you lose in the syntactic sugar of model inheritance, you gain a bit in query speed.\nI don't believe it's possible to reference an abstract base model by ForeignKey (or anything functionally the same).\n"
] | [
18,
8
] | [] | [] | [
"django",
"django_models",
"inheritance",
"python"
] | stackoverflow_0000367461_django_django_models_inheritance_python.txt |
Q:
Application configuration incorrect with Python Imaging Library
I'm trying to install the Python Imaging Library 1.1.6 for Python 2.6. After downloading the installation executable (Win XP), I receive the following error message:
"Application failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem"
Any thoughts on what I have done / not done? The application has not been installed, and I can't import the module through the IDLE session. Thoughts?
A:
It looks like an SxS ("side-by-side") issue. Probably the runtime libraries PIL is linked against are missing. Try installing a redistributable package of a compiler which was used to build PIL.
MSVC 2005 redist
MSVC 2008 redist
A:
Install Python "for all users", not "just for me".
A:
I got that same message recently to do with the wxPython library. It was because wxPython had been built using a later version of Visual C++ than I had on my PC. As atzz suggests, one solution is to install the appropriate redistributable package. Try a Google search for 'Microsoft Visual C++ 2008 Redistributable Package' and do the download. If that doesn't work, repeat for the 2005 version.
| Application configuration incorrect with Python Imaging Library | I'm trying to install the Python Imaging Library 1.1.6 for Python 2.6. After downloading the installation executable (Win XP), I receive the following error message:
"Application failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem"
Any thoughts on what I have done / not done? The application has not been installed, and I can't import the module through the IDLE session. Thoughts?
| [
"It looks like an SxS (\"side-by-side\") issue. Probably the runtime libraries PIL is linked against are missing. Try installing a redistributable package of a compiler which was used to build PIL.\nMSVC 2005 redist\nMSVC 2008 redist\n",
"Install Python \"for all users\", not \"just for me\".\n",
"I got that same message recently to do with the wxPython library. It was because wxPython had been built using a later version of Visual C++ than I had on my PC. As atzz suggests, one solution is to install the appropriate redistributable package. Try a Google search for 'Microsoft Visual C++ 2008 Redistributable Package' and do the download. If that doesn't work, repeat for the 2005 version.\n"
] | [
3,
1,
0
] | [
"I am shooting in the dark: could it be this?\n"
] | [
-1
] | [
"image_processing",
"python"
] | stackoverflow_0000321668_image_processing_python.txt |
Q:
Making a virtual package available via sys.modules
Say I have a package "mylibrary".
I want to make "mylibrary.config" available for import, either as a dynamically created module, or a module imported from an entirely different place that would then basically be "mounted" inside the "mylibrary" namespace.
I.e., I do:
import sys, types
sys.modules['mylibrary.config'] = types.ModuleType('config')
Given that setup:
>>> import mylibrary.config # -> works
>>> from mylibrary import config
<type 'exceptions.ImportError'>: cannot import name config
Even stranger:
>>> import mylibrary.config as X
<type 'exceptions.ImportError'>: cannot import name config
So it seems that using the direct import works, the other forms do not. Is it possible to make those work as well?
A:
You need to monkey-patch the module not only into sys.modules, but also into its parent module:
>>> import sys,types,xml
>>> xml.config = sys.modules['xml.config'] = types.ModuleType('xml.config')
>>> import xml.config
>>> from xml import config
>>> from xml import config as x
>>> x
<module 'xml.config' (built-in)>
A:
As well as the following:
import sys, types
config = types.ModuleType('config')
sys.modules['mylibrary.config'] = config
You also need to do:
import mylibrary
mylibrary.config = config
A:
You can try something like this:
class VirtualModule(object):
def __init__(self, modname, subModules):
try:
import sys
self._mod = __import__(modname)
sys.modules[modname] = self
__import__(modname)
self._modname = modname
self._subModules = subModules
except ImportError, err:
pass # please signal error in some useful way :-)
def __repr__(self):
return "Virtual module for " + self._modname
def __getattr__(self, attrname):
if attrname in self._subModules.keys():
import sys
__import__(self._subModules[attrname])
return sys.modules[self._subModules[attrname]]
else:
return self._mod.__dict__[attrname]
VirtualModule('mylibrary', {'config': 'actual_module_for_config'})
import mylibrary
mylibrary.config
mylibrary.some_function
| Making a virtual package available via sys.modules | Say I have a package "mylibrary".
I want to make "mylibrary.config" available for import, either as a dynamically created module, or a module imported from an entirely different place that would then basically be "mounted" inside the "mylibrary" namespace.
I.e., I do:
import sys, types
sys.modules['mylibrary.config'] = types.ModuleType('config')
Given that setup:
>>> import mylibrary.config # -> works
>>> from mylibrary import config
<type 'exceptions.ImportError'>: cannot import name config
Even stranger:
>>> import mylibrary.config as X
<type 'exceptions.ImportError'>: cannot import name config
So it seems that using the direct import works, the other forms do not. Is it possible to make those work as well?
| [
"You need to monkey-patch the module not only into sys.modules, but also into its parent module:\n>>> import sys,types,xml\n>>> xml.config = sys.modules['xml.config'] = types.ModuleType('xml.config')\n>>> import xml.config\n>>> from xml import config\n>>> from xml import config as x\n>>> x\n<module 'xml.config' (built-in)>\n\n",
"As well as the following:\nimport sys, types\nconfig = types.ModuleType('config')\nsys.modules['mylibrary.config'] = config\n\nYou also need to do: \nimport mylibrary\nmylibrary.config = config\n\n",
"You can try something like this:\nclass VirtualModule(object):\n def __init__(self, modname, subModules):\n try:\n import sys\n self._mod = __import__(modname)\n sys.modules[modname] = self\n __import__(modname)\n self._modname = modname\n self._subModules = subModules\n except ImportError, err:\n pass # please signal error in some useful way :-)\n def __repr__(self):\n return \"Virtual module for \" + self._modname\n def __getattr__(self, attrname):\n if attrname in self._subModules.keys():\n import sys\n __import__(self._subModules[attrname])\n return sys.modules[self._subModules[attrname]]\n else:\n return self._mod.__dict__[attrname]\n\n\nVirtualModule('mylibrary', {'config': 'actual_module_for_config'})\n\nimport mylibrary\nmylibrary.config\nmylibrary.some_function\n\n"
] | [
14,
2,
1
] | [] | [] | [
"import",
"module",
"python"
] | stackoverflow_0000368057_import_module_python.txt |
Q:
How to expose std::vector as a Python list using SWIG?
I'm trying to expose this function to Python using SWIG:
std::vector<int> get_match_stats();
And I want SWIG to generate wrapping code for Python so I can see it as a list of integers.
Adding this to the .i file:
%include "typemaps.i"
%include "std_vector.i"
namespace std
{
%template(IntVector) vector<int>;
}
I'm running SWIG Version 1.3.36 and calling swig with -Wall and I get no warnings.
I'm able to get access to a list, but I get a bunch of warnings when compiling the generated C++ code with -Wall (using g++ (GCC) 4.2.4) that say:
warning: dereferencing type-punned pointer will break strict-aliasing rules
Am I exposing the function correctly? If so, what does the warning mean?
These are the lines before the offending line in the same function:
SWIGINTERN PyObject *_wrap_IntVector_erase__SWIG_0(PyObject *SWIGUNUSEDPARM(self), PyObject *args) {
PyObject *resultobj = 0;
std::vector<int> *arg1 = (std::vector<int> *) 0 ;
std::vector<int>::iterator arg2 ;
std::vector<int>::iterator result;
void *argp1 = 0 ;
int res1 = 0 ;
swig::PySwigIterator *iter2 = 0 ;
int res2 ;
PyObject * obj0 = 0 ;
PyObject * obj1 = 0 ;
if (!PyArg_ParseTuple(args,(char *)"OO:IntVector_erase",&obj0,&obj1)) SWIG_fail;
res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_std__vectorT_int_std__allocatorT_int_t_t, 0 | 0 );
if (!SWIG_IsOK(res1)) {
SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "IntVector_erase" "', argument " "1"" of type '" "std::vector<int> *""'");
}
arg1 = reinterpret_cast< std::vector<int> * >(argp1);
And this is the offending line:
res2 = SWIG_ConvertPtr(obj1, SWIG_as_voidptrptr(&iter2), swig::PySwigIterator::descriptor(), 0);
More code follows that.
The warning generated when compiling with g++ 4.2.4 is:
swig_iss_wrap.cxx: In function ‘PyObject* _wrap_IntVector_erase__SWIG_0(PyObject*, PyObject*)’:
swig_iss_wrap.cxx:5885: warning: dereferencing type-punned pointer will break strict-aliasing rules
A:
%template(IntVector) vector<int>;
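With that template in place, the wrapped function is usable from Python roughly like this (module name assumed to be example):
import example

stats = example.get_match_stats()   # an IntVector proxy object
print list(stats)                   # copies out to a plain Python list
print stats[0], len(stats)          # the sequence protocol also works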
A:
I don't have much experience with Swig, but are you #including your C++ header file in your .i file? Try one (or both) of
%include "myvector.h"
%{
# include "myvector.h"
%}
| How to expose std::vector as a Python list using SWIG? | I'm trying to expose this function to Python using SWIG:
std::vector<int> get_match_stats();
And I want SWIG to generate wrapping code for Python so I can see it as a list of integers.
Adding this to the .i file:
%include "typemaps.i"
%include "std_vector.i"
namespace std
{
%template(IntVector) vector<int>;
}
I'm running SWIG Version 1.3.36 and calling swig with -Wall and I get no warnings.
I'm able to get access to a list, but I get a bunch of warnings when compiling the generated C++ code with -Wall (using g++ (GCC) 4.2.4) that say:
warning: dereferencing type-punned pointer will break strict-aliasing rules
Am I exposing the function correctly? If so, what does the warning mean?
These are the lines before the offending line in the same function:
SWIGINTERN PyObject *_wrap_IntVector_erase__SWIG_0(PyObject *SWIGUNUSEDPARM(self), PyObject *args) {
PyObject *resultobj = 0;
std::vector<int> *arg1 = (std::vector<int> *) 0 ;
std::vector<int>::iterator arg2 ;
std::vector<int>::iterator result;
void *argp1 = 0 ;
int res1 = 0 ;
swig::PySwigIterator *iter2 = 0 ;
int res2 ;
PyObject * obj0 = 0 ;
PyObject * obj1 = 0 ;
if (!PyArg_ParseTuple(args,(char *)"OO:IntVector_erase",&obj0,&obj1)) SWIG_fail;
res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_std__vectorT_int_std__allocatorT_int_t_t, 0 | 0 );
if (!SWIG_IsOK(res1)) {
SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "IntVector_erase" "', argument " "1"" of type '" "std::vector<int> *""'");
}
arg1 = reinterpret_cast< std::vector<int> * >(argp1);
And this is the offending line:
res2 = SWIG_ConvertPtr(obj1, SWIG_as_voidptrptr(&iter2), swig::PySwigIterator::descriptor(), 0);
More code follows that.
The warning generated when compiling with g++ 4.2.4 is:
swig_iss_wrap.cxx: In function ‘PyObject* _wrap_IntVector_erase__SWIG_0(PyObject*, PyObject*)’:
swig_iss_wrap.cxx:5885: warning: dereferencing type-punned pointer will break strict-aliasing rules
| [
"%template(IntVector) vector<int>;\n\n",
"I don't have much experience with Swig, but are you #including your C++ header file in your .i file? Try one (or both) of\n%include \"myvector.h\"\n\n\n%{\n# include \"myvector.h\"\n%}\n\n"
] | [
14,
0
] | [] | [] | [
"c++",
"python",
"stl",
"swig"
] | stackoverflow_0000276769_c++_python_stl_swig.txt |
Q:
Python regex to parse into a 2D array
I have a string like this that I need to parse into a 2D array:
str = "'813702104[813702106]','813702141[813702143]','813702172[813702174]'"
the array equiv would be:
arr[0][0] = 813702104
arr[0][1] = 813702106
arr[1][0] = 813702141
arr[1][1] = 813702143
#... etc ...
I'm trying to do this by REGEX. The string above is buried in an HTML page but I can be certain it's the only string in that pattern on the page. I'm not sure if this is the best way, but it's all I've got right now.
imgRegex = re.compile(r"(?:'(?P<main>\d+)\[(?P<thumb>\d+)\]',?)+")
If I run imgRegex.match(str).groups() I only get one result (the first couplet). How do I either get multiple matches back or a 2d match object (if such a thing exists!)?
Note: Contrary to how it might look, this is not homework
Note part deux: The real string is embedded in a large HTML file and therefore splitting does not appear to be an option.
I'm still getting answers for this, so I thought I better edit it to show why I'm not changing the accepted answer. Splitting, though more efficient on this test string, isn't going to extract the parts from a whole HTML file. I could combine a regex and splitting but that seems silly.
If you do have a better way to find the parts from a load of HTML (the pattern \d+\[\d+\] is unique to this string in the source), I'll happily change accepted answers. Anything else is academic.
A:
I would try findall or finditer instead of match.
Edit by Oli: Yeah findall works brilliantly but I had to simplify the regex to:
r"'(?P<main>\d+)\[(?P<thumb>\d+)\]',?"
A:
I think I will not go for regex for this task. Python list comprehension is quite powerful for this
In [27]: s = "'813702104[813702106]','813702141[813702143]','813702172[813702174]'"
In [28]: d=[[int(each1.strip(']\'')) for each1 in each.split('[')] for each in s.split(',')]
In [29]: d[0][1]
Out[29]: 813702106
In [30]: d[1][0]
Out[30]: 813702141
In [31]: d
Out[31]: [[813702104, 813702106], [813702141, 813702143], [813702172, 813702174]]
A:
Modifying your regexp a little,
>>> str = "'813702104[813702106]','813702141[813702143]','813702172[813702174]"
>>> imgRegex = re.compile(r"'(?P<main>\d+)\[(?P<thumb>\d+)\]',?")
>>> print imgRegex.findall(str)
[('813702104', '813702106'), ('813702141', '813702143')]
Which is a "2 dimensional array" - in Python, "a list of 2-tuples".
A:
I've got something that seems to work on your data set:
In [19]: str = "'813702104[813702106]','813702141[813702143]','813702172[813702174]'"
In [20]: ptr = re.compile( r"'(?P<one>\d+)\[(?P<two>\d+)\]'" )
In [21]: ptr.findall( str )
Out [23]:
[('813702104', '813702106'),
('813702141', '813702143'),
('813702172', '813702174')]
A:
Alternatively, you could use Python's [statement for item in list] syntax for building lists. You should find this to be considerably faster than a regex, particularly for small data sets. Larger data sets will show a less marked difference (it only has to load the regular expressions engine once no matter the size), but the listmaker should always be faster.
Start by splitting the string on commas:
>>> str = "'813702104[813702106]','813702141[813702143]','813702172[813702174]'"
>>> arr = [pair for pair in str.split(",")]
>>> arr
["'813702104[813702106]'", "'813702141[813702143]'", "'813702172[813702174]'"]
Right now, this returns the same thing as just str.split(","), so isn't very useful, but you should be able to see how the listmaker works โ it iterates through list, assigning each value to item, executing statement, and appending the resulting value to the newly-built list.
In order to get something useful accomplished, we need to put a real statement in, so we get a slice of each pair which removes the single quotes and closing square bracket, then further split on that conveniently-placed opening square bracket:
>>> arr = [pair[1:-2].split("[") for pair in str.split(",")]
>>> arr
>>> [['813702104', '813702106'], ['813702141', '813702143'], ['813702172', '813702174']]
This returns a two-dimensional array like you describe, but the items are all strings rather than integers. If you're simply going to use them as strings, that's far enough. If you need them to be actual integers, you simply use an "inner" listmaker as the statement for the "outer" listmaker:
>>> arr = [[int(x) for x in pair[1:-2].split("[")] for pair in str.split(",")]
>>> arr
>>> [[813702104, 813702106], [813702141, 813702143], [813702172, 813702174]]
This returns a two-dimensional array of the integers representing in a string like the one you provided, without ever needing to load the regular expressions engine.
| Python regex to parse into a 2D array | I have a string like this that I need to parse into a 2D array:
str = "'813702104[813702106]','813702141[813702143]','813702172[813702174]'"
the array equiv would be:
arr[0][0] = 813702104
arr[0][1] = 813702106
arr[1][0] = 813702141
arr[1][1] = 813702143
#... etc ...
I'm trying to do this by REGEX. The string above is buried in an HTML page but I can be certain it's the only string in that pattern on the page. I'm not sure if this is the best way, but it's all I've got right now.
imgRegex = re.compile(r"(?:'(?P<main>\d+)\[(?P<thumb>\d+)\]',?)+")
If I run imgRegex.match(str).groups() I only get one result (the first couplet). How do I either get multiple matches back or a 2d match object (if such a thing exists!)?
Note: Contrary to how it might look, this is not homework
Note part deux: The real string is embedded in a large HTML file and therefore splitting does not appear to be an option.
I'm still getting answers for this, so I thought I better edit it to show why I'm not changing the accepted answer. Splitting, though more efficient on this test string, isn't going to extract the parts from a whole HTML file. I could combine a regex and splitting but that seems silly.
If you do have a better way to find the parts from a load of HTML (the pattern \d+\[\d+\] is unique to this string in the source), I'll happily change accepted answers. Anything else is academic.
| [
"I would try findall or finditer instead of match.\nEdit by Oli: Yeah findall work brilliantly but I had to simplify the regex to:\nr\"'(?P<main>\\d+)\\[(?P<thumb>\\d+)\\]',?\"\n\n",
"I think I will not go for regex for this task. Python list comprehension is quite powerful for this\nIn [27]: s = \"'813702104[813702106]','813702141[813702143]','813702172[813702174]'\"\n\nIn [28]: d=[[int(each1.strip(']\\'')) for each1 in each.split('[')] for each in s.split(',')]\n\nIn [29]: d[0][1]\nOut[29]: 813702106\n\nIn [30]: d[1][0]\nOut[30]: 813702141\n\nIn [31]: d\nOut[31]: [[813702104, 813702106], [813702141, 813702143], [813702172, 813702174]]\n\n",
"Modifying your regexp a little,\n>>> str = \"'813702104[813702106]','813702141[813702143]','813702172[813702174]\"\n>>> imgRegex = re.compile(r\"'(?P<main>\\d+)\\[(?P<thumb>\\d+)\\]',?\")\n>>> print imgRegex.findall(str)\n[('813702104', '813702106'), ('813702141', '813702143')]\n\nWhich is a \"2 dimensional array\" - in Python, \"a list of 2-tuples\".\n",
"I've got something that seems to work on your data set:\nIn [19]: str = \"'813702104[813702106]','813702141[813702143]','813702172[813702174]'\"\nIn [20]: ptr = re.compile( r\"'(?P<one>\\d+)\\[(?P<two>\\d+)\\]'\" )\nIn [21]: ptr.findall( str )\nOut [23]:\n[('813702104', '813702106'),\n ('813702141', '813702143'),\n ('813702172', '813702174')]\n\n",
"Alternatively, you could use Python's [statement for item in list] syntax for building lists. You should find this to be considerably faster than a regex, particularly for small data sets. Larger data sets will show a less marked difference (it only has to load the regular expressions engine once no matter the size), but the listmaker should always be faster.\nStart by splitting the string on commas:\n>>> str = \"'813702104[813702106]','813702141[813702143]','813702172[813702174]'\"\n>>> arr = [pair for pair in str.split(\",\")]\n>>> arr\n[\"'813702104[813702106]'\", \"'813702141[813702143]'\", \"'813702172[813702174]'\"]\n\nRight now, this returns the same thing as just str.split(\",\"), so isn't very useful, but you should be able to see how the listmaker works โ it iterates through list, assigning each value to item, executing statement, and appending the resulting value to the newly-built list.\nIn order to get something useful accomplished, we need to put a real statement in, so we get a slice of each pair which removes the single quotes and closing square bracket, then further split on that conveniently-placed opening square bracket:\n>>> arr = [pair[1:-2].split(\"[\") for pair in str.split(\",\")]\n>>> arr\n>>> [['813702104', '813702106'], ['813702141', '813702143'], ['813702172', '813702174']]\n\nThis returns a two-dimensional array like you describe, but the items are all strings rather than integers. If you're simply going to use them as strings, that's far enough. If you need them to be actual integers, you simply use an \"inner\" listmaker as the statement for the \"outer\" listmaker:\n>>> arr = [[int(x) for x in pair[1:-2].split(\"[\")] for pair in str.split(\",\")]\n>>> arr\n>>> [[813702104, 813702106], [813702141, 813702143], [813702172, 813702174]]\n\nThis returns a two-dimensional array of the integers representing in a string like the one you provided, without ever needing to load the regular expressions engine.\n"
] | [
5,
3,
1,
1,
1
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0000346267_python_regex.txt |
Q:
Python: os.environ.get('SSH_ORIGINAL_COMMAND') returns None
Trying to follow a technique found at bzr and gitosis I did the following:
added to ~/.ssh/authorized_keys the command="my_parser" parameter
which points to a Python script file named 'my_parser' and located in
/usr/local/bin (file was chmoded as 777)
in that script file '/usr/local/bin/my_parser' I got the following
lines:
#!/usr/bin/env python
import os
print os.environ.get('SSH_ORIGINAL_COMMAND', None)
When trying to ssh e.g. ssh localhost
I get None on the terminal and then the connection is closed.
I wonder if anyone has done something similar in the past and can help me
with this.
Is there anything I should do in my python file in order to get that
environment variable?
A:
$SSH_ORIGINAL_COMMAND is set when you connect to a host with ssh to execute a single command:
$ ssh username@host 'some command'
Your "my_parser" would then return "some command".
Unless you invoke a shell with my_parser, it will then exit, and the connection will close. You can use this to control the environment of the remotely executed commands, but you lose the ability to have an interactive session
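A sketch of what my_parser could do with that value instead of just printing it (the permitted commands here are invented):
#!/usr/bin/env python
import os
import sys

cmd = os.environ.get('SSH_ORIGINAL_COMMAND')
if cmd is None:
    sys.exit("interactive logins are not allowed")
if cmd == "whoami" or cmd.startswith("ls "):
    os.system(cmd)   # naive dispatch; validate carefully in real use
else:
    sys.exit("command not permitted: %r" % cmd)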
| Python: os.environ.get('SSH_ORIGINAL_COMMAND') returns None | Trying to follow a technique found at bzr and gitosis I did the following:
added to ~/.ssh/authorized_keys the command="my_parser" parameter
which points to a Python script file named 'my_parser' and located in
/usr/local/bin (file was chmoded as 777)
in that script file '/usr/local/bin/my_parser' I got the following
lines:
#!/usr/bin/env python
import os
print os.environ.get('SSH_ORIGINAL_COMMAND', None)
When trying to ssh e.g. ssh localhost
I get None on the terminal and then the connection is closed.
I wonder if anyone has done something similar in the past and can help me
with this.
Is there anything I should do in my python file in order to get that
environment variable?
| [
"$SSH_ORIGINAL_COMMAND is set when you connect to a host with ssh to execute a single command:\n$ ssh username@host 'some command'\n\nYour \"my_parser\" would then return \"some command\".\nUnless you invoke a shell with my_parser, it will then exit, and the connection will close. You can use this to control the environment of the remotely executed commands, but you lose the ability to have an interactive session \n"
] | [
1
] | [] | [] | [
"python",
"ssh"
] | stackoverflow_0000369414_python_ssh.txt |
Q:
Passing a Python array to a C++ vector using Swig
I have an array of objects in Python
[obj1, obj2, obj3]
and I want to pass them off to a C++ function to perform some computation. I'm using SWIG to write my interface. The class type of the passed object is already defined in C++.
What's the best way to do this?
A:
It depends on if your function is already written and cannot be changed, in which case you may need to check Swig docs to see if there is already a typemap from PyList to std::vector (I think there is). If not, taking PyObject* as the argument to the function and using the Python C API for manipulating lists should work fine. I haven't had any problems with it so far. For self-documentation, I recommend typedef'ing PyObject* to some kind of expected type, like "PythonList" so that the parameters have some meaning.
This may also be useful:
How to expose std::vector<int> as a Python list using SWIG?
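With the std::vector route, the Python side then looks roughly like this (module and template names are placeholders):
import mymodule

vec = mymodule.ObjVector()        # from %template(ObjVector) std::vector<MyObj>;
for obj in [obj1, obj2, obj3]:
    vec.push_back(obj)
mymodule.compute(vec)             # the C++ function taking the vector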
| Passing a Python array to a C++ vector using Swig | I have an array of objects in Python
[obj1, obj2, obj3]
and I want to pass them to off to a C++ function to perform some computation. I'm using SWIG to write my interface. The class type of the passed object is already defined in C++.
What's the best way to do this?
| [
"It depends on if your function is already written and cannot be changed, in which case you may need to check Swig docs to see if there is already a typemap from PyList to std::vector (I think there is). If not, taking PyObject* as the argument to the function and using the Python C API for manipulating lists should work fine. I haven't had any problems with it so far. For self-documentation, I recommend typedef'ing PyObject* to some kind of expected type, like \"PythonList\" so that the parameters have some meaning.\nThis may also be useful:\nHow to expose std::vector<int> as a Python list using SWIG?\n"
] | [
2
] | [] | [] | [
"c++",
"python",
"swig"
] | stackoverflow_0000368980_c++_python_swig.txt |
Q:
email.retr retrieves strange =20 characters when the email body has chinese characters in it
self.logger.info(msg)
popinstance=poplib.POP3(self.account[0])
self.logger.info(popinstance.getwelcome())
popinstance.user(self.account[1])
popinstance.pass_(self.account[2])
try:
(numMsgs, totalSize)=popinstance.stat()
self.logger.info("POP contains " + str(numMsgs) + " emails")
for thisNum in xrange(1, numMsgs + 1):
try:
(server_msg, body, octets)=popinstance.retr(thisNum)
except:
self.logger.error("Could not download email")
raise
text="\n".join(body)
mesg=StringIO.StringIO(text)
msg=rfc822.Message(mesg)
MessageID=email.Utils.parseaddr(msg["Message-ID"])[1]
self.logger.info("downloading email " + MessageID)
emailpath=os.path.join(self._emailpath + self._inboxfolder + "\\" + self._sanitize_string(MessageID + ".eml"))
emailpath=self._replace_whitespace(emailpath)
try:
self._dual_dump(text,emailpath)
except:
pass
self.logger.info(popinstance.dele(thisNum))
finally:
self.logger.info(popinstance.quit())
(server_msg, body, octets)=popinstance.retr(thisNum) returns =20 in the body of the email when the email contains chinese characters.
How do I handle this?
raw text of email:
Subject: (B/L:4363-0192-809.015) SI FOR 15680XXXX436
=20
Dear
=20
SI ENCLOSED
PLS SEND US THE BL DRAFT AND DEBIT NOTE
=20
TKS
=20
MYRI
----- Original Message -----=20
A:
It is probably a Space character encoded in quoted-printable
A:
Use the quopri module to decode the string.
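For example (quopri ships with the standard library):
import quopri

line = "----- Original Message -----=20"
print repr(quopri.decodestring(line))
# '----- Original Message ----- '  -- the =20 decodes to a space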
| email.retr retrieves strange =20 characters when the email body has chinese characters in it | self.logger.info(msg)
popinstance=poplib.POP3(self.account[0])
self.logger.info(popinstance.getwelcome())
popinstance.user(self.account[1])
popinstance.pass_(self.account[2])
try:
(numMsgs, totalSize)=popinstance.stat()
self.logger.info("POP contains " + str(numMsgs) + " emails")
for thisNum in xrange(1, numMsgs + 1):
try:
(server_msg, body, octets)=popinstance.retr(thisNum)
except:
self.logger.error("Could not download email")
raise
text="\n".join(body)
mesg=StringIO.StringIO(text)
msg=rfc822.Message(mesg)
MessageID=email.Utils.parseaddr(msg["Message-ID"])[1]
self.logger.info("downloading email " + MessageID)
emailpath=os.path.join(self._emailpath + self._inboxfolder + "\\" + self._sanitize_string(MessageID + ".eml"))
emailpath=self._replace_whitespace(emailpath)
try:
self._dual_dump(text,emailpath)
except:
pass
self.logger.info(popinstance.dele(thisNum))
finally:
self.logger.info(popinstance.quit())
(server_msg, body, octets)=popinstance.retr(thisNum) returns =20 in the body of the email when the email contains chinese characters.
How do I handle this?
raw text of email:
Subject: (B/L:4363-0192-809.015) SI FOR 15680XXXX436
=20
Dear
=20
SI ENCLOSED
PLS SEND US THE BL DRAFT AND DEBIT NOTE
=20
TKS
=20
MYRI
----- Original Message -----=20
| [
"It is probably a Space character encoded in quoted-printable\n",
"Use the quopri module to decode the string.\n"
] | [
8,
5
] | [] | [] | [
"asianfonts",
"email",
"fonts",
"jython",
"python"
] | stackoverflow_0000320166_asianfonts_email_fonts_jython_python.txt |
Q:
using existing rrule to generate a further set of occurrences
I have an rrule instance e.g.
r = rrule(WEEKLY, byweekday=SA, count=10, dtstart=parse('20081001'))
where dtstart and byweekday may change.
If I then want to generate the ten dates that follow on from this
rrule, what's the best way of doing it? Can I assign a new value to
the _dtstart member of r? That seems to work but I'm not sure.
e.g.
r._dtstart = list(r)[-1] or something like that
Otherwise I guess I'll create a new rrule and access _dtstart, _count, _byweekday etc. of the original instance.
EDIT:
I've thought about it more, and I think what I should be doing is omitting the 'count' argument when I create the first rrule instance. I can still get ten occurrences the first time I use the rrule
instances = list(r[0:10])
and then afterwards I can get more
more = list(r[10:20])
I think that solves my problem without any ugliness
A:
Firstly, r._dtstart = list(r)[-1] will give you the last date in the original sequence of dates. If you use that, without modification, for the beginning of a new sequence, you will end up with a duplicate date, i.e. the last date of the first sequence will be the same as the first date of the new sequence, which is probably not what you want:
>>> from dateutil.rrule import *
>>> import datetime
>>> r = rrule(WEEKLY, byweekday=SA, count=10, dtstart=datetime.datetime(2008,10,01))
>>> print list(r)
[datetime.datetime(2008, 10, 4, 0, 0), datetime.datetime(2008, 10, 11, 0, 0), datetime.datetime(2008, 10, 18, 0, 0), datetime.datetime(2008, 10, 25, 0, 0), datetime.datetime(2008, 11, 1, 0, 0), datetime.datetime(2008, 11, 8, 0, 0), datetime.datetime(2008, 11, 15, 0, 0), datetime.datetime(2008, 11, 22, 0, 0), datetime.datetime(2008, 11, 29, 0, 0), datetime.datetime(2008, 12, 6, 0, 0)]
>>> r._dtstart = r[-1]
>>> print list(r)
[datetime.datetime(2008, 12, 6, 0, 0), datetime.datetime(2008, 12, 13, 0, 0), datetime.datetime(2008, 12, 20, 0, 0), datetime.datetime(2008, 12, 27, 0, 0), datetime.datetime(2009, 1, 3, 0, 0), datetime.datetime(2009, 1, 10, 0, 0), datetime.datetime(2009, 1, 17, 0, 0), datetime.datetime(2009, 1, 24, 0, 0), datetime.datetime(2009, 1, 31, 0, 0), datetime.datetime(2009, 2, 7, 0, 0)]
Furthermore, it is considered poor form to manipulate r._dtstart as it is clearly intended to be a private attribute.
Instead, do something like this:
>>> r = rrule(WEEKLY, byweekday=SA, count=10, dtstart=datetime.datetime(2008,10,01))
>>> r2 = rrule(WEEKLY, byweekday=SA, count=r.count(), dtstart=r[-1] + datetime.timedelta(days=1))
>>> print list(r)
[datetime.datetime(2008, 10, 4, 0, 0), datetime.datetime(2008, 10, 11, 0, 0), datetime.datetime(2008, 10, 18, 0, 0), datetime.datetime(2008, 10, 25, 0, 0), datetime.datetime(2008, 11, 1, 0, 0), datetime.datetime(2008, 11, 8, 0, 0), datetime.datetime(2008, 11, 15, 0, 0), datetime.datetime(2008, 11, 22, 0, 0), datetime.datetime(2008, 11, 29, 0, 0), datetime.datetime(2008, 12, 6, 0, 0)]
>>> print list(r2)
[datetime.datetime(2008, 12, 13, 0, 0), datetime.datetime(2008, 12, 20, 0, 0), datetime.datetime(2008, 12, 27, 0, 0), datetime.datetime(2009, 1, 3, 0, 0), datetime.datetime(2009, 1, 10, 0, 0), datetime.datetime(2009, 1, 17, 0, 0), datetime.datetime(2009, 1, 24, 0, 0), datetime.datetime(2009, 1, 31, 0, 0), datetime.datetime(2009, 2, 7, 0, 0), datetime.datetime(2009, 2, 14, 0, 0)]
This code does not access any private attributes of rrule (although you might have to look at _byweekday).
| using existing rrule to generate a further set of occurrences | I have an rrule instance e.g.
r = rrule(WEEKLY, byweekday=SA, count=10, dtstart=parse('20081001'))
where dtstart and byweekday may change.
If I then want to generate the ten dates that follow on from this
rrule, what's the best way of doing it? Can I assign a new value to
the _dtstart member of r? That seems to work but I'm not sure.
e.g.
r._dtstart = list(r)[-1] or something like that
Otherwise I guess I'll create a new rrule and access _dtstart, _count, _byweekday etc. of the original instance.
EDIT:
I've thought about it more, and I think what I should be doing is omitting the 'count' argument when I create the first rrule instance. I can still get ten occurrences the first time I use the rrule
instances = list(r[0:10])
and then afterwards I can get more
more = list(r[10:20])
I think that solves my problem without any ugliness
| [
"Firstly, r._dtstart = list(r)[-1] will give you the last date in the original sequence of dates. If you use that, without modification, for the beginning of a new sequence, you will end up with a duplicate date, i.e. the last date of the first sequence will be the same as the first date of the new sequence, which is probably not what you want:\n>>> from dateutil.rrule import *\n>>> import datetime\n\n>>> r = rrule(WEEKLY, byweekday=SA, count=10, dtstart=datetime.datetime(2008,10,01))\n>>> print list(r)\n[datetime.datetime(2008, 10, 4, 0, 0), datetime.datetime(2008, 10, 11, 0, 0), datetime.datetime(2008, 10, 18, 0, 0), datetime.datetime(2008, 10, 25, 0, 0), datetime.datetime(2008, 11, 1, 0, 0), datetime.datetime(2008, 11, 8, 0, 0), datetime.datetime(2008, 11, 15, 0, 0), datetime.datetime(2008, 11, 22, 0, 0), datetime.datetime(2008, 11, 29, 0, 0), datetime.datetime(2008, 12, 6, 0, 0)]\n>>> r._dtstart = r[-1]\n>>> print list(r)\n[datetime.datetime(2008, 12, 6, 0, 0), datetime.datetime(2008, 12, 13, 0, 0), datetime.datetime(2008, 12, 20, 0, 0), datetime.datetime(2008, 12, 27, 0, 0), datetime.datetime(2009, 1, 3, 0, 0), datetime.datetime(2009, 1, 10, 0, 0), datetime.datetime(2009, 1, 17, 0, 0), datetime.datetime(2009, 1, 24, 0, 0), datetime.datetime(2009, 1, 31, 0, 0), datetime.datetime(2009, 2, 7, 0, 0)]\n\nFurthermore, it is considered poor form to manipulate r._dtstart as it is clearly intended to be a private attribute.\nInstead, do something like this:\n>>> r = rrule(WEEKLY, byweekday=SA, count=10, dtstart=datetime.datetime(2008,10,01))\n>>> r2 = rrule(WEEKLY, byweekday=SA, count=r.count(), dtstart=r[-1] + datetime.timedelta(days=1))\n>>> print list(r)\n[datetime.datetime(2008, 10, 4, 0, 0), datetime.datetime(2008, 10, 11, 0, 0), datetime.datetime(2008, 10, 18, 0, 0), datetime.datetime(2008, 10, 25, 0, 0), datetime.datetime(2008, 11, 1, 0, 0), datetime.datetime(2008, 11, 8, 0, 0), datetime.datetime(2008, 11, 15, 0, 0), datetime.datetime(2008, 11, 22, 0, 0), datetime.datetime(2008, 11, 29, 0, 0), datetime.datetime(2008, 12, 6, 0, 0)]\n>>> print list(r2)\n[datetime.datetime(2008, 12, 13, 0, 0), datetime.datetime(2008, 12, 20, 0, 0), datetime.datetime(2008, 12, 27, 0, 0), datetime.datetime(2009, 1, 3, 0, 0), datetime.datetime(2009, 1, 10, 0, 0), datetime.datetime(2009, 1, 17, 0, 0), datetime.datetime(2009, 1, 24, 0, 0), datetime.datetime(2009, 1, 31, 0, 0), datetime.datetime(2009, 2, 7, 0, 0), datetime.datetime(2009, 2, 14, 0, 0)]\n\nThis codes does not access any private attributes of rrule (although you might have to look at _byweekday).\n"
] | [
1
] | [] | [] | [
"python",
"python_dateutil"
] | stackoverflow_0000369261_python_python_dateutil.txt |
Q:
How do I put a SQLAlchemy label on the result of an arithmetic expression?
How do I translate something like this into SQLAlchemy?
select x - y as difference...
I know how to do:
x.label('foo')
...but I'm not sure where to put the ".label()" method call below:
select ([table.c.x - table.c.y], ...
A:
The ColumnElement method is just a helper; label() can be used in the following way:
select([sql.expression.label('foo', table.c.x - table.c.y), ...])
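A slightly fuller sketch (assuming a Table object named table and a connection conn are in scope):
from sqlalchemy.sql import expression, select

diff = expression.label('difference', table.c.x - table.c.y)
query = select([diff, table.c.x, table.c.y])
for row in conn.execute(query):
    print row['difference']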
| How do I put a SQLAlchemy label on the result of an arithmetic expression? | How do I translate something like this into SQLAlchemy?
select x - y as difference...
I know how to do:
x.label('foo')
...but I'm not sure where to put the ".label()" method call below:
select ([table.c.x - table.c.y], ...
| [
"The ColumnElement method is just a helper; label() can be used following way:\nselect([sql.expression.label('foo', table.c.x - table.c.y), ...])\n\n"
] | [
9
] | [] | [] | [
"python",
"sqlalchemy"
] | stackoverflow_0000370077_python_sqlalchemy.txt |
Q:
How do I ORDER BY an arithmetic expression in SQLAlchemy?
How do I translate something like this into SQLAlchemy?
SELECT (a * b) - (x + y) / z AS result
FROM table
ORDER BY result
A:
Just pass the label in as a string argument to order_by:
result_exp = sqlalchemy.sql.expression.label('result',
((test2_table.c.a * test2_table.c.b)
- (test2_table.c.x + test2_table.c.y)
/ test2_table.c.z))
select([result_exp], from_obj=[test2_table], order_by="result")
| How do I ORDER BY an arithmetic expression in SQLAlchemy? | How do I translate something like this into SQLAlchemy?
SELECT (a * b) - (x + y) / z AS result
FROM table
ORDER BY result
| [
"Just pass the label in as a string argument to order_by:\nresult_exp = sqlalchemy.sql.expression.label('result',\n ((test2_table.c.a * test2_table.c.b)\n - (test2_table.c.x + test2_table.c.y)\n / test2_table.c.z))\nselect([result_exp], from_obj=[test2_table], order_by=\"result\")\n\n"
] | [
3
] | [] | [] | [
"python",
"sqlalchemy"
] | stackoverflow_0000370160_python_sqlalchemy.txt |
Q:
String conversion in Python
I'm using Python 2.5. The DLL I imported is created using the CLR. The DLL function is returning a string. I'm trying to apply "partition" attribute to it. I'm not able to do it. Even the partition is not working. I think "all strings returned from CLR are returned as Unicode".
A:
Could you post your error message?
Could you post what type of object you have (type(yourvar))?
Please check if you have a partition(sep) method for this object (dir(yourvar)).
Applying partition method should look like:
>>> us=u"Привет, Unicode String!"
>>> us.partition(' ')
(u'\u041f\u0440\u0438\u0432\u0435\u0442,', u' ', u'Unicode String!')
You can also try split function instead of partition:
>>> from string import split
>>> split(us,' ',1)
[u'\u041f\u0440\u0438\u0432\u0435\u0442,', u'Unicode String!']
A:
If by CLR you mean .NET CLR, try using IronPython :
IronPython is a new implementation of the Python programming language running on .NET. It supports an interactive console with fully dynamic compilation. It is well integrated with the rest of the .NET Framework and makes all .NET libraries easily available to Python programmers, while maintaining full compatibility with the Python language.
In IronPython, loading (importing) and calling a .NET dll is well documented and straight forward.
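A minimal IronPython sketch (assembly and class names are placeholders):
import clr
clr.AddReference("MyLibrary")    # the .NET DLL
from MyLibrary import MyClass

s = MyClass().GetGreeting()      # comes back as a CLR string
print s.partition(" ")           # the usual string methods work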
| String conversion in Python | I'm using Python 2.5. The DLL I imported is created using the CLR. The DLL function is returning a string. I'm trying to apply "partition" attribute to it. I'm not able to do it. Even the partition is not working. I think "all strings returned from CLR are returned as Unicode".
| [
"Could you post your error message?\nCould you post what type of object you have (type(yourvar))?\nPlease check if you have a partition(sep) method for this object (dir(yourvar)).\nApplying partition method should look like:\n>>> us=u\"ะัะธะฒะตั, Unicode String!\"\n>>> us.partition(' ')\n(u'\\u041f\\u0440\\u0438\\u0432\\u0435\\u0442,', u' ', u'Unicode String!')\n\nYou can also try split function instead of partition:\n>>> from string import split\n>>> split(us,' ',1)\n[u'\\u041f\\u0440\\u0438\\u0432\\u0435\\u0442,', u'Unicode String!']\n\n",
"If by CLR you mean .NET CLR, try using IronPython :\n\nIronPython is a new implementation of the Python programming language running on .NET. It supports an interactive console with fully dynamic compilation. It is well integrated with the rest of the .NET Framework and makes all .NET libraries easily available to Python programmers, while maintaining full compatibility with the Python language.\n\nIn IronPython, loading (importing) and calling a .NET dll is well documented and straight forward.\n"
] | [
2,
1
] | [] | [] | [
"clr",
"python"
] | stackoverflow_0000371155_clr_python.txt |
Q:
When is white space not important in Python?
When is white space not important in Python?
It seems to be ignored inside a list, for example:
for x in range(5):
list += [x, 1
,2,3,
4,5]
A:
White space is only important for indentation of statements. You have a single statement across several lines, and only the indentation of the beginning of the statement on the first line is significant. See Python: Myths about Indentation for more information.
A:
Your question is really about when Python implicitly joins lines of code.
Python will implicitly join lines that are contained within (parentheses), {braces}, and [brackets], as in your example code. You can also explicitly join lines with a backslash (\) at the end of a line.
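For example, each of these is a single logical line:
total = (1 + 2 +
         3)        # implicitly joined inside parentheses

total = 1 + 2 + \
        3          # explicitly joined with a backslash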
More here on implicit line continuation:
Mr. Gamble's answer is correct for indentation.
| When is white space not important in Python? | When is white space not important in Python?
It seems to be ignored inside a list, for example:
for x in range(5):
list += [x, 1
,2,3,
4,5]
| [
"White space is only important for indentation of statements. You have a single statement across several lines, and only the indentation of the beginning of the statement on the first line is significant. See Python: Myths about Indentation for more information.\n",
"Your question is really about when Python implicitly joins lines of code.\nPython will implicitly join lines that are contained within (parentheses), {braces}, and [brackets], as in your example code. You can also explicitly join lines with a backslash (\\) at the end of a line.\nMore here on implicit line continuation: \nMr. Gamble's answer is correct for indentation.\n"
] | [
14,
7
] | [] | [] | [
"python",
"whitespace"
] | stackoverflow_0000372007_python_whitespace.txt |
Q:
UTF in Python Regex
I'm aware that Python 3 fixes a lot of UTF issues, I am not however able to use Python 3, I am using 2.5.1
I'm trying to regex a document but the document has UTF hyphens in it – rather than -. Python can't match these and if I put them in the regex it throws a wobbly.
How can I force Python to use a UTF string or in some way match a character such as that?
Thanks for your help
A:
You have to escape the character in question (–) and put a u in front of the string literal to make it a unicode string.
So, for example, this:
re.compile("–")
becomes this:
re.compile(u"\u2013")
A:
After a quick test and visit to PEP 0264: Defining Python Source Code Encodings, I see you may need to tell Python the whole file is UTF-8 encoded by adding a comment like this to the first line.
# encoding: utf-8
Here's the test file I created and ran on Python 2.5.1 / OS X 10.5.6
# encoding: utf-8
import re
x = re.compile("–") 
print x.search("xxx–x").start()
A:
Don't use UTF-8 in a regular expression. UTF-8 is a multibyte encoding where some unicode code points are encoded by 2 or more bytes. You may match parts of your string that you didn't plan to match. Instead use unicode strings as suggested.
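In other words, decode to unicode first, then match (a minimal sketch; raw_bytes stands in for the document read from disk):
# -*- coding: utf-8 -*-
import re

data = raw_bytes.decode("utf-8")
print re.findall(u"\u2013", data)   # the en-dash is now one character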
| UTF in Python Regex | I'm aware that Python 3 fixes a lot of UTF issues, I am not however able to use Python 3, I am using 2.5.1
I'm trying to regex a document but the document has UTF hyphens in it – rather than -. Python can't match these and if I put them in the regex it throws a wobbly.
How can I force Python to use a UTF string or in some way match a character such as that?
Thanks for your help
| [
"You have to escape the character in question (โ) and put a u in front of the string literal to make it a unicode string. \nSo, for example, this:\nre.compile(\"โ\") \n\nbecomes this:\nre.compile(u\"\\u2013\")\n\n",
"After a quick test and visit to PEP 0264: Defining Python Source Code Encodings, I see you may need to tell Python the whole file is UTF-8 encoded by adding adding a comment like this to the first line.\n# encoding: utf-8\n\nHere's the test file I created and ran on Python 2.5.1 / OS X 10.5.6\n# encoding: utf-8\nimport re\nx = re.compile(\"โ\") \nprint x.search(\"xxxโx\").start()\n\n",
"Don't use UTF-8 in a regular expression. UTF-8 is a multibyte encoding where some unicode code points are encoded by 2 or more bytes. You may match parts of your string that you didn't plan to match. Instead use unicode strings as suggested.\n"
] | [
7,
4,
3
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0000372102_python_regex.txt |
Q:
What's the most pythonic way of testing that inputs are well-formed numbers
I have a function that expects real numbers (either integers or floats) as its input, and I'm trying to validate this input before doing mathematical operations on it.
My first instinct is to cast inputs as floats from within a try-except block.
try:
myinput = float(input)
except:
raise ValueError("input is not a well-formed number")
I could also call isinstance(mydata, (float, int, long) ) but the list of "all these could be numbers" seems a bit inelegant to me.
What's the most pythonic way of going about it? Is there another option I overlooked?
A:
To quote myself from How much input validation should I be doing on my python functions/methods?:
For calculations like sum, factorial etc, Python's built-in type checks will do fine. The calculations will end up calling add, mul etc for the types, and if they break, they will throw the correct exception anyway. By enforcing your own checks, you may invalidate otherwise working input.
Thus, the best option is to leave the type checking up to Python. If the calculation fails, Python's type checking will give an exception, so if you do it yourself, you just duplicate code which means more work on your behalf.
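For instance, a function that just does the arithmetic already rejects bad input with a sensible exception:
def halve(x):
    return x / 2.0

halve(7)        # 3.5
halve("seven")  # raises TypeError: unsupported operand type(s)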
A:
In Python 2.6 and 3.0, a type hierarchy of numeric abstract data types has been added, so you could perform your check as:
>>> import numbers
>>> isValid = isinstance(myinput , numbers.Real)
numbers.Real will match integral or float type, but not non-numeric types, or complex numbers (use numbers.Complex for that). It'll also match rational numbers, but presumably you'd want to include those as well. i.e.:
>>> [isinstance(x, numbers.Real) for x in [4, 4.5, "some string", 3+2j]]
[True, True, False, False]
Unfortunately, this is all in Python >=2.6, so won't be useful if you're developing for 2.5 or earlier.
A:
Maybe you can use a combination of assert and isinstance statements.
Something like the following is, I think, a more pythonic way, as you throw an exception whenever your inputs don't follow your requirements. Unfortunately I don't see any better definition of what is a valid number than yours. Maybe someone will come up with a better idea.
number = (float, int, long)
assert isinstance(mydata, number)
A:
I don't get the question.
There are two things with wildly different semantics tossed around as "alternatives".
A type conversion is one thing. It works with any object that supports __float__, which can be quite a variety of objects, few of which are actually numeric.
try:
myinput = float(input)
except:
raise ValueError("input is not a well-formed number")
# at this point, input may not be numeric at all
# it may, however, have produced a numeric value
A type test is another thing. This works only with objects that are proper instances of a specific set of classes.
isinstance(input, (float, int, long) )
# at this point, input is one of a known list of numeric types
Here's the example class that responds to float, but is still not numeric.
class MyStrangeThing( object ):
def __init__( self, aString ):
# Some fancy parsing
def __float__( self ):
# extract some numeric value from my thing
The question "real numbers (either integers or floats)" is generally irrelevant. Many things are "numeric" and can be used in a numeric operation but aren't ints or floats. For example, you may have downloaded or created a rational numbers package.
There's no point in overvalidating inputs, unless you have an algorithm that will not work with some types. These are rare, but some calculations require integers, specifically so they can do integer division and remainder operations. For those, you might want to assert that your values are ints.
| What's the most pythonic way of testing that inputs are well-formed numbers | I have a function that expects real numbers (either integers or floats) as its input, and I'm trying to validate this input before doing mathematical operations on it.
My first instinct is to cast inputs as floats from within a try-except block.
try:
myinput = float(input)
except:
raise ValueError("input is not a well-formed number")
I could also call isinstance(mydata, (float, int, long) ) but the list of "all these could be numbers" seems a bit inelegant to me.
What's the most pythonic way of going about it? Is there another option I overlooked?
| [
"To quote myself from How much input validation should I be doing on my python functions/methods?:\n\nFor calculations like sum, factorial etc, pythons built-in type checks will do fine. The calculations will end upp calling add, mul etc for the types, and if they break, they will throw the correct exception anyway. By enforcing your own checks, you may invalidate otherwise working input.\n\nThus, the best option is to leave the type checking up to Python. If the calculation fails, Python's type checking will give an exception, so if you do it yourself, you just duplicate code which means more work on your behalf.\n",
"In Python 2.6 and 3.0, a type hierarchy of numeric abstract data types has been added, so you could perform your check as:\n>>> import numbers\n>>> isValid = isinstance(myinput , numbers.Real)\n\nnumbers.Real will match integral or float type, but not non-numeric types, or complex numbers (use numbers.Complex for that). It'll also match rational numbers , but presumably you'd want to include those as well. ie:\n>>> [isinstance(x, numbers.Real) for x in [4, 4.5, \"some string\", 3+2j]]\n[True, True, False, False]\n\nUnfortunately, this is all in Python >=2.6, so won't be useful if you're developing for 2.5 or earlier.\n",
"Maybe you can use a combination of assert and isinstance statements.\nSomething like the following is I think a more pythonic way, as you throw an exception whenever your inputs don't follow your requirements. Unfortunately I don't see any better definition of what is a valid number than yours. Maybe someone will come with a better idea.\nnumber = (float, int, long)\nassert isinstance(mydata, (float, int, long))\n\n",
"I don't get the question.\nThere are two things with wildly different semantics tossed around as \"alternatives\".\nA type conversion is one thing. It works with any object that supports __float__, which can be quite a variety of objects, few of which are actually numeric.\ntry:\n myinput = float(input)\nexcept:\n raise ValueError(\"input is not a well-formed number\")\n# at this point, input may not be numeric at all\n# it may, however, have produced a numeric value\n\nA type test is another thing. This works only with objects that are proper instances of a specific set of classes.\nisinstance(input, (float, int, long) )\n# at this point, input is one of a known list of numeric types\n\nHere's the example class that responds to float, but is still not numeric.\nclass MyStrangeThing( object ):\n def __init__( self, aString ):\n # Some fancy parsing \n def __float__( self ):\n # extract some numeric value from my thing\n\nThe question \"real numbers (either integers or floats)\" is generally irrelevant. Many things are \"numeric\" and can be used in a numeric operation but aren't ints or floats. For example, you may have downloaded or created a rational numbers package.\nThere's no point in overvalidating inputs, unless you have an algorithm that will not work with some types. These are rare, but some calculations require integers, specifically so they can do integer division and remainder operations. For those, you might want to assert that your values are ints.\n"
] | [
12,
5,
2,
1
] | [] | [] | [
"idioms",
"python"
] | stackoverflow_0000371419_idioms_python.txt |
Q:
Protecting online static content
How would I only allow users authenticated via Python code to access certain files on the server?
For instance, say I have /static/book.txt which I want to protect. When a user accesses /some/path/that/validates/him, a Python script deems him worthy of accessing /static/book.txt and redirects him to that path.
How would I stop users who bypass the script and directly access /static/book.txt?
A:
Lighttpd has mod_secdownload for this. Basically, it won't serve the static content directly unless you generate a short-lived static URL for it.
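As a rough sketch of generating such a short-lived lighttpd URL from Python (the token scheme here, MD5 of secret + path + hex timestamp, is from memory; check the mod_secdownload docs for your version):
import hashlib
import time

secret = 's3cret'        # must match secdownload.secret in lighttpd.conf
rel_path = '/book.txt'   # path below secdownload.document-root
timehex = '%08x' % int(time.time())
token = hashlib.md5(secret + rel_path + timehex).hexdigest()
url = '/downloads/%s/%s%s' % (token, timehex, rel_path)  # '/downloads/' is the assumed uri-prefix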
Note that you can do similar things on S3 for static content. It's a quite useful feature.
A:
You might want to just have your Python script open the file and dump the contents as its output if the user is properly authenticated. Put the files you want to protect in a folder that is outside of the webserver root.
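A minimal sketch of that idea; the auth and response helpers are placeholders for whatever your framework provides:
import os

PROTECTED_DIR = '/srv/protected'   # somewhere outside the webserver root

def serve_book(request):
    if not is_authenticated(request):   # hypothetical auth helper
        return deny(request)            # hypothetical 403 response
    f = open(os.path.join(PROTECTED_DIR, 'book.txt'), 'rb')
    try:
        return respond(f.read(), content_type='text/plain')  # hypothetical response helper
    finally:
        f.close()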
| Protecting online static content | How would I only allow users authenticated via Python code to access certain files on the server?
For instance, say I have /static/book.txt which I want to protect. When a user accesses /some/path/that/validates/him, a Python script deems him worthy of accessing /static/book.txt and redirects him to that path.
How would I stop users who bypass the script and directly access /static/book.txt?
| [
"Lighttpd has mod_secdownload for this. Basically, it won't serve the static content directly unless you generate a short-lived static URL for it.\nNote that you can do similar things on S3 for static content. It's a quite useful feature.\n",
"You might want to just have your Python script open the file and dump the contents as its output if the user is properly authenticated. Put the files you want to protect in a folder that is outside of the webserver root.\n"
] | [
3,
3
] | [] | [] | [
"apache",
"download",
"lighttpd",
"python",
"security"
] | stackoverflow_0000372465_apache_download_lighttpd_python_security.txt |
Q:
How can I capture all exceptions from a wxPython application?
I'm writing a little debug app for a bit of kit we're developing and I'd like to roll it out to a few users to see if they can provoke any crashes. Does anyone know a way of effectively wrapping a wxPython app to catch any and all unhandled exceptions that would cause the app to crash?
Ideally I'd want to capture all output (not just errors) and log it to a file. Any unhandled exceptions ought to log to the current file and then allow the exception to pass on as per usual (i.e. the logging process ought to be transparent).
I'm sure someone must have done something along these lines before, but I've not managed to turn up anything that looks useful via google.
A:
For the exception handling, assuming your log file is opened as log:
import sys
import traceback
def excepthook(type, value, tb):
message = 'Uncaught exception:\n'
message += ''.join(traceback.format_exception(type, value, tb))
log.write(message)
sys.excepthook = excepthook
A:
For logging standard output, you can use a stdout wrapper, such as this one:
from __future__ import with_statement
class OutWrapper(object):
def __init__(self, realOutput, logFileName):
self._realOutput = realOutput
self._logFileName = logFileName
def _log(self, text):
with open(self._logFileName, 'a') as logFile:
logFile.write(text)
def write(self, text):
self._log(text)
self._realOutput.write(text)
You then have to initialize it in your main Python file (the one that runs everything):
import sys
sys.stdout = OutWrapper(sys.stdout, r'c:\temp\log.txt')
As to logging exceptions, the easiest thing to do is to wrap MainLoop method of wx.App in a try..except, then extract the exception information, save it in some way, and then re-raise the exception through raise, e.g.:
try:
app.MainLoop()
except:
exc_info = sys.exc_info()
saveExcInfo(exc_info) # this method you have to write yourself
raise
A:
You can use
sys.excepthook
(see Python docs)
and assign some custom object to it, that would catch all exceptions not caught earlier in your code. You can then log any message to any file you wish, together with traceback and do whatever you like with the exception (reraise it, display error message and allow user to continue using your app etc).
As for logging stdout - the best way for me was to write something similar to DzinX's OutWrapper.
If you're at debugging stage, consider flushing your log files after each entry. This harms performance a lot, but if you manage to cause segfault in some underlying C code, your logs won't mislead you.
A:
There are various ways. You can put a try..catch block in wxApplication::OnInit; however, that would not always work with GTK.
A nice alternative would be to override the Application::HandleEvent in your wxApplication derived class, and write a code like this:
void Application::HandleEvent(wxEvtHandler* handler, wxEventFunction func, wxEvent& event) const
{
try
{
wxAppConsole::HandleEvent(handler, func, event);
}
catch (const std::exception& e)
{
wxMessageBox(std2wx(e.what()), _("Unhandled Error"),
wxOK | wxICON_ERROR, wxGetTopLevelParent(wxGetActiveWindow()));
}
}
It's a C++ example, but you can surely translate to Python easily.
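For instance, a rough Python sketch of the same idea; since HandleEvent isn't exposed for overriding in wxPython, this version routes uncaught exceptions through sys.excepthook and shows them in a message box:
import sys
import traceback
import wx

class Application(wx.App):
    def OnInit(self):
        sys.excepthook = self._handle_exception
        return True

    def _handle_exception(self, etype, value, tb):
        message = ''.join(traceback.format_exception(etype, value, tb))
        wx.MessageBox(message, "Unhandled Error", wx.OK | wx.ICON_ERROR)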
| How can I capture all exceptions from a wxPython application? | I'm writing a little debug app for a bit of kit we're developing and I'd like to roll it out to a few users to see if they can provoke any crashes. Does anyone know a way of effectively wrapping a wxPython app to catch any and all unhandled exceptions that would cause the app to crash?
Ideally I'd want to capture all output (not just errors) and log it to a file. Any unhandled exceptions ought to log to the current file and then allow the exception to pass on as per usual (i.e. the logging process ought to be transparent).
I'm sure someone must have done something along these lines before, but I've not managed to turn up anything that looks useful via google.
| [
"For the exception handling, assuming your log file is opened as log:\nimport sys\nimport traceback\n\ndef excepthook(type, value, tb):\n message = 'Uncaught exception:\\n'\n message += ''.join(traceback.format_exception(type, value, tb))\n log.write(message)\n\nsys.excepthook = excepthook\n\n",
"For logging standard output, you can use a stdout wrapper, such as this one:\nfrom __future__ import with_statement\n\nclass OutWrapper(object):\n def __init__(self, realOutput, logFileName):\n self._realOutput = realOutput\n self._logFileName = logFileName\n\n def _log(self, text):\n with open(self._logFileName, 'a') as logFile:\n logFile.write(text)\n\n def write(self, text):\n self._log(text)\n self._realOutput.write(text)\n\nYou then have to initialize it in your main Python file (the one that runs everything):\nimport sys \nsys.stdout = OutWrapper(sys.stdout, r'c:\\temp\\log.txt')\n\nAs to logging exceptions, the easiest thing to do is to wrap MainLoop method of wx.App in a try..except, then extract the exception information, save it in some way, and then re-raise the exception through raise, e.g.:\ntry:\n app.MainLoop()\nexcept:\n exc_info = sys.exc_info()\n saveExcInfo(exc_info) # this method you have to write yourself\n raise\n\n",
"You can use\n\nsys.excepthook\n\n(see Python docs)\nand assign some custom object to it, that would catch all exceptions not caught earlier in your code. You can then log any message to any file you wish, together with traceback and do whatever you like with the exception (reraise it, display error message and allow user to continue using your app etc).\nAs for logging stdout - the best way for me was to write something similar to DzinX's OutWrapper.\nIf you're at debugging stage, consider flushing your log files after each entry. This harms performance a lot, but if you manage to cause segfault in some underlying C code, your logs won't mislead you.\n",
"There are various ways. You can put a try..catch block in the wxApplication::OnInit, however, that would not always work with Gtk. \nA nice alternative would be to override the Application::HandleEvent in your wxApplication derived class, and write a code like this:\nvoid Application::HandleEvent(wxEvtHandler* handler, wxEventFunction func, wxEvent& event) const\n{\n try\n {\n wxAppConsole::HandleEvent(handler, func, event);\n }\n catch (const std::exception& e)\n {\n wxMessageBox(std2wx(e.what()), _(\"Unhandled Error\"),\n wxOK | wxICON_ERROR, wxGetTopLevelParent(wxGetActiveWindow()));\n }\n}\n\nIt's a C++ example, but you can surely translate to Python easily.\n"
] | [
10,
6,
3,
1
] | [] | [] | [
"error_handling",
"error_reporting",
"exception",
"python",
"wxwidgets"
] | stackoverflow_0000166198_error_handling_error_reporting_exception_python_wxwidgets.txt |
Q:
What is the object oriented programming computing overhead cost?
I have a large set of data (a data cube of 250,000 X 1,000 doubles, about a 4 gig file) and I want to manipulate it using a previous set of OOP classes I have written in Python. Currently the data set is already so large that to read into my machine memory I have to at least split it in half so computing overhead is a concern. My OOP classes create new objects (in this case I will need 250,000 new objects, each object is an array of 1,000 doubles) to handle the data. What is the overhead in terms of memory and computing required in creating objects for a generic OOP language? In python? What about in C++?
Yes, I realize I could make a new class that is an array. But 1) I already have these classes finished and 2) I put each object that I create back into an array for access later anyways. The question is pedagogical
*update: I want to be efficient with time, my time and the computers. I don't want to rewrite a program I already have if I don't have to and spending time optimizing the code wastes my time, I don't care that much if I waste the computers time. I actually do have a 64bit machine with 4Gig ram. The data is an image and I need to do several filters on each pixel.*
A:
You'd have similar issues with procedural/functional programming languages. How do you store that much data in memory? A struct or array wouldn't work either.
You need to take special steps to manage this scale of data.
BTW: I wouldn't use this as a reason to pick either an OO language or not.
A:
See http://code.activestate.com/recipes/546530/
This is the approximate size of Python objects.
The OO size "penalty" is often offset by the ability to (a) simplify processing and (b) keep less stuff in memory in the first place.
There is no OO performance overhead. Zero. In C++, the class definitions are optimized out of existence, and all you have left is C. In Python -- like all dynamic languages -- the dynamic programming environment adds some run-time lookups. Mostly, these are direct hashes into dictionaries. It's slower than code where a compiler did all the resolving for you. However it's still very fast with relatively low overhead.
A bad algorithm in C can easily be slower than the right algorithm in Python.
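A quick way to check the per-object cost yourself (sys.getsizeof needs Python 2.6+, and the exact numbers vary by platform):
import sys

class Row(object):
    def __init__(self, values):
        self.values = values

values = [0.0] * 1000
row = Row(values)
print sys.getsizeof(row)           # the instance itself: a few dozen bytes
print sys.getsizeof(row.__dict__)  # its attribute dictionary
print sys.getsizeof(values)        # the 1000-element list dominates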
A:
Slightly OT: the flyweight design pattern can be useful for minimising overheads when you're manipulating large datasets. Without knowing the details of your problem I'm not sure how applicable it is, but it's worth a look...
A:
I wouldn't consider it fair to blame any shortcomings of your design on OOP. Just like any other programming platform out there, OO can be used for both good and less-than-optimal design. Rarely will this be the fault of the programming model itself.
But to try to answer your question: allocating 250,000 new objects requires some overhead in every OO language that I'm aware of, so if you can get away with streaming the data through the same instance, you're probably better off.
A:
Actual C++ OO memory overhead is one pointer (4-8 bytes, depending) per object with virtual methods. However, as mentioned in other answers, the default memory allocation overhead from dynamic allocation is likely to be significantly greater than this.
If you're doing things halfway reasonably, neither overhead is likely to be significant compared with a 1000*8-byte double array. If you're actually worried about allocation overhead, you can write your own allocator -- but, check first to see if it will actually buy you a significant improvement.
A:
Impossible to answer without knowing the shape of the data and the structure that you've designed to contain it.
A:
The "overhead" depends largely on the platform and the implementation you chose.
Now if you have a memory problem reading millions of data from a multiple Gb file, you're having an algorithm problem where the memory consumption of objects is definitely not the biggest concern, the concern yould be more about how you do fetch, process and store the data.
A:
Like the other posters have stated, I do not believe objects are going to add a significant amount of overhead to your process. Each one will need to store a pointer to its data, but the doubles themselves will be taking 99% of your program's memory.
Can you partition this data into much smaller subsets? What is the task that you are trying to accomplish? I would be interested in seeing what you need all the data in memory for. Perhaps you can just serialize it, or use something like lazy evaluation in haskell.
Please post a follow up so we can understand your problem domain better.
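In Python, the lazy-evaluation idea translates to a generator that yields one record at a time; a sketch assuming rows of 1,000 packed doubles and a stand-in filename:
import struct

def rows(path, width=1000):
    # yield one row of `width` doubles at a time instead of loading 4 GB
    f = open(path, 'rb')
    try:
        while True:
            chunk = f.read(8 * width)
            if len(chunk) < 8 * width:
                break
            yield struct.unpack('%dd' % width, chunk)
    finally:
        f.close()

for row in rows('cube.bin'):
    pass  # apply your per-pixel filters here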
A:
I don't think the question is overhead coming from OO.
If we accept C++ as an OO language and remember that the C++ compiler is a preprocessor to C (at least it used to be, when I used C++), anything done in C++ is really done in C. C has very little overhead. So it would depend on the libraries.
I think any overhead would come from interpretation, managed execution or memory management. For those that have the tools and the know-how, it would be very easy to find out which is most efficient, C++ or Python.
I can't see where C++ would add much avoidable overhead. I don't know much about Python.
A:
Compared to the size of your data set, the overhead of 250K objects is negligible.
I think you're on the wrong path; don't blame objects for that ;-)
A:
Please define "manipulate". If you really want to manipulate 4 gigs of data why do you want to manipulate it by pulling it ALL into memory right away?
I mean, who needs 4 gig of RAM anyway? :)
A:
If you have to manipulate data sets this big on a regular basis, could you just get a 64-bit machine with bucket-loads of RAM? For various reasons, I've found myself working with fairly resource hungry software (in this case SQL Server Analysis Services). Older 64-bit machines of this sort can take large amounts of RAM and feature CPUs that while not cutting-edge are still respectably fast.
I got some secondhand HP workstations and fitted them with several fast SCSI disks. In mid-2007 these machines with 4 or 8GB of RAM and 5x 10K or 15K SCSI disks cost between £1,500-£2,000 to buy. The disks were half the cost of the machines and you may not need the I/O performance so you probably won't need to spend this much. XW9300's of the sort that I bought can be purchased off eBay quite cheaply now - this posting of mine goes into various options for using eBay to get a high-spec 64-bit box on the cheap. You can get memory upgrades to 16 or 32GB for these machines off eBay for quite a small fraction of the list price of the parts.
A:
A friend of mine was a professor at MIT and a student asked him why his image analysis program was running so slow. How was it built? Every pixel was an object, and would send messages to its neighbors!
If I were you I'd try it in a throw-away program. My suspicion is, unless your classes are very carefully coded, you're going to find it spending a lot of time allocating, initializing, and de-allocating objects, and as Brian said, you might be able to spool the data through a set of re-used objects.
Edit: Excuse me. You said you are re-using objects, so that's good. In any case, when you get it running you could profile it or (if you were me) just read the call stack a few random times, and that will answer any questions about where the time goes.
A:
Since you can split the data in half and operate on it, I'm assuming that you're working on each record individually? It sounds to me like you need to change your deserialiser to read one record at a time, manipulate it, and then store out the results.
Basically you need a string parser class that has a Peek() method returning a char, knows how to skip whitespace, etc. Wrap a class around that which understands your data format, and you should be able to have it spit out one object at a time as it reads the file.
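A bare-bones sketch of such a parser, just to show the shape of it:
class Reader(object):
    def __init__(self, fileobj):
        self.f = fileobj

    def peek(self):
        # look at the next character without consuming it
        pos = self.f.tell()
        ch = self.f.read(1)
        self.f.seek(pos)
        return ch

    def skip_whitespace(self):
        while self.peek().isspace():
            self.f.read(1)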
| What is the object oriented programming computing overhead cost? | I have a large set of data (a data cube of 250,000 X 1,000 doubles, about a 4 gig file) and I want to manipulate it using a previous set of OOP classes I have written in Python. Currently the data set is already so large that to read into my machine memory I have to at least split it in half so computing overhead is a concern. My OOP classes create new objects (in this case I will need 250,000 new objects, each object is an array of 1,000 doubles) to handle the data. What is the overhead in terms of memory and computing required in creating objects for a generic OOP language? In python? What about in C++?
Yes, I realize I could make a new class that is an array. But 1) I already have these classes finished and 2) I put each object that I create back into an array for access later anyways. The question is pedagogical
*update: I want to be efficient with time, my time and the computers. I don't want to rewrite a program I already have if I don't have to and spending time optimizing the code wastes my time, I don't care that much if I waste the computers time. I actually do have a 64bit machine with 4Gig ram. The data is an image and I need to do several filters on each pixel.*
| [
"You'd have similar issues with procedural/functional programming languages. How do you store that much data in memory? A struct or array wouldn't work either. \nYou need to take special steps to manage this scale of data.\nBTW: I wouldn't use this as a reason to pick either an OO language or not. \n",
"See http://code.activestate.com/recipes/546530/\nThis is the approximate size of Python objects.\nThe OO size \"penalty\" is often offset by the ability to (a) simplify processing and (b) keep less stuff in memory in the first place.\nThere is no OO performance overhead. Zero. In C++, the class definitions are optimized out of existence, and all you have left is C. In Python -- like all dynamic languages -- the dynamic programming environment adds some run-time lookups. Mostly, these are direct hashes into dictionaries. It's slower than code where a compiler did all the resolving for you. However it's still very fast with relatively low overhead.\nA bad algorithm in C can easily be slower than the right algorithm in Python.\n",
"Slightly OT: the flyweight design pattern can be useful for minimising overheads when you're manipulating large datasets. Without knowing the details of your problem I'm not sure how applicable it is, but it's worth a look...\n",
"I wouldn't consider it fair to blame any shortcomings of your design to OOP. Just like any other programming platform out there OO can be used for both good and less than optimal design. Rarely will this be the fault of the programming model itself. \nBut to try to answer your question: Allocating 250000 new object requires some overhead in all OO language that I'm aware of, so if you can get away with streaming the data through the same instance, you're probably better off. \n",
"Actual C++ OO memory overhead is one pointer (4-8 bytes, depending) per object with virtual methods. However, as mentioned in other answers, the default memory allocation overhead from dynamic allocation is likely to be significantly greater than this.\nIf you're doing things halfway reasonably, neither overhead is likely to be significant compared with an 1000*8-byte double array. If you're actually worried about allocation overhead, you can write your own allocator -- but, check first to see if it will actually buy you a significant improvement.\n",
"Impossible to answer without knowing the shape of the data and the structure that you've designed to contain it.\n",
"The \"overhead\" depends largely on the platform and the implementation you chose.\nNow if you have a memory problem reading millions of data from a multiple Gb file, you're having an algorithm problem where the memory consumption of objects is definitely not the biggest concern, the concern yould be more about how you do fetch, process and store the data.\n",
"Like the other posters have stated. I do not believe Objects are going to lend a significant amount of overhead to your process. It will need to store a pointer to the object but the rest of the 'doubles' will be taking 99% of your program's memory.\nCan you partition this data into much smaller subsets? What is the task that you are trying to accomplish? I would be interested in seeing what you need all the data in memory for. Perhaps you can just serialize it, or use something like lazy evaluation in haskell.\nPlease post a follow up so we can understand your problem domain better.\n",
"I don't think the question is overhead coming from OO.\nIf we accept C++ as an OO language and remember that the C++ compiler is a preprocessor to C (at least it used to be, when I used C++), anything done in C++ is really done in C. C has very little overhead. So it would depend on the libraries.\nI think any overhead would come from interpretation, managed execution or memory management. For those that have the tools and the know-how, it would be very easy to find out which is most efficient, C++ or Python.\nI can't see where C++ would add much avoidable overhead. I don't know much about Python.\n",
"compared to the size of your data set, the overhead of 250K objects is negligible\ni think you're on the wrong path; don't blame objects for that ;-)\n",
"Please define \"manipulate\". If you really want to manipulate 4 gigs of data why do you want to manipulate it by pulling it ALL into memory right away?\nI mean, who needs 4 gig of RAM anyway? :)\n",
"If you have to manipulate data sets this big on a regular basis, could you just get a 64-bit machine with bucket-loads of RAM? For various reasons, I've found myself working with fairly resource hungry software (in this case SQL Server Analysis Services). Older 64-bit machines of this sort can take large amounts of RAM and feature CPUs that while not cutting-edge are still respectably fast.\nI got some secondhand HP workstations and fitted them with several fast SCSI disks. In mid-2007 these machines with 4 or 8GB of RAM and 5x 10K or 15K SCSI disks cost between ยฃ1,500-ยฃ2,000 to buy. The disks were half the cost of the machines and you may not need the I/O performance so you probably won't need to spend this much. XW9300's of the sort that I bought can be purchased of ebay quite cheaply now - this posting of mine goes into various options for using ebay to get a high-spec 64-bit box on the cheap. You can get memory upgrades to 16 or 32GB for these machines off ebay for quite a small fraction of the list price of the parts.\n",
"A friend of mine was a professor at MIT and a student asked him why his image analysis program was running so slow. How was it built? Every pixel was an object, and would send messages to its neighbors!\nIf I were you I'd try it in a throw-away program. My suspicion is, unless your classes are very carefully coded, you're going to find it spending a lot of time allocating, initializing, and de-allocating objects, and as Brian said, you might be able to spool the data through a set of re-used objects.\nEdit: Excuse me. You said you are re-using objects, so that's good. In any case, when you get it running you could profile it or (if you were me) just read the call stack a few random times, and that will answer any questions about where the time goes.\n",
"Since you can split the data in half and operate on it, I'm assuming that you're working on each record individually? It sounds to me like you need to change your deserialiser to read one record at a time, manipulate it, and then store out the results.\nBasically you need a string parser class that does a Peek() which returns a char, knows how to skip whitespace, etc. Wrap a class around that that understands your data format, and you should be able to have it spit out an object at a time as it reads the file.\n"
] | [
3,
3,
3,
2,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"data_analysis",
"oop",
"python"
] | stackoverflow_0000372511_data_analysis_oop_python.txt |
Q:
What is the best way to serialize a ModelForm object in Django?
I am using Django and the Google Web Toolkit (GWT) for my current project. I would like to pass a ModelForm instance to GWT via an Http response so that I can "chop" it up and render it as I please. My goal is to keep the form in sync with changes to my models.py file, yet increase control I have over the look of the form. However, the django classes for serialization, serializers and simplejson, cannot serialize a ModelForm. Neither can cPickle. What are my alternatives?
A:
If you were using pure Django, you'd pass the form to your template, and could then call individual fields on the form for more precise rendering, rather than using ModelForm.to_table. You can use the following to iterate over each field and render it exactly how you want:
{% for field in form %}
<div class="form-field">{{ field }}</div>
{% endfor %}
This also affords you the ability to do conditional checks using {% if %} blocks inside the loop should you want to exclude certain fields.
A:
If your problem is just to serialize a ModelForm to JSON, just write your own simplejson serializer subclass.
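A minimal sketch of such a subclass; it assumes iterating the form yields bound fields and that rendering each one to its widget HTML is acceptable:
from django.utils import simplejson

class ModelFormEncoder(simplejson.JSONEncoder):
    def default(self, obj):
        try:
            # a bound field renders to its widget's HTML via unicode()
            return dict((field.name, unicode(field)) for field in obj)
        except TypeError:
            return simplejson.JSONEncoder.default(self, obj)

payload = simplejson.dumps(form, cls=ModelFormEncoder)  # `form` is your ModelForm instance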
| What is the best way to serialize a ModelForm object in Django? | I am using Django and the Google Web Toolkit (GWT) for my current project. I would like to pass a ModelForm instance to GWT via an Http response so that I can "chop" it up and render it as I please. My goal is to keep the form in sync with changes to my models.py file, yet increase control I have over the look of the form. However, the django classes for serialization, serializers and simplejson, cannot serialize a ModelForm. Neither can cPickle. What are my alternatives?
| [
"If you were using pure Django, you'd pass the form to your template, and could then call individual fields on the form for more precise rendering, rather than using ModelForm.to_table. You can use the following to iterate over each field and render it exactly how you want:\n{% for field in form.fields %}\n <div class=\"form-field\">{{ field }}</div>\n{% endfor %}\n\nThis also affords you the ability to do conditional checks using {% if %} blocks inside the loop should you want to exclude certain fields.\n",
"If your problem is just to serialze a ModelForm to json, just write your own simplejson serializer subclass.\n"
] | [
2,
0
] | [] | [] | [
"django",
"json",
"python",
"serialization",
"xml"
] | stackoverflow_0000369230_django_json_python_serialization_xml.txt |
Q:
List all the classes that currently exist
I'm creating a simple API that creates typed classes based on JSON data that has a mandatory 'type' field defined in it. It uses this string to define a new type, add the fields in the JSON object, instantiate it, and then populate the fields on the instance.
What I want to be able to do is allow for these types to be optionally pre-defined in whatever application is using my module. This is so methods and other application-specific attributes not found in the JSON object can be added. I'd like to have my module check if a type already exists, and if it does, use that instead of dynamically creating the type. It would still add attributes from the JSON object, but it would be recycling the existing type.
My JSON data is:
{
"type": "Person",
"firstName": "John",
"lastName": "Smith",
"address": {
"streetAddress": "21 2nd Street",
"city": "New York",
"state": "NY",
"postalCode": 10021
},
"phoneNumbers": [
"212 555-1234",
"646 555-4567"
]
}
My code so far (assume json_object is a dict at this point):
if not json_object.has_key('type'):
raise TypeError('JSON stream is missing a "type" attribute.')
##### THIS IS WHERE I WANT TO CHECK IF THE TYPE BY THAT NAME EXISTS ALREADY ####
# create a type definition and add attributes to it
definition = type(json_object['type'], (object,), {})
for key in json_object.keys():
setattr(definition, key, None)
# instantiate and populate a test object
tester = definition()
for key, value in json_object.iteritems():
setattr(tester, key, value)
A:
If you want to reuse types that you created earlier, it's best to cache them yourself:
json_types = {}
def get_json_type(name):
try:
return json_types[name]
except KeyError:
        json_types[name] = t = type(name, (object,), {})
# any further initialization of t here
return t
definition = get_json_type(json_object['type'])
tester = definition()
# or: tester = get_json_type(json_object['type'])()
If you want to have them added to the module namespace, do
json_types = globals()
instead.
A:
You can use dir() to get a list of all the names of all objects in the current environment, and you can use globals() to a get a dictionary mapping those names to their values. Thus, to get just the list of objects which are classes, you can do:
import types
listOfClasses = [cls for cls in globals().values() if isinstance(cls, (type, types.ClassType))]

(Checking against both type and types.ClassType catches new-style classes, which is what the type() built-in creates, as well as old-style ones.)
A:
You can use dir():
Python 2.5.2 (r252:60911, Oct 5 2008, 19:29:17)
[GCC 4.3.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> dir()
['__builtins__', '__doc__', '__name__']
>>> class Foo:
... pass
...
>>> dir()
['Foo', '__builtins__', '__doc__', '__name__']
>>> type(eval('Foo'))
<type 'classobj'>
| List all the classes that currently exist | I'm creating a simple API that creates typed classes based on JSON data that has a mandatory 'type' field defined in it. It uses this string to define a new type, add the fields in the JSON object, instantiate it, and then populate the fields on the instance.
What I want to be able to do is allow for these types to be optionally pre-defined in whatever application is using my module. This is so methods and other application-specific attributes not found in the JSON object can be added. I'd like to have my module check if a type already exists, and if it does, use that instead of dynamically creating the type. It would still add attributes from the JSON object, but it would be recycling the existing type.
My JSON data is:
{
"type": "Person",
"firstName": "John",
"lastName": "Smith",
"address": {
"streetAddress": "21 2nd Street",
"city": "New York",
"state": "NY",
"postalCode": 10021
},
"phoneNumbers": [
"212 555-1234",
"646 555-4567"
]
}
My code so far (assume json_object is a dict at this point):
if not json_object.has_key('type'):
raise TypeError('JSON stream is missing a "type" attribute.')
##### THIS IS WHERE I WANT TO CHECK IF THE TYPE BY THAT NAME EXISTS ALREADY ####
# create a type definition and add attributes to it
definition = type(json_object['type'], (object,), {})
for key in json_object.keys():
setattr(definition, key, None)
# instantiate and populate a test object
tester = definition()
for key, value in json_object.iteritems():
setattr(tester, key, value)
| [
"If you want to reuse types that you created earlier, it's best to cache them yourself:\njson_types = {}\ndef get_json_type(name):\n try:\n return json_types[name]\n except KeyError:\n json_types[name] = t = type(json_object['type'], (object,), {})\n # any further initialization of t here\n return t\n\ndefinition = get_json_type(json_object['type'])\ntester = definition()\n# or: tester = get_json_type(json_object['type'])()\n\nIf you want to have them added to the module namespace, do\njson_types = globals()\n\ninstead.\n",
"You can use dir() to get a list of all the names of all objects in the current environment, and you can use globals() to a get a dictionary mapping those names to their values. Thus, to get just the list of objects which are classes, you can do:\nimport types\nlistOfClasses = [cls for cls in globals().values() if type(cls) == types.ClassType]\n\n",
"You can use dir():\nPython 2.5.2 (r252:60911, Oct 5 2008, 19:29:17)\n[GCC 4.3.2] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> dir()\n['__builtins__', '__doc__', '__name__']\n>>> class Foo:\n... pass\n...\n>>> dir()\n['Foo', '__builtins__', '__doc__', '__name__']\n>>> type(eval('Foo'))\n<type 'classobj'>\n\n"
] | [
3,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0000373067_python.txt |
Q:
How to organize python test in a way that I can run all tests in a single command?
Currently my code is organized in the following tree structure:
src/
module1.py
module2.py
test_module1.py
test_module2.py
subpackage1/
__init__.py
moduleA.py
moduleB.py
test_moduleA.py
test_moduleB.py
Where the module*.py files contains the source code and the test_module*.py contains the TestCases for the relevant module.
With the following comands I can run the tests contained in a single file, for example:
$ cd src
$ nosetests test_filesystem.py
..................
----------------------------------------------------------------------
Ran 18 tests in 0.390s
OK
How can I run all tests? I tried with nosetests -m 'test_.*' but it doesn't work.
$cd src
$ nosetests -m 'test_.*'
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Thanks
A:
Whether you separate or mix tests and modules is probably a matter of taste, although I would strongly advocate for keeping them apart (setup reasons, code stats etc).
When you're using nosetests, make sure that all directories with tests are real packages:
src/
module1.py
module2.py
subpackage1/
__init__.py
moduleA.py
moduleB.py
tests/
__init__.py
test_module1.py
test_module2.py
subpackage1/
__init__.py
test_moduleA.py
test_moduleB.py
This way, you can just run nosetests in the toplevel directory and all tests will be found. You need to make sure that src/ is on the PYTHONPATH, however, otherwise all the tests will fail due to missing imports.
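For example, from the project root on a POSIX shell:
$ PYTHONPATH=src nosetests tests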
A:
If they all begin with test then just running nosetests should work. Nose automatically searches for any files beginning with 'test'.
A:
I don't know about nosetests, but you can achieve that with the standard unittest module. You just need to create a test_all.py file under your root directory, then import all your test modules. In your case:
import unittest
import test_module1
import test_module2
import subpackage1.test_moduleA
import subpackage1.test_moduleB

if __name__ == "__main__":
    allsuites = unittest.TestSuite([test_module1.suite(),
                                    test_module2.suite(),
                                    subpackage1.test_moduleA.suite(),
                                    subpackage1.test_moduleB.suite()])
    unittest.TextTestRunner(verbosity=2).run(allsuites)
each module should provide the following function (example with a module with two unit tests: Class1 and Class2):
def suite():
""" This defines all the tests of a module"""
suite = unittest.TestSuite()
suite.addTest(unittest.makeSuite(Class1))
suite.addTest(unittest.makeSuite(Class2))
return suite
if __name__ == '__main__':
unittest.TextTestRunner(verbosity=2).run(suite())
A:
This is probably a hotly-contested topic, but I would suggest that you separate your tests out from your modules. Set up something like this...
Use setup.py to install these into the system path (or you may be able to modify environment variables to avoid the need for an "install" step).
foo/
module1.py
module2.py
subpackage1/
__init__.py
moduleA.py
moduleB.py
Now any python script anywhere can access those modules, instead of depending on finding them in the local directory. Put your tests all off to the side like this:
tests/
test_module1.py
test_module2.py
    test_subpackage1_moduleA.py
    test_subpackage1_moduleB.py
I'm not sure about your nosetests command, but now that your tests are all in the same directory, it becomes much easier to write a wrapper script that simply imports all of the other tests in the same directory. Or if that's not possible, you can at least get away with a simple bash loop that gets your test files one by one:
#!/bin/bash
cd tests/
for TEST_SCRIPT in test_*.py ; do
    nosetests "$TEST_SCRIPT"
done
A:
I'll give a Testoob answer.
Running tests in a single file is like Nose:
testoob test_foo.py
To run tests in many files you can create suites with the Testoob collectors (in each subpackage)
# src/subpackage?/__init__.py
def suite():
import testoob
return testoob.collecting.collect_from_files("test_*.py")
and
# src/alltests.py
test_modules = [
'subpackage1.suite',
'subpackage2.suite',
]
def suite():
import unittest
return unittest.TestLoader().loadTestsFromNames(test_modules)
if __name__ == "__main__":
import testoob
testoob.main(defaultTest="suite")
I haven't tried your specific scenario.
| How to organize python test in a way that I can run all tests in a single command? | Currently my code is organized in the following tree structure:
src/
module1.py
module2.py
test_module1.py
test_module2.py
subpackage1/
__init__.py
moduleA.py
moduleB.py
test_moduleA.py
test_moduleB.py
Where the module*.py files contains the source code and the test_module*.py contains the TestCases for the relevant module.
With the following comands I can run the tests contained in a single file, for example:
$ cd src
$ nosetests test_filesystem.py
..................
----------------------------------------------------------------------
Ran 18 tests in 0.390s
OK
How can I run all tests? I tried with nosetests -m 'test_.*' but it doesn't work.
$cd src
$ nosetests -m 'test_.*'
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Thanks
| [
"Whether you seperate or mix tests and modules is probably a matter of taste, although I would strongly advocate for keeping them apart (setup reasons, code stats etc).\nWhen you're using nosetests, make sure that all directories with tests are real packages:\nsrc/\n module1.py\n module2.py\n subpackage1/\n __init__.py\n moduleA.py\n moduleB.py\ntests/\n __init__.py\n test_module1.py\n test_module2.py\n subpackage1/\n __init__.py\n test_moduleA.py\n test_moduleB.py\n\nThis way, you can just run nosetests in the toplevel directory and all tests will be found. You need to make sure that src/ is on the PYTHONPATH, however, otherwise all the tests will fail due to missing imports.\n",
"If they all begin with test then just nosetest should work. Nose automatically searches for any files beginning with 'test'.\n",
"I don't know about nosetests, but you can achieve that with the standard unittest module. You just need to create a test_all.py file under your root directory, then import all your test modules. In your case:\nimport unittest\nimport test_module1\nimport test_module2\nimport subpackage1\nif __name__ == \"__main__\":\n allsuites = unittest.TestSuite([test_module1.suite(), \\\n test_module2.suite(), \\\n subpackage1.test_moduleA.suite(), \\\n subpackage1.test_moduleB.suite()])\n\neach module should provide the following function (example with a module with two unit tests: Class1 and Class2):\ndef suite():\n \"\"\" This defines all the tests of a module\"\"\"\n suite = unittest.TestSuite()\n suite.addTest(unittest.makeSuite(Class1))\n suite.addTest(unittest.makeSuite(Class2))\n return suite\nif __name__ == '__main__':\n unittest.TextTestRunner(verbosity=2).run(suite())\n\n",
"This is probably a hotly-contested topic, but I would suggest that you separate your tests out from your modules. Set up something like this...\nUse setup.py to install these into the system path (or you may be able to modify environment variables to avoid the need for an \"install\" step).\nfoo/\n module1.py\n module2.py\n subpackage1/\n __init__.py\n moduleA.py\n moduleB.py\n\nNow any python script anywhere can access those modules, instead of depending on finding them in the local directory. Put your tests all off to the side like this:\ntests/\n test_module1.py\n test_module2.py\n test_subpackage1_moduleA,py\n test_subpackage2_moduleB.py\n\nI'm not sure about your nosetests command, but now that your tests are all in the same directory, it becomes much easier to write a wrapper script that simply imports all of the other tests in the same directory. Or if that's not possible, you can at least get away with a simple bash loop that gets your test files one by one:\n#!/bin/bash\ncd tests/\nfor TEST_SCRIPT in test_*.py ; do\n nosetests -m $TEST_SCRIPT\ndone\n\n",
"I'll give a Testoob answer.\nRunning tests in a single file is like Nose:\ntestoob test_foo.py\n\nTo run tests in many files you can create suites with the Testoob collectors (in each subpackage)\n# src/subpackage?/__init__.py\ndef suite():\n import testoob\n return testoob.collecting.collect_from_files(\"test_*.py\")\n\nand\n# src/alltests.py\ntest_modules = [\n 'subpackage1.suite',\n 'subpackage2.suite',\n]\n\ndef suite():\n import unittest\n return unittest.TestLoader().loadTestsFromNames(test_modules)\n\nif __name__ == \"__main__\":\n import testoob\n testoob.main(defaultTest=\"suite\")\n\nI haven't tried your specific scenario.\n"
] | [
16,
8,
6,
2,
0
] | [] | [] | [
"python",
"python_nose",
"unit_testing"
] | stackoverflow_0000366720_python_python_nose_unit_testing.txt |
Q:
Secure, sandboxable user exposed programming language / environment?
Beyond offering an API for my website, I'd like to offer users the ability to write simple scripts that would run on my servers. The scripts would have access to objects owned by the user and be able to manipulate, modify, and otherwise process their data.
I'd like to be able to limit resources taken by these scripts at a fine level (eg. max execution time should be 100ms). I'd also like to ensure a secure sandbox such that each user will have access to only a limited set of data and resources, and be prevented from accessing disk, other people's data, etc.
Generally the scripts will be very simple (eg. create the sum or average of the values that match certain criteria), and they'll often be used in templates (eg. fill in the value of this cell or html element with the average or sum).
Ideally I'd like to use a sandboxed subset of a well know, commonly available programming language so it's easy for users to pick up. The backend is written in Python, so a Python based language could have benefits, but I'm open to other languages and technologies. Javascript is also attractive due to its simple nature and common availability.
The languages should support creation of DSLs and libraries.
The target audience is a general user base for a web based application, not necessarily very technical. In other words, it's not targeted at a base with particular knowledge of any particular programming language. My expectation is a subset of users will create scripts that will be used by the larger majority.
Any ideas or recommendations for the language and technology? Any examples of others trying this and the successes and failures they encountered?
A:
I use Lua for this, but it's directed at a Lua-capable community. So my answer would be: who are your users?
If your users are internal, like in my case, and proficient with Python, use Python. However, if this is something for the World Wide Web, I'd probably choose JavaScript, because it's the lingua franca (every developer knows it, and it's easy to pick up). As for an engine... well, V8 would be nice, but it's not 100% thread safe, in that you can't run several engines within the same process in a lock-free manner, as you can with SpiderMonkey. So you might want to use that. Also, since JavaScript is sandboxed by default, you won't have to worry about implementing much on your side.
| Secure, sandboxable user exposed programming language / environment? | Beyond offering an API for my website, I'd like to offer users the ability to write simple scripts that would run on my servers . The scripts would have access to objects owned by the user and be able to manipulate, modify, and otherwise process their data.
I'd like to be able to limit resources taken by these scripts at a fine level (eg. max execution time should be 100ms). I'd also like to ensure a secure sandbox such that each user will have access to only a limited set of data and resources, and be prevented from accessing disk, other people's data, etc.
Generally the scripts will be very simple (eg. create the sum or average of the values that match certain criteria), and they'll often be used in templates (eg. fill in the value of this cell or html element with the average or sum).
Ideally I'd like to use a sandboxed subset of a well know, commonly available programming language so it's easy for users to pick up. The backend is written in Python, so a Python based language could have benefits, but I'm open to other languages and technologies. Javascript is also attractive due to its simple nature and common availability.
The languages should support creation of DSLs and libraries.
The target audience is a general user base for a web based application, not necessarily very technical. In other words, it's not targeted at a base with particular knowledge of any particular programming language. My expectation is a subset of users will create scripts that will be used by the larger majority.
Any ideas or recommendations for the language and technology? Any examples of others trying this and the successes and failures they encountered?
| [
"I use Lua for this, but it's directed at a Lua capable community. So my answer would be who are your users?\nIf your users are internal, like my case, and proficient with Python use Python. However if this is something for the world wide web, I'd probably choose javascript, because its the lingua franca, (every developer knows it, and its easy to pickup). As for an Engine... well V8 would be nice, but its not 100% thread safe, in that you can't run several engine within the same process in a lock free manner, as you can with SpiderMonkey. So You might want to use that. Also since javascript is sandboxed by default you won't have to worry about implementing much on your side.\n"
] | [
2
] | [] | [] | [
"javascript",
"python",
"sandbox"
] | stackoverflow_0000373406_javascript_python_sandbox.txt |
Q:
Getting TRAC to run on IIS7
Im trying to get Trac upp and running on my IIS/w2008 server using this FAQ: TracOnWindowsIisAjp
Everything upp until "3. Install Tomcat AJP Connector for IIS" works ok.
I then define my directories as : C:\wwwroot\trac.evju.biz\AJP\, in the bin catalog I place the dll file, and 3 config files with this content:
isapi_redirect-1.2.26.properties
# Configuration file for the ISAPI Redirector
# The path to the ISAPI Redirector Extension, relative to the website
# This must be in a virtual directory with execute privileges
extension_uri=/AJP/isapi_redirect-1.2.26.dll
# Full path to the log file for the ISAPI Redirector
log_file=C:\wwwroot\trac.evju.biz\AJP\logs\isapi_redirect.log
# Log level (debug, info, warn, error or trace)
log_level=info
# Full path to the workers.properties file
worker_file=C:\wwwroot\trac.evju.biz\AJP\conf\workers.properties
# Full path to the uriworkermap.properties file
worker_mount_file=C:\wwwroot\trac.evju.biz\AJP\conf\uriworkermap.properties
workers.properties
# Define 1 real worker
worker.list=trac
# Set properties for trac (ajp13)
worker.trac.type=ajp13
worker.trac.host=localhost
worker.trac.port=8009
worker.trac.socket_keepalive=0
uriworkermap.properties
/C:\wwwroot\trac.evju.biz\irm\*=trac
Then I run into problems :
Define a virtual directory named AJP-Connector, pointing to your bin subdirectory, with permissions to execute executables (not only scripts).
I defined a virtual directory named AJP, pointing it to the bin subdirectory, but I can't find any way of giving it execute permissions.
And the rest of the instructions obviously don't quite apply to IIS7
Allow execution of the DLL as Web
Service Extension
In the IIS Manager, open Web Service Extensions.
Define a new Web Service Extension called AJP-Connector (or whatever you want).
Add C:\AJP-Connector\bin\isapi_redirect-1.2.26.dll to the required files (replace "C:\AJP-Connector" with your actual directory).
Set extension status to Allowed.
I tried adding the dll as a ISAPI extension, this resulted in a web.config file in the bin catalog with the following content:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<system.webServer>
<handlers accessPolicy="Read, Execute, Script">
<remove name="ISAPI-dll" />
<add name="AJP" path="*.ajp" verb="*" modules="IsapiModule" scriptProcessor="C:\wwwroot\trac.evju.biz\AJP\bin\isapi_redirect-1.2.26.dll" resourceType="Unspecified" requireAccess="Execute" />
</handlers>
</system.webServer>
</configuration>
Any help appreciated.
A:
Just stumbled upon this question from an unrelated Google search. Odd how that happens...
IIS7 supports FastCGI natively, I'd highly recommend using that over AJP. If you're still watching this question leave a comment and I'll follow up with details of how to install.
A:
@Jeff Mc - I'm actually looking at setting trac on IIS7 and stumbled across this thread, just as you did. I would love to know the details on using FastCGI as well as any of the other gotchas with trac on IIS7.
| Getting TRAC to run on IIS7 | Im trying to get Trac upp and running on my IIS/w2008 server using this FAQ: TracOnWindowsIisAjp
Everything upp until "3. Install Tomcat AJP Connector for IIS" works ok.
I then define my directories as : C:\wwwroot\trac.evju.biz\AJP\, in the bin catalog I place the dll file, and 3 config files with this content:
isapi_redirect-1.2.26.properties
# Configuration file for the ISAPI Redirector
# The path to the ISAPI Redirector Extension, relative to the website
# This must be in a virtual directory with execute privileges
extension_uri=/AJP/isapi_redirect-1.2.26.dll
# Full path to the log file for the ISAPI Redirector
log_file=C:\wwwroot\trac.evju.biz\AJP\logs\isapi_redirect.log
# Log level (debug, info, warn, error or trace)
log_level=info
# Full path to the workers.properties file
worker_file=C:\wwwroot\trac.evju.biz\AJP\conf\workers.properties
# Full path to the uriworkermap.properties file
worker_mount_file=C:\wwwroot\trac.evju.biz\AJP\conf\uriworkermap.properties
workers.properties
# Define 1 real worker
worker.list=trac
# Set properties for trac (ajp13)
worker.trac.type=ajp13
worker.trac.host=localhost
worker.trac.port=8009
worker.trac.socket_keepalive=0
uriworkermap.properties
/C:\wwwroot\trac.evju.biz\irm\*=trac
Then I run into problems :
Define a virtual directory named AJP-Connector, pointing to your bin subdirectory, with permissions to execute executables (not only scripts).
I defined a virtual directory named AJP, pointing it to the bin subdirectory, but I can't find any way of give it execute permissions
And the rest of the instructions obviously don't quite apply to IIS7
Allow execution of the DLL as Web
Service Extension
In the IIS Manager, open Web Service Extensions.
Define a new Web Service Extension called AJP-Connector (or whatever you want).
Add C:\AJP-Connector\bin\isapi_redirect-1.2.26.dll to the required files (replace "C:\AJP-Connector" with your actual directory).
Set extension status to Allowed.
I tried adding the dll as a ISAPI extension, this resulted in a web.config file in the bin catalog with the following content:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<system.webServer>
<handlers accessPolicy="Read, Execute, Script">
<remove name="ISAPI-dll" />
<add name="AJP" path="*.ajp" verb="*" modules="IsapiModule" scriptProcessor="C:\wwwroot\trac.evju.biz\AJP\bin\isapi_redirect-1.2.26.dll" resourceType="Unspecified" requireAccess="Execute" />
</handlers>
</system.webServer>
</configuration>
Any help appreciated.
| [
"Just stumbled upon this question from an unrelated Google search. Odd how that happens...\nIIS7 supports FastCGI natively, I'd highly recommend using that over AJP. If you're still watching this question leave a comment and I'll follow up with details of how to install.\n",
"@Jeff Mc - I'm actually looking at setting trac on IIS7 and stumbled across this thread, just as you did. I would love to know the details on using FastCGI as well as any of the other gotchas with trac on IIS7.\n"
] | [
0,
0
] | [] | [] | [
"iis",
"python",
"trac"
] | stackoverflow_0000304567_iis_python_trac.txt |
Q:
conversion of unicode string in python
I need to convert Unicode strings in Python to other types such as signed and unsigned integers of 8, 16, 32 and 64 bits, plus float, double and string.
Any help is appreciated.
A:
Use int() to convert the string to an integer. Python doesn't have different fixed-width integers, so you'll just get one type of thing out.
Then use struct to pack the integer into a fixed width:
res = struct.pack("=B",i) ## uint8_t
res = struct.pack("=b",i) ## int8_t
res = struct.pack("=H",i) ## uint16_t
res = struct.pack("=h",i) ## int16_t
res = struct.pack("=I",i) ## uint32_t
res = struct.pack("=i",i) ## int32_t
res = struct.pack("=Q",i) ## uint64_t
res = struct.pack("=q",i) ## int64_t
res = struct.pack("=f",i) ## float
res = struct.pack("=d",i) ## double
struct produces a byte-string containing the number in binary.
EDIT:
From the comments it sounds like you just want to convert the string (of decimal digits) into an integer. Just use int() for that, however you won't get all the complicated overflow/underflow semantics of the specified types. You can't reproduce that in python, at least not without writing a whole lot of code.
I think if you want any more help you'll have to be more precise about what you want to achieve.
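If, on the other hand, the wrap-around behaviour of those fixed-width C types is what's wanted, masking the int() result is a minimal sketch of it:
def to_uint8(s):
    return int(s) & 0xFF  # emulate uint8_t wrap-around

def to_int8(s):
    v = int(s) & 0xFF
    return v - 0x100 if v >= 0x80 else v  # two's-complement sign

print to_uint8(u'300')  # 44
print to_int8(u'-1')    # -1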
| conversion of unicode string in python | I need to convert unicode strings in Python to other types such as unsigned and signed int 8 bits,unsigned and signed int 16 bits,unsigned and signed int 32 bits,unsigned and signed int 64 bits,double,float,string,unsigned and signed 8 bit,unsigned and signed 16 bit, unsigned and signed 32 bit,unsigned and signed 64 bit.
I need help from u people.
| [
"use int() to convert the string to an integer. Python doesn't have different fixed-width integers so you'll just get one type of thing out.\nThen use struct to pack the integer into a fixed width:\nres = struct.pack(\"=B\",i) ## uint8_t\nres = struct.pack(\"=b\",i) ## int8_t\n\nres = struct.pack(\"=H\",i) ## uint16_t\nres = struct.pack(\"=h\",i) ## int16_t\n\nres = struct.pack(\"=I\",i) ## uint32_t\nres = struct.pack(\"=i\",i) ## int32_t\n\nres = struct.pack(\"=Q\",i) ## uint64_t\nres = struct.pack(\"=q\",i) ## int64_t\n\nres = struct.pack(\"=f\",i) ## float\nres = struct.pack(\"=d\",i) ## double\n\nstruct produces a byte-string containing the number in binary.\nEDIT:\nFrom the comments it sounds like you just want to convert the string (of decimal digits) into an integer. Just use int() for that, however you won't get all the complicated overflow/underflow semantics of the specified types. You can't reproduce that in python, at least not without writing a whole lot of code.\nI think if you want any more help you'll have to be more precise about what you want to achieve. \n"
] | [
11
] | [] | [] | [
"python",
"signed",
"string",
"unicode",
"unsigned"
] | stackoverflow_0000374318_python_signed_string_unicode_unsigned.txt |
Q:
XPath in XmlStream.addObserver doesn't work the way it should
What I want to do is to react only on specified root elements. For example, if user sends XmlStream that looks like:
<auth>
<login>user</login>
<pass>dupa.8</pass>
</auth>
My method ._auth should be executed. I've done it with addObserver method called inside connectionMade method.
self.addObserver("/auth", self._auth)
As far as I know XPath, if I write "/auth" it means that I want my root element to be "auth", so the following message:
<longtagislong>
<auth>...</auth>
</longtagislong>
... should be rejected, because auth isn't root.
Twisted, however, doesn't work the way I thought it should: my _auth method is executed when the second message appears (with the auth element inside the tree), not for the first one, which has the auth element as its root.
So, my question is: how to tell Twisted and addObserver method that I want to react only if root element's name is "auth"?
A:
Ok, finally I got the answer. It's because of XmlStream itself. The connection is active as long as the main root element is not closed (for example: <stream/>). Every element inside it counts as a root element for XPath, which is why "/auth" means <stream><auth></auth></stream>.
| XPath in XmlStream.addObserver doesn't work the way it should | What I want to do is to react only on specified root elements. For example, if user sends XmlStream that looks like:
<auth>
<login>user</login>
<pass>dupa.8</pass>
</auth>
My method ._auth should be executed. I've done it with addObserver method called inside connectionMade method.
self.addObserver("/auth", self._auth)
AFAIK XPath - if I write "/auth" it means that I want my root element to be "auth", so that message:
<longtagislong>
<auth>...</auth>
</longtagislong>
... should be rejected, because auth isn't root.
But Twisted however doesn't work the way I thought it should - my _auth method is executed when second message appears (with auth element inside the tree), not the first one - with auth element as a root.
So, my question is: how to tell Twisted and addObserver method that I want to react only if root element's name is "auth"?
| [
"Ok, finally I got the answer. It's because of XmlStream itself. Connection is active as long as main root element is not closed (for example: <stream/>). Everything inside it is root element for XPath, that's why \"/auth\" means <stream><auth></auth></stream>.\n"
] | [
1
] | [] | [] | [
"python",
"twisted"
] | stackoverflow_0000373189_python_twisted.txt |
Q:
need help-variable creation in Python
I want to create variables named a1, a2, a3, ..., a10.
For that I used a for loop: as the loop variable increments, I need to create the corresponding variable.
Can anyone give me an idea?
At the time of creation I also need to be able to assign values to them.
That's where I'm getting a syntax error.
A:
Usually, we use a list, not a bunch of individual variables.
a = 10*[0]
a[0], a[1], a[2], a[9]
A:
Following what S.Lott said, you can also use a dict, if you really need unique names and the order of the items is not important:
data = {}
for i in range(0, 10):
data['a%d' % i] = i
>>>data
{'a1': 1, 'a0': 0, 'a3': 3, 'a2': 2, 'a5': 5, 'a4': 4, 'a7': 7, 'a6': 6, 'a9': 9, 'a8': 8}
I would add that it is very dangerous to automate variable creation like you want to do, as you might overwrite variables that already exist.
A:
globals() returns the global dictionary of variables:
for i in range(1,6):
globals()["a%i" % i] = i
print a1, a2, a3, a4, a5 # -> 1 2 3 4 5
But frankly: I'd never do this, polluting the namespace automagically is harmful. I'd rather use a list or a dict.
| need help-variable creation in Python | I want to create variables as a1,a2,a3...a10.
For that I used a for loop. As the variable in loop increments I need to create a variable as above.
Can anyone give me an idea?
At the time of creation I also need to be able to assign values to them.
That's where I'm getting syntax error.
| [
"Usually, we use a list, not a bunch of individual variables.\na = 10*[0]\na[0], a[1], a[2], a[9]\n\n",
"Following what S.Lott said, you can also use a dict, if you really nead unique names and that the order of the items is not important:\ndata = {}\nfor i in range(0, 10):\n data['a%d' % i] = i\n\n>>>data\n{'a1': 1, 'a0': 0, 'a3': 3, 'a2': 2, 'a5': 5, 'a4': 4, 'a7': 7, 'a6': 6, 'a9': 9, 'a8': 8}\n\nI would add that this is very dangerous to automate variable creation like you want to do, as you might overwrite variables that could already exist.\n",
"globals() returns the global dictionary of variables:\nfor i in range(1,6):\n globals()[\"a%i\" % i] = i\n\nprint a1, a2, a3, a4, a5 # -> 1 2 3 4 5\n\nBut frankly: I'd never do this, polluting the namespace automagically is harmful. I'd rather use a list or a dict.\n"
] | [
13,
4,
2
] | [
"You can use the exec function:\nfor i in range(0,10):\n exec(\"a%d=%d\" % (i,i))\n\nNot very pythonic way of doing things.\n"
] | [
-1
] | [
"python",
"variables"
] | stackoverflow_0000374572_python_variables.txt |
Q:
How do you test a file.read() error in Python?
I have the following code (adapted from an example given in Dive Into Python) that reads the entire contents of a file into a buffer.
buffer = ""
try:
file = open(postFileName, 'rU')
try:
# Read the entire POST log file into a buffer
buffer += file.read()
finally:
file.close()
except IOError:
buffer += "The POST file could not be opened."
What's bothering me about this code is the inner try/finally block without an except block. Do I need an except block in there? Can the call to read() fail after the call to open() was successful? I know the try-except-finally is now unified, so adding one, at least syntactically, is not a problem.
If I add an except block, under what conditions will it be executed, and how do I write a test to make sure it runs under those conditions?
Additionally, if I don't need an except block, then why do I need the inner try/finally block at all?
A:
I find that finally blocks are often overused. The file close (and a few other similar patterns) is so important that Python grew the with statement (new in 2.5, on by default since 2.6) just to cover this base in a slightly less obscure way.
Do I need an except with a finally?
That hits on the confusing nature of this specific example, and why they added the with statement.
The finally does "no matter what" cleanup. Exception or no exception, the finally is always executed.
Can the call to read() fail after the call to open() was successful?
All OS calls, all I/O calls (almost everything) can raise an exception. All kinds of bad things can happen after open and before read.
If I add an except block, under what conditions will it be executed?
Read up on files. There are lots of goofy I/O errors that can occur between open and read. Also, read up on the built-in exceptions. https://docs.python.org/2/library/exceptions.html
How do I write a test to make sure it runs under those conditions?
You'll need a mock file object. This object responds to open but raises an IOError or OSError on every read; a sketch follows at the end of this answer.
If I don't need an except block, then why do I need the inner try/finally block at all?
Cleanup. The finally will be executed no matter what exception is raised.
Try this. See what it does.
try:
raise OSError("hi mom")
finally:
print "Hmmm"
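For the mock mentioned above, a minimal sketch (the class name is made up): opening "succeeds", every read fails, so both the finally and the except paths get exercised in a test.
class FailingFile(object):
    def read(self, size=-1):
        raise IOError("simulated read failure")
    def close(self):
        pass

buffer = ""
try:
    f = FailingFile()       # stands in for a file that opened fine
    try:
        buffer += f.read()  # always raises
    finally:
        f.close()           # still runs, thanks to finally
except IOError:
    buffer += "The POST file could not be read."
print buffer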
A:
I disagree with the other answers mentioning unifying the try / except / finally blocks. That would change the behaviour, as you wouldn't want the finally block to try to close the file if the open failed. The split blocks are correct here (though it may be better using the new "with open(filename,'rU') as f" syntax instead).
There are reasons the read() could fail. For instance the data could be too big to fit into memory, or the user may have signalled an interrupt with control-C. Those cases won't be caught by the IOError, but are left to be handled (or not) by the caller who may want to do different things depending on the nature of the application.
However, the code does still have an obligation to clean up the file, even where it doesn't deal with the error, hence the finally without the except.
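For reference, a sketch of the with-based version mentioned above; the with statement replaces the inner try/finally and closes the file no matter what:
from __future__ import with_statement  # only needed on Python 2.5

buffer = ""
try:
    with open(postFileName, 'rU') as f:  # f.close() happens automatically
        buffer += f.read()
except IOError:
    buffer += "The POST file could not be opened."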
A:
With a recent version of Python, you don't need to nest try-except and try-finally. try-except-finally has been unified:
try:
non_existing_var
except:
print 'error'
finally:
print 'finished'
| How do you test a file.read() error in Python? | I have the following code (adapted from an example given in Dive Into Python) that reads the entire contents of a file into a buffer.
buffer = ""
try:
file = open(postFileName, 'rU')
try:
# Read the entire POST log file into a buffer
buffer += file.read()
finally:
file.close()
except IOError:
buffer += "The POST file could not be opened."
What's bothering me about this code is the inner try/finally block without an except block. Do I need an except block in there? Can the call to read() fail after the call to open() was successful? I know the try-except-finally is now unified, so adding one, at least syntactically, is not a problem.
If I add an except block, under what conditions will it be executed, and how do I write a test to make sure it runs under those conditions?
Additionally, if I don't need an except block, then why do I need the inner try/finally block at all?
| [
"I find that finally blocks are often overused. The file close (and a few other similar patterns) are so important that Python 3.0 will have a with statement just to cover this base in a slightly less obscure way.\n\nDo I need an except with a finally? \nThat hits on the confusing nature of this specific example, and why they added the with statement. \nThe finally does \"no matter what\" cleanup. Exception or no exception, the finally is always executed.\nCan the call to read() fail after the call to open() was successful? \nAll OS calls, all I/O calls (almost everything) can raise an exception. All kinds of bad things can happen after open and before read.\nIf I add an except block, under what conditions will it be executed?\nRead up on files. There are lots of goofy I/O errors that can occur between open and read. Also, read up on the built-in exceptions. https://docs.python.org/2/library/exceptions.html\nHow do I write a test to make sure it runs under those conditions?\nYou'll need a mock file object. This object will responds to open but raises an IOError or OSError on every read.\nIf I don't need an except block, then why do I need the inner try/finally block at all?\nCleanup. The finally will be executed no matter what exception is raised. \n\nTry this. See what it does.\ntry:\n raise OSError(\"hi mom\")\nfinally:\n print \"Hmmm\"\n\n",
"I disagree with the other answers mentioning unifying the try / except / finally blocks. That would change the behaviour, as you wouldn't want the finally block to try to close the file if the open failed. The split blocks are correct here (though it may be better using the new \"with open(filename,'rU') as f\" syntax instead).\nThere are reasons the read() could fail. For instance the data could be too big to fit into memory, or the user may have signalled an interrupt with control-C. Those cases won't be caught by the IOError, but are left to be handled (or not) by the caller who may want to do different things depending on the nature of the application.\nHowever, the code does still have an obligation to clean up the file, even where it doesn't deal with the error, hence the finally without the except.\n",
"With a recent version of Python, you don't need to nest try-except and try-finally. try-except-finally has been unified:\ntry:\n non_existing_var\nexcept:\n print 'error'\nfinally:\n print 'finished'\n\n"
] | [
7,
3,
0
] | [] | [] | [
"error_handling",
"file_io",
"python"
] | stackoverflow_0000374768_error_handling_file_io_python.txt |
Q:
Should I use get_/set_ prefixes in Python method names?
In Python properties are used instead of the Java-style getters, setters. So one rarely sees get... or set.. methods in the public interfaces of classes.
But in cases where a property is not appropriate one might still end up with methods that behave like getters or setters. Now my questions: should these method names start with get_ / set_? Or is this unpythonic verbosity, since it is often obvious what is meant (and one can still use the docstring to clarify non-obvious situations)?
This might be a matter of personal taste, but I would be interested in what the majority thinks about this. What would you prefer as an API user?
Example: Say we have an object representing multiple cities. One might have a method get_city_by_postalcode(postalcode) or one could use the shorter name city_by_postalcode. I tend towards the latter.
A:
You won't ever lose the chance to make your property behave like a getter/setter later by using descriptors. If you want to change a property to be read-only you can also replace it with a getter method with the same name as the property and decorate it with @property. So my advice is to avoid getters/setters unless the project you are working on already uses them, because you can always change your mind later and make properties read-only, write-only or whatever without modifying the interface to your class.
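A minimal sketch of that idea (the class and attribute names are made up): a plain attribute is later swapped for a read-only property without changing the class's interface.
class Foo(object):
    def __init__(self, x):
        self.x = x       # starts out as a plain attribute

# Later, without touching any callers, x can become read-only:
class Foo(object):
    def __init__(self, x):
        self._x = x

    @property
    def x(self):         # callers still just read foo.x
        return self._x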
A:
I think shorter is better, so I tend to prefer the latter. But what's important is to be consistent within your project: don't mix the two methods. If you jump into someone else's project, keep what the other developers chose initially.
A:
If it's usable as a property (one value to get or set, and no other parameters), I usually do:
class Foo(object):
def _get_x(self):
pass
def _set_x(self, value):
pass
x = property(_get_x, _set_x)
If the getter/setter is any more complex than that, I would use get_x and set_x:
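The example that presumably followed got lost; a sketch of what it might have looked like (the validation logic is made up):
class Foo(object):

    def get_x(self):
        # room for caching, logging, lazy loading, ...
        return self._x

    def set_x(self, value):
        if value < 0:    # hypothetical extra checking
            raise ValueError("x must be non-negative")
        self._x = value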
A:
I've seen it done both ways. Coming from an Objective-C background, I usually do foo()/set_foo() if I can't use a property (although I try to use properties whenever possible). It doesn't really matter that much, though, as long as you're consistent.
(Of course, in your example, I wouldn't call the method get_city_by_postalcode() at all; I'd probably go with translate_postalcode or something similar that uses a better action verb in the name.)
A:
If I have to use a getter/setter, I like it this way:
Suppose you have a variable self._x. Then x() would return the value of self._x, and setX(x) would set the value of self._x
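A minimal sketch of that naming convention:
class Foo(object):
    def __init__(self):
        self._x = 0

    def x(self):         # getter: value = foo.x()
        return self._x

    def setX(self, x):   # setter: foo.setX(42)
        self._x = x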
| Should I use get_/set_ prefixes in Python method names? | In Python properties are used instead of the Java-style getters, setters. So one rarely sees get... or set.. methods in the public interfaces of classes.
But in cases were a property is not appropriate one might still end up with methods that behave like getters or setters. Now my questions: Should these method names start with get_ / set_? Or is this unpythonic vebosity since it is often obvious what is meant (and one can still use the docstring to clarify non-obvious situations)?
This might be a matter of personal taste, but I would be interested in what the majority thinks about this? What would you prefer as an API user?
Example: Say we have an object representing multiple cities. One might have a method get_city_by_postalcode(postalcode) or one could use the shorter name city_by_postalcode. I tend towards the later.
| [
"You won't ever loose the chance to make your property behave like a getter/setter later by using descriptors. If you want to change a property to be read only you can also replace it with a getter method with the same name as the property and decorate it with @property. So my advice is to avoid getters/setters unless the project you are working on already uses them, because you can always change your mind later and make properties read-only, write-only or whatever without modifying the interface to your class.\n",
"I think shorter is better, so I tend to prefer the later. But what's important is to consistent with your project: don't mix the two methods. If you jump into someone else's project, keep what the other developers chose initially.\n",
"If it's usable as a property (one value to get or set, and no other parameters, I usually do:\nclass Foo(object):\n\n def _get_x(self):\n pass\n\n def _set_x(self, value):\n pass\n\n x = property(_get_x, _set_x)\n\nIf the getter/setter is any more complex than that, I would use get_x and set_x:\n",
"I've seen it done both ways. Coming from an Objective-C background, I usually do foo()/set_foo() if I can't use a property (although I try to use properties whenever possible). It doesn't really matter that much, though, as long as you're consistent.\n(Of course, in your example, I wouldn't call the method get_city_by_postalcode() at all; I'd probably go with translate_postalcode or something similar that uses a better action verb in the name.)\n",
"If I have to use a getter/setter, I like it this way:\nSuppose you have a variable self._x. Then x() would return the value of self._x, and setX(x) would set the value of self._x\n"
] | [
7,
5,
4,
1,
0
] | [] | [] | [
"coding_style",
"python"
] | stackoverflow_0000374763_coding_style_python.txt |
Q:
Prevent ftplib from Downloading a File in Progress?
We have an FTP system set up to monitor/download from remote FTP servers that are not under our control. The script connects to the remote FTP and grabs the file names of files on the server; we then check to see if it's something that has already been downloaded. If it hasn't been downloaded then we download the file and add it to the list.
We recently ran into an issue where someone on the remote FTP side will copy in a massive single file (>1 GB); the script will then wake up, see a new file and begin downloading the file that is still being copied in.
What is the best way to check for this? I was thinking of grabbing the file size, waiting a few seconds, checking the file size again and seeing if it has increased; if it hasn't, then we download it. But since time is a concern, we can't wait a few seconds for every single file set to see if its file size has increased.
What would be the best way to go about this? Currently everything is done via Python's ftplib; how can we do this aside from using the aforementioned method?
Yet again let me reiterate this, we have 0 control over the remote ftp sites.
Thanks.
UPDATE1:
I was thinking: what if I tried to rename it? Since we have full permissions on the FTP, would the rename command fail if the file upload is in progress?
We don't have any real options here... do we?
UPDATE2:
Well, here's something interesting: some of the FTP servers we tested appear to allocate the space automatically once the transfer starts.
E.g. if I transfer a 200 MB file to the FTP server and, while the transfer is active, connect to the server and do a SIZE, it reports 200 MB even though the file is only about 10% complete.
Permissions also seem to be set inconsistently: the FTP server that comes with IIS sets the permissions AFTER the file has finished copying, while some of the other older FTP servers set them as soon as you send the file.
:'(
A:
"Damn the torpedoes! Full speed ahead!"
Just download the file. If it is a large file then after the download completes wait as long as is reasonable for your scenario and continue the download from the point it stopped. Repeat until there is no more stuff to download.
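A rough sketch of that approach with ftplib (the host, file names and retry policy are all assumptions); retrbinary's rest argument issues a REST command so the server resumes where the last attempt stopped:
import ftplib
import os

def fetch_with_resume(host, remote_name, local_name):
    ftp = ftplib.FTP(host)
    ftp.login()  # anonymous login, just for the sketch
    while True:
        offset = os.path.getsize(local_name) if os.path.exists(local_name) else 0
        f = open(local_name, 'ab')  # append: keep what we already have
        try:
            try:
                ftp.retrbinary('RETR ' + remote_name, f.write, rest=offset)
                break  # clean finish; one could re-check SIZE here and loop if it grew
            except ftplib.error_temp:
                pass   # transfer broke off; loop around and resume
        finally:
            f.close()
    ftp.quit()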
A:
You can't know when the OS copy is done. It could slow down or wait.
For absolute certainty, you really need two files.
The massive file.
And a tiny trigger file.
They can mess with the massive file all they want. But when they touch the trigger file, you're downloading both.
If you can't get a trigger, you have to balance the time required to poll vs. the time required to download.
Do this.
Get a listing. Check timestamps.
Check sizes vs. previous size of file. If size isn't even close, it's being copied right now. Wait; loop on this step until size is close to previous size (see the sketch after this list).
While you're not done:
a. Get the file.
b. Get a listing AGAIN. Check the size of the new listing, previous listing and your file. If they agree: you're done. If they don't agree: file changed while you were downloading; you're not done.
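A sketch of the size-polling part of those steps (the poll interval and error handling are assumptions; ftplib's FTP.size() issues a SIZE command, which not every server supports):
import time
import ftplib

def wait_until_stable(ftp, name, poll=5):
    """Loop until two consecutive SIZE checks agree."""
    previous = -1
    while True:
        try:
            current = ftp.size(name)
        except ftplib.error_perm:
            current = None   # server refused SIZE for this file
        if current is not None and current == previous:
            return current   # size settled; probably safe to fetch
        previous = current
        time.sleep(poll)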
A:
As you say you have 0 control over the servers and can't make your clients post trigger files as suggested by S. Lott, you must deal with an imperfect solution and risk incomplete file transmission, perhaps by waiting for a while and comparing file sizes before and after.
You can try the rename you suggested, but as you have 0 control you can't be sure that the FTP server administrator (or their successor) won't change platforms or FTP servers, or restrict your permissions.
Sorry.
A:
If you are dealing with multiple files, you could get the list of all the sizes at once, wait ten seconds, and see which are the same. Whichever are still the same should be safe to download.
| Prevent ftplib from Downloading a File in Progress? | We have a ftp system setup to monitor/download from remote ftp servers that are not under our control. The script connects to the remote ftp, and grabs the file names of files on the server, we then check to see if its something that has already been downloaded. If it hasn't been downloaded then we download the file and add it to the list.
We recently ran into an issue, where someone on the remote ftp side, will copy in a massive single file(>1GB) then the script will wake up see a new file and begin downloading the file that is being copied in.
What is the best way to check this? I was thinking of grabbing the file size waiting a few seconds checking the file size again and see if it has increased, if it hasn't then we download it. But since time is of the concern, we can't wait a few seconds for every single file set and see if it's file size has increased.
What would be the best way to go about this, currently everything is done via pythons ftplib, how can we do this aside from using the aforementioned method.
Yet again let me reiterate this, we have 0 control over the remote ftp sites.
Thanks.
UPDATE1:
I was thinking what if i tried to rename it... since we have full permissions on the ftp, if the file upload is in progress would the rename command fail?
We don't have any real options here... do we?
UPDATE2:
Well here's something interesting some of the ftps we tested on appear to automatically allocate the space once the transfer starts.
E.g. If i transfer a 200mb file to the ftp server. While the transfer is active if i connect to the ftp server and do a size while the upload is happening. It shows 200mb for the size. Even though the file is only like 10% complete.
Permissions also seem to be randomly set the FTP Server that comes with IIS sets the permissions AFTER the file is finished copying. While some of the other older ftp servers set it as soon as you send the file.
:'(
| [
"โDamn the torpedoes! Full speed ahead!โ\nJust download the file. If it is a large file then after the download completes wait as long as is reasonable for your scenario and continue the download from the point it stopped. Repeat until there is no more stuff to download.\n",
"You can't know when the OS copy is done. It could slow down or wait.\nFor absolute certainty, you really need two files.\n\nThe massive file.\nAnd a tiny trigger file.\n\nThey can mess with the massive file all they want. But when they touch the trigger file, you're downloading both.\n\nIf you can't get a trigger, you have to balance the time required to poll vs. the time required to download.\nDo this.\n\nGet a listing. Check timestamps.\nCheck sizes vs. previous size of file. If size isn't even close, it's being copied right now. Wait; loop on this step until size is close to previous size.\nWhile you're not done:\na. Get the file.\nb. Get a listing AGAIN. Check the size of the new listing, previous listing and your file. If they agree: you're done. If they don't agree: file changed while you were downloading; you're not done.\n\n",
"As you say you have 0 control over the servers and can't make your clients post trigger files as suggested by S. Lott, you must deal with the imperfect solution and risk incomplete file transmission, perhaps by waiting for a while and compare file sizes before and after.\nYou can try to rename as you suggested, but as you have 0 control you can't be sure that the ftp-server-administrator (or their successor) doesn't change platforms or ftp servers or restricts your permissions.\nSorry.\n",
"If you are dealing with multiple files, you could get the list of all the sizes at once, wait ten seconds, and see which are the same. Whichever are still the same should be safe to download.\n"
] | [
5,
0,
0,
0
] | [] | [] | [
"ftp",
"ftplib",
"python"
] | stackoverflow_0000375620_ftp_ftplib_python.txt |
Q:
Programmatic mail-merge style data injection into existing Excel spreadsheets?
I'd like to automate data entry into Excel spreadsheets. User data will exist on a web site, and when the user requests it, that data will need to be injected into an Excel spreadsheet. The complication is that the format of the Excel spreadsheet can vary significantly between users - it'll be user defined.
I've been thinking of this as a templating problem - the Excel spreadsheet provides the template, and the task is to inject data into specific user-defined cells within that template.
I've looked at xlwt and xlrd for python, as well as jexcelapi and POI-HSSF for Java. They seem like they could work, but given that I simply want to put values into certain cells, they seem like overkill. I'm also worried about re-writing the user's spreadsheet after processing; seems like an opportunity to introduce errors into the process.
Is there a way to tell excel to merge the data from one sheet into another? I'm thinking I could produce a simple spreadsheet that has only the data, and somehow get Excel to merge it into the user's existing spreadsheet.
Make sense? Better approaches?
A:
jXLS is maybe an option. You define an XLS file as a template and then you merge your data.
Quick overview here
http://jxls.sourceforge.net/
A:
WinHttpRequest (http://msdn.microsoft.com/en-us/library/aa384045(VS.85).aspx) may suit; you can use the document and so forth. Here is a snippet from the link.
Dim HttpReq As Object
' Create the WinHTTPRequest ActiveX Object.'
Set HttpReq = New WinHttpRequest
' Open an HTTP connection.'
HttpReq.Open "GET", "http://microsoft.com", False
' Send the HTTP Request.'
HttpReq.Send
' Get all response text.'
Text1.Text = HttpReq.ResponseText
A:
You might want to look into a third-party library called Aspose.Cells. It is available for Java and .Net and allows very granular control of Excel documents. The great thing about this library is that it does not use automation (which could be disastrous in a multi-threaded environment like the web).
I have not personally used Aspose.Cells, but I have used Aspose.Words (for .Net) to create mail-merged Word documents that contained several thousand records and images, and it worked flawlessly.
| Programmatic mail-merge style data injection into existing Excel spreadsheets? | I'd like to automate data entry into Excel spreadsheets. User data will exist on a web site, and when the user requests it, that data will need to be injected into an Excel spreadsheet. The complication is that the format of the Excel spreadsheet can vary significantly between users - it'll be user defined.
I've been thinking of this as a templating problem - the excel spreadsheet provides the template, and the task s to inject data into specific user defined cells within that template.
I've looked at xlwt and xlrd for python, as well as jexcelapi and POI-HSSF for Java. They seem like they could work, but given that I simply want to put values into certain cells, they seem like overkill. I'm also worried about re-writing the user's spreadsheet after processing; seems like an opportunity to introduce errors into the process.
Is there a way to tell excel to merge the data from one sheet into another? I'm thinking I could produce a simple spreadsheet that has only the data, and somehow get Excel to merge it into the user's existing spreadsheet.
Make sense? Better approaches?
| [
"jXLS is maybe an option. You define an XLS file as a template and then you merge your data. \nQuick overview here\nhttp://jxls.sourceforge.net/\n",
"WinHttpRequest (http://msdn.microsoft.com/en-us/library/aa384045(VS.85).aspx) may suit, you can use the document and so forth. Here is a snippet from the link. \nDim HttpReq As Object\n\n' Create the WinHTTPRequest ActiveX Object.'\nSet HttpReq = New WinHttpRequest\n\n' Open an HTTP connection.'\nHttpReq.Open \"GET\", \"http://microsoft.com\", False\n\n' Send the HTTP Request.'\nHttpReq.Send\n\n' Get all response text.'\nText1.Text = HttpReq.ResponseText\n\n",
"You might want to look into a third party library called Aspose.Cells. It is available for Java and .Net and allows very granular control of Excel documents. The great thing about this library is that it does not use automation (which could be disasterous in a multi-threaded environment like the web).\nI have not personally used Apose.Cells, but I have used Aspose.Words (for .Net) to create mail merged Word documents that contained several thousand records and images, and it worked flawlessly.\n"
] | [
2,
1,
0
] | [] | [] | [
"excel",
"java",
"python"
] | stackoverflow_0000376221_excel_java_python.txt |
Q:
Stackless python and multicores?
So, I'm toying around with Stackless Python and a question popped up in my head, maybe this is "assumed" or "common" knowledge, but I couldn't find it actually written anywhere on the stackless site.
Does Stackless Python take advantage of multicore CPUs? In normal Python you have the GIL being constantly present and to make (true) use of multiple cores you need to use several processes, is this true for Stackless also?
A:
Stackless Python does not make use of any kind of multi-core environment it runs on.
This is a common misconception about Stackless, as it allows the programmer to take advantage of thread-style programming. For many people these two are closely intertwined, but they are, in fact, two separate things.
Internally Stackless uses a round-robin scheduler to schedule every tasklet (micro-thread), but no tasklet runs concurrently with another one. This means that if one tasklet is busy, the others must wait until that tasklet relinquishes control. By default the scheduler will not stop a tasklet and give processor time to another; it is the tasklet's responsibility to schedule itself back in at the end of the schedule queue using stackless.schedule(), or by finishing its calculations.
All tasklets are thus executed in a sequential manner, even when multiple cores are available.
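A minimal sketch of that cooperative round-robin scheduling (this only runs on a Stackless interpreter):
import stackless

def worker(name):
    for i in range(3):
        print name, i
        stackless.schedule()  # voluntarily yield to the next tasklet

stackless.tasklet(worker)('a')
stackless.tasklet(worker)('b')
stackless.run()  # interleaves a and b strictly one at a time, never in parallel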
The reason why Stackless does not have multi-core support is that this makes thread-style programming a whole lot easier. And this is just what Stackless is all about:
from the official stackless website
Stackless Python is an enhanced
version of the Python programming
language. It allows programmers to
reap the benefits of thread-based
programming without the performance
and complexity problems associated
with conventional threads. The
microthreads that Stackless adds to
Python are a cheap and lightweight
convenience which can if used
properly, give the following benefits:
Improved program structure.
More readable code.
Increased programmer productivity.
Here is a link to some more information about multiple cores and stackless.
| Stackless python and multicores? | So, I'm toying around with Stackless Python and a question popped up in my head, maybe this is "assumed" or "common" knowledge, but I couldn't find it actually written anywhere on the stackless site.
Does Stackless Python take advantage of multicore CPUs? In normal Python you have the GIL being constantly present and to make (true) use of multiple cores you need to use several processes, is this true for Stackless also?
| [
"Stackless python does not make use of any kind of multi-core environment it runs on.\nThis is a common misconception about Stackless, as it allows the programmer to take advantage of thread-based programming. For many people these two are closely intertwined, but are, in fact two separate things.\nInternally Stackless uses a round-robin scheduler to schedule every tasklet (micro threads), but no tasklet can be run concurrent with another one. This means that if one tasklet is busy, the others must wait until that tasklet relinquishes control. By default the scheduler will not stop a tasklet and give processor time to another. It is the tasklet's responsibility to schedule itself back in the end of the schedule queue using Stackless.schedule(), or by finishing its calculations.\nall tasklets are thus executed in a sequential manner, even when multiplpe cores are available.\nThe reason why Stackless does not have multi-core support is because this makes threads a whole lot easier. And this is just what stackless is all about:\nfrom the official stackless website\n\nStackless Python is an enhanced\n version of the Python programming\n language. It allows programmers to\n reap the benefits of thread-based\n programming without the performance\n and complexity problems associated\n with conventional threads. The\n microthreads that Stackless adds to\n Python are a cheap and lightweight\n convenience which can if used\n properly, give the following benefits:\n\nImproved program structure.\nMore readable code.\nIncreased programmer productivity.\n\n\nHere is a link to some more information about multiple cores and stackless. \n"
] | [
41
] | [] | [] | [
"concurrency",
"multicore",
"multithreading",
"python",
"python_stackless"
] | stackoverflow_0000377254_concurrency_multicore_multithreading_python_python_stackless.txt |
Q:
Code not waiting for class initialization!
I have a block of code that basically initializes several classes, but they are placed in a sequential order, as later ones reference earlier ones.
For some reason the last one initializes before the first one... it seems to me there is some sort of threading going on. What I need to know is how I can stop it from doing this.
Is there some way to make a class init do something similar to sending a return value?
Or maybe I could use the class in an if statement of some sort to check if the class has already been initialized?
I'm a bit new to Python and am migrating from C, so I'm still getting used to the little differences like naming conventions.
A:
Python (up to and including 3.0) has a global interpreter lock, and nothing runs in parallel behind your back: everything executes in a single thread, in sequence.
My guess is that some side effect initializes the last class from a different place than you expect. Throw an exception in __init__ of that last class to see where it gets called.
A:
Spaces vs. Tabs issue...ugh. >.>
Well, at least it works now. I admit that I kind of miss the braces from C instead of forced indentation. It's quite handy as a prototyping language though. Maybe I'll grow to love it more when I get a better grasp of it.
| Code not waiting for class initialization! | I have a block of code that basically intializes several classes, but they are placed in a sequential order, as later ones reference early ones.
For some reason the last one initializes before the first one...it seems to me there is some sort of threading going on. What I need to know is how can I stop it from doing this?
Is there some way to make a class init do something similar to sending a return value?
Or maybe I could use the class in an if statement of some sort to check if the class has already been initialized?
I'm a bit new to Python and am migrating from C, so I'm still getting used to the little differences like naming conventions.
| [
"Python upto 3.0 has a global lock, so everything is running in a single thread and in sequence.\nMy guess is that some side effect initializes the last class from a different place than you expect. Throw an exception in __init__ of that last class to see where it gets called.\n",
"Spaces vs. Tabs issue...ugh. >.>\nWell, atleast it works now. I admit that I kind of miss the braces from C instead of forced-indentation. It's quite handy as a prototyping language though. Maybe I'll grow to love it more when I get a better grasp of it.\n"
] | [
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0000377362_python.txt |
Q:
OOP and MVC programming style
I'm writing some data analysis software and decided to use such approach:
epn:
model/data.py <- Model definition
model/reader.py <- How to read data into model
view/gui.py <- main gui frame (wx)
view/dialogs.py <- different dialogs (wx)
epn.py <- controller
For communication between gui and data I used wx.lib.pubsub. So when button 'Modulation index' is pressed, epn.py catches the message and orders:
self.view.control_panel.displayModulationIndex(self.data.getModulationIndex())
where self.data.getModulationIndex() is:
def getModulationIndex(self):
m = self.mean_flux
f = self.fluxes
# other things
On the other hand I can write it as:
def getModulationIndex(self, m, f)
# other things
and call it as:
m = self.data.mean_flux
f = self.data.fluxes
self.view.control_panel.displayModulationIndex(self.data.getModulationIndex(m, f))
From my point of view the first example is better (shorter, encapsulated, more error-proof). But it is harder to test --- you can't just call the method on some mock objects.
hope this one is clear
regards
chriss
A:
Example 1: "better (shorter, encapsulated, more error-proof)"
Not really.
The example 1 function call is no shorter than example 2; you have to set the instance variables before calling the function instead of passing the values as arguments. It's the same code.
The example 1 function call is no more encapsulated. Encapsulation is a property of the class as a whole, not an individual method. Methods are just methods, and they often have arguments so that they're clear, obvious, replaceable and easy to test.
The example 1 function call is not error-proof in any sense of the word. You're just as likely to forget to set the instance variable as you are to forget to pass the instance variable in the function call.
When you have explicit arguments (example 2), Python can check to see that you provided the correct number of arguments. In example 1, there's no checking done for you.
Example 1: "harder to test"
Agreed.
Summary
Instance variables are special. They reflect the state of being of some object. For model objects, they're serious business because they're often persistent. For GUI objects, they're "what's being displayed right now" which is often transient.
Don't over-use instance variables. Instance variables should be meaningful and significant. Not just junk variables for things that don't have a proper home anywhere else.
Method functions are just functions. In an OO world, they're still just functions. They map input arguments to output results (or object state changes). Pass all the arguments that make sense.
| OOP and MVC programming style | I'm writing some data analysis software and decided to use such approach:
epn:
model/data.py <- Model definition
model/reader.py <- How to read data into model
view/gui.py <- main gui frame (wx)
view/dialogs.py <- different dialogs (wx)
epn.py <- controller
For communication between gui and data I used wx.lib.pubsub. So when button 'Modulation index' is pressed, epn.py catches the message and orders:
self.view.control_panel.displayModulationIndex(self.data.getModulationIndex())
where self.data.getModulationIndex() is:
def getModulationIndex(self):
m = self.mean_flux
f = self.fluxes
# other things
On the other hand I can write it as:
def getModulationIndex(self, m, f)
# other things
and call it as:
m = self.data.mean_flux
f = self.data.fluxes
self.view.control_panel.displayModulationIndex(self.data.getModulationIndex(m, f))
From my point of view the first example is better (shorter, encapsulated, more error-proof). But it is harder to test it --- you can't just call the method on some mock objects.
hope this one is clear
regards
chriss
| [
"Example 1: \"better (shorter, encapsulated, more error-proof)\"\nNot really.\n\nThe example 1 function call is no shorter than example 2; you have to set the instance variables before calling the function instead of passing the values as arguments. It's the same code.\n\nThe example 1 function call is no more encapsulated. Encapsulation is a property of the class as a whole, not an individual method. Methods are just methods, and they often have arguments so that they're clear, obvious, replaceable and easy to test.\n\nThe example 1 function call is not error-proof in any sense of the word. You're just as likely to forget to set the instance variable as you are to forget to pass the instance variable in the function call.\nWhen you have explicit arguments (example 2), Python can check to see that you provided the correct number of arguments. In example 1, there's no checking done for you.\n\n\nExample 1: \"harder to test\"\nAgreed.\nSummary\nInstance variables are special. They reflect the state of being of some object. For model objects, they're serious business because they're often persistent. For GUI objects, they're \"what's being displayed right now\" which is often transient.\nDon't over-use instance variables. Instance variables should be meaningful and significant. Not just junk variables for things that don't have a proper home anywhere else.\nMethod functions are just functions. In an OO world, they're still just functions. They map input arguments to output results (or object state changes). Pass all the arguments that make sense.\n"
] | [
3
] | [] | [] | [
"coding_style",
"model_view_controller",
"oop",
"python"
] | stackoverflow_0000377337_coding_style_model_view_controller_oop_python.txt |
Q:
Python, Regular Expression Postcode search
I am trying to use regular expressions to find a UK postcode within a string.
I have got the regular expression working inside RegexBuddy, see below:
\b[A-Z]{1,2}[0-9][A-Z0-9]? [0-9][ABD-HJLNP-UW-Z]{2}\b
I have a bunch of addresses and want to grab the postcode from them, example below:
123 Some Road Name Town, City County PA23 6NH
How would I go about this in Python? I am aware of the re module for Python but I am struggling to get it working.
Cheers
Eef
A:
Repeating your address 3 times, with postcodes PA23 6NH, PA2 6NH and PA2Q 6NH as a test for your pattern, and using the regex from Wikipedia against yours, the code is:
import re
s="123 Some Road Name\nTown, City\nCounty\nPA23 6NH\n123 Some Road Name\nTown, City\n"\
  "County\nPA2 6NH\n123 Some Road Name\nTown, City\nCounty\nPA2Q 6NH"
#custom
print re.findall(r'\b[A-Z]{1,2}[0-9][A-Z0-9]? [0-9][ABD-HJLNP-UW-Z]{2}\b', s)
#regex from http://en.wikipedia.org/wiki/UK_postcodes#Validation
print re.findall(r'[A-Z]{1,2}[0-9R][0-9A-Z]? [0-9][A-Z]{2}', s)
the result is
['PA23 6NH', 'PA2 6NH', 'PA2Q 6NH']
['PA23 6NH', 'PA2 6NH', 'PA2Q 6NH']
Both regexes give the same result.
A:
Try
import re
re.findall("[A-Z]{1,2}[0-9][A-Z0-9]? [0-9][ABD-HJLNP-UW-Z]{2}", x)
You don't need the \b.
A:
#!/usr/bin/env python
import re
ADDRESS="""123 Some Road Name
Town, City
County
PA23 6NH"""
reobj = re.compile(r'(\b[A-Z]{1,2}[0-9][A-Z0-9]? [0-9][ABD-HJLNP-UW-Z]{2}\b)')
matchobj = reobj.search(ADDRESS)
if matchobj:
print matchobj.group(1)
Example output:
[user@host]$ python uk_postcode.py
PA23 6NH
| Python, Regular Expression Postcode search | I am trying to use regular expressions to find a UK postcode within a string.
I have got the regular expression working inside RegexBuddy, see below:
\b[A-Z]{1,2}[0-9][A-Z0-9]? [0-9][ABD-HJLNP-UW-Z]{2}\b
I have a bunch of addresses and want to grab the postcode from them, example below:
123 Some Road Name Town, City County PA23 6NH
How would I go about this in Python? I am aware of the re module for Python but I am struggling to get it working.
Cheers
Eef
| [
"repeating your address 3 times with postcode PA23 6NH, PA2 6NH and PA2Q 6NH as test for you pattern and using the regex from wikipedia against yours, the code is..\nimport re\n\ns=\"123 Some Road Name\\nTown, City\\nCounty\\nPA23 6NH\\n123 Some Road Name\\nTown, City\"\\\n \"County\\nPA2 6NH\\n123 Some Road Name\\nTown, City\\nCounty\\nPA2Q 6NH\"\n\n#custom \nprint re.findall(r'\\b[A-Z]{1,2}[0-9][A-Z0-9]? [0-9][ABD-HJLNP-UW-Z]{2}\\b', s)\n\n#regex from #http://en.wikipedia.orgwikiUK_postcodes#Validation \nprint re.findall(r'[A-Z]{1,2}[0-9R][0-9A-Z]? [0-9][A-Z]{2}', s)\n\nthe result is \n['PA23 6NH', 'PA2 6NH', 'PA2Q 6NH']\n['PA23 6NH', 'PA2 6NH', 'PA2Q 6NH']\n\nboth the regex's give the same result.\n",
"Try\nimport re\nre.findall(\"[A-Z]{1,2}[0-9][A-Z0-9]? [0-9][ABD-HJLNP-UW-Z]{2}\", x)\n\nYou don't need the \\b. \n",
"#!/usr/bin/env python\n\nimport re\n\nADDRESS=\"\"\"123 Some Road Name\nTown, City\nCounty\nPA23 6NH\"\"\"\n\nreobj = re.compile(r'(\\b[A-Z]{1,2}[0-9][A-Z0-9]? [0-9][ABD-HJLNP-UW-Z]{2}\\b)')\nmatchobj = reobj.search(ADDRESS)\nif matchobj:\n print matchobj.group(1)\n\nExample output: \n[user@host]$ python uk_postcode.py \nPA23 6NH\n\n"
] | [
10,
0,
0
] | [] | [] | [
"postal_code",
"python",
"regex"
] | stackoverflow_0000378157_postal_code_python_regex.txt |
Q:
Problem sub-classing BaseException in Python
I wanted to create my own Python exception class, like this:
class MyException(BaseException):
def __init__(self, errno, address):
if errno == 10048:
mess = str(address) + ' is already in use'
else:
mess = 'Unable to open ' + str(address)
BaseException.__init__(mess)
but when the program got to the call to BaseException.__init__(), I got this traceback:
BaseException.__init__(mess)
TypeError: descriptor '__init__' requires a 'exceptions.BaseException' object but received a 'str'
I thought that BaseException would take any set of arguments. Also, how am I supposed to pass an 'exceptions.BaseException' object into exceptions.BaseException's constructor?
A:
You have to call the method of the base class with the instance as the first argument:
BaseException.__init__(self, mess)
To quote from the tutorial:
An overriding method in a derived class may in fact want to extend rather than simply replace the base class method of the same name. There is a simple way to call the base class method directly: just call BaseClassName.methodname(self, arguments). This is occasionally useful to clients as well. (Note that this only works if the base class is defined or imported directly in the global scope.)
As mentioned by Tony Arkles and in the documentation,
All built-in, non-system-exiting exceptions are derived from this class. All user-defined exceptions should also be derived from [Exception].
so you shouldn't inherit from BaseException, anyway...
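Putting the two answers together, a sketch of the corrected class from the question:
class MyException(Exception):  # Exception, not BaseException
    def __init__(self, errno, address):
        if errno == 10048:
            mess = str(address) + ' is already in use'
        else:
            mess = 'Unable to open ' + str(address)
        Exception.__init__(self, mess)  # pass self explicitly

raise MyException(10048, ('127.0.0.1', 8080))  # hypothetical usage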
A:
hop has it right.
As a side note, you really should not subclass BaseException; you should be subclassing Exception instead. (Unless you really really know what you're doing)
| Problem sub-classing BaseException in Python | I wanted to create my own Python exception class, like this:
class MyException(BaseException):
def __init__(self, errno, address):
if errno == 10048:
mess = str(address) + ' is already in use'
else:
mess = 'Unable to open ' + str(address)
BaseException.__init__(mess)
but when the program got to the call to BaseException.__init__(), I got this traceback:
BaseException.__init__(mess)
TypeError: descriptor '__init__' requires a 'exceptions.BaseException' object but received a 'str'
I thought that BaseException would take any set of arguments. Also, how I am supposed to pass an 'exceptions.BaseException' object into exceptions.BaseException's constructor?
| [
"You have to call the method of the base class with the instance as the first argument:\nBaseException.__init__(self, mess)\n\nTo quote from the tutorial:\n\nAn overriding method in a derived class may in fact want to extend rather than simply replace the base class method of the same name. There is a simple way to call the base class method directly: just call BaseClassName.methodname(self, arguments). This is occasionally useful to clients as well. (Note that this only works if the base class is defined or imported directly in the global scope.)\n\nAs mentioned by Tony Arkles and in the documentation,\n\nAll built-in, non-system-exiting exceptions are derived from this class. All user-defined exceptions should also be derived from [Exception].\n\nso you shouldn't inherit from BaseException, anyway...\n",
"hop has it right.\nAs a side note, you really should not subclass BaseException, you should be subclassing Exception instead. (Unless you really really know what you're doing)\n"
] | [
10,
6
] | [] | [] | [
"exception",
"inheritance",
"python"
] | stackoverflow_0000378493_exception_inheritance_python.txt |
Q:
Keeping GUIs responsive during long-running tasks
Keeping the GUI responsive while the application does some CPU-heavy processing is one of the challenges of effective GUI programming.
Here's a good discussion of how to do this in wxPython. To summarize, there are 3 ways:
Use threads
Use wxYield
Chunk the work and do it in the IDLE event handler
Which method have you found to be the most effective? Techniques from other frameworks (like Qt, GTK or Windows API) are also welcome.
A:
Threads. They're what I always go for because you can do it in every framework you need.
And once you're used to multi-threading and parallel processing in one language/framework, you're good on all frameworks.
A:
Definitely threads. Why? The future is multi-core. Almost any new CPU has more than one core or, if it has just one, it might support hyperthreading and thus pretend it has more than one. To effectively make use of multi-core CPUs (and Intel is planning to go up to 32 cores in the not so far future), you need multiple threads. If you run everything in one main thread (usually the UI thread is the main thread), users will have CPUs with 8, 16 and one day 32 cores and your application never uses more than one of these; in other words it runs much, much slower than it could run.
Actually, if you plan an application nowadays, I would move away from the classical design and think of a master/slave relationship. Your UI is the master; its only task is to interact with the user. That is, displaying data to the user and gathering user input. Whenever your app needs to "process any data" (even small amounts, and much more important, big ones), create a "task" of any kind, forward this task to a background thread and make the thread perform the task, providing feedback to the UI (e.g. how many percent it has completed, or just whether the task is still running or not, so the UI can show a "work-in-progress indicator"). If possible, split the task into many small, independent sub-tasks and run more than one background process, feeding one sub-task to each of them. That way your application can really benefit from multi-core and get faster the more cores CPUs have.
Actually companies like Apple and Microsoft are already planning how to make their still mostly single-threaded UIs themselves multithreaded. Even with the approach above, you may one day have the situation that the UI is the bottleneck itself. The background processes can process data much faster than the UI can present it to the user or ask the user for input. Today many UI frameworks are barely thread-safe, many not thread-safe at all, but that will change. Serial processing (doing one task after another) is a dying design, parallel processing (doing many tasks at once) is where the future goes. Just look at graphics adapters. Even the most modern NVidia card has a pitiful performance if you look at the processing speed in MHz/GHz of the GPU alone. How come it can beat the crap out of CPUs when it comes to 3D calculations? Simple: instead of calculating one polygon point or one texture pixel after another, it calculates many of them in parallel (actually a whole bunch at the same time) and that way it reaches a throughput that still makes CPUs cry. E.g. the ATI X1900 (to name the competitor as well) has 48 shader units!
A:
I think delayedresult is what you are looking for:
http://www.wxpython.org/docs/api/wx.lib.delayedresult-module.html
See the wxpython demo for an example.
A:
Threads or processes depending on the application. Sometimes it's actually best to have the GUI be its own program and just send asynchronous calls to other programs when it has work to do. You'll still end up having multiple threads in the GUI to monitor for results, but it can simplify things if the work being done is complex and not directly connected to the GUI.
A:
Threads -
Let's use a simple 2-layer view (GUI, application logic).
The application logic work should be done in a separate Python thread. For asynchronous events that need to propagate up to the GUI layer, use wx's event system to post custom events. Posting wx events is thread-safe so you could conceivably do it from multiple contexts.
Working in the other direction (GUI input events triggering application logic), I have found it best to home-roll a custom event system. Use the Queue module to have a thread-safe way of pushing and popping event objects. Then, for every synchronous member function, pair it with an async version that pushes the sync function object and the parameters onto the event queue.
This works particularly well if only a single application logic-level operation can be performed at a time. The benefit of this model is that synchronization is simple - each synchronous function works within its own context, sequentially from start to end, without worry of pre-emption or hand-coded yielding. You will not need locks to protect your critical sections. At the end of the function, post an event to the GUI layer indicating that the operation is complete.
You could scale this to allow multiple application-level threads to exist, but the usual concerns with synchronization will re-appear.
edit - Forgot to mention the beauty of this is that it is possible to completely decouple the application logic from the GUI code. The modularity helps if you ever decide to use a different framework or provide a command-line version of the app. To do this, you will need an intermediate event dispatcher (application level -> GUI) that is implemented by the GUI layer.
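A rough sketch of that queue-based split (view.show_result and model.long_computation are made-up names); the worker thread never touches widgets, it only hands results back to the GUI thread via wx.CallAfter:
import threading
import Queue  # 'queue' on Python 3
import wx

tasks = Queue.Queue()

def worker(view):
    while True:
        func, args = tasks.get()                # blocks until work arrives
        result = func(*args)                    # heavy work, off the GUI thread
        wx.CallAfter(view.show_result, result)  # safe hand-off to the GUI thread

def start_worker(view):
    t = threading.Thread(target=worker, args=(view,))
    t.setDaemon(True)  # don't keep the app alive on exit
    t.start()

# the async twin of a synchronous application-logic call:
def compute_async(model):
    tasks.put((model.long_computation, ()))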
A:
Working with Qt/C++ for Win32.
We divide the major work units into different processes. The GUI runs as a separate process and is able to command/receive data from the "worker" processes as needed. Works nicely in today's multi-core world.
A:
This answer doesn't apply to the OP's question regarding Python, but is more of a meta-response.
The easy way is threads. However, not every platform has pre-emptive threading (e.g. BREW, some other embedded systems). If possible, simply chunk the work and do it in the IDLE event handler.
Another problem with using threads in BREW is that it doesn't clean up C++ stack objects, so it's way too easy to leak memory if you simply kill the thread.
A:
I use threads so the GUI's main event loop never blocks.
A:
For some types of operations, using separate processes makes a lot of sense. Back in the day, spawning a process incurred a lot of overhead. With modern hardware this overhead is hardly even a blip on the screen. This is especially true if you're spawning a long running process.
One (arguable) advantage is that it's a simpler conceptual model than threads that might lead to more maintainable code. It can also make your code easier to test, as you can write test scripts that exercise these external processes without having to involve the GUI. Some might even argue that is the primary advantage.
In the case of some code I once worked on, switching from threads to separate processes led to a net reduction of over 5000 lines of code while at the same time making the GUI more responsive, the code easier to maintain and test, all while improving the total overall performance.
| Keeping GUIs responsive during long-running tasks | Keeping the GUI responsive while the application does some CPU-heavy processing is one of the challenges of effective GUI programming.
Here's a good discussion of how to do this in wxPython. To summarize, there are 3 ways:
Use threads
Use wxYield
Chunk the work and do it in the IDLE event handler
Which method have you found to be the most effective ? Techniques from other frameworks (like Qt, GTK or Windows API) are also welcome.
| [
"Threads. They're what I always go for because you can do it in every framework you need. \nAnd once you're used to multi-threading and parallel processing in one language/framework, you're good on all frameworks.\n",
"Definitely threads. Why? The future is multi-core. Almost any new CPU has more than one core or if it has just one, it might support hyperthreading and thus pretending it has more than one. To effectively make use of multi-core CPUs (and Intel is planing to go up to 32 cores in the not so far future), you need multiple threads. If you run all in one main thread (usually the UI thread is the main thread), users will have CPUs with 8, 16 and one day 32 cores and your application never uses more than one of these, IOW it runs much, much slower than it could run.\nActual if you plan an application nowadays, I would go away of the classical design and think of a master/slave relationship. Your UI is the master, it's only task is to interact with the user. That is displaying data to the user and gathering user input. Whenever you app needs to \"process any data\" (even small amounts and much more important big ones), create a \"task\" of any kind, forward this task to a background thread and make the thread perform the task, providing feedback to the UI (e.g. how many percent it has completed or just if the task is still running or not, so the UI can show a \"work-in-progress indicator\"). If possible, split the task into many small, independent sub-tasks and run more than one background process, feeding one sub-task to each of them. That way your application can really benefit from multi-core and get faster the more cores CPUs have.\nActually companies like Apple and Microsoft are already planing on how to make their still most single threaded UIs themselves multithreaded. Even with the approach above, you may one day have the situation that the UI is the bottleneck itself. The background processes can process data much faster than the UI can present it to the user or ask the user for input. Today many UI frameworks are little thread-safe, many not thread-safe at all, but that will change. Serial processing (doing one task after another) is a dying design, parallel processing (doing many task at once) is where the future goes. Just look at graphic adapters. Even the most modern NVidia card has a pitiful performance, if you look at the processing speed in MHz/GHz of the GPU alone. How comes it can beat the crap out of CPUs when it comes to 3D calculations? Simple: Instead of calculating one polygon point or one texture pixel after another, it calculates many of them in parallel (actually a whole bunch at the same time) and that way it reaches a throughput that still makes CPUs cry. E.g. the ATI X1900 (to name the competitor as well) has 48 shader units!\n",
"I think delayedresult is what you are looking for:\nhttp://www.wxpython.org/docs/api/wx.lib.delayedresult-module.html\nSee the wxpython demo for an example.\n",
"Threads or processes depending on the application. Sometimes it's actually best to have the GUI be it's own program and just send asynchronous calls to other programs when it has work to do. You'll still end up having multiple threads in the GUI to monitor for results, but it can simplify things if the work being done is complex and not directly connected to the GUI.\n",
"Threads -\nLet's use a simple 2-layer view (GUI, application logic).\nThe application logic work should be done in a separate Python thread. For Asynchronous events that need to propagate up to the GUI layer, use wx's event system to post custom events. Posting wx events is thread safe so you could conceivably do it from multiple contexts.\nWorking in the other direction (GUI input events triggering application logic), I have found it best to home-roll a custom event system. Use the Queue module to have a thread-safe way of pushing and popping event objects. Then, for every synchronous member function, pair it with an async version that pushes the sync function object and the parameters onto the event queue. \nThis works particularly well if only a single application logic-level operation can be performed at a time. The benefit of this model is that synchronization is simple - each synchronous function works within it's own context sequentially from start to end without worry of pre-emption or hand-coded yielding. You will not need locks to protect your critical sections. At the end of the function, post an event to the GUI layer indicating that the operation is complete. \nYou could scale this to allow multiple application-level threads to exist, but the usual concerns with synchronization will re-appear.\nedit - Forgot to mention the beauty of this is that it is possible to completely decouple the application logic from the GUI code. The modularity helps if you ever decide to use a different framework or use provide a command-line version of the app. To do this, you will need an intermediate event dispatcher (application level -> GUI) that is implemented by the GUI layer. \n",
"Working with Qt/C++ for Win32.\nWe divide the major work units into different processes. The GUI runs as a separate process and is able to command/receive data from the \"worker\" processes as needed. Works nicely in todays multi-core world.\n",
"This answer doesn't apply to the OP's question regarding Python, but is more of a meta-response.\nThe easy way is threads. However, not every platform has pre-emptive threading (e.g. BREW, some other embedded systems) If possibly, simply chunk the work and do it in the IDLE event handler.\nAnother problem with using threads in BREW is that it doesn't clean up C++ stack objects, so it's way too easy to leak memory if you simply kill the thread.\n",
"I use threads so the GUI's main event loop never blocks.\n",
"For some types of operations, using separate processes makes a lot of sense. Back in the day, spawning a process incurred a lot of overhead. With modern hardware this overhead is hardly even a blip on the screen. This is especially true if you're spawning a long running process.\nOne (arguable) advantage is that it's a simpler conceptual model than threads that might lead to more maintainable code. It can also make your code easier to test, as you can write test scripts that exercise these external processes without having to involve the GUI. Some might even argue that is the primary advantage.\nIn the case of some code I once worked on, switching from threads to separate processes led to a net reduction of over 5000 lines of code while at the same time making the GUI more responsive, the code easier to maintain and test, all while improving the total overall performance.\n"
] | [
15,
7,
2,
1,
1,
0,
0,
0,
0
] | [] | [] | [
"python",
"user_interface",
"wxpython"
] | stackoverflow_0000148963_python_user_interface_wxpython.txt |
Q:
Is there a python library for editing msword doc files?
Possible Duplicate:
Reading/Writing MS Word files in Python
I know there are some libraries for editing excel files but is there anything for editing msword 97/2000/2003 .doc files in python? Ideally I'd like to make some minor changes to the formatting of the text based on the contents of the text. A really trivial example would be highlighting every word starting with a capital.
A:
Why not look at using python-uno to load the document into OpenOffice and manipulate it using the UNO interface. There is some example code on the site I just linked to which can get you started.
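To give the flavour, here is a rough sketch of the usual UNO bootstrap. It assumes OpenOffice is already running in listening mode (started with something like soffice "-accept=socket,host=localhost,port=2002;urp;"), and the file path is a placeholder:
import uno

# Connect to the running OpenOffice instance over a socket.
local_ctx = uno.getComponentContext()
resolver = local_ctx.ServiceManager.createInstanceWithContext(
    "com.sun.star.bridge.UnoUrlResolver", local_ctx)
ctx = resolver.resolve(
    "uno:socket,host=localhost,port=2002;urp;StarOffice.ComponentContext")
desktop = ctx.ServiceManager.createInstanceWithContext(
    "com.sun.star.frame.Desktop", ctx)

# Load the document and read its text back.
doc = desktop.loadComponentFromURL("file:///tmp/test.doc", "_blank", 0, ())
print doc.getText().getString()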
A:
If platform independence is important, then I'd recommend using the OpenOffice API either through BASIC or Python. OpenOffice can also run in headless mode, without a GUI, so you can automate it for batch jobs. These links might be helpful:
(BASIC) Text Documents in OpenOffice
(Python) Examples
It's definitely more involved than importing a module and doing a string replace, but OpenOffice is probably the best free .doc reader, that you can hook into.
A:
The PyWin32 library allows you to access COM objects from Python, including all of the various Office COM APIs. I won't claim it's easy to use, but it does work.
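For instance, a quick sketch against Word's COM object model (the paths are made up, and Word must be installed); it approximates the capital-letter highlighting from the question by bolding instead:
import win32com.client

word = win32com.client.Dispatch("Word.Application")
doc = word.Documents.Open(r"C:\docs\example.doc")

# Bold every word that starts with a capital letter.
for w in doc.Words:
    if w.Text[:1].isupper():
        w.Font.Bold = True

doc.SaveAs(r"C:\docs\example-marked.doc")
word.Quit()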
A:
Per this SO post, I found out about jXLS, which uses Apache POI. POI has many subcomponents, including HWPF:
HWPF is our port of the Microsoft Word
97 file format to pure Java. It
supports read, and limited write
capabilities. Please see the HWPF
project page for more information.
This component is in the early stages
of development. It can already read
and write simple files.
Since this is a Java library, it could be scripted using Jython. I don't know how good the writing capabilities are yet, but please post a comment back if it helps.
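If you go the Jython route, a minimal sketch would look something like this (POI jars on the classpath; the filename is a placeholder):
from java.io import FileInputStream
from org.apache.poi.hwpf import HWPFDocument

# Open a Word 97 document and dump its text.
doc = HWPFDocument(FileInputStream("example.doc"))
print doc.getRange().text()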
| Is there a python library for editing msword doc files? |
Possible Duplicate:
Reading/Writing MS Word files in Python
I know there are some libraries for editing excel files but is there anything for editing msword 97/2000/2003 .doc files in python? Ideally I'd like to make some minor changes to the formatting of the text based on the contents of the text. A really trivial example would be highlighting every word starting with a capital.
| [
"Why not look at using python-uno to load the document into OpenOffice and manipulate it using the UNO interface. There is some example code on the site I just linked to which can get you started.\n",
"If platform independence is important, then I'd recommend using the OpenOffice API either through BASIC or Python. OpenOffice can also run in headless mode, without a GUI, so you can automate it for batch jobs. These links might be helpful:\n\n(BASIC) Text Documents in OpenOffice\n(Python) Examples\n\nIt's definitely more involved than importing a module and doing a string replace, but OpenOffice is probably the best free .doc reader, that you can hook into.\n",
"The PyWin32 library allows you to access COM objects from Python, including all of the various Office COM APIs. I won't claim it's easy to use, but it does work.\n",
"Per this SO post, I found out about jXLS, which uses Apache POI. POI has many subcomponents, including HWPF:\n\nHWPF is our port of the Microsoft Word\n 97 file format to pure Java. It\n supports read, and limited write\n capabilities. Please see the HWPF\n project page for more information.\n This component is in the early stages\n of development. It can already read\n and write simple files.\n\nSince this is a Java library, it could be scripted using Jython. I don't know how good the writing capabilities are yet, but please post a comment back if it helps.\n"
] | [
4,
3,
1,
1
] | [] | [] | [
"ms_word",
"python"
] | stackoverflow_0000376161_ms_word_python.txt |
Q:
What does the []-esque decorator syntax in Python mean?
Here's a snippet of code from within TurboGears 1.0.6:
[dispatch.generic(MultiorderGenericFunction)]
def run_with_transaction(func, *args, **kw):
pass
I can't figure out how putting a list before a function definition can possibly affect it.
In dispatch.generic's docstring, it mentions:
Note that when using older Python versions, you must use '[dispatch.generic()]' instead of '@dispatch.generic()'.
OK, so it apparently is a way to get decorator-like behavior in pre-decorator versions of Python, but how the heck can it possibly work?
A:
The decorator syntax is provided by PyProtocols.
"""
Finally, it's important to note that these "magic" decorators use a very sneaky hack: they abuse the sys.settrace() debugger hook to track whether assignments are taking place. Guido takes a very dim view of this, but the hook's existing functionality isn't going to change in 2.2, 2.3, or 2.4, so don't worry about it too much. This is really a trick to get "early access" to decorators, and the 2.4 lifecycle will be plenty long enough to get our code switched over to 2.4 syntax. Somewhere around Python 2.5 or 2.6, add_assignment_advisor() can drop the magic part and just be a backward compatibility wrapper for the decorators that use it.
"""
http://dirtsimple.org/2004/11/using-24-decorators-with-22-and-23.html
So it sounds like these work by wrapping the actual decorator in some magic that hooks into special code for debuggers to manipulate what actually gets assigned for the function.
The python docs say this about settrace
"""
Note
The settrace() function is intended only for implementing debuggers, profilers, coverage tools and the like. Its behavior is part of the implementation platform, rather than part of the language definition, and thus may not be available in all Python implementations.
"""
| What does the []-esque decorator syntax in Python mean? | Here's a snippet of code from within TurboGears 1.0.6:
[dispatch.generic(MultiorderGenericFunction)]
def run_with_transaction(func, *args, **kw):
pass
I can't figure out how putting a list before a function definition can possibly affect it.
In dispatch.generic's docstring, it mentions:
Note that when using older Python versions, you must use '[dispatch.generic()]' instead of '@dispatch.generic()'.
OK, so it apparently is a way to get decorator-like behavior in pre-decorator versions of Python, but how the heck can it possibly work?
| [
"The decorator syntax is provided by PyProtocols.\n\"\"\"\nFinally, it's important to note that these \"magic\" decorators use a very sneaky hack: they abuse the sys.settrace() debugger hook to track whether assignments are taking place. Guido takes a very dim view of this, but the hook's existing functionality isn't going to change in 2.2, 2.3, or 2.4, so don't worry about it too much. This is really a trick to get \"early access\" to decorators, and the 2.4 lifecycle will be plenty long enough to get our code switched over to 2.4 syntax. Somewhere around Python 2.5 or 2.6, add_assignment_advisor() can drop the magic part and just be a backward compatibility wrapper for the decorators that use it.\n\"\"\"\nhttp://dirtsimple.org/2004/11/using-24-decorators-with-22-and-23.html\nSo it sounds like these work by wrapping the actual decorator in some magic that hooks into special code for debuggers to manipulate what actually gets assigned for the function.\nThe python docs say this about settrace\n\"\"\"\nNote\nThe settrace() function is intended only for implementing debuggers, profilers, coverage tools and the like. Its behavior is part of the implementation platform, rather than part of the language definition, and thus may not be available in all Python implementations.\n\"\"\"\n"
] | [
11
] | [
"Nothing mysterious, it's just how syntax was before.\nThe parser has changed, probably because the Python Zen claims that \"In the face of ambiguity, refuse the temptation to guess.\". \n[] should be for list only, and there is it.\n"
] | [
-2
] | [
"decorator",
"python",
"syntax"
] | stackoverflow_0000379291_decorator_python_syntax.txt |
Q:
Idiomatic asynchronous design
Are there any sorts of useful idioms I can make use of when writing an API that is asynchronous? I would like to standardize on something as I seem to be using a few different styles throughout. It seems hard to make asynchronous code simple; I suppose this is because asynchronous operations are anything but.
At the most basic level, the user of the API must be able to:
Have data pushed to them as it becomes available
Check the status of the asynchronous operation
Be notified of errors that occur
Wait for completion (converting the asynchronous operation to a synchronous one)
My classes support several asynchronous operations. I have been putting some of the status/error callbacks in the class around it, but the class is becoming gunked up with a lot of incidental fields, as well as getting too large. I am curious if anyone has used an asynchronous API they found to be well-organized. I have looked at .NET's Begin/EndAsyncOperation + AsyncResult design, as well as some classes in Java (e.g. Future).
This is being written in Python, so it remains very flexible. There is a caveat: some of these asynchronous operations are being marshaled to a remote machine and executed over there. Thus, not every operation necessarily executes in a separate thread.
A:
You may want to look at Python Twisted. It is a nice Reactor based API that supports asynchronous operations. Proactor is the common term for asynchronous completion handler like frameworks.
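A tiny Deferred sketch to give the flavour; the one-second callLater stands in for a real asynchronous operation:
from twisted.internet import defer, reactor

def fetch_data():
    d = defer.Deferred()
    # Simulate an asynchronous operation completing later.
    reactor.callLater(1, d.callback, "payload")
    return d

def on_result(result):
    print "got:", result
    reactor.stop()

def on_error(failure):
    print "failed:", failure
    reactor.stop()

fetch_data().addCallbacks(on_result, on_error)
reactor.run()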
A:
Also have a look at the Asynchronous Completion Token and ActiveObject patterns.
A:
This sounds like the Observer design pattern. link.
Your client object is an Observer. Your API belongs to an object that's Observable.
Each client (in Java parlance) implements the Observer interface. In Python, it's a matter of each client offering a number of methods that your observable will use.
class SomeClientInterface( object ):
    def update( self, source, data ):
        # handle data being pushed from Observable source
        pass
    def error( self, source, status ):
        # handle error in Observable source
        pass
Your Observable object has a way for Observers to register and do other things.
class Observable( object ):
    def __init__( self ):
        self.clients = set()
        self.currentState = "idle"
        self.theData = None
    def register( self, observer ):
        self.clients.add( observer )
    def whenSomethingHappens( self ):
        # doing work
        itAllWentToHell = False  # set according to how the work went
        if itAllWentToHell:
            for c in self.clients:
                c.error( self, "some status object" )
        else:
            for c in self.clients:
                c.update( self, self.theData )  # push the produced data
    def waitFor( self ):
        # observers are waiting...
        return self.theData
    def status( self ):
        return self.currentState
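Used like so, reusing the names above (PrintingClient is just a made-up observer):
class PrintingClient(SomeClientInterface):
    def update(self, source, data):
        print "pushed:", data
    def error(self, source, status):
        print "error:", status

watched = Observable()
watched.register(PrintingClient())
watched.theData = "fresh result"
watched.whenSomethingHappens()  # every registered client gets update() or error()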
| Idiomatic asynchronous design | Are there any sorts of useful idioms I can make use of when writing an API that is asynchronous? I would like to standardize on something as I seem to be using a few different styles throughout. It seems hard to make asynchronous code simple; I suppose this is because asynchronous operations are anything but.
At the most basic level, the user of the API must be able to:
Have data pushed to them as it becomes available
Check the status of the asynchronous operation
Be notified of errors that occur
Wait for completion (converting the asynchronous operation to a synchronous one)
My classes support several asynchronous operations. I have been putting some of the status/error callbacks in the class around it, but the class is becoming gunked up with a lot of incidental fields, as well as getting too large. I am curious if anyone has used an asynchronous API they found to be well-organized. I have looked at .NET's Begin/EndAsyncOperation + AsyncResult design, as well as some classes in Java (e.g. Future).
This is being written in Python, so it remains very flexible. There is a caveat: some of these asynchronous operations are being marshaled to a remote machine and executed over there. Thus, not every operation necessarily executes in a separate thread.
| [
"You may want to look at Python Twisted. It is a nice Reactor based API that supports asynchronous operations. Proactor is the common term for asynchronous completion handler like frameworks.\n",
"Also have a look at the Asynchronous Completion Token and ActiveObject patterns.\n",
"This sounds like the Observer design pattern. link.\nYour client object is an Observer. Your API belongs to an object that's Observable.\nEach client (in Java parlance) implements the Observer interface. In Python, it's a matter of each client offering a number of methods that your observable will use.\nclass SomeClientInterface( object ):\n def update( self, source, data ):\n # handle data being pushed from Observable source\n def error( self, from, status ):\n # handle error in Observable source\n\nYour Observable object has a way for Observers to register and do other things.\nclass Observable( object ):\n def __init__( self ):\n self.clients= set()\n def register( self, observer ):\n self.clients.add( observer )\n def whenSomethingHappens( self ):\n # doing work\n if itAllWentToHell:\n for c in self.clients:\n c.error( self, \"some status object\" )\n else:\n for c in self.clients:\n c.update( self, the pushed data )\n def waitFor( self ):\n # observers are waiting...\n return theData\n def status( self ):\n return self.currentState\n\n"
] | [
4,
2,
2
] | [] | [] | [
"asynchronous",
"python"
] | stackoverflow_0000378564_asynchronous_python.txt |
Q:
How much slower is a wxWidget written in Python versus C++?
I'm looking into writing a wxWidget that displays a graphical node network, and therefore does a lot of drawing operations. I know that using Python to do it is going to be slower, but I'd rather get it working and port it later when its functional. Ideally, if the performance hit isn't too great, I'd prefer to keep the codebase in Python for easy updates.
What I'm wondering is how much slower should I expect things to go? I realize this is vague and open ended, but I just need a sense of what to expect. Will drawing 500 circles bog down? Will it be noticeable at all? What are your experiences?
A:
IMHO, the main bottleneck will be the data structures you use to represent the network graph. I have coded a similar application for tracing dependencies between various component versions in a system; graphics was the last thing I had to worry about, and I was certainly drawing more than 500 objects, with gradient fills for some of them!
If you are getting bogged down, you should checkout using PyGame for drawing things.
A:
In my experience, doing things the naive way (drawing each object to the screen) will bog down in Python quicker than C++. However, with Python it's going to be a lot quicker and less painful to code it the clever way (see for example PseudoDC), which will blow the naive C++ implementation out of the water.
I agree with suraj. above that PyGame may be a good choice, depending on how graphics-intensive the app is, compared to the convenient wxPython stuff you'll be giving up.
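As a rough sketch of the PseudoDC idea: the drawing operations are recorded once and replayed natively on every repaint, instead of being re-issued from Python each time:
import wx

class Circles(wx.Panel):
    def __init__(self, parent):
        wx.Panel.__init__(self, parent)
        self.pdc = wx.PseudoDC()
        # Record 500 circles once, up front.
        for i in range(500):
            self.pdc.SetId(i)
            self.pdc.DrawCircle(20 + (i % 25) * 24, 20 + (i / 25) * 24, 10)
        self.Bind(wx.EVT_PAINT, self.on_paint)

    def on_paint(self, event):
        dc = wx.PaintDC(self)
        self.pdc.DrawToDC(dc)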
A:
For drawing, people have suggested PyGame. I like PyGame, its easy to work with and works well. Other choices would be Pyglet, or using PyOpenGL (you can most likely draw to a wx widget too, though I've never done it).
Personally, I'd do it in Python using whatever library I'm most familiar with (in my case, I'd use pygtk and cairo) and worry about performance only when it becomes a problem - then profile and optimize the bottleneck, if its Python code thats slow, I'll know which bits to run in C instead.
| How much slower is a wxWidget written in Python versus C++? | I'm looking into writing a wxWidget that displays a graphical node network, and therefore does a lot of drawing operations. I know that using Python to do it is going to be slower, but I'd rather get it working and port it later when its functional. Ideally, if the performance hit isn't too great, I'd prefer to keep the codebase in Python for easy updates.
What I'm wondering is how much slower should I expect things to go? I realize this is vague and open ended, but I just need a sense of what to expect. Will drawing 500 circles bog down? Will it be noticeable at all? What are your experiences?
| [
"IMHO, main bottleneck will be the data structures you are going to use for representing the network graph. I have coded a similar application for tracing dependencies between various component versions in a system and graphics was the last thing I had to worry about and I was certainly drawing more than 500 objects with gradient fills for some of them!\nIf you are getting bogged down, you should checkout using PyGame for drawing things.\n",
"In my experience, doing things the naive way (drawing each object to the screen) will bog down in Python quicker than C++. However, with Python it's going to be a lot quicker and less painful to code it the clever way (see for example PseudoDC), which will blow the naive C++ implementation out of the water.\nI agree with suraj. above that PyGame may be a good choice, depending on how graphics-intensive the app is, compared to the convenient wxPython stuff you'll be giving up.\n",
"For drawing, people have suggested PyGame. I like PyGame, its easy to work with and works well. Other choices would be Pyglet, or using PyOpenGL (you can most likely draw to a wx widget too, though I've never done it).\nPersonally, I'd do it in Python using whatever library I'm most familiar with (in my case, I'd use pygtk and cairo) and worry about performance only when it becomes a problem - then profile and optimize the bottleneck, if its Python code thats slow, I'll know which bits to run in C instead.\n"
] | [
1,
1,
1
] | [] | [] | [
"c++",
"drawing",
"performance",
"python",
"wxpython"
] | stackoverflow_0000379442_c++_drawing_performance_python_wxpython.txt |
Q:
How do you make a class attribute that isn't a standard data type?
I have one class that needs to grab an attribute that is set in another. It's not a standard data type though. Here's the code;
class graphics:
def __init__(self, Fullscreen = False, Width = 640, Height = 480):
print "Graphics Init"
SCREEN_SIZE = (Width, Height)
pygame.init()
if Fullscreen:
self.screen = pygame.display.set_mode(SCREEN_SIZE, FULLSCREEN, 32)
print "Fullscreen Initialized"
else:
self.screen = pygame.display.set_mode(SCREEN_SIZE, 0, 32)
print "Non-Fullscreen Initialized"
What I need to do is reference the screen attribute, which I can set with self.screen and be readable within that class...but from another class I have to set
screen = ?
under
class graphics:
What does that question mark need to be? I've tried 0, None, ""...nothing seems to work, I have no idea what data type that pygame call would be. :S
A:
I'm thinking that a short explanation of the difference between class and instance attributes in Python might be helpful to you.
When you write code like so:
class Graphics:
screen_size = (1024, 768)
The class Graphics is actually an object itself -- a class object. Because you defined screen_size inside of it, screen_size is an attribute of the Graphics object. You can see this in the following:
assert Graphics.screen_size == (1024, 768)
In Python, these class objects can be used like functions -- just use the invocation syntax:
g = Graphics()
g is called an "instance" of the class Graphics. When you create instances of a class, all attribute lookups that don't match attributes of the instance look at the attributes of the class object next. That's why this lookup works:
assert g.screen_size == (1024, 768)
If we add an attribute to the instance with the same name, however, the lookup on the instance will succeed, and it won't have to go looking to the class object. You basically "mask" the class object's value with a value set directly on the instance. Note that this doesn't change the value of the attribute in the class object:
g.screen_size = (1400, 1050)
assert g.screen_size == (1400, 1050)
assert Graphics.screen_size == (1024, 768)
So, what you're doing in your __init__ method is exactly what we did above: setting an attribute of the instance, self.
class Graphics:
screen_size = (1024, 768)
def __init__(self):
self.screen_size = (1400, 1050)
g = Graphics()
assert Graphics.screen_size == (1024, 768)
assert g.screen_size == (1400, 1050)
The value Graphics.screen_size can be used anywhere after this class definition, as shown with the first assert statement in the above snippet.
Edit: And don't forget to check out the Python Tutorial's section on classes, which covers all this and more.
A:
It's likely an object type so
self.screen = object()
might work (If I understood your question correctly).
A:
ugh...PEBKAC.
I'm so used to C that I keep forgetting you can do more than just prototype outside of defs.
Rewrote as this and it worked:
class graphics:
    SCREEN_SIZE = (640, 480)
    screen = pygame.display.set_mode(SCREEN_SIZE, 0, 32)
    def __init__(self, Fullscreen = False, Width = 640, Height = 480):
        print "Graphics Init"
        self.SCREEN_SIZE = (Width, Height)
        pygame.init()
        if Fullscreen:
            self.screen = pygame.display.set_mode(self.SCREEN_SIZE, FULLSCREEN, 32)
            print "Fullscreen Initialized"
        else:
            self.screen = pygame.display.set_mode(self.SCREEN_SIZE, 0, 32)
            print "Non-Fullscreen Initialized"
A:
You rarely, if ever, reference attributes of a class. You reference attributes of an object.
(Also, class names should be uppercase: Graphics).
class Graphics:
    SCREEN_SIZE = (640, 480)
    def __init__(self, Fullscreen = False, Width = 640, Height = 480):
        print "Graphics Init"
        self.SCREEN_SIZE = (Width, Height)
        pygame.init()
        if Fullscreen:
            self.screen = pygame.display.set_mode(self.SCREEN_SIZE, FULLSCREEN, 32)
            print "Fullscreen Initialized"
        else:
            self.screen = pygame.display.set_mode(self.SCREEN_SIZE, 0, 32)
            print "Non-Fullscreen Initialized"
Here's an example of getting or setting an attribute of an object.
g= Graphics() # create an object
# access the object's instance variables
print "screen", g.screen
g.screen = pygame.display.set_mode(Graphics.SCREEN_SIZE, FULLSCREEN, 32)
Note that we created an object (g) from our class (Graphics). We don't reference the class very often at all. Almost the only time a class name is used is to create object instances (Graphics()). We rarely say Graphics.this or Graphics.that to refer to attributes of the class itself.
| How do you make a class attribute that isn't a standard data type? | I have one class that needs to grab an attribute that is set in another. It's not a standard data type though. Here's the code;
class graphics:
def __init__(self, Fullscreen = False, Width = 640, Height = 480):
print "Graphics Init"
SCREEN_SIZE = (Width, Height)
pygame.init()
if Fullscreen:
self.screen = pygame.display.set_mode(SCREEN_SIZE, FULLSCREEN, 32)
print "Fullscreen Initialized"
else:
self.screen = pygame.display.set_mode(SCREEN_SIZE, 0, 32)
print "Non-Fullscreen Initialized"
What I need to do is reference the screen attribute, which I can set with self.screen and be readable within that class...but from another class I have to set
screen = ?
under
class graphics:
What does that question mark need to be? I've tried 0, None, ""...nothing seems to work, I have no idea what data type that pygame call would be. :S
| [
"I'm thinking that a short explanation of the difference between class and instance attributes in Python might be helpful to you.\nWhen you write code like so:\nclass Graphics:\n screen_size = (1024, 768)\n\nThe class Graphics is actually an object itself -- a class object. Because you defined screen_size inside of it, screen_size is an attribute of the Graphics object. You can see this in the following:\nassert Graphics.screen_size == (1024, 768)\n\nIn Python, these class objects can be used like functions -- just use the invocation syntax:\ng = Graphics()\n\ng is called an \"instance\" of the class Graphics. When you create instances of a class, all attribute lookups that don't match attributes of the instance look at the attributes of the class object next. That's why this lookup works:\nassert g.screen_size == (1024, 768)\n\nIf we add an attribute to the instance with the same name, however, the lookup on the instance will succeed, and it won't have to go looking to the class object. You basically \"mask\" the class object's value with a value set directly on the instance. Note that this doesn't change the value of the attribute in the class object:\ng.screen_size = (1400, 1050)\nassert g.screen_size == (1400, 1050)\nassert Graphics.screen_size == (1024, 768)\n\nSo, what you're doing in your __init__ method is exactly what we did above: setting an attribute of the instance, self.\nclass Graphics:\n\n screen_size = (1024, 768)\n\n def __init__(self):\n self.screen_size = (1400, 1050)\n\ng = Graphics()\nassert Graphics.screen_size == (1024, 768)\nassert g.screen_size == (1400, 1050)\n\nThe value Graphics.screen_size can be used anywhere after this class definition, as shown with the first assert statement in the above snippet.\nEdit: And don't forget to check out the Python Tutorial's section on classes, which covers all this and more.\n",
"It's likely an object type so\nself.screen = object()\n\nmight work (If I understood your question correctly).\n",
"ugh...PEBKAC.\nI'm so used to C that I keep forgetting you can do more than just prototype outside of defs.\nRewrote as this and it worked:\nclass graphics:\nSCREEN_SIZE = (640, 480)\nscreen = pygame.display.set_mode(SCREEN_SIZE, 0, 32)\ndef __init__(self, Fullscreen = False, Width = 640, Height = 480):\n print \"Graphics Init\"\n self.SCREEN_SIZE = (Width, Height)\n pygame.init()\n if Fullscreen:\n self.screen = pygame.display.set_mode(SCREEN_SIZE, FULLSCREEN, 32)\n print \"Fullscreen Initialized\"\n else:\n self.screen = pygame.display.set_mode(SCREEN_SIZE, 0, 32)\n print \"Non-Fullscreen Initialized\"\n\n",
"You rarely, if ever, reference attributes of a class. You reference attributes of an object.\n(Also, class names should be uppercase: Graphics).\nclass Graphics:\nSCREEN_SIZE = (640, 480)\ndef __init__(self, Fullscreen = False, Width = 640, Height = 480):\n print \"Graphics Init\"\n self.SCREEN_SIZE = (Width, Height)\n pygame.init()\n if Fullscreen:\n self.screen = pygame.display.set_mode(SCREEN_SIZE, FULLSCREEN, 32)\n print \"Fullscreen Initialized\"\n else:\n self.screen = pygame.display.set_mode(SCREEN_SIZE, 0, 32)\n print \"Non-Fullscreen Initialized\"\n\nHere's an example of getting or setting an attribute of an object.\ng= Graphics() # create an object\n# access the object's instance variables\nprint \"screen\", g.screen\ng.screen= pygame.display.set_mode(SCREEN_SIZE, FULLSCREEN, 32)\n\nNote that we created an object (g) from our class (Graphics). We don't reference the class very often at all. Almost the only time a class name is used is to create object instances (Graphics()). We rarely say Graphics.this or Graphics.that to refer to attributes of the class itself.\n"
] | [
1,
0,
0,
0
] | [] | [] | [
"pygame",
"python"
] | stackoverflow_0000379995_pygame_python.txt |
Q:
What are these tags @ivar @param and @type in python docstring?
The ampoule project uses some tags in docstrings, like the javadoc ones.
For example from pool.py line 86:
def start(self, ampChild=None):
"""
Starts the ProcessPool with a given child protocol.
@param ampChild: a L{ampoule.child.AMPChild} subclass.
@type ampChild: L{ampoule.child.AMPChild} subclass
"""
What are these tags, and which tool uses them?
A:
Just for fun I'll note that the Python standard library is using Sphinx/reStructuredText, whose info field lists are similar.
def start(self, ampChild=None):
"""Starts the ProcessPool with a given child protocol.
:param ampChild: a :class:`ampoule.child.AMPChild` subclass.
:type ampChild: :class:`ampoule.child.AMPChild` subclass
"""
A:
Markup for a documentation tool, probably epydoc.
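If it is epydoc, the HTML docs are generated from those tags by running epydoc --html pool.py -o docs/ from the command line (the module name here is a placeholder).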
| What are these tags @ivar @param and @type in python docstring? | The ampoule project uses some tags in docstring, like the javadoc ones.
For example from pool.py line 86:
def start(self, ampChild=None):
"""
Starts the ProcessPool with a given child protocol.
@param ampChild: a L{ampoule.child.AMPChild} subclass.
@type ampChild: L{ampoule.child.AMPChild} subclass
"""
What are these tags, and which tool uses them?
| [
"Just for fun I'll note that the Python standard library is using Sphinx/reStructuredText, whose info field lists are similar.\ndef start(self, ampChild=None):\n \"\"\"Starts the ProcessPool with a given child protocol.\n\n :param ampChild: a :class:`ampoule.child.AMPChild` subclass.\n :type ampChild: :class:`ampoule.child.AMPChild` subclass\n \"\"\"\n\n",
"Markup for a documentation tool, probably epydoc.\n"
] | [
15,
14
] | [] | [] | [
"documentation",
"javadoc",
"python"
] | stackoverflow_0000379346_documentation_javadoc_python.txt |
Q:
How to set and preserve minimal width?
I use some wx.ListCtrl classes in wx.LC_REPORT mode, augmented with ListCtrlAutoWidthMixin.
The problem is: when the user double-clicks a column divider (to auto-resize the column), the column width is set to match the width of the contents. This is done by the wx library, and it resizes the column to just a few pixels when the control is empty.
I tried calling
self.SetColumnWidth(colNumber, wx.LIST_AUTOSIZE_USEHEADER)
while creating the list, but it just sets the initial column width, not the minimum allowed width.
Has anyone succeeded in setting a minimum column width?
EDIT: Tried catching
wx.EVT_LEFT_DCLICK
with no success. This event isn't generated when user double clicks column divider. Also tried with
wx.EVT_LIST_COL_END_DRAG
this event is generated, usually twice, for a double click, but I don't see how I can retrieve information about the new size, or how to differentiate a double click from drag-and-drop. Anyone have some other ideas?
A:
Honestly, I've stopped using the native wx.ListCtrl in favor of using ObjectListView. There is a little bit of a learning curve, but there are lots of examples. This would be of interest to your question.
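A sketch of the idea; note that the minimumWidth argument is my assumption from memory of the ObjectListView docs, so check the ColumnDefn signature there (parent and items are placeholders):
import wx
from ObjectListView import ObjectListView, ColumnDefn

olv = ObjectListView(parent, style=wx.LC_REPORT)
olv.SetColumns([
    # minimumWidth (assumed) keeps double-click auto-sizing from collapsing a column
    ColumnDefn("Name", "left", 120, "name", minimumWidth=80),
    ColumnDefn("Size", "right", 60, "size", minimumWidth=40),
])
olv.SetObjects(items)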
A:
OK, after some struggle I got a working workaround for this. It is ugly from a design point of view, but it works well enough for me.
That's how it works:
Store the initial width of column.
self.SetColumnWidth(colNum, wx.LIST_AUTOSIZE_USEHEADER)
self.__columnWidth[colNum] = self.GetColumnWidth(c)
Register handler for update UI event.
wx.EVT_UPDATE_UI(self, self.GetId(), self.onUpdateUI)
And write the handler function.
def onUpdateUI(self, evt):
for colNum in xrange(0, self.GetColumnCount()-1):
if self.GetColumnWidth(colNum) < self.__columnWidth[colNum]:
self.SetColumnWidth(colNum, self.__columnWidth[colNum])
evt.Skip()
The self.GetColumnCount() - 1 is intentional, so the last column is not resized. I know this is not an elegant solution, but it works well enough for me - you cannot make columns too small by double-clicking on dividers (in fact, you can't do it at all), and double-clicking on the divider after the last column resizes the last column to fit the list control width.
Still, if anyone knows better solution please post it.
| How to set and preserve minimal width? | I use some wx.ListCtrl classes in wx.LC_REPORT mode, augmented with ListCtrlAutoWidthMixin.
The problem is: When user double clicks the column divider (to auto resize column), column width is set to match the width of contents. This is done by the wx library and resizes column to just few pixels when the control is empty.
I tried calling
self.SetColumnWidth(colNumber, wx.LIST_AUTOSIZE_USEHEADER)
while creating the list, but it just sets the initial column width, not the minimum allowed width.
Anyone succeeded with setting column minimal width?
EDIT: Tried catching
wx.EVT_LEFT_DCLICK
with no success. This event isn't generated when user double clicks column divider. Also tried with
wx.EVT_LIST_COL_END_DRAG
this event is generated, usually twice, for a double click, but I don't see how I can retrieve information about the new size, or how to differentiate a double click from drag-and-drop. Anyone have some other ideas?
| [
"Honestly, I've stopped using the native wx.ListCtrl in favor of using ObjectListView. There is a little bit of a learning curve, but there are lots of examples. This would be of interest to your question.\n",
"Ok, after some struggle I got working workaround for that. It is ugly from design point of view, but works well enough for me.\nThat's how it works:\n\nStore the initial width of column.\nself.SetColumnWidth(colNum, wx.LIST_AUTOSIZE_USEHEADER)\n\nself.__columnWidth[colNum] = self.GetColumnWidth(c)\n\nRegister handler for update UI event.\nwx.EVT_UPDATE_UI(self, self.GetId(), self.onUpdateUI)\n\nAnd write the handler function.\ndef onUpdateUI(self, evt):\n for colNum in xrange(0, self.GetColumnCount()-1):\n if self.GetColumnWidth(colNum) < self.__columnWidth[colNum]:\n self.SetColumnWidth(colNum, self.__columnWidth[colNum])\n evt.Skip()\n\n\nThe self.GetColumnCount() - 1 is intentional, so the last column is not resized. I know this is not an elegant solution, but it works well enough for me - you can not make columns too small by double clicking on dividers (in fact - you can't do it at all) and double-clicking on the divider after last column resizes the last column to fit list control width. \nStill, if anyone knows better solution please post it.\n"
] | [
3,
1
] | [] | [] | [
"listctrl",
"python",
"wxpython",
"wxwidgets"
] | stackoverflow_0000377204_listctrl_python_wxpython_wxwidgets.txt |
Q:
Get psyco speedup on x64 architecture?
Is there a way to get the same sort of speedup on x64 architecture as you can get from psyco on 32 bit processors?
A:
No, unfortunately, Psyco only runs on 32-bit x86 right now.
| Get psyco speedup on x64 architecture? | Is there a way to get the same sort of speedup on x64 architecture as you can get from psyco on 32 bit processors?
| [
"No, unfortunately, Psyco only runs on 32-bit x86 right now.\n"
] | [
5
] | [] | [] | [
"psyco",
"python"
] | stackoverflow_0000381479_psyco_python.txt |
Q:
Python GTK MVC: Kiwi?
I've been looking around for a good MVC framework for Python using PyGTK. I've looked at Kiwi but found it a bit lacking, especially with using the Gazpacho Glade-replacement.
Are there any other nice desktop Python MVC frameworks? I'm one of the few (it seems) to not want a webapp.
A:
In defense of Kiwi:
Kiwi works fine with Glade3 instead of Gazpacho. (who forced you to use Gazpacho?)
Kiwi is my first dependency for any PyGTK application commercial or open source.
Kiwi is very actively maintained.
I have generally got to a stage where I think its irresponsible to not use Kiwi in a PyGTK application. Perhaps you can tell us what you found "lacking" so we can improve the framework. #kiwi on irc.gimp.net (or the Kiwi mailing list).
A:
There's Dabo, made by some guys moving from FoxPro. It might work for you if you're writing a data driven business app.
Beyond that, I haven't found anything that you haven't.
GUI stuff is supposed to be hard. It builds character.
(Attributed to Jim Ahlstrom, at one of the early Python workshops. Unfortunately, things haven't changed much since then.)
A:
"mvc" titled app:
http://sourceforge.net/projects/pygtkmvc/
"avc" titled app:
http://avc.inrim.it/html/
more information:
http://www.pygtk.org/applications.html
A:
PureMVC
http://trac.puremvc.org/PureMVC_Python
| Python GTK MVC: Kiwi? | I've been looking around for a good MVC framework for Python using PyGTK. I've looked at Kiwi but found it a bit lacking, especially with using the Gazpacho Glade-replacement.
Are there any other nice desktop Python MVC frameworks? I'm one of the few (it seems) to not want a webapp.
| [
"In defense of Kiwi:\n\nKiwi works fine with Glade3 instead of Gazpacho. (who forced you to use Gazpacho?)\nKiwi is my first dependency for any PyGTK application commercial or open source.\nKiwi is very actively maintained.\n\nI have generally got to a stage where I think its irresponsible to not use Kiwi in a PyGTK application. Perhaps you can tell us what you found \"lacking\" so we can improve the framework. #kiwi on irc.gimp.net (or the Kiwi mailing list).\n",
"There's Dabo, made by some guys moving from FoxPro. It might work for you if you're writing a data driven business app.\nBeyond that, I haven't found anything that you haven't.\n\nGUI stuff is supposed to be hard. It builds character.\n\n(Attributed to Jim Ahlstrom, at one of the early Python workshops. Unfortunately, things haven't changed much since then.)\n",
"\"mvc\" titled app:\nhttp://sourceforge.net/projects/pygtkmvc/\n\"avc\" titled app:\nhttp://avc.inrim.it/html/\nmore information:\nhttp://www.pygtk.org/applications.html\n",
"PureMVC\nhttp://trac.puremvc.org/PureMVC_Python\n"
] | [
4,
2,
2,
0
] | [] | [] | [
"gtk",
"model_view_controller",
"python"
] | stackoverflow_0000310856_gtk_model_view_controller_python.txt |
Q:
need help-variable creation in Python (continuation)
That was helpful kgiannakakis.
I'm facing a problem as below:
a = ['zbc','2.3']
for i in range(0,5):
exec('E%d=%s' %(i,a[i]))
This results in:
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
    exec('E%d=%s' %(i,a[i]))
  File "<string>", line 1, in <module>
NameError: name 'zbc' is not defined
A:
It looks like the code you're generating expands to:
E0=zbc
E1=2.3
At the next iteration through the loop, you'll get an IndexError exception because a is only two elements long.
So given the above, you are trying to assign the value of zbc to E0. If zbc doesn't exist (which it seems that it doesn't), then you will get the NameError you mention.
It's hard to determine what you're actually trying to do with this code, so I'm not sure what to recommend. You could assign strings instead:
exec('E%d="%s"' %(i,a[i]))
This would expand to:
E0="zbc"
E1="2.3"
You would still get the IndexError because your array a is not 5 elements long. That should be an easy fix for you.
A:
Okay, this code is very weird.
As a one liner like this, it's not syntactically correct, but I suspect you're missing line breaks for some reason. But then it becomes
a = ['zbc','2.3']
for i in range(0,5):
exec('E%d=%s' %(i,a[i]))
But that will result in an index error on the reference to a[i] as shown:
>>> a
['zbc', '2.3']
>>> for i in range(0,5):
... print a[i]
...
zbc
2.3
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
IndexError: list index out of range
If you avoided that issue, you'd get
exec("E2.3=1")
on the second pass through the loop, and that's a syntax error too.
A:
It seems you are trying to use the solution marked in this question.
If your goal is access values in a loop, you should just use a list. This weird concept of variable names with numbers in them is not one that should be used in any language. Try this.
vals = ['foo', 'bar', 'blah', 67, -0.4, 'your mom']
for i in range(len(vals)):
print(vals[i])
That is the correct way to have a list of values indexed by an integer, not putting it in the variable name.
A:
Just keep in mind that 'exec' executes whatever string you pass in to it as if you typed it in your .py file or the interpreter.
When debugging exec() related code, it's helpful to log whatever you're about to 'exec' when you run into trouble; had you done that, you'd easily have noticed that E0 wasn't being assigned the string "zbc" but the non-existent object zbc.
Aside from that, this code sample is really weird. There are some legitimate uses for parsing strings into instance variables, or objects in other namespaces, most notably when you're coding a highly dynamic class that needs to do sensible stuff with messy input, or needs to set up a bunch of instance variables from a dict or string. But without context, the code in your question looks like you're avoiding, or don't understand how, to use list() and dict() objects.
I'd recommend telling a bit more about what you're trying to achieve next time you ask a question about something as peculiar as this. That would give people a good opportunity to suggest a better solution, or, if you're approaching a particular problem in a completely sensible way, prevent a bunch of answers telling you that you're doing something completely wrong.
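For completeness, the dict version of the same idea, which gives you E[0], E[1], ... without any exec:
a = ['zbc', '2.3']
E = {}
for i, value in enumerate(a):
    E[i] = value

print E[0]  # 'zbc'
print E[1]  # '2.3'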
| need help-variable creation in Python (continuation) | That was helpful kgiannakakis.
I'm facing a problem as below:
a = ['zbc','2.3']
for i in range(0,5):
exec('E%d=%s' %(i,a[i]))
This results in:
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
    exec('E%d=%s' %(i,a[i]))
  File "<string>", line 1, in <module>
NameError: name 'zbc' is not defined
| [
"It looks like the code you're generating expands to:\nE0=zbc\nE1=2.3\n\nAt the next iteration through the loop, you'll get an IndexError exception because a is only two elements long.\nSo given the above, you are trying to assign the value of zbc to E0. If zbc doesn't exist (which it seems that it doesn't), then you will get the NameError you mention.\nIt's hard to determine what you're actually trying to do with this code, so I'm not sure what to recommend. You could assign strings instead:\nexec('E%d=\"%s\"' %(i,a[i]))\n\nThis would expand to:\nE0=\"zbc\"\nE1=\"2.3\"\n\nYou would still get the IndexError because your array a is not 5 elements long. That should be an easy fix for you.\n",
"Okay. this code is very weird.\nAs a one liner like this, it's not syntactically correct, but I suspect you're missing line breaks for some reason. But then it becomes\na = ['zbc','2.3']\nfor i in range(0,5): \n exec('E%d=%s' %(i,a[i]))\n\nBut that will result in an index error on the reference to a[i] as shown:\n>>> a\n['zbc', '2.3']\n>>> for i in range(0,5):\n... print a[i]\n... \nzbc\n2.3\nTraceback (most recent call last):\n File \"<stdin>\", line 2, in <module>\nIndexError: list index out of range\n\nIf you avoided that issue, you'd get\nexec(\"E2.3=1\")\n\non the second pass through the lopp, and that's a syntax error too.\n",
"It seems you are trying to use the solution marked in this question.\nIf your goal is access values in a loop, you should just use a list. This weird concept of variable names with numbers in them is not one that should be used in any language. Try this.\nvals = ['foo', 'bar', 'blah', 67, -0.4, 'your mom']\nfor i in range(len(vals)):\n print(vals[i])\n\nThat is the correct way to have a list of values indexed by an integer, not putting it in the variable name.\n",
"Just keep in mind that 'exec' executes whatever string you pass in to it as if you typed it in your .py file or the interpreter.\nWhen debugging exec() related code, it's helpful to log whatever you're about to 'exec' when you run into trouble, if you did that you'd easily have noticed that E0 wasn't being assigned to the string \"zbc\" but to the non-existent object zbc.\nAside from that, this code sample is really weird. ย There are some legitimate uses for parsing strings into instance variables, or objects in other namespaces, most notably when you're coding a highly dynamic class that needs to do sensible stuff with messy input, or needs to setup a bunch of instance variables from a dict or string. ย But without context, the code in your question looks like you're avoiding, or don't understand how, to use list() and dict() objects..\nI'd recommend telling a bit more about what you're trying to achieve next time you ask a question around something as peculiar as this. ย That would give people a good opportunity to suggest a better solution, or โif you're approaching a particular problem in a completely sensible wayโ prevent a bunch of answers telling you that you're doing something completely wrong.\n"
] | [
2,
0,
0,
0
] | [] | [] | [
"python",
"variables"
] | stackoverflow_0000377000_python_variables.txt |
Q:
Is @measured a standard decorator? What library is it in?
In this blog article they use the construct:
@measured
def some_func():
#...
# Presumably outputs something like "some_func() is finished in 121.333 s" somewhere
This @measured directive doesn't seem to work with raw python. What is it?
UPDATE: I see from Triptych that @something is valid, but where can I find @measured? Is it in a library somewhere, or is the author of this blog using something from his own private code base?
A:
@measured decorates the some_func() function, using a function or class named measured. The @ is the decorator syntax, measured is the decorator function name.
Decorators can be a bit hard to understand, but they are basically used to either wrap code around a function, or inject code into one.
For example the measured function (used as a decorator) is probably implemented like this...
import time
def measured(orig_function):
# When you decorate a function, the decorator func is called
# with the original function as the first argument.
# You return a new, modified function. This returned function
# is what the to-be-decorated function becomes.
print "INFO: This from the decorator function"
print "INFO: I am about to decorate %s" % (orig_function)
# This is what some_func will become:
def newfunc(*args, **kwargs):
print "INFO: This is the decorated function being called"
start = time.time()
# Execute the old function, passing arguments
orig_func_return = orig_function(*args, **kwargs)
end = time.time()
print "Function took %s seconds to execute" % (end - start)
return orig_func_return # return the output of the original function
# Return the modified function, which..
return newfunc
@measured
def some_func(arg1):
print "This is my original function! Argument was %s" % arg1
# We call the now decorated function..
some_func(123)
#.. and we should get (minus the INFO messages):
This is my original function! Argument was 123
# Function took 7.86781311035e-06 to execute
The decorator syntax is just a shorter and neater way of doing the following:
def some_func():
print "This is my original function!"
some_func = measured(some_func)
There are some decorators included with Python, for example staticmethod - but measured is not one of them:
>>> type(measured)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'measured' is not defined
Check the projects import statements to see where the function or class is coming from. If it uses from blah import * you'll need to check all of those files (which is why import * is discouraged), or you could just do something like grep -R def measured *
A:
Yes it's real. It's a function decorator.
Function decorators in Python are functions that take a function as its single argument, and return a new function in its place.
@classmethod and @staticmethod are two built-in function decorators.
Read more »
A:
measured is the name of a function that must be defined before that code will work.
In general any function used as a decorator must accept a function and return a function. The function will be replaced with the result of passing it to the decorator - measured() in this case.
| Is @measured a standard decorator? What library is it in? | In this blog article they use the construct:
@measured
def some_func():
#...
# Presumably outputs something like "some_func() is finished in 121.333 s" somewhere
This @measured directive doesn't seem to work with raw python. What is it?
UPDATE: I see from Triptych that @something is valid, but where can I find @measured? Is it in a library somewhere, or is the author of this blog using something from his own private code base?
| [
"@measured decorates the some_func() function, using a function or class named measured. The @ is the decorator syntax, measured is the decorator function name.\nDecorators can be a bit hard to understand, but they are basically used to either wrap code around a function, or inject code into one.\nFor example the measured function (used as a decorator) is probably implemented like this...\nimport time\n\ndef measured(orig_function):\n # When you decorate a function, the decorator func is called\n # with the original function as the first argument.\n # You return a new, modified function. This returned function\n # is what the to-be-decorated function becomes.\n\n print \"INFO: This from the decorator function\"\n print \"INFO: I am about to decorate %s\" % (orig_function)\n\n # This is what some_func will become:\n def newfunc(*args, **kwargs):\n print \"INFO: This is the decorated function being called\"\n\n start = time.time()\n\n # Execute the old function, passing arguments\n orig_func_return = orig_function(*args, **kwargs)\n end = time.time()\n\n print \"Function took %s seconds to execute\" % (end - start)\n return orig_func_return # return the output of the original function\n\n # Return the modified function, which..\n return newfunc\n\n@measured\ndef some_func(arg1):\n print \"This is my original function! Argument was %s\" % arg1\n\n# We call the now decorated function..\nsome_func(123)\n\n#.. and we should get (minus the INFO messages):\nThis is my original function! Argument was 123\n# Function took 7.86781311035e-06 to execute\n\nThe decorator syntax is just a shorter and neater way of doing the following:\ndef some_func():\n print \"This is my original function!\"\n\nsome_func = measured(some_func)\n\nThere are some decorators included with Python, for example staticmethod - but measured is not one of them:\n>>> type(measured)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nNameError: name 'measured' is not defined\n\nCheck the projects import statements to see where the function or class is coming from. If it uses from blah import * you'll need to check all of those files (which is why import * is discouraged), or you could just do something like grep -R def measured *\n",
"Yes it's real. It's a function decorator. \nFunction decorators in Python are functions that take a function as it's single argument, and return a new function in it's place. \n@classmethod and @staticmethod are two built in function decorators.\nRead more ยป\n",
"measured is the name of a function that must be defined before that code will work.\nIn general any function used as a decorator must accept a function and return a function. The function will be replaced with the result of passing it to the decorator - measured() in this case.\n"
] | [
13,
3,
0
] | [] | [] | [
"decorator",
"python"
] | stackoverflow_0000382624_decorator_python.txt |
Q:
Using Python's ctypes to pass/read a parameter declared as "struct_name *** param_name"?
I am trying to use Python's ctypes library to access some methods in the scanning library SANE. This is my first experience with ctypes and the first time I have had to deal with C datatypes in over a year so there is a fair learning curve here, but I think even without that this particular declaration would be troublesome:
extern SANE_Status sane_get_devices (const SANE_Device *** device_list, SANE_Bool local_only);
First of all, I've successfully dealt with SANE_Status (an enum) and SANE_Bool (a typedef to c_int). Those were both simple. That first parameter, on the other hand, is causing me all sorts of grief. I'm unfamiliar with the "***" notation to begin with and my tracer bullets so far have yielded nothing more than garbage data. How do I format the input to this function such that I can read back a list of my Python structure-objects? For reference, the C structure being referenced is:
typedef struct
{
SANE_String_Const name; /* unique device name */
SANE_String_Const vendor; /* device vendor string */
SANE_String_Const model; /* device model name */
SANE_String_Const type; /* device type (e.g., "flatbed scanner") */
}
SANE_Device;
Where SANE_String_Const is defined as a c_char_p.
My Python/ctypes version of this object is:
class SANE_Device(Structure):
_fields_ = [
("name", c_char_p),
("vendor", c_char_p),
("model", c_char_p),
("type", c_char_p)]
Suggestions on what I should pass in such that I can get the expected behavior (a list of structure-objects) out of this? All responses appreciated.
Update 1:
Using the following, I was able to retrieve a correct SANE_Device Python structure:
devices = pointer(pointer(pointer(SANE_Device())))
status = libsane.sane_get_devices(devices, c_int(0))
print status, devices, devices.contents.contents.contents.name
However, 1) yuck and 2) that seems like it would only work if there is a single result. I can't len() on devices.contents.contents or devices.contents.contents.contents. How am I to determine the number of results? The SANE docs specify that "If the function executes successfully, it stores a pointer to a NULL terminated array of pointers to SANE_Device structures in *device_list". Suggestions?
Update 2:
I was able to pass an ten-item array and then access the first element using:
devices = pointer(pointer(pointer((SANE_Device * 10)())))
status = libsane.sane_get_devices(devices, c_int(0))
print status, devices, devices.contents.contents.contents[0].name
However, ten is obviously an arbitrary number and I have no way of determining the actual number of results. Trying to access devices.contents.contents.contents[1].name when only one device is connected causes a segmentation fault. There must be a proper way of dealing with variable-length constructs like these in ctypes.
A:
A const SANE_Device *** is a three-level pointer: it's a pointer to a pointer to a pointer to a constant SANE_Device. You can use the program cdecl to decipher complicated C/C++ type definitions.
According to the SANE documentation, SANE_get_devices() will store a pointer to a NULL-terminated list of pointers to SANE devices if successful. Thus, the proper way to call it is to declare a variable of type const SANE_Device ** (i.e. a pointer to a pointer to a constant SANE_Device), and pass in the address of that pointer:
const SANE_Device **device_list;
SANE_get_devices(&device_list, local_only); // check return value
// Now, device_list[0] points to the first device,
// device_list[1] points to the second device, etc.
// Once you hit a NULL pointer, that's the end of the list:
int num_devices = 0;
while(device_list[num_devices] != NULL)
    num_devices++;
// num_devices now stores the total number of devices
Now, this is how you would call it from C code. I've skimmed the documentation on ctypes, and it appears that you want to use the byref function to pass the argument by reference, and that the value you pass should be a POINTER to a POINTER to a SANE_Device. Note the distinction between pointer and POINTER: the former creates a pointer to an instance, whereas the latter creates a pointer to a type. Thus, I'm guessing the following code will work:
# SANE_Device declared as you had it
devices = POINTER(POINTER(SANE_Device))()  # devices is a NULL pointer to a pointer to a SANE_Device
status = libsane.sane_get_devices(byref(devices), c_int(0))
if status != successful:  # replace this by whatever success is
    print error
else:
    num_devices = 0
    # Convert NULL-terminated C list into Python list
    device_list = []
    while devices[num_devices]:
        device_list.append(devices[num_devices].contents)  # use .contents since each entry in the C list is itself a pointer
        num_devices += 1
    print device_list
[Edit] I've tested the above code using a very simple placeholder for SANE_get_devices, and it works.
| Using Python's ctypes to pass/read a parameter declared as "struct_name *** param_name"? | I am trying to use Python's ctypes library to access some methods in the scanning library SANE. This is my first experience with ctypes and the first time I have had to deal with C datatypes in over a year so there is a fair learning curve here, but I think even without that this particular declaration would be troublesome:
extern SANE_Status sane_get_devices (const SANE_Device *** device_list, SANE_Bool local_only);
First of all, I've successfully dealt with SANE_Status (an enum) and SANE_Bool (a typedef to c_int). Those were both simple. That first parameter, on the other hand, is causing me all sorts of grief. I'm unfamiliar with the "***" notation to begin with and my tracer bullets so far have yielded nothing more than garbage data. How do I format the input to this function such that I can read back a list of my Python structure-objects? For reference, the C structure being referenced is:
typedef struct
{
SANE_String_Const name; /* unique device name */
SANE_String_Const vendor; /* device vendor string */
SANE_String_Const model; /* device model name */
SANE_String_Const type; /* device type (e.g., "flatbed scanner") */
}
SANE_Device;
Where SANE_String_Const is defined as a c_char_p.
My Python/ctypes version of this object is:
class SANE_Device(Structure):
_fields_ = [
("name", c_char_p),
("vendor", c_char_p),
("model", c_char_p),
("type", c_char_p)]
Suggestions on what I should pass in such that I can get the expected behavior (a list of structure-objects) out of this? All responses appreciated.
Update 1:
Using the following, I was able to retrieve a correct SANE_Device Python structure:
devices = pointer(pointer(pointer(SANE_Device())))
status = libsane.sane_get_devices(devices, c_int(0))
print status, devices, devices.contents.contents.contents.name
However, 1) yuck and 2) that seems like it would only work if there is a single result. I can't len() on devices.contents.contents or devices.contents.contents.contents. How am I to determine the number of results? The SANE docs specify that "If the function executes successfully, it stores a pointer to a NULL terminated array of pointers to SANE_Device structures in *device_list". Suggestions?
Update 2:
I was able to pass a ten-item array and then access the first element using:
devices = pointer(pointer(pointer((SANE_Device * 10)())))
status = libsane.sane_get_devices(devices, c_int(0))
print status, devices, devices.contents.contents.contents[0].name
However, ten is obviously an arbitrary number and I have no way of determining the actual number of results. Trying to access devices.contents.contents.contents[1].name when only one device is connected causes a segmentation fault. There must be a proper way of dealing with variable-length constructs like these in ctypes.
| [
"A const SANE_Device *** is a three-level pointer: it's a pointer to a pointer to a pointer to a constant SANE_Device. You can use the program cdecl to decipher complicated C/C++ type definitions.\nAccording to the SANE documentation, SANE_get_devices() will store a pointer to a NULL-terminated list of pointers to SANE devices if successful. Thus, the proper way to call it is to declare a variable of type const SANE_Device ** (i.e. a pointer to a pointer to a constant `SANE_Device), and pass in the address of that pointer:\nconst SANE_Device **device_list;\nSANE_get_devices(&device_list, local_only); // check return value\n// Now, device_list[0] points to the first device,\n// device_list[1] points to the second device, etc.\n// Once you hit a NULL pointer, that's the end of the list:\nint num_devices = 0;\nwhile(device_list[num_devices] != NULL)\n num_devices++;\n// num_devices now stores the total number of devices\n\nNow, this is how you would call it from C code. I've skimmed the documentation on ctypes, and it appears that you want to use the byref function to pass the argument by reference, and that the value you pass should be a POINTER to a POINTER to a SANE_Device. Note the distinction between pointer and POINTER: the former creates a pointer to an instance, whereas the latter creates a pointer to a type. Thus, I'm guessing the following code will work:\n// SANE_Device declared as you had it\ndevices = POINTER(POINTER(SANE_Device))() // devices is a NULL pointer to a pointer to a SANE_Device\nstatus = libsane.sane_get_devices(byref(devices), c_int(0))\nif status != successful: // replace this by whatever success is\n print error\nelse:\n num_devices = 0\n // Convert NULL-terminated C list into Python list\n device_list = []\n while devices[num_devices]:\n device_list.append(devices[num_devices].contents) // use .contents here since each entry in the C list is itself a pointer\n num_devices += 1\n print device_list\n\n[Edit] I've tested the above code using a very simple placeholder for SANE_get_devices, and it works.\n"
] | [
7
] | [] | [] | [
"ctypes",
"pointers",
"python",
"sane",
"structure"
] | stackoverflow_0000383010_ctypes_pointers_python_sane_structure.txt |
Q:
Django: How can I use my model classes to interact with my database from outside Django?
I'd like to write a script that interacts with my DB using a Django app's model. However, I would like to be able to run this script from the command line or via cron. What all do I need to import to allow this?
A:
You need to set up the Django environment. This tells Python where your project is and what the name of the settings module is (the project name in the settings module is optional):
import os
import sys

sys.path.append('/path/to/myproject')  # note: setting os.environ['PYTHONPATH'] at runtime won't affect imports
os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'
Now you should be able to access the models:
from myproject.models import MyModel
all_my_models = MyModel.objects.all()
A:
The preferred way should be to add a custom command and then run it as any other django-admin (not to be confused with django.contrib.admin) command:
./manage.py mycustomcommand --customarg
Setting DJANGO_SETTINGS_MODULE should only be used when a custom command is not feasible.
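For reference, a minimal sketch of such a command (the app layout and names here are hypothetical); it would live in yourapp/management/commands/mycustomcommand.py, with empty __init__.py files in both the management and commands directories:
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Does some ORM work from the command line or cron"

    def handle(self, *args, **options):
        # Settings are already loaded here, so the ORM is fully usable.
        from myproject.models import MyModel
        print MyModel.objects.count()

You can then schedule it from cron as /path/to/manage.py mycustomcommand.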
A:
Depending on your specific needs, django-command-extensions might save you a bit of time. To run any script as-is without messing around with environment variables just type:
./manage.py runscript path/to/my/script.py
django-command-extensions also has commands for automating scripts as cron jobs, which is something you mentioned that you'd like to do.
If you are a more nuts and bolts type of person, you might check out this very detailed post outlining how to make "standalone" django scripts to be run from cron jobs and whatnot.
| Django: How can I use my model classes to interact with my database from outside Django? | I'd like to write a script that interacts with my DB using a Django app's model. However, I would like to be able to run this script from the command line or via cron. What all do I need to import to allow this?
| [
"You need to set up the Django environment variables. These tell Python where your project is, and what the name of the settings module is (the project name in the settings module is optional):\nimport os\n\nos.environ['PYTHONPATH'] = '/path/to/myproject'\nos.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'\n\nNow you should be able to access the models:\nfrom myproject.models import MyModel\n\nall_my_models = MyModel.objects.all()\n\n",
"The preferred way should be to add a custom command and then run it as any other django-admin (not to be confused with django.contrib.admin) command:\n./manage.py mycustomcommand --customarg\n\nSetting DJANGO_SETTINGS_MODULE should only be used when a custom command is not feasible.\n",
"Depending on your specific needs, django-command-extensions might save you a bit of time. To run any script as-is without messing around with environment variables just type:\n./manage.py runscript path/to/my/script.py\n\ndjango-command-extensions also has commands for automating scripts as cron jobs, which is something you mentioned that you'd like to do.\nIf you are a more nuts and bolts type of person, you might check out this very detailed post outlining how to make \"standalone\" django scripts to be run from cron jobs and whatnot.\n"
] | [
15,
13,
5
] | [] | [] | [
"django",
"django_models",
"python"
] | stackoverflow_0000383073_django_django_models_python.txt |
Q:
How do you apply 'or' to all values of a list in Python?
How do you apply 'or' to all values of a list in Python? I'm thinking something like:
or([True, True, False])
or if it was possible:
reduce(or, [True, True, False])
A:
The built-in function any does what you want:
>>> any([True, True, False])
True
>>> any([False, False, False])
False
>>> any([False, False, True])
True
any has the advantage over reduce of short-circuiting the test for later items in the sequence once it finds a true value. This can be very handy if the sequence is a generator with an expensive operation behind it. For example:
>>> def iam(result):
...     # Pretend this is expensive.
...     print "iam(%r)" % result
...     return result
...
>>> any((iam(x) for x in [False, True, False]))
iam(False)
iam(True)
True
>>> reduce(lambda x,y: x or y, (iam(x) for x in [False, True, False]))
iam(False)
iam(True)
iam(False)
True
If your Python version doesn't have the any() and all() builtins, they are easily implemented, as Guido van Rossum suggested:
def any(S):
    for x in S:
        if x:
            return True
    return False

def all(S):
    for x in S:
        if not x:
            return False
    return True
A:
No one has mentioned it, but "or" is available as a function in the operator module:
from operator import or_
Then you can use reduce as above.
Would always advise "any" though in more recent Pythons.
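For completeness, a small sketch combining the two (note that operator.or_ is the bitwise |, which behaves like a logical or on plain booleans but, unlike any(), does not short-circuit):
from operator import or_

print reduce(or_, [True, True, False])    # True
print reduce(or_, [False, False, False])  # False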
A:
>>> all([True,False,True])
False
>>> any([True,False,True])
True
Python 2.5 and up (documentation)
A:
You can do this:
reduce(lambda a,b: a or b, [True, True, False])
A:
reduce should do it for you, shouldn't it?
>>> def _or(x, y):
...     return x or y
...
>>> reduce(_or, [True, True, False])
True
| How do you apply 'or' to all values of a list in Python? | How do you apply 'or' to all values of a list in Python? I'm thinking something like:
or([True, True, False])
or if it was possible:
reduce(or, [True, True, False])
| [
"The built-in function any does what you want:\n>>> any([True, True, False])\nTrue\n>>> any([False, False, False])\nFalse\n>>> any([False, False, True])\nTrue\n\nany has the advantage over reduce of shortcutting the test for later items in the sequence once it finds a true value. This can be very handy if the sequence is a generator with an expensive operation behind it. For example:\n>>> def iam(result):\n... # Pretend this is expensive.\n... print \"iam(%r)\" % result\n... return result\n... \n>>> any((iam(x) for x in [False, True, False]))\niam(False)\niam(True)\nTrue\n>>> reduce(lambda x,y: x or y, (iam(x) for x in [False, True, False]))\niam(False)\niam(True)\niam(False)\nTrue\n\nIf your Python's version doesn't have any(), all() builtins then they are easily implemented as Guido van Rossum suggested:\ndef any(S):\n for x in S:\n if x:\n return True\n return False\n\ndef all(S):\n for x in S:\n if not x:\n return False\n return True\n\n",
"No one has mentioned it, but \"or\" is available as a function in the operator module:\nfrom operator import or_\n\nThen you can use reduce as above.\nWould always advise \"any\" though in more recent Pythons.\n",
">>> all([True,False,True])\nFalse\n>>> any([True,False,True])\nTrue\n\nPython 2.5 and up (documentation)\n",
"You can do this:\nreduce(lambda a,b: a or b, [True, True, False])\n\n",
"reduce should do it for you, shouldn't it?\n>>> def _or(x, y):\n... return x or y\n... \n>>> reduce(_or, [True, True, False])\nTrue\n\n"
] | [
31,
7,
3,
2,
1
] | [] | [] | [
"list",
"python",
"reduce"
] | stackoverflow_0000383623_list_python_reduce.txt |
Q:
With Lua and Python embeddable, is there a place for Basic?
I started off programming in Basic on the ZX81, then BASICA, GW-BASIC, and QBasic. I moved on to C (Ah, Turbo C 3.1, I hardly knew ye...)
When I got started in microcontrollers I regressed with the BASIC Stamp from Parallax. However, BASIC is/was awesome because it was so easy to understand and so hard to make a mistake. I moved on to assembly and C eventually because I needed the additional power (speed, capacity, resources, etc.), but I know that if the bar was much higher many people would never get into programming microcontrollers.
I keep getting an itch to make my own on-chip BASIC interpreter, but I wonder if there's a need for BASIC now that Lua and Python are easily embeddable, and just as approachable as BASIC.
What, if any, are the advantages BASIC has over other languages?
Why is it still around?
If I decide to make a general purpose microcontroller board with an interpreter, is there any reason to make a version of BASIC?
Plenty of other languages are considered dead, but BASIC just keeps hanging on.
A:
[This may come off sounding more negative than it really is. I'm not saying Basic is the root of all evil, others have said that. I'm saying it's a legacy we can afford to leave behind.]
"because it was so easy to understand and so hard to make a mistake" That's certainly debatable. I've had some bad experiences with utterly opaque basic. Professional stuff -- commercial products -- perfectly awful code. Had to give up and decline the work.
"What, if any, are the advantages Basic has over other languages?" None, really.
"Why is it still around?" Two reasons: (1) Microsoft, (2) all the IT departments that started doing VB and now have millions of lines of VB legacy code.
"Plenty of other languages are considered dead..." Yep. Basic is there along side COBOL, PL/I and RPG as legacies that sometimes have more cost than value. But because of the "if it ain't broke don't fix it" policy of big IT, there they sit, sucking up resources who could easily replace it with something smaller, simpler and cheaper to maintain. Except it hasn't "failed" -- it's just disproportionately expensive.
30-year old COBOL is a horrible situation to rework. Starting in 2016 we'll be looking at 30-year old MS Basic that we just can't figure out, don't want to live without, and can't decide how to replace.
"but basic just keeps hanging on" It appears that some folks love Basic. Others see it as yet another poorly-designed language; it's advantages are being early to market and being backed by huge vendors (IBM, initially). Poorly-design, early-to-market only leaves us with a legacy that we'll be suffering with for decades.
I still have my 1965-edition Dartmouth Basic manual. I don't long for the good old days.
A:
As an architecture, the main claim to fame of BASIC is that you could make BASIC interpreters very small - just a few KB. In the days of a DG Nova this was a win as you could use systems like Business BASIC to build a multiuser application on a machine with 64K of RAM (or even less).
BASIC (VB in particular) is a legacy system and has a large existing code-base. Arguably VB is really a language (some would say a thin wrapper over COM) that has a BASIC-like syntax. These days, I see little reason to keep the language around apart from people's familiarity with it and to maintain the existing code base. I certainly would not advocate new development in it (note that VB.Net is not really BASIC but just has a VB-like syntax. The type system is not broken in the way that VB's was.)
What is missing from the computing world is a relevant language that is easy to learn and tinker with and has mind-share in mainstream application development. I grew up in the days of 8-bit machines, and the entry barrier to programming on those systems was very low. The architecture of the machines was very simple, and you could learn to program and write more-or-less relevant applications on these machines very easily.
Modern architectures are much more complex and have a bigger hump to learn. You can see people pontificating on how kids can't learn to program as easily as they could back in the days of BASIC and 8-bit computers and I think that argument has some merit. There is something of a hole left that makes programming just that bit harder to get into. Toy languages are not much use here - for programming to be attractive it has to be possible to aspire to build something relevant with the language you are learning.
This leads to the problem of a language that is easy for kids to learn but still allows them to write relevant programmes (or even games) that they might actually want. It also has to be widely perceived as relevant.
The closest thing I can think of to this is Python. It's not the only example of a language of that type, but it is the one with the most mind-share - and (IMO) a perception of relevance is necessary to play in this niche. It's also one of the easiest languages to learn that I've experienced (of the 30 or so that I've used over the years).
A:
Why not give Jumentum a try and see how it works for you?
http://jumentum.sourceforge.net/
it's an open source BASIC for microcontrollers
The elua project is also Lua for microcontrollers
http://elua.berlios.de/
A:
BASIC persists, particularly in the STAMP implementation, because it is lower level than most other very-easy-to-learn programming languages. For most embedded BASIC implementations the BASIC instructions map directly to single or groups of machine instructions, with very little overhead. The same programs written in "higher level" languages like Lua or Python would run far slower on those same microcontrollers.
PS: BASIC variants like PBASIC have very little in common with, say, Visual BASIC, despite the naming similarity. They have diverged in very different ways.
A:
Good question...
Basically (sic!), I have no answer. I would say just that Lua is very easy to learn, probably as easy as Basic (which was one of my first languages as well; I used dialects on a lot of 8-bit computers...), but is more powerful (allowing OO or functional styles and even mixing them) and somehow stricter (no goto...).
I don't know Python well, but from what I have read, it is as easy, powerful and strict as Lua.
Besides, both are "standardized" de facto, i.e. there are no dialects (besides the various versions), unlike Basic which has many variants.
Also both have carefully crafted VMs: efficient, (mostly) bugless. Should you make your own interpreter, you should either take an existing VM and generate bytecode for it from Basic source, or make your own. Sure fun stuff, but time consuming and prone to bugs...
So, I would just let Basic have a nice retirement... :-P
PS.: Why it is hanging on? Perhaps Microsoft isn't foreign to that... (VB, VBA, VBScript...)
There are also a lot of dialects around (RealBasic, DarkBasic, etc.), with some audience.
A:
At the risk of sounding like two old-timers on rocking chairs, let me grumpily say that "Kids today don't appreciate BASIC" and then paradoxically say "They don't know how good they've got it."
BASIC's greatest strength was always its comprehensibility. It was something that people could get. That was long ignored by academics and language developers.
When you talk about wanting to implement BASIC, I assume you're not talking about line-numbered BASIC, but a structured form. The problem with that is that as soon as you start moving into structured programming -- functions, 'why can't I just GOTO that spot?', etc. -- it really becomes unclear what advantages, if any, BASIC would have over, say, Python.
Additionally, one reason BASIC was "so easy to get right" was that in those days libraries weren't nearly as important as they are today. Libraries imply structured if not object-oriented programming, so again you're in a situation where a more modern dynamic scripting language "fits" the reality of what people do today better.
If the real question is "well, I want to implement an interpreter and so it comes down to return on investment," then it becomes a problem of a grammar that's actually easy to implement. I'd suggest that BASIC doesn't really have that many advantages in that regard either (unless you really do return to line numbers and a very limited grammar).
In short, I don't think you should invest your effort in a BASIC interpreter.
A:
Well, these people seem to think that not only basic still has a place in the mobile space but also that they can make money off it:
http://www.nsbasic.com/symbian/
A:
I started out on a ZX81 too. But as Tony Hoare said, programming in BASIC is like trying to do long division using Roman numerals.
Plenty of other languages are
considered dead, but basic just keeps
hanging on.
Sadly yes. I blame Bill Gates for this...BASIC was on a stretcher with a priest saying the last rites for it, and then MS brought it back like Smallpox.
A:
I used to program in BASIC in the QBasic days. QBASIC had subroutines, functions, structures (they used to be called types), and I guess that's it. Now, this seems limited compared to all the features that Python has - OO, lambdas, metaclasses, generators, list comprehensions, just to name a few off the top of my head. But that simplicity, I think, is a strength of BASIC. If you're looking at a simple embeddable language, I'd bet that QBasic will be faster and easier to understand. And a procedural language is probably more than sufficient for most embedding/scripting type of applications.
I'd say the most important reason BASIC is still around is Visual Basic. For a long time in the 90s, VB was the only way to write GUIs, COM and DB code for Windows without falling into one of the C++ Turing tarpits. [Maybe Delphi was a good option too, but unfortunately it never became as popular as VB]. I do think it is because of all this VB and VBA code that is still being used and maintained that BASIC still isn't dead.
That said, I'd say there's a pretty good rationale for writing a BASIC interpreter (maybe even a compiler, using LLVM or something similar) today. You'll get a clean, simple, easy-to-use and fast language if you implement something that resembles QBasic. You won't have to solve any language design issues, and the best part is people will already know your language.
| With Lua and Python embeddable, is there a place for Basic? | I started off programming in Basic on the ZX81, then BASICA, GW-BASIC, and QBasic. I moved on to C (Ah, Turbo C 3.1, I hardly knew ye...)
When I got started in microcontrollers I regressed with the BASIC Stamp from Parallax. However, BASIC is/was awesome because it was so easy to understand and so hard to make a mistake. I moved on to assembly and C eventually because I needed the additional power (speed, capacity, resources, etc.), but I know that if the bar was much higher many people would never get into programming microcontrollers.
I keep getting an itch to make my own on-chip BASIC interpretor, but I wonder if there's need for BASIC now that Lua and Python are easily embeddable, and just as approachable as BASIC.
What, if any, are the advantages BASIC has over other languages?
Why is it still around?
If I decide to make a general purpose microcontroller board with an interpreter, is there any reason to make a version of BASIC?
Plenty of other languages are considered dead, but BASIC just keeps hanging on.
| [
"[This may come off sounding more negative than it really is. I'm not saying Basic is the root of all evil, others have said that. I'm saying it's a legacy we can afford to leave behind.]\n\"because it was so easy to understand and so hard to make a mistake\" That's certainly debatable. I've had some bad experiences with utterly opaque basic. Professional stuff -- commercial products -- perfectly awful code. Had to give up and decline the work.\n\"What, if any, are the advantages Basic has over other languages?\" None, really.\n\"Why is it still around?\" Two reasons: (1) Microsoft, (2) all the IT departments that started doing VB and now have millions of lines of VB legacy code.\n\"Plenty of other languages are considered dead...\" Yep. Basic is there along side COBOL, PL/I and RPG as legacies that sometimes have more cost than value. But because of the \"if it ain't broke don't fix it\" policy of big IT, there they sit, sucking up resources who could easily replace it with something smaller, simpler and cheaper to maintain. Except it hasn't \"failed\" -- it's just disproportionately expensive.\n30-year old COBOL is a horrible situation to rework. Starting in 2016 we'll be looking at 30-year old MS Basic that we just can't figure out, don't want to live without, and can't decide how to replace.\n\"but basic just keeps hanging on\" It appears that some folks love Basic. Others see it as yet another poorly-designed language; it's advantages are being early to market and being backed by huge vendors (IBM, initially). Poorly-design, early-to-market only leaves us with a legacy that we'll be suffering with for decades.\nI still have my 1965-edition Dartmouth Basic manual. I don't long for the good old days.\n",
"As an architecture, the main claim to fame of BASIC is that you could make BASIC interpreters very small - just a few KB. In the days of a DG Nova this was a win as you could use systems like Business BASIC to build a multiuser application on a machine with 64K of RAM (or even less).\nBASIC (VB in particular) is a legacy system and has a large existing code-base. Arguably VB is really a language (some would say a thin wrapper over COM) that has a BASIC-like syntax. These days, I see little reason to keep the language around apart from people's familiarity with it and to maintain the existing code base. I certainly would not advocate new development in it (note that VB.Net is not really BASIC but just has a VB-like syntax. The type system is not broken in the way that VB's was.)\nWhat is missing from the computing world is a relevant language that is easy to learn and tinker with and has mind-share in mainstream application development. I grew up in the days of 8-bit machines, and the entry barrier to programming on those systems was very low. The architecture of the machines was very simple, and you could learn to program and write more-or-less relevant applications on these machines very easily.\nModern architectures are much more complex and have a bigger hump to learn. You can see people pontificating on how kids can't learn to program as easily as they could back in the days of BASIC and 8-bit computers and I think that argument has some merit. There is something of a hole left that makes programming just that bit harder to get into. Toy languages are not much use here - for programming to be attractive it has to be possible to aspire to build something relevant with the language you are learning.\nThis leads to the problem of a language that is easy for kids to learn but still allows them to write relevant programmes (or even games) that they might actually want. It also has to be widely perceived as relevant.\nThe closest thing I can think of to this is Python. It's not the only example of a language of that type, but it is the one with the most mind-share - and (IMO) a perception of relevance is necessary to play in this niche. It's also one of the easiest languages to learn that I've experienced (of the 30 or so that I've used over the years).\n",
"Why not give Jumentum a try and see how it works for you?\nhttp://jumentum.sourceforge.net/\nit's an open source BASIC for micrcontrollers\nThe elua project is also lua for microcontrollers\nhttp://elua.berlios.de/\n",
"BASIC persists, particularly in the STAMP implementation, because it is lower level than most other very-easy-to-learn programming languages. For most embedded BASIC implementations the BASIC instructions map directly to single or groups of machine instructions, with very little overhead. The same programs written in \"higher level\" languages like Lua or Python would run far slower on those same microcontrollers.\nPS: BASIC variants like PBASIC have very little in common with, say, Visual BASIC, despite the naming similarity. They have diverged in very different ways.\n",
"Good question...\nBasically (sic!), I have no answer. I would say just that Lua is very easy to learn, probably as easy as Basic (which was one of my first languages as well, I used dialects on lot of 8-bit computers...), but is more powerful (allowing OO or functional styles and even mixing them) and somehow stricter (no goto...).\nI don't know well Python, but from what I have read, it is as easy, powerful and strict than Lua.\nBeside, both are \"standardized\" de facto, ie. there are no dialects (beside the various versions), unlike Basic which has many variants.\nAlso both have carefully crafted VM, efficient, (mostly) bugless. Should you make your own interpretor, you should either take an existing VM and generate bytecode for it from Basic source, or make your own. Sure fun stuff, but time consuming and prone to bugs...\nSo, I would just let Basic have a nice retirement... :-P\nPS.: Why it is hanging on? Perhaps Microsoft isn't foreign to that... (VB, VBA, VBScript...)\nThere are also lot of dialects around (RealBasic, DarkBasic, etc.), with some audience.\n",
"At the risk of sounding like two old-timers on rocking chairs, let me grumpily say that \"Kids today don't appreciate BASIC\" and then paradoxically say \"They don't know how good they've got it.\" \nBASICs greatest strength was always its comprehensibility. It was something that people could get. That was long ignored by academics and language developers. \nWhen you talk about wanting to implement BASIC, I assume you're not talking about line-numbered BASIC, but a structured form. The problem with that is that as soon as you start moving into structured programming -- functions, 'why can't I just GOTO that spot?', etc. -- it really becomes unclear what advantages, if any, BASIC would have over, say, Python.\nAdditionally, one reason BASIC was \"so easy to get right\" was that in those days libraries weren't nearly as important as they are today. Libraries imply structured if not object-oriented programming, so again you're in a situation where a more modern dynamic scripting language \"fits\" the reality of what people do today better.\nIf the real question is \"well, I want to implement an interpreter and so it comes down to return on investment,\" then it becomes a problem of an grammar that's actually easy to implement. I'd suggest that BASIC doesn't really have that many advantages in that regard either (unless you really do return to line numbers and a very limited grammar). \nIn short, I don't think you should invest your effort in a BASIC interpreter. \n",
"Well, these people seem to think that not only basic still has a place in the mobile space but also that they can make money off it:\nhttp://www.nsbasic.com/symbian/\n",
"I started out on a ZX81 too. But as Tony Hoare said, programming in BASIC is like trying to do long division using roman numerals.\n\nPlenty of other languages are\n considered dead, but basic just keeps\n hanging on.\n\nSadly yes. I blame Bill Gates for this...BASIC was on a stretcher with a priest saying the last rites for it, and then MS brought it back like Smallpox.\n",
"I used to program in BASIC in the QBasic days. QBASIC had subroutines, functions, structures (they used to be called types), and I guess that's it. Now, this seems limited compared to all the features that Python has - OO, lambdas, metaclasses, generators, list comprehensions, just to name a few off the top of my head. But that simplicity, I think, is a strength of BASIC. If you're looking at a simple embeddable language, I'd bet that QBasic will be faster and easier to understand. And a procedural langauge is probably more than sufficient for most embedding/scripting type of applications.\nI'd say the most important reason BASIC is still around is Visual Basic. For a long time in the 90s, VB was the only way to write GUIs, COM and DB code for Windows without falling into one of the C++ Turing tarpits. [Maybe Delphi was a good option too, but unfortunately it never became as popular as VB]. I do think it is because of all this VB and VBA code that is still being used and maintained that BASIC still isn't dead. \nThat said, I'd say there's pretty a good rationale to write BASIC interpreter (maybe even compiler using LLVM or something similar) for BASIC today. You'll get a clean, simple easy to use and fast language if you implement something that resembles QBasic. You won't have to solve any language design issues and the best part is people will already know your language.\n"
] | [
11,
7,
2,
2,
1,
1,
1,
1,
0
] | [] | [] | [
"basic",
"interpreter",
"lua",
"python",
"scripting"
] | stackoverflow_0000244138_basic_interpreter_lua_python_scripting.txt |
Q:
Django objects.filter, how "expensive" would this be?
I am trying to make a search view in Django. It is a search form with freetext input + some options to select, so that you can filter on years and so on. This is some of the code I have in the view so far, the part that does the filtering. And I would like some input on how expensive this would be on the database server.
soknad_list = Soknad.objects.all()

if var1:
    soknad_list = soknad_list.filter(pub_date__year=var1)
if var2:
    soknad_list = soknad_list.filter(muncipality__name__exact=var2)
if var3:
    soknad_list = soknad_list.filter(genre__name__exact=var3)

# TEXT SEARCH
stop_word_list = re.compile(STOP_WORDS, re.IGNORECASE)
search_term = '%s' % request.GET['q']
cleaned_search_term = stop_word_list.sub('', search_term)
cleaned_search_term = cleaned_search_term.strip()

if len(cleaned_search_term) != 0:
    soknad_list = soknad_list.filter(Q(dream__icontains=cleaned_search_term) | Q(tags__icontains=cleaned_search_term) | Q(name__icontains=cleaned_search_term) | Q(school__name__icontains=cleaned_search_term))
So what I do is first make a list of all objects, then I check which variables exist (I fetch these with GET at an earlier point) and then I filter the results if they exist. But this doesn't seem too elegant; it probably does a lot of queries to achieve the result, so is there a better way to do this?
It does exactly what I want, but I guess there is a better/smarter way to do this. Any ideas?
A:
filter itself doesn't execute a query; no query is executed until you explicitly fetch items from the query (e.g. with get()), and list(query) also executes it.
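For example, a quick sketch reusing the model from the question:
soknad_list = Soknad.objects.all()                     # no SQL executed yet
soknad_list = soknad_list.filter(pub_date__year=2008)  # still no SQL, just a refined QuerySet
results = list(soknad_list)                            # one combined query runs here

So chaining several filter() calls costs you nothing extra at the database; only the final, combined query is ever sent.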
A:
You can see the query that will be generated by using:
soknad_list.query.as_sql()[0]
You can then put that into your database shell to see how long the query takes, or use EXPLAIN (if your database backend supports it) to see how expensive it is.
| Django objects.filter, how "expensive" would this be? | I am trying to make a search view in Django. It is a search form with freetext input + some options to select, so that you can filter on years and so on. This is some of the code I have in the view so far, the part that does the filtering. And I would like some input on how expensive this would be on the database server.
soknad_list = Soknad.objects.all()
if var1:
soknad_list = soknad_list.filter(pub_date__year=var1)
if var2:
soknad_list = soknad_list.filter(muncipality__name__exact=var2)
if var3:
soknad_list = soknad_list.filter(genre__name__exact=var3)
# TEXT SEARCH
stop_word_list = re.compile(STOP_WORDS, re.IGNORECASE)
search_term = '%s' % request.GET['q']
cleaned_search_term = stop_word_list.sub('', search_term)
cleaned_search_term = cleaned_search_term.strip()
if len(cleaned_search_term) != 0:
soknad_list = soknad_list.filter(Q(dream__icontains=cleaned_search_term) | Q(tags__icontains=cleaned_search_term) | Q(name__icontains=cleaned_search_term) | Q(school__name__icontains=cleaned_search_term))
So what I do is first make a list of all objects, then I check which variables exist (I fetch these with GET at an earlier point) and then I filter the results if they exist. But this doesn't seem too elegant; it probably does a lot of queries to achieve the result, so is there a better way to do this?
It does exactly what I want, but I guess there is a better/smarter way to do this. Any ideas?
| [
"filter itself doesn't execute a query, no query is executed until you explicitly fetch items from query (e.g. get), and list( query ) also executes it.\n",
"You can see the query that will be generated by using:\nsoknad_list.query.as_sql()[0]\n\nYou can then put that into your database shell to see how long the query takes, or use EXPLAIN (if your database backend supports it) to see how expensive it is.\n"
] | [
4,
2
] | [
"As Aaron mentioned, you should get a hold of the query text that is going to be run against the database and use an EXPLAIN (or other some method) to view the query execution plan. Once you have a hold of the execution plan for the query you can see what is going on in the database itself. There are a lot of operations that see very expensive to run through procedural code that are very trivial for any database to run, especially if you provide indexes that the database can use for speeding up your query.\nIf I read your question correctly, you're retrieving a result set of all rows in the Soknad table. Once you have these results back you use the filter() method to trim down your results meet your criteria. From looking at the Django documentation, it looks like this will do an in-memory filter rather than re-query the database (of course, this really depends on which data access layer you're using and not on Django itself).\nThe most optimal solution would be to use a full-text search engine (Lucene, ferret, etc) to handle this for you. If that is not available or practical the next best option would be to to construct a query predicate (WHERE clause) before issuing your query to the database and let the database perform the filtering. \nHowever, as with all things that involve the database, the real answer is 'it depends.' The best suggestion is to try out several different approaches using data that is close to production and benchmark them over at least 3 iterations before settling on a final solution to the problem. It may be just as fast, or even faster, to filter in memory rather than filter in the database.\n"
] | [
-1
] | [
"django",
"optimization",
"python",
"search",
"sql"
] | stackoverflow_0000383760_django_optimization_python_search_sql.txt |
Q:
Best Practice: network communication
I'm programming a simple network chat with a Python server and a Java client.
But one question came into my mind:
Which "network protocol" should I use for communication?
There are some possibilities for me:
YAML: Nice to parse, problem: parsed objects contain language-specific parts
XML: Easy to parse, big overhead for simple tasks
create my own "language": Problems with escaping, but most flexible
So what is the best practice for this?
Are there other alternatives?
A:
Check JSON. It is compatible across many languages (Python and Java included), and it is human readable.
http://www.json.org/
If you plan to do Web development, and plan to use Javascript, then JSON might be a good choice as it was originally designed for Javascript.
Moreover compared to YAML, using JSON in Python is as easy as writing: import json (it is part of the standard library).
You may have a look at the following page, comparing XML, JSON, and YAML. It seems there are differences in encoding delay and memory use that might guide your choice.
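To make the round trip concrete, a minimal sketch (the message fields are made up):
import json

message = {'user': 'alice', 'text': 'hello'}  # hypothetical chat payload
wire = json.dumps(message)   # serialize to a JSON string to send over the socket
data = json.loads(wire)      # parse back into a dict on the receiving side
print data['text']           # hello

On the Java side you would use one of the parsers listed on json.org to do the same.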
A:
It may be a little bit heavyweight for your needs, but have you considered implementing XMPP protocol for your chat client?
If you do that, then your system could interoperate with Google Talk, Jabber, iChat, etc.
A:
If you want the protocol to also abstract away the method invocation, have a look at XML-RPC, which Java and Python (and pretty much everything else) have good support for.
Object marshalling and unmarshalling is solid, can handle unicode, lists and dictionaries, and produces pretty human-readable output:
>>> import xmlrpclib
>>> print xmlrpclib.dumps((1, u"\xdd\xde"), methodname="my_method")
<?xml version='1.0'?>
<methodCall>
<methodName>my_method</methodName>
<params>
<param>
<value><int>1</int></value>
</param>
<param>
<value><string>ÝÞ</string></value>
</param>
</params>
</methodCall>
Basically, it has the benefits that Mapad mentions about JSON, with the extra functionality of method invocation, at the expense of (probably marginal) processing costs and (probably marginal) programming complexity.
Doug Hellman has good tutorials for both the client and server pieces of the Python XML-RPC libs here.
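If you go this route, the server side can be a few lines with the standard library; a minimal sketch (the port and method name here are arbitrary):
from SimpleXMLRPCServer import SimpleXMLRPCServer

def send_message(user, text):
    # A real chat server would broadcast to connected clients here.
    return True

server = SimpleXMLRPCServer(('localhost', 8000))
server.register_function(send_message)
server.serve_forever()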
| Best Practice: network communication | I'm programming a simple network chat with a Python server and a Java client.
But one question came into my mind:
Which "network protocol" should I use for communication?
There are some possibilities for me:
YAML: Nice to parse, problem: parsed objects contain language specific parts
XML: Easy to parse, big overhead for simple tasks
create an own "language": Problems with escaping, but most flexible
So what is the best practice for this?
Are there other alternatives?
| [
"Check JSON. It is compatible accross many languages (Python and Java included), and it is human readable.\nhttp://www.json.org/\nIf you plan to do Web development, and plan to use Javascript, then JSON might be a good choice as it was originally designed for Javascript.\nMoreover compared to YAML, using JSON in Python is as easy as writing: import json (it is part of the standard library).\nYou may have a look at the following page, comparing XML, JSON, and YAML. It seems they are differences in terms of encoding delay and memory used, that might guide your choice. \n",
"It may be a little bit heavyweight for your needs, but have you considered implementing XMPP protocol for your chat client?\nIf you do that, then your system could interoperate with Google Talk, Jabber, iChat, etc.\n",
"If you want the protocol to also abstract away the method invocation, have a look at XML-RPC, which Java and Python (and pretty much everything else) has good support for.\nObject marshalling and unmarshalling is solid, can handle unicode, lists and dictionaries, and produces pretty human-readable output:\n>>> import xmlrpclib\n>>> print xmlrpclib.dumps((1, u\"\\xdd\\xde\"), methodname=\"my_method\")\n<?xml version='1.0'?>\n<methodCall>\n<methodName>my_method</methodName>\n<params>\n<param>\n<value><int>1</int></value>\n</param>\n<param>\n<value><string>รร</string></value>\n</param>\n</params>\n</methodCall>\n\nBasically, it has the benefits that Mapad mentions about JSON, with the extra functionality of method invocation, at the expense of (probably marginal) processing costs and (probably marginal) programming complexity.\nDoug Hellman has good tutorials for both the client and server pieces of the Python XML-RPC libs here.\n"
] | [
8,
4,
0
] | [] | [] | [
"java",
"networking",
"python"
] | stackoverflow_0000377556_java_networking_python.txt |
Q:
Best way to return the language of a given string
More specifically, I'm trying to check if a given string (a sentence) is in Turkish.
I can check if the string has Turkish characters such as Ç, Ş, Ü, Ö, Ğ etc. However that's not very reliable as those might be converted to C, S, U, O, G before I receive the string.
Another method is to have the 100 most used words in Turkish and check if the sentence includes any/some of those words. I can combine these two methods and use a point system.
What do you think is the most efficient way to solve my problem in Python?
Related question: (human) Language of a document (Perl, Google Translation API)
A:
One option would be to use a Bayesian Classifier such as Reverend. The Reverend homepage gives this suggestion for a naive language detector:
from reverend.thomas import Bayes
guesser = Bayes()
guesser.train('french', 'le la les du un une je il elle de en')
guesser.train('german', 'der die das ein eine')
guesser.train('spanish', 'el uno una las de la en')
guesser.train('english', 'the it she he they them are were to')
guesser.guess('they went to el cantina')
guesser.guess('they were flying planes')
guesser.train('english', 'the rain in spain falls mainly on the plain')
guesser.save('my_guesser.bay')
Training with more complex token sets would strengthen the results. For more information on Bayesian classification, see here and here.
A:
A simple statistical method that I've used before:
Get a decent amount of sample training text in the language you want to detect. Split it up into trigrams, e.g.
"Hello foobar" in trigrams is:
'Hel', 'ell', 'llo', 'lo ', 'o f', ' fo', 'foo', 'oob', 'oba', 'bar'
For all of the source data, count up the frequency of occurrence of each trigram, presumably in a dict where key=trigram and value=frequency. You can limit this to the top 300 most frequent 3-letter combinations or something if you want. Pickle the dict away somewhere.
To tell if a new sample of text is written in the same language, repeat the above steps for the sample text. Now, all you have to do is compute a correlation between the sample trigram frequencies and the training trigram frequencies. You'll need to play with it a bit to pick a threshold correlation above which you are willing to consider input to be turkish or not.
This method has been shown to be highly accurate, beating out more sophisticated methods, see
Cavnar & Trenkle (1994): "N-Gram-Based Text Categorization"
Using trigrams solves the problem of using word lists, as there is a vast number of words in any given language, especially given different grammatical permutations. I've tried looking for common words; the problem is they often give a false positive for some other language, or themselves have many permutations. The statistical method doesn't require a lot of storage space and does not require complex parsing. By the way, this method only works for languages with a phonetic writing system; it works poorly, if at all, with languages that use an ideographic writing system (e.g. Chinese, Japanese, Korean).
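A rough sketch of the whole pipeline (the training file name and the 0.3 threshold are placeholders you would tune against your own data):
from __future__ import division
import re

def trigram_freqs(text):
    # Collapse whitespace and lowercase before slicing into 3-char windows.
    text = re.sub(r'\s+', ' ', text.lower())
    freqs = {}
    for i in range(len(text) - 2):
        tri = text[i:i + 3]
        freqs[tri] = freqs.get(tri, 0) + 1
    return freqs

def correlation(sample, trained):
    # Cosine similarity between the two frequency profiles.
    common = set(sample) & set(trained)
    dot = sum(sample[t] * trained[t] for t in common)
    norm = (sum(v * v for v in sample.values()) *
            sum(v * v for v in trained.values())) ** 0.5
    return dot / norm if norm else 0.0

turkish = trigram_freqs(open('turkish_training_text.txt').read())

def looks_turkish(sentence, threshold=0.3):
    return correlation(trigram_freqs(sentence), turkish) > threshold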
Alternatively wikipedia has a section on Turkish in its handy language recognition chart.
| Best way to return the language of a given string | More specifically, I'm trying to check if a given string (a sentence) is in Turkish.
I can check if the string has Turkish characters such as Ç, Ş, Ü, Ö, Ğ etc. However that's not very reliable as those might be converted to C, S, U, O, G before I receive the string.
Another method is to have the 100 most used words in Turkish and check if the sentence includes any/some of those words. I can combine these two methods and use a point system.
What do you think is the most efficient way to solve my problem in Python?
Related question: (human) Language of a document (Perl, Google Translation API)
| [
"One option would be to use a Bayesian Classifier such as Reverend. The Reverend homepage gives this suggestion for a naive language detector:\nfrom reverend.thomas import Bayes\nguesser = Bayes()\nguesser.train('french', 'le la les du un une je il elle de en')\nguesser.train('german', 'der die das ein eine')\nguesser.train('spanish', 'el uno una las de la en')\nguesser.train('english', 'the it she he they them are were to')\nguesser.guess('they went to el cantina')\nguesser.guess('they were flying planes')\nguesser.train('english', 'the rain in spain falls mainly on the plain')\nguesser.save('my_guesser.bay')\n\nTraining with more complex token sets would strengthen the results. For more information on Bayesian classification, see here and here.\n",
"A simple statistical method that I've used before:\nGet a decent amount of sample training text in the language you want to detect. Split it up into trigrams, e.g.\n\"Hello foobar\" in trigrams is:\n 'Hel', 'ell', 'llo', 'lo ', 'o f', ' fo', 'foo', 'oob', 'oba', 'bar'\nFor all of the source data, count up the frequency of occurrence of each trigram, presumably in a dict where key=trigram and value=frequency. You can limit this to the top 300 most frequent 3-letter combinations or something if you want. Pickle the dict away somewhere.\nTo tell if a new sample of text is written in the same language, repeat the above steps for the sample text. Now, all you have to do is compute a correlation between the sample trigram frequencies and the training trigram frequencies. You'll need to play with it a bit to pick a threshold correlation above which you are willing to consider input to be turkish or not.\nThis method has been shown to be highly accurate, beating out more sophisticated methods, see\nCavnar & Trenkle (1994): \"N-Gram-Based Text Categorization\"\nUsing trigrams solves the problem of using word lists, as there is a vast number of words in any given language, especially given different grammatical permutations. I've tried looking for common words, the problem is they often give a false positive for some other language, or themselves have many permutations. The statistical method doesn't require a lot of storage space and does not require complex parsing. By the way this method only works for languages with a phonetic writing system, it works poorly if at all with languages that use an ideographic language (i.e. Chinese, Japanese, Korean).\nAlternatively wikipedia has a section on Turkish in its handy language recognition chart.\n"
] | [
13,
10
] | [
"Why not just use an existing spell checking library?\nSpell check for several languages, choose language with lowest error count.\n"
] | [
-1
] | [
"algorithm",
"python",
"string"
] | stackoverflow_0000383966_algorithm_python_string.txt |
Q:
Code refactoring help - how to reorganize validations
We have a web application that takes user inputs or database lookups to form some operations against some physical resources. The design can be simply presented as the following diagram:
user input <=> model object <=> database storage
validations are needed with requests coming from user input but NOT when coming from database lookup hits (since if a record exists, those attributes must have already been validated before). I am trying to refactor the code so that the validations happen in the object constructor instead of the old way (a few separate validation routines).
How would you decide which way is better? (The fundamental difference between method 1 (the old way) and method 2 is that validations in 1 are optional and decoupled from object instantiation, while 2 binds them together and makes them mandatory for all requests.)
Here are two example code snippets for design 1 and 2:
Method 1:
# For processing a single request.
# Steps: 1. Validate all incoming data. 2. Instantiate the object.
ValidateAttributes(request)  # raise Exceptions if failed
resource = Resource(**request)
Method 2:
# Have to extract out this since it does not have anything to do with
# the object.
# raise Exceptions if some required params missing.
# steps: 1. Check whether it's a batch request. 2. Instantiate the object.
# (validations are performed inside the constructor)
CheckIfBatchRequest(request)
resource = Resource(**request) # raise Exceptions when validations failed
In a batch request:
Method 1:
# steps: 1. validate each request and return error to the client if any found.
# 2. perform the object instantiation and creation process. Exceptions are
# captured.
# 3. when all finished, email out any errors.
for request in batch_requests:
    try:
        ValidateAttributes(request)
    except SomeException, e:
        return ErrorPage(e)

errors = []
for request in batch_requests:
    try:
        CreateResource(Resource(**request), request)
    except CreationError, e:
        errors.append('failed to create with error: %s' % e)
email(errors)
Method 2:
# steps: 1. validate batch job related data from the request.
# 2. If success, create objects for each request and do the validations.
# 3. If exception, return error found, otherwise,
# return a list of pairs with (object, request)
# 4. Do the creation process and email out any errors if encountered.
CheckIfBatchRequest(request)

request_objects = []
for request in batch_requests:
    try:
        resource = Resource(**request)
    except SomeException, e:
        return ErrorPage(e)
    request_objects.append((resource, request))

email(CreateResource(request_objects))  # the CreateResource will also need to be refactored.
Pros and Cons as I can see here are:
Method 1 follows the business logic more closely. No redundant validation checks when objects come from a db lookup. The validation routines are more maintainable and readable.
Method 2 makes things easy and clean for the caller. Validations are mandatory even when the data comes from a db lookup. Validations are less maintainable and readable.
A:
Doing validation in the constructor really isn't the "Django way". Since the data you need to validate is coming from the client-side, using new forms (probably with a ModelForm) is the most idiomatic method to validate because it wraps all of your concerns into one API: it provides sensible validation defaults (with the ability to easily customize), plus model forms integrates the data-entry side (the html form) with the data commit (model.save()).
However, it sounds like you have what may be a mess of a legacy project; it may be outside the scope of your time to rewrite all your form handling to use new forms, at least initially. So here are my thoughts:
First of all, it's not "non-Djangonic" to put some validation in the model itself - after all, html form submissions may not be the only source of new data. You can override the save() method or use signals to either clean the data on save or throw an exception on invalid data. Long term, Django will have model validation, but it's not there yet; in the interim, you should consider this a "safety" to ensure you don't commit invalid data to your DB. In other words, you still need to validate field by field before committing so you know what error to display to your users on invalid input.
What I would suggest is this. Create new forms classes for each item you need to validate, even if you're not using them initially. Malcolm Tredinnick outlined a technique for doing model validation using the hooks provided in the forms system. Read up on that (it's really quite simple and elegant), and hook in into your models. Once you've got the newforms classes defined and working, you'll see that it's not very difficult - and will in fact greatly simplify your code - if you rip out your existing form templates and corresponding validation, and handle your form POSTs using the forms framework. There is a bit of a learning curve, but the forms API is extremely well thought out and you'll be grateful how much cleaner it will make your code.
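To make that concrete, here is a minimal sketch of what one of those form classes might look like (the Resource model comes from the question; the name field and import path are guesses):
from django import forms
from myapp.models import Resource

class ResourceForm(forms.ModelForm):
    class Meta:
        model = Resource

    def clean_name(self):
        # Per-field validation hook: raise to reject, return the value to accept.
        name = self.cleaned_data['name']
        if not name.strip():
            raise forms.ValidationError("Name may not be blank.")
        return name

In a view you would then build ResourceForm(request.POST), call is_valid(), and either save() the form or render form.errors back to the user.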
A:
Thanks, Daniel, for your reply. Especially for the newforms API: I will definitely spend time digging into it and see if I can adopt it for the long-term benefits. But just for the sake of getting my work done this iteration (to meet my deadline before EOY), I'll probably still have to stick with the current legacy structure. After all, either way will get me to what I want; I just want to make it as sane and clean as possible without breaking too much.
So it sounds like doing validations in the model isn't too bad an idea. But in another sense, my old way of doing validations in views against the request also seems close to the concept of encapsulating them inside the newforms API (data validation is decoupled from model creation). Do you think it is OK to just keep my old design? It makes more sense to me to tackle this with the newforms API later instead of juggling them right now...
(Well, I got this refactoring suggestion from my code reviewer, but I am really not so sure that my old way violates any MVC patterns or is too complicated to maintain. I think my way makes more sense, but my reviewer thought binding validation and model creation together makes more sense...)
| Code refactoring help - how to reorganize validations | We have a web application that takes user inputs or database lookups to form some operations against some physical resources. The design can be simply presented as following diagram:
user input <=> model object <=> database storage
validations are needed with request coming from user input but NOT when coming from database lookup hits (since if a record exists, those attributes must have already been validated before). I am trying to refactoring the code so that the validations happen in the object constructor instead of the old way (a separate few validation routines)
How would you decide which way is better? (The fundamental difference of method 1 (the old way) and 2 is that validations in 1 are not mandatory and decoupled from object instantiation but 2 binds them and makes them mandatory for all requests)
Here are two example code snippets for design 1 and 2:
Method 1:
# For processing single request.
# Steps: 1. Validate all incoming data. 2. instantiate the object.
ValidateAttributes(request) # raise Exceptions if failed
resource = Resource(**request)
Method 2:
# Have to extract out this since it does not have anything to do with
# the object.
# raise Exceptions if some required params missing.
# steps: 1. Check whether it's a batch request. 2. instantiate the object.
# (validations are performed inside the constructor)
CheckIfBatchRequest(request)
resource = Resource(**request) # raise Exceptions when validations failed
In a batch request:
Method 1:
# steps: 1. validate each request and return error to the client if any found.
# 2. perform the object instantiate and creation process. Exceptions are
# captured.
# 3. when all finished, email out any errors.
for request in batch_requests:
try:
ValidateAttributes(request)
except SomeException, e:
return ErrorPage(e)
errors = []
for request in batch_requests:
try:
CreateResource(Resource(**request), request)
except CreationError, e:
errors.append('failed to create with error: %s' % e)
email(errors)
Method 2:
# steps: 1. validate batch job related data from the request.
# 2. If success, create objects for each request and do the validations.
# 3. If exception, return error found, otherwise,
# return a list of pairs with (object, request)
# 4. Do the creation process and email out any errors if encountered.
CheckIfBatchRequest(request)
request_objects = []
for request in batch_requests:
try:
resource = Resource(**request)
except SomeException, e:
return ErrorPage(e)
request_objects.append((resource, request))
email(CreateResource(request_objects)) # the CreateResource will also need to be refactored.
Pros and Cons as I can see here are:
Method 1 follows the business logic more closely. There are no redundant validation checks when objects come from a db lookup. The validation routines are easier to maintain and read.
Method 2 is easy and clean for the caller. Validations are mandatory even for objects from a db lookup. The validations are harder to maintain and read.
| [
"Doing validation in the constructor really isn't the \"Django way\". Since the data you need to validate is coming from the client-side, using new forms (probably with a ModelForm) is the most idiomatic method to validate because it wraps all of your concerns into one API: it provides sensible validation defaults (with the ability to easily customize), plus model forms integrates the data-entry side (the html form) with the data commit (model.save()).\nHowever, it sounds like you have what may be a mess of a legacy project; it may be outside the scope of your time to rewrite all your form handling to use new forms, at least initially. So here are my thoughts:\nFirst of all, it's not \"non-Djangonic\" to put some validation in the model itself - after all, html form submissions may not be the only source of new data. You can override the save() method or use signals to either clean the data on save or throw an exception on invalid data. Long term, Django will have model validation, but it's not there yet; in the interim, you should consider this a \"safety\" to ensure you don't commit invalid data to your DB. In other words, you still need to validate field by field before committing so you know what error to display to your users on invalid input.\nWhat I would suggest is this. Create new forms classes for each item you need to validate, even if you're not using them initially. Malcolm Tredinnick outlined a technique for doing model validation using the hooks provided in the forms system. Read up on that (it's really quite simple and elegant), and hook in into your models. Once you've got the newforms classes defined and working, you'll see that it's not very difficult - and will in fact greatly simplify your code - if you rip out your existing form templates and corresponding validation, and handle your form POSTs using the forms framework. There is a bit of a learning curve, but the forms API is extremely well thought out and you'll be grateful how much cleaner it will make your code.\n",
"Thanks Daniel for your reply. Especially for the newforms API, I will definitely spend time digging into it and see if I can adopt it for the better long-term benefits. But just for the sake of getting my work done for this iteration (meet my deadline before EOY), I'd probably still have to stick with the current legacy structure, after all, either way will get me to what I want, just that I want to make it sane and clean as possible as I can without breaking too much.\nSo sounds like doing validations in model isn't a too bad idea, but in another sense, my old way of doing validations in views against the request seems also close to the concept of encapsulating them inside the newforms API (data validation is decoupled from model creation). Do you think it is ok to just keep my old design? It make more sense to me to touch this with the newforms API instead of juggling them right now...\n(Well I got this refactoring suggestion from my code reviewer but I am really not so sure that my old way violates any mvc patterns or too complicated to maintain. I think my way makes more senses but my reviewer thought binding validation and model creation together makes more sense...)\n"
] | [
1,
0
] | [] | [] | [
"django",
"optimization",
"python",
"refactoring",
"web_applications"
] | stackoverflow_0000384291_django_optimization_python_refactoring_web_applications.txt |
Q:
Checking 2-dimensional array (like eight queens puzzle)
My problem is very similar to eight queens puzzle.
I've got 2-dimensional array (N x N) that for example, looks like this:
0,0,0,0,1 y
0,0,0,0,0 |
0,0,0,0,0 V
0,0,0,1,0
0,0,0,0,0
x->
I'm checking horizontally, vertically and diagonally for occurrences of 1
\,0,|,0,/
0,\,|,/,0
-,-,1,-,-
0,/,|,\,0
/,0,|,0,\
I'm thinking about storing only the (x,y) postions of "1"'s in a list
[[4,0],[3,3]]
and solving it mathematically, check every position of "1" with another (x1,y1)<->(x2,y2),
if x1 == x2 or y1 == y2 we have a collision! if not check:
x2 == x1 + z;
y2 == y1 + z;
x2 == x1 - z;
y2 == y1 - z;
(???)
where z is +/- that ( x1+z in 0..N ) and ( y1+z in 0..N ) .......
My problem is checking for diagonal collision, is there a better way to do it???
A:
One possible solution:
def collision(x1, y1, x2, y2):
return x1 == x2 or y1 == y2 or abs(x1-x2) == abs(y1-y2)
i.e. there is a collision if the two points are in the same row, in the same column, or on the same diagonal (vertical distance == horizontal distance).
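For completeness, a hypothetical way to apply it to the list of positions from the question:
from itertools import combinations

ones = [(4, 0), (3, 3)]  # the stored (x, y) positions of the 1s

for (x1, y1), (x2, y2) in combinations(ones, 2):
    if collision(x1, y1, x2, y2):
        print "collision between", (x1, y1), "and", (x2, y2)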
A:
Your description sounds like an instance of an exact cover problem, which can be solved using an algorithm Knuth calls Algorithm X. I have implemented a Sudoku solver in Javascript using this technique. You can probably find implementations in Python, too.
A:
I think it would be much faster if you didn't solve it mathematically but first check all rows for multiple occurrences of 1s, then all columns and finally all diagonal lines.
Here is some code to test the diagonal lines in a simple way. (It's JavaScript, sorry!)
for (var column = -n; column < n; column++) {
    var count = 0;
    for (var row = 0; row < n; row++) {
        // skip coordinates that fall outside the field.
        if (column + row >= n) {
            break;
        }
        if (column + row < 0) {
            continue;
        }
        if (field[row][column + row] == 1) {
            count++;
            if (count == 2)
                break; // collision
        }
    }
}
This method would have a complexity of O(n^2), whereas the one you suggested has a complexity of O(n^2 + k^2) (k being the number of 1s) If k is always small this should be no problem.
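A rough Python version of the same scan-based idea (field is an N x N list of lists; both diagonal directions are collected explicitly):
def has_collision(field, n):
    lines = list(field)                                           # rows
    lines += [[field[r][c] for r in range(n)] for c in range(n)]  # columns
    for d in range(-(n - 1), n):  # "\" diagonals: r - c == d
        lines.append([field[r][r - d] for r in range(n) if 0 <= r - d < n])
    for d in range(2 * n - 1):    # "/" diagonals: r + c == d
        lines.append([field[r][d - r] for r in range(n) if 0 <= d - r < n])
    return any(sum(line) >= 2 for line in lines)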
A:
Assuming you actually do have an N-dimensional space, which you probably don't, you can use this collision detector:
def collision(t1, t2):
return len(set([abs(a-b) for a,b in zip(t1, t2)] + [0])) <= 2
Pass it a pair of tuples with whatever arity you like, and it will return true if the two points lie on any N-dimensional diagonal (a shared row or column counts too, since a difference of 0 is allowed).
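For instance, with the 2-tuples from the question:
print collision((4, 0), (3, 3))  # False - no shared row, column or diagonal
print collision((0, 2), (3, 5))  # True - same "\" diagonal
print collision((1, 0), (1, 4))  # True - same row counts as well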
| Checking 2-dimensional array (like eight queens puzzle) | My problem is very similar to eight queens puzzle.
I've got 2-dimensional array (N x N) that for example, looks like this:
0,0,0,0,1 y
0,0,0,0,0 |
0,0,0,0,0 V
0,0,0,1,0
0,0,0,0,0
x->
I'm checking horizontally, vertically and diagonally for occurrences of 1
\,0,|,0,/
0,\,|,/,0
-,-,1,-,-
0,/,|,\,0
/,0,|,0,\
I'm thinking about storing only the (x,y) postions of "1"'s in a list
[[4,0],[3,3]]
and solving it mathematically, check every position of "1" with another (x1,y1)<->(x2,y2),
if x1 == x2 or y1 == y2 we have a collision! if not check:
x2 == x1 + z;
y2 == y1 + z;
x2 == x1 - z;
y2 == y1 - z;
(???)
where z is +/- that ( x1+z in 0..N ) and ( y1+z in 0..N ) .......
My problem is checking for diagonal collision, is there a better way to do it???
| [
"One possible solution:\ndef collision(x1, y1, x2, y2):\n return x1 == x2 or y1 == y2 or abs(x1-x2) == abs(y1-y2)\n\ni.e. there is a collision if the two points are on the same horizontal row, same vertical row or same diagonal (vertical distance == horizontal distance).\n",
"Your description sounds like an instance of an exact cover problem, which can be solved using an algorithm Knuth calls Algorithm X. I have implemented a Sudoku solver in Javascript using this technique. You can probably find implementations in Python, too.\n",
"I think it would be much faster if you didn't solve it mathematically but first check all rows for multiple occurrences of 1s, then all columns and finally all diagonal lines.\nHere is some code to test the diagonal lines in a simple way. (It's JavaScript, sorry!)\nvar count = 0;\nfor (column = -n; column < n; column++) {\n for (row = 0; row < n; row++) {\n // conditions for which there are no valid coordinates.\n if (column + row > 6) {\n break;\n }\n if (column < 0) {\n continue;\n\n if (field[row][column] == 1) {\n count++;\n if (count == 2)\n break; // collision\n }\n }\n}\n\nThis method would have a complexity of O(n^2), whereas the one you suggested has a complexity of O(n^2 + k^2) (k being the number of 1s) If k is always small this should be no problem.\n",
"Assuming you actually do have an N-dimensional space, which you probably don't, you can use this collision detector:\ndef collision(t1, t2):\n return len(set([abs(a-b) for a,b in zip(t1, t2)] + [0])) <= 2\n\nPass it a pair of tuples with whatever arity you like, and it will return true if the two points lie on any N-dimensional diagonal.\n"
] | [
20,
2,
0,
0
] | [] | [] | [
"arrays",
"puzzle",
"python"
] | stackoverflow_0000384874_arrays_puzzle_python.txt |
Q:
Conway's Game Of Life
I am currently writing a program about Conway's Game of Life and I am really a beginner at Python; I don't know how to start at all. Can anybody help me with it?
A:
http://plife.sourceforge.net/
Send the 50 quid to me :)
(I'm kidding, of course)
A:
You probably don't need to pay to learn python. Implementing cellular automata makes for good starting project. The best place to start with python is the official tutorial and you can follow that with dive into python.
The answers here and here may be helpful as well.
If you can bear the self-praising and claims to godness Wolfram's book is a good way to get a feel for cellular automata, but don't take the book itself too seriously (that's a separate issue that can fill several blog posts). He also has a set of papers on this stuff that is published as a book that goes into all the details. And of course if you just google for cellular automata and Conway's game you'll find a myriad of implementations and explanations.
A:
You'll probably get some good documentation here : http://www.imdb.com/title/tt0085959/.
A:
I happened to post a blog on implementing John Conway's Game of Life in XAML/WPF using embedded Python. It might be of interest to you.
A:
try here instead.
http://www.elance.com/p/landing/buyer.html
http://www.rentacoder.com/RentACoder/DotNet/default.aspx
These are places where you can hire people to do small freelance work for you.
A:
There's a few people on the pygame.org website who've done their versions of the game of life. Maybe they'll be of help:
Pygame.org - search "life"
| Conway's Game Of Life | I am currently writing a program about Conway's Game of Life and I am really a beginner at Python; I don't know how to start at all. Can anybody help me with it?
| [
"http://plife.sourceforge.net/\nSend the 50 quid to me :)\n(I'm kidding, of course)\n",
"You probably don't need to pay to learn python. Implementing cellular automata makes for good starting project. The best place to start with python is the official tutorial and you can follow that with dive into python.\nThe answers here and here may be helpful as well.\nIf you can bear the self-praising and claims to godness Wolfram's book is a good way to get a feel for cellular automata, but don't take the book itself too seriously (that's a separate issue that can fill several blog posts). He also has a set of papers on this stuff that is published as a book that goes into all the details. And of course if you just google for cellular automata and Conway's game you'll find a myriad of implementations and explanations.\n",
"You'll probably get some good documentation here : http://www.imdb.com/title/tt0085959/.\n",
"I happened to post a blog on implementing John Conway's Game of Life in XAML/WPF using embedded Python. It might be of interest to you.\n",
"try here instead.\nhttp://www.elance.com/p/landing/buyer.html\nhttp://www.rentacoder.com/RentACoder/DotNet/default.aspx\nThese are places where you can hire people to do small freelance work for you.\n",
"There's a few people on the pygame.org website who've done their versions of the game of life. Maybe they'll be of help:\nPygame.org - search \"life\"\n"
] | [
4,
3,
1,
1,
0,
0
] | [] | [] | [
"cellular_automata",
"conways_game_of_life",
"python"
] | stackoverflow_0000350230_cellular_automata_conways_game_of_life_python.txt |
Q:
need help - bit-field conversion
I want to convert strings to bit-fields.Also,convert them to binary and then use.
Need help with this..help me ..
A:
I think the struct module is what you're after.
Example of usage:
>>> from struct import *
>>> pack('hhl', 1, 2, 3)
'\x00\x01\x00\x02\x00\x00\x00\x03'
>>> unpack('hhl', '\x00\x01\x00\x02\x00\x00\x00\x03')
(1, 2, 3)
>>> calcsize('hhl')
8
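If all you want is to see a string's bytes as bits, here is a small sketch that skips struct entirely (Python 2.6, since it uses str.format):
def string_to_bits(s):
    return ' '.join('{0:08b}'.format(ord(ch)) for ch in s)

print string_to_bits('Hi')  # 01001000 01101001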
A:
they're all binary already... Which language are we talking about?
I'd start by looking at the string as an array of characters and working with each character individually.
| need help - bit-field conversion | I want to convert strings to bit-fields. Also, convert them to binary and then use them.
I need help with this.
| [
"I think the struct module is what you're after.\nExample of usage: \n>>> from struct import *\n>>> pack('hhl', 1, 2, 3)\n'\\x00\\x01\\x00\\x02\\x00\\x00\\x00\\x03'\n>>> unpack('hhl', '\\x00\\x01\\x00\\x02\\x00\\x00\\x00\\x03')\n(1, 2, 3)\n>>> calcsize('hhl')\n8\n\n",
"they're all binary already... Which language are we talking about?\nI'd start by looking at the string as an array of characters and working with each character individually.\n"
] | [
2,
0
] | [] | [] | [
"bit",
"bit_fields",
"python",
"string"
] | stackoverflow_0000386151_bit_bit_fields_python_string.txt |
Q:
Does PyS60 produce sis files that are native?
I am currently looking at developing a mobile app for the S60 platform and am specifically looking at PyS60. It seems to suggest that it can be compiled into native .sis files without the need for an embedded python interpreter. Reading through the documentation I could not find any statements where this is explicitly mentioned. While I am right now downloading the SDKs, Emulators, and the whole bunch of tool chains needed to test the development out on Linux, I thought I would ask here a bit while I am doing that.
A:
Once you've written your code in python, you can convert this to a .sis file using ensymble.
http://code.google.com/p/ensymble/
This software allows you to make your .py file into a .sis file using the py2sis option - however, it won't be much use on any phone without python installed, so you may also need to use ensymble to merge your newly-created .sis with the .sis file for python, with a command like
./ensymble.py mergesis --verbose your-script-name.sis PythonForS60-1-4-5-3rdEd.sis final-app-name.sis
the resulting final-app-name.sis file will install both your file and also python.
A:
Linux is not officially supported for Series60 development yet. You will save yourself a lot of headache using Windows, weirdly enough.
As far as Python is concerned, I think the developed application is packaged into a .sis file but still requires the PyS60 interpreter to run once installed.
| Does PyS60 produce sis files that are native? | I am currently looking at developing a mobile app for the S60 platform and am specifically looking at PyS60. It seems to suggest that it can be compiled into native .sis files without the need for an embedded python interpreter. Reading through the documentation I could not find any statements where this is explicitly mentioned. While I am right now downloading the SDKs, Emulators, and the whole bunch of tool chains needed to test the development out on Linux, I thought I would ask here a bit while I am doing that.
| [
"Once you've written your code in python, you can convert this to a .sis file using ensymble.\nhttp://code.google.com/p/ensymble/\nThis software allows you to make your .py file into a .sis file using the py2sis option - however, it won't be much use on any phone without python installed, so you may also need to use ensymble to merge your newly-created .sis with the .sis file for python, with a command like\n./ensymble.py mergesis --verbose your-script-name.sis PythonForS60-1-4-5-3rdEd.sis final-app-name.sis\nthe resulting final-app-name.sis file will install both your file and also python.\n",
"Linux is not officially supported for Series60 development yet. You will save yourself a lot of headache using Windows, weirdly enough.\nAs far as Python is oncerned, I think the developed application is packaged into a .sis file but still requires the PyS60 interpreter to run once installed.\n"
] | [
14,
1
] | [] | [] | [
"pys60",
"python",
"s60",
"symbian"
] | stackoverflow_0000334765_pys60_python_s60_symbian.txt |
Q:
Python 3 porting workflow?
I have a small project I want to try porting to Python 3 - how do I go about this?
I have made the code run without warnings using python2.6 -3 (mostly removing .has_key() calls), but I am not sure of the best way to use the 2to3 tool.
Use the 2to3 tool to convert this source code to 3.0 syntax. Do not manually edit the output!
Running 2to3 something.py outputs a diff, which isn't useful on its own. Using the --write flag overwrites something.py and creates a backup. It seems like I have to do..
2to3 something.py
python3.0 something.py
mv something.py.bak something.py
vim something.py
# repeat
..which is a bit round-a-bout - ideally I could do something like..
mv something.py py2.6_something.py # once
2to3 py2.6_something.py --write-file something.py
vim py2.6_something.py
# repeat
A:
Aha, you can pipe the 2to3 output to the patch command, which can write the modified file to a new file:
mv something.py py2.6_something.py
2to3 py2.6_something.py | patch -o something.py
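If you end up repeating that a lot, one option is a small Python wrapper around the same pipeline (just a sketch; it assumes 2to3 and patch are on your PATH):
import subprocess

def convert(src, dst):
    # equivalent of: 2to3 src | patch -o dst
    diff = subprocess.Popen(['2to3', src], stdout=subprocess.PIPE)
    patch = subprocess.Popen(['patch', '-o', dst], stdin=diff.stdout)
    diff.stdout.close()
    patch.communicate()

convert('py2.6_something.py', 'something.py')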
A:
2.x should be your codebase of active development, so 2to3 should really be run in a branch or temporary directory. I'm not sure why you'd want to have the 2.x and 3.x versions lying around in the same directory. distutils has a build_2to3 script that will run 2to3 on a 3.0 install.
| Python 3 porting workflow? | I have a small project I want to try porting to Python 3 - how do I go about this?
I have made the code run without warnings using python2.6 -3 (mostly removing .has_key() calls), but I am not sure of the best way to use the 2to3 tool.
Use the 2to3 tool to convert this source code to 3.0 syntax. Do not manually edit the output!
Running 2to3 something.py outputs a diff, which isn't useful on its own. Using the --write flag overwrites something.py and creates a backup. It seems like I have to do..
2to3 something.py
python3.0 something.py
mv something.py.bak something.py
vim something.py
# repeat
..which is a bit round-a-bout - ideally I could do something like..
mv something.py py2.6_something.py # once
2to3 py2.6_something.py --write-file something.py
vim py2.6_something.py
# repeat
| [
"Aha, you can pipe the 2to3 output to the patch command, which can write the modified file to a new file:\nmv something.py py2.6_something.py\n2to3 py2.6_something.py | patch -o something.py\n\n",
"2.x should be your codebase of active development, so 2to3 should really be run in a branch or temporary directory. I'm not sure why you'd want to have the 2.x and 3.x versions lying around in the same directory. distutils has a build_2to3 script that will run 2to3 on a 3.0 install.\n"
] | [
6,
1
] | [] | [] | [
"porting",
"python",
"python_3.x"
] | stackoverflow_0000385394_porting_python_python_3.x.txt |
Q:
US-format phone numbers to links in Python
I'm working a piece of code to turn phone numbers into links for mobile phone - I've got it but it feels really dirty.
import re
from string import digits
PHONE_RE = re.compile('([(]{0,1}[2-9]\d{2}[)]{0,1}[-_. ]{0,1}[2-9]\d{2}[-_. ]{0,1}\d{4})')
def numbers2links(s):
result = ""
last_match_index = 0
for match in PHONE_RE.finditer(s):
raw_number = match.group()
number = ''.join(d for d in raw_number if d in digits)
call = '<a href="tel:%s">%s</a>' % (number, raw_number)
result += s[last_match_index:match.start()] + call
last_match_index = match.end()
result += s[last_match_index:]
return result
>>> numbers2links("Ghost Busters at (555) 423-2368! How about this one: 555 456 7890! 555-456-7893 is where its at.")
'Ghost Busters at <a href="tel:5554232368">(555) 423-2368</a>! How about this one: <a href="tel:5554567890">555 456 7890</a>! <a href="tel:5554567893">555-456-7893</a> is where its at.'
Is there any way I could restructure the regex or the regex method I'm using to make this cleaner?
Update
To clarify, my question is not about the correctness of my regex - I realize that it's limited. Instead I'm wondering if anyone had any comments on the method of substituting in links for the phone numbers - is there any way I could use re.replace or something like that instead of the string hackery that I have?
A:
Nice first take :) I think this version is a bit more readable (and probably a teensy bit faster). The key thing to note here is the use of re.sub. Keeps us away from the nasty match indexes...
import re
PHONE_RE = re.compile('([(]{0,1}[2-9]\d{2}[)]{0,1}[-_. ]{0,1}[2-9]\d{2}[-_. ]{0,1}\d{4})')
NON_NUMERIC = re.compile('\D')
def numbers2links(s):
def makelink(mo):
raw_number = mo.group()
number = NON_NUMERIC.sub("", raw_number)
return '<a href="tel:%s">%s</a>' % (number, raw_number)
return PHONE_RE.sub(makelink, s)
print numbers2links("Ghost Busters at (555) 423-2368! How about this one: 555 456 7890! 555-456-7893 is where its at.")
A note: In my practice, I've not noticed much of a speedup pre-compiling simple regular expressions like the two I'm using, even if you're using them thousands of times. The re module may have some sort of internal caching - didn't bother to read the source and check.
Also, I replaced your method of checking each character to see if it's in string.digits with another re.sub() because I think my version is more readable, not because I'm certain it performs better (although it might).
A:
Why not re-use the work of others - for example, from RegExpLib.com?
My second suggestion is to remember there are other countries besides the USA, and quite a few of them have telephones ;-) Please don't forget us during your software development.
Also, there is a standard for the formatting of telephone numbers; the ITU's E.123. My recollection of the standard was that what it describes doesn't match well with popular usage.
Edit: I mixed up G.123 and E.123. Oops. Props Bortzmeyer
A:
Your regexp only parses a specific format, which is not the international standard. If you limit yourself to one country, it may work.
Otherwise, the international standard is ITU E.123 : "Notation for national and international telephone numbers,
e-mail addresses and Web addresses"
A:
First off, reliably capturing phone numbers with a single regular expression is notoriously difficult with a strong tendency to being impossible. Not every country has a definition of a "phone number" that is as narrow as it is in the U.S. Even in the U.S., things are more complicated than they seem (from the Wikipedia article on the North American Numbering Plan):
A) Country code: optional prefix ("1" or "+1" or "001")
((00|\+)?1)?
B) Numbering Plan Area code (NPA): cannot begin with 1, digit 2 cannot be 9, optional parentheses
\(?[2-9][0-8][0-9]\)?
C) Exchange code (NXX): cannot begin with 1, cannot end with "11"
[2-9](00|[2-9]{2})
D) Station Code: four digits, cannot all be 0 (I suppose)
(?!0{4})\d{4}
E) an optional extension may follow
([x#-]\d+)?
S) parts of the number are separated by spaces, dashes, dots (or not)
[. -]?
So, the basic regex for the U.S. would be:
((00|\+)?1[. -]?)?\(?[2-9][0-8][0-9]\)?[. -]?[2-9](00|[2-9]{2})[. -]?(?!0{4})\d{4}([. -]?[x#-]\d+)?
| A |S | | B | S | C | S | D | S | E |
And that's just for the relatively trivial numbering plan of the U.S., and even there it certainly is not covering all subtleties. If you want to make it reliable you have to develop a similar beast for all expected input languages.
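Written out with re.VERBOSE, the same pattern keeps the parts labelled:
import re

US_PHONE = re.compile(r"""
    ((00|\+)?1[. -]?)?         # A: optional country code
    \(?[2-9][0-8][0-9]\)?      # B: area code (NPA)
    [. -]?                     # S
    [2-9](00|[2-9]{2})         # C: exchange code (NXX)
    [. -]?                     # S
    (?!0{4})\d{4}              # D: station code
    ([. -]?[x#-]\d+)?          # E: optional extension
""", re.VERBOSE)

print US_PHONE.search("(555) 423-2368").group()  # (555) 423-2368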
A:
A few things that will clean up your existing regex without really changing the functionality:
Replace {0,1} with ?, [(] with \(, and [)] with \). You also might as well just make your [2-9] be a \d as well, so you can make those patterns be \d{3} and \d{4} for the last part. I doubt it will really increase the rate of false positives.
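Applying those substitutions to the pattern from the question (my reading of the suggestion) leaves the shorter equivalent:
PHONE_RE = re.compile(r'(\(?\d{3}\)?[-_. ]?\d{3}[-_. ]?\d{4})')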
| US-format phone numbers to links in Python | I'm working a piece of code to turn phone numbers into links for mobile phone - I've got it but it feels really dirty.
import re
from string import digits
PHONE_RE = re.compile('([(]{0,1}[2-9]\d{2}[)]{0,1}[-_. ]{0,1}[2-9]\d{2}[-_. ]{0,1}\d{4})')
def numbers2links(s):
result = ""
last_match_index = 0
for match in PHONE_RE.finditer(s):
raw_number = match.group()
number = ''.join(d for d in raw_number if d in digits)
call = '<a href="tel:%s">%s</a>' % (number, raw_number)
result += s[last_match_index:match.start()] + call
last_match_index = match.end()
result += s[last_match_index:]
return result
>>> numbers2links("Ghost Busters at (555) 423-2368! How about this one: 555 456 7890! 555-456-7893 is where its at.")
'Ghost Busters at <a href="tel:5554232368">(555) 423-2368</a>! How about this one: <a href="tel:5554567890">555 456 7890</a>! <a href="tel:5554567893">555-456-7893</a> is where its at.'
Is there any way I could restructure the regex or the regex method I'm using to make this cleaner?
Update
To clarify, my question is not about the correctness of my regex - I realize that it's limited. Instead I'm wondering if anyone had any comments on the method of substituting in links for the phone numbers - is there any way I could use re.replace or something like that instead of the string hackery that I have?
| [
"Nice first take :) I think this version is a bit more readable (and probably a teensy bit faster). The key thing to note here is the use of re.sub. Keeps us away from the nasty match indexes...\nimport re\n\nPHONE_RE = re.compile('([(]{0,1}[2-9]\\d{2}[)]{0,1}[-_. ]{0,1}[2-9]\\d{2}[-_. ]{0,1}\\d{4})')\nNON_NUMERIC = re.compile('\\D')\n\ndef numbers2links(s):\n\n def makelink(mo):\n raw_number = mo.group()\n number = NON_NUMERIC.sub(\"\", raw_number)\n return '<a href=\"tel:%s\">%s</a>' % (number, raw_number)\n\n return PHONE_RE.sub(makelink, s)\n\n\nprint numbers2links(\"Ghost Busters at (555) 423-2368! How about this one: 555 456 7890! 555-456-7893 is where its at.\")\n\nA note: In my practice, I've not noticed much of a speedup pre-compiling simple regular expressions like the two I'm using, even if you're using them thousands of times. The re module may have some sort of internal caching - didn't bother to read the source and check.\nAlso, I replaced your method of checking each character to see if it's in string.digits with another re.sub() because I think my version is more readable, not because I'm certain it performs better (although it might).\n",
"Why not re-use the work of others - for example, from RegExpLib.com?\nMy second suggestion is to remember there are other countries besides the USA, and quite a few of them have telephones ;-) Please don't forget us during your software development.\nAlso, there is a standard for the formatting of telephone numbers; the ITU's E.123. My recollection of the standard was that what it describes doesn't match well with popular usage.\nEdit: I mixed up G.123 and E.123. Oops. Props Bortzmeyer\n",
"Your regexp only parses a specific format, which is not the international standard. If you limit yourself to one country, it may work.\nOtherwise, the international standard is ITU E.123 : \"Notation for national and international telephone numbers,\ne-mail addresses and Web addresses\"\n",
"First off, reliably capturing phone numbers with a single regular expression is notoriously difficult with a strong tendency to being impossible. Not every country has a definition of a \"phone number\" that is as narrow as it is in the U.S. Even in the U.S., things are more complicated than they seem (from the Wikipedia article on the North American Numbering Plan):\n\nA) Country code: optional prefix (\"1\" or \"+1\" or \"001\")\n\n\n((00|\\+)?1)?\n\nB) Numbering Plan Area code (NPA): cannot begin with 1, digit 2 cannot be 9\n\n\n[2-9][0-8][0-9]\n\nC) Exchange code (NXX): cannot begin with 1, cannot end with \"11\", optional parentheses\n\n\n\\(?[2-9](00|[2-9]{2})\\)?\n\nD) Station Code: four digits, cannot all be 0 (I suppose)\n\n\n(?!0{4})\\d{4}\n\nE) an optional extension may follow\n\n\n([x#-]\\d+)?\n\nS) parts of the number are separated by spaces, dashes, dots (or not)\n\n\n[. -]?\n\n\nSo, the basic regex for the U.S. would be:\n((00|\\+)?1[. -]?)?\\(?[2-9][0-8][0-9]\\)?[. -]?[2-9](00|[2-9]{2})[. -]?(?!0{4})\\d{4}([. -]?[x#-]\\d+)?\n| A |S | | B | S | C | S | D | S | E |\n\nAnd that's just for the relatively trivial numbering plan of the U.S., and even there it certainly is not covering all subtleties. If you want to make it reliable you have to develop a similar beast for all expected input languages.\n",
"A few things that will clean up your existing regex without really changing the functionality:\nReplace {0,1} with ?, [(] with (, [)] with ). You also might as well just make your [2-9] b e a \\d as well, so you can make those patterns be \\d{3} and \\d{4} for the last part. I doubt it will really increase the rate of false positives.\n"
] | [
5,
1,
1,
1,
0
] | [] | [] | [
"mobile_website",
"phone_number",
"python",
"regex"
] | stackoverflow_0000385632_mobile_website_phone_number_python_regex.txt |
Q:
Tricky Python string literals in passing parameter to timeit.Timer() function
I'm having a hard time with the setup statement in Python's timeit.Timer(stmt, setup_stmt). I appreciate any help to get me out of this tricky problem:
So my snippet looks like this:
def compare(string1, string2):
# compare 2 strings
if __name__ == '__main__':
str1 = "This string has \n several new lines \n in the middle"
str2 = "This string hasn't any new line, but a single quote ('), in the middle"
t = timeit.Timer('compare(p1, p2)', "from __main__ import compare; p1=%s, p2=%s" % (str1,str2))
I don't know how to escape the metacharacter in the variable str1, str2 without changing their meaning in the setup statement:
"from __main__ import compare; p1=%s, p2=%s" % (str1,str2)
I tried various combination but always have the following errors:
SyntaxError: can't assign to literal
SyntaxError: EOL while scanning single-quoted string
SyntaxError: invalid syntax
A:
Consider This as an alternative.
t = timeit.Timer('compare(p1, p2)', "from __main__ import compare; p1=%r; p2=%r" % (str1,str2))
The %r uses the repr for the string, which Python always quotes and escapes correctly.
EDIT: Fixed code by changing a comma to a semicolon; the error is now gone.
A:
Why bother quoting the strings at all? Just use them directly.
ie. change your last line to:
t = timeit.Timer('compare(str1, str2)', "from __main__ import compare, str1, str2")
| Tricky Python string literals in passing parameter to timeit.Timer() function | I'm having a hard time with the setup statement in Python's timeit.Timer(stmt, setup_stmt). I appreciate any help to get me out of this tricky problem:
So my snippet looks like this:
def compare(string1, string2):
# compare 2 strings
if __name__ == '__main__':
str1 = "This string has \n several new lines \n in the middle"
str2 = "This string hasn't any new line, but a single quote ('), in the middle"
t = timeit.Timer('compare(p1, p2)', "from __main__ import compare; p1=%s, p2=%s" % (str1,str2))
I don't know how to escape the metacharacter in the variable str1, str2 without changing their meaning in the setup statement:
"from __main__ import compare; p1=%s, p2=%s" % (str1,str2)
I tried various combination but always have the following errors:
SyntaxError: can't assign to literal
SyntaxError: EOL while scanning single-quoted string
SyntaxError: invalid syntax
| [
"Consider This as an alternative.\nt = timeit.Timer('compare(p1, p2)', \"from __main__ import compare; p1=%r; p2=%r\" % (str1,str2))\n\nThe %r uses the repr for the string, which Python always quotes and escapes correctly.\nEDIT: Fixed code by changing a comma to a semicolon; the error is now gone.\n",
"Why bother quoting the strings at all? Just use them directly.\nie. change your last line to:\nt = timeit.Timer('compare(str1, str2)', \"from __main__ import compare, str1, str2\")\n\n"
] | [
7,
2
] | [] | [] | [
"python",
"string_literals",
"timeit"
] | stackoverflow_0000386664_python_string_literals_timeit.txt |
Q:
printing a list of persons with more than one home, each home with more than one phone number
I have a class Person which can have several Homes, each one with one or many Phone numbers.
I have defined the classes, but now I am trying to create a view which lists every person, with all their homes and all the phone numbers for each home address... something like:
john smith
123 fake str
305-99-8877
305-99-8876
321 oak road
444-98-7654
peter guy
453 north ave...
so far i have something like this:
(on my views.py)
def ViewAll(request):
people = Person.objects.all()
    return render_to_response('viewall.html', {'people': people})
(and on my template)
{% for guy in people %}
{{ guy.name }}
{% if person.home_address_set.all %}
{{ home_address }}
{% for ?????? in ???? %}
#print phone numbers in each home
{% endfor %}
{% endif %}
{% endfor %}
any idea of how to write the for I'm missing? of course, if there is another way (a better more elegant or efficient way) of doing what I need, I would love to hear it.
A:
You have what appears to be three nested collections: Person, Home, Phone Number.
Step 1 - How would you write this in a view function?
for p in Person.objects.all():
print "person", p
for h in p.home_address_set.all():
print " home", h
for ph in h.phone_set.all():
print " phone", ph
Don't omit this step. If you can't make it work in a view function, your model is wrong. Take the time to get this part right.
Step 2 - Convert this into template syntax.
{% for p in people %}
{% for h in p.home_address_set.all %}
    {% for ph in h.phone_set.all %}
{% endfor %}
{% endfor %}
{% endfor %}
The results should be the same as your view function.
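The _set accessors above assume models roughly like these (names guessed from the question; home_address_set and phone_set are Django's default reverse names for these ForeignKeys):
from django.db import models

class Person(models.Model):
    name = models.CharField(max_length=100)

class Home_Address(models.Model):
    person = models.ForeignKey(Person)
    street = models.CharField(max_length=200)

class Phone(models.Model):
    home_address = models.ForeignKey(Home_Address)
    number = models.CharField(max_length=20)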
| printing a list of persons with more than one home, each home with more than one phone number | I have a class Person which can have several Homes, each one with one or many Phone numbers.
I have defined the classes, but now I am trying to create a view which lists every person, with all their homes and all the phone numbers for each home address... something like:
john smith
123 fake str
305-99-8877
305-99-8876
321 oak road
444-98-7654
peter guy
453 north ave...
so far i have something like this:
(on my views.py)
def ViewAll(request):
people = Person.objects.all()
return render_to_response('viewall.html', {'people': people})
(and on my template)
{% for guy in people %}
{{ guy.name }}
{% if person.home_address_set.all %}
{{ home_address }}
{% for ?????? in ???? %}
#print phone numbers in each home
{% endfor %}
{% endif %}
{% endfor %}
any idea of how to write the for I'm missing? of course, if there is another way (a better more elegant or efficient way) of doing what I need, I would love to hear it.
| [
"You have what appears to be three nested collections: Person, Home, Phone Number.\nStep 1 - How would you write this in a view function?\nfor p in Person.objects.all():\n print \"person\", p\n for h in p.home_address_set.all():\n print \" home\", h\n for ph in h.phone_set.all():\n print \" phone\", ph\n\nDon't omit this step. If you can't make it work in a view function, your model is wrong. Take the time to get this part right.\nStep 2 - Convert this into template syntax.\n{% for p on people %}\n {% for h in p.home_address_set.all %}\n {% fpr ph in h.phone_set.all %}\n {% endfor %}\n {% endfor %}\n{% endfor %} \n\nThe results should be the same as your view function.\n"
] | [
8
] | [] | [] | [
"django",
"django_templates",
"python"
] | stackoverflow_0000387991_django_django_templates_python.txt |
Q:
How do you manage your Django applications?
I just wanted to try to build a project with django. Therefore I have a (basic) question on how to manage such a project, since I cannot find any guidelines on how to split a project into applications.
Let's take a kind of SO as an example. Which applications would you use?
I'd say there should be the applications "users" and "questions". But what if there was a topic system with static articles, too. Maybe they also could receive votes.
How to build the apps structure then? One app for "questions", "votes" and "topics" or just one app "content"?
I have no idea what to do. Maybe it's because I know not very much about Django yet, but I'm interested either...
A:
There aren't hard-and-fast rules, but I would say it's better to err on the side of more specialized applications. Ideally an application should handle just one functional concern: i.e. "tagging" or "commenting" or "auth/auth" or "posts." This type of design will also help you reuse available open source applications instead of reinventing the wheel (i.e. Django comes with auth and comments apps, django-tagging or django-taggable can almost certainly do what you need, etc).
Generic foreign keys can help you decouple applications such as tagging or commenting that might be applied to models from several other applications.
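For instance, a minimal sketch of a tagging model that can attach to any other model (so the tagging app never has to import the question, article, etc. apps directly):
from django.db import models
from django.contrib.contenttypes.models import ContentType
from django.contrib.contenttypes import generic

class Tag(models.Model):
    name = models.CharField(max_length=50)
    # the generic relation: a content type plus a primary key
    content_type = models.ForeignKey(ContentType)
    object_id = models.PositiveIntegerField()
    content_object = generic.GenericForeignKey('content_type', 'object_id')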
A:
You should try and separate the project into as many applications as possible. For most projects an application will not contain more than 5 models. For example a project like SO would have separate applications for UsersProfiles, Questions, Tags (there's a ready one in django for this), etc. If there was a system with static pages that'd be a separate application too (there are ready ones for this purpose). You should also try and make your applications as generic as possible, so you may reuse them in other projects. There's a good presentation on reusable apps.
A:
Just like any set of dependencies... try to find the most useful stand-alone aspects of the project and make those stand-alone apps. Other Django Apps will have higher level functionality, and reuse the parts of the lowest level apps that you have set up.
In my project, I have a calendar app with its own Event object in its models. I also have a carpool database set up, and for the departure time and the duration I use the calendar's Event object right in my RideShare tables. The carpooling database is calendar-aware, and gets all the nice .ics export and calendar views from the calendar app for 'free.'
There are some tricks to getting the Apps reusable, like naming the templates directory: project/app2/templates/app2/index.html. This lets you refer to app2/index.html from any other app, and get the right template. I picked that one up looking at the built-in reusable apps in Django itself. Pinax is a bit of a monster size-wise but it also demonstrates a nice reusable App structure.
If in doubt, forget about reusable apps for now. Put all your messages and polls in one app and get through one rev. You'll discover during the process what steps feel unnecessary, and could be broken out as something stand-alone in the future.
A:
A good question to ask yourself when deciding whether or not to write an app is "could I use this in another project?". If you think you could, then consider what it would take to make the application as independent as possible; how can you reduce the dependencies so that the app doesn't rely on anything specific to a particular project.
Some of the ways you can do this are:
Giving each app its own urls.py
Allowing model types to be passed in as parameters rather than explicitly declaring what models are used in your views. Generic views use this principle.
Make your templates easily overridden by having some sort of template_name parameter passed in your urls.py
Make sure you can do reverse url lookups with your objects and views. This means naming your views in the urls.py and creating get_absolute_url methods on your models.
In some cases like Tagging, GenericForeignKeys can be used to associate a model in your app to any other model, regardless of whether it has ForeignKeys "looking back" at it.
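A sketch of the urls.py and template_name points above, with made-up names:
# myapp/urls.py
from django.conf.urls.defaults import *

urlpatterns = patterns('myapp.views',
    url(r'^$', 'index', {'template_name': 'myapp/index.html'},
        name='myapp_index'),
)

# myapp/views.py
from django.shortcuts import render_to_response

def index(request, template_name='myapp/index.html'):
    # the project can override the template via its own urls.py
    return render_to_response(template_name, {})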
A:
I'll tell you how I am approaching such question: I usually sit with a sheet of paper and draw the boxes (functionalities) and arrows (interdependencies between functionalities). I am sure there are methodologies or other things that could help you, but my approach usually works for me (YMMV, of course).
Knowing what a site is supposed to be is basic, though. ;)
| How do you manage your Django applications? | I just wanted to try to build a project with django. Therefore I have a (basic) question on how to manage such a project, since I cannot find any guidelines on how to split a project into applications.
Let's take a kind of SO as an example. Which applications would you use?
I'd say there should be the applications "users" and "questions". But what if there was a topic system with static articles, too. Maybe they also could receive votes.
How to build the apps structure then? One app for "questions", "votes" and "topics" or just one app "content"?
I have no idea what to do. Maybe it's because I don't know very much about Django yet, but I'm interested anyway...
| [
"There aren't hard-and-fast rules, but I would say it's better to err on the side of more specialized applications. Ideally an application should handle just one functional concern: i.e. \"tagging\" or \"commenting\" or \"auth/auth\" or \"posts.\" This type of design will also help you reuse available open source applications instead of reinventing the wheel (i.e. Django comes with auth and comments apps, django-tagging or django-taggable can almost certainly do what you need, etc). \nGeneric foreign keys can help you decouple applications such as tagging or commenting that might be applied to models from several other applications.\n",
"You should try and separate the project in as much applications as possible. For most projects an application will not contain more than 5 models. For example a project like SO would have separate applications for UsersProfiles, Questions, Tags (there's a ready one in django for this), etc. If there was a system with static pages that'd be a separate application too (there are ready ones for this purpose). You should also try and make your applications as generic as possible, so you may reuse them in other projects. There's a good presentation on reusable apps.\n",
"Just like any set of dependencies... try to find the most useful stand-alone aspects of the project and make those stand-alone apps. Other Django Apps will have higher level functionality, and reuse the parts of the lowest level apps that you have set up.\nIn my project, I have a calendar app with its own Event object in its models. I also have a carpool database set up, and for the departure time and the duration I use the calendar's Event object right in my RideShare tables. The carpooling database is calendar-aware, and gets all the nice .ics export and calendar views from the calendar app for 'free.'\nThere are some tricks to getting the Apps reusable, like naming the templates directory: project/app2/templates/app2/index.html. This lets you refer to app2/index.html from any other app, and get the right template. I picked that one up looking at the built-in reusable apps in Django itself. Pinax is a bit of a monster size-wise but it also demonstrates a nice reusable App structure.\nIf in doubt, forget about reusable apps for now. Put all your messages and polls in one app and get through one rev. You'll discover during the process what steps feel unnecessary, and could be broken out as something stand-alone in the future.\n",
"A good question to ask yourself when deciding whether or not to write an app is \"could I use this in another project?\". If you think you could, then consider what it would take to make the application as independent as possible; How can you reduce the dependancies so that the app doesn't rely on anything specific to a particular project.\nSome of the ways you can do this are:\n\nGiving each app its own urls.py\nAllowing model types to be passed in as parameters rather than explicitly declaring what models are used in your views. Generic views use this principle.\nMake your templates easily overridden by having some sort of template_name parameter passed in your urls.py\nMake sure you can do reverse url lookups with your objects and views. This means naming your views in the urls.py and creating get_absolute_url methods on your models.\nIn some cases like Tagging, GenericForeignKeys can be used to associate a model in your app to any other model, regardless of whether it has ForeignKeys \"looking back\" at it.\n\n",
"I'll tell you how I am approaching such question: I usually sit with a sheet of paper and draw the boxes (functionalities) and arrows (interdependencies between functionalities). I am sure there are methodologies or other things that could help you, but my approach usually works for me (YMMV, of course).\nKnowing what a site is supposed to be is basic, though. ;)\n"
] | [
6,
5,
3,
3,
0
] | [] | [] | [
"django",
"project",
"python",
"structure"
] | stackoverflow_0000384333_django_project_python_structure.txt |
Q:
Mysql Connection, one or many?
I'm writing a script in python which basically queries WMI and updates the information in a mysql database. One of those "write something you need" to learn to program exercises.
In case something breaks in the middle of the script, for example, the remote computer turns off, it's separated out into functions.
Query Some WMI data
Update that to the database
Query Other WMI data
Update that to the database
Is it better to open one mysql connection at the beginning and leave it open or close the connection after each update?
It seems as though one connection would use less resources. (Although I'm just learning, so this is a complete guess.) However, opening and closing the connection with each update seems more 'neat'. Functions would be more stand alone, rather than depend on code outside that function.
A:
"However, opening and closing the connection with each update seems more 'neat'. "
It's also a huge amount of overhead -- and there's no actual benefit.
Creating and disposing of connections is relatively expensive. More importantly, what's the actual reason? How does it improve, simplify, clarify?
Generally, most applications have one connection that they use from when they start to when they stop.
A:
I don't think that there is a "better" solution. It's too early to think about resources. And since WMI is quite slow (in comparison to a SQL connection) the db is not an issue.
Just make it work. And then make it better.
The good thing about working with an open connection here is that the "natural" solution is to use objects and not just functions. So it will be a learning experience (in case you are learning python and not mysql).
A:
Think for a moment about the following scenario:
for dataItem in dataSet:
update(dataItem)
If you open and close your connection inside of the update function and your dataSet contains a thousand items then you will destroy the performance of your application and ruin any transactional capabilities.
A better way would be to open a connection and pass it to the update function. You could even have your update function call a connection manager of sorts. If you intend to perform single updates periodically then open and close your connection around your update function calls.
In this way you will be able to use functions to encapsulate your data operations and be able to share a connection between them.
However, this approach is not great for performing bulk inserts or updates.
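A sketch of the pass-the-connection version with MySQLdb (the table and column names are invented):
import MySQLdb

def update(conn, item):
    cursor = conn.cursor()
    cursor.execute("UPDATE inventory SET value=%s WHERE host=%s",
                   (item['value'], item['host']))
    cursor.close()

data_set = [{'host': 'pc1', 'value': 42}]  # e.g. gathered from WMI earlier

conn = MySQLdb.connect(host="localhost", user="me", passwd="pw", db="wmi")
for item in data_set:
    update(conn, item)
conn.commit()
conn.close()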
A:
Useful clues in S.Lott's and Igal Serban's answers. I think you should first find out your actual requirements and code accordingly.
Just to mention a different strategy; some applications keep a pool of database (or whatever) connections and in case of a transaction just pull one from that pool. It seems rather obvious you just need one connection for this kind of application. But you can still keep a pool of one connection and apply the following:
Whenever a database transaction is needed the connection is pulled from the pool and returned back at the end.
(optional) The connection is expired (and replaced by a new one) after a certain amount of time.
(optional) The connection is expired after a certain amount of usage.
(optional) The pool can check (by sending an inexpensive query) if the connection is alive before handing it over to the program.
This is somewhat in between single connection and connection per transaction strategies.
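A toy pool-of-one illustrating the idea (expiry and liveness checks left out):
import MySQLdb

class ConnectionPool(object):
    def __init__(self, **connect_kwargs):
        self._kwargs = connect_kwargs
        self._conn = None

    def get(self):
        # hand out the single pooled connection, creating it on demand
        if self._conn is None:
            self._conn = MySQLdb.connect(**self._kwargs)
        return self._conn

    def expire(self):
        if self._conn is not None:
            self._conn.close()
            self._conn = None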
| Mysql Connection, one or many? | I'm writing a script in python which basically queries WMI and updates the information in a mysql database. One of those "write something you need" to learn to program exercises.
In case something breaks in the middle of the script, for example, the remote computer turns off, it's separated out into functions.
Query Some WMI data
Update that to the database
Query Other WMI data
Update that to the database
Is it better to open one mysql connection at the beginning and leave it open or close the connection after each update?
It seems as though one connection would use less resources. (Although I'm just learning, so this is a complete guess.) However, opening and closing the connection with each update seems more 'neat'. Functions would be more stand alone, rather than depend on code outside that function.
| [
"\"However, opening and closing the connection with each update seems more 'neat'. \" \nIt's also a huge amount of overhead -- and there's no actual benefit.\nCreating and disposing of connections is relatively expensive. More importantly, what's the actual reason? How does it improve, simplify, clarify?\nGenerally, most applications have one connection that they use from when they start to when they stop. \n",
"I don't think that there is \"better\" solution. Its too early to think about resources. And since wmi is quite slow ( in comparison to sql connection ) the db is not an issue.\nJust make it work. And then make it better.\nThe good thing about working with open connection here, is that the \"natural\" solution is to use objects and not just functions. So it will be a learning experience( In case you are learning python and not mysql).\n",
"Think for a moment about the following scenario:\nfor dataItem in dataSet:\n update(dataItem)\n\nIf you open and close your connection inside of the update function and your dataSet contains a thousand items then you will destroy the performance of your application and ruin any transactional capabilities.\nA better way would be to open a connection and pass it to the update function. You could even have your update function call a connection manager of sorts. If you intend to perform single updates periodically then open and close your connection around your update function calls.\nIn this way you will be able to use functions to encapsulate your data operations and be able to share a connection between them.\nHowever, this approach is not great for performing bulk inserts or updates.\n",
"Useful clues in S.Lott's and Igal Serban's answers. I think you should first find out your actual requirements and code accordingly.\nJust to mention a different strategy; some applications keep a pool of database (or whatever) connections and in case of a transaction just pull one from that pool. It seems rather obvious you just need one connection for this kind of application. But you can still keep a pool of one connection and apply following;\n\nWhenever database transaction is needed the connection is pulled from the pool and returned back at the end.\n(optional) The connection is expired (and of replaced by a new one) after a certain amount of time.\n(optional) The connection is expired after a certain amount of usage.\n(optional) The pool can check (by sending an inexpensive query) if the connection is alive before handing it over the program.\n\nThis is somewhat in between single connection and connection per transaction strategies.\n"
] | [
7,
2,
1,
1
] | [] | [] | [
"mysql",
"python"
] | stackoverflow_0000387619_mysql_python.txt |
Q:
How can i create a lookup in Django?
I have a Question model & Form; one of the fields in this model is userid=ForeignKey(User), and this works perfectly well on the Question Model: I am able to pick the user from a drop down.
But it is kind of tricky when I want to list the questions from the model. What is the best way to look up the user name from the Users table? Because at this point I can't have the dropdown!
I want to have a simple thing e.g.
Question Title
asked by:lookup user Name
A:
The name of your field (userid instead of user) makes me think that you may be confused about the behavior of Django's ForeignKey.
If you define a model like this:
from django.contrib.auth.models import User
from django.db import models
class Question(models.Model):
user = models.ForeignKey(User)
title = models.CharField(max_length=100)
...
def __unicode__(self):
return self.title
And then instantiate a Question as question:
>>> question.user # the `User` instance
<User: username>
>>> question.user_id # the user's primary key
1
It looks like you may be expecting question.userid to be the user's primary key, rather than what it actually is: the User instance itself. When you access question.userid, a database lookup is performed, but it's done automatically by Django using the value of question.userid_id. I would rename the userid field to user to avoid confusion.
With that out of the way, I think what you are trying to do is list the questions along with their associated users. If that's the case, do something like this in your template:
<ol>
{% for question in questions %}
<li>{{ question }} asked by: {{ question.user }}</li>
{% endfor %}
</ol>
A:
I find your question vague. If you want to fetch all Question instances that are related to a particular User instance given a user_name, you can do thus:
questions = Question.objects.filter( userid__username='user_name' )
If you already have a User instance (held, say, in a variable called user) and would like to fetch all Question objects related to it, you can get those with:
questions = user.question_set.all()
| How can I create a lookup in Django? | I have a Question model & Form; one of the fields in this model is userid=ForeignKey(User). This works perfectly well on the Question model: I am able to pick the user from a drop-down.
But it is a bit tricky when I want to list the questions from the model. What is the best way to look up the user name from the Users table? Because at this point I can't have the drop-down!
I want to have a simple thing e.g.
Question Title
asked by:lookup user Name
| [
"The name of your field (userid instead of user) makes me think that you may be confused about the behavior of Django's ForeignKey.\nIf you define a model like this:\nfrom django.contrib.auth.models import User\nfrom django.db import models\n\nclass Question(models.Model):\n user = models.ForeignKey(User)\n title = models.CharField(max_length=100)\n ...\n\n def __unicode__(self):\n return self.title\n\nAnd then instantiate a Question as question:\n>>> question.user # the `User` instance\n<User: username>\n\n>>> question.user_id # the user's primary key\n1\n\nIt looks like you may be expecting question.userid to be the user's primary key, rather than what it actually is: the User instance itself. When you access question.userid, a database lookup is performed, but it's done automatically by Django using the value of question.userid_id. I would rename the userid field to user to avoid confusion.\nWith that out of the way, I think what you are trying to do is list the questions along with their associated users. If that's the case, do something like this in your template:\n<ol>\n{% for question in questions %}\n <li>{{ question }} asked by: {{ question.user }}</li>\n{% endfor %}\n</ol>\n\n",
"I find your question vague. If you want to fetch all Question instances that are related to a particular User instance given a user_name, you can do thus:\nquestions = Question.objects.filter( userid__username='user_name' )\n\nIf you already have a User instance (held, say, in a variable called user) and would like to fetch all Question objects related to it, you can get those with:\nquestions = user.question_set.all()\n\n"
] | [
5,
0
] | [] | [] | [
"django",
"lookup",
"python"
] | stackoverflow_0000388233_django_lookup_python.txt |
Q:
Python: flush a buffer before program termination via a finalizer
I keep a cache of transactions to flush (to persistent storage) on the event of a watermark or object finalization. Since __del__ is no longer guaranteed to be called on every object, is the appropriate approach to hook a similar function (or __del__ itself) into atexit.register (during initialization)?
If I'm not mistaken, this will cause the object to which the method is bound to hang around until program termination. This isn't likely to be a problem, but maybe there's a more elegant solution?
Note: I know using __del__ is non-ideal because it can cause uncatchable exceptions, but I can't think of another way to do this short of cascading finalize() calls all the way through my program. TIA!
A:
If you have to handle resources, the preferred way is to have an explicit call to a close() or finalize() method. Have a look at the with statement to abstract that. In your case the weakref module might be an option. The cached objects can be garbage collected by the system and have their __del__() method called, or you finalize them if they are still alive.
A:
I would say atexit or try and see if you can refactor the code into being able to be expressed using a with_statement which is in the __future__ in 2.5 and in 2.6 by default. 2.5 includes a module contextlib to simplify things a bit. I've done something like this when using Canonical's Storm ORM.
from __future__ import with_statement
import contextlib

@contextlib.contextmanager
def start_transaction(db):
db.start()
yield
db.end()
with start_transaction(db) as transaction:
...
For a non-db case, you could just register the objects to be flushed with a global and then use something similar. The benefit of this approach is that it keeps things explicit.
A:
Put the following in a file called destructor.py
import atexit
objects = []
def _destructor():
global objects
for obj in objects:
obj.destroy()
del objects
atexit.register(_destructor)
now use it this way:
import destructor
class MyObj(object):
def __init__(self):
destructor.objects.append(self)
# ... other init stuff
def destroy(self):
# clean up resources here
A:
If you don't need your object to be alive at the time you perform the flush, you could use weak references
This is similar to your proposed solution, but rather than using a real reference, store a list of weak references, with a callback function to perform the flush. This way, the references aren't going to keep those objects alive, and you won't run into any circular garbage problems with __del__ methods.
You can run through the list of weak references on termination to manually flush any still alive if this needs to be guaranteed done at a certain point.
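A rough sketch of that idea, assuming each cache object exposes a buffer attribute and a flush() method, and that flush_to_storage() is a hypothetical persistence routine:
import weakref

_refs = []

def _make_callback(buffer):
    # the owner is already dead when this runs, so capture only the
    # data that still needs flushing (it must not reference the owner)
    def _flush_dead(ref):
        flush_to_storage(buffer)
        _refs.remove(ref)
    return _flush_dead

def register(cache_obj):
    _refs.append(weakref.ref(cache_obj, _make_callback(cache_obj.buffer)))

def flush_all_alive():
    # call this at termination for anything still alive
    for ref in list(_refs):
        obj = ref()
        if obj is not None:
            obj.flush()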
A:
I think atexit is the way to go here.
| Python: flush a buffer before program termination via a finalizer | I keep a cache of transactions to flush (to persistent storage) on the event of a watermark or object finalization. Since __del__ is no longer guaranteed to be called on every object, is the appropriate approach to hook a similar function (or __del__ itself) into atexit.register (during initialization)?
If I'm not mistaken, this will cause the object to which the method is bound to hang around until program termination. This isn't likely to be a problem, but maybe there's a more elegant solution?
Note: I know using __del__ is non-ideal because it can cause uncatchable exceptions, but I can't think of another way to do this short of cascading finalize() calls all the way through my program. TIA!
| [
"If you have to handle ressources the prefered way is to have an explicit call to a close() or finalize() method. Have a look at the with statement to abstract that. In your case the weakref module might be an option. The cached object can be garbage collected by the system and have their __del__() method called or you finalize them if they are still alive.\n",
"I would say atexit or try and see if you can refactor the code into being able to be expressed using a with_statement which is in the __future__ in 2.5 and in 2.6 by default. 2.5 includes a module contextlib to simplify things a bit. I've done something like this when using Canonical's Storm ORM.\nfrom future import with_statement\n@contextlib.contextmanager\ndef start_transaction(db):\n db.start()\n yield\n db.end()\n\nwith start_transaction(db) as transaction:\n ...\n\nFor a non-db case, you could just register the objects to be flushed with a global and then use something similar. The benefit of this approach is that it keeps things explicit.\n",
"Put the following in a file called destructor.py\nimport atexit\n\nobjects = []\n\ndef _destructor():\n global objects\n for obj in objects:\n obj.destroy()\n del objects\n\natexit.register(_destructor)\n\nnow use it this way:\nimport destructor\n\nclass MyObj(object):\n def __init__(self):\n destructor.objects.append(self)\n # ... other init stuff\n def destroy(self):\n # clean up resources here\n\n",
"If you don't need your object to be alive at the time you perform the flush, you could use weak references\nThis is similar to your proposed solution, but rather than using a real reference, store a list of weak references, with a callback function to perform the flush. This way, the references aren't going to keep those objects alive, and you won't run into any circular garbage problems with __del__ methods.\nYou can run through the list of weak references on termination to manually flush any still alive if this needs to be guaranteed done at a certain point.\n",
"I think atexit is the way to go here.\n"
] | [
4,
3,
2,
2,
0
] | [] | [] | [
"buffer",
"destructor",
"finalizer",
"python"
] | stackoverflow_0000388154_buffer_destructor_finalizer_python.txt |
Q:
How to update turbogears application production database
I have a Postgres production database (which contains a lot of data). Now I need to modify the model of the TurboGears app to add a couple of new tables to the database.
How do I do this? I am using SQLAlchemy.
A:
The simplest approach is to simply write some sql update scripts and use those to update the database. Obviously that's a fairly low-level (as it were) approach.
If you think you will be doing this a lot and want to stick in Python you might want to look at sqlalchemy-migrate. There was an article about it in the recent Python Magazine.
A:
This always works and requires little thinking -- only patience.
Make a backup.
Actually make a backup. Everyone skips step 1 thinking that they have a backup, but they can never find it or work with it. Don't trust any backup that you can't recover from.
Create a new database schema.
Define your new structure from the ground up in the new schema. Ideally, you'll run a DDL script that builds the new schema. Don't have a script to build the schema? Create one and put it under version control.
With SA, you can define your tables and it can build your schema for you. This is ideal, since you have your schema under version control in Python.
Move data.
a. For tables which did not change structure, move data from old schema to new schema using simple INSERT/SELECT statements.
b. For tables which did change structure, develop INSERT/SELECT scripts to move the data from old to new. Often, this can be a single SQL statement per new table. In some cases, it has to be a Python loop with two open connections.
c. For new tables, load the data.
Stop using the old schema. Start using the new schema. Find every program that used the old schema and fix the configuration.
Don't have a list of applications? Make one. Seriously -- it's important.
Applications have hard-coded DB configurations? Fix that, too, while you're at it. Either create a common config file, or use some common environment variable or something to (a) assure consistency and (b) centralize the notion of "production".
You can do this kind of procedure any time you do major surgery. It never touches the old database except to extract the data.
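As an illustration of step 3b, the two-connection loop might look like this (a sketch only; psycopg2 is assumed since the database is Postgres, and all connection strings, table names, and column names are placeholders):
import psycopg2

old = psycopg2.connect("dbname=app_old")
new = psycopg2.connect("dbname=app_new")
read, write = old.cursor(), new.cursor()

read.execute("SELECT id, first_name, last_name FROM person")
for pk, first, last in read.fetchall():
    # the new schema collapses the two name columns into one
    write.execute("INSERT INTO person (id, full_name) VALUES (%s, %s)",
                  (pk, "%s %s" % (first, last)))

new.commit()
old.close()
new.close()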
A:
I'd agree in general with John. One-pass SELECTing and INSERTing would not be practical for a large database, and setting up replication or multi-pass differential SELECT / INSERTs would probably be harder and more error-prone.
Personally, I use SQLAlchemy as an ORM under TurboGears. To do schema migrations I run:
tg-admin sql status
To see the difference between the live and development schemas, then manually write (and version control) DDL scripts to make the required changes.
For those using SQLAlchemy standalone (i.e. not under TurboGears), the sql status functionality is pretty simple and can be found here in the TG source: http://svn.turbogears.org/branches/1.1/turbogears/command/sacommand.py (there's versions for older Python / SA releases in the 1.0 branch, too).
A:
If you are just adding tables, and not modifying any of the tables which have the existing data in it, you can simply add the new sqlAlchemy table definitions to model.py, and run:
tg-admin sql create
This will not overwrite any of your existing tables.
For schema migration, you might take a look at http://code.google.com/p/sqlalchemy-migrate/ although I haven't used it yet myself.
Always take a backup of the production database before migration activity.
| How to update turbogears application production database | I have a Postgres production database (which contains a lot of data). Now I need to modify the model of the TurboGears app to add a couple of new tables to the database.
How do I do this? I am using SQLAlchemy.
| [
"The simplest approach is to simply write some sql update scripts and use those to update the database. Obviously that's a fairly low-level (as it were) approach.\nIf you think you will be doing this a lot and want to stick in Python you might want to look at sqlalchemy-migrate. There was an article about it in the recent Python Magazine.\n",
"This always works and requires little thinking -- only patience.\n\nMake a backup.\nActually make a backup. Everyone skips step 1 thinking that they have a backup, but they can never find it or work with it. Don't trust any backup that you can't recover from.\nCreate a new database schema.\nDefine your new structure from the ground up in the new schema. Ideally, you'll run a DDL script that builds the new schema. Don't have a script to build the schema? Create one and put it under version control.\nWith SA, you can define your tables and it can build your schema for you. This is ideal, since you have your schema under version control in Python.\nMove data.\na. For tables which did not change structure, move data from old schema to new schema using simple INSERT/SELECT statements.\nb. For tables which did change structure, develop INSERT/SELECT scripts to move the data from old to new. Often, this can be a single SQL statement per new table. In some cases, it has to be a Python loop with two open connections.\nc. For new tables, load the data.\nStop using the old schema. Start using the new schema. Find every program that used the old schema and fix the configuration. \nDon't have a list of applications? Make one. Seriously -- it's important. \nApplications have hard-coded DB configurations? Fix that, too, while you're at it. Either create a common config file, or use some common environment variable or something to (a) assure consistency and (b) centralize the notion of \"production\".\n\nYou can do this kind of procedure any time you do major surgery. It never touches the old database except to extract the data.\n",
"I'd agree in general with John. One-pass SELECTing and INSERTing would not be practical for a large database, and setting up replication or multi-pass differential SELECT / INSERTs would probably be harder and more error-prone.\nPersonally, I use SQLAlchemy as an ORM under TurboGears. To do schema migrations I run:\ntg-admin sql status\n\nTo see the difference between the live and development schemas, then manually write (and version control) DDL scripts to make the required changes.\nFor those using SQLAlchemy standalone (i.e. not under TurboGears), the sql status functionality is pretty simple and can be found here in the TG source: http://svn.turbogears.org/branches/1.1/turbogears/command/sacommand.py (there's versions for older Python / SA releases in the 1.0 branch, too).\n",
"If you are just adding tables, and not modifying any of the tables which have the existing data in it, you can simply add the new sqlAlchemy table definitions to model.py, and run:\ntg-admin sql create\n\nThis will not overwrite any of your existing tables.\nFor schema migration, you might take a look at http://code.google.com/p/sqlalchemy-migrate/ although I haven't used it yet myself.\nAlways take a backup of the production database before migration activity.\n"
] | [
1,
1,
1,
0
] | [] | [] | [
"data_migration",
"database",
"postgresql",
"python",
"turbogears"
] | stackoverflow_0000301566_data_migration_database_postgresql_python_turbogears.txt |
Q:
How can I read Perl data structures from Python?
I've often seen people use Perl data structures in lieu of configuration files; i.e. a lone file containing only:
%config = (
'color' => 'red',
'numbers' => [5, 8],
qr/^spam/ => 'eggs'
);
What's the best way to convert the contents of these files into Python-equivalent data structures, using pure Python? For the time being we can assume that there are no real expressions to evaluate, only structured data.
A:
Is using pure Python a requirement? If not, you can load it in Perl and convert it to YAML or JSON. Then use PyYAML or something similar to load them in Python.
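Reading the converted file back in Python is then trivial. A minimal sketch, assuming the Perl hash was dumped to config.yaml:
import yaml  # PyYAML

config = yaml.safe_load(open('config.yaml'))
print(config['color'])    # 'red'
print(config['numbers'])  # [5, 8]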
A:
I'd just turn the Perl data structure into something else. Not seeing the actual file, there might be some extra work that my solution doesn't do.
If the only thing that's in the file is the one variable declaration (so, no 1; at the end, and so on), it can be really simple to turn your %config it into YAML:
perl -MYAML -le 'print YAML::Dump( { do shift } )' filename
The do returns the last thing it evaluated, so in this little code it returns the list of hash key-value pairs. Things such as YAML::Dump like to work with references so they get a hint about the top-level structure, so I make that into a hash reference by surrounding the do with the curly braces. For your example, I'd get this YAML output:
---
(?-xism:^spam): eggs
color: red
numbers:
- 5
- 8
I don't know how Python will like that stringified regex, though. Do you really have a key that is a regex? I'd be curious to know how that's being used as part of the configuration.
If there's extra stuff in the file, life is a bit more tough. There's probably a really clever way to get around that, but I used the same idea, but just hard-coded the variable name that I wanted.
I tried this on the Perl data structure that the CPAN.pm module uses, and it looks like it came out fine. The only ugliness is the fore-knowledge of the variable name that it supplies. Now that you've seen the error of configuration in Perl code, avoid making the same mistake with Python code. :)
YAML:
perl -MYAML -le 'do shift; print YAML::Dump( $CPAN::Config )' MyConfig.pm
JSON:
perl -MJSON::Any -le 'do shift; my $j = JSON::Any->new; print $j->objToJson( $CPAN::Config )' MyConfig.pm
or
# suggested by JF Sebastian
perl -MJSON -le 'do shift; print to_json( $CPAN::Config )' MyConfig.pm
XML::Simple doesn't work out so well because it treats everything like an attribute, but maybe someone can improve on this:
perl -MXML::Simple -le 'do shift; print XMLout( $CPAN::Config )' MyConfig.pm
A:
Not sure what the use case is. Here's my assumption: you're going to do a one-time conversion from Perl to Python.
Perl has this
%config = (
'color' => 'red',
'numbers' => [5, 8],
qr/^spam/ => 'eggs'
);
In Python, it would be
config = {
'color' : 'red',
'numbers' : [5, 8],
re.compile( "^spam" ) : 'eggs'
}
So, I'm guessing it's a bunch of RE's to replace
%variable = ( with variable = {
); with }
variable => value with variable : value
qr/.../ => with re.compile( r"..." ) : value
However, Python's built-in dict doesn't do anything unusual with a regex as a hash key. For that, you'd have to write your own subclass of dict, and override __getitem__ to check REGEX keys separately.
import re

class PerlLikeDict( dict ):
pattern_type= type(re.compile(""))
def __getitem__( self, key ):
if key in self:
return super( PerlLikeDict, self ).__getitem__( key )
for k in self:
if type(k) == self.pattern_type:
if k.match(key):
return self[k]
raise KeyError( "key %r not found" % ( key, ) )
Here's the example of using a Perl-like dict.
>>> pat= re.compile( "hi" )
>>> a = { pat : 'eggs' } # native dict, no features.
>>> x=PerlLikeDict( a )
>>> x['b']= 'c'
>>> x
{<_sre.SRE_Pattern object at 0x75250>: 'eggs', 'b': 'c'}
>>> x['b']
'c'
>>> x['ji']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 10, in __getitem__
KeyError: "key 'ji' not found"
>>> x['hi']
'eggs'
A:
I've also found PyPerl, but it doesn't seem to be maintained. I guess something like this is what I was looking for -- a module that did some basic interpretation of Perl and passed the result as a Python object. A Perl interpreter that died on anything too complex would be fine. :-)
| How can I read Perl data structures from Python? | I've often seen people use Perl data structures in lieu of configuration files; i.e. a lone file containing only:
%config = (
'color' => 'red',
'numbers' => [5, 8],
qr/^spam/ => 'eggs'
);
What's the best way to convert the contents of these files into Python-equivalent data structures, using pure Python? For the time being we can assume that there are no real expressions to evaluate, only structured data.
| [
"Is using pure Python a requirement? If not, you can load it in Perl and convert it to YAML or JSON. Then use PyYAML or something similar to load them in Python.\n",
"I'd just turn the Perl data structure into something else. Not seeing the actual file, there might be some extra work that my solution doesn't do.\nIf the only thing that's in the file is the one variable declaration (so, no 1; at the end, and so on), it can be really simple to turn your %config it into YAML:\nperl -MYAML -le 'print YAML::Dump( { do shift } )' filename \n\nThe do returns the last thing it evaluated, so in this little code it returns the list of hash key-value pairs. Things such as YAML::Dump like to work with references so they get a hint about the top-level structure, so I make that into a hash reference by surrounding the do with the curly braces. For your example, I'd get this YAML output:\n\n---\n(?-xism:^spam): eggs\ncolor: red\nnumbers:\n - 5\n - 8\n\nI don't know how Python will like that stringified regex, though. Do you really have a key that is a regex? I'd be curious to know how that's being used as part of the configuration.\n\nIf there's extra stuff in the file, life is a bit more tough. There's probably a really clever way to get around that, but I used the same idea, but just hard-coded the variable name that I wanted.\nI tried this on the Perl data structure that the CPAN.pm module uses, and it looks like it came out fine. The only ugliness is the fore-knowledge of the variable name that it supplies. Now that you've seen the error of configuration in Perl code, avoid making the same mistake with Python code. :)\nYAML:\n perl -MYAML -le 'do shift; print YAML::Dump( $CPAN::Config )' MyConfig.pm\n\nJSON:\n perl -MJSON::Any -le 'do shift; my $j = JSON::Any->new; print $j->objToJson( $CPAN::Config )' MyConfig.pm\n\nor\n# suggested by JF Sebastian\nperl -MJSON -le 'do shift; print to_json( $CPAN::Config )' MyConfig.pm\n\nXML::Simple doesn't work out so well because it treats everything like an attribute, but maybe someone can improve on this:\nperl -MXML::Simple -le 'do shift; print XMLout( $CPAN::Config )' MyConfig.pm\n\n",
"Not sure what the use case is. Here's my assumption: you're going to do a one-time conversion from Perl to Python.\nPerl has this\n%config = (\n 'color' => 'red',\n 'numbers' => [5, 8],\n qr/^spam/ => 'eggs'\n);\n\nIn Python, it would be\nconfig = {\n 'color' : 'red',\n 'numbers' : [5, 8],\n re.compile( \"^spam\" ) : 'eggs'\n}\n\nSo, I'm guessing it's a bunch of RE's to replace \n\n%variable = ( with variable = {\n); with }\nvariable => value with variable : value\nqr/.../ => with re.compile( r\"...\" ) : value\n\nHowever, Python's built-in dict doesn't do anything unusual with a regex as a hash key. For that, you'd have to write your own subclass of dict, and override __getitem__ to check REGEX keys separately.\nclass PerlLikeDict( dict ):\n pattern_type= type(re.compile(\"\"))\n def __getitem__( self, key ):\n if key in self:\n return super( PerlLikeDict, self ).__getitem__( key )\n for k in self:\n if type(k) == self.pattern_type:\n if k.match(key):\n return self[k]\n raise KeyError( \"key %r not found\" % ( key, ) )\n\nHere's the example of using a Perl-like dict.\n>>> pat= re.compile( \"hi\" )\n>>> a = { pat : 'eggs' } # native dict, no features.\n>>> x=PerlLikeDict( a )\n>>> x['b']= 'c'\n>>> x\n{<_sre.SRE_Pattern object at 0x75250>: 'eggs', 'b': 'c'}\n>>> x['b']\n'c'\n>>> x['ji']\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"<stdin>\", line 10, in __getitem__\nKeyError: \"key 'ji' not found\"\n>>> x['hi']\n'eggs'\n\n",
"I've also found PyPerl, but it doesn't seem to be maintained. I guess something like this is what I was looking for -- a module that did some basic interpretation of Perl and passed the result as a Python object. A Perl interpreter that died on anything too complex would be fine. :-)\n"
] | [
18,
14,
7,
0
] | [] | [] | [
"configuration",
"data_structures",
"perl",
"python"
] | stackoverflow_0000389945_configuration_data_structures_perl_python.txt |
Q:
Using Python Web GET data
I'm trying to pass information to a Python page via the URL. I have the following link text:
"<a href='complete?id=%s'>" % (str(r[0]))
on the complete page, I have this:
import cgi
def complete():
form = cgi.FieldStorage()
db = MySQLdb.connect(user="", passwd="", db="todo")
c = db.cursor()
c.execute("delete from tasks where id =" + str(form["id"]))
return "<html><center>Task completed! Click <a href='/chris'>here</a> to go back!</center></html>"
The problem is that when I go to the complete page, I get a key error on "id". Does anyone know how to fix this?
EDIT
When I run cgi.test(), it gives me nothing.
I think something is wrong with the way I'm using the URL, because it's not getting passed through.
It's basically localhost/chris/complete?id=1
/chris/ is a folder and complete is a function within index.py
Am I formatting the URL the wrong way?
A:
The error means that form["id"] failed to find the key "id" in cgi.FieldStorage().
To test what keys are in the called URL, use cgi.test():
cgi.test()
Robust test CGI script, usable as main program. Writes minimal HTTP headers and formats all information provided to the script in HTML form.
EDIT: a basic test script (using the python cgi module with Linux path) is only 3 lines. Make sure you know how to run it on your system, then call it from a browser to check arguments are seen on the CGI side. You may also want to add traceback formatting with import cgitb; cgitb.enable().
#!/usr/bin/python
import cgi
cgi.test()
A:
Have you tried printing out the value of form to make sure you're getting what you think you're getting? You do have a little problem with your code though... you should be doing form["id"].value to get the value of the item from FieldStorage. Another alternative is to just do it yourself, like so:
import os
import cgi
query_string = os.environ.get("QUERY_STRING", "")
form = cgi.parse_qs(query_string)
This should result in something like this:
{'id': ['123']}
A:
First off, you should make dictionary lookups via
possibly_none = my_dict.get( "key_name" )
Because this assigns None to the variable if the key is not in the dict. You can then use the
if key is not None:
do_stuff
idiom (yes, I'm a fan of null checks and defensive programming in general...). The python documentation suggests something along these lines as well.
Without digging into the code too much, I think you should reference
form.get( 'id' ).value
in order to extract the data you seem to be asking for.
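Putting those pieces together, a corrected version of the original handler might look like this (a sketch; note that the parameterized query also avoids the SQL injection risk of the string concatenation in the question):
import cgi
import MySQLdb

def complete():
    form = cgi.FieldStorage()
    task_id = form.getfirst("id")  # None if the parameter is missing
    if task_id is None:
        return "<html><center>No task id given.</center></html>"
    db = MySQLdb.connect(user="", passwd="", db="todo")
    c = db.cursor()
    c.execute("DELETE FROM tasks WHERE id = %s", (task_id,))
    db.commit()
    return "<html><center>Task completed!</center></html>"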
| Using Python Web GET data | I'm trying to pass information to a python page via the url. I have the following link text:
"<a href='complete?id=%s'>" % (str(r[0]))
on the complete page, I have this:
import cgi
def complete():
form = cgi.FieldStorage()
db = MySQLdb.connect(user="", passwd="", db="todo")
c = db.cursor()
c.execute("delete from tasks where id =" + str(form["id"]))
return "<html><center>Task completed! Click <a href='/chris'>here</a> to go back!</center></html>"
The problem is that when I go to the complete page, I get a key error on "id". Does anyone know how to fix this?
EDIT
When I run cgi.test(), it gives me nothing.
I think something is wrong with the way I'm using the URL, because it's not getting passed through.
It's basically localhost/chris/complete?id=1
/chris/ is a folder and complete is a function within index.py
Am I formatting the URL the wrong way?
| [
"The error means that form[\"id\"] failed to find the key \"id\" in cgi.FieldStorage().\nTo test what keys are in the called URL, use cgi.test():\n\ncgi.test()\nRobust test CGI script, usable as main program. Writes minimal HTTP headers and formats all information provided to the script in HTML form.\n\nEDIT: a basic test script (using the python cgi module with Linux path) is only 3 lines. Make sure you know how to run it on your system, then call it from a browser to check arguments are seen on the CGI side. You may also want to add traceback formatting with import cgitb; cgitb.enable().\n#!/usr/bin/python\nimport cgi\ncgi.test()\n\n",
"Have you tried printing out the value of form to make sure you're getting what you think you're getting? You do have a little problem with your code though... you should be doing form[\"id\"].value to get the value of the item from FieldStorage. Another alternative is to just do it yourself, like so:\nimport os\nimport cgi\n\nquery_string = os.environ.get(\"QUERY_STRING\", \"\")\nform = cgi.parse_qs(query_string)\n\nThis should result in something like this:\n{'id': ['123']}\n\n",
"First off, you should make dictionary lookups via\npossibly_none = my_dict.get( \"key_name\" )\n\nBecause this assigns None to the variable, if the key is not in the dict. You can then use the \nif key is not None:\n do_stuff\n\nidiom (yes, I'm a fan of null checks and defensive programming in general...). The python documentation suggests something along these lines as well.\nWithout digging into the code too much, I think you should reference \nform.get( 'id' ).value\n\nin order to extract the data you seem to be asking for.\n"
] | [
1,
1,
0
] | [] | [] | [
"form_data",
"python"
] | stackoverflow_0000384336_form_data_python.txt |
Q:
Interpreting Excel Currency Values
I am using Python to read a currency value from Excel. The value returned from the range.Value method is a tuple that I don't know how to parse.
For example, the cell appears as $548,982, but in python the value is returned as (1, 1194857614).
How can I get the numerical amount from excel or how can I convert this tuple value into the numerical value?
Thanks!
A:
Try this:
import struct
try: import decimal
except ImportError:
divisor= 10000.0
else:
divisor= decimal.Decimal(10000)
def xl_money(i1, i2):
byte8= struct.unpack(">q", struct.pack(">ii", i1, i2))[0]
return byte8 / divisor
>>> xl_money(1, 1194857614)
Decimal("548982.491")
Money in Microsoft COM is an 8-byte integer; it's fixed point, with 4 decimal places (i.e. 1 is represented by 10000). What my function does, is take the tuple of 4-byte integers, make an 8-byte integer using struct to avoid any issues of sign, and then dividing by the constant 10000. The function uses decimal.Decimal if available, otherwise it uses float.
UPDATE (based on comment): So far, it's only COM Currency values being returned as a two-integer tuple, so you might want to check for that, but there are no guarantees that this will always be successful. However, depending on the library you use and its version, it's quite possible that later on, after some upgrade, you will be receiving decimal.Decimals and not two-integer tuples anymore.
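For reference, a hedged sketch of calling it from pywin32 (the sheet object and cell reference are assumptions), falling back to the raw value when the cell is not a two-integer Currency tuple:
value = sheet.Range("A1").Value  # `sheet` assumed from win32com
if isinstance(value, tuple) and len(value) == 2:
    value = xl_money(*value)
print(value)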
A:
I tried this with Excel 2007 and VBA. It gives the correct value.
1) Try pasting this value in a new excel workbook
2) Press Alt + F11. Gets you to VBA Editor.
3) Press Ctrl + G. Gets you to immediate window.
4) In the immediate window, type ?cells("a1").Value
here "a1" is the cell where you have pasted the value.
I suspect that the cell has some value or character that causes it to be interpreted this way.
Post your observations here.
| Interpreting Excel Currency Values | I am using Python to read a currency value from Excel. The value returned from the range.Value method is a tuple that I don't know how to parse.
For example, the cell appears as $548,982, but in python the value is returned as (1, 1194857614).
How can I get the numerical amount from excel or how can I convert this tuple value into the numerical value?
Thanks!
| [
"Try this:\nimport struct\ntry: import decimal\nexcept ImportError:\n divisor= 10000.0\nelse:\n divisor= decimal.Decimal(10000)\n\ndef xl_money(i1, i2):\n byte8= struct.unpack(\">q\", struct.pack(\">ii\", i1, i2))[0]\n return byte8 / divisor\n\n>>> xl_money(1, 1194857614)\nDecimal(\"548982.491\")\n\nMoney in Microsoft COM is an 8-byte integer; it's fixed point, with 4 decimal places (i.e. 1 is represented by 10000). What my function does, is take the tuple of 4-byte integers, make an 8-byte integer using struct to avoid any issues of sign, and then dividing by the constant 10000. The function uses decimal.Decimal if available, otherwise it uses float.\nUPDATE (based on comment): So far, it's only COM Currency values being returned as a two-integer tuple, so you might want to check for that, but there are no guarantees that this will always be successful. However, depending on the library you use and its version, it's quite possible that later on, after some upgrade, you will be receiving decimal.Decimals and not two-integer tuples anymore.\n",
"I tried this with Excel 2007 and VBA. It is giving correct value.\n1) Try pasting this value in a new excel workbook\n2) Press Alt + F11. Gets you to VBA Editor.\n3) Press Ctrl + G. Gets you to immediate window.\n4) In the immediate window, type ?cells(\"a1\").Value \nhere \"a1\" is the cell where you have pasted the value.\nI am doubting that the cell has some value or character due to which it is interpreted this way.\nPost your observations here.\n"
] | [
3,
0
] | [] | [] | [
"excel",
"python",
"pywin32"
] | stackoverflow_0000390263_excel_python_pywin32.txt |
Q:
pyGTK Radio Button
Alright, I'll preface this with the fact that I'm a GTK and Python newb, but I haven't been able to dig up the information I needed. Basically what I have is a list of Radio Buttons, and based on which one is checked, I need to connect a button to a different function. I tried creating all my radio buttons, and then creating a disgusting if/else block checking for sget_active() on each button. The problem is the same button returns true every single time. Any ideas?
Here's the code in use:
#Radio Buttons Center
self.updatePostRadioVBox = gtk.VBox(False, 0)
self.updatePageRadio = gtk.RadioButton(None, "Updating Page")
self.updatePostRadio = gtk.RadioButton(self.updatePageRadio, "Updating Blog Post")
self.pageRadio = gtk.RadioButton(self.updatePageRadio, "New Page")
self.blogRadio = gtk.RadioButton(self.updatePageRadio, "New Blog Post")
self.addSpaceRadio = gtk.RadioButton(self.updatePageRadio, "Add New Space")
self.removePageRadio = gtk.RadioButton(self.updatePageRadio, "Remove Page")
self.removePostRadio = gtk.RadioButton(self.updatePageRadio, "Remove Blog Post")
self.removeSpaceRadio = gtk.RadioButton(self.updatePageRadio, "Remove Space")
#Now the buttons to direct us from here
self.returnMainMenuButton = gtk.Button(" Main Menu ")
self.returnMainMenuButton.connect("clicked", self.transToMain)
self.contentManageHBoxBottom.pack_start(self.returnMainMenuButton, False, False, 30)
self.contentProceedButton = gtk.Button(" Proceed ")
self.contentManageHBoxBottom.pack_end(self.contentProceedButton, False, False, 30)
if self.updatePageRadio.get_active():
self.contentProceedButton.connect("clicked", self.updatePage)
elif self.updatePostRadio.get_active():
self.contentProceedButton.connect("clicked", self.updatePost)
elif self.pageRadio.get_active():
self.contentProceedButton.connect("clicked", self.newPage)
elif self.blogRadio.get_active():
self.contentProceedButton.connect("clicked", self.newBlogPost)
elif self.addSpaceRadio.get_active():
self.contentProceedButton.connect("clicked", self.newSpace)
elif self.removePageRadio.get_active():
self.contentProceedButton.connect("clicked", self.removePage)
elif self.removePostRadio.get_active():
self.contentProceedButton.connect("clicked", self.removeBlogPost)
elif self.removeSpaceRadio.get_active():
self.contentProceedButton.connect("clicked", self.removeSpace)
A:
Edit: (since you posted some code), just use:
active = [r for r in self.updatePageRadio.get_group() if r.get_active()][0]
and use that to look up in a dict of functions and call it:
my_actions[active]()
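For example, the dispatch table could be keyed by the buttons themselves (a sketch reusing the handler names from the question; add one entry per radio button):
my_actions = {
    self.updatePageRadio: self.updatePage,
    self.updatePostRadio: self.updatePost,
    self.pageRadio: self.newPage,
    self.blogRadio: self.newBlogPost,
    # ...and so on for the remaining buttons
}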
Edit: I totally forgot to mention that this is not a good use-case at all for RadioButtons; regular gtk.Button would be much better in my opinion.
Your answer is to use the RadioButton "groups" system. It is explained in this document, but here is a small practical example.
Firstly a group is really just a RadioButton itself that is used to collect a number of other RadioButtons. You specify a group as the first argument to the constructor.
r1 = gtk.RadioButton(None, label='Cat') # this has no group, it is the first
r2 = gtk.RadioButton(r1, label='Mouse') # use the first radio
# button as the group argument
r3 = gtk.RadioButton(r1, label='Dog') # again use r1
Now all the radio buttons will be synchronised. And the matter of reading them is as easy as:
active_radios = [r for r in r1.get_group() if r.get_active()]
A:
First, I presume that's a typo and you're actually calling get_active() in your code and not set_active()? Other than that, without seeing the code, I can point you to a pygtk tutorial about radio buttons
| pyGTK Radio Button | Alright, I'll preface this with the fact that I'm a GTK and Python newb, but I haven't been able to dig up the information I needed. Basically what I have is a list of Radio Buttons, and based on which one is checked, I need to connect a button to a different function. I tried creating all my radio buttons, and then creating a disgusting if/else block checking for sget_active() on each button. The problem is the same button returns true every single time. Any ideas?
Here's the code in use:
#Radio Buttons Center
self.updatePostRadioVBox = gtk.VBox(False, 0)
self.updatePageRadio = gtk.RadioButton(None, "Updating Page")
self.updatePostRadio = gtk.RadioButton(self.updatePageRadio, "Updating Blog Post")
self.pageRadio = gtk.RadioButton(self.updatePageRadio, "New Page")
self.blogRadio = gtk.RadioButton(self.updatePageRadio, "New Blog Post")
self.addSpaceRadio = gtk.RadioButton(self.updatePageRadio, "Add New Space")
self.removePageRadio = gtk.RadioButton(self.updatePageRadio, "Remove Page")
self.removePostRadio = gtk.RadioButton(self.updatePageRadio, "Remove Blog Post")
self.removeSpaceRadio = gtk.RadioButton(self.updatePageRadio, "Remove Space")
#Now the buttons to direct us from here
self.returnMainMenuButton = gtk.Button(" Main Menu ")
self.returnMainMenuButton.connect("clicked", self.transToMain)
self.contentManageHBoxBottom.pack_start(self.returnMainMenuButton, False, False, 30)
self.contentProceedButton = gtk.Button(" Proceed ")
self.contentManageHBoxBottom.pack_end(self.contentProceedButton, False, False, 30)
if self.updatePageRadio.get_active():
self.contentProceedButton.connect("clicked", self.updatePage)
elif self.updatePostRadio.get_active():
self.contentProceedButton.connect("clicked", self.updatePost)
elif self.pageRadio.get_active():
self.contentProceedButton.connect("clicked", self.newPage)
elif self.blogRadio.get_active():
self.contentProceedButton.connect("clicked", self.newBlogPost)
elif self.addSpaceRadio.get_active():
self.contentProceedButton.connect("clicked", self.newSpace)
elif self.removePageRadio.get_active():
self.contentProceedButton.connect("clicked", self.removePage)
elif self.removePostRadio.get_active():
self.contentProceedButton.connect("clicked", self.removeBlogPost)
elif self.removeSpaceRadio.get_active():
self.contentProceedButton.connect("clicked", self.removeSpace)
| [
"Edit: (since you posted some code), just use:\nactive = [r for r in self.updatePageRadio.get_group() if r.get_active()][0]\n\nand use that to look up in a dict of functions and call it:\nmy_actions[active]()\n\n\nEdit: I totally forgot to mention that this is not a good use-case at all for RadioButtons, regular gtk.Button would be much better in my opinion.\n\nYour answer is to use the RadioButton \"groups\" system. It is explained in this document, but here is a small practical example.\nFirstly a group is really just a RadioButton itself that is used to collect a number of other RadioButtons. You specify a group as the first argument to the constructor.\nr1 = gtk.RadioButton(None, label='Cat') # this has no group, it is the first\nr2 = gtk.RadioButton(r1, label='Mouse') # use the first radio\n # button as the group argument\nr3 = gtk.RadioButton(r1, label='Dog') # again use r1\n\nNow all the radio buttons will be synchronised. And the matter of reading them is as easy as:\nactive_radios = [r for r in r1.get_group() if r.get_active()]\n\n",
"First, I presume that's a typo and you're actually calling get_active() in your code and not set_active()? Other than that, without seeing the code, I can point you to a pygtk tutorial about radio buttons\n"
] | [
7,
0
] | [] | [] | [
"gtk",
"pygtk",
"python"
] | stackoverflow_0000391237_gtk_pygtk_python.txt |
Q:
Organising my Python project
I'm starting a Python project and expect to have 20 or more classes in it. As is good practice I want to put them in a separate file each. However, the project directory quickly becomes swamped with files (or will when I do this).
If I put a file to import in a folder I can no longer import it. How do I import a file from another folder and will I need to reference to the class it contains differently now that it's in a folder?
Thanks in advance
A:
Create an __init__.py file in your project's folder, and it will be treated like a module by Python.
Classes in your package directory can then be imported using syntax like:
from package import class
import package.class
Within __init__.py, you may create an __all__ array that defines from package import * behavior:
# name1 and name2 will be available in calling module's namespace
# when using "from package import *" syntax
__all__ = ['name1', 'name2']
And here is way more information than you even want to know about packages in Python
Generally speaking, a good way to learn about how to organize a lot of code is to pick a popular Python package and see how they did it. I'd check out Django and Twisted, for starters.
A:
"As is good practice I want to put them in a separate file each. "
This is not actually a very good practice. You should design modules that contain closely-related classes.
As a practical matter, no class actually stands completely alone. Generally classes come in clusters or groups that are logically related.
A:
Python doesn't force you into Java's nasty one-class-per-file style. In fact, it's not even considered good style to put each class in a separate file unless they are huge. (If they are huge, you probably have to do refactoring anyway.) Instead, you should group similar classes and functions in modules. For example, if you are writing a GUI calculator, your package layout might look like this:
/amazingcalc
/__init__.py # This makes it a Python package and importable.
/evaluate.py # Contains the code to actually do calculations.
/main.py # Starts the application
/ui.py # Contains the code to make a pretty interface
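With that layout, main.py would import its siblings through the package. A small sketch (the function names are illustrative, not part of any real API):
# inside main.py
from amazingcalc import evaluate, ui

result = evaluate.calculate('2 + 2')  # hypothetical function
ui.show(result)                       # hypothetical function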
A:
The simple answer is to create an empty file called __init__.py in the new folder you made. Then, in your top-level .py file, include it with something like:
import mynewsubfolder.mynewclass
| Organising my Python project | I'm starting a Python project and expect to have 20 or more classes in it. As is good practice I want to put them in a separate file each. However, the project directory quickly becomes swamped with files (or will when I do this).
If I put a file to import in a folder I can no longer import it. How do I import a file from another folder and will I need to reference to the class it contains differently now that it's in a folder?
Thanks in advance
| [
"Create an __init__.py file in your projects folder, and it will be treated like a module by Python.\nClasses in your package directory can then be imported using syntax like:\nfrom package import class\nimport package.class\n\nWithin __init__.py, you may create an __all__ array that defines from package import * behavior:\n# name1 and name2 will be available in calling module's namespace \n# when using \"from package import *\" syntax\n__all__ = ['name1', 'name2'] \n\nAnd here is way more information than you even want to know about packages in Python\nGenerally speaking, a good way to learn about how to organize a lot of code is to pick a popular Python package and see how they did it. I'd check out Django and Twisted, for starters.\n",
"\"As is good practice I want to put them in a separate file each. \"\nThis is not actually a very good practice. You should design modules that contain closely-related classes.\nAs a practical matter, no class actually stands completely alone. Generally classes come in clusters or groups that are logically related. \n",
"Python doesn't force you into Java's nasty one-class-per-file style. In fact, it's not even considered good style to put each class in a separate file unless they are huge. (If they are huge, you probably have to do refactoring anyway.) Instead, you should group similar classes and functions in modules. For example, if you are writing a GUI calculator, your package layout might look like this:\n/amazingcalc\n /__init__.py # This makes it a Python package and importable.\n /evaluate.py # Contains the code to actually do calculations.\n /main.py # Starts the application\n /ui.py # Contains the code to make a pretty interface\n\n",
"simple answer is to create an empty file called __init__.py in the new folder you made. Then in your top level .py file include with something like:\nimport mynewsubfolder.mynewclass\n\n"
] | [
31,
22,
12,
6
] | [] | [] | [
"project_organization",
"python"
] | stackoverflow_0000391879_project_organization_python.txt |
Q:
Is there a Ruby/Python HTML reflow/layout library?
I'm looking for a library in Ruby or Python that would take some HTML and CSS as the input and return data that contains the positions and sizes of the elements. If it helps, I don't need the info for all the elements but just the major divs of the page.
A:
Scriptor, I think what you likely are looking for might be something in JavaScript more than Ruby or Python. I mean - the positions and sizes are essentially going to be determined by the rendering engine (the browser). You might consider using something like jQuery to loop through all of your desired objects - outputting the name of the object (like the DIV's ID) and the height and width of that item. So, for what it's worth, I'd look at jQuery and its height() and width() methods if I was in your position. You never know - there may already be a jQuery plugin.
| Is there a Ruby/Python HTML reflow/layout library? | I'm looking for a library in Ruby or Python that would take some HTML and CSS as the input and return data that contains the positions and sizes of the elements. If it helps, I don't need the info for all the elements but just the major divs of the page.
| [
"Scriptor, I think what you likely are looking for might be something in JavaScript more then Ruby or Python. I mean - the positions and sizes are essentially going to be determined by the rendering engine (the browser). You might consider using something like jQuery to loop through all of your desired objects - outputting the name of the object (like the DIV's ID) and the height and width of that item. So, for what it's worth I'd look at jQuery if I was in your position and the height() and width() methods. You never know - there may already be a jQuery plugin.\n"
] | [
3
] | [
"Both Ruby and Python have a Regex library. Why not search for things like /width=\\\"(\\d+)px\\\"/ and /height:(\\d+)px/. Use $1 to find the value in the group. I'm not a regex expert and I'm doing this from memory, so refer to any of the tutorials on the net for the correct syntax and variable usage, but that's where to start. Good luck,\nbsperlinus\n"
] | [
-1
] | [
"html",
"layout",
"python",
"ruby"
] | stackoverflow_0000392217_html_layout_python_ruby.txt |
Q:
Is there anyone who has managed to compile mod_wsgi for apache on Mac OS X Leopard?
I'm working on a Django project that requires debugging on a multithreaded server. I've found mod_wsgi 2.0+ to be the easiest to work with, because of easy workarounds for Python module reloading. The problem is that I can't get it to compile on Leopard. Has anyone managed to do it so far, either for the built-in Apache or MAMP? I'd be grateful if someone posts a link to a precompiled binary (for Intel, Python 2.5, Apache 2.2 or 2.0).
After 3 hours of trial and error I've managed to compile mod_wsgi 2.3 for the Apache that comes with Leopard. Here are the instructions in case anyone else needs this.
./configure
Change 2 lines in the Makefile
CFLAGS = -Wc,'-arch i386'
LDFLAGS = -arch i386 -Wl,-F/Library/Frameworks -framework Python -u _PyMac_Error
make && sudo make install
Make a thin binary of the original httpd
cd /usr/sbin
sudo mv ./httpd ./httpd.fat
sudo lipo ./httpd.fat -thin i386 -output ./httpd.i386
sudo ln -s ./httpd.i386 ./httpd
This should work on an Intel MacBook, MacBook Pro, iMac, or Mac mini. As I understand it, the problem is that mod_wsgi won't compile against MacPython 2.5.2 because of a weird architecture mismatch. But if you compile it as a thin binary, it won't work with the fat Apache binary, which is why httpd is thinned as well. This hack solves the problem. The rest is pretty standard configuration, like on any other platform.
A:
This doesn't directly answer your question, but have you thought about using something like MacPorts for this sort of thing? If you're compiling a lot of software like this, MacPorts can really make your life easier, since building software and dependencies is practically automatic.
| Is there anyone who has managed to compile mod_wsgi for apache on Mac OS X Leopard? | I'm working on a Django project that requires debugging on a multithreaded server. I've found mod_wsgi 2.0+ to be the easiest to work with, because of easy workarounds for python module reloading. Problem is can't get it to compile on Leopard. Is there anyone who has managed to do it so far, either for the builtin Apache or MAMP. I'd be grateful if someone posts a link to a precompiled binary (for intel, python 2.5, apache 2.2 or 2.0).
After 3 hours of trial and error I've managed to compile mod_wsgi 2.3 for the Apache that comes with Leopard. Here are the instructions in case anyone else needs this.
./configure
Change 2 lines in the Makefile
CFLAGS = -Wc,'-arch i386'
LDFLAGS = -arch i386 -Wl,-F/Library/Frameworks -framework Python -u _PyMac_Error
make && sudo make install
Make a thin binary of the original httpd
cd /usr/sbin
sudo mv ./httpd ./httpd.fat
sudo lipo ./httpd.fat -thin i386 -output ./httpd.i386
sudo ln -s ./httpd.i386 ./httpd
This should work on intel macbook, macbook pro, imac and mac mini. As I understood the problem is modwsgi won't compile against MacPython 2.5.2 because of some weird architecture missmatch problem. But, if you compile it as a thin binary it won't play with the Apache fat binary. So this hack solves the problem. The rest is pretty standard configuration, like on any other platform.
| [
"This doesn't directly answer your question, but have you thought about using something like MacPorts for this sort of thing? If you're compiling a lot of software like this, MacPorts can really make your life easier, since building software and dependencies is practically automatic.\n"
] | [
2
] | [] | [] | [
"apache",
"django",
"python"
] | stackoverflow_0000369305_apache_django_python.txt |
Q:
How to make a color from a number
I have a function that returns a float from 0 to 255. I would like to make a gradient in red color from this, but I need a string in "#FFFFFF" format. Is there a function for that?
A:
You could use:
"#%02X0000" % x
A:
def rgbtohex(r,g,b):
return "#%02X%02X%02X" % (r,g,b)
| How to make a color from a number | I have a function that returns a float from 0 to 255. I would like to make a gradient in red color from this, but I need a string in "#FFFFFF" format. Is there a function for that?
| [
"You could use:\n\"#%02X0000\" % x\n\n",
"def rgbtohex(r,g,b):\n return \"#%02X%02X%02X\" % (r,g,b)\n\n"
] | [
12,
3
] | [] | [] | [
"colors",
"python"
] | stackoverflow_0000392728_colors_python.txt |
Q:
How do I respond to mouse clicks on sprites in PyGame?
What is the canonical way of making your sprites respond to mouse clicks in PyGame?
Here's something simple, in my event loop:
for event in pygame.event.get():
if event.type == pygame.QUIT:
exit_game()
[...]
elif ( event.type == pygame.MOUSEBUTTONDOWN and
pygame.mouse.get_pressed()[0]):
for sprite in sprites:
sprite.mouse_click(pygame.mouse.get_pos())
Some questions about it:
Is this the best way of responding to mouse clicks ?
What if the mouse stays pressed on the sprite for some time ? How do I make a single event out of it ?
Is this a reasonable way to notify all my sprites of the click ?
Thanks in advance
A:
I usually give my clickable objects a click function, like in your example. I put all of those objects in a list, for easy iteration when the click functions are to be called.
when checking for which mousebutton you press, use the button property of the event.
import pygame
from pygame.locals import * #This lets you use pygame's constants directly.
for event in pygame.event.get():
    if event.type == MOUSEBUTTONDOWN: # Better to separate into a new if statement as well, since more buttons can be clicked, and it makes for cleaner code.
if event.button == 1:
for object in clickableObjectsList:
object.clickCheck(event.pos)
I would say this is the recommended way of doing it. The click only registers once, so it won't tell your sprite if the user is "dragging" with a button. That can easily be done with a boolean that is set to true with the MOUSEBUTTONDOWN event, and false with the MOUSEBUTTONUP. Then have "draggable" objects iterated over to activate their functions... and so on.
However, if you don't want to use an event handler, you can let an update function check for input with:
pygame.mouse.get_pos()
pygame.mouse.get_pressed().
This is a bad idea for larger projects, since it can create hard to find bugs. Better just keeping events in one place. Smaller games, like simple arcade games might make more sense using the probing style though.
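A rough sketch of the drag flag described above (the draggable list and dragTo method are assumptions, not pygame API):
dragging = False
for event in pygame.event.get():
    if event.type == MOUSEBUTTONDOWN and event.button == 1:
        dragging = True
    elif event.type == MOUSEBUTTONUP and event.button == 1:
        dragging = False
    elif event.type == MOUSEMOTION and dragging:
        for obj in draggableObjectsList:  # hypothetical list
            obj.dragTo(event.pos)         # hypothetical method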
| How do I respond to mouse clicks on sprites in PyGame? | What is the canonical way of making your sprites respond to mouse clicks in PyGame?
Here's something simple, in my event loop:
for event in pygame.event.get():
if event.type == pygame.QUIT:
exit_game()
[...]
elif ( event.type == pygame.MOUSEBUTTONDOWN and
pygame.mouse.get_pressed()[0]):
for sprite in sprites:
sprite.mouse_click(pygame.mouse.get_pos())
Some questions about it:
Is this the best way of responding to mouse clicks ?
What if the mouse stays pressed on the sprite for some time ? How do I make a single event out of it ?
Is this a reasonable way to notify all my sprites of the click ?
Thanks in advance
| [
"I usually give my clickable objects a click function, like in your example. I put all of those objects in a list, for easy iteration when the click functions are to be called.\nwhen checking for which mousebutton you press, use the button property of the event.\nimport pygame\nfrom pygame.locals import * #This lets you use pygame's constants directly.\n\nfor event in pygame.event.get():\n if event.type == MOUSEBUTTONDOWN: #Better to seperate to a new if statement aswell, since there's more buttons that can be clicked and makes for cleaner code.\n if event.button == 1:\n for object in clickableObjectsList:\n object.clickCheck(event.pos)\n\nI would say this is the recommended way of doing it. The click only registers once, so it wont tell your sprite if the user is \"dragging\" with a button. That can easily be done with a boolean that is set to true with the MOUSEBUTTONDOWN event, and false with the MOUSEBUTTONUP. The have \"draggable\" objects iterated for activating their functions... and so on.\nHowever, if you don't want to use an event handler, you can let an update function check for input with:\npygame.mouse.get_pos() \npygame.mouse.get_pressed().\n\nThis is a bad idea for larger projects, since it can create hard to find bugs. Better just keeping events in one place. Smaller games, like simple arcade games might make more sense using the probing style though.\n"
] | [
11
] | [] | [] | [
"pygame",
"python"
] | stackoverflow_0000380420_pygame_python.txt |
Q:
Using user input to find information in a Mysql database
I need to design a program using Python that will ask the user for a barcode. Then, using this barcode, it will search a MySQL database to find its corresponding product.
I am a bit stuck on how to get started. Does anyone have any tips for me?
A:
Use python-mysql. It is a dbapi-compatible module that lets you talk to the database.
import MySQLdb
user_input = raw_input("Please enter barcode and press Enter button: ")
db = MySQLdb.connect(passwd="moonpie",db="thangs")
mycursor = db.cursor()
mycursor.execute("""SELECT name, price FROM Product
WHERE barcode = %s""", (user_input,))
# calls fetchone until None is returned (no more rows)
for row in iter(mycursor.fetchone, None):
print row
If you want something more high-level, consider using SQLAlchemy as a layer. It could allow you to do:
product = session.query(Product).filter(Product.barcode == user_input).scalar()
print product.name, product.price
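For reference, that one-liner assumes a mapped Product class and an open session; one possible setup is sketched below (the column definitions and connection URL are illustrative guesses, not prescribed by the answer):

from sqlalchemy import create_engine, Column, Integer, String, Numeric
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Product(Base):
    __tablename__ = 'Product'
    id = Column(Integer, primary_key=True)
    name = Column(String(100))
    price = Column(Numeric(10, 2))
    barcode = Column(String(32))

# The connection URL is a placeholder; reuse your real credentials here.
engine = create_engine('mysql://user:moonpie@localhost/thangs')
Session = sessionmaker(bind=engine)
session = Session()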
A:
A barcode is simply a graphical representation of a series of characters (alphanumeric).
So if you have a method for users to enter this code (a barcode scanner), then it's just a matter of querying the MySQL database for the character string.
A:
That is a very ambiguous question. What you want can be done in many ways depending on what you actually want to do.
How are your users going to enter the bar code? Are they going to use a bar code scanner? Are they entering the bar code numbers manually?
Is this going to run on a desktop/laptop computer or is it going to run on a handheld device?
Is the bar code scanner storing the bar codes for later retrieval or is it sending them directly to the computer. Will it send them through a USB cable or wireless?
A:
To start with, treat the barcode input as plain old text.
It has been quite a while since I worked with barcode scanners, but I doubt they have changed that much; the older ones used to just piggyback on the keyboard input, so from a programming perspective the net result was a stream of characters in the keyboard buffer, and whether typed or scanned made no difference.
If the device you are targeting differs from that, you will need to write something to deal with that before you get to the database query.
If you have one of the devices to play with, plug it in, start notepad, start scanning some barcodes and see what happens.
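In code, that means a keyboard-wedge scanner needs nothing special; a trivial sketch of reading it like any typed line:

# A keyboard-wedge scanner "types" the code followed by Enter,
# so raw_input() receives it like any other line of text.
while True:
    code = raw_input("Scan or type a barcode (empty line quits): ").strip()
    if not code:
        break
    print "Got barcode:", code   # hand this string to your database query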
| Using user input to find information in a Mysql database | I need to design a program using python that will ask the user for a barcode. Then, using this barcode, it will search a mysql to find its corresponding product.
I am a bit stuck on how to get started. Does anyone have any tips for me?
| [
"Use python-mysql. It is a dbapi-compatible module that lets you talk to the database.\nimport MySQLdb\n\nuser_input = raw_input(\"Please enter barcode and press Enter button: \")\n\ndb = MySQLdb.connect(passwd=\"moonpie\",db=\"thangs\")\nmycursor = db.cursor()\nmycursor.execute(\"\"\"SELECT name, price FROM Product\n WHERE barcode = %s\"\"\", (user_input,))\n\n# calls fetchone until None is returned (no more rows)\nfor row in iter(mycursor.fetchone, None):\n print row\n\nIf you want something more high-level, consider using SQLAlchemy as a layer. It could allow you to do: \nproduct = session.query(Product).filter(Product.barcode == user_input).scalar()\nprint product.name, product.price\n\n",
"A barcode is simply a graphical representation of a series of characters (alphanumeric)\nSo if you have a method for users to enter this code (a barcode scanner), then its just an issue of querying the mysql database for the character string.\n",
"That is a very ambiguous question. What you want can be done in many ways depending on what you actually want to do.\nHow are your users going to enter the bar code? Are they going to use a bar code scanner? Are they entering the bar code numbers manually? \nIs this going to run on a desktop/laptop computer or is it going to run on a handheld device? \nIs the bar code scanner storing the bar codes for later retrieval or is it sending them directly to the computer. Will it send them through a USB cable or wireless?\n",
"To start with, treat the barcode input as plain old text. \nIt has been quite a while since I worked with barcode scanners, but I doubt they have changed that much, the older ones used to just piggyback on the keyboard input, so from a programming perspective, the net result was a stream of characters in the keyboard buffer, either typed or scanned made no difference. \nIf the device you are targeting differs from that, you will need to write something to deal with that before you get to the database query. \nIf you have one of the devices to play with, plug it in, start notepad, start scanning some barcodes and see what happens.\n"
] | [
5,
1,
0,
0
] | [] | [] | [
"python",
"sql",
"user_input"
] | stackoverflow_0000387606_python_sql_user_input.txt |
Q:
How do I create a Python class in C?
I have a legacy C library that creates a tree of objects. I would like to convert the tree into a pre-existing Python class. How do I create the PyObject for that class?
A:
Cython is capable of doing this. It's a semi-fork of Pyrex, and it can wrap existing data structures and expose them to Python. In fact, this is one of the sections in the user guide. Cython is relatively easy to use, and it includes an HTML-output format that shows all the generated code as well as highlighted hot spots where optimization could be applied.
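If a binding generator feels heavy, the standard library's ctypes can also mirror a C struct from pure Python; here is a rough sketch with a made-up node layout (your legacy tree's real fields will differ, so treat the names as hypothetical):

import ctypes

class CNode(ctypes.Structure):
    """Mirror of a hypothetical C tree node: int value plus left/right children."""
    pass

# Self-referential struct fields must be assigned after the class exists.
CNode._fields_ = [
    ("value", ctypes.c_int),
    ("left", ctypes.POINTER(CNode)),
    ("right", ctypes.POINTER(CNode)),
]

def to_python(node_ptr):
    """Recursively convert a CNode pointer into nested Python tuples."""
    if not node_ptr:            # a NULL pointer tests false
        return None
    node = node_ptr.contents
    return (node.value, to_python(node.left), to_python(node.right))

# Demo: build a two-node tree in Python itself, then convert it.
leaf = CNode(7)
root = CNode(3, ctypes.pointer(leaf))
print to_python(ctypes.pointer(root))   # (3, (7, None, None), None)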
A:
Take a look at generating your Python bindings by using a tool such as pybindgen. These guys are trying to make a superior binding generator, they talk about the shortcomings of other tools (e.g. SWIG) on their front page.
A:
I've had success using Robin in these scenarios.
| How do I create a Python class in C? | I have a legacy C library that creates a tree of objects. I would like to convert the tree into a pre-existing Python class. How do I create the PyObject for that class?
| [
"Cython is capable of doing this. It's a semi-fork of Pyrex, and it can wrap existing data structures and expose them to Python. In fact, this is one of the sections in the user guide. Cython is relatively easy to use, and it includes an HTML-output format that shows all the generated code as well as highlighted hot spots where optimization could be applied.\n",
"Take a look at generating your Python bindings by using a tool such as pybindgen. These guys are trying to make a superior binding generator, they talk about the shortcomings of other tools (e.g. SWIG) on their front page.\n",
"I've had success using Robin in these scenarios.\n"
] | [
4,
2,
1
] | [] | [] | [
"c",
"python"
] | stackoverflow_0000378773_c_python.txt |
Q:
How to process two forms in one view?
I have two completely different forms in one template. How to process them in one view? How can I distinguish which of the forms was submitted? How can I use prefix to accomplish that? Or maybe it's better to write separate views?
regards
chriss
A:
Personally, I'd use one view to handle each form's POST.
On the other hand, you could use a hidden input element that indicates which form was used
<form action="/blog/" method="POST">
{{ blog_form.as_p }}
<input type="hidden" name="form-type" value"blog-form" /> <!-- set type -->
<input type="submit" value="Submit" />
</form>
...
<form action="/blog/" method="POST">
{{ micro_form.as_p }}
<input type="hidden" name="form-type" value"micro-form" /> <!-- set type -->
<input type="submit" value="Submit" />
</form>
With a view like:
def blog(request):
if request.method == 'POST':
if request.POST['form-type'] == u"blog-form": # test the form type
form = BlogForm(request.POST)
...
else:
form = MicroForm(request.POST)
...
return render_to_response('blog.html', {
'blog_form': BlogForm(),
'micro_form': MicroForm(),
})
... but once again, I think one view per form (even if the view only accepts POSTs) is simpler than trying to do the above.
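As a sketch of that preferred one-view-per-form layout, reusing the BlogForm/MicroForm names from above (the URLs, the template name, and the assumption that these are ModelForms are all illustrative):

# urls.py would route each form's action to its own view,
# e.g. /blog/post/ -> blog_post and /blog/micro/ -> micro_post.
from django.http import HttpResponseRedirect
from django.shortcuts import render_to_response
# BlogForm and MicroForm would be imported from your forms module.

def blog_post(request):
    if request.method == 'POST':
        form = BlogForm(request.POST)
        if form.is_valid():
            form.save()                      # assumes a ModelForm
            return HttpResponseRedirect('/blog/')
    else:
        form = BlogForm()
    return render_to_response('blog.html', {'blog_form': form,
                                            'micro_form': MicroForm()})

def micro_post(request):
    if request.method == 'POST':
        form = MicroForm(request.POST)
        if form.is_valid():
            form.save()
            return HttpResponseRedirect('/blog/')
    else:
        form = MicroForm()
    return render_to_response('blog.html', {'blog_form': BlogForm(),
                                            'micro_form': form})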
A:
Like ayaz said, you should give a unique name to each form's submit button
<form action="." method="post">
......
<input type="submit" name="form1">
</form>
<form action="." method="post">
......
<input type="submit" name="form2">
</form>
#view
if "form1" in request.POST:
...
if "form2" in request.POST:
...
A:
If the two forms are completely different, it will certainly not hurt to have them be handled by two different views. Otherwise, you may use the 'hidden input element' trick zacherates has touched upon. Or, you could always give each submit element a unique name, and differentiate in the view which form was submitted based on that.
| How to process two forms in one view? | I have two completely different forms in one template. How to process them in one view? How can I distinguish which of the forms was submitted? How can I use prefix to accomplish that? Or maybe it's better to write separate views?
regards
chriss
| [
"Personally, I'd use one view to handle each form's POST.\nOn the other hand, you could use a hidden input element that indicate which form was used\n<form action=\"/blog/\" method=\"POST\">\n {{ blog_form.as_p }}\n <input type=\"hidden\" name=\"form-type\" value\"blog-form\" /> <!-- set type -->\n <input type=\"submit\" value=\"Submit\" />\n</form>\n\n... \n\n<form action=\"/blog/\" method=\"POST\">\n {{ micro_form.as_p }}\n <input type=\"hidden\" name=\"form-type\" value\"micro-form\" /> <!-- set type -->\n <input type=\"submit\" value=\"Submit\" />\n</form>\n\nWith a view like:\ndef blog(request):\n if request.method == 'POST':\n if request.POST['form-type'] == u\"blog-form\": # test the form type\n form = BlogForm(request.POST) \n ...\n else:\n form = MicroForm(request.POST)\n ...\n\n return render_to_response('blog.html', {\n 'blog_form': BlogForm(),\n 'micro_form': MicroForm(),\n })\n\n... but once again, I think one view per form (even if the view only accepts POSTs) is simpler than trying to do the above.\n",
"like ayaz said, you should give unique name to form submit button\n<form action=\".\" method=\"post\">\n......\n<input type=\"submit\" name=\"form1\">\n</form>\n\n\n<form action=\".\" method=\"post\">\n......\n<input type=\"submit\" name=\"form2\">\n</form>\n\n\n#view\n\nif \"form1\" in request.POST:\n ...\nif \"form2\" in request.POST:\n ...\n\n",
"If the two forms are completely different, it will certainly not hurt to have them be handled by two different views. Otherwise, you may use the 'hidden input element' trick zacherates has touched upon. Or, you could always give each submit element a unique name, and differentiate in the view which form was submitted based on that.\n"
] | [
5,
4,
0
] | [] | [] | [
"django",
"django_forms",
"django_templates",
"python"
] | stackoverflow_0000392784_django_django_forms_django_templates_python.txt |
Q:
What values to use for FastCGI maxrequests, maxspare, minspare, maxchildren?
I'm running a Django app using FastCGI and lighttpd.
Can somebody explain to me what I should consider when deciding what value to use for maxrequests, maxspare, minspare, maxchildren?
These options are not too well documented, but seem quite important.
Don't just tell me what they do; I want to understand what implications they have and how I should decide on what values to use.
Thanks.
A:
Let's start with the definition
maxrequests: How many requests does a child serve before being killed
and a new one forked
maxspare : Maximum number of spare processes to keep running
minspare : Minimum number of spare processes to prefork
maxchildren: Hard limit number of processes in prefork mode
This means that you'll have at most maxchildren processes running at any given time in your webserver, each running for maxrequests requests. At server start you'll get minspare processes, which will keep growing until maxspare (or maxchildren) if more requests are coming.
So, minspare lets you say how many concurrent requests you are expecting at a minimum (important to avoid process creation under load if you start with just one; it's good to start at, say, 10), and maxspare lets you say how many concurrent requests your server will attend to at most (without compromising its expected response time and so on; this needs a stress test to validate). And maxrequests sets the lifetime of each child, in case they cannot run forever due to any kind of constraint.
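For example, with Django's runfcgi management command (which hands these options through to flup), a tuning pass might look like the line below; the numbers are illustrative, not recommendations:

./manage.py runfcgi method=prefork socket=/tmp/mysite.sock minspare=5 maxspare=10 maxchildren=50 maxrequests=1000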
| What values to use for FastCGI maxrequests, maxspare, minspare, maxchildren? | I'm running a Django app using FastCGI and lighttpd.
Can somebody explain to me what I should consider when deciding what value to use for maxrequests, maxspare, minspare, maxchildren?
These options are not too well documented, but seem quite important.
Don't just tell me what they do; I want to understand what implications they have and how I should decide on what values to use.
Thanks.
| [
"Let's start with the definition\n\n maxrequests: How many requests does a child server before being killed \n and a new one forked\n maxspare : Maximum number of spare processes to keep running\n minspare : Minimum number of spare processes to prefork\n maxchildren: Hard limit number of processes in prefork mode\n\nThis means that you'll have at most maxchildren processes running at any given time in your webserver, each running for maxrequests requests. At server start you'll get minspare processes, which will keep growing until maxspare (or maxchildren) if more requests are coming.\nSo, minspare lets you say how many concurrent requests are you expecting at a minimum (important to avoid the process creation if you start with one, it's good to start at, say 10), and maxspare lets you say how many concurrent requests will your server attend to at most (without compromising it's expected response time and so on. Needs a stress test to validate). And maxrequests is talking about the lifetime of each child, in case they cannot run forever due to any kind of constraint.\n"
] | [
13
] | [
"Don't forget to coordinate your fcgi settings with your apache worker settings. I usually keep more apache workers around than fcgi workers... they are lighter weight and will wait for an available fcgi worker to free up to process the request if the concurrency reaches higher than my maxspare.\n"
] | [
-1
] | [
"django",
"fastcgi",
"python"
] | stackoverflow_0000393629_django_fastcgi_python.txt |
Q:
Programmatic Form Submit
I want to scrape the contents of a webpage. The contents are produced after a form on that site has been filled in and submitted.
I've read about how to scrape the end result content/webpage - but how do I programmatically submit the form?
I'm using python and have read that I might need to get the original webpage with the form, parse it, get the form parameters and then do X?
Can anyone point me in the right direction?
A:
You'll need to generate an HTTP request containing the data for the form.
The form will look something like:
<form action="submit.php" method="POST"> ... </form>
This tells you the url to request is www.example.com/submit.php and your request should be a POST.
In the form will be several input items, eg:
<input type="text" name="itemnumber"> ... </input>
You need to create a string of all these input name=value pairs, URL-encoded and appended to the end of your requested URL, which now becomes
www.example.com/submit.php?itemnumber=5234&otherinput=othervalue etc...
This will work fine for GET. POST is a little trickier.
</motivation>
Just follow S.Lott's links for some much easier to use library support :P
A:
Using python, I think it takes the following steps:
parse the web page that contains the form, find out the form submit address, and the submit method ("post" or "get").
this explains form elements in html file
Use urllib2 to submit the form. You may need some functions like "urlencode", "quote" from urllib to generate the url and data for post method. Read the library doc for details.
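A minimal sketch of both methods with urllib/urllib2, reusing the earlier example's URL and field names (which are of course placeholders):

import urllib
import urllib2

fields = {'itemnumber': '5234', 'otherinput': 'othervalue'}
encoded = urllib.urlencode(fields)

# GET: append the encoded pairs to the URL.
get_response = urllib2.urlopen('http://www.example.com/submit.php?' + encoded)

# POST: pass the encoded pairs as the request body instead.
post_response = urllib2.urlopen('http://www.example.com/submit.php', encoded)

print post_response.read()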
A:
From a similar question - options-for-html-scraping - you can learn that with Python you can use Beautiful Soup.
Beautiful Soup is a Python HTML/XML parser designed for quick turnaround projects like screen-scraping. Three features make it powerful:
Beautiful Soup won't choke if you give it bad markup. It yields a parse tree that makes approximately as much sense as your original document. This is usually good enough to collect the data you need and run away.
Beautiful Soup provides a few simple methods and Pythonic idioms for navigating, searching, and modifying a parse tree: a toolkit for dissecting a document and extracting what you need. You don't have to create a custom parser for each application.
Beautiful Soup automatically converts incoming documents to Unicode and outgoing documents to UTF-8. You don't have to think about encodings, unless the document doesn't specify an encoding and Beautiful Soup can't autodetect one. Then you just have to specify the original encoding.
The unusual name caught the attention of our host, November 12, 2008.
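A small sketch of using it to pull a form's target and fields out of a fetched page before submitting (Beautiful Soup 3-era import; the HTML here is a stand-in for the real page):

from BeautifulSoup import BeautifulSoup   # Beautiful Soup 3 import style

html = """<form action="submit.php" method="POST">
            <input type="text" name="itemnumber" value="5234">
          </form>"""

soup = BeautifulSoup(html)
form = soup.find('form')
print form['action'], form.get('method')
for field in form.findAll('input'):
    print field.get('name'), field.get('value')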
| Programmatic Form Submit | I want to scrape the contents of a webpage. The contents are produced after a form on that site has been filled in and submitted.
I've read about how to scrape the end result content/webpage - but how do I programmatically submit the form?
I'm using python and have read that I might need to get the original webpage with the form, parse it, get the form parameters and then do X?
Can anyone point me in the right direction?
| [
"you'll need to generate a HTTP request containing the data for the form. \nThe form will look something like:\n<form action=\"submit.php\" method=\"POST\"> ... </form>\n\nThis tells you the url to request is www.example.com/submit.php and your request should be a POST.\nIn the form will be several input items, eg:\n<input type=\"text\" name=\"itemnumber\"> ... </input>\n\nyou need to create a string of all these input name=value pairs encoded for a URL appended to the end of your requested URL, which now becomes \nwww.example.com/submit.php?itemnumber=5234&otherinput=othervalue etc...\n This will work fine for GET. POST is a little trickier.\n</motivation>\n\nJust follow S.Lott's links for some much easier to use library support :P\n",
"Using python, I think it takes the following steps:\n\nparse the web page that contains the form, find out the form submit address, and the submit method (\"post\" or \"get\").\n\nthis explains form elements in html file\n\nUse urllib2 to submit the form. You may need some functions like \"urlencode\", \"quote\" from urllib to generate the url and data for post method. Read the library doc for details.\n\n",
"From a similar question - options-for-html-scraping - you can learn that with Python you can use Beautiful Soup.\n\nBeautiful Soup is a Python HTML/XML parser designed for quick turnaround projects like screen-scraping. Three features make it powerful:\n\nBeautiful Soup won't choke if you give it bad markup. It yields a parse tree that makes approximately as much sense as your original document. This is usually good enough to collect the data you need and run away.\nBeautiful Soup provides a few simple methods and Pythonic idioms for navigating, searching, and modifying a parse tree: a toolkit for dissecting a document and extracting what you need. You don't have to create a custom parser for each application.\nBeautiful Soup automatically converts incoming documents to Unicode and outgoing documents to UTF-8. You don't have to think about encodings, unless the document doesn't specify an encoding and Beautiful Soup can't autodetect one. Then you just have to specify the original encoding.\n\n\nThe unusual name caught the attention of our host, November 12, 2008.\n"
] | [
2,
2,
2
] | [
"You can do it with javascript. If the form is something like:\n<form name='myform' ...\n\nThen you can do this in javascript:\n<script language=\"JavaScript\">\nfunction submitform()\n{\ndocument.myform.submit();\n}\n</script> \n\nYou can use the \"onClick\" attribute of links or buttons to invoke this code. To invoke it automatically when a page is loaded, use the \"onLoad\" attribute of the element:\n<body onLoad=\"submitform()\" ...>\n\n"
] | [
-1
] | [
"forms",
"python",
"screen_scraping",
"submit"
] | stackoverflow_0000393738_forms_python_screen_scraping_submit.txt |
Q:
Python and regular expression with Unicode
I need to delete some Unicode symbols from the string 'بِسْمِ اللَّهِ الرَّحْمَٰنِ الرَّحِيمِ'
I know they exist here for sure. I tried:
re.sub('([\u064B-\u0652\u06D4\u0670\u0674\u06D5-\u06ED]+)', '', 'بِسْمِ اللَّهِ الرَّحْمَٰنِ الرَّحِيمِ')
but it doesn't work. String stays the same. What am I doing wrong?
A:
Are you using python 2.x or 3.0?
If you're using 2.x, try making the regex string a unicode-escape string, with 'u'. Since it's regex it's good practice to make your regex string a raw string, with 'r'. Also, putting your entire pattern in parentheses is superfluous.
re.sub(ur'[\u064B-\u0652\u06D4\u0670\u0674\u06D5-\u06ED]+', '', ...)
http://docs.python.org/tutorial/introduction.html#unicode-strings
Edit:
It's also good practice to use the re.UNICODE/re.U/(?u) flag for unicode regexes, but it only affects character class aliases like \w or \b, which this pattern does not use, so it is not affected.
A:
Use unicode strings. Use the re.UNICODE flag.
>>> myre = re.compile(ur'[\u064B-\u0652\u06D4\u0670\u0674\u06D5-\u06ED]+',
re.UNICODE)
>>> myre
<_sre.SRE_Pattern object at 0xb20b378>
>>> mystr = u'بِسْمِ اللَّهِ الرَّحْمَٰنِ الرَّحِيمِ'
>>> result = myre.sub('', mystr)
>>> len(mystr), len(result)
(38, 22)
>>> print result
بسم الله الرحمن الرحيم
Read the article by Joel Spolsky called The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
| Python and regular expression with Unicode | I need to delete some Unicode symbols from the string 'بِسْمِ اللَّهِ الرَّحْمَٰنِ الرَّحِيمِ'
I know they exist here for sure. I tried:
re.sub('([\u064B-\u0652\u06D4\u0670\u0674\u06D5-\u06ED]+)', '', 'بِسْمِ اللَّهِ الرَّحْمَٰنِ الرَّحِيمِ')
but it doesn't work. String stays the same. What am I doing wrong?
| [
"Are you using python 2.x or 3.0?\nIf you're using 2.x, try making the regex string a unicode-escape string, with 'u'. Since it's regex it's good practice to make your regex string a raw string, with 'r'. Also, putting your entire pattern in parentheses is superfluous.\nre.sub(ur'[\\u064B-\\u0652\\u06D4\\u0670\\u0674\\u06D5-\\u06ED]+', '', ...)\n\nhttp://docs.python.org/tutorial/introduction.html#unicode-strings\nEdit:\nIt's also good practice to use the re.UNICODE/re.U/(?u) flag for unicode regexes, but it only affects character class aliases like \\w or \\b, of which this pattern does not use any and so would not be affected by.\n",
"Use unicode strings. Use the re.UNICODE flag.\n>>> myre = re.compile(ur'[\\u064B-\\u0652\\u06D4\\u0670\\u0674\\u06D5-\\u06ED]+', \n re.UNICODE)\n>>> myre\n<_sre.SRE_Pattern object at 0xb20b378>\n>>> mystr = u'ุจูุณูู
ู ุงูููููู ุงูุฑููุญูู
ููฐูู ุงูุฑููุญููู
ู'\n>>> result = myre.sub('', mystr)\n>>> len(mystr), len(result)\n(38, 22)\n>>> print result\nุจุณู
ุงููู ุงูุฑุญู
ู ุงูุฑุญูู
\n\nRead the article by Joel Spolsky called The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)\n"
] | [
110,
78
] | [] | [] | [
"character_properties",
"python",
"regex"
] | stackoverflow_0000393843_character_properties_python_regex.txt |
Q:
Scripting inside a Python application
I'd like to include Python scripting in one of my applications, which is itself written in Python.
My application must be able to call external Python functions (written by the user) as callbacks. There must be some control on code execution; for example, if the user provided code with syntax errors, the application must signal that.
What is the best way to do this?
Thanks.
edit: question was unclear. I need a mechanism similar to events of VBA, where there is a "declarations" section (where you define global variables) and events, with scripted code, that fire at specific points.
A:
Use __import__ to import the files provided by the user. This function will return a module. Use that to call the functions from the imported file.
Use try..except both on __import__ and on the actual call to catch errors.
Example:
m = None
try:
m = __import__("external_module")
except:
# invalid module - show error
if m:
try:
m.user_defined_func()
except:
# some error - display it
A:
If you'd like the user to interactively enter commands, I can highly recommend the code module, part of the standard library. The InteractiveConsole and InteractiveInterpreter objects allow for easy entry and evaluation of user input, and errors are handled very nicely, with tracebacks available to help the user get it right.
Just make sure to catch SystemExit!
$ python
Python 2.5.1 (r251:54863, Jan 17 2008, 19:35:17)
[GCC 4.0.1 (Apple Inc. build 5465)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> shared_var = "Set in main console"
>>> import code
>>> ic = code.InteractiveConsole({ 'shared_var': shared_var })
>>> try:
... ic.interact("My custom console banner!")
... except SystemExit, e:
... print "Got SystemExit!"
...
My custom console banner!
>>> shared_var
'Set in main console'
>>> shared_var = "Set in sub-console"
>>> sys.exit()
Got SystemExit!
>>> shared_var
'Set in main console'
A:
RestrictedPython provides a restricted execution environment for Python, e.g. for running untrusted code.
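A rough sketch of how that looks; the guard setup varies between RestrictedPython releases, so treat this as an outline rather than the library's prescribed recipe:

from RestrictedPython import compile_restricted

user_source = "result = 2 + 3"                  # code supplied by the user

try:
    byte_code = compile_restricted(user_source, '<user code>', 'exec')
except SyntaxError, e:
    print "Rejected user code:", e              # syntax errors surface here
else:
    restricted_globals = {'__builtins__': {}}   # deliberately bare namespace
    exec byte_code in restricted_globals
    print restricted_globals['result']          # prints 5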
A:
"My application must be able to call external Python functions (written by the user) as callbacks".
There's an alternative that's often simpler.
Define classes which call method functions at specific points. You provide a default implementation.
Your user can then extended the classes and provide appropriate method functions instead of callbacks.
This rarely requires global variables. It's also simpler to implement because your user does something like the following
import core_classes
class MyExtension( core_classes.SomeClass ):
    def event_1( self, args ):
        # override the default behavior for event_1.
        pass
core_classes.main( MyExtension )
This works very smoothly, and allows maximum flexibility. Errors will always be in their code, since their code is the "main" module.
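For completeness, the core_classes side of that example might look something like this (the event names and the main() contract are just one plausible shape, not taken from the answer):

# core_classes.py -- hypothetical host module for user extensions.

class SomeClass(object):
    """Base class with default (no-op) event handlers users may override."""

    def event_1(self, args):
        pass        # default behavior: do nothing

    def event_2(self, args):
        pass

def main(extension_class):
    """Drive the application, firing events on the user's subclass."""
    handler = extension_class()
    handler.event_1({'point': 'startup'})
    # ... the application's real work would happen here ...
    handler.event_2({'point': 'shutdown'})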
| Scripting inside a Python application |
I'd like to include Python scripting in one of my applications, which is itself written in Python.
My application must be able to call external Python functions (written by the user) as callbacks. There must be some control on code execution; for example, if the user provided code with syntax errors, the application must signal that.
What is the best way to do this?
Thanks.
edit: question was unclear. I need a mechanism similar to events of VBA, where there is a "declarations" section (where you define global variables) and events, with scripted code, that fire at specific points.
| [
"Use __import__ to import the files provided by the user. This function will return a module. Use that to call the functions from the imported file.\nUse try..except both on __import__ and on the actual call to catch errors.\nExample:\nm = None\ntry:\n m = __import__(\"external_module\")\nexcept:\n # invalid module - show error\nif m:\n try:\n m.user_defined_func()\n except:\n # some error - display it\n\n",
"If you'd like the user to interactively enter commands, I can highly recommend the code module, part of the standard library. The InteractiveConsole and InteractiveInterpreter objects allow for easy entry and evaluation of user input, and errors are handled very nicely, with tracebacks available to help the user get it right.\nJust make sure to catch SystemExit!\n$ python\nPython 2.5.1 (r251:54863, Jan 17 2008, 19:35:17) \n[GCC 4.0.1 (Apple Inc. build 5465)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> shared_var = \"Set in main console\"\n>>> import code\n>>> ic = code.InteractiveConsole({ 'shared_var': shared_var })\n>>> try:\n... ic.interact(\"My custom console banner!\")\n... except SystemExit, e:\n... print \"Got SystemExit!\"\n... \nMy custom console banner!\n>>> shared_var\n'Set in main console'\n>>> shared_var = \"Set in sub-console\"\n>>> sys.exit()\nGot SystemExit!\n>>> shared_var\n'Set in main console'\n\n",
"RestrictedPython provides a restricted execution environment for Python, e.g. for running untrusted code.\n",
"\"My application must be able to call external Python functions (written by the user) as callbacks\".\nThere's an alternative that's often simpler.\nDefine classes which call method functions at specific points. You provide a default implementation.\nYour user can then extended the classes and provide appropriate method functions instead of callbacks.\nThis rarely requires global variables. It's also simpler to implement because your user does something like the following\nimport core_classes\nclass MyExtension( core_classes.SomeClass ):\n def event_1( self, args ):\n # override the default behavior for event_1.\n\ncore_classes.main( MyExtension )\n\nThis works very smoothly, and allows maximum flexibility. Errors will always be in their code, since their code is the \"main\" module.\n"
] | [
8,
8,
4,
0
] | [] | [] | [
"python",
"scripting"
] | stackoverflow_0000393871_python_scripting.txt |